Randall Munroe’s XKCD ‘Advent Calendar Advent Calendar’
via the comic humor & dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Advent Calendar Advent Calendar’ appeared first on Security Boulevard.
Introduction: Splunk SOAR (Security Orchestration, Automation, and Response) is a very useful tool that can supercharge your security operations by giving your security team a relatively easy, low-code automation capability that has great integrations with tools you already use, straight out of the box. One of the things that makes SOAR a [...]
The post Splunk SOAR – Sorting Containers to Improve SOAR On-Poll Functionality (Free Custom Function Provided) appeared first on Hurricane Labs.
Authors/Presenters: Michal Grygarek, Martin Petr
Our sincere appreciation to DEF CON, and the Presenters/Authors for publishing their erudite DEF CON 32 content. Originating from the conference’s events at the Las Vegas Convention Center and via the organization’s YouTube channel.
The post DEF CON 32 – Nano Enigma Uncovering The Secrets In eFuse Memories appeared first on Security Boulevard.
Recently, Palo Alto Networks identified and patched a critical zero-day vulnerability in their next-generation firewalls (NGFWs). This vulnerability, tracked as CVE-2024-0012, allowed attackers to execute code on vulnerable devices remotely. This vulnerability has been actively exploited in attacks dubbed "Operation Lunar Peek."
The post Why Zero-Day Attacks Bypass Traditional Firewall Security: Defending Against Zero-Day’s like Palo Alto Networks CVE-2024-0012 appeared first on Security Boulevard.
There’s a reason why retailers call the final three months of the year the “golden quarter.” As festive shopping ramps up, many will be hoping to generate a large part of their annual revenue in the period between Black Friday and the end of the year. But where there’s money to be made, there’s also likely to be criminal activity: 100% of data breaches over the past year were financially motivated.
The post Why Retailers Must Secure Their Payment Data This Golden Quarter appeared first on Security Boulevard.
Amazon Web Services (AWS) this week made a bevy of updates to improve cloud security, including additional machine learning algorithms for the Amazon GuardDuty service that make it simpler to detect attack patterns.
The post AWS Adds Multiple Tools and Services to Strengthen Cloud Security appeared first on Security Boulevard.
The concept of a RACE condition and its potential for application vulnerabilities is nothing new. First mentioned back in the […]
The post RACE Conditions in Modern Web Applications appeared first on Security Boulevard.
The post Protecting SLED Organizations: How Schools Can Secure Data Against Modern Threats appeared first on Votiro.
The call metadata of a "large number" of Americans was stolen in Chinese state-sponsored Salt Typhoon's hack of eight U.S. telecoms and dozens more around the world, according to U.S. officials, who are scrambling to map out the scope of the attack.
The post Metadata of Americans Stolen in Chinese Hack: U.S. Official appeared first on Security Boulevard.
A disturbing new cybersecurity incident has raised alarms across U.S. telecoms, with revelations this week about a large-scale Chinese hacking campaign known as Salt Typhoon. The sophisticated breach targeted at least eight major U.S. telecom providers, including Verizon, AT&T, and T-Mobile, with attackers successfully infiltrating the networks and siphoning off sensitive metadata—potentially compromising millions of […]
The post Salt Typhoon Campaign: A Wake-Up Call for U.S. Telecoms and National Security appeared first on Centraleyes.
From phishing schemes and ransomware attacks to social engineering and doxxing, high-net-worth individuals (HNWIs) face an ever-evolving array of cyber threats, and the risks of digital exposure are greater than ever. Wealth, influence, and access make HNWIs prime targets for cybercriminals, and the financial, professional, and reputational consequences of a breach can be devastating. This […]
The post Why HNWIs are Seeking Personal Cybersecurity Consultants appeared first on BlackCloak | Protect Your Digital Life™.
Protected Health Information (PHI) is a critical aspect of healthcare, encompassing any data that can identify an individual and is used in the context of medical care. Examples of PHI include personal identifiers (name, address, Social Security number), medical records, health insurance information, and even communications containing health details.
The post What is PHI? (Protected Health Information) first appeared on TrustCloud.
“I have not failed. I've just found 10,000 ways that won't work”
- Thomas Edison
Introduction: This is a continuation of a deep dive into John the Ripper's new Tokenizer attack. Instructions on how to configure and run the original version of Tokenizer can be found [Here]. As a warning, those instructions need to be updated, as a new version of Tokenizer has been released that makes it easier to configure. The first part of my analysis can be found [Here].
This is going to be a bit of a weird blog entry as this is a post about failure. Spoiler alert: If you are reading this post to learn how to crack passwords, just go ahead and skip it. My tests failed, my tools failed, and my understanding of my tools failed. A disappointing number of passwords were cracked in the creation of this write-up. I'll admit, I was very tempted to shelve this blog post. But I strongly believe that documenting failures is important. Often when reading blog posts you don't really see the messy process that is research. Stuff just doesn't work, error messages that are obvious in retrospect are missed, and tests don't always turn out the way you expect. So as you read this, understand that it's more a journal of troubleshooting research tests when they go wrong, vs. a documentation of what to do.
To put it another way, the main audience for this blog post is:

In response to my previous blog entry, Solar Designer wrote: "One thing that surprised me is that your top 25 for training on RockYou Full (including dupes, right?) is different from what I had posted in here at all (even if similar)." [Link].
That's a good question, and one that I had been wondering about as well. There are a couple of things that could be causing this, from the way our Linux shells handle character encoding, to the order of our training lists, to differences in the training lists themselves. Or it could be something totally different that I'm not imaginative enough to come up with yet. At a high level, it's probably not that big of a deal since our experiences running Tokenizer attacks seem roughly the same (Solar Designer has posted tests comparing it to Incremental mode, and they roughly match what I've been seeing). But this can be a useful rabbit hole to dive down since it can expose optimizations or environmental issues that could cause problems as more people start to use this tool. There's a big gulf between "it works on my machine" and "it's easy for anyone else to run".
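To make the character-encoding suspicion concrete, here is a small Python sketch (my own illustration, not code from either tool) showing how the same raw bytes interpreted under two different encodings produce different character sequences, which would shift any character-level statistics a trainer computes:

```python
# Illustrative only: the same raw bytes decoded under two different
# encoding assumptions yield different "characters".
raw = "café".encode("utf-8")        # b'caf\xc3\xa9'

as_utf8 = raw.decode("utf-8")       # 4 characters: c a f é
as_latin1 = raw.decode("latin-1")   # 5 characters: c a f Ã ©

print(len(as_utf8), len(as_latin1))  # 4 5
```

If two shells (or two training pipelines) make different decoding assumptions about the same training file, the resulting character and n-gram counts will differ even though the input bytes are identical.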
Conclusion Up Front:
Tokenizer (and base-Incremental mode) seem resilient to the order of the passwords they are trained on, and setting 'export LC_CTYPE=C' did not seem to impact guess generation.
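That resilience to ordering makes sense if the trainers reduce their input to frequency counts, since counts are order-invariant. A quick sketch of the idea (illustrative Python, not either tool's actual code):

```python
import random
from collections import Counter

def ngram_counts(passwords, n=3):
    # Count fixed-length substrings the way a Markov/Incremental-style
    # trainer would; only frequencies matter, never input order.
    counts = Counter()
    for pw in passwords:
        for i in range(len(pw) - n + 1):
            counts[pw[i:i + n]] += 1
    return counts

training = ["password", "123456", "iloveyou", "dragon"] * 250
shuffled = training[:]
random.shuffle(shuffled)

# Identical counts -> identical trained model, regardless of list order.
print(ngram_counts(training) == ngram_counts(sorted(training))
      == ngram_counts(shuffled))  # True
```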
Bonus Finding:
When manually analyzing password guesses, DO NOT pipe the output of a password cracking session into "less". At least in my WSL Ubuntu shell, this seemed to add artifacts into the guesses I was creating, which gave me bad data. Note: this doesn't impact running actual password cracking sessions.
Instead, when using John the Ripper, make use of the "--max-candidates" option. Aka:
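A hypothetical invocation along these lines (the mode choice and value here are placeholders; only the "--max-candidates" flag is the point):

```shell
# Print the first 25 candidates to the terminal instead of cracking hashes
./john --incremental --stdout --max-candidates=25
```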
Discussion
This was an area where my analysis setup really let me down, so I chased a lot of unproductive leads before I was able to find the ground truth. For my first test, I compared a Tokenizer attack trained on a sorted RockYou1 training set to a Tokenizer attack trained on an unsorted RockYou1 training set. Initially they appeared to generate different guesses. For example:
This led down an unproductive rabbit hole where I ended up generating a lot of different character sets for Incremental mode to try and track down what was causing the differences in guess generation. It wasn't until I got really frustrated and ran a "diff" on the different .chr files that Incremental mode uses and found they were EXACTLY THE SAME that I realized the problem might be in how I was displaying the guesses.
Still, I learned a few new things, and improved my testing process. So it wasn't a complete waste.
Question 2: How does the PCFG OMEN mode attack differ from the original Rub-SysSec version?

Background: This question was inspired by a comment by @justpretending on the Hashcat Discord channel.
OMEN stands for Ordered Markov ENumerator, and the original paper/implementation can be found [Here]. I became interested in it after it was presented at PasswordsCon, where it was shown to be more effective than my PCFG attack and could generate guesses at speeds making it practical. That's certainly one way to get my attention! To better understand the OMEN attack, I took the Rub-SysSec OMEN code and re-implemented it in Python. The standalone version of the Python code (py-omen) is still available [Here]. Liking what I saw, I then replaced the existing Markov attack (based on JtR's --Markov mode) in the PCFG toolset with OMEN for the PCFG version 4 rewrite.
That's a lot of words to say that while the different implementations weren't 1-to-1, I expected my version of OMEN to be "mostly" similar to the original Rub-SysSec version. But it appears there are differences, so let's look into them!
Challenges with Unsupported Tools:
The first challenge I ran into was getting the stand-alone version of py-omen to run. For example, I get the following error when trying to generate guesses with py-omen:
I vaguely remember having to update my ConfigParser calls in the PCFG toolset, so that error tracks. My guess is if you ran py-omen with Python3.6 it would work, but it looks like it isn't compatible with Python3.12. While it is tempting to fix this bug now as having py-omen working would be nice for doing more experimentation with OMEN, it's really outside the scope of this investigation and the error message brings me joy. Long story short, I'm going to defer that work until a later point.
The important tool to run though is the C OMEN version developed by Rub-SysSec. The "make" build process worked without a hitch, but when I tried to run it the following error was displayed:
Looking into the open issues for the code I found one [Link] that highlighted that this problem occurs when you run it from an Ubuntu system. I verified this happens both with a WSL install of Ubuntu as well as a copy of Ubuntu running on bare metal. When I installed a Debian WSL environment though, I was able to get the original OMEN code to work.
Another challenge I ran into with the Rub-SysSec OMEN code was that by default it only generates 1 billion guesses and then stops. I "believe" you can override this using the "-e" endless flag, but I didn't figure this out until I had run my tests, so the following tests only display a 1 billion guess session vs. the 5 billion guess sessions I used in my previous blog posts.
Test 2) How does the original OMEN code perform compared to the PCFG OMEN code?
Test 2 Design:
Training the Rub-SysSec OMEN code on the RockYou1 training set (1 million random RockYou passwords), an attack will be run using a variation of the following command. Disclaimer: I didn't actually use the "-e" flag, but I'm including it here to make it easier the next time I need to copy/paste a command from this blog into a terminal.

You'll notice I don't specify a specific training ruleset for enumNG since the Rub-SysSec code only supports one training set at a time (aka you need to retrain it every time you want to use a different ruleset).
As to the target sets, I'm going to run enumNG against both the RockYou32 test set (a different set of 1 million passwords from RockYou), and the LinkedIn password dump that I used in the previous blog posts.
Test 2 Results:
Test 2 Discussion:
While I expected to see differences between the PCFG OMEN and the Rub-SysSec OMEN, I was still surprised by how much they differed. I obviously made some improvements in the PCFG version of OMEN while totally forgetting what they were. As you can see from these tests, the original Rub-SysSec OMEN performs comparably to the new JtR Tokenize attack (the original OMEN did better on RockYou, but roughly the same or worse against LinkedIn).
The PCFG OMEN did much better though. These two attacks should be "mostly" the same! This difference in performance is like a grain of sand in my boot and I'd really like to better understand what makes them different. You'll notice both attacks have the "sawtooth" pattern though so there's certainly room for optimizing the underlying OMEN attack regardless of the implementation.
My first thought was these two tools used different Markov orders (or NGrams). Both of the tools can have the length of NGrams be specified via command line options during training, so having different default settings was a likely source of differences. Unfortunately when looking at their settings, both the Rub-SysSec and the PCFG OMEN use a default of NGram=4 (same as a 3rd order Markov chain). So that's ruled out.
Another source of differences could be the alphabets each OMEN attack uses. The alphabet is the set of characters OMEN selects from when generating guesses. One change I made to the PCFG OMEN code was to allow for a variable number of characters in the alphabet based on the training data (as well as support for different character encodings such as UTF-8). You can see the differences between the two tools, which were both trained on the RockYou1 training data, below:
While the different alphabets are probably the cause of some of the differences, given that the "extra" characters in the PCFG OMEN are unlikely to be in many passwords, this doesn't explain the entire difference. My current theory is that the probability smoothing and how the PCFG toolset uses the "Initial Probability" (IP) may be the source of many of the other differences. Side note: Neither Rub-SysSec nor PCFG OMEN used "Ending Probability" (EP) by default.
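As an illustration of why smoothing matters here, below is my own sketch (not code from either toolset) of the general recipe both implementations follow: count n-grams, smooth, then discretize probabilities into integer levels. The smoothing constant and the level scaling are assumptions, and tweaking either is exactly the kind of change that would reorder guess generation:

```python
import math
from collections import Counter

def ngram_levels(passwords, n=4, max_level=10, smoothing=1):
    # Count n-grams, then map each probability to an integer "level":
    # roughly level = -log(p), scaled and capped. Additive smoothing
    # keeps rare n-grams from collapsing to probability zero.
    counts = Counter()
    for pw in passwords:
        for i in range(len(pw) - n + 1):
            counts[pw[i:i + n]] += 1
    total = sum(counts.values()) + smoothing * len(counts)
    levels = {}
    for gram, c in counts.items():
        p = (c + smoothing) / total
        levels[gram] = min(max_level, round(-math.log2(p) / 2))
    return levels

levels = ngram_levels(["password", "password1", "passw0rd"])
# More frequent n-grams get lower (better) levels.
print(levels["pass"] <= levels["w0rd"])  # True
```

Two implementations that disagree only on the smoothing constant or the log scaling will assign different levels to the same n-grams, and therefore enumerate guesses in a different order.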
What is very annoying though is I don't have any notes on what I did differently for smoothing, so to better understand this I need to dig back into the original Rub-SysSec version of OMEN as well as my own code. So this is something I need to research, but I'm going to defer most of that investigation to a later blog post.
TLDR: The Rub-SysSec and PCFG toolsets both use the OMEN attack, but there are implementation differences which cause them to behave very differently.

Question 3) Can the Tokenize approach be applied to the PCFG OMEN attack?
This test was brought up by Solar Designer on the john-users mailing list, and we discussed what this attack might look like [Here]. The proposed approach to test this can be summed up as follows:
This should be a good enough "smoke test" to see if there might be value in adding tokenize support to the main PCFG toolset.
Test 3 Training:
For this test, I'm finally updating my Tokenize code to the latest version in the John the Ripper bleeding-jumbo GitHub repository. To make things easier to compare against previous runs, I'll be training the new Tok-OMEN version on the RockYou1 1 million password training subset. Here are the commands I used:
This appeared to work correctly, as can be seen when I view the CP.level file (basically the 4-letter substrings OMEN uses for NGRAM=4) in Visual Studio Code:
Test 3 Design:
For a target set of passwords, I'm going to use the same RockYou32 1 million subset of passwords (different from the training passwords), and the LinkedIn password dump. This will allow me to directly compare this attack to the previous attacks I ran.
To test the attack and generate the first 25 guesses I used the following command. Please note: It is very important to pipe the result of the pcfg_guesser into JtR to make use of the associated untokenize_omen rule.
Note: Since OMEN works using "levels" it doesn't generate the most probable guesses first. Instead it generates all guesses at a specific level first. So for level 1 there are 104 guesses that can be created with this training set. You can see the keyspace per level in the PCFG rules file under Omen/omen_keyspace.txt. Still, this looks weird, and [[Spoiler Alert]] indicated a deeper problem with this attack run.
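The level-based ordering can be made concrete with a toy sketch (per-character costs stand in for real n-gram levels; every value here is made up for illustration). All guesses at one total level are exhausted before moving to the next, so the globally most probable guesses do not necessarily come first:

```python
from itertools import product

# Toy per-character level costs (lower = more probable). A real OMEN
# model assigns levels to n-grams; single characters keep this small.
costs = {"a": 0, "b": 1, "c": 2}

def guesses_at_level(level, length):
    # Emit every string whose summed cost equals the target level.
    for combo in product(costs, repeat=length):
        if sum(costs[ch] for ch in combo) == level:
            yield "".join(combo)

print(list(guesses_at_level(0, 2)))  # ['aa']
print(list(guesses_at_level(1, 2)))  # ['ab', 'ba']
```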
To actually run (and record) the attack I used the following command for the RockYou32 test set:
Test 3 Results:
Test 3a Discussion:
I ended up not running the test against the LinkedIn passwords, because ... Yikes. The TokOMEN attack only cracked 20,418 passwords. I was actually expecting it to struggle based on the first 25 passwords it created, but this was way worse than I expected.
As a quick check, I ran two short attacks (10 million guesses). One was a "normal" TokOMEN attack as above, and the other one was an intentionally "broken" TokOMEN attack without using the JtR External mode. I expected the second "broken" attack to totally fail as the tokenized guesses will be "junk", but that would at least tell me if the JtR External mode was working. The results of this were the "normal" attack cracking 15,920 passwords and the "broken" attack cracking 14,934 passwords. So the JtR External mode appears to be working to a degree.
Still, not great. Thinking it might be an issue with the current NGRAM setting, I reran the PCFG trainer using -n 2 (NGRAM=2). That's when I looked at the trainer output and noticed something going horribly wrong...
TLDR: The file encoding autodetect was going south due to the new tokens in the training data. This caused the pcfg-trainer to horribly misread many of the training passwords. The real results are even worse than the errors suggested: since there are a million total training passwords, many of the different training passwords were also incorrectly "merged". No wonder things went so wrong! That'll teach me to ignore error messages past-me put into the code.
Setting the encoding to UTF-8 or ASCII made ... negative progress. The PCFG trainer was rejecting most of the Tokenized training data. Looking at my code, I quickly realized the cause of most of these errors was my "dirty training dataset reject function". Basically the tokenized training data looks like all the "junk" that normally shows up in real password dumps. The PCFG trainer includes logic to reject these "junk" passwords to generate more effective rulesets for cracking real passwords. Below is an example of some of the logic that the PCFG trainer uses to clean up training datasets.
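As a stand-in sketch of that kind of cleanup logic (the specific checks below are my own illustrative assumptions, not the actual PCFG trainer code), note how tokenized training data trips rules that are perfectly sensible for raw password dumps:

```python
def looks_like_junk(line, max_len=30):
    # Illustrative cleanup rules of the kind a trainer might apply.
    if len(line) > max_len:               # over-long blobs (hashes, HTML)
        return True
    if "\t" in line or " " in line:       # field separators from dump formats
        return True
    if any(ord(ch) < 32 for ch in line):  # control characters
        return True
    return False

# A tokenized password full of token marker bytes looks like "junk"
# even though it is valid input for a later untokenize step.
print(looks_like_junk("password123"))       # False
print(looks_like_junk("pa\x01ss\x02word"))  # True
```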
Removing the sanity checks from the training data helped a bit, but ended up causing a problem when the PCFG trainer was trying to save the ruleset and write the OMEN data to disk:
Messing around with various options, I was able to get "different" errors, but no successful training runs. So there currently isn't an easy way to apply the Tokenizer attack to the current PCFG OMEN code.
Based on this, I started looking at the Rub-SysSec OMEN code, but that had similar issues. These issues were also compounded by the fact that the Rub-SysSec OMEN alphabet was hardcoded. So there isn't an easy option with that toolset either.
Test 3 Conclusion:
There currently doesn't exist an easy way to apply the Tokenizer attack to current OMEN implementations. I think there is a lot of possibility to incorporate "variable Markov order" aka "variable NGRAM" functionality into OMEN. I'm not convinced that a Tokenizer trainer such as tokenize.pl is the best way to go about that, though, considering password length is a component in the OMEN level calculations. But I think the lessons learned by looking at tokenize.pl and seeing it applied to JtR's Incremental mode can be applied to however this approach is incorporated into OMEN.
Question 4) When performing research (or running real cracking sessions), what is a "good" ruleset and dictionary to use?
This question was inspired by the following comment/question I received on the Hashcat Discord channel.
The follow-up discussion led to a really good conversation where @Br0ken shared their cracking techniques, rules, and wordlists they used. This is a good example of where I personally really benefit from writing these blog posts since I learn a lot from the comments/discussions they generate.
Because of that I wanted to share/document some of the advice and links that came out of that conversation.
Good Wordlists:
Discussion of Rulesets and Wordlists:
The list of topics keeps growing and growing. Here are new items to throw on my backlog:
The post Analyzing Tokenizer Part 2: Omen + Tokenizer appeared first on Security Boulevard.
How Does API Security Influence Cybersecurity? As a seasoned data management expert and cybersecurity specialist, I’ve witnessed firsthand the significant impact API security can have on an organization’s overall cybersecurity posture. But why is API security so integral? Let’s delve into that. Application Programming Interfaces (APIs) are the connective tissue of modern software development, bridging […]
The post Why Robust API Security is a Must for Your Business appeared first on Entro.
Why Are IAM Strategies Strategic to Data Breach Prevention? IAM strategies, or Identity Access Management strategies, prioritize the control and monitoring of digital identities within a system. Particularly in the world of cybersecurity, increasingly sophisticated threats are making it vital for organizations to ensure the right access to the right entities. This is where the […]
The post Preventing Data Breaches with Advanced IAM Strategies appeared first on Entro.
National Public Data, the data broker whose systems were breached this year, exposing 2.9 billion records holding sensitive data on 170 million people, has shut down following the attack and after a judge dismissed parent company Jerico Pictures' bankruptcy filing.
The post National Public Data Shuts Down Months After Massive Breach appeared first on Security Boulevard.
Authors/Presenters: Xiling Gong, Eugene Rodionov
Our sincere appreciation to DEF CON, and the Presenters/Authors for publishing their erudite DEF CON 32 content. Originating from the conference’s events at the Las Vegas Convention Center and via the organization’s YouTube channel.
The post DEF CON 32 – The Way To Android Root: Exploiting Smartphone GPU appeared first on Security Boulevard.
via the comic humor & dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Second Stage’ appeared first on Security Boulevard.
Mitigate shadow SaaS and shadow AI risks more effectively by aligning innovation with control. Explore how to build a proactive SaaS security strategy for 2025.
The post SaaS Security Outlook for 2025 | Grip Security appeared first on Security Boulevard.
Hell froze over: FBI and NSA recommend you use strong encryption.
The post China is Still Inside US Networks — It’s Been SIX Months appeared first on Security Boulevard.