Microsoft browsers are hit with a 0-day, Apple severs ties with Supermicro, IoT toys are spying on kids, and more. Jason Wood of Paladin Security joins us to talk about how the NSA is using cyberattacks for defense!
On February 23, 2017, the French Data Protection Authority (“CNIL”) launched an online public consultation on three topics identified by the Article 29 Working Party (“Working Party”) in its 2017 action plan for the implementation of the EU General Data Protection Regulation (“GDPR”). The three topics are consent, profiling and data breach notification.
This is the second online public consultation that the CNIL has launched on the GDPR. In June 2016, the CNIL launched a public consultation on the right to data portability, the data protection officer, data protection impact assessments (“DPIAs”) and certification. In November 2016, the CNIL published the results of the June 2016 public consultation.
The CNIL’s purpose for launching these consultations is to collect concrete questions regarding the GDPR, potential difficulties in interpreting the GDPR and examples of best practices. The responses are intended to inform the Working Party’s discussion regarding various GDPR topics. The Working Party will finalize guidelines on certification and DPIAs, issue new guidelines on the topics of consent and profiling, and update its opinions and guidance on data breach notifications.
The new consultation will be open through March 23, 2017. It will be followed by a second “FabLab” organized by the Working Party in Brussels on April 5 and 6, 2017, in which relevant stakeholders will be invited to present their views on the new 2017 priorities.
Alrighty... now that my RSA summary post is out of the way, let's get into a deeply personal post about how absolutely horrible of a week I had at RSA. Actually, that's not fair. The first half of the week was ok, but some truly horrible human beings targeted me (on social media) on Wednesday of that week, and it drove me straight down into a major depressive crash that left me reeling for days (well, frankly, through to today still).
I've written about my struggles with depression in the past, and so in the name of continued transparency and hope that my stories can help others, I wanted to relate this fairly recent tale.
If you can't understand or identify with this story, I'm sorry, but that's on you.
The Holy Trinity: Health, Career, and Relationships
This story really starts before the RSA conference. 2016 was an up-and-down year for a variety of reasons, but overall my health had been ok as I was able to re-establish a regular exercise routine. My weight was higher throughout the year (a negative), due in large part to ending 2015 with a major flu bug and sinusitis that lingered for several months. Frankly, even today, I'm worn out and not as resilient as I think I should be.
At any rate, I was doing ok in the health department until November when I traveled to Austin, TX, to speak at a small event. The night before I was supposed to speak I ended up eating something bad (I suspect a pickled jalapeño plucked from a jar on the table of a BBQ place) and contracting food poisoning. I got no sleep and was unable to eat all day, so speaking at 4pm after all of that to an audience of 5 (or fewer) was... not good. This led into more travel in early December such that by the middle of the month, I was sick. Two weeks of vacation on the road (sick), and suffice to say, by the time 2017 rolled around, I was completely worn out. Once health falls, poor diet routines tend to fall into place as I caffeinate to be functional during the day, which negatively impacts sleep, which negatively impacts weight, which creates the negative, reinforcing cycle around which everything else starts to circle and devolve.
Suffice to say, one of the three pillars had fallen, and as is common for me the past few years (ever since getting pneumonia in June 2014), the road back is slow and requires a lot of willpower. From a mental health perspective, once health falls, the danger is real that a depressive episode may approach if anything else takes a hit. Enter the career/work angle...
I'm not going to say a lot about this, but suffice to say, there's been a lot of personal job stress. Such an occurrence has been a trigger for me in the past, because - like so many people - a lot of my personal identity is wrapped around the work that I do. For the rare person reading this post who doesn't know, I work in the cybersecurity space, which is already beset with far above average burnout rates, which means the conditions are already tilted against success, happiness, and mental well-being. Add in my career history that's been so incredibly adverse and challenging, and the picture quickly shapes up that I can very quickly start feeling like I'm nothing more than a waste of space. After all, if work isn't fulfilling, and if I don't feel like I'm doing anything meaningful with my life, then it translates into feeling like I am meaningless. Don't argue, don't comment, don't provide some response about "no, man, you matter." It's not about rationality in this context, it's about how I feel at my core, which tends to be incredibly dark when the wheels fall off and the downward spiral commences.
To sum up, all of this describes the conditions going into RSA week. I was feeling fat, I was feeling tired, I was feeling incredibly undervalued and worthless at work... which set the conditions for what happened next, which was the sense of loss of the third pillar of relationships.
(many) People Suck
I'm not by nature a misanthrope, but I've started to become one over the years, because at the end of the day, a lot of people are miserable, awful, and just downright mean. I unfortunately experienced all of this first-hand during RSA week (all day Wednesday, to be precise - literally starting around midnight, early in the morning). What I found is that there are lots of hateful, evil people in the world who love nothing more than to shit all over everyone; especially people with whom they think they disagree. The best/worst part of this is that they're willing to shit all over people for things you may never have said, but which were (falsely) ascribed to you.
In the cab home from our company RSA party late Tuesday (aka early Wednesday) I made the mistake of responding to someone's tweet (on the Twitters). A person who apparently is a major figure in the "women in science" movement (a true dyed-in-the-wool hard core feminist in all the worst connotations) had shared an article about getting more women into science (a worthy goal), but I felt the tone was very anti-male, which I view as being anti-helpful in many ways. So, I replied in what I thought was a very neutral, thoughtful manner, along the lines of "I think this is great, but we need to be mindful not to be inclusive via exclusion." I later added "Building one group up by tearing another down is not a net positive." as well as "When the oppressed becomes the oppressor, you still have oppression, which is not truly beneficial to everyone."
It was appalling the degree and amount of raw, vile vitriol leveled at me for what I had viewed as thoughtful, respectful, constructive comments. Moreover, these comments were spewed at me literally all day Wednesday, to the tune of hundreds of tweets attacking me, calling me names and declaring things about me (clearly I'm such a product of "white male privilege," what with having grown up in a predominantly white rural community in a single-income academic household where we typically lived paycheck to paycheck and were consistently among the lowest social ranks). In some ways it was infuriating, but the constant onslaught of negativity and ad hominem attacks also took a severe toll on me in that I was already feeling crappy, and the NOP slide (so to speak) hit hard, driving me straight into the ground.
Even Small Things Amount to Piling-On
For those unfamiliar with the RSA Conference, Wednesday night during RSA week is historically an evening filled with corporate sponsored parties/receptions. As the event has grown, this has quickly become an overloaded evening of frivolity. Except this year I literally received no invitations. It was surreal. When I was with Gartner, it was all I could do to find a free moment. Even post-Gartner, as a buyer, there were myriad invitations. However, this year? Nothing. It was beyond strange, and by the time I realized it, pretty much all the parties were fully booked.
I figured, at worst, I could just tag along with people to a couple events, have a little fun, call it an early night. Sounded ok in theory. Right up until I got ditched twice in 30 minutes (by different people), and the tailspin started. Add onto this that I'd been trying to meet up with a couple of dear friends in particular, to no avail (busy schedules). And, because of work-related issues, I ended up with far too much unscheduled down time during the week (a rant for another day). But, for someone teetering on an emotional collapse, this became a rather big deal.
The biggest disaster of the night was when my phone got smacked out of my hand causing it to fly and smash against something (in the dark). When I retrieved it, I found the screen was now non-functional... which was highly problematic considering it was the only computing device that I'd brought with me for the week. I had no laptop, no back-up phone, nothing. I was terrified! I immediately felt cut off (from the world abusing me). I was already in emotional freefall, and now was completely offline and unavailable in case anyone did try to reach out. Panic ensued. It was late at night and I had to wait until morning.
All of these things (and many more) piled onto a bad day and rapidly accelerated a downward spiral. By Thursday morning I was exhausted and disconsolate. The only reason I got out of bed was the drive to replace my phone. I dragged myself to the Verizon Wireless store, only to find out they didn't open for another hour. I went to the office, only to find out that we don't actually have *any* phones (not even a polycom!). I was able to use one of the conference room computers to look up info for phone replacement, and then when a coworker arrived in the office, I borrowed her phone to call VzW to get details on my options. I then headed to the store a little before opening time (still ended up 4th in line) and quickly picked up a replacement device (which I subsequently hated and replaced once I got home). A couple hours after that and I finally was back online to a reasonable degree. But... the damage was done... and I was just ready to be done, too...
All of these things might strike you as trivial or insignificant, but you have to understand things in context. Already down due to ongoing health issues. Dragged/driven down even further by work issues. And then to have the social stuff go completely sideways? The spiral into the black hole was a rapid descent, and the recovery less than trivial. Imagine falling into a hole, and as you try to climb out, the ground falls away and you collapse into a deeper hole. And then everything starts to fall in on you... as you fall deeper into the hole, the darker it gets, but gravity also increases, crushing you, making it harder to breathe, not to mention being buried, buried, buried... you feel like there's no way out... you feel like there's no air to breathe... you feel crushed... that is what it felt like...
This is my RSA story. It could have been an ok week overall, but the bottom quickly fell out of it. There really were several potential positives (plus a few negatives), but it was hard to recognize them given Wednesday's NOP slide to disaster.
How am I doing today? If I'm being honest, no better than so-so. Including travel, I logged 101 hours Sun-Sat for RSA week. I was exhausted last week and am simply not recovered. I don't feel like my health or diet are in a good place yet. Work is still very stressful and I'm just not in a good place there. I'm in fact incredibly frustrated with work/career stuff right now. It's hands-down the single most vexing and depressing thing to me (I feel like a failure. I'm literally on my 3rd post-Gartner job in 2 years). It's really hard to bounce back when the pillars continue to remain shattered. Things don't feel right, and that makes everything more difficult.
But... if there's good news, it's that there are positives to be found, if I let myself see them. I do see the patterns, and I recognize changes I need to make to interdict those bad patterns. At least, to do so where I have actual control. But, it's really not an easy thing to do, and it's very difficult not to see and feel the dark cloud as it shrouds everything else. In the meantime, I do my best to soldier on, and try very hard to make better choices, such as around diet and exercise - asserting some degree of conscious choice and control where I can. Really, that's about all that one can do...
Here's to hoping 2017 turns around!
Mike Kail of Cybric join us. In the news, Verizon closes in on Yahoo, 8 key ingredients to a profitable consulting business, building a repeatable sales process, and when should you fire yourself? Stay tuned!
Now that I've had a week to recover from the annual infosec circus event to end all circus events, I figured it's a good time to attempt being reflective and proffer my thoughts on the event, themes, what I saw, etc, etc, etc.
For starters, holy moly, 43,000+ people?!?!?!?!?! I mean... good grief... the event was about a quarter of that a decade ago. If you've never been to RSA, or if you only started attending in the last couple years, then it's really hard to describe to you how dramatic the change has been since ~2010 when the numbers started growing like this (to be fair, yoy growth from 2016 to 2017 wasn't all that huge).
With that... let's drill into my key highlights...
Why do people like me go to RSA? Because it's the one week in the year where I can see almost every single vendor in the industry, as well as see people I know and like who I otherwise would never get to see in person (aka "networking"). It truly is an enormous event, and it has definitely passed the threshold of being overwhelming. Several people I've known for years did not make the trip this year, and I suspect this will become a trend, but in the meantime, in many ways it's a "must attend" event.
The down-side to an event this large, and something I learned back in my Gartner days, is that - as someone with nearly 2 decades of industry experience - this is not an event where you're going to find much great content. Talks must, out of necessity, be tuned to the median audience, which means looking backward at what was cutting-edge 5-10 years ago. Sad, but true. There's simply not much room for cutting-edge thinking or discussion at the event anymore.
Soooo... why go back? Again, so long as there's business development and networking benefit, it is an essential event, but it's also very costly. Hotel pricing alone makes this an increasingly difficult prospect. For as much as we're spending on hotels each year, I could very likely visit friends in 3-4 different parts of the country and break even on travel costs. It's also increasingly a lot of noise, and much harder to sift value from that noise. I truly believe RSA is nearing the point where they'll have to either break the event into multiple events (kind of like 3 weeks of SxSW), or they'll at least need to move to a different model where you're attending a conference within a conference (similar to "schools" within a large university). As it stands today, it's simply too easy to get lost in the shuffle and derive diminishing value.
Automation Nearing the Mainstream
We've been talking about security automation and orchestration for several years now, but it's often been with only a handful of examples, and generally quite forward-looking. We're just now finally reaching the point where the automation message is being picked up in the mainstream and more expansive examples are emerging.
One thing I noticed this year is that "automation" was prevalent in many booths. There are now at least a dozen vendors purportedly in the space (up from the days of it being Invotas (FireEye) and Phantom). No, I can't remember any names, but suffice to say, it's a growing list. Also, separately, I've noticed that orgs like Chef and Puppet have also made an attempt to expand their automation appeal to security (not to mention ServiceNow doing the same).
The point here is this: The mainstream consensus is finally starting to catch up with the reality that we will never be able to scale human resources fast enough to successfully address the rapidly changing threat landscape. Thus, we absolutely must automate as much as possible. We don't need SOC analysts staring at screens, pushing buttons when a color changes from green to red. That can be automated. Instead, we need to think about these processes and make smart decisions about when and where a human actually needs to be in the loop. This is our future, which we should eagerly embrace because it then frees us up to do much more interesting and exciting things.
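To make the "human in the loop only where needed" idea concrete, here's a minimal sketch of automated alert triage. All the rule names, fields, and thresholds are hypothetical, invented purely for illustration; a real SOAR platform would be far richer, but the shape of the decision is the same: automate the rote dispositions, escalate only what genuinely needs human judgment.

```python
# Hypothetical sketch: automate routine alert triage, escalate only
# the alerts that actually require a human analyst's judgment.
# Rule names and severity scale are made up for this example.

AUTO_CLOSE_RULES = {
    "port_scan_internal_scanner",   # known, sanctioned vulnerability scanner
    "av_signature_update_failed",   # already handled by patching automation
}

def triage(alert: dict) -> str:
    """Return a disposition for an alert dict with 'rule' and 'severity' keys."""
    if alert["rule"] in AUTO_CLOSE_RULES:
        return "auto-closed"
    if alert["severity"] >= 8:
        # A human actually needs to look at high-severity unknowns.
        return "escalated-to-analyst"
    # Everything else gets automated enrichment and waits in a queue.
    return "auto-enriched-and-queued"

print(triage({"rule": "port_scan_internal_scanner", "severity": 3}))  # auto-closed
print(triage({"rule": "possible_exfil", "severity": 9}))  # escalated-to-analyst
```

The point isn't the three-line rule set; it's that every disposition a machine can make reliably is one an analyst no longer has to make by staring at a screen.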
Since we're talking about automation, it's only natural to pivot briefly into DevOps/DevSecOps/Secure DevOps. This year's Monday event on DevSecOps was ok, if not highly repetitive. However, initial attendance was strong, and feedback has reportedly been good (the schedule got a bit foobar, so attendance declined after lunch, c'est la vie).
Here's what's important: Companies are continuing to reinvent how they operate, and DevOps is the underlying model. As such, we need to push hard to ensure that Dev and Ops teams have security responsibilities in their assigned duties, and that they are held accountable accordingly. A DevOps co-worker recently complained about this "DevSecOps" thing, and I pointed out that the entire reason for it is as a kludge because security has once again been left behind, and neither Dev nor Ops has taken on (or been assigned) security responsibilities, nor are they being held accountable for poor security decisions. THIS IS A CULTURAL FAILING THAT AFFECTS ALMOST EVERY SINGLE COMPANY AROUND.
In DevOps, the norm is always to point to "gold standard" examples like Netflix, Facebook, Etsy, etc. However, what people oftentimes forget in looking at these orgs is that, for the most part, they started out doing DevOps from the early days. There was very little need for cultural transformation because they were already operating in a DevOps manner. For companies that have been around for much, much, much longer, there will be internal opposition and institutional inertia that will slow down transformations. It's imperative that these cultural attributes be supplanted, aggressively if necessary, in order to remove barriers to change. DevOps provides an amazing template for operating an agile, efficient, effective organization... but only if companies fundamentally change how they function, including cultural transformation.
AI, ML, and Big Data Lies
If we were to take all the marketing at face value, then we'd be led to believe that the machines are thinking for themselves and we're a mere small step away from becoming part of The Matrix. Thankfully, that's not really true at all. The majority of companies claiming "AI" today are really being misleading and disingenuous. The simple fact is the majority of products are still based on heuristics or machine learning (ML) - sometimes both.
Heuristics is the traditional pattern matching we've seen for decades upon decades upon decades. Your traditional AV or IDS "solution"? It's primarily based on heuristically matching patterns and signatures to detect "a known bad thing." These are ok, but in the grand scheme they're providing little lasting value.
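At its core, heuristic detection really is just pattern matching. Here's a deliberately tiny sketch (the signature names and byte patterns are invented for illustration, not drawn from any real AV product) showing why it only ever catches "a known bad thing":

```python
# Toy illustration of signature-based (heuristic) detection.
# Signature names and patterns are hypothetical examples.
SIGNATURES = {
    "eicar_test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    "suspicious_shell": b"/bin/sh -i",
}

def scan(payload: bytes) -> list[str]:
    """Return the names of any known-bad signatures found in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]

print(scan(b"GET /index.html HTTP/1.1"))  # []
print(scan(b"exec /bin/sh -i 0<&1"))      # ['suspicious_shell']
```

Anything not already in the signature list sails straight through, which is exactly the limitation being described.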
ML has emerged as an alternative, wherein rather than looking for patterns, we instead model environments or behaviors, and then alert based on either matching or deviating from the models (sometimes both!). The ML approach is actually quite promising, though it's premised on the ability to actually create a discrete model of an environment or behavior. It is also imperative that ML engines be constantly rebuilding the models to account for changes in an environment or behavior (for example, imagine building a model of your diet starting in mid-October and running through the end of the year, and then trying to apply that same model to your diet Jan-Mar after you've made major life changes, perhaps as part of a New Year's Resolution).
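The model-and-deviate idea can be sketched in a few lines. This is a minimal statistical baseline, not real ML, and every name and number is a hypothetical example, but it shows both halves of the point: alerting on deviation from a learned model, and the need to periodically rebuild that model as the environment changes.

```python
# Minimal sketch of "model the behavior, alert on deviation."
# A real ML engine would learn far richer models; this just uses
# a mean/standard-deviation baseline over a single numeric behavior.
import statistics

class BehaviorModel:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.threshold = threshold_sigmas
        self.baseline: list[float] = []

    def fit(self, observations: list[float]) -> None:
        # Rebuilding from scratch mirrors the point above: models must be
        # retrained as the environment or behavior changes.
        self.baseline = list(observations)

    def is_anomalous(self, value: float) -> bool:
        mean = statistics.mean(self.baseline)
        stdev = statistics.pstdev(self.baseline) or 1.0
        return abs(value - mean) / stdev > self.threshold

model = BehaviorModel()
model.fit([10, 12, 11, 9, 10, 13, 11])  # e.g., "normal" daily login counts
print(model.is_anomalous(11))   # False - within the learned baseline
print(model.is_anomalous(500))  # True  - large deviation, worth an alert
```

The diet analogy maps directly: a model fit on October-December observations would flag perfectly healthy January behavior as anomalous until it's refit.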
There is a lot of hope in AI, ML, et al, and I think for good reason. Frankly, ML gives us a lot of value when applied to reasonably discrete environments (e.g., containers), and thus I think we'll continue to see great growth and success in this space. I expect that computing environments will also continue to evolve and grow to make modeling of them that much easier. I think there's much promise.
As for AI itself, we'll have to wait and see, but I suspect we're a good decade+ away from true examples of real-world applications. However, that said, if you're in a lower-level role (analyst, basic infrastructure config, etc.), then now is a good time to invest in training/education to improve your skills to raise yourself up to a higher-level job that will be less easily threatened by AI+automation. As I noted above, we really do not need SOC analysts staring at screens clicking buttons according to a set process. Machines can already do that today. Thus there's no job security in it. Instead, become the person who builds and trains these automation tools, or be the higher-level "fixer" who is activated once automation has done all the base enumeration and examination. The world is changing rapidly, and will look quite different in a decade.
The Threat Is Real / Ignore the FUD
One of my favorite tunes from last year is Megadeth's "The Threat Is Real" as it's really quite an appropriate phrase. Hacks are succeeding every day. Breaches are so commonplace that the mainstream media has all but lost interest in reporting on them. Incidents are inevitable. And, yet, in some ways they needn't be so inevitable; at least, not to the degree and severity we continually see. Whether it be massive DDoS attacks built on the back of woefully insecure IoT devices or sizable holes in cloud CDN infrastructure a la Cloudbleed, there are a lot of holes, a lot of bugs, and a lot of undertrained people, all of which will lead to bad days.
That said, we also need to be incredibly mindful and diligent to avoid the FUD. There's too much FUD. It's like running around telling us that "we're all gonna die" as if we don't all accept this as an inevitability. Come on, folks, let's get out of that red mental state (fear/panic/anger) and apply some rational thought. There are tons of things we can be doing to prepare and protect our organizations, our customers/clients, and our resources. We just need to take a deep breath, settle down, and execute.
What should we do? Well, interestingly, it's not all that strange a list. First and foremost, Basic Security Hygiene, which I wrote about while at Gartner nearly 2.5 years ago. Things like robust IAM (centralized, processized, monitored), vuln and patch mgmt, and applying consistent, secure standards for infrastructure and development are all great starting points. Beyond that, it comes down to taking the time to understand your environment and exposures, and investing in tools and techniques that will produce measurable results (measurement is key!!!). A progressive security awareness program can be critical to educating and incentivizing people to make better decisions, and really reflects the overall imperative to transform the business and its underlying culture. We can absolutely make things better, but it requires effort and thoughtfulness.
*whew* Ok... so, there you have it... my thoughts from RSA 2017. All told, it was a so-so week for me personally, but I'll definitely be back for one more year. TBD after that. It's really quite the circus these days. This year was especially difficult with how spread out things were (Moscone South has a major construction project underway, so the Marriott Marquis was enlisted). The wifi and mobile signals in the Marquis dungeon were nonexistent, which was painful. Also painful was the 4 spread out venues for Codebreaker's Bash on Thursday evening. It didn't work. Because people were spread all over, it was difficult to casually run into folks I was hoping to see. Hopefully next year they'll revert to a large single venue (I really, really, really enjoyed the Bash at AT&T Park, though many folks complained about it). Finding a venue for 43k+ people has to be incredibly challenging. Of course, so is finding a hotel room each year, so, ya know, there's that, too. Ha.
Hope you find this interesting/useful! Until next time...
Don Pezet of ITPro.TV joins us, David Fletcher of Symantec delivers a technical segment, and we cover the security news for the week. Stay tuned!
Jim Routh of Aetna and InfoSec World joins us. In the news, Cisco touts next-generation firewall gear, a new decryption tool from Avast, Centrify stops breaches in real time, and more. Stay tuned!
The Proposed Agreements prohibit each company from misrepresenting their participation, membership or certification in any privacy or security program sponsored by a government or self-regulatory or standard-setting organization. The Proposed Agreements will be open for public comment until March 24, 2017, after which point the FTC will vote on whether to finalize them.
Revision Note: V2.1 (February 23, 2017): Revised bulletin to announce a detection logic change to Monthly Rollup Release KB3205403 and Monthly Rollup Release KB3205404. This is an informational change only. Customers who have already successfully updated their systems do not need to take any action.
Summary: This security update resolves a vulnerability in Microsoft .NET 4.6.2 Framework’s Data Provider for SQL Server. A security vulnerability exists in Microsoft .NET Framework 4.6.2 that could allow an attacker to access information that is defended by the Always Encrypted feature.
On February 20, 2017, the Article 29 Working Party (“Working Party”) issued a template complaint form and Rules of Procedure that clarify the role of the EU Data Protection Authorities (“DPAs”) in resolving EU-U.S. Privacy Shield-related (“Privacy Shield”) complaints.
The Working Party’s template complaint form indicates that it is intended for use by EU individuals who wish to have their commercial-related complaints associated with Privacy Shield-certified organizations resolved by their national DPAs. Individuals are not required to use the form to submit a complaint to their DPA, but the form indicates that the information requested in the form is necessary to facilitate the handling of individuals’ complaints. The form asks for relevant information about the complaint, such as which companies may be involved in processing the individual’s personal data, the reasons why the data has been transferred, the alleged violation and what relief is being sought. Importantly, the form advises individuals that “in most cases, it would be advisable that you first contact the U.S. Privacy Shield-certified company to attempt to resolve your case.” Once the DPA receives the complaint, it is the DPA’s responsibility to determine whether a DPA panel would be the “competent body” to resolve the complaint. A DPA panel is the competent body only if the complaint is related to a Privacy Shield-certified organization that has committed to cooperate with the DPA panel or that processes human resources data collected in the context of an employment relationship. Otherwise, the complaint may be forwarded to another competent body, such as the U.S. Department of Commerce or the FTC.
The Rules of Procedure clarify the roles of the DPA panel and lead DPA in resolving individuals’ complaints. The Rules of Procedure indicate that upon a DPA receiving a relevant complaint or referral, a DPA panel will be formed (only if competent to resolve the complaint, as described above) in a “timely manner” and “in principle, be confirmed within two weeks’ time from the receipt of the initial complaint/referral.” Each DPA panel will consist of a lead DPA and at least two co-reviewer DPAs. The lead DPA typically will be the DPA that received the complaint. Additional co-reviewer DPAs may be added in “appropriate circumstances…if more than two DPAs are interested in participating in the panel and can put forward a specific interest.” Where fewer than two DPAs indicate an interest in acting as a co-reviewer, the lead DPA has the power to appoint two co-reviewers and should take into account such factors as (1) where the company’s EU headquarters or significant subsidiaries are located, (2) where in the EU the relevant data processing is facilitated, (3) the place in the EU from which most of the data transfers take place, (4) the place where a large number of EU individuals are likely to be affected by the alleged violation and (5) available resources.
The Rules of Procedure also specify additional roles of the lead DPA, such as (1) informing all Working Party members about which DPAs are participating in the panel, (2) informing the company of the substance of the complaint, (3) offering both sides in the dispute a reasonable opportunity to comment and provide any evidence they wish on the matter within a reasonable timeframe, (4) drafting a binding advice opinion that includes remedies, where appropriate, and (5) considering DPA co-reviewers’ advice and attempting to reach a consensus on the advice. If the lead DPA and co-reviewer DPAs cannot reach a consensus on the advice, the lead DPA may request that the Working Party Chair mediate a solution or, as a last resort, a simple majority vote may be used to determine the advice. According to the Rules of Procedure, the DPA panel will “seek to deliver advice as quickly as the requirement for due process allows [and] [a]s a general rule, the panel will aim to provide advice within 60 days after receiving a complaint…and more quickly where possible. However, advice will be issued only after both sides in a dispute have had a reasonable opportunity to comment and to provide any evidence they wish.”
Importantly, if the company fails to comply with the DPA panel’s advice within 25 days after receipt, the lead DPA is required to (1) give notice of the panel’s intention to refer the matter to the FTC or other U.S. Federal or state body with statutory powers to take enforcement action in cases of deception or misrepresentation, or (2) conclude that the company’s agreement to cooperate with the DPA panel has been seriously breached and must therefore be considered null and void, and inform the Department of Commerce so that the Privacy Shield list can be duly amended.
A lone hacker breaches 60 universities and federal agencies, Yahoo loses $350 million from breaches, more bug bounty programs for porn sites, and is your child a hacker? Jason Wood of Paladin Security joins us to talk about smart city technology that could make military bases more secure!
While this is technically a CTF writeup, like I frequently do, this one is going to be a bit backwards: this is for a CTF I ran, instead of one I played! I've gotta say, it's been a little while since I played in a CTF, but I had a really good time running the BSidesSF CTF! I just wanted to thank the other organizers - in alphabetical order - @bmenrigh, @cornflakesavage, @itsc0rg1, and @matir. I couldn't have done it without you folks!
The goal of this post is to explain a little bit of the motivation behind the challenges I wrote, and to give basic solutions. It's not going to have a step-by-step walkthrough of each challenge - though you might find that in the writeups list - but, rather, I'll cover what I intended to teach, and some interesting (to me :) ) trivia.
If you want to see the source of the challenges, our notes, and nearly everything else we generated as part of creating this CTF, you can find them here:
- Original source code on GitHub
- Google Drive notes (note that that's not the complete set of notes - some stuff (like comments from our meetings, brainstorming docs, etc) are a little too private, and contain ideas for future challenges :) )
Part of my goal for releasing all of our source + planning documents + deployment files is to a) show others how a CTF can be run, and b) encourage other CTF developers to follow suit and release their stuff!
As of this writing, the scoreboard and challenges are still online. We plan to keep them around for a couple more days before finally shutting them down.
The rest of my team can most definitely confirm this: I'm not an infrastructure kinda guy. I was happy to write challenges, and relied on others for infrastructure bits. The only thing I did was write a Dockerfile for each of my challenges.
As such, I'll defer to my team on this part. I'm hoping that others on my team will post more details about the configurations, which I'll share on my Twitter feed. You can also find all the Dockerfiles and deployment scripts in our GitHub repository.
What I do know is, we used:
- Google's CTF Scoreboard running on AppEngine for our scoreboard
- Dockerfiles for each challenge that had an online component, and Docker for testing
- docker-compose for testing
- Kubernetes for deployment
- Google Container Engine for running all of that in The Cloud
As I said, all the configurations are on GitHub. The infrastructure worked great: we had absolutely no traffic or load problems, and only very minor other issues.
I'm also super excited that Google graciously sponsored all of our Google Cloud expenses! The CTF weekend cost us roughly $500 - $600, and as of now we've spent a little over $800.
Just a few numbers:
- We had 728 teams register
- We had 531 teams score at least one point
- We had 354 teams score at least 100 points
- We had 23 teams submit at least one on-site flag (presumably, that many teams played on-site)
Also, the top-10 teams were:
- dcua :: 6773
- OpenToAll :: 5178
- scryptos :: 5093
- Dragon Sector :: 4877
- Antichat :: 4877
- p4 :: 4777
- khack40 :: 4677
- squareroots :: 4643
- ASIS :: 4427
- Ox002147 :: 4397
The top-10 teams on-site were:
- OpenToAll :: 5178
- ▣ :: 3548
- hash_slinging_hackers :: 3278
- NeverTry :: 2912
- 0x41434142 :: 2668
- DevOps Solution :: 1823
- Shadow Cats :: 1532
- HOW BOU DAH :: 1448
- Newbie :: 762
- CTYS :: 694
The full list can be found on our CTFTime.org page.
We had three on-site challenges (none of them created by me):
This was a one-point challenge designed simply to determine who's eligible for on-site prizes. We had a flag taped to the wall. Not super interesting. :)
(Speaking of prizes, I want to give a shout out to Synack for providing some prizes, and in particular for working with us on a fairly complex set-up for dealing with said prizes. :) )
Shared Secrets 
The Shared Secrets challenge was a last-minute idea. We wanted more on-site challenges, and others on the CTF organizing team came up with the idea of using Shamir's Secret Sharing scheme. We posted QR codes containing pieces of a secret around the venue.
It was a "3 of 6" scheme, so only three were actually needed to get the secret.
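The scheme itself fits in a few lines. Below is a minimal sketch of a "3 of 6" split and Lagrange-interpolation recovery over a prime field - the prime and the integer-encoded secret are illustrative assumptions, not the values used in the challenge:

```python
import random

P = 2**127 - 1  # a prime larger than the secret (assumption for this sketch)

def split(secret, k=3, n=6):
    # Random degree-(k-1) polynomial with the secret as constant term;
    # each share is a point (x, f(x)).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x=0 recovers the constant term (the secret).
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret
```

With k=3, any three of the six shares reconstruct the secret; two shares reveal nothing.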
The quotes on top of each image try to push people towards either "Shamir" or "ACM 22(11)". My favourite was, "Hi, hi, howdy, howdy, hi, hi! While everyone is minus, you could call me multiply", which is a line from a Shamir (the rapper) song. I did not determine if Shamir the rapper and Shamir the cryptographer were the same person. :)
Locker is really cool! We basically set up a padlock with an Arduino and a receipt printer. After successfully picking the lock, you'd get a one-time-use flag printed out by the printer.
(We had some problems with submitting the flag early on, because we forgot to build the database for the one-time-use flags, but got that resolved quickly!)
@bmenrigh developed the lock post, which detected the lock opening, and @matir developed the software for the receipt printer.
I'm not going to go over others' challenges beyond the on-site ones I already covered, since I don't have the insight to comment on them. However, I do want to cover all of my challenges. Not a ton of detail, but enough to understand the context. I'll likely blog about a couple of them specifically later.
I probably don't need to say it, but: challenge spoilers coming!
'easy' challenges [10-40]
I wrote a series of what I called 'easy' challenges. They don't really have a trick to them, but teach a fundamental concept necessary to do CTFs. They're also a teaching tool that I plan to use for years to come. :)
easy  - a couldn't-be-easier reversing challenge. Asks for a password then prints out a flag. You can get both the password and the flag by running strings on the binary.
easyauth  - a web challenge that sets a cookie, and tells you it's setting a cookie. The cookie is simply 'username=guest'. If you change the cookie to 'username=administrator', you're given the flag. This is to force people to learn how to edit cookies in their browser.
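The server-side logic being bypassed is roughly this shape - a hypothetical recreation, not the actual challenge code:

```python
def easyauth(cookie_header):
    # Parse a "Cookie:" header of the form "username=guest; theme=dark"
    cookies = dict(pair.split("=", 1) for pair in cookie_header.split("; "))
    # The only "authentication" is the cookie's value, so editing the
    # cookie in your browser is the whole challenge.
    if cookies.get("username") == "administrator":
        return "FLAG{hypothetical-flag}"
    return "Welcome, guest! (Hint: check your cookies.)"
```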
easyshell  and easyshell64  - these are simple programs that accept shellcode and run it. They require the player to figure out what shellcode is and how to get some (eg, from msfvenom or an online shellcode database). There are both 32- and 64-bit versions.
easyshell and easyshell64 are also good ways to test shellcode, and a place where people can grab libc binaries, if needed.
And finally, easycap  is a simple packet capture, where a flag is sent across the network one packet at a time. I didn't keep my generator, but it's essentially a ruby script that would do a s.send() on each byte of a string.
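The generator itself was throwaway code, but the idea is trivial to reconstruct. Here's an equivalent sketch in Python (host and port are placeholders; the original was ruby):

```python
import socket

def send_flag_bytewise(flag, host, port):
    # One send() call per character, so each byte of the flag travels
    # in its own packet (modulo TCP coalescing on a real network).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    for ch in flag:
        s.send(ch.encode())
    s.close()
```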
skipper  and skipper2 
Now, we're starting to get into some of the levels that require some amount of specialized knowledge. I wrote skipper and skipper2 for an internal company CTF a long time ago, and have kept them around as useful teaching tools.
One of the first thing I ever did in reverse engineering was write a registration bypass for some icon-maker program on 16-bit DOS using the debug.com command and some dumb luck. Something where you had to find the "Sorry, your registration code is invalid" message and bypass it. I wanted to simulate this, and that's where these came from.
With skipper, you can bypass the checks by just changing the program counter ($eip or $rip) or nop'ing out the checks. skipper2, however, incorporates the results from the checks into the final flag, so they can't be skipped quite so easily. Rather, you have to stop before each check and load the proper value into memory to get the flag. This simulates situations I've legitimately run into while writing keygens.
When I originally conceived of hashecute, I had imagined it being fairly difficult. The idea is, you can send any shellcode you want to the server, but you have to prepend the MD5 of the shellcode to it, and the prepended shellcode runs as well. That's gotta be hard, right? Making an MD5 that's executable??
Except it's not, really. You just need to make sure your checksum starts with a short jump to the end of the checksum (or to a NOP sled if you want to do it even faster!). That's \xeb\x0e (a two-byte short jmp) or \xe9 plus a four-byte offset (a near jmp), as the simplest examples (there are practically infinite others). And it's really easy to do that by just appending crap to the end of the shellcode: you can see that in my solution.
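A minimal sketch of that brute-force in Python - the 4-byte random junk suffix is an arbitrary choice, and the loop needs about 65,536 MD5 attempts on average:

```python
import hashlib
import os

def make_hashecute_payload(shellcode):
    # Append junk until md5(shellcode + junk) begins with \xeb\x0e:
    # a short jmp that skips the remaining 14 digest bytes (relative
    # to the next instruction at offset 2, landing at offset 16),
    # i.e., exactly at the start of the shellcode.
    while True:
        candidate = shellcode + os.urandom(4)
        digest = hashlib.md5(candidate).digest()
        if digest[:2] == b"\xeb\x0e":
            return digest + candidate
```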
It does, however, teach a little critical thinking to somebody who might not be super accustomed to dealing with machine code, so I intend to continue using this one as a teaching tool. :)
b-64-b-tuff has the dual honour of both having the stupidest name and being the biggest waste of my own time. :)
So, I came up with the idea of writing this challenge during a conversation with a friend: I said that I know people have written shellcode encoders for unicode and other stuff, but nobody had ever written one for Base64. We should make that a challenge!
So I spent a couple minutes writing the challenge. It's mostly just Base64 code from StackOverflow or something, and the rest is the same skeleton as easyshell/easyshell64.
Then I spent a few hours writing a pure Base64 shellcode encoder. I intend to do a future blog 100% about that process, because I think it's actually a kind of interesting problem. I eventually got to the point where it worked perfectly, and I was happy that I could prove that this was, indeed, solveable! So I gave it a stupid name and sent out my PR.
That's when I think @matir said, "isn't Base64 just a superset of alphanumeric?".
Yes. Yes it is. I could have used any off-the-shelf alphanumeric shellcode encoder such as msfvenom. D'OH!
But, the process was really interesting, and I do plan to write about it, so it's not a total loss. And I know at least one player did the same (hi @Grazfather! [he graciously shared his code where he encoded it all by hand]), so I feel good about that :-D
I like to joke that I only write challenges to drive traffic to my blog. This is sort of the opposite: it rewards teams that read my blog. :)
A few months ago, while writing the delphi-status challenge (more on that one later), I realized that when encrypting data using a padding oracle, the last block can be arbitrarily chosen! I wrote about it in an off-handed sort of way at that time.
Shortly after, I realized that it could make a neat CTF challenge, and thus was born in-plain-site.
It's kind of a silly little challenge. Like one of those puzzles you get in riddle books. The ciphertext was literally the string "HiddenCiphertext", which I tell you in the description, but of course you probably wouldn't notice that. When you do, it's a groaner. :)
Fun story: I had a guy from the team OpenToAll bring up the blog before we released the challenge, and mention how he was looking for a challenge involving plaintext ciphertext. I had to resist laughing, because I knew it was coming!
This was a silly little level, which once again forces people to get shellcode. You're allowed to send up to 5 bytes of shellcode to the server, where the flag is loaded into memory, and the server executes them.
Obviously, 5 bytes isn't enough to do a proper syscall, so you have to be creative. It's more of a puzzle challenge than anything.
The trick is, I used a bunch of in-line assembly when developing the challenge (see the original source, it isn't pretty!) that ensures that the registers are basically set up to make a syscall - all you have to do is move esi (a pointer to the flag) into ecx. I later discovered that you can "link" variables to specific registers in gcc.
The intended method was for people to send \xcc for the shellcode (or similar) and to investigate the registers, determining what the state was, and then to use shellcode along the lines of xchg esi, ecx / int 0x80. And that's what most solvers I talked to did.
One fun thing: eax (which is the syscall number when a syscall is made) is set to len(shellcode) (the return value of read()). Since sys_write, the syscall you want to make, is number 4, you can easily trigger it by sending 4 bytes. If you send 5 bytes, it makes the wrong call.
Several of the solutions I saw had a dec eax instruction in them, however! The irony is, you only need that instruction because you have it. If you had just left it off, eax would already be 4!
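Putting those observations together, one plausible 4-byte payload looks like this - the opcode bytes are my own encoding of the intended xchg esi, ecx / int 0x80 sequence, not pulled from the challenge source:

```python
# Exactly four bytes, so read() returns 4, leaving eax == 4
# (sys_write on 32-bit Linux) when the int 0x80 fires.
shellcode = bytes([
    0x87, 0xce,  # xchg esi, ecx  -> ecx now points at the flag
    0xcd, 0x80,  # int 0x80       -> write(ebx, ecx, edx)
])
```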
delphi-status was another of those levels where I spent way more time on the solution than on the challenge.
It seems common enough to see tools to decrypt data using a padding oracle, but not super common to see challenges where you have to encrypt data with a padding oracle. So I decided to create a challenge where you have to encrypt arbitrary data!
The original goal was to make somebody write a padding oracle encryptor tool for me. That seemed like a good idea!
But, I wanted to make sure this was do-able, and I was just generally curious, so I wrote it myself. Then I updated my tool Poracle to support encryption, and wrote a blog about it. If there wasn't a tool available that could encrypt arbitrary data with a padding oracle, I was going to hold back on releasing the code. But tools do exist, so I just released mine.
It turns out, there was a simpler solution: you could simply xor-out the data from the block when it's only one block, and xor-in arbitrary data. I don't have exact details, but I know it works. Basically, it's a classic stream-cipher-style attack.
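For a single-block message, this is just CBC's IV malleability: P1 = D(C1) XOR IV, so anyone who learns the plaintext of a block can pick a new IV that makes that same block decrypt to anything they like. A sketch (no real cipher involved; the test simulates the block-decryption output):

```python
def forge_iv(iv, known_pt, wanted_pt):
    # CBC decryption: P1 = D(C1) XOR IV. Flipping a bit in the IV
    # flips the same bit in P1, so retarget the plaintext with
    # IV' = IV XOR known_pt XOR wanted_pt.
    return bytes(i ^ k ^ w for i, k, w in zip(iv, known_pt, wanted_pt))
```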
And that just demonstrates the Cryptographic Doom Principle :)
ximage might be my favourite level. Some time ago - possibly years - I was chatting with a friend, and steganography came up. I wondered if it was possible to create an image where the very pixels were executable!?
I went home wondering if that was possible, and started trying to think of 3-byte NOP-equivalent instructions. I managed to think of a large number of workable combinations, including ones that modified registers I don't care about, plus combinations of 1- and 2-byte NOP-equivalents. By the end, I could reasonably do most colours in an image, including black (though it was slightly greenish) and white. You can find the code here.
(I got totally nerdsniped while writing this, and just spent a couple days trying to find every 3-byte NOP equivalent to see how much I can improve this!)
Originally, I just made the image data executable, so you'd have to ignore the header and run the image body. Eventually, I noticed that the bitmap header, 'BM', was effectively inc edx / dec ebp, which is a NOP for all I'm concerned. That's followed by a 2-byte length value. I changed that length on every image to be \xeb\x32, which is effectively a jump to the end of the header. That also caused weird errors when reading the image, which I was totally fine with leaving as a hint.
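The arithmetic works out neatly, assuming a standard palette-free bitmap where pixel data starts right after the two headers:

```python
# 'BM' = 0x42 0x4d = inc edx / dec ebp (harmless), then \xeb\x32 is
# jmp short +0x32, taken relative to the next instruction (offset 4).
BITMAPFILEHEADER = 14  # standard BMP file header size
BITMAPINFOHEADER = 40  # standard DIB header size
landing = 4 + 0x32
# The jump lands exactly past both headers, at the pixel data.
assert landing == BITMAPFILEHEADER + BITMAPINFOHEADER
```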
So what you have is an image that's effectively shellcode; it can be loaded into memory and run. A steganographic method that has probably never been done. :)
beez-fight was an item-duplication vulnerability that was modeled after a similar vulnerability in Diablo 2. I had a friend a lonnnng time ago who discovered a vulnerability in Diablo 2, where when you sold an item it was copied through a buffer, and that buffer could be sold again. I was trying to think of a similar vulnerability, where a buffer wasn't cleared correctly.
I started by writing a simple game engine. While I was creating items, locations, monsters, etc., I didn't really think about how the game was going to be played - browser? A binary I distribute? netcat? Distributing a binary can be fun, because the player has to reverse engineer the protocol. But netcat is easier! The problem is, the vulnerability has to be a bit more subtle in netcat, because I can't depend on a numbered buffer - what you see is what you get!
Eventually, I came upon the idea of equip/unequip being problematic. Not clearing the buffer properly!
Something I see far too much in real life is code that checks if an object exists in a different way in different places. So I decided to replicate that - I had both an item that's NULL-able, and a flag :is_equipped. When you tried to use an item, it would check if the :is_equipped flag is set. But when you unequipped it, it checked if the item was NULL, which never actually happened (unequipping it only toggled the flag). As a result, you could unequip the item multiple times and duplicate it!
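The inconsistent-check bug reduces to something like this - a stripped-down recreation, not the actual game code:

```python
class Inventory:
    def __init__(self):
        self.equipped_item = None
        self.is_equipped = False
        self.items = []

    def equip(self):
        self.equipped_item = self.items.pop()
        self.is_equipped = True

    def use_item(self):
        # One code path checks the flag...
        if not self.is_equipped:
            raise ValueError("nothing equipped")

    def unequip(self):
        # ...but this path checks the object, which is never cleared,
        # so unequipping repeatedly duplicates the item.
        if self.equipped_item is not None:
            self.items.append(self.equipped_item)
            self.is_equipped = False  # bug: equipped_item stays set!
```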
Once that was done, the rest was easy: make a game that's too difficult to reasonably survive, and put a flag in the store that's worth a lot of gold. The only reasonable way to get the flag is to duplicate an item a bunch, then sell it to buy the flag.
I think I got the most positive feedback on this challenge; people seem to enjoy game hacking!
vhash + vhash-fixed 
It all dates back to a conversation I had with @joswr1ght about a SANS Holiday Hack Challenge level I was designing. I suggested using a hash-extension vulnerability, and he said we can't, because of hash_extender, recklessly written by yours truly, ruining hash extension vulnerabilities forever!
I found that funny, and mentioned it to @bmenrigh. We decided to make our own novel hashing algorithm that's vulnerable to an extension attack. We decided to make it extra hard by not giving out source! Players would have to reverse engineer the algorithm in order to implement the extension attack. PERFECT! Nobody knows as well as me how difficult it can be to create a new hash extension attack. :)
Now, here is where it gets a bit fun. I agreed to write the front-end if he wrote the back-end. The front-end was almost exactly easyauth, except the cookie was signed. We decided to use an md5sum-like interface, which was a bit awkward in PHP, but that was fine. I wrote and tested everything with md5sum, and then awaited the vhash binary.
When he sent it, I assumed vhash was a drop-in replacement without thinking too much about it. I updated the hash binary, and could log in just fine, and that was it.
When the challenge came out, the first solve happened in only a couple minutes. That doesn't seem possible! I managed to get in touch with the solver, and he said that he just changed the cookie and ignored the hash. Oh no! Our only big mess-up!
After investigation, we discovered that the agreed md5sum-like interface meant, to @bmenrigh, that the data would come on stdin, and to me it meant that the file would be passed as a parameter. So, we were hashing the empty string every time. Oops!
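The failure mode is easy to see with stdlib hashing: if the signer reads stdin but the caller passes the data elsewhere, the signer hashes the empty string every time, so one constant "signature" validates every cookie. A sketch of the mismatch (md5 stands in for the vhash algorithm here):

```python
import hashlib

def sign_via_stdin(stdin_bytes):
    # What the vhash binary expected: the data arrives on stdin.
    return hashlib.md5(stdin_bytes).hexdigest()

# The front-end passed the data as a file argument instead, so the
# binary saw an empty stdin for every request, and every cookie got
# the same constant signature:
constant_sig = sign_via_stdin(b"")
```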
Luckily, we found it, fixed it, and rolled out an updated version shortly after. The original challenge became an easy 450-pointer for anybody who bothered to try, and the real challenge was only solved by a few, as intended.
dnscap is simply a packet capture from dnscat2, running in unencrypted mode, over a laggy connection (coincidentally, I'm writing this writeup at the same bar where I wrote the original challenge!). In dnscat2, I sent a .png file that contains the dnscat2 logo, as well as the flag. Product placement anyone?
I assumed it would be fairly difficult to disentangle the packets going through, which is why we gave it a high point-value. Ultimately, it was easier than we'd expected; people were able to solve it fairly quickly.
And finally, my old friend nibbler.
At some point in the past few months, I had the realization: nibbles (the snake game for QBasic where I learned to program) sounds like nibble (a 4-bit value). That gave me the idea to build a nibbles clone with a vulnerability you'd exploit by collecting the 'fruit' at the right times.
I originally stored the scores in an array, and each 'fruit' would be worth anywhere between 0x00 and 0xFF points. You'd have to overflow the stack and build an exploit by gathering fruit with the snake. You'll notice that the name I ask for at the start uses read() - that's so it can contain NUL bytes, letting you build a ROP chain in your name.
I realized that picking values between 00 and FF would take FOREVER, and wanted to get back to the original idea: nibbles! But I couldn't think of a way to make it realistic while only collecting 4-bit values.
Eventually, I decided to drop the premise of performing an exploit, and instead, just let the user write shellcode that is run directly. As a result, it went from a pwn to a programming challenge, but I didn't re-categorize it, largely because we don't have programming challenges.
It ended up being difficult, but solveable! One of my favourite writeups is here; I HIGHLY recommend reading it. My favourite part is that he named the snakes and drew some damn sexy images!
I just want to give a shout out to the poor soul, who I won't name here, who solved this level BY HAND but didn't cat the flag file fast enough. We shouldn't have had the 10-second timeout, but we did. As a result, he didn't get the flag. I'm so sorry. :(
Fun fact: @bmenrigh was so confident that this level was impossible to solve that he made me a large bet that fewer than 2 people would solve it. Because we had 9 solvers, I won a lot of alcohol! :)
Hopefully you enjoyed hearing a little about the BSidesSF CTF challenges I wrote! I really enjoyed writing them, and then seeing people working on solving them!
On some of the challenges, I tried to teach something (or have a teachable lesson, something I can use when I teach). On some, I tried to make something pretty difficult. On some, I fell somewhere in between. But there's one thing they have in common: I tried to make my own challenges as easy as possible to test and validate. :)
On February 17, 2017, Horizon Blue Cross Blue Shield of New Jersey (“Horizon”) agreed to pay $1.1 million as part of a settlement with the New Jersey Division of Consumer Affairs (the “Division”) regarding allegations that Horizon did not adequately protect the privacy of nearly 690,000 policyholders.
The settlement stemmed from the theft of two laptops from Horizon headquarters in November 2013, when personnel from outside vendors performing renovations and moving services at Horizon’s Newark headquarters had unsupervised access to the area where company laptops were stored. The stolen laptops contained policyholder electronic Protected Health Information (“ePHI”), including names, addresses, birth dates, insurance identifications and, in some cases, Social Security numbers and clinical data. The policyholder data was password protected but not encrypted, in violation of HIPAA and HITECH.
An investigation by the Division found that more than 100 company-owned laptops assigned to Horizon employees were not encrypted, in violation of HIPAA and HITECH, as well as a company policy requiring company-issued laptops to contain encryption software. The Division found that most of these unencrypted laptops were obtained outside Horizon’s normal procurement process, and therefore the IT department failed to adequately monitor, service or install security software required by company policy on those laptops. The Division further found that the stolen laptops were issued to employees who were not required to store ePHI on their laptops, in violation of another company policy restricting ePHI access to employees with a “need to know.” The relevant company policies were instituted after an unrelated 2008 laptop theft from an employee’s car.
Under the terms of the settlement, in addition to the $1.1 million monetary settlement, which breaks down into a civil penalty, a reimbursement of the state’s attorneys’ fees and investigative costs, and promotion of consumer privacy programs, Horizon must take corrective steps to address its data security practices with respect to ePHI. In particular, Horizon must hire a third-party professional to assess security risks associated with its storage, transmission and receipt of ePHI and submit a report of those findings to the Division within 180 days of the settlement, and every year thereafter for two years. $150,000 in civil penalties are suspended pending Horizon’s compliance with the terms of the settlement.
On February 16, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement with Memorial Healthcare System (“Memorial”) that emphasized the importance of audit controls in preventing breaches of protected health information (“PHI”). The $5.5 million settlement with Memorial is the fourth enforcement action taken by OCR in 2017, and matches the largest civil monetary penalty ever imposed against a single covered entity.
In April 2012, Memorial submitted a breach report to OCR indicating that it had suffered a breach involving impermissible access to PHI by employees. Memorial supplemented that report three months later, indicating that it had discovered additional impermissible access that resulted in a total of 115,000 affected patients. The PHI involved consisted of patients’ names, dates of birth and Social Security numbers. OCR investigated Memorial and found that the entity had committed several HIPAA violations by (1) impermissibly disclosing PHI in violation of the Privacy Rule, (2) failing to implement procedures to regularly review records of information system activity such as audit logs and (3) failing to implement policies and procedures to review and modify users’ access to PHI.
The resolution agreement requires Memorial to pay $5.5 million to OCR and enter into a Corrective Action Plan that obligates Memorial to:
- conduct a risk analysis and implement a risk management plan;
- revise its policies and procedures regarding information systems activity review and access establishment, modification and termination;
- distribute the revised policies and procedures to its workforce members;
- submit a plan to OCR to internally monitor its compliance with the Corrective Action Plan;
- select and engage an independent third-party assessor to review the entity’s compliance with the Corrective Action Plan;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of three years.
In announcing the settlement with Memorial, OCR Acting Director Robinsue Frohboese stated that “organizations must implement audit controls and review audit logs regularly. As this case shows, a lack of access controls and regular review of audit logs helps hackers or malevolent insiders to cover their electronic tracks, making it difficult for covered entities and business associates to not only recover from breaches, but to prevent them before they happen.”
In connection with the Memorial settlement, OCR also linked to its recent guidance on audit trails. The guidance discusses three types of audit trails: (1) application audit trails, (2) system-level audit trails and (3) user audit trails, and encourages covered entities and business associates to “consider which audit tools may best help them with reducing non-useful information contained in audit records, as well as with extracting useful information.”
I see this was posted three months ago to YouTube, but it's new to me.
This being blogging, let's over-analyze.
The General’s password is ihatemyjob1.
Not a bad password. A passphrase is easy to remember and easy to type.
No doubt he should have capitalized the “I”. Most systems can handle spaces, which would add some length. Substituting “@” for “a” and “0” for “o” would add some complexity. If the password file is compromised, this wouldn't be enough to prevent cracking the hash, but it's good for a day-to-day logon. For accounts where a password safe can ease login, a random password would be better. But that doesn't work for every account.
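Applied to a spaced-out version of the passphrase, those tweaks look like this - a toy illustration, not a recommendation for a real password policy:

```python
def harden(passphrase):
    # Capitalize the first letter, keep the spaces for length, and
    # substitute "@" for "a" and "0" for "o" for a little complexity.
    return passphrase.capitalize().replace("a", "@").replace("o", "0")
```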
The General’s password is echoed to the screen. Typical security controls require that your password not be displayed on the screen; it should be replaced by asterisks. The General would also have been better off entering it himself rather than telling a subordinate the password. He could have temporarily turned off the computer's output to the big screen to prevent the room from seeing the password.
In pressure situations, it's easy to take actions that compromise our security. This is exactly the feeling that phishers and fraudsters try to create, so that you just act without thinking about whether what you're doing makes sense.
Yes, it’s just a funny commercial. But it can also be used as a teachable moment. Hopefully without sucking all the fun out of the commercial.
On February 15, 2017, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted two sets of formal comments to the Article 29 Working Party (the “Working Party”). CIPL commented on the Guidelines for identifying a controller or processor’s lead supervisory authority (“Lead Authority Guidelines”), and on the Guidelines on the right to data portability (“Data Portability Guidelines”). Both were adopted by the Working Party on December 13, 2016, for public consultation.
CIPL’s comments on the Lead Authority Guidelines follow a November 2016 CIPL white paper with initial input to the Working Party, and the comments on the Data Portability Guidelines represent the first CIPL intervention on this new individual right that will be introduced by the EU General Data Protection Regulation (“GDPR”).
CIPL’s comments on the Lead Authority Guidelines underline that a fully functioning cooperation mechanism among data protection authorities (“DPAs”), based on the concept of a one-stop-shop (“OSS”) and a lead DPA, is an essential prerequisite for the consistent and effective implementation of the GDPR. Additionally:
- Any guidelines on the OSS should keep the principle of harmonization as a main guiding thread.
- CIPL commends the Working Party’s Lead Authority Guidelines as generally well-balanced and pragmatic. Guidance provided at this early stage makes it possible for companies to prepare for the new legal regime.
- CIPL also approves that the Lead Authority Guidelines provide for a central role for organizations in the process of designating the lead DPA, because the controller/processor is in the best position to identify where its central administration is located, where decisions on the purposes and means of processing are taken or where its main processing activities take place.
- The Lead Authority Guidelines should be regarded only as a first step towards a fully functioning OSS; CIPL suggests that the Working Party consider the Lead Authority Guidelines as a living document and regularly update them.
CIPL’s comments on the Lead Authority Guidelines also emphasize several key issues that it believes were insufficiently addressed by the Working Party, including:
- The functioning of the OSS should be based on the identification of the lead DPA by the organization itself (the controller or the processor), subject to review by the DPA based on all relevant facts.
- The different realities of controllership within groups of undertakings should be taken into account.
- Cooperation between the lead DPA and concerned DPAs should be fully transparent and organizations should be involved in the procedure of referring a matter to the European Data Protection Board.
- Processors should fully benefit from the OSS.
- The assessment of data transfers based on due diligence, as required in the Schrems judgment of the Court of Justice of the European Union, should be primarily a task of the lead DPA.
- The identification of a lead authority carried out in the context of BCRs should play a role in identifying the main establishment and lead DPA under the GDPR.
The right to data portability is laid down in Article 20 of the GDPR as a new right of individuals. CIPL’s comments on the Data Portability Guidelines commend that the Working Party has developed practical guidance on how to implement it. CIPL’s comments must be seen in light of the double objective of the right to data portability: providing individuals with an additional tool for control over their personal data and contributing to competition and innovation, which is beneficial to individuals, businesses and society at large. The right to data portability must be implemented in a way that effectively supports both objectives.
- The data portability right should effectively provide added value to individuals, in addition to the other rights of the individuals in the GDPR. Data portability should not replace or recalibrate these other rights.
- CIPL has doubts about the added value of the data portability right with respect to employees’ data or personal data in the context of B2B activities. The data portability right should not extend to the employment context, but only be applied to a narrow subset of such data.
- An overly broad implementation of the data portability right may stifle competition and innovation and impose unnecessary burdens on organizations.
- In many instances, controllers will have to make a significant technical investment. This should not lead to disproportionate efforts, especially in areas where the right does not present added value to individuals.
- Processors may also be significantly impacted by the data portability right.
- Organizations need to have full legal certainty about the scope of application of the data portability right, as envisioned in the GDPR. Therefore, CIPL suggests clarifications to:
- The definition of data that may be subject to a data portability request, focusing on data actively provided by the data subject and recognizing that data portability cannot necessarily work for pseudonymized data.
- The responsibilities of the sending and receiving parties, limiting the responsibilities of receiving parties.
- The status of shared and third-party data.
- The requirement and feasibility of technical formats.
Finally, CIPL proposes to facilitate a roundtable with key stakeholders, which could be instrumental in reaching the right outcomes.
CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 85 individual private sector organizations.
Revision Note: V1.0 (February 21, 2017): Bulletin published.
Summary: This security update resolves vulnerabilities in Adobe Flash Player when installed on all supported editions of Windows 8.1, Windows Server 2012, Windows Server 2012 R2, Windows RT 8.1, and Windows 10.
On February 13, 2017, the Parliament of Australia passed legislation that amends the Privacy Act 1988 (the “Privacy Act”) and requires companies with revenue over $3 million AUD ($2.3 million USD) to notify affected Australian residents and the Australian Information Commissioner (the “Commissioner”) in the event of an “eligible data breach.”
The Privacy Act defines “personal information” to include “information or an opinion about an identified individual, or an individual who is reasonably identifiable (1) whether the information or opinion is true or not; and (2) whether the information or opinion is recorded in a material form or not.”
The new legislation includes a harm threshold for determining what constitutes an “eligible data breach,” which is defined as occurring when:
- (1) “there is unauthorized access to, or unauthorized disclosure of, the [personal] information” and (2) “a reasonable person would conclude that the access or disclosure would be likely to result in serious harm to any of the individuals to whom the information relates”; or
- “the information is lost in circumstances where:
- (1) unauthorized access to, or unauthorized disclosure of, the information is likely to occur; and
- (2) assuming that unauthorized access to, or unauthorized disclosure of, the information were to occur, a reasonable person would conclude that the access or disclosure would be likely to result in serious harm to any of the individuals to whom the information relates.”
The new legislation does not define “serious harm,” but an official explanatory memorandum states that serious harm could include “serious physical, psychological, emotional, economic and financial harm, as well as serious harm to reputation.” In determining whether serious harm has occurred, entities may consider the sensitivity of the information involved, the kind of person who might gain access to the information and the nature of the harm that may result from the breach.
The explanatory memorandum lists the following examples of breaches that may require notification:
- a malicious breach of the secure storage and handling of information (e.g., in a cybersecurity incident);
- accidental loss (most commonly of IT equipment or hard copy documents); and
- negligent or improper disclosure of information.
Pursuant to the new legislation, if an entity suspects that an eligible data breach has occurred, it must take “all reasonable steps to ensure” that it completes an assessment of the incident within 30 days following discovery. This is not a hard deadline, but a preferable timeframe that may be adjusted depending on the complexity of the incident. If the assessment determines that an eligible data breach has occurred, entities must notify the Commissioner and affected individuals “as soon as practicable.”
Notification to both the Commissioner and affected individuals must include:
- the identity and contact details of the entity;
- a description of the serious data breach;
- the kinds of information possibly breached; and
- recommendations about the steps that individuals should take in response to the serious data breach.
The explanatory memorandum states that an entity may notify affected individuals using the method of communication it normally uses to communicate with those individuals.
In addition, there is an exception to notification for situations where the entity takes remedial action before the access or disclosure results in serious harm. The new legislation also contains a “secrecy” provision exception, which states that where compliance with the notification requirement would be inconsistent with a provision under Australian law (other than the Privacy Act) that prohibits or regulates the use or disclosure of information, the notification requirement would “be limited to the extent of the inconsistency.”
A failure to notify that is found to be a serious or repeated interference with privacy under the Privacy Act can be penalized with a fine of up to $360,000 AUD ($274,560 USD) for individuals and $1.8 million AUD ($1.37 million USD) for organizations.
Although the effective date for the new legislation has yet to be set, the new notification requirements will come into force at the latest one year after the bill receives Royal Assent, which typically occurs seven to ten days after Parliament passes a bill.
Read about groups and types of targeted threats here: MITRE ATT&CK
- APT28_2014-10_TrendMicro Operation Pawn Storm. Using Decoys to Evade Detection
- APT28_2015-07_Digital Attack on German Parliament
- APT28_2015-10_New Adobe Flash Zero-Day Used in Pawn Storm
- APT28_2015-10_Root9_APT28_targets Financial Markets
- APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets
- APT28_2016-02_PaloAlto_Fysbis Sofacy Linux Backdoor
- APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National Committee << DNC (NOTE: this is APT29)
- APT28_2016-07_Invincea_Tunnel of Gov DNC Hack and the Russian XTunnel
- APT28_2016-10_ESET_Observing the Comings and Goings
- APT28_2016-10_ESET_Sednit A Mysterious Downloader
- APT28_2016-10_ESET_Sednit Approaching the Target
- APT28_2016-10_Sekoia_Rootkit analysis: Use case on HideDRV
- APT28_2017-02_Bitdefender_OSX_XAgent << OSX XAgent
Download sets (matching research listed above). Email me if you need the password
Download all files/folders listed (72MB)
Scott Kannry and Jason Christopher of Axio join us. In the news, Sophos acquires Invincea, the startup fundraising dictionary, five tough lessons every solopreneur needs to know, and how much is a Shark Tank appearance worth? Stay tuned!
I'm thrilled to mention that @markrussinovich and @mxatone have released Sysmon v6.
When I first discussed Sysmon v2 two years ago, it offered users seven event types.
Oh, how it's grown in the last two years, now with 19 events, plus an error event.
From Mark's RSA presentation we see the current listing with the three new v6 events highlighted.
"This release of Sysmon, a background monitor that records activity to the event log for use in security incident detection and forensics, introduces an option that displays event schema, adds an event for Sysmon configuration changes, interprets and displays registry paths in their common format, and adds named pipe create and connection events."
Mark's presentation includes his basic event recommendations so as to run Sysmon optimally.
|Basic Event Recommendations|
|Basic Event Recommendations (Cont)|
I strongly suggest you deploy using these recommendations.
A great way to get started is to use a Sysmon configuration template. Again, as Mark discussed at RSA, consider @SwiftOnSecurity's sysmonconfig-export.xml via Github. While there are a number of templates on Github, this one has "virtually every line commented and sections are marked with explanations, so it should also function as a tutorial for Sysmon and a guide to critical monitoring areas in Windows systems." Running Sysmon with it is as easy as:
sysmon.exe -accepteula -i sysmonconfig-export.xml
As a quick example of Sysmon capabilities, and why you should always run it everywhere, consider the following driver installation scenario. While this is a non-malicious scenario, one DFIR practitioners will appreciate rather than miscreants, the detection behavior resembles what would result from kernel-based malware.
I fired up WinPMEM, the kernel-mode driver for gaining access to physical memory that is included with Rekall, as follows:
|Event ID 6: Driver loaded|
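To show what that telemetry looks like once collected, here is a minimal Python sketch that parses a Sysmon Event ID 6 (Driver loaded) record exported as Windows event XML. The driver path, hash and signature values below are illustrative stand-ins, not the actual capture.

```python
import xml.etree.ElementTree as ET

# A Sysmon Event ID 6 record in the standard Windows event XML shape.
# All field values are illustrative placeholders, not a real capture.
SAMPLE_EVENT = """
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System><EventID>6</EventID></System>
  <EventData>
    <Data Name="ImageLoaded">C:\\tools\\rekall\\winpmem_64.sys</Data>
    <Data Name="Hashes">SHA1=EXAMPLE</Data>
    <Data Name="Signed">false</Data>
  </EventData>
</Event>
"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_driver_load(xml_text):
    """Return the EventID plus all EventData fields as a flat dict."""
    root = ET.fromstring(xml_text)
    fields = {d.get("Name"): d.text
              for d in root.findall("e:EventData/e:Data", NS)}
    fields["EventID"] = root.findtext("e:System/e:EventID", namespaces=NS)
    return fields

record = parse_driver_load(SAMPLE_EVENT)
print(record["EventID"], record["ImageLoaded"], record["Signed"])
```

An unsigned kernel driver load is exactly the field combination you would pivot on when hunting for kernel-based malware.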
Cheers...until next time.
David Conrad of ICANN joins us, Carrie Roberts of Black Hills InfoSec breaks all the firewalls, and we discuss the security news for the week. Stay tuned!
Paul and John review the CISO Manifesto and deliver the top 10 rules for security vendors. In the news, Nerdio partners with CensorNet, ThreatConnect reveals a new threat intelligence product suite, free cyberthreat hunter and defender tools for security analysts, and more. Stay tuned!
On February 15, 2017, the European Data Protection Supervisor (“EDPS”) published its Priorities for 2017 (the “EDPS Priorities”). The EDPS Priorities consist of a note listing the strategic priorities and a color-coded table listing the European Commission’s proposals that require the EDPS’ attention, sorted by level of priority.
New Legal Framework for the EDPS
Following the European Commission’s recent proposal to revise Regulation (EC) No 45/2001, which governs the processing of personal data by Community institutions and bodies and the free movement of such data, and which defines the EDPS’ duties and tasks, the EDPS will focus on ensuring that the data processing rules applicable to European institutions, bodies, offices and agencies are aligned with the principles of the EU General Data Protection Regulation.
Ensure the Protection of Confidentiality and Privacy in Electronic Communications
In the context of the European Commission’s ongoing review of the e-Privacy Directive 2002/58/EC, the EDPS will focus on the need to adequately translate the principle of confidentiality of electronic communications into secondary EU law.
Contribute to a Security Union and Stronger Borders Based on Respect for Fundamental Rights
The EDPS will contribute to initiatives that are likely to have implications for the protection of privacy and personal data, such as the implementation of the Security Union agenda and the Action Plan on terrorist financing.
Initiatives Related to the European Commission’s Work Programme for 2017
The EDPS also will contribute to several topics that have been identified by the Commission Work Programme as objectives for 2017, including participating in the revision of the Schengen Information System and contributing to fairer taxation of companies.
The EDPS indicated that it was consulted by the European Council on a proposal for a directive regarding contracts for the supply of digital content. In addition, the EDPS stated that it will:
- closely monitor the proposed new framework regarding adequacy;
- participate in the discussions around the proposed review of the Fourth Anti-Money Laundering Directive; and
- closely monitor the potential privacy and data protection impact of possible new trade agreements with Japan, Canada, Australia, Chile and New Zealand, and agreements in the law enforcement sector.
On January 10, 2017, the European Commission published a communication on Building a European Data Economy regarding the Digital Single Market strategy and launched a public consultation on the free flow of data and data location restrictions. The EDPS will provide its input on the consultation.
Finally, the EDPS also announced that it will publish a toolkit to assist policymakers and the co-legislator in assessing the necessity of measures that interfere with fundamental rights, including the right to data protection. Moreover, the EDPS will follow up with a background document on the principle of proportionality in EU data protection law.
On February 4, 2017, the Cyberspace Administration of China published a draft of its proposed Measures for the Security Review of Network Products and Services (the “Draft”). Under the Cybersecurity Law of China, if an operator of key information infrastructure purchases network products and services that may affect national security, a security review is required. The Draft provides further hints of how these security reviews may actually be carried out, and is open for comment until March 4, 2017.
According to the Draft, any critical network products and services used in information systems or purchased by operators of key information infrastructure that may affect national security and the public interest are subject to a network security review.
The Draft would establish a potentially significant standard that would be commonly applied in security assessments performed under the Cybersecurity Law. These security assessments would focus on verifying that products or services are “secure and controllable.” The concept of “security and controllability” has appeared before, both in the State Security Law of China and in guidelines for the banking and telecommunications sectors, but here it is being applied in the context of the new Cybersecurity Law.
It remains to be seen how this term would be interpreted in the context of the new Cybersecurity Law. The exact requirements to determine if a product or service is “secure and controllable” are still not provided in the Draft, and even after being established, may evolve over time. However, under the Draft, the process of determining whether a product or service is “secure and controllable” would take the form of a risk assessment, which would principally analyze the following risks:
- the risk that the product or service may be illegally controlled, interfered with or suspended;
- the risks arising during development, delivery and technical support of the product or service;
- the risk that the provider of the product or service may use it to illegally collect, store, process or use the personal information of its users;
- the risk that the provider of the product or service may engage in unfair competition or infringe upon the interests of users by taking advantage of their reliance on the product or service; and
- any other risks that may jeopardize national security or the public interest.
The Cyberspace Administration of China will establish a network security review commission, which will cooperate with third-party institutions to evaluate these risks.
Microsoft delays Patch Tuesday, WordPress continues to fail at failing, Valve eradicates a Steam bug, ransomware that makes you do terrible things, and more. Jason Wood of Paladin Security joins us to talk about a father and son who created access to a supercomputer via voice commands!
Lior Frenkel of Waterfall Security joins us. In the Enterprise News, CyberArk beefs up its cloud security, Kenna Security partners with Exodus, Gigamon is eliminating network blind spots, and more. Stay tuned!
Webroot has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by MRG Effitas, an AMTSO-member tester.
William Lin of Trident Capital Cybersecurity joins us. In the news, 12 KPIs you need to know before pitching your startup, VC firms back a record number of cybersecurity startups in 2016, and why should entrepreneurs think like farmers? Stay tuned!
On March 6 and 7, 2017, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP and over 100 public and private sector participants in CIPL’s GDPR Implementation Project will convene in Madrid, Spain, for CIPL’s third major GDPR implementation workshop.
The all-day workshop on March 7, titled “GDPR Implementation: Status, Key Challenges, and Understanding the Core Principles of Transparency, Consent and Legitimate Interest,” will be co-hosted by the Spanish Data Protection Authority. The agenda will focus on four key areas:
- member states’ and regulators’ ongoing activities, priorities and progress in implementing the GDPR;
- the most significant challenges industry is facing in becoming GDPR compliant by May 2018;
- how to make the GDPR’s transparency requirement user-centric and meaningful so that it enables broad, accountable and trusted data uses; and
- how to use consent, legitimate interest and other grounds for processing personal data under the GDPR in the context of the modern information ecosystem.
On March 6, CIPL will hold a special pre-workshop session with Bruno Gencarelli, Head of Unit, International Data Flows and Protection, European Commission, to discuss the communication sent from the European Commission to the European Parliament and the Council on Exchanging and Protecting Personal Data in a Globalized World. The communication set forth the European Commission’s policy priorities and proposed action items with respect to cross-border data flows, relevant transfer and governance mechanisms, and global interoperability. The purpose of the pre-workshop session is to allow stakeholders to provide specific feedback to the European Commission regarding the communication.
The event in Madrid is part of CIPL’s ongoing GDPR Implementation Project, which aims to address the need for a constructive and expert dialogue between industry, regulators and key policymakers with the following specific objectives:
- facilitating consistent interpretations of the GDPR across the EU;
- informing and advancing constructive and forward-thinking interpretations of key GDPR requirements;
- facilitating consistency in the further implementation of the GDPR by EU Member States, the European Commission and the European Data Protection Board;
- examining best practices and challenges in the implementation of key GDPR requirements;
- sharing industry experiences and views to benchmark, coordinate and streamline the implementation of new compliance measures; and
- examining how the new GDPR requirements should be interpreted and implemented to advance the European Digital Single Market strategy and data-driven innovation, while protecting individuals’ privacy and respecting the fundamental right to data protection.
This multi-year project includes stakeholder workshops, roundtables, working sessions, white papers, webinars and public consultations.
At first sight the HTML page looks like the following image.
|Figure1: Attack Vector. A simple HTML page|
|Figure2: Obfuscated First Stage|
|Figure3: Clear Text First Stage|
- blob: the raw content of "big_encoded_data" (please refer to Figure3)
- image.js: the saving name
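The first stage described above can be sketched in a few lines: the page base64-decodes the "big_encoded_data" blob and drops it to disk under its saving name, image.js. A hedged Python mock-up of that behavior, with a benign placeholder standing in for the real obfuscated second-stage payload:

```python
import base64

# Benign stand-in for the "big_encoded_data" blob carried in the HTML page
# (the real sample carries an obfuscated second-stage downloader).
big_encoded_data = base64.b64encode(b"// second-stage placeholder").decode("ascii")

def drop_first_stage(blob_b64, saving_name="image.js"):
    """Decode the base64 blob and write it out under its saving name,
    mirroring the first-stage behavior described above."""
    raw = base64.b64decode(blob_b64)
    with open(saving_name, "wb") as fh:
        fh.write(raw)
    return raw

payload = drop_first_stage(big_encoded_data)
print(payload.decode("ascii"))
```

Spotting a large base64 literal decoded and written out under an innocuous name like image.js is a reliable triage cue for this family of droppers.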
|Figure4: FileServer.js Original Entry|
|Figure5: FileServer.js constructor|
|Figure6: Stage 2 base64 decoded obfuscated downloader|
|Figure7: Second Stage Downloader|
|Figure8: NullSoft Installer|
|Figure9: Usage of Encryption Libraries|
|Figure10: Setting the running pointer|
|Figure11: Decoding Functions|
|Figure12: Ransom Request|
|Figure13: Ransom Request Web Page|
|Created Date :||2017-02-07T12:37:10Z|
|Updated Date :||2017-02-08T10:38:54Z|
- base dns: XXXXXXX.divamind.org
.?????? (6 characters)
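For hunting in DNS logs, the IoC above can be expressed as a regex: a 6-character label under the base domain. A hedged Python sketch, assuming the label is lowercase alphanumeric (the write-up only gives its length):

```python
import re

# IoC from the analysis above: <6-char label>.divamind.org
# The [a-z0-9] character class is an assumption; only the length is known.
IOC_PATTERN = re.compile(r"^[a-z0-9]{6}\.divamind\.org$")

def matches_ioc(hostname):
    """True if the hostname fits the 6-character-label IoC pattern."""
    return bool(IOC_PATTERN.match(hostname.lower()))

# Illustrative candidates, e.g. pulled from passive DNS or proxy logs.
candidates = ["abc123.divamind.org", "www.example.com", "toolong1.divamind.org"]
hits = [h for h in candidates if matches_ioc(h)]
print(hits)
```

Only the first candidate matches; anything with a label of a different length, or under another base domain, is ignored.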
Enjoy your new IoC
Paul and a dozen infosec professionals celebrate episode 500 by hosting roundtable discussions on IoT security and penetration testing. Stay tuned!
As previously published on the Data Privacy Laws blog, Pablo A. Palazzi, partner at Buenos Aires law firm Allende & Brea, provides the following report.
Earlier this month, the Argentine Data Protection Agency (“DPA”) posted the first draft of a new data protection bill (the “Draft Bill”) on its website. Argentina’s current data protection bill was enacted in December 2000. Argentina was the first Latin American country to be recognized as an adequate country by the European Union.
The Draft Bill takes into consideration several changes proposed during a 2016 public consultation. The Draft Bill is heavily based on the EU General Data Protection Regulation (“GDPR”) that will come into effect in 2018 and maintains the structure of Argentina’s current data protection bill.
The DPA will be accepting comments on the Draft Bill through February 24, 2017, using the digital platform created by the government for public participation in rule making. Comments are also accepted on paper, in either English or Spanish.
Changes introduced in the Draft Bill include:
- the elimination of the duty to register databases;
- the recognition of only individuals as data subjects, whereas the current data protection bill covers both individuals and legal entities (e.g., companies);
- the addition of several new definitions, such as biometric data and genetic data, among others;
- the introduction of new ways to determine whether an entity or certain data processing is subject to Argentine law, quite similar to the criteria found in the GDPR;
- the introduction of new legal bases, other than consent, for data processing, including processing that is in the legitimate interests of the data controller (with a test similar to the GDPR);
- an overhaul of the current rules of international transfers of personal data, including allowing Binding Corporate Rules as a legal basis for data transfers; and
- the introduction of sections on child consent (processing the personal data of children under 13 is allowed only with consent from a parent), cloud computing, data breaches, accountability, privacy by design and by default, the duty to have a data protection officer and mandatory privacy impact studies.
Credit reports, one of the main issues with the current data protection bill, have received certain amendments, such as changes to the time limit on retaining negative data, as well as the introduction of a duty to notify an individual when certain agreements are not entered into as a result of negative information in a credit report. It should be noted that, because of the elimination of protections for legal entities, the Draft Bill will not apply to the financial information of corporations.
Finally, one of the last amendments proposed in the Draft Bill is the independence of the DPA from any other governmental entity.
The DPA is expected to send the Draft Bill to the President later this year. The Draft Bill will be discussed by Congress in 2018.
On February 6, 2017, the House of Representatives suspended its rules and passed by voice vote H.R. 387, the Email Privacy Act. As we previously reported, the Email Privacy Act amends the Electronic Communications Privacy Act (“ECPA”) of 1986. In particular, the legislation would require government entities to obtain a warrant, based on probable cause, before accessing the content of any emails or electronic communications stored with third-party service providers, regardless of how long the communications have been held in electronic storage by such providers.
Similar legislation unanimously passed the House in the last Congress, but died in the Senate due to concerns over amendments to the bill. Even the legislation’s Senate sponsors—Sens. Patrick Leahy (D-VT) and Mike Lee (R-UT)—eventually withdrew the bill from consideration due to concerns that the amendments would make electronic communications even less private than they are now.
The Email Privacy Act now moves to the Senate, where it will be considered by the Senate Judiciary Committee, which is chaired by Sen. Chuck Grassley (R-IA). However, action on the legislation may be a lower priority for the Committee, and the Senate in general, because they are currently concentrating on nominations for agencies and the Supreme Court.
Android vulnerabilities are patched, your TV is watching you, iOS apps are vulnerable, the lamest crypto bug, and more. Jason Wood of Paladin Security joins us to talk about a former NSA contractor who may have stolen 75% of TAO’s elite hacking tools!
Your author is an Aikido practitioner, albeit a fledgling in practice, with so, so much to learn. While Aikido is often translated as "the way of unifying with life energy" or as "the way of harmonious spirit", I propose that the philosophies and principles inherent to Aikido have significant bearing on the practice of information security.
In addition to spending time in the dojo, there are numerous reference books specific to Aikido from which a student can learn. Among the best is Adele Westbrook and Oscar Ratti's Aikido and the Dynamic Sphere. All quotes and references that follow are drawn from this fine publication.
As an advocate for the practice of HolisticInfoSec™ (so much so, I trademarked it) the connectivity to Aikido is practically rhetorical, but allow me to provide you some pointed examples. I've tried to connect each of these in what I believe is an appropriate sequence to further your understanding, and aid you in improving your practice. Simply, one could say each of these can lead to the next.
The Practice of Aikido
"The very first requisite for defense is to know the enemy."
So often in information security, we see reference to the much abused The Art of War, wherein Sun Tzu stated "It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles." Aikido embraces this as the first requisite, but so too offers the importance of not underestimating your enemy or opponent. For information security, I liken it to this. If you are uninformed on adversary actor types and profiles, their TTPs (tools, tactics, procedures), as well as current vulnerabilities and exploits, along with more general threat intelligence, then you are already at a disadvantage before you even begin to imagine countering your opponent.
"A positive defensive strategy is further qualified as being specific, immediate, consistent, and powerful."
Upon learning more about your adversary, a rehearsed, exercised strategy for responding to their attack should be considered the second requisite for defense. To achieve this, your efforts must include:
- a clear definition and inventory of the assets you're protecting
- threat modeling of code, services, and infrastructure
- an incident response plan and SOP, and regular exercise of the IR plan
- robust security monitoring to include collection, aggregation, detection, correlation, and visualization
- ideally, a purple team approach that includes testing blue team detection and response capabilities in partnership with a red team. Any red team that follows the "you suck, we rock" approach should be removed from the building and replaced by one that espouses "we exist to identify vulnerabilities and exploits with the goal of helping the organization better mitigate and remediate".
The Process of Defense and Its Factors
"EVERY process of defense will consist of three stages: perception, evaluation-decision, and reaction."
These should be easy likenesses for you to reconcile.
Perception = detection and awareness
The better and more complete your threat intelligence collection and detection capabilities, the better your situational awareness will be, and as a result your perception of adversary behaviors will improve and become more timely.
Evaluation-decision = triage
It's inevitable...$#!+ happens. Your ability to quickly evaluate adversary actions and be decisive in your response will dictate your level of success as incident responders. Strength at this stage directly impacts the rest of the response process. Incorrect or incomplete evaluation, and the resulting ill-informed decisions, can set back your response process in a manner from which recovery will be very difficult.
Reaction = response
My Aikido sensei, after doing so, likes to remind his students "Don't get hit." :-) The analogy here is to react quickly enough to stay on your feet. Can you move quickly enough to not be hit as hard or as impactfully as your adversary intended? Your reaction and response will determine such outcomes. The connection between kinetic and virtual combat here is profound. Stand still, get hit. Feign or evade, at least avoid some, or all contact. In the digital realm, you're reducing your time to recover with this mindset.
"A defensive aikido strategy begins the moment a would-be attacker takes a step toward you or turns aggressively in your direction. His initial motion (movement) in itself contains the factors you will use to neutralize the action of attack which will spring with explosive force from that motion of convergence."
Continuing on our theme of inevitability, digital adversaries will, beyond the shadow of a doubt, take a step toward you or turn aggressively in your direction. The question for you will be, do you even know when that has occurred in light of our discussion of requisites above? Aikido is all about using your opponent's energy against them, wherein, for those of us in DFIR, our adversary's movement in itself contains the factors we use to neutralize the action of attack. As we improve our capabilities in our defensive processes (perception, evaluation-decision, and reaction), we should be able to respond in a manner that begins the very moment we identify adversarial behavior, and do so quickly enough that our actions pivot directly on our adversary's initial motion.
As an example, your adversary conducts a targeted, nuanced spear phishing campaign. Your detective means identify all intended victims, you immediately react, and add all intended victims to an enhanced watch list for continuous monitoring. The two victims who engaged the payload are quarantined immediately, and no further adversarial pivoting or escalation is identified. The environment as a whole is raised to a state of heightened awareness, and your user base becomes part of your perception network.
"It will be immediate or instantaneous when your reaction is so swift that you apply a technique of neutralization while the attack is still developing, and at the higher levels of the practice even before an attack has been fully launched."
Your threat intelligence capabilities are robust enough that your active deployment of detections for specific Indicators of Compromise (IOCs) prevented the targeted, nuanced spear phishing campaign from even reaching the intended victims. Your monitoring active lists include known adversary infrastructure such that the moment they launch an attack, you are already aware of its imminence.
You are able to neutralize your opponent before they even launch. This may be unimaginable for some, but it is achievable by certain mature organizations under specific circumstances.
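The active-list idea above can be sketched simply: every inbound event is screened against a set of known adversary indicators, so response begins the moment that infrastructure appears. A minimal Python illustration with made-up indicators:

```python
# Hedged sketch of a monitoring "active list": known adversary infrastructure
# (domains, IPs) kept as a set. All indicators below are made up for illustration.
WATCHLIST = {"evil-sender.example", "198.51.100.7"}

def screen(events, watchlist=WATCHLIST):
    """Return the events whose indicator hits the watchlist, so response
    can begin the moment adversary infrastructure is observed."""
    return [e for e in events if e["indicator"] in watchlist]

# Illustrative event stream: mail senders and netflow peers.
events = [
    {"type": "smtp", "indicator": "evil-sender.example"},
    {"type": "smtp", "indicator": "newsletter.example"},
    {"type": "netflow", "indicator": "198.51.100.7"},
]
alerts = screen(events)
print([e["indicator"] for e in alerts])
```

In production this lookup lives in your SIEM's active-list feature rather than a Python set, but the principle is the same: the match fires on the adversary's first motion, not after impact.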
The Principle of Centralization
"Centralization, therefore, means adopting a new point of reference, a new platform from which you can exercise a more objective form of control over events and over yourself."
Some organizations decentralize information security, others centralize it with absolute authority. There are arguments for both, and I do not intend to engage that debate. What I ask you to embrace is the "principle of centralization". The analogy is this: large corporations and organizations often have multiple, and even redundant, security teams. Even so, their cooperation is key to success.
- Is information exchanged openly and freely, with silos avoided?
- Are teams capable of joint response?
- Are there shared resources that all teams can draw from for consistent IOCs and case data?
- Are you and your team focused on facts, avoiding FUD, thinking creatively, yet assessing with a critical, objective eye?
Adversarial conditions, in both the physical realm, and the digital realm in which DFIR practitioners operate, are stressful, challenging, and worrisome.
Morihei Ueshiba, Aikido's founder, reminds us that "in extreme situations, the entire universe becomes our foe; at such critical times, unity of mind and technique is essential - do not let your heart waver!" That said, perfection is unlikely, perhaps even impossible; this is a practice you must exercise. Again, Ueshiba offers that "failure is the key to success; each mistake teaches us something."
Keep learning, be strong of heart. :-)
Cheers...until next time.
On February 6, 2017, the FTC announced that it has agreed to settle charges that VIZIO, Inc. (“VIZIO”), installed software on about 11 million consumer televisions to collect viewing data without consumers’ knowledge or consent. The stipulated federal court order requires VIZIO to pay $2.2 million to the FTC and New Jersey Division of Consumer Affairs.
According to the complaint, beginning in February 2014, VIZIO and an affiliated company manufactured smart televisions and installed automated content recognition software that, by default, captured second-by-second information about video displayed on the televisions, including video from consumer cable, broadband, set-top box, DVD, over-the-air broadcasts and streaming devices. VIZIO then shared this information with third parties who used the data for their own purposes, including audience measurement, analyzing advertising effectiveness and displaying targeted advertising. The complaint alleged that VIZIO also provided viewers’ IP address information to data aggregators which facilitated the appending of specific consumer demographic information to the viewing data, including sex, age, income, marital status, household size, education level, home ownership and household value. VIZIO did all of this without sufficiently informing consumers that the televisions’ settings enabled the collection of consumers’ viewing data or obtaining their informed consent.
Under the terms of the stipulated federal court order, in addition to the $2.2 million payment, VIZIO is prohibited from making misrepresentations about the privacy, security or confidentiality of consumer information it collects, and must, among other things:
- prominently disclose and obtain affirmative express consent from consumers for its data collection and sharing practices, including (1) the types of viewing data that will be collected, used and shared with third parties, (2) with whom the data will be shared and (3) the purposes of such sharing;
- delete relevant viewing data collected before March 1, 2016; and
- implement a comprehensive data privacy program and biennial assessments of that program.
Archie Agarwal of ThreatModeler joins us. In the news, how to prevent startup burnout, five IoT cybersecurity predictions for 2017, three tips to help entrepreneurs make the right sacrifices, and what exactly is your income statement telling you? Stay tuned!
Katherine Teitler of MISTI joins us, Nathaniel "Q" Quist of LogRhythm delivers a technical segment, and we cover the latest security news. Stay tuned!
On February 1, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced a $3.2 million civil monetary penalty against Children’s Medical Center of Dallas (“Children’s”) for alleged ongoing violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy and Security Rules, following two consecutive breaches of patient electronic protected health information (“ePHI”). This is the third enforcement action taken by OCR in 2017, following the respective actions taken against MAPFRE Life Insurance of Puerto Rico and Presence Health earlier in January.
According to OCR’s Notice of Final Determination, Children’s experienced two breaches of patient ePHI over a three-year span. Both breaches involved the loss or theft of unencrypted devices containing patient ePHI. Following the 2010 breach, OCR commenced an investigation of Children’s compliance with the HIPAA Privacy, Security and Breach Notification Rules. OCR’s investigation determined that Children’s was put on notice of its security vulnerabilities—particularly the threats posed by unencrypted laptops and mobile devices—prior to both breaches. OCR found Children’s to be noncompliant with HIPAA due to Children’s (1) “failure to implement risk management plans, contrary to prior external recommendations to do so” and (2) “failure to deploy encryption or an equivalent alternative measure on all of its laptops, work stations, mobile devices and removable storage media until April 9, 2013.”
According to its Notice of Final Determination, OCR considered the following “aggravating factors” in reaching its $3.2 million civil monetary penalty:
- The amount of time that Children’s continued to use unencrypted devices even after it had actual knowledge that encryption was necessary to ensure the security of ePHI. OCR alleged that Children’s was put on notice as early as 2008 that it was at a “high risk” of loss of ePHI through the loss or theft of an unsecured device, and that encryption of its devices was “necessary and appropriate.”
- Children’s prior history of noncompliance with the HIPAA Privacy and Security Rules. OCR underscored the fact that both the 2010 and 2013 data breaches involved noncompliance with the same or similar provisions of the HIPAA Privacy and Security Rules. OCR also cited additional incidents involving Children’s loss of devices containing unsecured ePHI, which took place prior to the implementation of the HIPAA Breach Notification Rule.
In announcing the penalty against Children’s, OCR Acting Director Robinsue Frohboese warned that, “although OCR prefers to settle cases and assist entities in implementing corrective action plans, a lack of risk management not only costs individuals the security of their data, but it can also cost covered entities a sizable fine.”
Matt Alderman of Tenable joins us. In the Enterprise News, Distil Networks wants to leverage device fingerprints, Exabeam reveals its latest security intelligence program, HPE acquires Niara, and more. Stay tuned!
On February 1, 2017, Matt Hancock, the UK Government Minister responsible for data protection, was questioned by the House of Lords committee on the UK’s implementation plan of the EU General Data Protection Regulation (“GDPR”) in the context of the UK’s looming exit from the EU. In responding to the questioning, Hancock revealed further details into the UK Government’s position on implementing the GDPR into UK law.
Importantly, Hancock (1) confirmed the UK Government’s continued support for the GDPR, in part because the UK position was sufficiently taken into consideration during negotiation of the final GDPR text, and (2) reaffirmed that the GDPR will come into effect in the UK on May 25, 2018, notwithstanding Brexit. He also clarified that some of the provisions of the existing UK Data Protection Act will need to be repealed before the application of the GDPR in May 2018 to ensure that there is no duplication or contradiction between the remaining provisions of the UK Data Protection Act and the GDPR.
In the specific context of Brexit, Hancock noted that he did not foresee any significant changes being made to UK data protection law once the UK formally withdraws from the EU. In particular, he noted that the UK would seek to implement the GDPR in full, so as to provide the UK with the greatest possible chance of securing the free flow of data between the UK and the EU post-Brexit. When asked whether the UK Government intends to seek a declaration of adequacy from the European Commission, which would permit the transfer of EU personal data to the UK without interruption, Hancock declined to provide any specific details about the arrangements that the Government would seek to put in place, other than that the UK wishes to secure unhindered and uninterrupted data flows between the UK and EU post-Brexit. Hancock also confirmed that the UK Government will seek to ensure the uninterrupted flow of data between the UK and the U.S. post-Brexit, and thus it is expected that a data transfer framework between the UK and the U.S. will be developed in furtherance of that goal.
On February 2, 2017, the UK government published a white paper entitled The United Kingdom’s exit from and new partnership with the European Union (the “white paper”). The white paper strikes a conciliatory tone, making it clear that the UK intends to maintain close ties with the European Union and its 27 remaining Member States after Brexit. A large portion of the white paper is devoted to discussing the issues at the heart of the 2016 Brexit referendum, such as immigration controls, continuing trade with the EU and the protection of individuals’ rights conferred under EU law. Among the rights addressed is the free flow of personal data between the UK and the EU.
The white paper emphasizes that the UK will “seek to maintain the stability of data transfer between EU Member States and the UK” and notes that “the European Commission is able to recognize data protection standards in third countries as being essentially equivalent to those in the EU, meaning that EU companies are able to transfer data to those countries freely.” While the white paper does not explicitly state that the UK will seek an adequacy determination, it appears from reading between the lines that this is the UK’s goal as it exits the EU.
It is unlikely to be a seamless effort for the UK to secure a finding of adequacy for data protection, because of the UK's recent adoption of the Investigatory Powers Act. That law strengthens the hand of national security agencies regarding surveillance in ways that the EU has historically found unpalatable. A December 2016 judgment by the Court of Justice of the European Union regarding the UK's Data Retention and Investigatory Powers Act 2014, whose similar provisions have since been replaced by the Investigatory Powers Act, found that the law was disproportionate and contravened individuals' rights to privacy and data protection.
This is Part 3 of the series on implementing IoT securely in your company; click here for Part 1 and here for Part 2. As new IoT devices are quite commonly ordered and maintained by the relevant business department rather than by the IT department, it is important to have a policy in place.
Such a policy is especially important here, as most non-IT departments don't think about IT security or about maintaining a system. They are used to buying a device and expecting it to run for years, often even longer, without much attention. We in IT, on the other hand, know that buying is the easy part; maintaining is the hard one.
Extend existing security policies
Most companies won't need to start from scratch, as they most likely have policies for common topics like passwords, patching and monitoring. The problem is the scope of those policies, and that you're currently able to technically enforce many of them:
- Most passwords are typically maintained by an identity management system, so the password policy is enforced for the whole company. Service/admin passwords are typically configured and used by members of the IT department. For IoT devices that may not be true, as the devices are managed by the department that uses them, and technically enforcing the policy may not be possible.
- Patching of software is typically done centrally by the IT department, be it by the client or the server team. But who is responsible for updating the IoT devices? Who monitors that updates are really applied, and how? What happens if a department does not update its devices? What happens if a vendor stops providing security updates for a given device?
- Services provided centrally by the IT department are generally monitored by the IT department. Is the IT department also responsible for monitoring the IoT devices? Who is responsible for looking into problems when they arise?
You should look at all of this and write it down as a policy that the other departments accept before IoT devices are deployed. In the beginning they will say, "Yes, sure, we'll update the devices regularly and replace them before the vendor stops providing security updates," and then, some years later, they often can't remember agreeing to it.
Typical IoT device problems
Besides extending the policies to cover IoT devices, it's also important to check whether they fit the IoT space and cover its typical problems. I'll list some here that I've seen done wrong in the past. Sure, some of them also apply to normal IT servers/services, but those are perhaps considered so basic that everyone just gets them right, so they may not be covered by your policy.
- No Update is possible
Yes, there are devices out in the wild that can’t be updated. What does your policy say?
- Default Logins
Many IoT devices come with a default login, and as the management of the devices is done via a central (cloud) management system, it is often forgotten that the devices may also have a local administration interface. What does your policy say?
- Recover from IoT device loss
Let's assume that an attacker is able to get into one IoT device, or that an IoT device gets stolen. Is the same password used on the server? Do all devices use the same password? Will the IT department be informed at all? What does your policy say?
- Naming and organizing things
For IT devices it's clear that we use the DNS structure, which works for servers, switches and PCs. Make sure the same is used for IoT devices. What does your policy say?
- Replacing IoT devices
Think about more than 100 IoT devices running for four years; now some break down, and the devices are end-of-sale. Can you connect new models to the old ones? Does someone keep spare parts? What does your policy say?
- Self signed certificates
If the system/device uses TLS (e.g., HTTPS), it needs to be able to use certificates from your internal PKI. Self-signed certificates are basically the same as unencrypted traffic. What does your policy say?
- Disable unused services
IoT devices often enable all services by default; I once had a device that provided FTP and Telnet servers, although only HTTP was ever used for administration. What does your policy say?
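Two of the checks above, self-signed certificates and leftover services, are easy to spot-check yourself. The sketch below is a rough illustration, not a scanner: the issuer-equals-subject test is only a heuristic (it can misfire on unusual CA setups), and the field values in the example are hypothetical.

```python
import socket

def is_self_signed(cert_fields):
    """Heuristic: a certificate whose issuer equals its subject is
    self-signed, and should be treated like unencrypted traffic."""
    return cert_fields.get("issuer") == cert_fields.get("subject")

def port_open(host, port, timeout=2.0):
    """Check whether a service (e.g. Telnet on 23, FTP on 21) answers,
    so you can verify that supposedly disabled services really are off."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: flag a device certificate that was never replaced with one
# from the internal PKI (field values are hypothetical).
print(is_self_signed({"issuer": "CN=device-01", "subject": "CN=device-01"}))  # True
```

Even crude checks like these, run against every new device before it goes into production, catch the cases where the vendor's defaults quietly contradict your policy.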
I hope this article series helps you implement IoT devices somewhat securely.
Getting back to my normal territory here-
In case you missed it, in late December Troy Hunt posted 10 Ways for a conference to Upset their speakers on his blog. I mostly agree with Troy’s list and it adds to my series of rants about conferences from last fall. It’s worth a read if you are interested in conferences and speaking and you haven’t already read it.
Cancer sucks. The number of people who are touched by cancer is terrifying, it is rare to find someone who hasn’t had friends or family attacked by cancer if they’ve avoided it themselves. Sometimes, as with my bladder cancer, it’s not that bad- for me I get a rather uncomfortable exam regularly, and sometimes get a small tumor or two removed, no big deal. That makes me lucky, few who face cancer get to shrug it off as a mere annoyance.
Since I’ve recently learned a lot more about ovarian cancer than I ever expected to know, I’d like to share a few things with everyone. Remember, I’m not a medical professional, these are my observations and ideas formed over the two and a half years of my late wife’s struggle with clear cell ovarian cancer.
First, routine tests and doctor visits are unlikely to detect it early.
Second, it’s insidious- many women develop ovarian cancer around the time of menopause, and many of the symptoms of the cancer are also expected conditions that accompany menopause.
There is a blood test which looks for a marker, CA 125, which may help detect ovarian cancer but the test is far from perfect. Many people have suggested it should be a regular test, others think it may lead to a false sense of security. Gilda Radner talked about the test in her autobiography before we lost her to ovarian cancer. Here’s my take- and keep in mind that I’m not a doctor of anything and this isn’t medical advice- I think that CA 125 screening and the symptoms of ovarian cancer are things women should be aware of. I think that routine CA 125 screening probably makes sense for women with a family history of cancer, maybe for a broader population- but only if the test is considered a weak indicator, and is done as part of comprehensive medical care (a low reading does not mean there’s no cancer). If you have a healthy relationship with your doctor it should be part of a conversation, as with most tests. I don’t think much about my prostate, but I do think about symptoms of prostate problems every time my doctor sends me off for a PSA test. Awareness of symptoms, thinking about them honestly, and having real conversations with your doctors is key to minimizing Bad Things.
Note: I was going to prefix this with a note saying this is another personal post with nothing to do with InfoSec, then I realized I'm talking about using weak indicators as a component in a comprehensive detection plan, and that sounds pretty familiar.
I don’t want to watch any more people die of cancer, and neither do you. But we will, so let’s try to spread the word and minimize the suffering.
Finally, I am not a doctor, psychologist, or anyone else who can provide real help- but if you or a loved one are facing ovarian cancer and want someone to talk to, yell at, or commiserate with- reach out to me. There’s email info in the upper right corner of the page.