Monthly Archives: August 2017

Main Takeaways for CIOs from the Global C-Suite Study

Technological advances are transforming the way we connect, disrupting the status quo and creating huge turbulence. Industries are converging, and new opportunities and threats are emerging, as never before.

The pace of change is top of mind for CIOs. We live in an age where technology is nearly obsolete by the time it has been implemented and deployed. Gone are the days of 5-year and 7-year technology deployment plans; instead, CIOs must oversee a near-continuous digital transformation of their enterprises. Add to that the critical nature of today's technology infrastructure (can your business run without computers, networks, or the Internet?) and you get a good sense of the level of stress CIOs are facing today.

In 2016, IBM's Institute for Business Value (IBV) sought to explore the CIO's perspective as part of a wider study focusing on the C-Suite. For the CIO angle, the IBV study interviewed 1,805 CIOs from around the world. The study sought to answer what the CIOs at the most successful enterprises do differently than their peers. It found a small but distinctive group, representing about 4% of CIOs. Compared to the rest of the pack, this group, termed the Torchbearers, stood out for "creating intelligent, agile cultures; wising up to the needs of customers; and rewiring the way their organizations reason." At the other extreme stood a large chunk of respondents (35%), termed Market-Followers, marked by their lower market profile and less financially successful organizations.

When it comes to the factors that worry CIOs, 77% are worried about “the disruptive influence of new technologies” and the inability to see the next competitor in time to be able to react to them, a concern echoed by the rest of the C-Suite. Which new technologies did CIOs expect to have the most impact? They pointed to mobile solutions (71%), cloud computing (66%), and the Internet of Things (61%).

The Torchbearer Secret to Success?

No business can remain relevant by making ‘tweaks.’ The only way to stay ahead of disruptive change is to embrace it, which means being able to develop and release new products and services within weeks or even days.
- IBM IBV 2016 Global C-suite Study, The CIO Point of View

CIOs know that to be able to thrive — or just survive — in an era of converging industries, global competition, and high-speed innovation, they need to move towards technology investments that provide their organizations with insight and foresight, instead of a rear-view mirror vision of progress and capabilities. Seventy-one percent of Torchbearer CIOs consider the “strategic implications of new technologies,” looking to save costs but also add to the bottom line by stimulating innovation. But these CIOs also know that a traditional implementation model won’t cut it, which is why 90% of Torchbearer CIOs support agile innovation, compared to just 36% of Market-Follower CIOs.

CIOs today know they must continue to watch operating costs — in many cases do more with less — yet also provide great service quality, minimal downtime, increased agility, while also ensuring the security of the organization's data. These are tall orders, and CIOs know that they do not have the in-house capability to deliver on all of them simultaneously.

Torchbearer CIOs are more likely to form partnerships to reap the full benefits of technological improvements. They realize the benefits of collaborating with others, leveraging partners' systems and capabilities to provide both the level and the range of services the organization needs to compete today and to remain competitive tomorrow. Yet all these systems and data are likely to use different operating platforms, and thus need to be integrated.

Takeaways for CIOs

To provide this agility, however, CIOs need to rethink how they plan for and use technology to meet the ever-changing needs of the organization. Unless they have the luxury of time and the ability to manually integrate disparate systems, CIOs need help to improve the way they plan and manage the strategy around the automation and integration of IT infrastructure. This is where partnering with world-class enterprise service providers comes in. For example, in May 2017, the Everest Group named IBM the Leader in IT Infrastructure Automation. They also pointed to IBM's recent successes in leveraging cognitive computing to improve the way IT services are planned for, implemented, and delivered.

The most successful CIOs fully appreciate the need to forge alliances with the rest of the C-Suite, and they never lose focus on the value that they bring to all aspects of the business, from IT as critical business infrastructure, to maintaining a watchful eye over data, but also to investing in tools and technologies that will extract business intelligence out of the mountains of data. When it comes to competing and thriving in the global marketplace, Torchbearer CIOs have a strong focus on continuous technology improvements to not only drive efficiencies (e.g. savings achieved by leveraging cloud solutions), but also to provide insight and foresight, which requires leveraging technologies like cloud computing and cognitive computing (e.g., IBM Watson).
Like the rest of the C-Suite, CIOs know the pressure to provide better analytics. However, such analytics aren't just limited to sales and marketing trends and results. Even IT can benefit from better insights into how current technology is or isn't enabling the business to be more competitive. The question is, how are CIOs going to implement this agility, this capability to continuously adapt to change, and drive better performance and (technology) investment decisions? CIOs should look for an integration and automation partner that supports multiple platforms and ecosystems, supports automation, and can provide the invaluable analytics needed to monitor service levels and drive improvements.
This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.com

IoT privacy: 30 ways to build a security culture

Much work still must be done before the industrial and municipal Internet of Things (IoT) becomes widely adopted outside of the circle of innovators. One such field is privacy: well understood by the public and private sectors in the context of the cloud, PCs, and mobile, it is still in the early stages of adaptation for the IoT.

The sheer volume of data that will be collected and the new, more granular architecture of the IoT present new privacy concerns that need to be resolved on a scale equal to the platform's forecasted growth.

A demonstration of this new aspect of privacy and compliance is the Privacy Guidelines for Internet of Things: Cheat Sheet, Technical Report (pdf) by Charith Perera, a researcher at Newcastle University in the U.K. The nine-page report details 30 points about implementing strong privacy protections. This report is summarized below.


VirusTotal gets a new hairdo

Being geeks in a world of executable disassemblies, shell scripts, memory dumps and other beautiful matrix-like interfaces, it is no secret that at VirusTotal we have never been great artists. That said, many of you may have noticed that we have taken some time to refresh our public web site. Design is a matter of taste, and so we acknowledge that while some will love it, others won't. However, we think all of our users will be excited about some technical improvements that come along with this refresh, and so we wanted to be sure to call those out.

First of all, we dived into this redesign exercise in order to take advantage of new front-end architecture concepts such as web components. By making use of Polymer, we intend to create basic building blocks that will allow us to operate in a more agile fashion going forward, hopefully making it easier to create new features that you may all enjoy.

Under the hood we have placed a front-end cache layer that allows us, under certain circumstances, to load file and URL reports as if the data was stored locally on your machine, instantaneously. For instance, if you take a look at reports that contain lists of files or URLs, e.g.
https://www.virustotal.com/#/domain/drive.google.com
you may click on several files in the Downloaded files section and you will notice that after a first template load, subsequent file reports load immediately; the file objects appearing on lists are now locally cached via your browser's local storage. As you dive into multiple threat reports you may also notice lighter transitions, thanks to this revamped site being mostly a single-page application.

We have also acknowledged the fact that analysts and researchers like to see as much information as possible about a threat condensed into as little space as possible. This is why we have reduced unnecessary padding, removed merely decorative icons, compacted detections into two columns, etc. It is also the reason behind introducing file type icons, so that we can communicate as many details as possible at a glance:


https://www.virustotal.com/#/file/072afa99675836085893631264a75e2cffd89af568138678aa92ae241bad3553/detection
https://www.virustotal.com/#/file/82d763c76918d161faaca7dd06fe28bd3ececfdb93eced12d855448c1834a149/detection
We would like to thank our friends over at Freepik and Flaticon for designing such a rich set of icons for us.

Ease of data communication and comprehension also explains why certain new sections grouping details of the same nature have appeared, e.g. the file history section:


This section ties together all the date related information that we have about a file, including submission dates to VirusTotal, date metadata shared by partners such as Sysinternals' tool suite, file signature dates, modification date metadata contained in certain file formats such as ZIP bundles, etc. Many of these details were formerly spread over different sections that made it difficult to get a clear picture of a file under study.

We have also taken a shot at some usability improvements. You will notice that we now have an omnibar that allows you to search or submit files from any page within VirusTotal: no matter whether you are on a file, domain, IP address, or URL report, you can use the top bar to continue your investigations. Similarly, you can always drag and drop a file into any view in order to trigger a file scan. By the way, we now accept files up to 256MB in size, leaving behind the former 128MB limitation.

Usability is also the reason why file and URL reports now include a floating action button that allows users with privileged accounts to act on the file in VirusTotal Intelligence, for example, by launching a similar file search in order to pinpoint other variants of your interest.


Finally, we also wanted to spend some time making sure that certain technical features would be understood by non-technical audiences. This is why, when you now hover over the headings or subheadings of the different detail sections, you get descriptive tooltips:



Better descriptions and inline testing forms can also be found in our new API documentation and help center.

As you can see, what looked like a merely subtle aesthetic change hides a number of easily overlooked functionality improvements that we hope will make your research smoother. We feel very excited about the transition to web components, as this will allow us to reuse basic building blocks and will speed up future coding efforts. There is still a lot of work to do, as we have not fully rewritten the entire site: group and consumption sites, and private views such as Intelligence, are now entering our redesign kitchen. As usual, we would love to read your suggestions and ideas so that new iterations match your expectations, so please share your feedback.

P.S. You may have noticed that our logo has morphed from a sigma into a sigma-flag symbiosis; there is a nice little story to it. The sigma represented the aggregation of detection technologies, and in the security field we often use the term flag in order to detect or mark a file as suspicious, hence, the new logo represents both the aggregation and flagging in one unique visual component.

Introducing Behavioral Information Security

I recently had the privilege of attending BJ Fogg's Behavior Design Boot Camp. For those unfamiliar with Fogg's work, he started out doing research on Persuasive Technology back in the 90s, which has become the basis for most modern uses of technology to influence people (for example, use of Facebook user data to influence the 2016 US Presidential Election). The focus of the boot camp was "behavior design," which was suggested to me by a friend who's a leading expert in modern, progressive security awareness program management.

Thinking about how best to apply this new-found knowledge, I've been mulling opportunities for application of Fogg models and methods. Suddenly, it occurred to me, "Hey, you know what we really need is a new sub-field that combines all aspects of security behavior design, such as security awareness, anti-phishing, social engineering, and even UEBA." I concluded that maybe this sub-field would be called something like "behavioral security" and started doing searches on the topic.

Well, lo and behold, it already exists! There is already a well-established sub-field within information security (infosec) known as "Behavioral Information Security." Most of the literature I've found (and there's a lot in academia) has popped up over the past 5 years or so. However, I did find a reference to "behavioral security" dating back to May 2004 (see "Behavioral network security: Is it right for your company?").

Going forward, I believe that organizations and standards should stop listing "security awareness" as a single line item requirement, and instead pivot to the expanding domain of "behavioral infosec." NIST CSF would be a great place to start (though I'm assuming it's too late for the v1.1 release, expected sometime soon). Nonetheless, I will be using this phrasing and description going forward.

The inevitable question you might have is, "How do you define the domain/sub-field of Behavioral Information Security?" To me, the answer is quite simple: any practice or capability that monitors or seeks to modify human behavior to reduce risk or improve security falls under behavioral infosec. These practice areas include modern, progressive security education, training, and awareness programs (programs well beyond posters and blind anti-phishing, including developer education tied to appsec testing data); progressive anti-phishing programs (that is, those that baseline and then measure impact); all forms of social engineering (red team testing, blue team testing, etc.); and user behavior monitoring through tools like UEBA (User and Entity Behavior Analytics).

Organizations should stand up Behavioral InfoSec Engineering programs and teams charged with these practice areas (certainly security awareness and the various testing, measuring, and reporting practices). Personnel should be suitably trained, not just in analytical areas but also in technical areas, in order to best develop technical content and practices designed to impact human behavior.

Lastly, why human behavior as a focus? Because reports like the VzB DBIR show, year after year, that one wrong click by a human can break an entire security chain. Thus, we need to help people make better decisions. This notion is also very DevOps-friendly thinking. We should not want to see large security programs built and maintained within organizations, but rather must work to thoroughly embed as many security practices and decisions as possible within non-security teams in order to improve security overall (this is something emphasized in DevSecOps programs). Security resources will never scale sufficiently on their own, which means we have to scale in other ways.

As an added bonus, to see the power of behavior design, I strongly recommend trying out BJ Fogg's "Tiny Habits" program, which is freely available here: http://tinyhabits.com/

cheers and good luck!

Confessions of an InfoSec Burnout

Soul-crushing failure.

If asked, that is how I would describe the last 10 years of my career, since leaving AOL.

I made one mistake, one bad decision, and it's completely and thoroughly derailed my entire career. Worse, it's unclear if there's any path to recovery as failure piles on failure piles on failure.

The Ground I've Trod

To understand my current state of career decrepitude, as well as how I've seemingly become an industry pariah...

I have worked for 11 different organizations over the past 10 years. I left AOL in September 2007, right before a layoff (I should have waited for the layoff and gotten a package!). I had been there for more than 3.5 years and I was miserable. It was a misery of my own making in many ways. My team manager had moved up the ranks, leaving an opening. All my teammates encouraged me to throw my hat in the ring, but I demurred, telling myself I simply wasn't ready to manage. Oops. Instead, our new manager came through an internal process, and immediately made life un-fun. I left a couple months later.

When I left AOL, it was to take a regional leadership role in BT-INS (BT Global Services - they bought International Network Services to build-out their US tech consulting). A month into the role as security lead for the Mid-Atlantic, where I was billable on day 1, the managing director left and a re-org merged us in with a different region where there was already a security lead. 2 of 3 sales reps left and the remaining person was unable and unwilling to sell security. I sat on the bench for a long time, traveling as needed. An idle, bored Ben is a bad thing.

From BT I took a leadership role with this weird tech company in Phoenix. There was no budget and no staff, but I was promised great things. They let me start remote for a couple months before relocating. I knew it was a bad fit and not a good company before we made the move. I could feel it in my gut. But, I uprooted the family in the middle of the school year (my wife is an elementary teacher) and went to Phoenix, ignoring my gut. 6 months later they eliminated the position. The fact is that they'd hired a new General Counsel who also claimed a security background (he had a CISSP), and thus they made him the CISO. The year was 2009, the economy was in tatters after the real estate bubble had burst. We were stranded in a dead economy and had no place to go.

Thankfully, after a month of searching, someone threw me a life-line and I promptly started a consulting gig with Foreground Security. Well, that was a complete disaster and debacle. We moved back to Northern Virginia and my daughter immediately got sick and ended up in the hospital (she'd hardly had a sniffle before!). By the time she got out of the hospital I was sicker than I'd ever been before. The doctors had me on a couple different antibiotics and I could hardly get out of bed. This entire time the president of the company would call and scream at me every day. Literally, yelling at the top of his lungs over the phone. Hands-down the most unprofessional experience I'd had. The company partnership subsequently fell apart and I was kacked in the process. I remember it clearly to this day: I'm at my parents house in NW MN over the winter holidays and the phone rings. It's the company president, who starts out by telling me they'd finally had the kid they were expecting. And, they're letting me go. Yup, that's how the conversation went ("We had a baby. You're termed.").

Really, being out of Foreground was a relief given how awful it had been. Luckily they relocated us no strings attached, so I didn't owe anything. But, I once again was out of a job for the second time in 3 months. I'd had 3 employers in 2009 and ended the year unemployed.

In early 2010 I was able to land a contract gig, thinking I'd try a solo practice. It didn't work out. The client site was in Utah, but they didn't want to pay for a ton of travel, so I tried working remotely, but people refused to answer the phone or emails, meaning I couldn't do the work they wanted. The whole situation was a mess.

Finally, I connected with Peter Hesse at Gemini Security Solutions to do a contract-to-hire tryout. His firm was small, but had a nice contract with a large client that helped underpin his business. He brought me in to do a mix of consulting and biz dev, but after a year+ of trying to bring in new opportunities (and have them shot down internally for various reasons), I realized that I wasn't going to be able to make a difference there. Plus, being reminded almost daily that I was an expensive resource didn't help. I worked my butt off but in the end it was unappreciated, so I left for LockPath.

The co-founders of LockPath had found me when I was in Phoenix thanks to a paper I'd written on PCI for some random website. They came out to visit me and told me what they were up to. I kept in touch with them over the years, including through their launch of Keylight 1.0 on 10/10/10. I somewhat forced my way into a role with them, initially to build a pro svcs team, but that got scrapped almost immediately and I ended up more in a traveling role, presenting at conferences to help get the name out there, as well as doing customer training. After a year-and-a-half of doing this, they hired a full-time training coordinator who immediately threw me under the bus (it was a major wtf moment). They wanted to consolidate resources at HQ and moving to Kansas wasn't in the cards, so seeing the writing on the wall I started a job search. Things came to an end in mid-May while I was on the road for them. I remember it clearly, having dropped my then-3yo daughter with the in-laws the night before, I had just gotten into my hotel room in St. Paul, MN, ahead of Secure360 and the phone rang. I was told it was over, but he was going to think about it overnight. I asked "Am I still representing the company when I speak at the conference tomorrow?" and got no real answer, but was promised one first thing the next morning. That call never came, so I spoke to a full room the next morning and worked the booth all that day and the morning after that. I met my in-laws for lunch to pick-up my kiddo, and was sitting in the airport awaiting our flight home when the call finally came in delivering the final news. I was pretty burned-out at that time, so in many ways it was welcome news. Startup life can be crazy-intense, and I thankfully maintain a decent relationship with the co-founders today. But those days were highly stressful.

The good news was that I was already in-process with Gartner, and was able to close on the new gig a couple weeks later. Thus started what I thought would be one of my last jobs. Alas, I was wrong. As was much with my time there.

Before I go any further, an important observation bears noting: the onboarding experience is all-important. If you screw it up, then it sets a horrible tone for the entire gig, and the likelihood of success drops significantly. If onboarding is professional and goes smoothly, then people will feel valued and able to contribute. If it goes poorly, then people will feel undervalued from the get-go and they will literally start from an emotional hole. Don't do this to people! I don't care if you're a startup or a Fortune 50 large multi-national. Take care of people from Day 1 and things will go well. Fail at it and you might as well stop and release them asap.

Ok, anyway... back to Gartner. It was a difficult beginning. I was assigned a mentor, per their process, but he was gone 6 of the first 9 weeks I was there. I was sent to official "onboarding training" the end of August (the week before Labor Day!) despite having been there for 2 months by that time. I was not prepped at all before going to onboarding, and as it turns out I should have been. Others showed up with documents to be edited and an understanding of the process. I showed up completely stressed out, not at all ready to do the work that was expected, and generally had a very difficult time. It was also the week before Labor Day, which at the time meant it was teacher workshops, and I was on the road for it with 2 young kids at home. Thankfully, the in-laws came and helped out, but suffice to say it was just really not good all-around.

I really enjoyed the manager I worked for initially, but all that changed in February 2014 when my former mentor, with whom I did not at all get along, became the team manager. The stress levels immediately spiked as the focus quickly shifted to strong negativity. I had been struggling to get paper topics approved and was fighting against the reality that the target audience for Gartner research is not the leading edge of thinking, but the middle of the market. It took me nearly a full year to finally get my feet under me and start producing at an appropriate pace. My 1 yr mark roughly corresponded with the mid-year review, which was highly negative. By the end of the year I finally found my stride and had a ton of research in the pipeline (most of which would publish in early 2015). Unfortunately, the team manager, Captain Negative, couldn't see that and gave me one of the worst performance reviews I've ever received. It was hands-down the most insulted I'd ever been by a manager. It seemed very clear from his disrespectful actions that I wasn't wanted there, and so I launched an intensive job search. Meanwhile, I published something like 4 papers in 6 weeks while also having 4 talks picked up for that year's Security & Risk Management Conference. All I heard from my manager was negativity despite all that progress and success. I felt like shit, a total failure. There were no internal opportunities, so outward I looked, eventually landing at K12.

Oh, what a disaster that place was. K12 is hands-down the most toxic environment I've ever seen (and I've seen a lot!). Literally, all 10 people with whom I'd interviewed had lied to me - egregiously! I'd heard rumblings of changes in the executive ranks, but the hiring manager assured me there was nothing that would affect me. A new CIO - my manager's boss - started the same day I did. Yup, nothing that would affect me. Ha. Additionally, it turns out that they already had a "security manager" of sorts working in-house. He wasn't part of the interview process for my "security architect" role. They said they were doing DevOps, but it was just a side pilot that wasn't getting anywhere. Etc. Etc. Etc. Suffice to say, it was really bad. I frankly wondered how they were still in business, especially in light of the constant stream of lawsuits emanating from the states where they had "online public schools." Oy...

Suffice to say, I started looking for work on Day 1 at K12. But there wasn't much there, and recruiters were loath to talk to me given such a short stint. Explanations weren't accepted, and I was truly stuck. The longer I was there, the worse it looked. Finally, my old manager from AOL reached out as he was starting a CISO role at Ellucian. He rescued me, and in October 2015 I started with them in a security architect role.

There's not much I can say about my experience at Ellucian. Things seemed ok at first, but after a CIO change a few months in, plus a couple other personnel issues, things got wonky, and it became clear my presence was no longer desired. When your boss starts cancelling weekly 1-on-1 meetings with you, it becomes pretty clear that he doesn't really want you there. New Context reached out in May 2016 and offered me an opportunity to do research and publishing for them, so I jumped at it and got the heck out of dodge. It turns out, this was a HUGE mistake, too...

There's even less I can say about New Context... we'll just put it at this: Despite my best efforts, I was never able to get things published due to a lack of internal approvals. After a year of banging my head against the wall, my boss and I concluded it wasn't going to happen, and they let me go a couple weeks later.

From there, I launched my own solo practice and signed what was to be a 20-week contract with an LA-based client. They had been chasing me for several months to come help them out in a consulting (staff augmentation, really) capacity. I closed the deal with them and started on July 31st of this year. That first week was a mess, with them not being ready for me on day 1, then sending me a botched laptop build on day 2, and then finally getting me online on day 3. I flew to LA to be on-site with them the following week and immediately locked horns with the other security architect. That first week on-site was horribly stressful. Things had finally started leveling off last week, and then yesterday (Monday 8/28/17) they called and cancelled the contract. While I'm disappointed, it's also a bit of a relief. It wasn't a good fit, it was a very difficult client experience, and overall I was actively looking for new opportunities while I did what I could for them.

Shared Culpability or Mea Culpa?

After all these years, I'm tired of taking the blame and being the seemingly constant punchline to some joke I don't get. I'm tired, I'm burned-out, I'm frustrated, I'm depressed, and more than anything I just don't understand why things have gone so completely wrong over the past 10 years. How could one poor decision result in so much career chaos and heartache? It's astonishing. And appalling. And depressing.

I certainly share responsibility in all of this. I tend to be a fairly high-strung person (less so over the years) and onboarding is always highly stressful for me. Increasingly, employers want you engaged and functional on Day 1, even though that is incredibly unrealistic. Onboarding must be budgeted for a minimum of 3-6 months. If a move is involved, then even longer! Yet nobody is willing to allow that any more. I don't know if it's mythology or downward pressure or what... but the expectations are completely unreasonable.

But I do have a responsibility here, and I've certainly not been Mr. Sunshine the past few years, which means I tend to come off as extremely negative and sarcastic, which can be off-putting to people. Attitude is something I need to focus on when starting, and I need to find ways to better manage all the stress that comes with commencing a new gig.

That said, I also seem to have a knack for picking the wrong jobs. This even precedes my time at AOL, which is really a shining anchor in the middle of a turbulent career. Coming into the workforce just before the DOT-COM bubble burst, I've been through lots of layoffs and turmoil. I simply have a really bad track record of making good employment choices. I'm not even sure how to go about fixing that, short of finding people to advise me on the process.

However, lastly, it's important for companies to realize that they're also failing employees. The onboarding process is immensely important. Treating people respectfully and mindfully from Day 1 is immensely important. Setting reasonable expectations is immensely important. If you do not actively work to set your personnel up for success, then it is extremely unlikely that they'll achieve it! And even in this day and age where companies really, truly don't value personnel (except for execs and directors), it must be acknowledged that there is a significant cost in lost productivity, efficiency, and effectiveness that can be directly tied to employee turnover. This includes making sure managers are reasonably well trained and are actually well-suited to being managers. You owe it to your employees to treat them as humans, not just replaceable cogs in a machine.

Where To Go From Here?

The pull of deep depression is ever stronger. Resistance becomes ever more difficult with each successive failure. I feel like I cannot buy a break. My career is completely off-track and I decreasingly see a path to recovery. Every morning is a struggle to get up and look for work yet again. I feel like I've been doing this almost constantly for the past 10 years. I've not been settled anywhere since AOL (maybe BT).

I initially launched a solo practice, Falcon's View Consulting, to handle some contracts. And, that's still out there if I need it. However, what I really need is a full-time job. With a good, stable company. In a role with a good manager. A role that eventually has upward mobility (in order to get back on track).

Where that role is based I really do not care (my family might). Put me in a leadership role, pay me a reasonable salary, and relocate me to where you need me. At this point, I'm willing to go to bat and force the family to move, but you gotta make it easy and compelling. Putting me into financial hardship won't get it done. Putting me into a difficult position with no support won't get it done. Moving me and not being committed to keeping me onboard through the most stressful times won't get it done.

I'm quite seriously at the end of my rope. I feel like I have about one more chance left, after which it'll be bankruptcy and who knows what... I've given just about everything I can to this industry, and my reward has been getting destroyed in the process. This isn't sustainable, it isn't healthy, and it's altogether stupid.

I want to do good work. I want to find an employer that values me, one I can stay with for a reasonable period of time. I've never gone into any FTE role thinking "this is just a temporary stop while I find something better." I throw my whole self into my work, which is - I think - why it is so incredibly painful when rejection and failure finally happen. But I don't know another way to operate. Nor should anyone else, for that matter.

Two roads diverged in the woods / And I... I took the wrong one / And that has made all the difference

Toolsmith Release Advisory: Magic Unicorn v2.8

David Kennedy and the TrustedSec crew have released Magic Unicorn v2.8.
Magic Unicorn is "a simple tool for using a PowerShell downgrade attack and inject shellcode straight into memory, based on Matthew Graeber's PowerShell attacks and the PowerShell bypass technique presented by Dave and Josh Kelly at Defcon 18."

Version 2.8:
  • shortens length and obfuscation of unicorn command
  • removes direct -ec from PowerShell command
Usage:
"Usage is simple, just run Magic Unicorn (ensure Metasploit is installed and in the right path) and Magic Unicorn will automatically generate a PowerShell command that you need to simply cut and paste the PowerShell code into a command line window or through a payload delivery system."


Heralding GSoC17 Report

The summer is coming to an end, and so are my GSoC17 happy days. So now it's time to sum up the results and say goodbye to GSoC until next year.

My impressions about working on the Heralding project

Working on the Heralding project was an awesome experience for me. I feel I did something helpful, fun, and challenging at the same time. It was everything I had hoped for before the summer began!


Cyber Chef

Nice site at https://gchq.github.io/CyberChef/ - allows you to do all sorts of data format conversions, generate encodings and encryption, parse network data, extract strings, IPs, and email addresses, analyze hashes, and a lot more.
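CyberChef itself runs entirely in the browser, but for scripted one-offs the same kinds of transforms take only a few lines of Python. Here's a quick sketch of the decode-then-extract use case (this is not CyberChef's own API, just equivalent standard-library calls):

    import base64
    import re

    # Round-trip a sample payload, standing in for captured data
    sample = base64.b64encode(b"Contact admin@example.com at 192.168.1.1")
    text = base64.b64decode(sample).decode("utf-8")

    # Naive patterns in the spirit of CyberChef's extract-IPs / extract-emails recipes
    ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)
    emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
    print(text, ips, emails)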

Cooling Down the Hottest Ticket in Town

We had an interesting conversation on the Proverbs Hackers mailing list today about getting tickets for popular conferences that have limited ticket sales. The security conferences most often thought of in this category are DerbyCon and ShmooCon. For anyone who has tried to get tickets to one of these conferences in the traditional fashion, you know the struggle is real. The conversation got me thinking about ways you can acquire a ticket that you may not realize are available. Below is the result of that thought exercise.

  1. Automate it. If you do go the traditional route, every second counts. Never more so than with DerbyCon, which has traditionally opened up ticket sales early. There was a lot of dialog on that this year as they sold out before they were actually supposed to go on sale. For conferences like DerbyCon, racing for a ticket upon release is the worst way to try and get a ticket. But if you insist, set up a curl or wget based heartbeat script for the registration page and have it running 30 minutes before the scheduled start. This should give you the best chance of being one of the first to know when sales actually start. My wife and I did this for her Walker Stalker tickets this year and it worked great. Here's a one-liner to get you started (the ping is just a crude sleep, and say is macOS-only; a commented Python take on the same loop follows this list): while :; do ping -c 3 127.0.0.1 >/dev/null 2>&1; curl -s {purchase url} | grep "{text unique to pre-sale condition}" || say "go go go"; done
  2. Submit to the conference CFP. This has always been my approach. Places like ShmooCon have traditionally provided opportunities to buy tickets for every CFP submission. The system can be "gamed" a bit, but there is also always a chance that your CFP gets accepted, so be prepared to speak if you go this route.
  3. Buy second hand. This has traditionally been the best way to get a ticket for these conferences. I usually just keep an eye on Twitter, especially the day the conference sends CFP acceptance letters. This is the day that the accepted folks off-load the tickets they bought as a backup plan.
  4. Pay an accepted speaker their honorarium in exchange for the extra ticket they get offered. Many conferences offer an honorarium OR a second free ticket to the conference for accepted CFP submissions. Get in touch with someone whose CFP submission was accepted and offer to pay their honorarium in exchange for a ticket. Then, they can choose the extra ticket over the honorarium as their "gift" and sell it to you on site. This might cost a little more, as honorariums are typically more than the ticket price, but usually not by much.
  5. Go to training. Most conferences include access to the seminars with a training ticket. This is the most expensive way, but you also get the most out of it.
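For reference, here is a rough Python take on the heartbeat loop from tip #1 (the URL and marker text are placeholders you must fill in yourself; this prints and rings the terminal bell rather than using the macOS-only say):

    import time
    import urllib.request

    URL = "https://example.com/registration"  # placeholder: the real purchase page
    MARKER = "tickets are not yet on sale"    # placeholder: text unique to the pre-sale page

    while True:
        try:
            page = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8", "replace")
            if MARKER not in page:
                print("\a go go go")  # \a rings the terminal bell
                break
        except OSError as exc:
            print("fetch failed:", exc)  # keep polling through transient errors
        time.sleep(5)  # be polite; hammering the site helps no one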

So, perhaps this will open a door for someone that really wants to get a ticket to a conference, but thought they were out of options. If you happen to use one of these techniques and it works, I'd love to hear your success story. Good luck, and happy hunting. It's officially conference season.

Malware spam: "Voicemail Service" / "New voice message.."

The jumble of numbers in this spam is a bit confusing. Attached is a malicious RAR file that leads to Locky ransomware.

Subject: New voice message 18538124076 in mailbox 185381240761 from "18538124076" <6641063681>
From: "Voicemail Service" [vmservice@victimdomain.tdl]
Date: Fri, August 25, 2017 12:36 pm

Dear user: just wanted to let you know you were just left a 0:13 long

Mentoring: On Blogging

Received the question about blogging. More specifically:
  • How and Why
  • How to benefit from blogging
  • How to be consistent with posting
In my mind, the key to success in blogging is to be totally selfish in its planning and execution.

Blogging is a personal activity/journey that you allow the public to be a part of. What I mean by this is that the main audience for your blog should be YOU. My blog is a place where I take notes and occasionally try to talk about more touchy-feely topics or issues. These notes are notes that I'm ok with sharing publicly. I also keep a private blog (really more notes/cheat-sheets, think RTFM... I use MDwiki) because you don't need to give everyone all your tricks and secrets. If you show up for a new job and everyone knows your tricks because you've shared them publicly (because you need attention from strangers), what value are you bringing to your employer?

The benefit to blogging is note taking. I'm a HUGE proponent of taking notes and I'd chalk a lot of my success up to taking copious notes.  When I figure out how to mess with technology X, I take notes on it. As a consultant, it may be months or years before I see it again.  Having notes to go back to saves time and stress.  It also allows me to help people on my team in the event they run into it while I am on a different project.

How/Platforms:  I use Blogger because I don't want to secure/worry about my blogging platform. This blog was on Drupal for a bit and some jerk person decided to make an example of the blog's lack of updates publicly at BlackHat (appreciate the heads up...#totallynotbitter).  With Blogger, hosted WordPress, or some other hosted platform I'm offloading the risk and I don't have to worry about keeping up with patches.  

Consistent posting: no idea. It's clear I have lost the ability to post consistently. I do sometimes queue up a bunch of posts and schedule their posting. I've found it was easier to find things to blog about when I was consulting, since I had a different client every week, so it would be difficult to tie a vulnerability back to any particular client. Now that I work for a company, if I'm talking about some vulnerability or exploit I used, there is a good chance I used it for work, potentially exposing the company to risk.

Length.  No one reads long posts.  Break long posts into separate logical posts even if you choose to post them at the same time.


Also see the "On Social Media" post (Todo)

Also
https://www.j4vv4d.com/a-blog-about-blogging-with-bloggers/

Also see this timely tweet by Robin Wood
https://twitter.com/digininja/status/900340713669279745

Malware spam: "Your Sage subscription invoice is ready" / noreply@sagetop.com

This fake Sage invoice leads to Locky ransomware. Quite why Sage are picked on so much by the bad guys is a bit of a mystery.

Subject: Your Sage subscription invoice is ready
From: "noreply@sagetop.com" [noreply@sagetop.com]
Date: Thu, August 24, 2017 8:49 pm

Dear Customer
Your Sage subscription invoice is now ready to view.
Sage subscriptions To view your Sage subscription

Multiple badness on metoristrontgui.info / 119.28.100.249

Two massive fake "Bill" spam runs seem to be under way, one claiming to be from BT and the other being more generic.

Subject: New BT Bill
From: "BT Business" [btbusiness@bttconnect.com]
Date: Thu, August 24, 2017 6:08 pm
Priority: Normal

From BT
New BT Bill
Your bill amount is: $106.84
This doesn't include any amounts brought forward from any other bills.
We've put your latest

Malware spam: "Customer Service" / "Copy of Invoice xxxx"

This fairly generic spam leads to the Locky ransomware:

Subject: Copy of Invoice 3206
From: "Customer Service"
Date: Wed, August 23, 2017 9:12 pm

Please download file containing your order information.
If you have any further questions regarding your invoice, please call Customer Service.
Please do not reply directly to this automatically generated e-mail message.
Thank

MS16-149 – Important: Security Update for Microsoft Windows (3205655) – Version: 1.1

Severity Rating: Important
Revision Note: V1.1 (August 23, 2017): Corrected the Updates Replaced for security update 3196726 to None. This is an informational change only. Customers who have already successfully installed the update do not need to take any further action.
Summary: This security update resolves vulnerabilities in Microsoft Windows. The most severe of the vulnerabilities could allow elevation of privilege if a locally authenticated attacker runs a specially crafted application.

Malware spam: "Voice Message Attached from 0xxxxxxxxxxx – name unavailable"

This fake voice mail message leads to malware. It comes in two slightly different versions, one with a RAR file download and the other with a ZIP.

Subject: Voice Message Attached from 001396445685 - name unavailable
From: "Voice Message"
Date: Wed, August 23, 2017 10:22 am
Time: Wed, 23 Aug 2017 14:52:12 +0530

Download

Malware spam from "Voicemail Service" [pbx@local]

This fake voicemail leads to malware:

Subject: [PBX]: New message 46 in mailbox 461 from "460GOFEDEX" <8476446077>
From: "Voicemail Service" [pbx@local]
Date: Tue, August 22, 2017 10:37 am
To: "Evelyn Medina"
Priority: Normal

Dear user: just wanted to let you know you were just left a 0:53 long message (number 46) in mailbox 461 from "460GOFEDEX" <

Cerber spam: "please print", "images etc"

I only have a couple of samples of this spam, but I suspect it comes in many different flavours.

Subject: images
From: "Sophia Passmore" [Sophia5555@victimdomain.tld]
Date: Fri, May 12, 2017 7:18 pm

--
*Sophia Passmore*

Subject: please print
From: "Roberta Pethick" [Roberta5555@victimdomain.tld]
Date: Fri, May 12, 2017 7:18 pm

--
*Roberta Pethick*

In these two

IoT device guidelines

On several occasions I’ve written about insecurities of the Internet of Things – such as here, here, here, here and here. Recently, four US Senators decided to do something about it, and with the help of the Atlantic Council and Harvard University, have drafted a bill outlining minimum security requirements for IoT device purchases by …

Google Begins Campaign Warning Forms Not Using HTTPS Protocol

In August 2014, Google released an article sharing their thoughts on how they planned to focus on their "HTTPS everywhere" campaign (originally initiated at their Google I/O event). The premise of...



Toolsmith #127: OSINT with Datasploit

I was reading an interesting Motherboard article, Legal Hacking Tools Can Be Useful for Journalists, Too, that includes reference to one of my all-time OSINT favorites, Maltego. Joseph Cox's article also mentions Datasploit, a 2016 favorite of fellow tools aficionado Toolswatch.org; see 2016 Top Security Tools as Voted by ToolsWatch.org Readers. Having not yet explored Datasploit myself, this proved to be a grand case of "no time like the present."
Datasploit is "an #OSINT Framework to perform various recon techniques, aggregate all the raw data, and give data in multiple formats." More specifically, as stated on the Datasploit documentation page under Why Datasploit, it utilizes various Open Source Intelligence (OSINT) tools and techniques found to be effective, and brings them together to correlate the raw data captured, providing the user relevant information about domains, email addresses, phone numbers, person data, etc. Datasploit is useful for collecting relevant information about a target in order to expand your attack and defense surface very quickly.
The feature list includes:
  • Automated OSINT on domain / email / username / phone for relevant information from different sources
  • Useful for penetration testers, cyber investigators, defensive security professionals, etc.
  • Correlates and collates results, showing them in a consolidated manner
  • Tries to find out credentials, API keys, tokens, sub-domains, domain history, legacy portals, and more as related to the target
  • Available as single consolidating tool as well as standalone scripts
  • Performs Active Scans on collected data
  • Generates HTML, JSON reports along with text files
Resources
Github: https://github.com/datasploit/datasploit
Documentation: http://datasploit.readthedocs.io/en/latest/
YouTube: Quick guide to installation and use

Pointers
A few pointers to keep you from losing your mind. This project is very much a work in progress, with lots of very frustrated users filing bugs and wondering where the support is. The team is doing their best; be patient with them, but read through the Github issues to be sure any bugs you run into haven't already been addressed.
1) Datasploit does not error gracefully; it just crashes. This can be the result of unmet dependencies or even a missing API key. Do not despair, take note, I'll talk you through it (a small crash-capturing wrapper sketch follows these pointers).
2) For ease, and the best match to the documentation, I suggest running Datasploit from an Ubuntu variant. Your best bet is to grab Kali, VM or dedicated, and load it up there, as I did.
3) My installation guidance and recommendations should hopefully get you running trouble free, follow it explicitly.
4) Acquire as many API keys as possible, see further detail below.
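Since crashes are the normal failure mode, it can help to wrap the run step (shown in the next section) so any traceback gets captured for a bug report. A minimal sketch, assuming the /etc/datasploit install location used below:

    import subprocess

    # Invoke datasploit and capture everything it writes
    result = subprocess.run(
        ["python", "datasploit.py", "example.com"],
        cwd="/etc/datasploit",
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    if result.returncode != 0:
        # Tracebacks usually point at an unmet dependency or a missing API key
        with open("datasploit-crash.log", "w") as log:
            log.write(result.stdout + result.stderr)
        print("crashed; traceback saved to datasploit-crash.log")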

Installation and preparation
From Kali bash prompt, in this order:

  1. git clone https://github.com/datasploit/datasploit /etc/datasploit
  2. apt-get install libxml2-dev libxslt-dev python-dev lib32z1-dev zlib1g-dev
  3. cd /etc/datasploit
  4. pip install -r requirements.txt
  5. mv config_sample.py config.py
  6. With your preferred editor, open config.py and add API keys for the following at a minimum; they are, for all intents and purposes, required. Detailed instructions to acquire each are here (a hypothetical sketch of the finished file follows this list):
    1. Shodan API
    2. Censysio ID and Secret
    3. Clearbit API
    4. Emailhunter API
    5. Fullcontact API
    6. Google Custom Search Engine API key and CX ID
    7. Zoomeye Username and Password
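The finished file is just a Python module of constants. For illustration only (the variable names below are hypothetical; copy the real ones from config_sample.py):

    # config.py -- names below are hypothetical; mirror config_sample.py
    shodan_api = "YOUR_SHODAN_KEY"
    censysio_id = "YOUR_CENSYS_ID"
    censysio_secret = "YOUR_CENSYS_SECRET"
    clearbit_apikey = "YOUR_CLEARBIT_KEY"
    emailhunter = "YOUR_EMAILHUNTER_KEY"
    fullcontact_api = "YOUR_FULLCONTACT_KEY"
    google_cse_key = "YOUR_GOOGLE_CSE_KEY"
    google_cse_cx = "YOUR_CSE_CX_ID"
    zoomeye_user = "YOUR_ZOOMEYE_USER"
    zoomeye_pass = "YOUR_ZOOMEYE_PASSWORD"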
If, and only if, you've done all of this correctly, you might end up with a running instance of Datasploit. :-) Seriously, this is some of the glitchiest software I've tussled with in quite a while, but the results paid off handsomely. Run python datasploit.py domain.com, where domain.com is your target. Obviously, I ran python datasploit.py holisticinfosec.org to acquire results pertinent to your author.
Datasploit rapidly pulled results as follows:
  • 211 domain references from Github [screenshot: Github results]
  • Luckily, no results from Shodan. :-)
  • Four results from Paste(s) [screenshot: Pastebin and Pastie results]
  • russ at holisticinfosec dot org, as expected, per email harvesting
  • Accurate HolisticInfoSec host location data from Zoomeye [screenshot]
  • Details regarding HolisticInfoSec sub-domains and page links [screenshot: Sub-domains and page links]
  • Finally, a good return on DNS records for holisticinfosec.org and, thankfully, no vulns found via PunkSpider.

DataSploit can also be integrated into other code and called as individual scripts for unique functions. I did a quick run with python emailOsint.py russ@holisticinfosec.org and the results were impressive [screenshot: Email OSINT]:
I love that the first query is of Troy Hunt's Have I Been Pwned. Not sure if you have been? Better check it out. Reminder here: you'll really want to have as many API keys as possible, or you may find these buggy scripts crashing. You'll definitely find yourself weighing the frustration against the rapid, detailed results. I put this offering squarely in the "shows much promise" category, if the devs keep focus on it, assess for quality, and handle errors better.
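If you only want the Have I Been Pwned check on its own, it's a single call to the public API. A sketch against the v2 endpoint as it existed at the time (the service has since moved to a keyed v3 API, so treat this as illustrative):

    import json
    import urllib.error
    import urllib.parse
    import urllib.request

    account = "russ@holisticinfosec.org"
    url = "https://haveibeenpwned.com/api/v2/breachedaccount/" + urllib.parse.quote(account)
    req = urllib.request.Request(url, headers={"User-Agent": "osint-demo"})  # HIBP requires a UA
    try:
        breaches = json.load(urllib.request.urlopen(req))
        print([b["Name"] for b in breaches])
    except urllib.error.HTTPError as err:
        # HIBP answers 404 when the account appears in no known breach
        print("no breaches found" if err.code == 404 else err)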
Give Datasploit a try for sure.
Cheers, until next time...

Detect and Prevent Data Exfiltration Webinar with Infoblox

Please join SANS Institute Instructor and LEO Cyber Security Co-Founder & CTO Andrew Hay and Infoblox Security Product Marketing’s Sam Kumarsamy on Thursday, August 17th, 2017 at 1:00 PM EDT (17:00:00 UTC) as they present a SANS Institute webinar entitled Detect & Prevent Data Exfiltration: A Unique Approach.

Overview

Data is the new currency in the modern digital enterprise, and protecting it is a strategic imperative for every organization. Enterprises must protect data whether it resides in a data center or on an individual's laptop, used on premise or off, across the globally distributed enterprise. Effective data exfiltration prevention requires protecting DNS, the channel most commonly used to steal data, and combining reputation, signatures, and behavioral analytics. Detecting and preventing data loss requires analyzing vast amounts of network data, and a solution that can scale to examine it. In this webinar you will also learn about Infoblox's unique approach to detecting and preventing data exfiltration.
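The behavioral-analytics piece can be pictured with a toy heuristic: DNS exfiltration tends to pack data into long, high-entropy query labels. The sketch below is illustrative only (the thresholds are made up, and this is in no way Infoblox's implementation):

    import math
    from collections import Counter

    def entropy(s: str) -> float:
        """Shannon entropy of a string, in bits per character."""
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    def looks_like_exfil(qname: str) -> bool:
        # Long leftmost labels with high entropy suggest base32/base64-packed data
        label = qname.split(".")[0]
        return len(label) > 40 and entropy(label) > 4.0  # made-up thresholds

    print(looks_like_exfil("a9f3k2q8z7x1c4v6b5n0m2l8p3o7i9u4y6t1r5e8w2q0d7s3.tunnel.example"))  # True
    print(looks_like_exfil("www.infoblox.com"))  # False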

To register for the webinar, please visit: https://www.sans.org/webcasts/detect-prevent-data-exfiltration-unique-approach-infoblox-104985

You can now also attend the webcast using your mobile device!

 


The CERT Guide to Coordinated Vulnerability Disclosure

We are happy to announce the release of the CERT® Guide to Coordinated Vulnerability Disclosure (CVD). The guide provides an introduction to the key concepts, principles, and roles necessary to establish a successful CVD process. It also provides insights into how CVD can go awry and how to respond when it does so.

As a process, CVD is intended to minimize adversary advantage while an information security vulnerability is being mitigated. And it is important to recognize that CVD is a process, not an event. Releasing a patch or publishing a document are important events within the process, but do not define it.

CVD participants can be thought of as repeatedly asking these questions: What actions should I take in response to knowledge of this vulnerability in this product? Who else needs to know what, and when do they need to know it? The CVD process for a vulnerability ends when the answers to these questions are "nothing" and "no one."

If we have learned anything in nearly three decades of coordinating vulnerability reports at the CERT/CC, it is that there is no single right answer to many of the questions and controversies surrounding the disclosure of information about software and system vulnerabilities. The CERT Guide to CVD is a summary of what we know about a complex social process that surrounds humans trying to make the software and systems they use more secure. It's about what to do (and what not to) when you find a vulnerability, or when you find out about a vulnerability. It's written for vulnerability analysts, security researchers, developers, and deployers; it's for both technical staff and their management alike. While we discuss a variety of roles that play a part in the process, we intentionally chose not to focus on any one role; instead we wrote for any party that might find itself engaged in coordinating a vulnerability disclosure.

In a sense, this report is a travel guide for what might seem a foreign territory. Maybe you've passed through once or twice. Maybe you've only heard about the bad parts. You may be uncertain of what to do next, nervous about making a mistake, or even fearful of what might befall you. If you count yourself as one of those individuals, we want to reassure you that you are not alone; you are not the first to experience events like these or even your reaction to them. We're locals. We've been doing this for a while. Here's what we know.

Abstract

Security vulnerabilities remain a problem for vendors and deployers of software-based systems alike. Vendors play a key role by providing fixes for vulnerabilities, but they have no monopoly on the ability to discover vulnerabilities in their products and services. Knowledge of those vulnerabilities can increase adversarial advantage if deployers are left without recourse to remediate the risks they pose. Coordinated Vulnerability Disclosure (CVD) is the process of gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations to various stakeholders including the public. The CERT Coordination Center has been coordinating the disclosure of software vulnerabilities since its inception in 1988. This document is intended to serve as a guide to those who want to initiate, develop, or improve their own CVD capability. In it, the reader will find an overview of key principles underlying the CVD process, a survey of CVD stakeholders and their roles, and a description of CVD process phases, as well as advice concerning operational considerations and problems that may arise in the provision of CVD and related services.

The CERT® Guide to Coordinated Vulnerability Disclosure is available in the SEI Digital Library.

2017 DerbyCon Hiring List

Created the 2017 UNOFFICIAL DerbyCon Hiring List. Getting on the list is even easier now! Just complete the following form: https://goo.gl/forms/vyqVHjZkxE4WhA9X2

(One small tip: it's first come, first served, so if you want to be at the top of the list, it's best to submit the best info you have rather than waiting on anyone. I don't change the list order for anyone.)

Direct Link to Google Doc: https://docs.google.com/spreadsheets/d/1tf0C09Cwt6_GBinOjvI714655YdQcxi5k2g6iDzPt9I/

MS17-MAR – Microsoft Security Bulletin Summary for March 2017 – Version: 4.0

Revision Note: V4.0 (August 8, 2017): For MS17-007, to comprehensively address CVE-2017-0071, Microsoft released the July security updates for all versions of Windows 10. Note that Windows 10 for 32-bit Systems, Windows 10 for x64-based Systems, Windows 10 Version 1703 for 32-bit Systems, and Windows 10 Version 1703 for x64-based Systems have been added to the Affected Products table as they are also affected by this vulnerability. Microsoft recommends that customers who have not already done so install the July 2017 security updates to be fully protected from this vulnerability.
Summary: This bulletin summary lists security bulletins released for March 2017.

4038556 – Guidance for securing applications that host the WebBrowser Control – Version: 1.0

Revision Note: V1.0 (August 8, 2017): Advisory published.
Summary: Microsoft is releasing this security advisory to provide information regarding security settings for applications developed with the Microsoft Internet Explorer layout engine, also known as the Trident layout engine. This advisory also provides guidance on what developers and individuals can do to ensure that their applications hosting the WebBrowser Control are properly secured.

MS17-007 – Critical: Cumulative Security Update for Microsoft Edge (4013071) – Version: 2.0

Severity Rating: Critical
Revision Note: V2.0 (August 8, 2017): To comprehensively address CVE-2017-0071, Microsoft released the July security updates for all versions of Windows 10. Note that Windows 10 for 32-bit Systems, Windows 10 for x64-based Systems, Windows 10 Version 1703 for 32-bit Systems, and Windows 10 Version 1703 for x64-based Systems have been added to the Affected Products table as they are also affected by this vulnerability. Microsoft recommends that customers who have not already done so install the July 2017 security updates to be fully protected from this vulnerability.
Summary: This security update resolves vulnerabilities in Microsoft Edge. The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Microsoft Edge. An attacker who successfully exploited these vulnerabilities could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

CISOs Moving up in the Corporate Ladder? CIOs Shouldn’t Be Worried


While Chief Information Security Officers (CISOs) are relatively new members of the C-Suite for many organizations, the continued worries about cybersecurity and data breaches have compelled CEOs and boards to reconsider the positioning of the CISO function in the organizational chart.
CISOs - A Rapid Ascent
According to a Forrester study from 2015, 35% of CISOs now report directly to the CEO or president of the organization. This reality is often a little challenging — if not impossible — for CIOs to digest. After all, why is it that someone who used to report to the CIO just a decade ago now gets unfiltered access to the top leadership, and often special budget lines?
A recent blog post characterizes the evolution of the CISO role thusly: “The Guardian and Technologist is giving way to the Business Strategist, the Business Enabler and the Trusted Advisor, who articulates risk, reviews metrics and reports regularly to the board.”
A January 2017 CIO article reported that organizations where the CISO still reports to the CIO had “14% more downtime due to security incidents.” And while the majority of CISOs still report to CIOs, this situation is fluid and evolving rapidly. A K-Logix study reports that when asked about where CISOs will be reporting in the future, “50% of CISOs responded that the role will report into the CEO.”

So, while it may be tempting to view the CISO's rise as a loss, it isn't something that CIOs can do much about, at least given the current threat environment. Instead, CIOs can look at this change in the executive landscape as an opportunity to refocus their role, and rally around causes that are relevant to both CISOs and CIOs.
The CISO as a Potential Ally of the CIO
Choose your battles wisely. After all, life isn't measured by how many times you stood up to fight. It's not winning battles that makes you happy, but it's how many times you turned away and chose to look into a better direction. Life is too short to spend it on warring. Fight only the most, most, most important ones, let the rest go. – C. JoyBell C.

For decades, a CIO was often the only technology-minded person in the C-Suite. The rise of the CISO means that the CIO has a potential ally within earshot of the CEO or the board. Yet CISOs are not seeking to replace CIOs, and CIOs can no longer look at IT risks as falling purely within "their domain." The digital risk landscape needs – requires – a functioning relationship between these two giants of the world of data.

CIOs should seize this opportunity to openly revisit their relationship with the CISO and patch things up, especially any past disagreements that could continue to poison the relationship.
CISO as A Strategic Partner
While a positive working CIO-CISO relationship is definitely a must, the global marketplace and the ever-increasing cybersecurity risks mean that to be truly effective, the CIO-CISO relationship should be that of a strategic partnership: CIOs and CISOs should forge an alliance to focus both on protecting and enabling the organization through smart, effective investments in security and technology.

For example, AI and cloud are changing the way organizations do business, leveraging on-demand computing and storage that bring cost savings and increased agility, but also presenting new challenges in keeping track of IT risks and preparing for the inevitable breach. By working together strategically, the CIO and CISO can lean on each other: on one hand providing the IT and data infrastructure that keeps the organization running, and on the other keeping cyber risks within acceptable levels, all while maintaining a vigilant eye on the network, devices, and data, ready to respond when needed.

The CIO, as an experienced member of the C-Suite, can start building this new level of relationship by offering to share their own lessons learned and experiences from joining the top leadership, and by sharing their concerns about the overall digital strategy of the business. For some CIOs, relinquishing control will be more challenging, but one has to pick one's battles, and the positioning of the CISO isn't one worth hanging on to, at least not in the interest of the organization and the greater good.

This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.com

Eaton ELCSoft Vulnerabilities

NCCIC/ICS-CERT is aware of a public report of buffer overflow vulnerabilities affecting Eaton ELCSoft, a PLC programming software for Eaton Logic Control (ELC) controllers. According to the public report, which was coordinated with ICS-CERT prior to its public release, researcher Ariele Caltabiano (kimiya) working with Trend Micro's Zero Day Initiative, identified that an attacker can leverage these vulnerabilities to execute arbitrary code in the context of the process. ICS-CERT has notified the affected vendor, who has reported that they are planning to address the vulnerabilities. No timeline has been provided. ICS-CERT is issuing this alert to provide notice of the report and to identify baseline mitigations for reducing risks to these and other cybersecurity attacks.

Howto setup a Debian 9 with Proxmox and containers using as few IPv4 and IPv6 addresses as possible

My current Linux Root-Server needs to be replaced with a newer Linux version and should also be much cheaper than the current one. So first I looked at what I don't like about the current one:

  • It is expensive at about 70 Euros / month. The following is responsible for that:
    • My own HPE hardware with 16GB RAM and a software RAID (hardware RAID would be even more expensive) – iLO (or something like it) is a must for me 🙂
    • 16 additional IPv4 addresses for the virtualized containers and servers
    • Enough backup space to go back some days
  • A base OS which makes it hard to run newer Linux versions in the containers (sure, old ones like CentOS 6 still get updates, but that will change)
    • It's time to move to newer Linux versions in the containers
  • OpenVZ-based containers, which are not mainstream anymore

Then I looked at which surrounding conditions have changed since I set up my current server.

  • I have IPv6 at home and 70% of my traffic is IPv6 (thanks to Google (especially YouTube) and Cloudflare)
  • IPv4 addresses have gotten even more expensive for Root-Servers
  • I'm now using Cloudflare for most of the websites I host.
  • Cloudflare is reachable via IPv4 and IPv6 and can connect back to my servers with either IPv4 or IPv6
  • With unprivileged containers, the need to use KVM for security lessens
  • Hosting providers now offer really cheap KVM servers with dedicated reserved CPUs
  • KVM servers can host containers without a problem

This led to the decision to try the following setup:

  • A KVM-based server for less than 10 Euro / month at Netcup to try the concept
  • No additional IPv4 addresses; everything should work with only 1 IPv4 address and a /64 IPv6 subnet
  • The base OS should be Debian 9 (“Stretch”)
  • For ease of configuration of the containers, I will use the current Proxmox with LXC
  • No HTTP reverse proxy of my own; instead use Cloudflare exclusively for all websites to translate from IPv4 to IPv6

After that decision was reached, I searched for howtos which would allow me to just set it up without doing much research. Sadly that didn't work out. Sure, there are multiple howtos which explain how to set up Debian and Proxmox, but when you get into the nifty parts – e.g. using only minimal IP addresses, working around MAC address filters at the hosting providers (which are quite an important security function, BTW), and IPv6 – they will tell you: you need more IP addresses, you need a really complicated setup, or they just ignore the point altogether.

As you are reading this blog post, you know that I found a way – so expect a complete documentation on how to set up such a server. I'll concentrate on the relevant parts to allow you to set up a similar server. Of course I also did some security hardening, like a secure ssh setup with only public keys, the right ciphers, …. which I won't cover here.

Setting up the OS

I used the Debian 9 minimal install which Netcup provides, changed the password and hostname, changed the language to English (to be more exact, to C), and moved the SSH port to a non-standard port. The last one not so much for security, but because of the constant scans on port 22, which flood the logs.

passwd
vim /etc/hosts
vim /etc/hostname
dpkg-reconfigure locales
vim /etc/ssh/sshd_config
/etc/init.d/ssh restart
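
For reference, the sshd_config change is just the Port directive – a minimal sketch, where 2222 is only an example port, pick your own:

# /etc/ssh/sshd_config – the only line changed here
Port 2222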

I followed that by making sure no firewall is active, and installed net-tools so I get netstat and ifconfig.

apt install net-tools

Finally I checked whether any packages need an update.

apt update
apt upgrade

Installing Proxmox

First I checked that the hostname resolves to the server's public IP address, as otherwise the install fails and you need to start from scratch.

hostname --ip-address
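
This should print the public IPv4 address; if it prints 127.0.1.1 instead, fix the entry in /etc/hosts – a sketch with placeholder values:

# /etc/hosts – the hostname must resolve to the public IP
<your public IPv4> <hostname>.<domain> <hostname>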

Adding the Proxmox Repos to the system and installing the software:

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt dist-upgrade
apt install proxmox-ve postfix open-iscsi

After that I rebooted into the Proxmox kernel and removed some packages I didn't need anymore:

apt remove os-prober linux-image-amd64 linux-image-4.9.0-3-amd64

Then I did my first login to the admin GUI at https://<hostname>:8006/ and enabled the Proxmox firewall.

Then I set the firewall rules for protecting the host (I did that at the datacenter level, even though I only have one server at the moment): ping, the web GUI, and ssh are allowed.
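
For reference, a sketch of what those rules amount to in /etc/pve/firewall/cluster.fw (the GUI writes this file for you; 8006 is the web GUI port, and you should adjust the ssh rule if you moved the port like I did):

[OPTIONS]
enable: 1

[RULES]
IN Ping(ACCEPT)
IN ACCEPT -p tcp -dport 8006
IN SSH(ACCEPT)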

I made sure with

iptables -L -xvn

that the firewall was running.

BTW, if you don't like the nagging window at every login telling you that you need a license, and if this is only a testing machine as mine currently is, type the following:

sed -i.bak 's/NotFound/Active/g' /usr/share/perl5/PVE/API2/Subscription.pm && systemctl restart pveproxy.service

Now we need to configure the network (vmbr0) for our virtual systems, and this is the point where my howto goes in a different direction. Normally you're told to configure vmbr0 and put the physical interface into the bridge. Bridging is normally the easiest mode, but it won't work here.

Routing instead of bridging

Normally you are told that if you use public IPv4 and IPv6 addresses in containers, you should bridge. Yes, that's true, but there is one problem: LXC containers have their own MAC addresses. So when they send traffic via the bridge to the datacenter switch, the switch sees the virtual MAC address. In an internal company network on a physical host that is normally not a problem, but in a datacenter where different people rent their servers it's not good security practice. Most hosting providers will filter the MAC addresses on the switch (sometimes additional IPv4 addresses come with the right to use additional MAC addresses, but we want to save money here 🙂 ). As this server is a KVM guest OS, the filtering is most likely part of the virtual switch (for VMware ESX this is even the default).

With ebtables it is possible to configure a SNAT for the MAC addresses, but that gets really complicated really fast – trust me on networking stuff: when I say complicated, I mean really complicated. 🙂

So, if we can't use bridging, we need to use routing. Yes, the routing setup on the server is not as easy, but it is clean and simple to understand.

First we configure the physical interface in the admin GUI

Two configurations differ from a normal setup. The provider most likely gave you a /23 or /24, but I use a /32 subnet mask (255.255.255.255), as I only want to talk to the default gateway and not to the other customers' servers. If the switch thinks traffic is ok, it can reroute it for me. The provider switch will defend its IP address against ARP spoofing – I'm quite sure of that, as otherwise one customer's incorrect configuration would break the network for all customers, and the provider will make that mistake only once. For IPv6 we do basically the same with a /128, but in this case we also want to reuse the /64 subnet on our second interface.

As I don't have additional IPv4 addresses, I'll use a local subnet to provide IPv4 access to the containers (via NAT); the IPv6 address gets configured a second time, now with the /64 subnet mask. This setup allows us to route with only one /64 – we're cheap … no extra money needed.

Now we reboot the server so that the /etc/network/interfaces config gets written. We then need to add some additional settings there, so it ends up looking like the sketch below.
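
A minimal sketch of what the finished /etc/network/interfaces amounts to – the interface name ens3 matches this server, the 10.0.0.0/24 NAT subnet is an example of mine, and every value in angle brackets is a placeholder you need to replace; the conntrack-zone workaround for the Proxmox firewall bridges (fwbr+) is an assumption of how such a setup is usually done:

auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
        address <your public IPv4>
        netmask 255.255.255.255
        pointopoint <datacenter gateway IPv4>
        gateway <datacenter gateway IPv4>

iface ens3 inet6 static
        address 2a03:5000:3d:1ee::2
        netmask 128
        gateway <datacenter gateway IPv6>

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # let container traffic pass the SNAT rule below (LXC specialty)
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        # simple SNAT of the container subnet to the public IPv4 address
        post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o ens3 -j SNAT --to-source <your public IPv4>
        # delete the rules again when the network is stopped
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o ens3 -j SNAT --to-source <your public IPv4>

# the same IPv6 address a second time, now with the /64 netmask
iface vmbr0 inet6 static
        address 2a03:5000:3d:1ee::2
        netmask 64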

The first post-up iptables command is needed to make sure that traffic from the containers passes the second rule – it's some kind of LXC specialty. The second command is just a simple SNAT to your public IPv4 address. The two post-down commands make sure that the iptables rules get deleted again when you stop the network.

Now we need to make sure that the container traffic gets routed, so we enable the forwarding settings in /etc/sysctl.conf.
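
These are the standard IPv4 and IPv6 forwarding switches; a minimal sketch (apply them without a reboot via sysctl -p):

# /etc/sysctl.conf – route IPv4 and IPv6 for the containers
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1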

Now we're almost done. One point remains: the switch/router which is our default gateway needs to be able to send packets to our containers. For this, IPv6 uses something similar to an ARP request; it is called neighbor discovery, and as the container network is routed, we need to answer the requests on the host system.

Neighbor Discovery Protocol (NDP) Proxy

We could now do this by using proxy_ndp, the IPv6 variant of proxy_arp. First enable proxy_ndp by running:

sysctl -w net.ipv6.conf.all.proxy_ndp=1

You can enable this permanently by adding the following line to /etc/sysctl.conf:

net.ipv6.conf.all.proxy_ndp = 1

Then run:

ip -6 neigh add proxy 2a03:5000:3d:1ee::100 dev ens3

This tells the host Linux system to generate Neighbor Advertisement messages in response to Neighbor Solicitation messages for 2a03:5000:3d:1ee::100 (e.g. our container with ID 100) that enter through ens3.

While proxy_arp can be used to proxy a whole subnet, this appears not to be the case with proxy_ndp. To protect the memory of upstream routers, you can only proxy defined addresses. That's not a simple solution if we need to add an entry for every container, but we're saved from that, as Debian 9 ships with a daemon that can proxy a whole subnet: ndppd. Let's install and configure it:

apt install ndppd
cp /usr/share/doc/ndppd/ndppd.conf-dist /etc/ndppd.conf

and write a config like this

route-ttl 30000
proxy ens3 {
    router no
    timeout 500
    ttl 30000
    rule 2a03:5000:3d:1ee::/64 {
        auto
    }
}

Now enable it by default and start it:

update-rc.d ndppd defaults
/etc/init.d/ndppd start

Now it is time to reboot the system and create your first container.

Container setup

The container setup is easy, you just need to use the Proxmox host as the default gateway; the network settings you enter amount to something like the sketch below.
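
Inside the container (or via the Proxmox GUI fields), the network config amounts to something like this – container ID 100, the 10.0.0.0/24 NAT subnet, and the host addresses from the sketch above are my examples:

auto eth0
iface eth0 inet static
        address 10.0.0.100
        netmask 255.255.255.0
        gateway 10.0.0.1 # the host's vmbr0 address

iface eth0 inet6 static
        address 2a03:5000:3d:1ee::100
        netmask 64
        gateway 2a03:5000:3d:1ee::2 # the host's IPv6 address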

As you can see, the setup is quite cool, and it allows you to create containers without thinking about it. A similar setup is also possible with IPv4 addresses. As I don't need it, I'll just quickly describe it here.

Short info for doing the same for an additional IPv4 subnet

The following needs to be added to /etc/network/interfaces:

iface ens3 inet static
pointopoint 186.143.121.1

iface vmbr0 inet static
address 186.143.121.230 # Our host will be the gateway for all containers
netmask 255.255.255.255
# Add all single IPs from your /29 subnet
up route add -host 186.143.34.56 dev vmbr0
up route add -host 186.143.34.57 dev vmbr0
up route add -host 186.143.34.58 dev vmbr0
up route add -host 186.143.34.59 dev vmbr0
up route add -host 186.143.34.60 dev vmbr0
up route add -host 186.143.34.61 dev vmbr0
up route add -host 186.143.34.62 dev vmbr0
up route add -host 186.143.34.63 dev vmbr0
.......

We're reusing the ens3 IP address. Normally we would add our additional IPv4 network, e.g. a /29. The problem with that straightforward setup would be that we would lose 2 IP addresses (the network base and broadcast addresses). Also, the pointopoint directive is important: it tells our host to send all requests to the datacenter IPv4 gateway – even if we want to talk to our neighbors later.

Then for the container setup you just need to replace the IPv4 config with the following:

auto eth0
iface eth0 inet static
address 186.143.34.56 # Any IP of our /29 subnet
netmask 255.255.255.255
gateway 186.143.121.230 # Our host machine will do the job!
pointopoint 186.143.121.1

Hope that saved you some time setting up your own system!

On Titles, Jobs, and Job Descriptions (Not All Roles Are Architects)

Folks: Please stop calling every soup-to-nuts, everything-but-the-kitchen-sink security job a "security architect" role. It's harmful to the industry and it's doing you no favors trying to find the right resources. In fact, please stop posting these "one role does everything security under the sun" positions altogether. It's hurting your recruitment efforts, and it makes it incredibly difficult to find positions that are a good fit. Let me explain...

For starters, there are generally three classes of security people, management and pentesters aside:
- Analysts
- Engineers
- Architects

(Note that these terms tend to be loaded due to their use in other industries. In fact, in some states you might even have to come up with a different equivalent term for positions due to legal definitions (or licensing) of roles. Try to bear with me and just go with the flow, eh?)

Analysts are people who think about stuff and write about stuff and sometimes help initiate actions, but they are not the implementers of security tools or practices. An analyst may or may not be particularly technical, depending on the nature of the role. For example, there are tons of entry-level SOC analyst positions today that can provide a first taste of infosec work life. You rarely need to have a lot of technical skills, at least initially, to land one of these gigs (this varies by org). Similarly, there are GRC analyst roles that tend not to be technical at all (despite often including "technical writing," such as for policies, in the workload). On the far end of the spectrum, you may have incident response (IR) analysts who are very technical, but again note the nature of their duties: thinking about stuff, writing about stuff, and maybe initiating actions (such as the IR process or escalations therein).

Engineers are people who do most of the hands-on work. If you're looking for someone to do a bunch of implementation work, particularly around security tools and tech, then you want a security engineer, and that should be clearly stated in your job description. Engineers tend to be people who really enjoy implementation and maintenance work. They like rolling up their sleeves and getting their hands dirty. You might also see "administrator" used in this same category (though that's muddy water as sometimes a "security administrator" might be more like an analyst in being less technical, skilled in one kind of tool, like adding and removing users in Active Directory or your IAM of choice). In general, if you're listing a position that has implementation responsibilities, then you need to be calling it an engineer role (or equivalent), not an analyst and certainly not an architect.

Architects are not your implementers. And, while they are thinkers who may do a fair amount of technical writing, the key differentiators here are that 1) they tend to be way more technical than the average analyst, 2) they see a much bigger picture than the average analyst or engineer, and 3) they've often risen to this position through one or both of the other roles, but almost certainly with considerable previous hands-on implementation experience as an engineer. It's very important to understand that your architects, while likely having a background in engineering, are unlikely to want to do much hands-on implementation work. What hands-on work they are willing and interested to do is likely focused heavily on proofs of concept (POCs) and testing new ideas and technologies. Given their technical backgrounds, they'll be able to go toe-to-toe on technical topics with just about anyone in the organization, even though they may not be able to sit down and crank out a bunch of server builds in short order any more (or, maybe they can!). A good security architect provides experiential, context-relevant guidance on how to design /secure/ systems and applications, as well as providing guidance on technology purchasing decisions, technical designs, etc. Where they differ from, say, GRC/policy analysts is that when they provide a recommendation on something, they can typically back it up with more than a flaccid reference to "best practices" or some other lame appeal to authority; they can instead point to proven experiences and technical rationale.

Going all the way back to before my Gartner days, I've long told SMBs that their first step should not be hiring a security manager, but rather a security architect who reports up through the IT food chain, preferably directly to the IT manager/director or CIO (depending on size and structure of the org). The reason for this recommendation is that small IT shops already have a number of engineers/administrators and analysts, but what they oftentimes lack is someone with broad AND deep technical expertise in security who can provide all sorts of guidance and value to the organization. Part and parcel to this is that SMBs especially do not need to build out a "security team" or "security department"! (In fact, I often argue only the largest enterprises should ever go this route, and only to improve efficiency and effectiveness. Status quo and conventional wisdom be damned.) Most small IT shops just need someone to help out with decisions and evaluations to ensure that the organization is making smart security decisions. This security architect role should not be focused on implementation or administration, but instead should be serving in an almost quasi-EA (enterprise architect) role that cuts across the entire org. In many ways, a security architect is a counselor who works with teams to improve their security decisions. It's common in larger organizations for security architects to have a focus on one part of the business simply as a matter of scale and supportability.

So that's it. Nothing too crazy, right? But, I think it's important. Yes, some of you may debate and question how I've defined things, and that's fine, but the main takeaway here, hopefully, is that job descriptions need to be reset again around some standard language. In particular, orgs need to stop listing a ton of implementation work for "security architect" roles because that's misleading and really not what a security architect does. Properly titling and describing roles is very important, and will help you more readily find your ideal candidates. Calling everything a "security architect" does not do anything positive for you, and it serves to frustrate and disenfranchise your candidate pools (not to mention wasting your time on screening).

fwiw. ymmv. cheers!

DerbyCon 7 Live Stream

If you weren't fortunate enough to get a ticket to DerbyCon this year, the conference will once again be live streaming talks. More information will be available closer to the conference at www.derbycon.com.

But did you know (almost) every talk is also available for viewing after the conference is over? You can find past DerbyCon presentations, along with those from dozens of other conferences, on IronGeek's site or on IronGeek's YouTube channel. Not as interesting or as much fun as being there, but if you're looking for good presentations to learn pen testing or blue teaming tactics, it's a great resource.

A security minded guy forced to buy a Wifi enabled cleaning robot

First I want to tell you all that I wanted a vacuum cleaning robot without an Internet connection, but I couldn't find one which fulfilled my requirements. At first I thought the DEEBOT M81 from ECOVACS would be such a device (a vacuum and mop combo that can be carried between rooms, as it works randomly), but don't buy it if you have stairs. On its first day alone at home it went down 2 floors; somehow it looked okay and still worked after the kamikaze. We just had to search for it through the whole house. After that I did some tests and found out that it stops at the stairs 6 times and falls down the 7th or 8th time. Searching the Internet showed me that I'm not the only one. The second problem was that configuring the timer differently for some days (like not cleaning on weekends) was not possible. After losing my last chance for a non-Internet-connected device, I went for the DEEBOT M81 Pro, which needs an Android or iPhone app and WiFi if you want to configure the timer to skip weekends. This is my story about what is – I guess – a typical IoT device.


The App – ECOVACS
After unpacking and charging the robot, I installed the App on my test mobile. Why not on my real mobile? Take a look at the required permissions:

I thought this was just an App to control my vacuum robot …. guess not. Anyway, I installed it on my test system and created a dummy user. Of course I took a look at the traffic. First it connects to ecosphere-app.ecovacs-japan.com,

where it does an HTTPS connect. Hm, maybe that's better than I thought. The TLS config of the server is bad, but at least it's encrypted – so there is still hope.

Looking at the other traffic, I saw an XMPP / Jabber connection (lbat.ecouser.net / 47.91.78.247), which was encrypted, but sadly with a self-signed certificate. I thought I'd take a look at that traffic via MitM later; let's get it to work first.

Getting it to work
It looks like the robot creates an SSID for the App on the mobile to connect to after you press the WiFi button for >3 sec, so the exchange of the WiFi password seems secure enough. But it took me almost 1 hour to get the robot to connect to my IoT network, and I didn't find any information or tips online. I changed the following on my side to get it to work; maybe it helps somebody else:

  • I enabled the location stuff (which I have disabled by default) on the mobile, as I remembered that the WiFi Analyser App always tells me to enable it to see WiFi networks.
  • I needed to change my IoT network to support legacy WiFi modes in order for the robot to be able to connect.

Robot traffic

The first request from the robot after getting an IP address is an HTTP connection to lbo.ecouser.net (47.91.78.247) on port 8007.

Hey, we know that IP address and port – that's the Jabber server the App also connects to. But before the robot connects to the Jabber server, it does a second HTTP request, this time to a bare IP address (47.88.193.19:8005) and not a DNS name. That's interesting:

That looks like a check for newer firmware …. firmware updates unencrypted .. what can possibly go wrong here? As the request currently returns no new firmware, I can't look at that more closely – something for the future. Checking the Shodan info on that IP address is interesting: it runs a portmapper and an NTP server reachable from the Internet … is someone already using those as DDoS amplifiers? And I'm not even talking about the unconfigured nginx, which also leaks IP addresses in its certificate: IP Address:120.26.244.107, IP Address:121.41.41.198, IP Address:47.88.193.19

Let's go back to the Jabber server the robot connects to. The App uses a channel "protected" by a self-signed certificate, but the robot connects completely in the clear – that's nice, so I don't need to do a MitM attack. The Wireshark trace is so full of information that I'm really not sure what I can show you without making it too easy for you to control my robot.
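
If you want to look at such traffic yourself, a simple capture like the following is enough, since the robot's XMPP session is unencrypted (the interface name is an example; the IP address is the Jabber server from above):

# capture the robot's cleartext XMPP session for analysis
tcpdump -i eth0 -s0 -w robot.pcap host 47.91.78.247

The resulting robot.pcap can then be opened in Wireshark.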

The following is shown in the screenshot (which shows only a part of the communication):

  • The logon to the server via PLAIN authentication, which consists of
    • username: the serial number of the device, which is also printed on the box the device is sold in
    • password: looks like an MD5 hash of something, as it's 32 hex chars – something to investigate
  • It shares its online status (presence status, in Jabber terms) with the App
  • It gets asked for a version – I guess the firmware version – which it returns as 0.16.46; hope that's already stable

Looking at later traffic, the following requests are issued by the App:

  • GetDeviceInfo
  • SetTime
  • GetChargeState
  • GetBatteryInfo
  • GetWKVer
  • GetError
  • GetOnOff
  • GetSched
  • GetLifeSpan

I didn't control the device via the App; otherwise there would be many more commands.

Questions and thoughts

I don't really see a pairing mechanism which makes sure that only the right App can control a robot, so it may be possible to control other robots. As the user ID used on the Jabber server is just the serial number with @141.ecorobot.net/atom added, it should be easy to guess additional user IDs; there is no need to know the password of the robot. On the other hand, it should be possible to create your own Jabber server and redirect the traffic to it. Also, writing a DIY App without all those App permissions should be possible and not too hard. The robot I bought is not that interesting for an attacker, as it cannot provide room layouts like the more expensive ones do. The screenshots of the App show what is possible:

I guess I'll wait for the next versions of the robots, which provide a microphone and/or a camera – then it gets really interesting.

As I was able to configure the schedules via the App and set the time, I'll check whether that still works when the robot cannot connect to the Internet. If so, I'll go that route and enable the Internet connection only when I need to change the schedules.

PS: you should really have a separate IoT network.