Cloud management is a critical topic that organizations are looking at to simplify operations, increase IT efficiency, and reduce costs. Although cloud adoption has risen in the past few years, some organizations aren’t seeing the results they’d envisioned. That’s why we’re sharing a few of the top cloud management challenges enterprises need to be cautious of and how to overcome them.
Cloud Management Challenge #1: Security
Given the overall trend toward migrating resources to the cloud, a rise in security threats shouldn’t be surprising. Per our latest Cloud Risk and Adoption Report, the average enterprise organization experiences 31.3 cloud-related security threats each month—a 27.7% increase over the same period last year. Broken down by category, these include insider threats (both accidental and malicious), privileged user threats, and threats arising from potentially compromised accounts.
To mitigate these types of cloud threats and risks, we have a few recommendations to better protect your business. Start with auditing your Amazon Web Services, Microsoft Azure, Google Cloud Platform, or other IaaS/PaaS configurations to get ahead of misconfigurations before they open a hole in the integrity of your security posture. Second, it’s important to understand which cloud services hold most of your sensitive data. Once that’s determined, extend data loss prevention (DLP) policies to those services, or build them in the cloud if you don’t already have a DLP practice. Right along with controlling the data itself goes controlling who the data can go to, so lock down sharing where your sensitive data lives.
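The configuration audit described above can be automated. Here is a minimal, illustrative sketch in Python, assuming you have already exported your storage bucket settings (for example, via your provider's CLI) into plain dictionaries; the field names are hypothetical, not any provider's real schema.

```python
# Minimal configuration-audit sketch. Bucket settings are assumed to have
# been exported into dicts; field names here are illustrative only.

def audit_buckets(buckets):
    """Return a list of (bucket_name, issue) findings."""
    findings = []
    for b in buckets:
        if b.get("public_access", False):
            findings.append((b["name"], "bucket is publicly accessible"))
        if not b.get("encryption_at_rest", False):
            findings.append((b["name"], "encryption at rest is disabled"))
        if not b.get("access_logging", False):
            findings.append((b["name"], "access logging is disabled"))
    return findings
```

Run on a schedule, a check like this surfaces misconfigurations before they open a hole in your security posture.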
Cloud Management Challenge #2: Governance
Many companies deploy cloud systems without an adequate governance plan, which increases the risk of security breaches and inefficiency. Lack of data governance may result in a serious financial loss, and failing to protect sensitive data could result in a data breach.
Cloud management and cloud governance are often interlinked. Keeping track of your cloud infrastructure is essential. Governance and infrastructure planning can help mitigate certain infrastructure risks; automated cloud discovery and governance tools can therefore help your business safeguard operations.
Cloud Management Challenge #3: Proficiency
You may also be faced with the challenge of ensuring that IT employees have the proper expertise to manage their services in a cloud environment. You may need to decide to either hire a new team that is already familiar with cloud environments or train your existing staff.
In the end, training your existing staff is less expensive, more scalable, and faster. Knowledge is key when transforming your business and shifting your operational model to the cloud. Accept the challenge and train your employees, give them hands-on time, and get them properly certified. For security professionals, the Cloud Security Alliance is a great place to start for training programs.
Cloud Management Challenge #4: Performance
Enterprises are continually looking for ways to improve their application performance and internal/external SLAs. However, even in the cloud, they may not immediately achieve these benefits. Cloud performance is complex, and if you’re experiencing performance problems, it’s important to examine the variety of issues that could be occurring in your environment.
How should you approach finding and fixing the root causes of cloud performance issues? Check your infrastructure and the applications themselves. Examine the applications you ported over from on-premises data centers, and evaluate whether newer cloud technologies such as containers or serverless computing could replace some of your application components and improve performance. Also, evaluate multiple cloud providers for your application or infrastructure needs, as each has its own offerings and geographic distribution.
Cloud Management Challenge #5: Cost
Managing cloud costs can be a challenge, but in general, migrating to the cloud offers companies enormous savings. We see organizations investing more dollars in the cloud to bring greater flexibility to their enterprise, allowing them to quickly and efficiently react to the changing market conditions. Organizations are moving more of their services to the cloud, which is resulting in higher spend with cloud service providers.
Shifting IT cost from on-premises to the cloud on its own is not the challenge – it is the unmonitored sprawl of cloud resources that typically spikes cost for organizations. Managing your cloud costs can be simple if you effectively monitor use. With visibility into unsanctioned, “Shadow” cloud use, your organization can find the areas where there is unnecessary waste of resources. By auditing your cloud usage, you may even determine new ways to manage cost, such as re-architecting your workloads using a PaaS architecture, which may be more cost-effective.
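Usage monitoring of this kind can start simply. As a sketch, assuming you have pulled utilization samples for each resource from your provider's metrics service, a script can flag underused resources as candidates for rightsizing or termination; the record shape and threshold below are assumptions for illustration.

```python
# Hypothetical usage-audit sketch: flag resources whose average CPU
# utilization over the sampling window falls below a threshold.

def flag_underused(usage_records, cpu_threshold=5.0):
    """usage_records: iterable of {"resource": str, "cpu_samples": [float]}."""
    wasteful = []
    for rec in usage_records:
        samples = rec["cpu_samples"]
        avg = sum(samples) / len(samples) if samples else 0.0
        if avg < cpu_threshold:
            wasteful.append(rec["resource"])
    return wasteful
```

A report of flagged resources, reviewed regularly, is often enough to rein in the unmonitored sprawl that spikes cost.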
Migrating to the cloud is a challenge, but it can bring a wide range of benefits to your organization: a reduction in costs, unlimited scalability, improved security, and overall a faster business model. These days, everyone is in the cloud, but that doesn’t mean your business’s success should be hindered by the common challenges of cloud management.
For more on how to secure your cloud environment, check out McAfee MVISION Cloud, a cloud access security broker (CASB) that protects data where it lives with a solution that was built natively in the cloud, for the cloud.
The post Cloud 101: Navigating the Top 5 Cloud Management Challenges appeared first on McAfee Blogs.
The DBIR has evolved since its initial release in 2008, when it focused on payment card data breaches and Verizon’s own breach investigations. This year’s DBIR involved the analysis of 41,686 security incidents from 66 global data sources in addition to Verizon. The findings are expertly presented over 77 pages, using simple charts supported by astute, plain-English explanations, which is one reason the DBIR is among the most quoted reports in presentations and industry sales collateral.
DBIR 2019 Key Takeaways
- Financial gain remains the most common motive behind data breaches (71%)
- 43% of breaches occurred at small businesses
- A third (32%) of breaches involved phishing
- The nation-state threat is increasing, with 23% of breaches by nation-state actors
- More than half (56%) of data breaches took months or longer to discover
- Ransomware remains a major threat, and is the second most common type of malware reported
- Business executives are increasingly targeted with social engineering attacks such as phishing/BEC
- Crypto-mining malware accounts for less than 5% of data breaches; despite the publicity, it didn’t make the report’s top ten malware list
- Espionage is a key motivation behind a quarter of data breaches
- 60 million records breached due to misconfigured cloud service buckets
- Continued reduction in payment card point of sale breaches
- The hacktivist threat remains low; the increase in hacktivist attacks reported in the 2012 DBIR appears to have been a one-off spike
Our data lives in the cloud, and nearly a quarter of it requires protection to limit our risk. You won’t be able to get far in your transformation to the cloud without learning the sources of cloud data risk and how to navigate around them.
In our latest Cloud Adoption and Risk Report, we analyze the types of sensitive data in the cloud and how it’s shared, examine IaaS security and adoption trends, and review common threats in the cloud. Test your knowledge on the latest cloud trends and see if your enterprise understands the basics of cloud-related risks.
Not prepared? Lucky for you, this is an “open-book” test. Find some cheat sheets and study guides below.
Submitted by: Adam Boyle, Head of Product Management, Hybrid Cloud Security, Trend Micro
When it comes to software container security, it’s important for enterprises to look at the big picture, taking into account how they see containers affecting their larger security requirements and future DevOps needs. Good practices can help security teams build a strategy that allows them to mitigate pipeline and runtime data breaches and threats without impacting the agility and speed of application DevOps teams.
Security and IT professionals need to address security gaps across agile, fast-paced DevOps teams but are challenged by decentralized organizational structures and processes. And since workloads and environments are constantly changing, there’s no silver bullet when it comes to cybersecurity; there’s only the information we have right now. To help address the current security landscape, and where containers fit in, we need to ask ourselves a few key questions.
How have environments for workloads changed and what are development teams focused on today? (i.e. VMs to cloud to serverless > DevOps, microservices, measured on delivery and uptime).
Many years ago, the customer conversations we were having were primarily about migrating traditional, legacy workloads from the data center to the cloud. While performing this “forklift,” customers had to figure out which IT tools, including security, would operate naturally in the cloud. Many of the traditional tools they had purchased before the migration didn’t quite work out when extended to the cloud, as they weren’t designed with the cloud in mind.
In the last few years, those same customers who migrated workloads to the cloud have started new projects and applications using cloud-native services, building these new capabilities on Docker and on serverless technologies such as AWS Lambda, Azure Functions, and Google Cloud Functions. These technologies have enabled teams to adopt DevOps practices in which they continuously deliver “parts” of applications independently of one another, ultimately delivering outcomes to market much faster than they would with a monolithic application. These new projects have given birth to CI/CD pipelines leveraging Git for source code management (using hosted versions from GitHub or Bitbucket), Jenkins or Bamboo for DevOps automation, and Kubernetes for automated deployment, scaling, and management of containers.
Both of these thrusts are now happening in parallel, driving two distinct classes of applications: legacy, monolithic applications and cloud-native microservices. The questions for an enterprise are simple: how do I protect all of this? And how can I do this at scale?
What’s worth mentioning is also the maturity of IT and how these teams have evolved into leveraging “infrastructure as code.” That is, writing code to automate IT operations. This includes security as code or writing code to automate security. Cloud operations teams have embraced automation and have partnered with application teams to help scale the automation of DevOps driven applications while meeting IT requirements. Technologies like Chef, Puppet, Ansible, Terraform, and Saltstack are popular in our customer base when automating IT operations.
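Security as code, as described above, often takes the shape of policy checks run in CI against exported infrastructure definitions. As an illustrative sketch (the rule shape and field names are assumptions, not any tool's real schema), here is a check that flags firewall rules exposing sensitive admin ports to the whole internet:

```python
# "Security as code" sketch: a CI policy check over exported firewall
# rules. The rule dict shape is illustrative, not a real provider schema.

WORLD = "0.0.0.0/0"
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

def violations(rules):
    """Return the ids of rules exposing admin ports to the internet."""
    return [
        r["id"] for r in rules
        if r.get("source") == WORLD and r.get("port") in SENSITIVE_PORTS
    ]
```

Failing the pipeline when `violations()` is non-empty turns a security requirement into an automated gate rather than a manual review.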
While vulnerabilities and threats will always persist, what is the bigger impact on the organization when it comes to DevOps teams and security?
What we hear when companies talk to us is that the enterprise is not designed to do security at scale for a large set of DevOps teams who are continuously doing build->ship->run and need continuous and uninterrupted protection.
A typical enterprise has centralized IT and security ops teams who serve many groups of internal customers, typically the business units responsible for generating the enterprise’s revenue.
So, how do tens or hundreds of DevOps teams who continuously build->ship->run, interact with centralized IT and security Ops teams, at scale? How do IT and security Ops teams embrace these practices and technologies, and ensure that they are secure—both the CI/CD pipelines and the runtime environments?
These relationships between IT teams (including security teams) and the business units have largely existed at an executive level (VP and up), but to deliver “secure” outcomes continuously, a more effective and more automated interplay between these teams is needed.
We see many DevOps teams across business units incorporating security with varying degrees of rigor—or buying their own security solutions that only work for their set of projects—purchased out of their business unit budgets, implemented with limited security experience and no tie-back to corporate security requirements or IT awareness. This leads to a fragmented, duplicated, complicated, inconsistent security posture across the enterprise and to higher cost models on security tools that become more complicated to manage and support. The pressure to deliver faster within a business unit sometimes comes at the cost of a coordinated enterprise-wide security plan…we’ve all been there, and there’s often a balance that needs to be found.
The relationship, at the working level, between business unit application teams and centralized IT and security ops teams is not always a collaborative, healthy one. Sometimes it has friction. Sometimes the root cause of this friction is that application teams have a significantly better understanding of DevOps practices and tools, and of technologies such as Docker, Kubernetes, and the various serverless platforms, than their IT counterparts. We’ve seen painful, unproductive discussions in which application teams try to educate their IT/security teams on the basics, let alone get them on board with doing things differently. The friction increases if the IT and security ops teams don’t embrace the changes in approach that container and serverless security require.

So, to us, the biggest impact right now is this: if a DevOps team wants to deliver continuously while following an enterprise-wide approach, it needs a continuous relationship with the IT and security operations teams, who must become well educated in DevOps practices and tools and in microservices technologies (Docker, Kubernetes, etc.), so that the teams can work together to automate security across pipelines and runtime environments. The IT and security teams, in turn, need to level up their skill sets in DevOps and all the associated technologies, and help teams move faster, not slower, while meeting security requirements.
In true DevOps, the “Dev” part would be the application team and the “Ops” part would ideally be IT/security, working together. So, we think there could be some pretty big shifts in how enterprises organize their development teams and IT/security ops teams, as the traditional organizational models favor delivery of monolithic, legacy applications that do not do continuous delivery.
The biggest opportunity for IT/security ops teams is to engage the application teams with a set of self-service tools and practices that are positioned to help those teams move faster, while meeting the IT and security requirements of the enterprise.
How can DevOps teams take advantage of the best security measures to better protect emerging technologies like container environments and their supporting tools?
Well, this could easily be a book! However, let’s try to summarize at a high level and break this down into “build,” “ship,” and “run.” By no means is this a complete list, but it’s enough to get started. For more information, contact us.
Security teams have a fantastic opportunity to introduce the following services across the enterprise, for all teams with pipelines and runtimes, in a consistent way.
For containers and serverless, application protection in every image or serverless function (an AppSec library focusing on RASP, OWASP risks, malware, and vulnerabilities inside the application execution path). Trend Micro will be releasing an offering later this year to address this.
Trend Micro provides a stronger and more robust full lifecycle approach to container security. This approach helps application teams meet compliance and IT security requirements for continuous delivery in CI/CD pipelines and runtime environments. With multiple security capabilities, complete automation resources, and world class threat intelligence research teams, Trend Micro is a leader in the cybersecurity needs of today’s application and container driven organizations.
Learn more at www.trendmicro.com/containers.
The post The Next Enterprise Challenge: How Best to Secure Containers and Monolithic Apps Together, Company-wide appeared first on .
Many breaches start with an “own goal,” an easily preventable misconfiguration or oversight that scores a goal for the opponents rather than for your team. In platform-as-a-service (PaaS) applications, the risk profile of the application can lure organizations into a false sense of security. While overall risk to the organization can be lowered, and new capabilities otherwise unavailable can be unlocked, developing a PaaS application requires careful consideration to avoid leaking your data and making the task of your opponent easier.
PaaS integrated applications are nearly always multistep service architectures, leaving behind the simplicity of yesterday’s three-tier presentation/business/data logic applications and basic model-view-controller architectures. While many of these functional patterns are carried forward into modern applications—like separating presentation functions from the modeled representation of a data object—the PaaS application is nearly always a combination of linear and non-linear chains of data, transformation, and handoffs.
As a simple example, consider a user request to generate a snapshot of some kind of data, like a website. They make the request through a simple portal. The request would start a serverless application, which applies basic logic, completes information validation, and builds the request. The work goes into a queue—another PaaS component. A serverless application figures out the full list of work that needs to be completed and puts those actions in a list. Each of these gets picked up and completed to build the data package, which is finally captured by another serverless application to an output file, with another handoff to the publishing location(s), like a storage bucket.
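The chain of steps above can be sketched end to end. The following toy simulation, with illustrative names and an in-memory queue standing in for the real queue service, shows the shape of the flow: validate the request, fan the work out onto a queue, and collect the processed parts into an output package.

```python
# Toy simulation of the snapshot pipeline described above: validation,
# fan-out onto a queue, worker processing, and a final output package.
# All names are illustrative; deque stands in for a managed queue service.

from collections import deque

def validate(request):
    """First serverless step: basic logic and information validation."""
    if not request.get("target"):
        raise ValueError("missing target")
    return {"target": request["target"], "parts": ["header", "body", "assets"]}

def run_pipeline(request):
    job = validate(request)          # build the request
    queue = deque(job["parts"])      # hand work to the queue component
    package = []
    while queue:                     # worker steps pick items off the queue
        part = queue.popleft()
        package.append(f"{job['target']}:{part}")
    return {"output": package}       # final capture to the output file
```

Even in this toy form, the security-relevant point is visible: each step trusts the one before it, so validation at the entry point and integrity checks at each handoff are where your controls belong.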
Planning data interactions and the exposure at each step in the passing process is critical to the application’s integrity. The complexity of PaaS is that the team must consider threats both for each script/step individually and holistically for the data stores in the application. What if I could find an exploit in one of the steps to arbitrarily start dumping data? What if I found a way to simply output more data than the step was designed to return? What if I found a way to inject data instead, corrupting and harming rather than stealing?
The familiar threats of web applications are present, and yet our defensive posture is shaped by which elements of the applications we can see and which we cannot. Traditional edge and infrastructure indicators are replaced by a focus on how we constructed the application and how to use cloud service provider (CSP) logging together with our instrumentation to gain a more holistic picture.
In development of the overall application, the process architecture is as important as the integrity of individual technical components. The team leadership of the application development should consider insider, CSP, and external threats, and consider questions like:
- Who can modify the configuration?
- How is it audited? Logged? Who monitors?
- How do you discover rogue elements?
- How are we separating development and production?
- Do we have a strategy to manage exposure for updates through blue/green deployment?
- Have we considered the larger CSP environment configuration to eliminate public management endpoints?
- Should I use third-party tools to protect access to the cloud development and production environment’s management plane, such as a cloud access broker, together with cloud environmental tools to enumerate accounts and scan for common errors?
In the PaaS application construction, the integrity of basic code quality is magnified. The APIs and/or the initiation processes of serverless steps are the gateway to the data and other functions in the code. Development operations (DevOps) security should use available sources and tools to help protect the environment as new code is developed and deployed. These are a few ways to get your DevOps team started:
- Use the OWASP REST Security Cheat Sheet for APIs and code making calls to other services directly.
- Consider deploying tools from your CSP, such as the AWS Well-Architected Tool on a regular basis.
- Use wrappers and tie-ins to the CSP’s PaaS application, such as AWS Lambda Layers to identify critical operational steps and use them to implement key security checks.
- Use integrated automated fuzzing/static test tools to discover common missteps in code configuration early and address them as part of code updates.
- Consider accountability expectations for your development team. How are team members encouraged to remain owners of code quality? What checks are necessary to reduce your risk before considering a user story or a specific implementation complete?
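In the spirit of the OWASP REST Security Cheat Sheet mentioned above, the first line of defense in each API or serverless entry point is strict input validation. Here is a hedged sketch; the schema and field names are hypothetical, chosen only to show the pattern of rejecting anything unexpected before business logic runs.

```python
# Defensive input-validation sketch for an API entry point: reject
# unexpected content types, unknown fields, and malformed values early.
# The allowed-field schema is illustrative.

ALLOWED_FIELDS = {"name": str, "limit": int}

def validate_request(content_type, payload):
    """Return (ok, message) for an incoming request."""
    if content_type != "application/json":
        return False, "unsupported content type"
    for key, value in payload.items():
        expected = ALLOWED_FIELDS.get(key)
        if expected is None:
            return False, f"unexpected field: {key}"
        if not isinstance(value, expected):
            return False, f"bad type for field: {key}"
    return True, "ok"
```

Rejecting by default—unknown field, unknown type, unknown content type—keeps each serverless step from becoming the exploit point the previous section warned about.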
The data retained, managed, and created by PaaS applications has a critical value—without it, few PaaS applications would exist. Development teams need to work with larger security functions to consider the privacy requirements and security implications and to make decisions on things like data classification and potential threats. These threats can be managed, but the specific countermeasures often require a coordinated implementation between the code to access data stores, the data store configuration itself, and the dedicated development of separate data integrity functions, as well as a disaster recovery strategy.
Based on the identified risks, your team may want to consider:
- Using data management steps to reduce the threat of data leakage (such as limiting the amount of data or records which can be returned in a given application request).
- Looking at counters, code instrumentation, and account-based controls to detect and limit abuse.
- Associating requests to specific accounts/application users in your logging mechanisms to create a trail for troubleshooting and investigation.
- Recording data access logging to a hardened data store, and if the sensitivity/risk of the data store requires, transition logs to an isolated account or repository.
- Asking your development team what the business impact would be if the value of your analysis, or the integrity of the data set itself, were corrupted, for example by an otherwise authorized user injecting trash.
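Two of the countermeasures above—capping the records a single request can return, and tying each request to an account in the audit trail—can be combined in one access path. This sketch uses an in-memory list and log as stand-ins for a real data store and hardened log sink:

```python
# Sketch of leak-limiting plus request accountability: cap the number of
# records any one request may return, and log who asked for what. The
# store and audit_log are stand-ins for a database and hardened log sink.

MAX_RECORDS = 100

def fetch_records(store, user, query, audit_log, limit=MAX_RECORDS):
    """Return at most `limit` matching rows, recording the access."""
    matches = [row for row in store if query(row)]
    audit_log.append({"user": user,
                      "requested": len(matches),
                      "returned": min(len(matches), limit)})
    return matches[:limit]
```

A request that matches far more rows than it returns is exactly the anomaly your counters and logging should surface for investigation.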
PaaS applications offer compelling value, economies of scale, new capabilities, and access to advanced processing otherwise out of reach for many organizations in traditional infrastructure. These services require careful planning, coordination of security operations and development teams, and a commitment to architecture in both technical development and managing risk through organizational process. Failing to consider and invest in these areas while rushing headlong into new PaaS tools might lead your team to discover that your app has sprung a leak!
- Have you documented security incidents? How did you remediate those incidents?
- Do you have the result of your last business continuity test? If yes, can you share it?
- What security controls exist for your users? Do they use multifactor authentication, etc.?
- How are you maturing your security program?
- Are you ISO, SOC 1/SOC 2, and NIST Compliant, and is there documentation to support this?
If you’re unsatisfied with the answers from a potential partner regarding their security, it’s OK to walk away, especially if you make the determination that working with the vendor may not be critical to your business.
- Remediation: Can you work with the vendor to remediate the technical risk?
- Compensating controls: If you cannot remediate the risks entirely, can you establish technical compensating controls to minimise or deflect the risk?
These are policies that users of the offering should follow, such as limits on the types and amounts of data that can be input securely. Some typical policy scenarios include:
- Regulatory compliance: For example, a vendor’s non-compliance could mandate you walk away from a third-party relationship.
- Contractual obligations: Are there contractual obligations in place with your existing clients that prevent you from working with vendors who don’t meet certain security and privacy standards?
- Security best practices: Ensure your policies around risk are enforced and determine whether they may conflict with your vendors’ policies.
Post-GDPR, there is still a lot of complexity in data privacy and data residency requirements. Depending on where they are located, what industry they are in, and how diverse their customer base is, companies are requiring a high degree of flexibility in the tools they use for web security. While most web security products in the market today simply document their data handling practices as part of GDPR compliance, McAfee strives to give customers more flexibility to implement the level of data privacy appropriate for their business. Most of our McAfee Web Protection customers use our technologies to manage employee web traffic, which requires careful handling when it comes to processing Personal Data.
Our latest update to the McAfee Web Gateway Cloud Service introduced two key features for customers to implement their data privacy policies:
- Concealment of Personal Data in internal reporting: We enable you to conceal or pseudonymize certain fields in our access logs. You can still report on the data but Personal Data is obfuscated. As an example, you can report on how much your Top Web Users surfed the Internet, but administrators cannot identify who that top user is.
- Full control of data residency: Especially in heavily regulated industries, many of our customers have asked for the ability to control where their log data goes so that they have control over data residency. We give you that control. For example, you can currently select between the EU and US as data storage points for users connecting in each geographical region. Additional finer control can be achieved by configuring client proxy settings, or through Hybrid policy. And, in conjunction with Content Security Reporter 2.6, customers can centrally report on all the data, while providing access control on the generated reports.
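The pseudonymization idea behind the first feature can be illustrated with a small sketch. This is not McAfee's implementation, just an assumed approach: replace each username in an access-log record with a stable token, so reports can still rank "top users" without revealing who they are. A real deployment would use a keyed hash with a protected key rather than a hard-coded salt.

```python
# Illustrative pseudonymization of usernames in access-log records: the
# same user always maps to the same token, so aggregate reporting still
# works, but the identity is obfuscated. Salt handling is simplified.

import hashlib

def pseudonymize(records, salt="example-salt"):
    """Return copies of the records with the user field tokenized."""
    out = []
    for rec in records:
        token = hashlib.sha256((salt + rec["user"]).encode()).hexdigest()[:12]
        out.append({**rec, "user": token})
    return out
```

Because the mapping is deterministic, an administrator can still see that one token dominates the traffic report without being able to name the person behind it.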
As a globally dispersed organization, we of course still face limits on what we can offer—our support and engineering teams, for instance, might need to access data from other geographies for troubleshooting purposes, and telemetry and other data required to operate the service would still be global. But to the extent that we can, we give customers more control over the access logs that contain PII.
McAfee Web Gateway Cloud Service is built for the enterprise, and many organizations will gain a higher level of performance than they currently experience on premises. As your security team continues to manage highly sophisticated malware and targeted attacks that evade traditional defences, McAfee Web Gateway Cloud Service allows you to go beyond basic protection, with behaviour emulation that prevents zero-day malware in milliseconds as traffic is processed.
The post McAfee Web Security offers a more flexible approach to Data Privacy appeared first on McAfee Blogs.
The risk to your family’s healthcare data often begins with that piece of paper on a clipboard your physician or hospital asks you to fill out or in the online application for healthcare you completed.
That data gets transferred into a computer where a patient Electronic Health Record (EHR) is created or added to. From there, depending on the security measures your physician, healthcare facility, or healthcare provider has put in place, your data is either safely stored or up for grabs.
It’s a double-edged sword: We all need healthcare but to access it we have to hand over our most sensitive data armed only with the hope that the people on the other side of the glass window will do their part to protect it.
Breaches on the Rise
Feeling a tad vulnerable? You aren’t alone. The stats on medical breaches don’t do much to assuage consumer fears.
A recent study in the Journal of the American Medical Association reveals that the number of annual health data breaches increased 70% over the past seven years, with 75% of the breached, lost, or stolen records compromised by a hacking or IT incident, at a cost to consumers of nearly $6 billion.
The IoT Factor
Medical facilities aren’t the only targets. With the growth of the Internet of Things (IoT)—which, in short, means everything is digitally connected to everything else—consumer products also provide entry points for hackers. Wireless devices at risk include insulin pumps and monitors, Fitbits, scales, thermometers, and heart and blood pressure monitors.
To protect yourself when using these devices, experts recommend staying on top of device updates and inputting as little personal information as possible when launching and maintaining the app or device.
The Dark Web
The engine driving healthcare attacks of all kinds is the Dark Web where criminals can buy, sell, and trade stolen consumer data without detection. Healthcare data is precious because it often includes a much more complete picture of a person including social security number, credit card/banking information, birthdate, address, health care card information, and patient history.
With this kind of data, many corrupt acts are possible including identity theft, fraudulent medical claims, tax fraud, credit card fraud, and the list goes on. Complete medical profiles garner higher prices on the Dark Web.
Some of the most valuable data to criminals is children’s health information (stolen from pediatricians’ offices), since a child’s credit record is clean and therefore a more useful tool in credit card fraud.
According to Raj Samani, Chief Scientist and McAfee Fellow, Advanced Threat Research, predictions for 2019 include criminals working even more diligently in the Dark Web marketplace to devise and launch more significant threats.
“The game of cat and mouse the security industry plays with ransomware developers will escalate, and the industry will need to respond more quickly and effectively than ever before,” says Samani.
Healthcare professionals, hospitals, and health insurance companies, while giving criminals an entry point, aren’t the bad guys. They are fined by the government for breaches and lapses in security, and targeted and extorted by cyber crooks, all while focusing on patient care and outcomes. Another factor working against them is the shortage of qualified cybersecurity professionals equipped to protect healthcare practices and facilities.
Protecting ourselves and our families in the face of this kind of threat can feel overwhelming and even futile. It’s not. Every layer of protection you build between you and a hacker matters. There are some things you can do to strengthen your family’s healthcare data practices.
Ways to Safeguard Medical Data
Don’t be quick to share your SSN. Your family’s patient information needs to be treated like financial data because it has that same power. For that reason, don’t give away your Social Security Number — even if a medical provider asks for it. The American Medical Association (AMA) discourages medical professionals from collecting patient SSNs nowadays in light of all the security breaches.
Keep your healthcare card close. Treat your healthcare card like a banking card. Know where it is, only offer it to physicians when checking in for an appointment, and report it immediately if it’s missing.
Monitor statements. The Federal Trade Commission recommends consumers keep a close eye on medical bills. If someone has compromised your data, close monitoring will help you spot bogus charges right away. Pay close attention to your “explanation of benefits,” and immediately contact your healthcare provider if anything appears suspicious.
Ask about security. While it’s not likely you can change your healthcare provider’s security practices on the spot, the more consumers inquire about security standards, the more accountable healthcare providers become for following strong data protection practices.
Pay attention to apps, wearables. Understand how app owners are using your data. Where is the data stored? Who is it shared with? If the app seems sketchy on privacy, find a better one.
How to Protect IoT Devices
According to the Federal Bureau of Investigation (FBI), IoT devices, while improving medical care and outcomes, have their own set of safety precautions consumers need to follow.
- Change default usernames and passwords
- Isolate IoT devices on their own protected networks
- Configure network firewalls to inhibit traffic from unauthorized IP addresses
- Implement security recommendations from the device manufacturer and, if appropriate, turn off devices when not in use
- Visit reputable websites that specialize in cybersecurity analysis when purchasing an IoT device
- Ensure devices and their associated security patches are up-to-date
- Apply cybersecurity best practices when connecting devices to a wireless network
- Invest in a secure router with appropriate security and authentication practices
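The first precaution above, changing default usernames and passwords, can even be audited at home before a device goes online. The sketch below is purely illustrative: the list of defaults is a small sample, not an authoritative database, and the device inventory is made up.

```python
# Illustrative only: flag IoT credentials that match common factory defaults.
# COMMON_DEFAULTS is a small sample list, not an authoritative database.
COMMON_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", "1234"),
    ("root", "root"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if the username/password pair is a well-known default."""
    return (username.lower(), password.lower()) in COMMON_DEFAULTS

# Example: audit a (hypothetical) home inventory before connecting devices.
devices = [
    {"name": "camera", "username": "admin", "password": "admin"},
    {"name": "thermostat", "username": "home", "password": "Tr0ub4dor&3"},
]
flagged = [d["name"] for d in devices
           if uses_default_credentials(d["username"], d["password"])]
print(flagged)  # the camera still uses factory defaults
```

Anything flagged should get a unique, strong password before it touches the network.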
The post How to Safeguard Your Family Against A Medical Data Breach appeared first on McAfee Blogs.
Cloud Security should not be an afterthought
It is essential for security to be baked into the design of a new cloud service, into requirements determination, and into the procurement process. In particular, define and document the areas of security responsibility with the intended cloud service provider.
Cloud does not absolve the business of its security responsibilities
All cloud service models, whether Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS), involve three areas of security responsibility to define and document:
- Cloud Service Provider Owned
- Business Owned
- Shared (Cloud Service Provider & Business)
Regardless of the cloud model, data is always the responsibility of the business.
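The split can be sketched as a simple lookup. The assignments below are a simplified illustration, not a contractual matrix; the real split must be defined and documented with your provider. Note that the data layer stays with the business under every model.

```python
# Illustrative sketch of the shared-responsibility split by service model.
# These assignments are a simplified example, not a contractual matrix.
RESPONSIBILITY = {
    "IaaS": {"physical infrastructure": "provider", "operating system": "business",
             "application": "business", "data": "business"},
    "PaaS": {"physical infrastructure": "provider", "operating system": "provider",
             "application": "business", "data": "business"},
    "SaaS": {"physical infrastructure": "provider", "operating system": "provider",
             "application": "provider", "data": "business"},
}

def owner(model: str, layer: str) -> str:
    """Look up who is responsible for a layer under a given service model."""
    return RESPONSIBILITY[model][layer]

# Whatever the model, the data layer remains with the business.
assert all(owner(m, "data") == "business" for m in RESPONSIBILITY)
```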
- Cloud Security Alliance
- PCI SSC Cloud Computing Guidelines
- NCSC Cloud Security Guidance
- Microsoft O365 Security and Compliance Blueprint UK
February was a rather quiet month for hacks and data breaches in the UK: Mumsnet reported a minor data breach following a botched upgrade, and that was about it. The month was a busy one for security updates, however, with Microsoft, Adobe and Cisco all releasing high numbers of patches to fix various security vulnerabilities, several of them outside their scheduled monthly patch release cycles.
A survey by PCI Pal concluded that the consequences of a data breach have a greater impact in the UK than in the United States: UK customers are more likely to abandon a company that has let them down with a data breach. Reputational impact on the business should always be taken into account when assessing security risk.
I will be speaking at the e-crime Cyber Security Congress in London on 6th March 2019, on cloud security, new business metrics, future risks and priorities for 2019 and beyond.
Finally, completely out of the blue, I was informed by 4D that this blog had been picked by a team of their technical engineers and Directors as one of the best Cyber Security Blogs in the UK: The 6 Best Cyber Security Blogs - A Data Centre's Perspective. Truly humbled and in great company to be on that list.
- What's the greater risk to UK 5G, Huawei backdoors or DDoS?
- The Business of Organised Cybercrime
- Is Huawei a Threat to UK National Security?
- Customers Blame Companies not Hackers for Data Breaches
- Automotive Technologies and Cyber Security
- The 6 Best Cyber Security Blogs - A Data Centre's Perspective
- Parenting Website Mumsnet hit by Data Breach
- UK Officials Concerned over Huawei’s Presence
- UK Consumers more likely to Abandon a Breached Company according to Research
- US Military Hackers took Russian troll factory offline during midterms, report claims
- GCHQ Chief: Cyber conflict could deteriorate into a Wild West if left unchecked
- Australia’s Major Political Parties Hacked by 'state actor' ahead of Elections
- High Stress Levels Impacting CISOs Physically, Mentally
- 60,000 EU Data Breaches filed under GDPR
- Dow Jones database holding 2.4 million records of politically exposed persons
- Palisades Park receives £151,000 advance after Cyberattack
- UK Bank Customers hit by Dozens of IT shutdowns due to operational and security incidents
- Musical.ly (TikTok App) fined a Record £4.3 Million under United States COPPA
- Microsoft Patches 76 Vulnerabilities, including 20 Critical for Windows, Edge, Hyper-V, Chakra and Adobe Flash
- Microsoft Fixes IIS Vulnerability that can cause CPU usage to Soar to 100% when processing HTTP/2 requests
- Adobe Releases Fixes for 70 Vulnerabilities in Acrobat and Acrobat Reader
- Adobe issues New patch for Acrobat and Reader Out of Band
- RDP Flaws could allow Hackers to take over control of Systems
- Cisco rolls out Multiple Security Updates across its Product Portfolio
- Apple Patches Two Flaws Exploited in Zero-Day Attacks; also fixes FaceTime Eavesdropping Bug
- Mozilla Foundation issues Firefox Updates
- Cisco Network Assurance Engine (NAE) contains Password Vulnerability
- Cisco Patches Two Code Execution Vulnerabilities
- Carbon Black Global Threat Research Project
- 2019 CrowdStrike Global Threat Report
- Netscout Threat Landscape Report: IoT Devices Attacked Faster than Ever, DDoS Attacks up dramatically
Serverless platform-as-a-service (PaaS) offerings are being deployed at an increasing rate, for many reasons. They relate to information in myriad ways, unlocking new opportunities to collect data, identify data, and ultimately transform data into value.
Figure 1. Serverless application models.
Serverless applications can cost-effectively process information at scale and reply synchronously, returning critical data models and transformations to browsers or mobile devices. Synchronous serverless applications unlock mobile device interactions and near-real-time processing for on-the-go insights.
Asynchronous serverless applications can create data sets and views over large batches of data. Where we previously needed every piece of data in hand before running batch reports, we can now stagger events, or even make requests, check in on them after some time, and get results that bring value to the organization a few minutes or an hour later.
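The synchronous model can be sketched as a single function invoked per request. The handler below is a minimal, vendor-neutral illustration; the event shape, field names, and transformation are assumptions for the example, not a specific provider's API.

```python
import json

# Minimal sketch of a serverless-style synchronous handler. The event shape
# ({"readings": [...]}) and the summary returned are illustrative assumptions.
def handler(event, context=None):
    """Receive a batch of readings and return a summary synchronously."""
    readings = event.get("readings", [])
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else None,
    }
    # A browser or mobile client would receive this JSON body in near real time.
    return {"statusCode": 200, "body": json.dumps(summary)}

response = handler({"readings": [2.0, 4.0, 6.0]})
print(response["body"])
```

An asynchronous variant would instead write the summary to a queue or object store for a later check-in, rather than returning it to the caller.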
Areas as diverse as tractors, manufacturing, and navigation are benefiting from the ability to stream individual data points and look for larger relationships. These streams build value out of small bits of data. Individually they’re innocuous and of minimal value, but together they provide new intelligence we struggled to capture before.
The key theme throughout these models is the value of the underlying data. Protecting this data while still using it to create value becomes a critical objective for the cloud-transforming enterprise. We can start by looking at the model for how data moves into and out of the application. A basic access and data model illustrates how the application, the access medium, cloud service provider (CSP) security, and the serverless PaaS application have to work together to balance protection and capability.
Figure 2. Basic access and data model for serverless applications.
A deeper exploration of the security environment—and the shared responsibility in cloud security—forces us to look more carefully at who is involved, and how each party in the cloud ecosystem is empowered to see potential threats to the environment, and to the transaction specifically. When we expand the access and data model to look at the activities in a modern synchronous serverless application, we can see how the potential threats expand rapidly.
Figure 3. Expanded access and data model for a synchronous serverless application.
Organizations using this common model for an integrated serverless PaaS application are also gaining information from infrastructure-as-a-service (IaaS) elements in the environment. This leads to a more specific view of the threats that exist:
Figure 4. Sample threats in a serverless application.
Pushing the information security team to consider more carefully and specifically the ways the application can be exploited lets them take simple actions to ensure that both development activities and the architecture of the application itself offer protection. A few examples:
- Threat: Network sniffing/MITM
- Protection: High integrity TLS, with signed API requests and responses
- Threat: Code exploit
- Protection: Code review, and SAST/pen testing on a regular schedule
- Threat: Data structure exploit
- Protection: API forced data segmentation and request limiting, managed data model
The organization must first recognize the potential risk, make it part of the culture to ask the question, “What threats to my data does my change or new widget introduce?” and make it an expectation of deployment that privacy and security demand a response.
Otherwise, your intellectual property may just become the foundation of someone else’s profit.
The post The Exploit Model of Serverless Cloud Applications appeared first on McAfee Blogs.
A classic meet-cute – the moment where two people, destined to be together, meet for the first time. This rom-com cornerstone is turned on its head by Netflix’s latest bingeable series “You.” For those who have watched, we have learned two things. One, never trust someone who is overly protective of their basement. And two, in the era of social media and dating apps, it’s incredibly easy to take advantage of the amount of personal data consumers readily, and somewhat naively, share online and with the cloud every day.
We first meet Joe Goldberg and Guinevere Beck – the show’s lead characters – in a bookstore, she’s looking for a book, he’s a book clerk. They flirt, she buys a book, he learns her name. For all intents and purposes, this is where their story should end – but it doesn’t. With a simple search of her name, Joe discovers the world of Guinevere Beck’s social media channels, all conveniently set to public. And before we know it, Joe has made himself a figurative rear-window into Beck’s life, which brings to light the dangers of social media and highlights how a lack of digital privacy could put users in situations of unnecessary risk. With this information on Beck, Joe soon becomes both a physical and digital stalker, even managing to steal her phone while trailing her one day, which as luck would have it, is not password protected. From there, Joe follows her every text, plan and move thanks to the cloud.
Now, while Joe and Beck’s situation is unique (and a tad dramatized), the amount of data exposed via their interactions could potentially occur through another romantic avenue – online dating. Many millennial couples meet on dating sites where users are invited to share personal anecdotes, answer questions, and post photos of themselves. The nature of these apps is to get to know a stranger better, but the amount of personal information we choose to share can create security risks. We have to be careful as the line between creepy and cute quickly blurs when users can access someone’s every status update, tweet, and geotagged photo.
While “You” is an extreme case of social media gone wrong, dating apps, social media, and the cloud are all heavily used in 2019. So if you’re a digital user, be sure to take these precautions:
- Always set privacy and security settings. Anyone with access to the internet can view your social media if it’s public, so turn your profiles to private in order to have control over who can follow you. Take it a step further and go into your app settings to control which apps you want to share your location with and which ones you don’t.
- Use a screen name for social media accounts. If you don’t want a simple search of your name on Google to lead to all your social media accounts, consider using a different variation of your real name.
- Watch what you post. Before tagging your friends or location on Instagram and posting your location on Facebook, think about what this private information reveals about you publicly and how it could be used by a third-party.
- Use strong passwords. In the chance your data does become exposed, or your device is stolen, a strong, unique password can help prevent your accounts from being hacked.
- Leverage two-factor authentication. Remember to always implement two-factor authentication to add an extra layer of security to your device. This will help strengthen your online accounts with a unique, one-time code required to log in and access your data.
- Use the cloud with caution. If you plan to store your data in the cloud, be sure to set up an additional layer of access security (one way of doing this is through two-factor authentication) so that no one can access the wealth of information your cloud holds. If your smartphone is lost or stolen, you can access your password-protected cloud account to lock third parties out of your device, and more importantly your personal data.
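To demystify the "unique, one-time code" behind two-factor authentication, here is a minimal sketch of how such codes are derived, following the standard HOTP/TOTP scheme (RFC 4226 / RFC 6238). The secret shown is the RFC test key, used purely for illustration; real apps receive the secret once, via a QR code, at enrollment.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a counter (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30):
    """Time-based variant: the counter is the current 30-second window."""
    timestamp = time.time() if at is None else at
    return hotp(secret, int(timestamp // step))

# RFC 4226 test vector: counter 1 with the test key yields "287082".
print(hotp(b"12345678901234567890", 1))
```

Because the code changes every 30 seconds and is derived from a secret only you and the service hold, a stolen password alone is not enough to log in.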
The post Roses Are Red, Violets Are Blue – What Does Your Personal Data Say About You? appeared first on McAfee Blogs.
Technology is as diverse and advanced as ever, but as tech evolves, so must the way we secure it from potential threats. Serverless architecture, i.e. AWS Lambda, is no exception. As the rapid adoption of this technology has naturally grown, the way we approach securing it has to shift. To dive into that shift, let’s explore the past and present of serverless architecture’s risk profile and the resulting implications for security.
For the first generation of cloud applications, we implemented “traditional” approaches to security. Often this meant taking the familiar “Model-View-Controller” view to initially segment the application, and sometimes we even had the foresight to apply business-logic separation to further secure it.
But our cloud security model was not truly “cloud-native.” That’s because our application security mechanisms assumed that traffic functioned in a specific way, with specific resources. Plus, our ability to inspect and secure that model relied on an intimate knowledge of how the application worked, and full control of security resources between its layers. In short, we assumed full control of how the application layers were segmented, thus replicating our data center security in the cloud and giving up some of the economics and scale of the cloud in the process.
Figure 2. Simplified cloud application architecture separated by individual functions.
Now, when it comes to the latest generation of cloud applications, most leverage Platform-as-a-Service (PaaS) functions as an invaluable aid in the quest to reduce time-to-market. Essentially, this means getting back to the original value proposition for making the move to cloud in the first place.
And many leaders in the space are already making major headway when it comes to this reduction. Take Microsoft as an example, which cited a 67% reduction in time-to-market for its customer Quest Software by using Microsoft Azure services. Then there’s Oracle, which identified a 50% reduction in time-to-market for its customer HEP Group using Oracle Cloud Platform services.
However, for applications built with Platform-as-a-Service, we have to think about risk differently. We must ask ourselves: how do we secure the application when many of the layers between the “blocks” of serverless functions are under cloud service provider (CSP) control and not our own?
Fortunately, there are a few things we can do. We can start by making the architecture of the application a cornerstone of information security. From there, we must ask ourselves: do the elements relate to each other in a well-understood, well-modeled way? Have we considered how they can be induced to go wrong? Given that our instrumentation is our source of truth, we need to ensure we always know when something does go wrong, which can be achieved through a combination of CSP and third-party tools.
Additionally, we need to look at how code is checked and deployed at scale, and look for opportunities for side-by-side testing. And we must always remember that DevOps, without answering basic security questions, can unwittingly give away data in any release.
It can be hard to shoot a moving target. But if security strategy can keep pace with the shifting risk profile of serverless architecture, we can reap the benefits of cloud applications without worry. Then, serverless architecture will remain both seamless and secure.
The post The Shifting Risk Profile in Serverless Architecture appeared first on McAfee Blogs.
Monday 21st & Tuesday 22nd January 2019
Annenberg Community Beach House, Santa Monica, USA
The Future of Cyber Security Manchester
Thursday 24th January 2019
Bridgewater Hall, Manchester, UK
Friday 25th January 2019
Cloth Hall Court, Leeds, UK
Cyber Security for Industrial Control Systems
Thursday 7th & Friday 8th February 2019
Savoy Place, London, UK
NOORD InfoSec Dialogue UK
Tuesday 26th & Wednesday 27th February 2019
The Bull-Gerrards Cross, Buckinghamshire, UK
Monday 4th to Friday 8th March 2019
At Moscone Center, San Francisco, USA
ISF UK Spring Conference
Wednesday 6th & Thursday 7th March 2019
Regent Park, London, UK
Cloud and Cyber Security Expo
Tuesday 12th to Wednesday 13th March 2019
At ExCel, London, UK
Monday 15th & Tuesday 16th April 2019
Cyber Security Manchester
Wednesday 3rd & Thursday 4th April 2019
Manchester Central, Manchester, UK
BSides Scotland 2019
Tuesday 23rd April 2019
Royal College of Physicians, Edinburgh, UK
Wednesday 24th & Thursday 25th April 2019
Scottish Event Campus, Glasgow, UK
Cyber Security & Cloud Expo Global 2019
Thursday 25th and Friday 26th April 2019
Olympia, London, UK
Infosecurity Europe 2019
Tuesday 4th to Thursday 6th June 2019
Olympia, London, UK
Thursday 6th June 2019
ILEC Conference Centre, London, UK
Blockchain International Show
Thursday 6th and Friday 7th June 2019
ExCel Exhibition & Conference Centre, London, UK
Hack in Paris 2019
Sunday 16th to Friday 20th June 2019
Maison de la Chimie, Paris, France
UK CISO Executive Summit
Wednesday 19th June 2019
Hilton Park Lane, London, UK
Cyber Security & Cloud Expo Europe 2019
Thursday 19th and Friday 20th June 2019
RIA, Amsterdam, Netherlands
Gartner Security and Risk Management Summit
Monday 17th to Thursday 20th June 2019
National Harbor, MD, USA
European Maritime Cyber Risk Management Summit
Tuesday 25th June 2019
Norton Rose Fulbright, London, UK
Saturday 3rd to Thursday 8th August 2019
DEF CON 27
Thursday 8th to Sunday 11th August 2019
Paris, Ballys & Planet Hollywood, Las Vegas, NV, USA
Wednesday 11th to Friday 13th September 2019
ILEC Conference Centre, London, UK
2019 PCI SSC North America Community Meeting
Tuesday 17th to Thursday 19th September 2019
Vancouver, BC, Canada
Thursday 10th & Friday 11th October 2019
Aula, Gent, Belgium
EuroCACS/CSX (ISACA) 2019
Wednesday 16th to Friday 19th October 2019
Palexpo Convention Centre, Geneva, Switzerland
6th Annual Industrial Control Cyber Security Europe Conference
Tuesday 29th and Wednesday 30th October 2019
Copthorne Tara, Kensington, London, UK
2019 PCI SSC Europe Community Meeting
ISF 30th Annual World Congress
Saturday 26th to Tuesday 29th October 2019
Convention Centre Dublin, Dublin, Ireland
Santa Clara Convention Centre, Silicon Valley, USA
Thursday 14th & Friday 15th November 2019
CodeNode, London, UK
2019 PCI SSC Asia-Pacific Community Meeting
Wednesday 20th and Thursday 21st November 2019
Thursday 28th to Saturday 30th November 2019
The Imperial Riding School Vienna, Austria
Post in the comments about any cyber & information security themed conferences or events you recommend.