The calculus for disaster recovery and risk management is changing. Most small businesses within the past decade would often keep many of their critical technology assets locally, perhaps in a server closet, or a centralized data center for multiple offices. They built their own “vault” of applications, databases, email, files, etc., often on a few […]
As a system administrator during the early days of the “cloud revolution” I found the “cloud” metaphor an interesting choice to frame the technology stack. Clouds, in my mind, were “woolly” and hard to pin down as opposed to the omnipresent, always-available things that IT marketers were suggesting cloud services would be. But whilst I […]
BARCELONA — SAP’s new cloud data storage products were formally revealed at its TechEd conference last week, and despite a round of layoffs announced earlier this year across its Canadian operations, Canada’s research hubs in Vancouver and Waterloo remain an integral part of SAP’s latest portfolio enhancements, executives told IT World Canada.
SAP’s chief technology officer Juergen Mueller took to the stage during his opening keynote on the heels of the company’s other TechEd event in Las Vegas, and explained how the company plans to make it easier for businesses to develop applications across its Business Technology Platform. Front and centre is SAP HANA Cloud Services, which combines all of SAP’s data and analytics capabilities as one set of interconnected services to store, process, govern and consume large volumes of data.
“We are becoming much more business-centric,” Mueller told audience members, acknowledging the fact that there was less of a focus on demonstrations with running code on the keynote stage. “SAP HANA Cloud offers one data access layer for all your data sources. It directly connects to your data from your on-premise HANA system, your third-party systems, and even Excel, without the need for data replication in order to work with that data.”
SAP HANA Cloud can be managed on the SAP Cloud platform using Kubernetes, and will significantly lower customers’ total cost of ownership for storing and managing petabytes worth of data, added Mueller. The company’s research hub in Vancouver has been hard at work for the past five years separating compute and storage with SAP HANA Cloud, ultimately allowing customers to scale both independently.
“Vancouver is at the cutting edge of our machine learning and predictive analytics developments. They’re taking machine learning technology, and not thinking about just improving the product but thinking about how to connect it to people and the way they do work,” said Gerrit Kazmaier, SAP’s executive vice-president for analytics, databases, and data management, who spoke with IT World Canada after the morning keynotes.
SAP also unveiled the SAP Data Warehouse Cloud, a cloud-based repository under the SAP HANA Cloud umbrella, which according to Kazmaier, will help customers store petabytes worth of data without worrying about pesky capacity limitations. SAP’s other research hub in Waterloo was largely responsible for laying the groundwork for the repository.
As a result, customers will be able to put their newest or most valuable data in HANA’s in-memory repository and, as the data is used less frequently, or “cools”, move it to HANA’s disk storage mode. Once the heavy analytics has been applied, customers can move that data to HANA’s new data lake, which is more cost-effective. And when data cools even further, they can move it to external data lakes, such as AWS S3 and Azure Data Lake, all from within the SAP HANA Cloud.
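The hot-to-cold tiering described above can be sketched as a simple routing policy. The thresholds and tier names below are illustrative assumptions for the sake of the example, not actual SAP HANA Cloud parameters:

```python
from datetime import datetime, timedelta

# Hypothetical cooling thresholds -- real policies would be tuned per workload.
TIERS = [
    (timedelta(days=7), "in-memory"),       # hottest, most expensive
    (timedelta(days=90), "disk"),           # native disk storage mode
    (timedelta(days=365), "hana-data-lake"),
]
EXTERNAL_TIER = "external-data-lake"        # e.g. AWS S3 or Azure Data Lake

def pick_tier(last_accessed: datetime, now: datetime) -> str:
    """Route data to a progressively cheaper tier as it cools."""
    age = now - last_accessed
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return EXTERNAL_TIER
```

The point of the design is that the same data access layer sits in front of every tier, so moving data down the ladder changes its cost, not how it is queried.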
Neil McGovern, senior director of product marketing for SAP, likened the company’s cloud strategy to the way people store photos on the public cloud.
“It’s the same idea. We’re just doing it with business data,” he explained, adding that even C-suite executives, who have been slow to fully grasp the value SAP can bring to the table because of the complexities around its products and the technology powering them, know the flexibility cloud can provide.
McGovern indicated that if a CIO today demands $25 million for a data centre during a boardroom meeting, there’s really only one response.
“They’ll look at you and say ‘Go through the cloud instead’, and ‘Why are you employed with us?’”
General availability for SAP HANA Cloud and SAP Data Warehouse Cloud is planned for the fourth quarter of 2019.
In a recent cloud WAF hacking, many customers were alarmed when private API keys, salted passwords, and SSL certificates were revealed to have been compromised.
It’s clear from this specific hacking incident that the appropriate steps were not taken to protect customers’ data. One proper security measure that was overlooked was API security.
API security is concerned with the transfer of data connected to the internet, which means broken, exposed, or hacked APIs can lead to breaches.
For a cloud WAF, APIs are essential for integrating the WAF service with the client’s servers. This blog post will delve deeper into what API security means for cloud WAFs and how you can secure your APIs for WAFs.
Encrypt your API keys.
Keys are central to API security. API keys are essentially long strings that uniquely identify an application and allow two applications to communicate over the internet.
For WAF vendors and customers, securing these keys can mitigate threats such as man-in-the-middle (MITM) attacks (which alter communications of API messages between two parties) by preventing the interception of site traffic.
These communications can be protected with SSL. By securing all of your webpages using SSL (which encrypts transmitted data), the data you send via web APIs will also be encrypted.
This is crucial because APIs sometimes carry sensitive information (e.g. email, card information); with encryption, you can thwart hackers who are trying to intercept your communications.
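As an illustration of the point above, an API client can simply refuse to send a key over an unencrypted channel. This is a minimal sketch using only the Python standard library; the endpoint URL and bearer-token scheme are hypothetical, not any particular WAF vendor's API:

```python
import urllib.request

def build_api_request(url: str, api_key: str) -> urllib.request.Request:
    """Build an API request; the https:// scheme ensures TLS encrypts it in transit."""
    if not url.startswith("https://"):
        raise ValueError("refusing to send an API key over an unencrypted channel")
    req = urllib.request.Request(url)
    # Send the key in a header, never in the query string (URLs end up in logs).
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

# urllib.request.urlopen(req) would then perform the TLS-protected call.
```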
Authenticate users that utilize the API keys.
If an API key is not authenticated, there’s no guarantee that the user “calling” the API is the one you intended to issue the WAF API key. Authentication establishes the identity of the caller and can help reduce misuse of the system, for example by preventing too many API requests from a single user.
While basic authentication can be implemented using SSL, there are more secure alternatives to authenticate users when using WAF APIs.
These include OAuth 2 and OpenID Connect, two popular industry standards for authentication.
Some WAFs also offer API tokens that support two-factor authentication. For example, a one-time password can be generated to quickly identify your intended recipient.
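One-time passwords of the kind mentioned above are typically generated with the standard TOTP algorithm (RFC 6238), the scheme behind most two-factor authentication apps. This stdlib-only sketch is illustrative rather than any specific vendor's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from the current time window, an intercepted value expires within seconds, which is what makes it useful for quickly confirming the intended recipient.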
Consider using a secure API Gateway.
If properly secured, API gateways can add an additional layer of protection. However, many API gateway technologies are designed for integration, not security.
These API products simply provide access control, which is not enough to properly protect APIs from external threats.
API security is much more than access control: because API gateways also handle traffic management, data leakage and data integrity are concerns as well.
Luckily, WAFs are commonly used to secure API platforms, as they are able to prevent the misuse and exploitation of common web vulnerabilities. A WAF can also help mitigate application-layer DDoS attacks.
Threats posed by vulnerable APIs, including those affecting WAFs, are ever-growing. In fact, nine of the top 10 vulnerabilities in the latest OWASP Top 10 involve APIs.
Yet, API security remains overlooked in information security today. This is because API vulnerabilities are not easy to detect without specialized technology.
WAFs are one way to make sure API platforms are secured, and for securing the actual WAF API keys, encryption and authentication will come in handy.
As threats evolve and organizations become more aware of the threats that vulnerable APIs pose, it’s clear API security will gain more traction in not just the WAF arena but other cloud services as well.
While organizations shift their applications to microservices environments, the responsibility for securing these environments shifts as well, Radware reveals. The rapid expansion of the Development Security Operations (DevSecOps) role has changed how companies address their security posture, with approximately 70% of survey respondents stating that the CISO was not the top influencer in deciding on security software policy, tools, and/or implementation. This shift has likely exposed companies to a broader range of security risks …
The post DevSecOps role expansion has changed how companies address their security posture appeared first on Help Net Security.
For many organizations moving to the cloud, Infrastructure as a Service (IaaS) like AWS EC2, Azure Virtual Machines or Google Compute Engine often forms the backbone of their cloud architecture. These services allow you to create instances of pretty much any operating system almost instantly. Unfortunately, moving your IT infrastructure to the cloud doesn’t relieve […]
The post Automating Secure Configuration Management in the Cloud appeared first on The State of Security.
If I asked you what security products you had in place to manage your risk within your IT organisation 10 years ago, you’d probably have been able to list a half dozen different tools and confidently note that most of your infrastructure was covered by a common set of key products such as antivirus, DLP, […]
The post Secure Configuration in Cloud – IaaS, PaaS and SaaS Explained appeared first on The State of Security.
Avaya Holdings Corp. kicked off the month of October with some major news.
While it didn’t get acquired like many were anticipating, the unified communications and contact center solutions provider today announced a strategic partnership with RingCentral.
Through this exclusive partnership, Avaya, which also provides desktop equipment and services, announced the introduction of Avaya Cloud Office by RingCentral, a new global unified communications as a service (UCaaS) solution that’s set to launch in 2020.
Avaya chief executive officer Jim Chirico said the partnership will deliver a dramatic boost to the company’s ongoing shift to the cloud, which is good news for both customers and partners. According to Avaya’s second quarter fiscal 2019 financial results, the company’s public cloud seats increased more than 165 per cent year-over-year.
“This also gives us the opportunity to unlock value from a largely unmonetized base of our business as it brings compelling value to our customers and partners,” said Chirico in a press release.
Vlad Shmunis, founder, chairman and CEO of RingCentral, said the pairing will lead to a differentiated solution that will lean on Avaya’s installed base of more than 100 million users and 4,700 partners.
RingCentral is contributing $500 million to its partnership with Avaya, including a $125 million investment of 3 per cent redeemable preferred equity that is convertible at $16 per share. RingCentral will also pay Avaya an advance of $375 million, primarily in stock, for future payments and certain licensing rights.
The transaction, still subject to customary closing conditions and regulatory approvals, is expected to close in the fourth quarter of this year.
The news comes after several months of speculation around the future of Avaya, which began in May when Avaya announced it had “engaged J.P. Morgan to evaluate strategic alternatives to maximize shareholder value.”
Avaya has nearly four million subscribers on its platform and is furthering its growth in UCaaS, as evidenced by the company’s expanding as-a-service model. An Avaya acquisition appeared imminent.
Chirico today appeared to suggest the door isn’t closed on that opportunity.
“The strategic actions that we are executing as a result of our comprehensive review create new growth opportunities, return capital to our shareholders and de-lever our balance sheet. With a clear path forward, we will further invest in technology and innovation to continue bringing state-of-the-art solutions to our valued customers and partners.”
Avaya exited bankruptcy protection in December 2017 and began trading publicly almost a year after being placed in Chapter 11 by its private equity owners TPG and Silver Lake.
In my previous article, we discussed how organizations are shifting how IT resources are deployed and managed. We covered three methods in particular: automated image creation and deployment, immutable image deployment and containers. We’ll now explore how organizations can make the best of these methods in a dynamic environment. Dealing with Change when the Targets […]
The post Best Practices for Using Tripwire Enterprise in Dynamic Environments – Part 2 appeared first on The State of Security.
The Canadian government’s ongoing effort to adopt the public cloud took another step forward this summer with the help of Microsoft and AWS, representing a “massive leap of faith” in cloud security, according to Peter Melanson, director of federal sales at Microsoft Canada.
“We’re talking about internal workloads of government data…things like human resources systems and financial systems,” he said, referring to the types of workloads the federal government is moving to public cloud.
These workloads fall under the Protected B classification, which according to the Department of Justice, is described as “information where unauthorized disclosure could cause serious injury to an individual, organization or government.” Protected B includes medical information, information protected by solicitor-client or litigation privilege, or received in confidence from other government departments and agencies.
The migration to public cloud is part of the government’s Cloud First Strategy. In 2017, the federal government started to migrate unclassified data to public cloud storage, with the goal to eventually store its Protected B workloads in the same environment as well.
Unsurprisingly, any vendor bidding for the contracts to store these workloads has to clear a high bar. There are 469 separate security controls, outlined by the Canadian Centre for Cyber Security, that the government has to follow, explained Melanson.
“They want to make certain that you are compliant before they put these very intimate workloads up into the public cloud,” indicated Melanson.
It’s not much of a surprise then that SSC selected Azure and AWS, two of the big three public cloud providers, to host its Protected B data.
In April 2019, Shared Services Canada (SSC) signed an enterprise agreement with Microsoft Canada that will provide client departments with access to Microsoft 365. On Aug. 8, SSC signed Cloud Framework Agreements with AWS Canada and Microsoft Azure. The two have various other contracts with the federal government when it comes to hosting some of its unclassified data, a lot of which is done through channel partners.
The two vendors have done a lot over the years to quell the public sector’s fears around public cloud security, indicated a spokesperson for SSC.
“Initial reservations of migrating data to the cloud were based primarily upon concerns about cloud security features. Over the past few years, cloud computing and storage have matured significantly,” they wrote in an email. “The Cloud First Strategy aligns Canada with Australia, New Zealand, United Kingdom and the United States.”
Quoting Sean Roche, the associate deputy director for digital innovation at the Central Intelligence Agency, Rejean Bourgault, country manager for public sector at AWS Canada, emphasized the acceptance of cloud security among governments.
“On its weakest day, the cloud is more secure than a client service solution,” Bourgault recited. “On a global basis, AWS has more than 5,000 government agencies using AWS at all classification levels…all the way up to Top Secret.”
Only 8 percent of companies secure 75 percent or more of their cloud-native applications with DevSecOps practices today, a number expected to jump to 68 percent within two years, according to ESG. The study results also revealed that API-related vulnerabilities are the top threat concern (63 percent of respondents) when it comes to organizations’ use of serverless. Overall, the study analyzed …
The post DevSecOps is emerging as the main methodology for securing cloud-native applications appeared first on Help Net Security.
Nearly two-thirds of organizations that currently use cloud also leverage some level of managed services; with 71% of large enterprise IT pros revealing that managed services will be a better use of their money in the future, and a strong majority saying it allows their teams to focus on more strategic and productive IT projects, according to 451 Research. The report examined the significance of managed services for cloud, driven by the increasing complexity of …
The post Enterprises report IT teams’ cloud skill gaps have nearly doubled appeared first on Help Net Security.
Just a few years ago, most IT environments were made up of deployed servers on which personnel installed applications, oftentimes as many as that one system could handle. They then remained and ran that way for years. In the meantime, the IT team maintained the system and updated the applications as needed. Sometimes there were […]
The post Best Practices for Using Tripwire Enterprise in Dynamic Environments – Part 1 appeared first on The State of Security.
IaaS is now the fastest growing area of the cloud due to the speed, cost and reliability with which organizations can create and deploy applications, according to McAfee. Cloud-Native Breach (CNB) attack chain The results of the survey demonstrate that 99 percent of IaaS misconfigurations go unnoticed—indicating awareness around the most common entry point to new “Cloud-Native Breaches” (CNB) is extremely low. “In the rush toward IaaS adoption, many organizations overlook the shared responsibility model …
The post 99% of misconfiguration incidents in the cloud go unnoticed appeared first on Help Net Security.
Verizon recently released a 5 step process for evaluating cloud security products and services to inform purchase decisions. That’s a fantastic tool for buyers to have.
This is especially helpful because cloud discussions are almost always driven by business objectives to solve a cost and/or productivity problem. The CISO has to come in and secure the pieces after the migration decision is made.
The overall direction, as well as the point in migration at which the CISO is brought in, both impact how cloud security products and services are approached.
Assess the Need
The main focus of the assessment phase needs to be about understanding what data, applications and services are being moved to the cloud. This will determine the requirements for security.
Verizon points out that the migration itself is just half the security battle, as many security products can’t provide workload visibility once everything lives in the cloud.
Fortunately, Trend Micro can help with that. Regardless of how your cloud environment is structured, we help with visibility across physical, virtual, cloud, and container environments.
If you’re a CISO moving forward in the steps to cloud security, as outlined by Verizon, there are a few additional things I recommend keeping in mind.
How will you protect against misconfigurations?
Cloud security depends on the people who own the workloads. We know that the ratio of security practitioners to IT staff to employees is incredibly disproportionate. This leads to the #1 cause of cloud information leaks we’ve seen so far: misconfigurations.
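Catching misconfigurations of the kind described above is largely mechanical: scan the configuration for rules that violate a policy. This is a minimal sketch; the rule schema, rule names, and the list of sensitive ports are assumptions for illustration, since every cloud provider has its own format:

```python
# Ports that should rarely, if ever, be open to the whole internet.
SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def find_misconfigurations(rules):
    """Flag ingress rules that expose sensitive ports to any source address."""
    findings = []
    for rule in rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append(f"port {rule['port']} open to the world ({rule['name']})")
    return findings
```

Running a check like this continuously, rather than once at deployment, is what turns it from an audit into a safety net.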
How will your security tools fit into a DevOps culture?
The shift to DevOps has become part of cloud migrations. Everything operates faster and more fluidly than with legacy setups. An effective security solution can seamlessly operate across the entire CI/CD pipeline and runtime environment – not to slow down the process, but to maintain security as the process moves forward.
Security doesn’t need to be a cloud roadblock. It should be an enabler. Verizon tees up the cloud security conversation with the foundation for considering cloud security solutions. But don’t settle for a security product that slows down or limits the benefits of the cloud.
The post Beyond The Standard CISO Cloud Security Guide appeared first on .
In July 2019, Capital One made news headlines not for achieving another milestone but because it had been breached. Capital One was using AWS cloud services, as many businesses are doing nowadays. The problem stemmed (in part) because Capital One had a misconfigured open-source Web Application Firewall (WAF) hosted in the cloud with Amazon Web […]
The post Concerns and Challenges Towards an Effective Cloud Security appeared first on The State of Security.
In the age of information, data is everything. Since the implementation of GDPR in the EU, businesses around the world have grown more “data conscious;” in turn, people, too, know that their data is valuable.
It’s also common knowledge at this point that data breaches are costly. For example, Equifax, the company behind the largest-ever data breach, is expected to pay at least $650 million in settlement fees.
And that’s just the anticipated legal costs associated with the hacking. The company is spending hundreds of millions of dollars in upgrading its systems to avert any future incidents.
In the cloud WAF arena, data breaches are nothing new. Having powerful threat detection capabilities behind your cloud WAF service provider, while important, is not the only thing to rely on for data breach prevention.
API security and secure SSL certificate management are just as important.
So, what are some ways hackers can cause damage as it relates to cloud WAF customers? And how can you protect yourself if you are using a cloud WAF service?
The topics covered in this blog will answer the following:
- What can hackers do with stolen emails?
- What can hackers do with salted passwords?
- What can hackers do with API keys?
- What can hackers do with compromised SSL certificates?
- What can I do to protect myself if I am using a cloud WAF?
► What can hackers do with stolen emails?
When you sign up for a cloud WAF service, your email is automatically stored in the WAF vendor’s database so long as you use their service.
In the case of a data breach where emails alone are compromised, phishing emails and spam are probably your main concern. Phishing emails are so common that we sometimes forget how dangerous they are.
For example, if a hacker has access to your email, they have many ways to impersonate a legitimate entity (e.g. by purchasing a similar company domain) and send unsolicited emails to your inbox.
► What can hackers do with salted passwords?
Cloud WAF vendors that store passwords in their database without any hashing or salting are putting their customers at risk if there is a breach, and even more so if hackers already have email addresses.
In this scenario, hackers can quickly take over your account or sell your login credentials online. But what if the WAF vendor hashed and salted the passwords? Hashing passwords can certainly protect against some hacker intrusions.
If passwords are hashed without salting, a hacker who obtains the stored hashes can look them up in precomputed tables of common passwords and recover the originals, because identical passwords always produce identical hashes.
Salting the hash helps defeat this particular attack, since each password gets a unique digest, but it won’t guarantee protection against hash collision attacks (a type of attack on a cryptographic hash that tries to find two inputs that produce the same hash value).
With a weak hashing algorithm, a collision can let hackers into your account even though the password they supply is wrong: two different inputs (the actual password and some other string of characters) produce the same output.
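To make the salting discussion above concrete, here is a minimal sketch of salted, iterated password hashing. PBKDF2 with SHA-256 is shown as one common choice, not necessarily what any given WAF vendor uses:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware budget

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); a fresh random salt makes identical passwords differ."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

Because each user's salt is random, a leaked database of digests cannot be attacked with a single precomputed table; each password must be cracked individually.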
► What can hackers do with API keys?
Cloud WAF vendors that use or provide APIs to allow third-party access must place extra attention to API security to protect their customers.
APIs transfer data over the internet, and many cloud WAFs rely on them to implement load balancers, among other features.
If API traffic is not sent over HTTPS, or API requests are not authenticated, there is a risk that hackers can take over developer accounts.
If a cloud WAF vendor exposes a public API without requiring registered, authorized accounts for access, hackers can exploit this to send repeated API requests. With registration, an API key can be tracked and flagged when it makes too many suspicious requests.
Beyond securing API keys, developers must also secure their cloud credentials. If a hacker gains access to these, they can potentially take down servers, corrupt DNS information, and more.
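One common way to authenticate API requests, as discussed above, is HMAC request signing: the client signs each request with its secret, and the server recomputes the signature to verify both the sender's identity and that the request wasn't altered in transit. The message format below is a simplified assumption; real APIs usually also sign a timestamp to prevent replay:

```python
import hashlib
import hmac

def sign_request(api_secret: bytes, method: str, path: str, body: bytes) -> str:
    """Produce an HMAC-SHA256 signature over the request's method, path and body."""
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(api_secret, message, hashlib.sha256).hexdigest()

def verify_request(api_secret: bytes, method: str, path: str,
                   body: bytes, signature: str) -> bool:
    """Recompute and compare in constant time; any tampering changes the digest."""
    expected = sign_request(api_secret, method, path, body)
    return hmac.compare_digest(expected, signature)
```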
API security is not only a concern for developers but also for end users using APIs for their cloud WAF service as you’ll see in the next section.
► What can hackers do with compromised SSL certificates?
Next, what happens if the SSL certificates WAF customers provided end up in the hands of hackers?
Let’s assume the hacker has both the API keys and SSL certificates. In this scenario, hackers can affect the security of the incoming and outgoing traffic for customer websites.
With the API keys, hackers can whitelist their own websites from the cloud WAF’s settings, allowing their websites to bypass detection. This allows them to attack sites freely.
Additionally, hackers could modify the traffic of a customer website to divert traffic to their own sites for malicious purposes. Because the hackers also have the SSL certificates, they can expose this traffic as well and put you at risk for exploits and other vulnerabilities.
► What can I do to protect myself if I am using a cloud WAF?
First, understand that your data is never 100% safe. If a company claims that your data is 100% safe, then you should be wary. No company can guarantee that your data will always be safe with them.
When there is a data breach, however, cloud WAF customers are strongly encouraged to change their passwords, enable 2FA, upload new SSL certificates, and reset their API keys.
Only two of these are realistic preventive measures (changing your passwords frequently and using 2FA), but it’s unlikely that you, as a customer, will frequently upload new SSL certificates and change your API keys.
Thus, we recommend that you ask your WAF vendors about the security of not just the WAF technology itself but also how they deal with API security and how they store SSL certificates for their customers.
If you’d like to chat with one of our security experts and see how our cloud WAF works, submit the form below!
The post My cloud WAF service provider suffered a data breach…how can I protect myself? appeared first on Cloudbric.
Security used to be an inhibitor to cloud adoption, but now the tables have turned, and for the first time we are seeing security professionals embrace the cloud as a more secure environment for their business. Not only are they finding it more secure, but the benefits of cloud adoption are accelerating in step with better security.
Do you know what’s shaping our new world of secure cloud adoption? Do you know what the best practices are for you to accelerate your own business with the cloud? Test your knowledge in this quiz.
Not prepared? Lucky for you this is an “open-book” test. Find some cheat sheets and study guides below.
The post Test Your Knowledge on How Businesses Use and Secure the Cloud appeared first on McAfee Blogs.
The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.
As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read the book at left, Windows NT Security Handbook, published in 1996 by Tom Sheldon. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role based access control (RBAC) implementation.
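The group-based model both books describe can be sketched in a few lines. The users, groups, and permissions below are made up for illustration; the essential idea, common to NT 4 RBAC and cloud IAM alike, is that a user's effective rights are the union of their groups' rights:

```python
# Permissions are attached to groups, never directly to users.
GROUP_PERMISSIONS = {
    "administrators": {"read", "write", "delete"},
    "operators": {"read", "write"},
    "auditors": {"read"},
}

# Access is granted by adding users to groups.
USER_GROUPS = {
    "alice": {"administrators"},
    "bob": {"operators", "auditors"},
}

def is_allowed(user: str, action: str) -> bool:
    """A user may perform an action if any of their groups grants it."""
    return any(action in GROUP_PERMISSIONS.get(group, set())
               for group in USER_GROUPS.get(user, set()))
```

The indirection through groups is what makes the model administrable: revoking a person's access means removing one group membership, not auditing every resource.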
Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?
There are many reasons one could cite, but I think the following are at least worthy of mention.
The systems enforcing the security model are exposed to intruders.
Intruders are generally able to gain code execution on systems participating in the security model.
Intruders have access to the network traffic which partially contains elements of the security model.
From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.
The question then becomes:
Does this change with the cloud?
In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.
Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.
This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.
The weakness will likely be their personnel.
Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.
Is there anything users can do as they hand their compute and data assets to cloud operators?
I suggest four moves.
First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.
Second, lawmakers may also need improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.
Third, government regulators will have to ensure no cloud provider assumes a monopoly, or no two providers assume a duopoly. We may end up with the three major players and a smattering of smaller ones, as is the case with many mature industries.
Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.
Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.
Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.
A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.