Scientists used a specialized camera system developed by Widder called the Medusa, which uses red light undetectable to deep-sea creatures and has allowed scientists to discover new species and observe elusive ones. The probe was outfitted with a fake jellyfish that mimicked the invertebrates' bioluminescent defense mechanism, a display that can signal to larger predators that a meal may be nearby, in order to lure the squid and other animals to the camera.
During the last week of December a US
Coast Guard facility was the target of a Ryuk ransomware attack that shut down
operations for over 30 hours. Though the Coast Guard has implemented multiple
cybersecurity regulations in the last six months, this attack broke
through the weakest link in the security chain: human users. Ryuk typically
arrives via a phishing email that relies on the target clicking a malicious
link, then spreads laterally through the network.
Crypto-trading Platform Forces Password Reset After Possible Leak
Officials for Poloniex, a cryptocurrency trading platform, began pushing out forced password resets after a list of email addresses and passwords claiming to be from Poloniex accounts was discovered on Twitter. While the company was able to verify that many of the addresses found on the list weren’t linked to their site at all, they still opted to issue password resets for all clients. It’s still unclear where the initial list actually originated, but it was likely generated from a previous data leak and was being used on a new set of websites.
Nearly every one of Wawa’s
850 stores in the U.S. were found to be infected with payment
card-skimming malware for roughly eight months before the company discovered
it. It appears Wawa only found out about the problem after Visa issued a
warning about card fraud at gas pumps using less-secure magnetic stripes. Wawa has
since begun offering credit monitoring to anyone affected. In a statement, the company
mentions skimming occurring in in-store transactions as well, so chip-based
protections would only have been effective if the malware had operated at the
device level rather than at the transaction point.
Microsoft Takes Domains from North Korean Hackers
Microsoft recently retook control of 50 domains that were being used by North Korean hackers to launch cyberattacks. Following a successful lawsuit, Microsoft was able to use its extensive tracking data to shut down phishing sites that mainly targeted the U.S., Japan, and South Korea. The tech company is well-known for this tactic, having taken down 84 domains belonging to the Russian hacking group Fancy Bear and seizing almost 100 domains linked to Iranian spies.
Landry’s Suffers Payment Card Breach
One of the largest restaurant chain and property owners, Landry’s,
recently disclosed that many of its locations were potentially affected by a
payment card leak through their point-of-sale systems. The company discovered
that from January through October of 2019, any number of its 600 locations
may have been exposed to card-skimming malware if transactions were not
processed through a main payment terminal that supported end-to-end encryption.
Though the common vernacular is “The Cloud,” the truth is, there are multiple cloud environments and providers available to organizations looking to utilize this growing technology. Read on to learn about the different types of cloud environments, and the biggest security obstacle each presents.
Terminology in cloud computing is growing almost as rapidly as the technology. The following list outlines the important differences between the most common types of cloud deployments:
Private Cloud – Private clouds are created for a single organization, either internally or by a third-party service.
Public Cloud – Public clouds are created for use by multiple organizations. For example, Amazon Web Services (AWS) is a public cloud utilized by many businesses.
Community Cloud – Like a public cloud, community clouds are used by multiple parties. Unlike a public cloud, a community cloud is a collaborative effort, in which infrastructure is shared amongst the users.
Hybrid Cloud – A hybrid cloud environment is made up of two or more cloud types from different providers. For example, an organization could utilize both the AWS platform as well as private cloud. Hybrid environments may also refer to a combination of cloud and on-premise servers.
Multicloud – Similar to a hybrid cloud, multicloud environments, also known as a Polynimbus cloud strategy, use multiple clouds for storage and development. However, a multicloud environment uses multiple clouds that are all of the same type. For example, an organization may engage the services of AWS, Azure, and Rackspace, which are all public clouds.
Hybrid cloud and multicloud models have become increasingly popular, as they allow organizations to mix and match to get the exact cloud arrangement that suits their needs. However, this flexibility aggravates the main problem plaguing cloud security: misconfiguration.
Outsourcing your development and data storage capabilities across different vendors is inevitably complex. Learning the ins and outs of each cloud environment, synchronizing these clouds together, and coordinating IT teams are just the beginning. With all of these balls in the air, it’s no wonder that configuring cohesive security settings often falls through the cracks.
Unfortunately, misconfigured cloud servers can lead to disastrous consequences. Breaches, data theft, compliance violations, and lost revenue are only a few of the possibilities.
A United Front
Understanding the potential dangers of misconfiguration is critical when assessing how to best approach cloud security. Cloud providers oversee the security of the cloud, but you are responsible for the security of the data that you place in that cloud. Requiring a unified security policy across your domain is the best way to ensure that misconfiguration doesn’t place your system at risk.
Cloud adoption is only growing, with Gartner analysts predicting that cloud computing will be a $300 billion business by 2021. However, Gartner also predicts that organizations using the cloud will be responsible for 95% of all cloud security issues during that time. With misconfiguration as the major catalyst to security issues, streamlined configuration simply cannot remain a manual task.
Powertech Security Auditor centralizes and automates security administration across all environments. It documents your security policy and can implement or make changes to your policy across multiple servers at the same time, manually or automatically. Security Auditor tackles consistent configuration for you, allowing your organization to make the most of your cloud environment.
We all know that to err is human. The problem is some mistakes are an order of magnitude larger than others. If you forget to buy apples at the store, that’s unfortunate. But if you forget to lock down your cloud server with the proper security controls and hackers gain entry, that’s a problem that could cost your business dearly.
IBM Security Services recently gave a staggering figure as it continues to monitor security incidents and offer guidance for what’s ahead. They stated there was a 424 percent jump in the number of records compromised from 2016 to 2017 due to negligence in IT. The underlying issue: misconfigured cloud servers, networked backup incidents, and other improperly configured systems—all preventable errors that happen when the appropriate skillsets, controls, and processes are not put in place.
In fact, there is a new term IBM and other industry experts have begun to use: the inadvertent employee. These are the well-meaning IT professionals who are often at fault when it comes to misconfigured servers, networks, and databases. There has been a slew of major cybersecurity incidents in the past few years due to these mistakes as cybercriminals take full advantage of oversights. It doesn’t always take an IT expert to detect these weaknesses in cloud servers and gain entry into the sensitive information they store. Simply entering a URL into a browser to see if it returns a directory listing is an easy first step.
Recent Breaches Due to Misconfiguration
Already in 2018 there have been notable security breaches affecting millions of people. Hackers often look for sensitive information that commands a price on dark web markets. This can include names, addresses, phone numbers, social security numbers, and credit card information, among other high-value data points.
Consider the following:
FedEx: In February, Kromtech Security researchers found an unsecured Amazon Web Services (AWS®) server with 119,000 FedEx customer records. FedEx quickly secured the information, which included driver’s license and passport numbers in addition to addresses, phone numbers, and other details. Apparently, the incident happened because Bongo International, acquired by FedEx in 2014, hadn’t taken the proper security precautions. The incident is a reminder to companies involved in merger and acquisition activities that they need to review their IT environments and not make assumptions about security policies.
BJC HealthCare: In March, BJC HealthCare announced that scanned images of documents related to 33,000+ patients had been accessible for more than eight months due to a misconfigured cloud server. These documents included social security numbers, treatment records, contact details, driver’s licenses, and insurance cards. Although the organization has stated they don’t believe the information was misused, they have offered free identity theft protection to their patients in their mea culpa.
Panera: In April, around 37 million customers who had placed orders on panerabread.com or had used Panera catering services learned their account information had been left open without a password for months on a cloud server. According to Krebs on Security, the company not only stored account details in easily accessed plain text format, but it was also lethargic about taking corrective action.
Public trust in the ability of any large company to protect their personal information is eroding. HelpSystems partners with thousands of organizations to secure their IT infrastructure, detect misconfigured systems, and alert them to any problems.
Speed is essential in today’s business climate, hence the rise of DevOps. Unifying development and operations compresses development cycles and enables more frequent deployments that align closely with business objectives. It’s no wonder executives love DevOps.
But one question is often left unasked in DevOps strategy meetings: what about security?
When speed and agility are paramount, it’s easy for data protection to take a backseat. Continuous delivery leaves little time to consider security controls.
We’re deploying to the cloud and our cloud vendor is secure, isn’t it?
Well, yes and no. The infrastructure of the cloud is very secure. Buildings, processes, systems, and personnel resources are all architected to be highly secure and highly available.
But anything we deploy into the cloud is our responsibility, just as if we built it in our own data center. It’s called the “shared responsibility model.” We own not only the operational cost of every new deployment but also the protection of our systems and our data. The tools and configuration options are available, but it is up to us to implement them.
The question is, how do we gain visibility into this ever-changing environment to know what’s going on under the covers and empower management to make smart business decisions? That’s our DevOps blind spot!
Too often, our development and testing teams spin up servers with no regard for cost or security. Every system is a target for the continuous stream of malicious actors around the world, yet we don’t prioritize securing our configurations until after we discover a breach. Every system also costs the business money, and we don’t see it until the charges get so high that payment approval rolls across our CIO’s desk. Once DevOps has the ability to create new systems and infrastructure at will, no developer will go unserved. Systems will be spun up continuously, and our developers’ jobs are to write code, not build secure configurations.
With the average costs of a data breach reaching $3.62 million last year, this is not something we can ignore!
But how do we gain visibility and control over our DevOps processes and eliminate our blind spot? It was easy when we released once a quarter or once a year. We provisioned new servers in our data center and spent days of work deploying, testing, and architecting our networks to provide the security that the business required.
Automated DevOps requires automated SecOps as well.
Powertech Security Auditor from HelpSystems gives you the visibility and security that your management teams require. As systems are deployed to your public, private, or hybrid clouds, Security Auditor will automatically apply security controls and audits, instantly reporting on what it finds. No matter what regulatory framework you are working with, Security Auditor’s automatic application of security controls will alert you to vulnerabilities and misconfigurations.
Controls can be applied differently to different groups of systems based upon your preferences: for example, one set of configurations for development systems and a more stringent set for your production deployments.
Are blacklisted services disabled?
Have critical system files been altered?
Are remote access settings properly secured?
Are unknown entities attempting to access our systems?
These settings and hundreds more can instantly be audited and reported on, adding SecOps to your DevOps. Security Auditor can even automatically change non-compliant findings to the desired configuration settings if desired.
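Checks like these all reduce to the same pattern: compare observed system state against a declared policy and report every deviation. Security Auditor's internals are proprietary, so the following is only a generic sketch of that pattern, with all names and policy fields hypothetical:

```python
import hashlib
from dataclasses import dataclass

# Hypothetical illustration of automated configuration auditing: observed
# state is compared against a declared policy, and each deviation becomes
# a reportable finding.

@dataclass
class Policy:
    banned_services: set        # services that must be disabled
    critical_file_hashes: dict  # path -> expected SHA-256 hex digest
    remote_auth: str = "publickey"  # required remote access auth mode

def audit(policy, running_services, file_contents, remote_auth):
    """Return a list of human-readable findings; an empty list means compliant."""
    findings = []
    # Are blacklisted services disabled?
    for svc in policy.banned_services & set(running_services):
        findings.append(f"blacklisted service enabled: {svc}")
    # Have critical system files been altered?
    for path, expected in policy.critical_file_hashes.items():
        actual = hashlib.sha256(file_contents.get(path, b"")).hexdigest()
        if actual != expected:
            findings.append(f"critical file altered: {path}")
    # Are remote access settings properly secured?
    if remote_auth != policy.remote_auth:
        findings.append(f"remote access misconfigured: {remote_auth}")
    return findings
```

A real product would additionally schedule these audits continuously and, as described above, optionally remediate non-compliant settings automatically rather than just reporting them.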
It is 2020 now, and while the new year has a lot in store with regard to privacy, I want to take a look back. In fact, I want to look way back, long, long ago. In a galaxy far, far away. That’s right, I found a reason to write about Star Wars and privacy. […]
We are excited to announce that Veracode has been inducted into SC Media’s 2019 Innovator Hall of Fame. To select the honorees, the SC Media team leverages data from SC Labs testing groups, conferences, research, and referrals. The team then evaluates the nominees against strict criteria to ensure that the final selection is comprised of vendors with the most promising products and capabilities.
We’re honored to be one of only five new Hall of Fame inductees!
To announce its innovators, SC Media publishes an annual eBook highlighting the selected vendors’ greatest strengths.
“We interviewed each vendor to understand the security problems they identified and mitigated with their latest innovations,” the SC Media editors wrote. “Almost every organization pointed to two interrelated struggles: exhausting technological ‘noise’ and personnel fatigue.” This leaves security operations centers understaffed, overwhelmed, and frustrated, they continued.
“The vendors on this list understand these problems and recognize how such issues inhibit business operations and user experiences. They have responded with two helpful solutions: advanced automation and threat prioritization. Many platforms include artificial intelligence and machine learning that recognize patterns and can replicate remediation processes in the future to remove the manual burden from SOCs. Many new solutions also can determine whether a noted threat poses significant or minimal risk and adjust alert policies accordingly. In nearly every case, both automation and threat prioritization are integrated into a platform that can then easily integrate with existing infrastructures, making the transition to these nextgen solutions quick and easy,” the editors said.
Veracode was selected as an honoree in the Virtualization and cloud-based security category. The description said, in part:
The Veracode Platform provides an entire system of testing, scans and analysis that minimizes the presence of vulnerabilities and produces more secure software as a result. Veracode knows that vendors want to develop, use and sell software with confidence. By integrating into the development process multiple testing techniques — including static, dynamic and software composition analysis — the Veracode Platform can anticipate many potential vulnerabilities and resolve them before they ever materialize in a software’s final form.
Veracode also differentiates itself as a SaaS provider, according to SC Media, saying the model “makes Veracode versatile enough for local and global use, even by organizations with highly distributed personnel or partners.”
The recognition went on to say:
Veracode hopes to influence the cybersecurity ecosystem as well as the organizations they serve, so that vulnerability prevention becomes not just one possible solution amidst a series of alternatives but a standard step in software development procedures. All enterprises developing their own applications will likely benefit from the security measures integrated into the Veracode platform.
Veracode is also recognized for its ability to ease the workload of security and development teams by integrating multiple testing techniques into the development process. This strength is making a positive cultural impact on the perception of cybersecurity measures.
To learn more about our induction into the Innovator Hall of Fame, check out SC Media’s eBook, Innovators. For additional information on our comprehensive suite of products and services, visit the Veracode homepage.
Effective collaboration is key to the success of any organization. But perhaps none more so than those working towards the common goal of securing our connected world. That’s why Trend Micro has always been keen to reach out to industry partners in the security ecosystem, to help us collectively build a safer world and improve the level of protection we can offer our customers. As part of these efforts, we’ve worked closely with Microsoft for decades.
Trend Micro is therefore doubly honored to be at the Microsoft Security 20/20 awards event in February, with nominations for two of the night’s most prestigious prizes.
No organization exists in a vacuum. The hi-tech, connectivity-rich nature of modern business is the source of its greatest power, but also one of its biggest weaknesses. Trend Micro’s mission from day one has been to make this environment as safe as possible for our customers. But we learned early on that to deliver on this vision, we had to collaborate. That’s why we work closely with the world’s top platform and technology providers — to offer protection that is seamless and optimized for these environments.
As a Gold Application Development Partner we’ve worked for years with Microsoft to ensure our security is tightly integrated into its products, to offer protection for Azure, Windows and Office 365 customers — at the endpoint, on servers, for email and in the cloud. It’s all about simplified, optimized security designed to support business agility and growth.
Innovating our way to success
This is a vision that comes from the very top. For over three decades, our CEO and co-founder Eva Chen has been at the forefront of industry leading technology innovation and collaborative success at Trend Micro. Among other things during that time, we’ve released:
The world’s first hardware-based system lockdown technology (StationLock)
Innovative internet gateway virus protection (InterScan VirusWall)
The industry’s first two-hour virus response service-level agreement
The first integrated physical-virtual security offering, with agentless threat protection for virtualized desktops (VDI) and data centers (Deep Security)
The first ever mobile app reputation service (MARS)
AI-based writing-style analysis for protection from Business Email Compromise (Writing Style DNA)
Cross-layer detection and response for endpoint, email, servers, & network combined (XDR)
Broadest cloud security platform as a service (Cloud One)
We’re delighted to have been singled out for two prestigious awards at the Microsoft Security 20/20 event, which will kick off RSA Conference this year:
At Trend Micro, the customer is at the heart of everything we do. It’s the reason we have hundreds of researchers across 15 threat centers around the globe leading the fight against emerging black hat tools and techniques. It’s why we partner with leading technology providers like Microsoft. And it’s why the channel is so important for us.
Industry Changemaker: Eva Chen
It goes without saying that our CEO and co-founder is an inspirational figure within Trend Micro. Her vision, and her strong belief that our only real competition as cybersecurity vendors is the bad guys and that the industry needs to stand united against them to make the digital world a safer place, guide our more than 6,000 employees every day. But she’s also had a major impact on the industry at large, working tirelessly over the years to promote initiatives that have ultimately made our connected world more secure. It’s not an exaggeration to say that without Eva’s foresight and dedication, the cybersecurity industry would be a much poorer place.
We’re all looking forward to the event and to the start of 2020. As we enter a new decade, Trend Micro’s innovation and passion to make the digital world a safer place has never been more important.
Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, learn about Trend Micro’s Cyber Risk Index (CRI) and its results showing increased cyber risk. Also, read about a data breach from IoT company Wyze that exposed information of 2.4 million customers.
Now is the perfect time to reflect on the past and think of all the ways you can make this coming year your best one yet. With technology playing such a central role in our lives, technology resolutions should remain top of mind heading into the new year. In this blog, Trend Micro shares five tech resolutions that will help make your 2020 better and safer.
Elevated risk of cyber attack is due to increased concerns over disruption or damages to critical infrastructure, according to Trend Micro’s latest Cyber Risk Index (CRI) study. The company commissioned Ponemon Institute to survey more than 1,000 organizations in the U.S. to assess business risk based on their current security postures and perceived likelihood of attack.
In the second blog of a three-part series on security protection for your home and family, Trend Micro discusses the risks associated with children beginning to use the internet for the first time and how parental controls can help protect them.
The Cambridge Analytica scandal continues to haunt Facebook. The company has been receiving fines for its blatant neglect and disregard towards users’ privacy. The latest to join the bandwagon after the US, Italy, and the UK is the Brazilian government.
Privileged containers in Docker are containers that have all the root capabilities of a host machine, allowing the ability to access resources which are not accessible in ordinary containers. In this blog post, Trend Micro explores how running a privileged, yet unsecure, container may allow cybercriminals to gain a backdoor in an organization’s system.
An exposed Elasticsearch database, owned by Internet of Things (IoT) company Wyze, was discovered leaking connected device information and emails of millions of customers. Exposed on Dec. 4 until it was secured on Dec. 26, the database contained customer emails along with camera nicknames, WiFi SSIDs (Service Set Identifiers; or the names of Wi-Fi networks), Wyze device information, and body metrics.
WordPress is estimated to be used by 35% of all websites today, making it an ideal target for threat actors. In this blog, Trend Micro explores different kinds of attacks against WordPress – by way of payload examples observed in the wild – and how attacks have used hacked admin access and API, Alfa-Shell deployment, and SEO poisoning to take advantage of vulnerable sites.
In a new research paper published on the last day of 2019, a team of American and German academics showed that field-programmable gate array (FPGA) cards can be abused to launch better and faster Rowhammer attacks. The new research expands on previous work into an attack vector known as Rowhammer, first detailed in 2014.
The city of Frankfurt, Germany, became the latest victim of Emotet after an infection forced it to close its IT network. Similar incidents occurred in the German cities of Gießen, Bad Homburg, and Freiburg.
BeyondCorp was first to shift security away from the perimeter and onto individual users and devices. Now, it is BeyondProd that protects cloud-native applications that rely on microservices and communicate primarily over APIs, because firewalls are no longer sufficient. Greg Young, vice president of cybersecurity at Trend Micro, discusses BeyondProd’s value in this article.
In 2013, the MITRE Corporation, a federally funded not-for-profit company that counts cybersecurity among its key focus areas, came up with MITRE ATT&CK, a curated knowledge base that tracks adversary behavior and tactics. In this analysis, Trend Micro investigates an incident involving the MyKings botnet to show how the MITRE ATT&CK framework helps with threat investigation.
With backlash swelling around TikTok’s relationship with China, the United States Army this week announced that U.S. soldiers can no longer have the social media app on government-owned phones. The Army had previously used TikTok as a recruiting tool for reaching younger users.
Mobile banking applications that help users check account balances, transfer money, or pay bills are quickly becoming standard products provided by established financial institutions. However, as these applications gain ground in the banking landscape, cybercriminals are not far behind.
What security controls do you have in place to protect your home and family from risks associated with children who are new internet users? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.
Foreign exchange company Travelex announced that it had temporarily disabled all of its systems following a malware attack. Twitter user Izzy Fergus first noticed something was wrong when she attempted to visit travelex.co.uk and saw a runtime error message. When she reached out to the company on Twitter, Travelex UK informed her that it was […]
I couldn't get 2 days into the new decade without having to deal with ridiculous password criteria from TikTok followed by my phone automatically associating with what it thought was my washing machine whilst in a grocery store on the other side of the world (yep, you read that correctly). It somehow seems to just be reflective of how crazy online security is becoming in the modern era. On the plus side, Chrome is making some really positive changes to how it handles cookies so it's not all bad news. Hope you enjoy the first update of 2020 😊
Internet-connected devices have been one of the most remarkable developments for humankind in the last decade. But alongside the benefits, they pose a serious security and privacy risk to personal information.
In one such recent privacy mishap, smart IP cameras manufactured by Chinese smartphone maker Xiaomi were found to be mistakenly sharing surveillance footage.
Cookies like to get around. They have no scruples about where they go save for some basic constraints relating to the origin from which they were set. I mean have a think about it:
If a website sets a cookie then you click a link to another page on that same site, will the cookie be automatically sent with the request? Yes.
What if an attacker sends you a link to that same website in a malicious email and you click that link, will the cookie be sent? Also yes.
Last one: what if an attacker directs you to a malicious website and upon visiting it your browser makes a post request to the original website that set the cookie - will that cookie still be sent with the request? Yes!
Cookies just don't care about how the request was initiated nor from which origin, all they care about is that they're valid for the requested resource. "Origin" is a key word here too; those last two examples above are "cross-origin" requests in that they were initiated from origins other than the original website that set the cookie. Problem is, that opens up a rather nasty attack vector we know as Cross Site Request Forgery or CSRF. Way back in 2010 I was writing about this as part of the OWASP Top 10 for ASP.NET series and a near decade on, it's still a problem. Imagine this request:
This is a real request from my Hack Yourself First website I use as part of the workshops Scott Helme and I run. You can go and create an account there then try to change the password and watch the request that's sent via your browser's dev tools. Then, ask yourself the question: what does the HTTP request need to look like in order to change the user's password? There are only 3 requirements:
It needs to be a POST request
It needs to be sent to the URL on the first line
It needs to have 2 fields in the body called NewPassword and ConfirmPassword
That is all. It doesn't matter if the request is initiated from the website itself or from an external location, for example an attacker's website. If that malicious site can force the browser into making a POST request to that URL with that form data, the password is changed. Why is this possible? Because the auth cookie is sent with the request regardless of where it's initiated from, and that is how a CSRF attack works.
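To make the three requirements concrete, here's a minimal sketch of everything the attacker's page needs to assemble (the endpoint path below is hypothetical, and please don't aim this at real sites). The point is how little is required: the browser attaches the victim's auth cookie automatically, so the attacker never has to know or steal it.

```python
from urllib.parse import urlencode

# Hypothetical target URL for illustration only.
TARGET = "https://example.com/account/changepassword"

def forged_request(new_password):
    """Build the method, URL, and body a malicious page would auto-submit.

    A hidden auto-submitting <form method="post"> carrying these two fields
    is all the attacker's page contains; the victim's browser supplies the
    auth cookie on its own.
    """
    body = urlencode({
        "NewPassword": new_password,
        "ConfirmPassword": new_password,
    })
    return "POST", TARGET, body
```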
We (the industry) tackled this risk by applying copious amounts of sticky tape we refer to as anti-forgery tokens. By way of example, here's what the request to perform a domain search for troyhunt.com on HIBP looks like:
There are two anti-forgery tokens passed in the request, one in a cookie and one in the body, both called "__RequestVerificationToken". They're not identical but they're paired such that when the server receives the request it checks to see if both values exist and belong together. If not, the request is rejected. This works because whilst the one in the cookie will be automatically sent with the request regardless of its origin, in a forged request scenario the one in the body would need to be provided by the attacker and they have no idea what the value should be. The browser's security model ensures there's no way for the attacker to cause the victim's browser to visit the target site, generate the token in the HTML, then pull it out of the browser in a way the malicious actor can access. At least not without a cross site scripting vulnerability as well, and then that's a whole different class of vulnerability with different defences.
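The pairing can be implemented a few different ways; here's a minimal sketch assuming an HMAC-based scheme (ASP.NET's real implementation differs in its details, and the function names here are mine):

```python
import hashlib
import hmac
import secrets

# Secret key known only to the server; an attacker who can't read it
# can't compute a form token that pairs with any cookie token.
SERVER_KEY = secrets.token_bytes(32)

def issue_tokens():
    """Return (cookie_token, form_token): different values, cryptographically paired."""
    cookie_token = secrets.token_hex(16)
    form_token = hmac.new(SERVER_KEY, cookie_token.encode(), hashlib.sha256).hexdigest()
    return cookie_token, form_token

def verify(cookie_token, form_token):
    """Accept the request only if both tokens exist and belong together."""
    expected = hmac.new(SERVER_KEY, cookie_token.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the expected value via timing.
    return hmac.compare_digest(expected, form_token)
```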
This, frankly, is a mess. Whilst it's relatively easy to implement via frameworks such as ASP.NET, it leaves you wondering - do cookies really need to be that promiscuous? Do they need to accompany every single request regardless of the origin? No, they don't, which is why if you look in Chrome's dev tools on this very blog at the time of writing, you'll see the following:
The "future release of Chrome" is version 80 and it's scheduled to land on the 4th of Feb which is rapidly approaching. Which brings us to the SameSite cookies mentioned in the console warning above. In a nutshell, they boil down to 3 different ways of handling cookies based on the value set:
None: what Chrome defaults to today without a SameSite value set
Lax: some limits on sending cookies on a cross-origin request
Strict: tight limits on sending cookies on a cross-origin request
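On the wire, the policy is just another attribute appended to the Set-Cookie header the server emits. Here's a quick framework-free sketch (the helper function is mine, not any particular library's API):

```python
def set_cookie_header(name, value, samesite="Lax", secure=True):
    """Build a Set-Cookie header value with an explicit SameSite policy."""
    parts = [f"{name}={value}", f"SameSite={samesite}", "HttpOnly", "Path=/"]
    if secure or samesite == "None":
        # Chrome 80 also rejects SameSite=None cookies that aren't Secure,
        # so "None" forces the attribute on regardless.
        parts.append("Secure")
    return "; ".join(parts)
```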
Come version 80, any cookie without a SameSite attribute will be treated as "Lax" by Chrome. This is really important to understand because put simply, it'll very likely break a bunch of stuff. In order to demonstrate that, I've set up a little demo site to show how "Lax" and "Strict" SameSite cookies behave alongside the traditional ones with no policy at all. I'm going to do this with an "origin" site (the one that sets the cookies in the first place) and an "external" site (one which links to or embeds content from the origin site). Let's begin by visiting the origin site at http://originsite.azurewebsites.net/setcookies/
I'm showing the Chrome dev tools here as they make it easy to see the SameSite value that's been set for each cookie (if set at all). These have been given self-explanatory names so no need to delve into them here. The main thing is that the site setting the cookies can read them all. But that's not what SameSite is all about, let's make it interesting and load up http://externalsite.azurewebsites.net/
There are 4 different things I want to demonstrate here as each implements a slightly different behaviour. Let's begin by clicking the GET request button:
This loads the origin website with the GET verb and passes through all existing cookies except for the "Strict" one. Going back to the purpose of this blog post, once Chrome starts defaulting cookies without a SameSite policy to "Lax", GET requests will still send them through.
Next up, let's try the POST request:
And this is where things start to get interesting as neither the "Strict" nor "Lax" cookies have been sent with the request. The default cookie with no SameSite policy has, but only because I'm running Chrome 79. Come next month when Chrome 80 hits, the image above will no longer show the default cookie. By extension, any websites you're responsible for that are passing cookies around cross domain by POST request and don't already have a SameSite policy are going to start misbehaving pretty quickly.
Next up is the iframe:
You can see how the source of the frame is the origin website and embedding it like this will make a GET request, but even the "Lax" cookie hasn't been passed. This is really important to understand: not all resource types behave the same way even when the same verb is used.
The last one is the cookie image and it's easiest just to look at the request in the dev tools for this one:
As with the iframe, only the cookies without a restrictive SameSite policy are sent, either because the policy is explicitly set to "None" or because no policy has been set at all. But as with the iframe and the POST request, the default cookie shortly won't be sent at all and again, that's where the gotcha is going to hit next month.
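All four demos above boil down to the same decision the browser makes on every request. Here's a deliberately simplified model of it (an illustration of the behaviour described in this post, not an exhaustive spec of what any browser does):

```python
def cookie_sent(policy, cross_site, top_level_navigation=False, method="GET"):
    """Model (simplified) whether a browser attaches a cookie to a request.

    policy: "None", "Lax" or "Strict". "None" also models the pre-Chrome-80
        default; under Chrome 80, a cookie with no attribute behaves as "Lax".
    cross_site: True if the request originates from another site.
    top_level_navigation: True for a link click that changes the address bar,
        False for subresources such as iframes and images.
    """
    if not cross_site:
        return True                 # same-site requests always carry cookies
    if policy == "Strict":
        return False                # never sent cross-site
    if policy == "Lax":
        # Lax: only top-level navigations using a safe method such as GET
        return top_level_navigation and method == "GET"
    return True                     # "None": sent regardless of origin


# The four demo buttons, in order:
cookie_sent("Lax", cross_site=True, top_level_navigation=True)            # GET link: sent
cookie_sent("Lax", cross_site=True, top_level_navigation=True, method="POST")  # POST: not sent
cookie_sent("Lax", cross_site=True, top_level_navigation=False)           # iframe: not sent
cookie_sent("None", cross_site=True, top_level_navigation=False)          # image: sent
```

Note this model deliberately ignores the temporary "Lax + POST" two-minute intervention Chrome ships with; per Google's own wording, it's not something you want to rely on anyway.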
Chrome will make an exception for cookies set without a SameSite attribute less than 2 minutes ago. Such cookies will also be sent with non-idempotent (e.g. POST) top-level cross-site requests despite normal SameSite=Lax cookies requiring top-level cross-site requests to have a safe (e.g. GET) HTTP method. Support for this intervention ("Lax + POST") will be removed in the future.
Given that last sentence, it's probably not something you want to be relying on though.
As a massive HTTPS proponent, this makes me happy 😊 To demonstrate this behaviour, I've added an additional "None" cookie but flagged it as secure. As such, the cookie will only stick after being loaded over an HTTPS connection so give this a go: https://originsite.azurewebsites.net/setcookies/
That results in the following cookies coming back in the response, the highlighted one being the new one:
Both the highlighted cookies will die as of Chrome 80: The "None" cookie because whilst it has a SameSite policy, it's not flagged as "Secure" and the default cookie because it will inherit the behaviour of a "Lax" cookie which will no longer be loaded into an iframe.
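Put another way, for the cross-site iframe case there are only a couple of survivors once Chrome 80 lands. A quick sketch of that rule (again a simplification of the behaviour described above, not a spec):

```python
def survives_chrome80(samesite, secure):
    """Will Chrome 80 still send this cookie into a cross-site iframe?

    samesite may be "None", "Lax", "Strict", or None (no attribute set).
    """
    if samesite is None:
        samesite = "Lax"      # no attribute now inherits the "Lax" behaviour
    if samesite == "None":
        return secure         # "None" is only honoured with the Secure flag
    return False              # Lax/Strict aren't sent into cross-site iframes


survives_chrome80("None", secure=True)    # the new secure "None" cookie: lives
survives_chrome80("None", secure=False)   # insecure "None": dies
survives_chrome80(None, secure=False)     # the old default cookie: dies
```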
This change has the potential to break a lot of stuff so if you're in an environment where you're explicitly disabling the SameSite policy with "None" and still making insecure requests (*cough* enterprise), times are about to get interesting. Or if you're Google's own tracking service:
This popped up on my blog as soon as I changed Chrome's default behaviour to reflect what's coming next month (it's subtly different to the one earlier in this blog post) so it's a good example of the sorts of things you can proactively pick up now. If you do see this sort of thing in the enterprise, Chrome's changed behaviour can also be reverted across the organisation:
Enterprise IT administrators may need to implement special policies to temporarily revert Chrome Browser to legacy behavior if some services such as single sign-on or internal applications are not ready for the February launch.