Azure AD helps lululemon enable productivity and security all at once for its employees

Today's post was written by Sue Bohn, Director of Program Management at Microsoft, and Simon Cheng, who is responsible for Identity and Access Management at lululemon.

Happy New Year and welcome to the next installment of the Voice of the Customer blog series. My name is Sue Bohn and I am the director of Program Management for Identity and Access Management. I'm really excited about our next blog in this series. Last time, we featured The Walsh Group. Today, I am sharing a story from lululemon, who really inspired me to think more broadly about what you can achieve when you step back and look at where you want to go.

Simon Cheng, responsible for Identity and Access Management at lululemon, is today a strong believer that every step towards cloud Identity and Access Management makes you more secure, but that wasn't always the case. Read on to learn more about lululemon's experience implementing Azure Active Directory (Azure AD).

Too many apps, too many passwords

At lululemon, our journey to Azure AD began with two overarching business requirements: 1. Secure all our apps and 2. Simplify user access. We knew, based on the typical behavior we've seen in the past, that most of our users were likely using the same corporate password across all the apps they use, including the ones we don't manage. This meant that if even just one of these apps had security vulnerabilities, a hacker could exploit the vulnerability to get into our corporate resources. And we would have no idea! Our security is only as strong as the weakest app being accessed, and the challenge was that we had more than 300 applications! To protect our corporate resources, we needed to ensure that the authentication process for each app was secure.

Our shadow IT environment wasn't just a security challenge, it also frustrated our users. Over and over we heard "there are too many portals and too many passwords." This sentiment drove our second business requirement, which we boiled down to an overriding principle: Not another portal, not another password. So, our solution needed to address security and simplify user access without reducing business flexibility. The obvious answer was to consolidate identities, and this quickly led us to Azure AD and Microsoft Enterprise Mobility + Security (EMS). As an Office 365 customer, our users were comfortable and familiar with the Office 365 sign-in experience, and so it was an easy decision. Once we had chosen a solution, our next big task was rolling it out without disrupting our users, which is really where my concern was: would our users embrace it?

Single sign-on (SSO) sells itself

When we began the rollout of Azure AD, our top concern was whether our employees would comply. As it turns out, I completely underestimated our users, and my concerns were unfounded. Within three months of the Azure AD rollout, our users loved the SSO experience so much that the business units came to us requesting that additional apps be onboarded. Even risk-based Multi-Factor Authentication (MFA), enforced through Azure AD conditional access policies, went more smoothly than I expected. We hardly heard any complaints and received even fewer calls on how to set it up. For highly sensitive apps, such as our financial and HR apps, we followed a recommended approach and enforced MFA at every sign-in. For several other, less sensitive apps, we were able to prioritize user experience and protect them with risk-based conditional access rules.
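
To make the approach concrete, here is a minimal sketch of how a risk-based conditional access policy can be created through the Microsoft Graph API. The group ID, application ID, token handling, and specific policy settings are placeholders for illustration, not a record of lululemon's actual configuration.

```python
# Minimal sketch: create a risk-based conditional access policy via Microsoft Graph.
# The access token, application ID, and group ID below are placeholders.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
ACCESS_TOKEN = "<token acquired with Policy.ReadWrite.ConditionalAccess permission>"

policy = {
    "displayName": "Require MFA on medium or high sign-in risk (example)",
    "state": "enabledForReportingButNotEnforced",  # report-only while piloting
    "conditions": {
        "users": {"includeGroups": ["<pilot-group-object-id>"]},
        "applications": {"includeApplications": ["<app-id>"]},
        "clientAppTypes": ["all"],
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    GRAPH_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Running the policy in report-only mode first, as in the sketch, is one way to see its impact before enforcing it broadly.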

In 2013, we had two apps onboarded: ServiceNow and Workday; now we have over 200! And every single one of our 18,000 users is protected by conditional access and MFA. I am really proud of this accomplishment, as it has enabled higher productivity for our organization while maintaining stronger security, because our employees are actually using it! This experience taught me not to underestimate our users, and I think this is because they are already familiar with security measures from consumer services such as social media. Had I known this when we started, I would have deployed Azure AD much sooner.

The cloud allowed us to implement more security features faster than we ever could on-premises

Once we had Azure AD deployed, our next project was to implement Azure AD Privileged Identity Management (PIM). Azure AD PIM allows us to enable just-in-time administrative access, which significantly reduces the possibility that our administrative accounts will get compromised. Launching PIM was an eye-opening experience! This is a capability that is typically very labor-intensive and time-consuming to operate.

I am constantly delighted with how fast I can deploy services in the cloud, Azure AD PIM being a prime example. More often than not, the trap I've seen organizations fall into is that they plan based on capabilities that exist within solutions rather than what's needed to secure their users. This is exactly where Azure AD and the cloud win over on-premises solutions. My takeaway has been that it is better to step back, plan what needs to be done for my organization, and then let the cloud services roll in almost automagically. Of course, where there are gaps, I work directly with the Azure AD engineering team!

Just in the last year, we have deployed, from pilot to production:

  1. Azure AD Connect implementation and Self Service Password Reset (SSPR) migration from the old tool (6 weeks)
  2. MFA registration, Azure AD conditional access, and Azure AD Identity Protection (7 weeks)
  3. Microsoft Advanced Threat Analytics (3 weeks)
  4. Group-based licensing (3 days)
  5. Azure Information Protection (8 weeks)
  6. Azure AD Privileged Identity Management (3 days!)
  7. Countless apps (each in a matter of hours!)

Learnings from lululemon

A big thanks to Simon! It is always great to learn from our customers' deployments. In lululemon's case, the need to take a step back and develop a plan based on the security goals, rather than a set of capabilities, really hits home. We can always plan something within the confines of what we currently have, but the fact is that new features get rolled out at cloud speed. It is great to see customers like lululemon deploy services in the cloud so quickly and benefit from them. Come back to our Secure blog to check in on our next customer blog, and also read some other articles around Identity and Access Management and Zero Trust Networks.


Hebei, a Northern Chinese Province, Unveils an App That Triggers a Notification When You’re Near Someone in Debt

China is gearing up to launch a social credit system in 2020, giving all citizens an identity number that will be linked to a permanent record. Like a financial score, everything from paying back loans to behaviour on public transport will be included. One aspect of this social credit system is a new app in the northern province of Hebei. From a report: According to the state-run newspaper China Daily, the Hebei-based app will alert people if they are within 500 metres of someone in debt. It's like being on Oxford Street and being able to work out everyone around you who is in debt. According to the financial charity the Money Charity, the average UK household debt (including mortgages) was $76,000 as of June last year. That's a lot of notifications.


Crypto Update: Coins Survive Break-Down Attempt, but Setup Still Bearish

Yesterday, the major cryptocurrencies experienced a quick sell-off below support and a rapid reversal, but the “glitch” (or manipulation attempt) didn’t change the overall technical setup. The top coins are back in their trading ranges that have been dominant for over a week, and the short- and long-term downtrends are all intact. While the recovery […]


Popular free Android VPN apps on Play Store contain malware

By Waqas

If you want to ensure optimal privacy while surfing the web, a VPN (virtual private network) is the only reliable option. In this regard, a majority of web and smartphone users rely upon free VPN services, which according to the latest research is a risky step. In 2017, researchers identified that 38% of Android VPN apps on […]


Linkedin Learning: Producing a Video

My Linkedin Learning course is getting really strong positive feedback. Today, I want to peel back the cover a bit, and talk about how chaotically it came to be.

Before I struck a deal with Linkedin, I talked to some of the other popular training sites. Many of them will buy you a microphone and some screen recording software, and you go to town! They even “let” you edit your own videos. Those aren’t my skillsets, and I think the quality often shines through. Just not in a good way.

I had a great team at Linkedin. From conceptualizing the course and the audience, through final production, it's been a blast. Decisions were made based on what's best for the student. Like doing a video course so we could show me drawing on a whiteboard, rather than showing fancy pictures and implying that that's what you need to create to threat model like the instructor.

My producer Rae worked with me, and taught me how to write for video. It’s a very different form than books or blogs, and to be frank, it took effort to get me there. It took more effort to get me to warm up on camera and make good use of the teleprompter(!), and that’s an ongoing learning process for me. The team I work with there manages to be supportive, directive and push without pushing too hard. They should do a masterclass in coaching and feedback.

But the results are, I think, fantastic. The version of me that's recorded is, in a very real way, better than I ever am. It's the magic of Hollywood: 7 takes of every sentence, with the team giving me feedback on how each sounded and what to improve.

The first course is “Learning Threat Modeling for Security Professionals.”

Meizu Unveils a Smartphone That Does Not Have Any Port, or a SIM Card Slot, or a Button, or Speaker Grill

Phone maker Meizu has announced a new phone called "Zero," which doesn't have a headphone jack, or a charging port, or a physical SIM card slot, or any buttons, or a speaker grill. From a report: It doesn't even come with a SIM card slot and buttons you'd usually see on a phone -- the only elements that disturb the surface of its all-display, 7.8mm-thick ceramic unibody are its 12MP and 20MP rear cameras and two pinholes. One is a microphone, while the other is for hard resets. To make up for the lack of ports, Meizu Zero will support Bluetooth 5.0 and wireless USB connectivity that will reportedly be able to transfer files as fast as the USB 3.0 standard can. Zero's 5.99-inch QHD OLED screen will act as some sort of giant speaker and earpiece replacement. It does have a big enough bezel for a 20MP front camera, but its fingerprint reader is completely on-screen. The device, which is powered by a Snapdragon 845 processor, relies on 18W wireless charging due to the lack of a charging port. And it may not have the usual physical buttons, but it does have pressure-sensing ones with haptic feedback on its borders.


Crypto Update: These 3 Altcoins Look Ready to Pump

With Bitcoin (BTC/USD) trading sideways, altcoins finally get the opportunity to shine. Over the last few weeks, small- and mid-cap coins have been pumping left and right. Many, such as BlockMason Credit Protocol (BCPT/BTC) and Viberate (VIB/BTC), have posted double-digit percentage gains from their bottoms. However, there are those that […]


CVE-2018-1751

IBM Security Key Lifecycle Manager 3.0 through 3.0.0.2 uses weaker than expected cryptographic algorithms that could allow an attacker to decrypt highly sensitive information. IBM X-Force ID: 148512.

CVE-2019-3587

DLL Search Order Hijacking vulnerability in Microsoft Windows client in McAfee Total Protection (MTP) Prior to 16.0.18 allows local users to execute arbitrary code via execution from a compromised folder.

CVE-2019-3584

Exploitation of Authentication vulnerability in MVision Endpoint in McAfee MVision Endpoint Prior to 1811 Update 1 (18.11.31.62) allows authenticated administrators to remove MVision Endpoint via unspecified vectors.

Europe Plans To Drill the Moon For Oxygen and Water by 2025

The European Space Agency hopes to be mining the moon for water and oxygen in six years' time. From a report: The agency took a big step toward this ambition by signing a deal with launch provider ArianeGroup on Monday. The one-year contract will see the company examine the possibility of mining regolith -- lunar soil and rock fragments that can yield oxygen and water, which could be very handy if you're trying to put a base on the moon. The mission would use an Ariane 64 launch vehicle. The European Space Agency (ESA) has already directed ArianeGroup, a joint venture between Airbus and Safran, to develop the craft, and its first test flight is anticipated in 2020. As for the lunar lander, that would come from the German startup PTScientists (which entertainingly stands for "Part-Time Scientists") -- the same outfit that aims to put the first mobile network on the moon.


Flush of Green: Crypto Markets on the Rise as Bitcoin Approaches Oversold Territory

Crypto markets saw renewed upside on Wednesday, as bitcoin emerged from oversold levels and bitcoin cash jumped to weekly highs. The moves, which appear technical in nature, set the stage for a bigger short-term rally. Markets Eye Recovery The total value of cryptocurrencies rose by more than $1 billion on Wednesday, reaching a high of […]


So, You Wanna Be A Security Star?

Well, here’s where you can start and learn the ropes.

There are over 350,000 security analyst job openings currently available, and many have starting salaries in the six figures. On top of that, organizations are struggling to find good security analysts due to the shortage of cybersecurity skills. And that will continue to be the case in the coming years. There could be a 1.8 million cybersecurity talent shortage by 2022. (1)

So what’s the deal? Why is this happening?

There are many reasons, but most of all, we're human. We are creative, social beings that need to grow, learn, evolve and have a passion for what we do.

Information security analysts plan and carry out security measures to protect an organization's networks, computers and systems. They also monitor those systems for security breaches and investigate any violations. They install software, firewalls and other security measures to protect sensitive information. In addition, they prepare risk analysis/mitigation plans, document breaches, report on the damage from a breach, conduct penetration testing, stay relevant with security trends, develop security best practices for an organization, ensure regulatory compliance, recommend security enhancements and help determine the disaster recovery plan. In a nutshell, their job is to monitor the network and hosts therein to identify and mitigate security threats.

Even with that list, their responsibilities are continually growing as the number and rate of cyberattacks increases.

Historically, they typically worked for financial organizations, consulting firms, technology and service providers, and those that endured digital attacks. But over the last decade, as more businesses built a digital presence, as clouds grew and as more devices contained software and became internet-connected, security analysts are now needed in almost every sector. They are often involved from conception to completion, depending on the product or service.

They are sorely needed in this great battle of internet good vs. evil.

A Typical Day

Probably the first thing to know about a security analyst job is that there is no typical day, and unpredictability is the norm. Sure, there might be certain tasks that an analyst performs daily, but in the wild west of cybercrime, you follow the evidence. And often, it comes in the form of dashboard alerts. Examining alert logs is a very common task for security analysts since they need to understand what happened (past), the current situation (present) and what might occur (future) for each incident. Classic risk analysis.

With the massive increase in security alerts that organizations face, staffers can get bogged down triaging them and trying to determine appropriate countermeasures for each vulnerability. On average, it takes around 45 minutes to investigate each alert, which could be an intrusion attempt or a user policy violation. Events of concern should always have some human review and a resolution, even if it's simply a report.

And that’s just the incident alerts.

In addition, security teams routinely install, manage and maintain security devices like firewalls. They also manage IDP/IPS, ICAP, SSL, PKI, etc., along with the policy, change management and troubleshooting of those devices. Knowing how stuff works and who owns it makes incident response much easier. For instance, a compromised host requesting malware updates requires a different approach than defending against a DDoS attack.

Then there are the tools. The success of any security operations center (SOC) depends on having the right tools, processes and, most importantly, efficient and effective analysis. (2) As more security solutions enter the SOC, it becomes difficult to monitor all the data being generated by all the sources. There could be dozens of technologies being used and managing those independently is cumbersome. A central source on a single platform can make it easier to manage, monitor and measure security ops and incident response.

F5 SOC Analyst Paul Dockter explains:

Security is an ever-evolving game of constant adaptation and my goal as a security analyst is to make sure that I stay ahead of this game. This requires that I stay up to date on current malware trends and variants and phishing attack vectors.

My daily responsibilities include alert analysis, monitoring for potential attacks, along with proactive research to find attacks before they end up generating alerts. Taking the results of these sources I work with internet authorities from Hosting and Registrar providers to CERTs and Law Enforcement around the world to swiftly eradicate detected attacks before they can be fully leveraged to target our customers.

Additionally, I work with our customers providing product support to make sure that their products are operating correctly to generate future alerts. Beyond these responsibilities my day is made up of reading security articles, brainstorming issues with Analysts on the other SOC teams, and working on tasks as provided by my managers.

Educational Background

Typically, a bachelor's degree in a computer-related field is a good starting point, and experience in a related occupation is preferred. Many of today's InfoSec old-timers grew up playing with computer systems alongside the growth of computer networking, application delivery and the internet.

They started out as the early network engineers, and as the threats came their way, they defended as best they knew how. They'd run an application with a sniffer to see what ports and protocols it was passing in order to build a proper firewall policy. They stayed cognizant of the criminal mindset and developed solutions to protect against the bad actors: a signature one day, a blocked port or IP address another. They kept current on new techniques and vulnerabilities and did some of their own penetration testing as research. Many InfoSec pros have gained notoriety for discovering serious flaws in systems. Often these discoveries have forced technology manufacturers to fix critical flaws that could have had devastating consequences. The great ones are experts at recognizing patterns: see something that looks weird and dig a bit deeper.
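
As an illustration of that kind of spot check, here is a small sketch that uses Scapy to sniff a short sample of traffic and summarize the destination ports in use, the sort of evidence an engineer might gather before writing a firewall policy. The interface name and packet count are placeholder assumptions, not part of any specific workflow described above.

```python
# Small sketch: sniff a short sample of traffic and summarize destination ports,
# the kind of spot check used to confirm what a firewall policy must allow.
# Interface name and packet count are placeholders; requires root privileges.
from collections import Counter
from scapy.all import sniff, TCP, UDP

port_counts = Counter()

def tally(pkt):
    # Count TCP and UDP destination ports seen in the capture.
    if TCP in pkt:
        port_counts[("TCP", pkt[TCP].dport)] += 1
    elif UDP in pkt:
        port_counts[("UDP", pkt[UDP].dport)] += 1

# Capture a short sample of traffic on the chosen interface.
sniff(iface="eth0", prn=tally, store=False, count=500)

for (proto, port), hits in port_counts.most_common(10):
    print(f"{proto} dport {port}: {hits} packets")
```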

Today’s Landscape

With the retirement of many of those early security pioneers and immense outsourcing over the last decade, today we face a shortage of cybersecurity talent. The depth of individual expertise across the security framework has diminished. This comes at a time when cyber threats have escalated to insane levels. The industry has a massive supply and demand problem where organizations must invest in their own people. It's becoming clear that any size organization with security needs should provide career development, training and mentoring to talent who show interest and have the technical skills. Opportunities such as security research, threat hunting and certifications, along with compensation, are key.

In terms of building future talent, the good news is that colleges and universities are now offering cybersecurity programs. George Washington University has the Institute for Information Infrastructure Protection, and Marymount University offers an MS in Cybersecurity. While many programs focus on coding, cryptography and ethical hacking, it's also important to understand some basic business decision making. Cybersecurity roles can encompass not only the technical realm but also legal, policy and management. You need operational skills along with incident management.

At a national level, the National Security Agency (NSA) and the Department of Homeland Security (DHS) jointly sponsor the National Centers of Academic Excellence in Cyber Defense (CAE-CD) program, with designations in Cyber Defense and Cyber Operations. The goal, according to the agencies, is to reduce vulnerability in our national information infrastructure by promoting higher education and research in cyber defense and producing professionals with cyber defense expertise. Many colleges and universities in the US are eligible to apply to become a CAE-CD school. In fact, F5's own F5 Labs Threat Research Team has partnered with several universities and has published research findings with them (UW, UWT, Whatcom).

Likewise, in the UK, a new National College of Cyber Security will open in 2019, and its Cyber Discovery program focuses on kids aged 14-18, teaching teenagers about cybersecurity in a fun and accessible way.

The one thing about this field is that you'll always be learning. With the Internet of Things, new threats are a daily occurrence, along with malicious data exfiltration techniques. If you hated studying in college, this might not be the career for you, as this job requires constant training, learning and studying the latest trends and techniques. You must have a passion for this work.

Job Fatigue

Is very real.

According to ESG, 63 percent of organizations say the cybersecurity skills shortage has led to increased workloads on existing staff. Security analysts are typically consumed by the routines of their job, and many reach burnout within 1 to 3 years. The work is manually intensive, procedures are very static, and numbness can creep in quickly. In fact, many analysts feel that they haven't contributed at all to the overall security posture of an organization.

In recent months, several respected InfoSec pros have decided to take a step back from the security scene, particularly on social media. There have been stories of people stealing research and claiming it as their own, and one respected expert noted on Twitter, ‘I see what was once a community driven on knowledge, sharing, or working together to make a positive difference, regardless of who you were or where you were from, completely shift towards going after one another.’

The InfoSec industry can't sustain itself with that mentality.

A 2015 study, A Human Capital Model for Mitigating Security Analyst Burnout, took an anthropological approach to explore the burnout phenomenon. They were able to train and then place a researcher within a Security Operations Center (SOC) to better understand, beyond interviews, what is driving the exhaustion. Trust is important within SOCs, and this embedded researcher had to have the skills to do the job along with noting the daily reflections of the operation.

The SOC team was made up of an operations team and an incident management crew, each with Level 1 (L1) and Level 2 (L2) analysts. L1 analysts were the first line of defense, monitoring the Security Information and Event Management (SIEM) console for any possible attempted breaches. L2 analysts were more senior, providing mentoring, management, reporting and in-depth analysis of incidents.

According to the researchers, Human capital, in the context of a SOC, refers to the knowledge, talents, skills, experience, intelligence, training, judgment, and wisdom possessed by individual analysts and the team as a whole. Human capital can also be defined as the collective and individual intellectual stock of a SOC.

They looked at morale, automation, operational efficiency, management metrics and of course, how this leads to analyst burnout. Specifically, they noted, the cyclic interaction of Human Capital with Automation, Operational Efficiency and Management Metrics contributes to burnout.

One analyst shared that he wanted to work in an environment where he was continuously learning and had the opportunity to analyze malware. As a Level 1 engineer, he felt dismayed that he wasn't doing any real threat detection and lamented about potentially making a bad career choice. Lack of intellectual growth can be a huge issue for morale. Other morale tugs include things like mundane step-by-step procedures, tasks assigned without consultation and, certainly, compensation…or a perceived lack thereof.

There are also operational efficiency gaps when there is a lack of cooperation between groups or incomplete information (from other groups) when investigating an event. Even a misunderstanding or lack of clarity for a given task can lead to inefficiencies. In the security world, details matter.

One may think automation would be the perfect solution for an overworked staff, but that takes the whole human element out of it. Sure, you could write a script to automate ‘look for this!’, but often automation is inserted without a review of which procedures are suitable for automation. In addition, if the automated process fails to identify a threat, then liability rears its head.


A Path Forward

Proper development and management of security analysts is vital for a SOC’s success.

The study, A Human Capital Model for Mitigating Security Analyst Burnout, identified four factors that impact the creation and preservation of efficient security analysts: Skills, Empowerment, Creativity and Growth.

The right skills are important for a security analyst to do their job and can be gained by education or experience. It is vital that both L1 and L2 teams share and exchange training about their responsibilities. If someone is not properly trained, their confidence in addressing an issue diminishes. This lack of confidence can lead to frustration as opportunities are passed by due to not having the proper knowledge.

From an empowerment standpoint, when analysts are encouraged to contribute to ideas or investigate new threat data, they feel excited and empowered. Empowerment and morale go hand in hand so as the analyst grows, so should the responsibilities and trust since the risk of screw-up is diminished.

Humans are creative beings, and the report notes that empowerment directly affects an analyst's creativity: the creativity to handle a scenario that is different from anything in the past. When empowered, they might go outside the normal written procedures to creatively figure out the issue. They are not afraid to try new ideas since they are empowered to think outside the SIEM box. Empowerment encourages creativity, and offering creative outlets to staff when things get repetitive leads to a more enjoyable job, and thus good morale. Skills, empowerment and creativity give SOC personnel the confidence to handle any situation in real time.

Growth for security staff involves increasing the intellectual capacity of any analyst. Most growth usually happens on the job handling incidents, but it's important to work different types of security events to learn new skills and improve knowledge. When one learns, there is a sense of accomplishment and purpose on the job. With accomplishment comes confidence and growth. Growth is influenced by creativity: dull activities doing the same thing lead to lower creative development, and lower creative development means that the analyst uses the same skills daily, inhibiting growth. It's all intertwined. On the flip side, growth often comes in the form of mentors, teaching and learning new skills.

Even highly skilled, empowered employees may find a lack of growth due to having no mentors or anyone more knowledgeable around. They may be the smartest cat in the room, but that's where they'll stay. This is often one of the reasons why highly skilled InfoSec pros move on: they want to find something more challenging or work on a variety of issues, not just the daily bells. There needs to be a good balance, so all can learn, grow and feel good about what they are doing.

You can see how low skills, low growth, low empowerment and low creativity can lead to dissatisfaction and low morale. This is the vicious cycle of Human Capital according to the study. As long as there are positive outcomes among the factors, then morale can remain high. Frequent turnover can also lead to spending more on new folks and training.

You can easily understand how a pattern of lower empowerment leads to lower creativity, which then leads to lower growth and skills. Burnout occurs when one gets stuck in that vicious cycle. The same goes for skill level: if skills are lower, management trust is lower, leading to less empowerment (no creativity) and no opportunity to grow. When you're not accomplishing anything, the daily routine and monotony bring exhaustion.

With lower-skilled employees, gradually increasing trust and empowerment allows them to learn and thus improve their skills. Once they're skilled, they get more privileges and grow as analysts. That fuels creativity, and now the cycle is virtuous rather than vicious. As one outgrows their position, a new, more challenging one could be offered, ensuring growth and potentially retaining someone who would otherwise quit.

Earlier we mentioned that automation, while beneficial for repetitive tasks, could take away the human element. If humans are involved with determining the operational bottlenecks that would benefit from automation, then it could alleviate some staff stress and allow them to focus on more interesting, challenging projects that help them grow. When the analysts are empowered to help make automation decisions and are part of the creative development process, they feel part of the solution rather than having an automated tool shoved down their throat.

Automation could also triage individual security events to determine whether an attacker is involved and correlate them with other events that may be affecting the same devices. Machines can automate the scoring of the attacks and prioritize them based on risk or threat level. If it works well, manual work can be eliminated, and the analyst is presented with the right scenario to act on with confidence.
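
As a toy illustration of that idea, the sketch below scores and prioritizes alerts using a simple weighted formula; the field names, weights and sample alerts are invented for the example and are not drawn from any particular SIEM or product.

```python
# Toy sketch: score and prioritize security alerts by severity, asset criticality,
# and how many related events hit the same host. All fields, weights and sample
# data are illustrative only.
from typing import Dict, List

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def score_alert(alert: Dict, related_count: int) -> float:
    base = SEVERITY_WEIGHT.get(alert["severity"], 1)
    asset = 2.0 if alert.get("asset_critical") else 1.0
    correlation = 1.0 + 0.5 * related_count  # more related events, higher priority
    return base * asset * correlation

def prioritize(alerts: List[Dict]) -> List[Dict]:
    # Count alerts per host to approximate correlation across events.
    host_counts: Dict[str, int] = {}
    for a in alerts:
        host_counts[a["host"]] = host_counts.get(a["host"], 0) + 1
    return sorted(
        alerts,
        key=lambda a: score_alert(a, host_counts[a["host"]] - 1),
        reverse=True,
    )

if __name__ == "__main__":
    sample = [
        {"host": "db01", "severity": "high", "asset_critical": True},
        {"host": "ws12", "severity": "medium", "asset_critical": False},
        {"host": "db01", "severity": "low", "asset_critical": True},
    ]
    for alert in prioritize(sample):
        print(alert)
```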

Operational efficiency allows SOCs to utilize all resources to detect and respond to threats in real time. Since analysts are in the thick of it, they directly influence operational efficiency. One example in the study described the ticketing process: case creation took too long, between filling out a ticket to find the hosts and selecting the proper dropdown for each field entry. Reflecting on what was needed, an engineer wrote a script to automate those tasks, and it helped. They knew what was needed, were empowered to create a solution, and produced a positive result for the SOC. With the dull work out of the way, they were able to focus on more interesting and challenging investigations.

To measure a SOC's efficiency, metrics provide management visibility. They can identify bottlenecks, measure intrusions, determine compensation, influence investment and help management understand the SOC's value to the organization. The appropriate metrics are important. Reports that are too technical might be misinterpreted by management, and ones that are too ‘managerial’ might not reflect the SOC's actual inner workings. Often, SOC workers are unsure what management needs. Every incident? The most detrimental attacks? Incidents involving multiple teams? Missed threats?

Getting the reports right, not just generating numbers, can have a lasting effect on the SOC. It helps determine ROI and future investment. It also has a direct effect on human capital. Metrics could decide exactly what an analyst works on, creating some limited empowerment for the analyst. Metrics give management a perception of what's happening, which directly influences investment in the SOC. With less budget, areas like training or compensation could be reduced, directly limiting analysts' growth. Meaningful, good metrics, on the other hand, can lead to promotions and other perks for analysts.

Conclusion

Human analysts are the most important piece of the SOC puzzle, followed by tools and procedures. Humans are creative, passionate creatures and need to be nurtured as such. Skills, Empowerment, Creativity and Growth are the essential ingredients for a productive, resilient and well-maintained Security Ops Center, if done right. Organizations can reap benefits while keeping their infrastructure secure, and the hard-working analysts will finally feel like Security Stars.

Footnotes

Other References

Peter Silva
Peter Silva, Sr. Solutions Marketing Manager, Security at F5 Networks

Peter Silva Web Site



Hiring Based on Skills Instead of College Degrees is Vital for the Future, IBM CEO Says

What does the future of getting a job in the tech industry look like? According to the CEO of IBM, Ginni Rometty, it's important that tech companies focus on hiring people with valuable skills, not just people with college degrees. From a report: Rometty made the comments yesterday at the World Economic Forum in Davos, Switzerland. The CEO said that technology's fast-moving pace here in the 21st century makes it harder for people to find jobs and has led to disillusionment with the future. "With the new technologies that are out there, I think there is a huge inclusion problem, meaning there's a large part of society that does not feel this is going to be good for their future," Rometty said. "Forget about whether it is or it isn't or what we believe. Therefore they feel very disenfranchised." [...] "So when it comes to education and skills, I think the government can't solve it alone," Rometty said. "I think businesses have to believe I'll hire for skills, not just their degrees or their diplomas. Because otherwise we'll never bridge this gap." "All of us are full of companies with university degrees, PhDs, you've got to make room for everyone in society in these jobs," Rometty said as other business leaders on the panel nodded their heads. She added, "We have a very serious duty about this. Because these technologies are changing faster with times than their skills are going to change. So it is causing this skill crisis. [...] We have to have a new paradigm. You would have to have new pathways that don't all include college education and you would have to have respect for that job -- not blue collar or white collar, I call it new collar."


FUD at Davos: Bitcoin Price Holds Steady as Debate Over Future Grows

Bitcoin’s price continued to stabilize on Wednesday, as a lack of trading catalysts kept the bulls and the bears at bay. A debate over bitcoin’s future raged on at Davos, Switzerland midweek, offering some interesting perspectives about bitcoin’s long-term future. Price Holds Steady The bitcoin price has traded within a narrow range in the last […]


The Monopolist in the House: Rep. David Trone’s Wine Company Seeks to Overturn a Constitutional Amendment

President Donald Trump has been reasonably condemned for attempting to trash the Constitution. But there’s only one active politician in America working to actually reverse a standing constitutional amendment.

He’s a freshman Democratic House member from Maryland’s 6th Congressional District.

David Trone was elected in 2018 to fill the seat of John Delaney, who seems to think that he’s running for president. Trone won a spirited primary with the assistance of $14.2 million in self-funded contributions and another $3.25 million in personal loans. This available fortune was generated from Trone’s personal alcoholic beverage empire. Total Wine, which Trone co-founded with his brother, is America’s largest privately owned retailer of beer, wine, and liquor, with 193 stores in 23 states. Trone served as president of Total Wine until December 2016 and is still listed as co-owner of the company on its website.

Total Wine is currently embroiled in a Supreme Court case that challenges the 21st Amendment, which ended Prohibition. In Tennessee Wine & Spirits Retailers Association v. Blair, Total Wine claims that Tennessee cannot impose a two-year residency requirement for obtaining a retail license to sell alcohol. This has proven a barrier for Total Wine and for Doug and Mary Ketchum, who recently moved to Tennessee after agreeing to take over a mom-and-pop liquor store in Memphis.

But the residency issue is a stalking horse for the question of whether states have the right to regulate alcohol sales within their borders. While the 21st Amendment is seemingly very clear on that, alcohol producers and retailers have persistently fought it. If the Supreme Court sides with Total Wine, state alcohol laws will have little or no force, making it easier for retail giants to dominate the sector and potentially roll back health and safety measures on alcohol in a drive for profit.

So you have David Trone, a proud member of the new Democratic congressional majority, trying to use a conservative judiciary to deregulate an industry so that his wine shops can pop up on every street corner in America.

Section 1 of the 21st Amendment, ratified on December 5, 1933, simply repeals the 18th Amendment, which ushered in Prohibition 14 years earlier. But Section 2 bans “the transportation or importation into any State, Territory, or possession of the United States for delivery or use therein of intoxicating liquors, in violation of the laws thereof.” In other words, all alcohol producers or distributors had to follow local laws in order to make legal sales. This was seen at the time as a compromise to allow “dry” counties or states to continue with their local preferences. (In fact, parts of Trone’s district were dry until recently.)

In this case, the Ketchums moved from Utah in July 2016 to open a wine shop in Memphis. The Ketchums’s daughter has cerebral palsy, and the weather in Tennessee was deemed better for her ailment. The Tennessee Alcoholic Beverage Commission was actually ready to approve the application; the state hasn’t really enforced the two-year residency rule for several years, and Tennessee’s own former attorney general once claimed that it’s “probably unconstitutional.”

But the Tennessee Wine and Spirits Retailers Association, a local trade association, threatened to sue the state for not adhering to the residency requirement. Tennessee’s law is actually even more restrictive: The initial license expires after one year, and applicants looking to renew must be residents for 10 years.

At the same time, Total Wine wanted to open two new outlets in Nashville and Memphis, which would have faced hurdles if the residency rule was newly enforced because a retailer’s directors and officers must all be residents (thanks to the lax enforcement, Total Wine already has a store in Knoxville). Total Wine, at the time still under Trone’s direction, argued that the 21st Amendment was in conflict with the so-called dormant commerce clause doctrine, which prohibits states from discriminating against out-of-state or foreign commercial enterprises.

In 2005, the court ruled that laws allowing in-state wineries to ship to local residents had to be available to out-of-state wineries as well, to conform to the Constitution’s delegation of interstate commerce to Congress. According to the ruling, even if Congress made no law specifically on wine sales, the very existence of the commerce clause prohibited restrictions.

But Congress did ratify the 21st Amendment, which gave exclusive regulatory rights over alcohol to the states. Therein lies the dispute.

The 6th Circuit Court of Appeals sided with Total Wine and threw out the residency requirement, further opening the loophole to the 21st Amendment first breached in 2005, from alcohol producers and products to retail businesses. The Tennessee Wine and Spirits Retailers Association appealed to the Supreme Court.

You could view Tennessee’s residency requirement as protectionism to prevent outside companies from doing business in the state. After all, it’s not like Tennesseans, in the home of Jack Daniels whiskey, are forced to do without liquor; they’re just restricted from purchasing from out-of-state sellers.

But Sandeep Vaheesan of the Open Markets Institute argues that opinions about the regulation are beside the point; in plain fact, states have been empowered with oversight over alcohol. The 21st Amendment “sought to ensure that alcohol would still be subject to close public oversight and gave this power to the states to structure markets for alcohol in accordance with local preferences,” Vaheesan and John Laughlin Carter write in an amicus brief to the court.

Vaheesan and Carter believe that the 2005 ruling “violated the plain language of the Twenty-First Amendment” and worry about the broader degradation of states’ ability to regulate alcohol, particularly based on a judge’s prerogative to deem a regulation protectionist. “States should have the authority to structure commerce in alcohol to promote a range of public ends, including but not limited to the protection of public health, the promotion of the responsible use of alcohol, and the maintenance of decentralized markets with many distributors and producers of alcohol,” they write.

Much more is at stake than whether the Ketchums have to wait two years to open a liquor store (indeed, since the Ketchums moved to Tennessee in July 2016, they could legally procure a state liquor license today, at least for the first year). If the ruling is broad enough, it could nullify retail bans on direct-to-consumer wine shipping. That would allow Amazon or Walmart to sell wine over the internet.

More immediately, Total Wine would have fewer restrictions to expand its business from 23 to all 50 states. The national chain has already overrun states without residency laws and would almost certainly use its market power to dominate Tennessee and everywhere else. Ironically, the Ketchums, who teamed up with Total Wine in the lawsuit, may be at risk from Total Wine’s power if they prove victorious.

And if that succeeds, Total Wine and other giants can gain political power in the states, threatening other laws regulating the usage of alcohol. Whether by extrapolation from judicial precedent or brute force lobbying power, those laws could fall. Alcohol markets in the United Kingdom have been effectively deregulated, leading to high rates of intoxication among young people and a veritable public health crisis. Breaking the state regulatory apparatus could worsen alcoholism in America.

At the Supreme Court last week, in oral arguments held on the 100th anniversary of the original Prohibition amendment, several justices appeared skeptical of the residency requirement, with Justice Elena Kagan intimating that it is “clearly protectionist.” But Kagan and Justice Neil Gorsuch did seem to understand the slippery slope at play; Gorsuch wondered whether the next step would be to ask for license to “operate as the Amazon of liquor.”

Gorsuch added that “alcohol has been treated differently than other commodities in our nation’s experience,” and the 21st Amendment provided a barrier to an easy adjudication of the case. Attorneys for the Tennessee Wine and Spirits Retailers Association made similar claims about the amendment’s granting of near-total regulatory power over alcohol to the states.

With Justice Ruth Bader Ginsburg absent and uncharacteristic silence from Chief Justice John Roberts, SCOTUSBlog called it “a hard case to handicap.” But the outcome matters enormously for Trone and the ability to extend his wine empire.

Total Wine has often used tactics designed to corner markets. The company has paid fines in Connecticut for selling wine and liquor below cost, a form of predatory pricing to gain market share and drive competitors out of business. Massachusetts accused Total Wine of the same tactic in 2016, but the company sued the state and won on appeal. In 2016, Maryland cited Total Wine for giving campaign contributions above state limits.

Through it all, Total Wine has continuously expanded across the eastern seaboard and the Southwest, and it ships wine wherever state laws allow. Where expansion has been curtailed, Total Wine has employed lobbyists and lawyers, suing Connecticut, New Jersey, Maryland, and Massachusetts over its alcohol laws.

The midterm victory was Trone’s second attempt at a House seat; he unsuccessfully ran in 2016 for the seat vacated by Chris Van Hollen and now occupied by Rep. Jamie Raskin, D-Md. In 2018, he switched seats and ran in Maryland’s 6th, where residents mysteriously started receiving Total Wine circulars even though there are no outlets in the area.

Monopolists have served in Congress before; railroad baron Leland Stanford was a U.S. senator. Trone’s ascendance offers another bridge back to the late 1800s in our new Gilded Age.

Hannah Muldavin, communications director for Trone, told The Intercept to contact Total Wine directly for comment. A spokesperson for Total Wine replied that the company doesn’t comment on pending litigation.


A Democratic Firm Is Shaking Up the World of Political Fundraising

When Kara Eastman pulled off a primary upset this past spring in Nebraska’s 2nd Congressional District, a swing seat in the Omaha metro region, she did so with no help from the national Democratic party. Eastman, a social worker and first-time candidate running on an unapologetic left-wing platform, was competing against former Rep. Brad Ashford, who served for years in the Nebraska legislature and one term in Congress between 2014 and 2016.

Despite Ashford’s long track record of supporting abortion restrictions, pro-choice groups like EMILY’s List, Planned Parenthood, and NARAL Pro-Choice America opted to stay out of the race. The Democratic Congressional Campaign Committee, or DCCC, elevated Ashford to their “Red to Blue” list, a signal of official party support for competitive races, and political action committees controlled by House leader Nancy Pelosi, D-Calif., and Rep. Steny Hoyer, D-Md., kicked in over $28,000 to Ashford’s bid.

Eastman, who embraced not only reproductive freedom but also policies like “Medicare for All,” tuition-free college, a $15 minimum wage, and increased gun control, struggled early on to compete. While her proposals and personal story were popular, finding donors was hard.

Yet by the time her primary rolled around, Eastman emerged the winner, raising close to $400,000 and benefitting from a flurry of late-stage media coverage. Using a new digital fundraising company to target customized groups of donors across the country — such as all Democrats who identify as social workers or those who back “Medicare for All” — Eastman’s team was able to change the trajectory of the race.

Her campaign credits Grassroots Analytics, an obscure tech startup that’s quietly shaking up the Democratic campaign finance world. Not a single article has ever been written about the company or even mentioned it, despite the company having aided some of the biggest upsets of the 2018 cycle, including Joe Cunningham in South Carolina, Lucy McBath in Georgia, and Kendra Horn in Oklahoma.

“Grassroots Analytics absolutely was what allowed us to be competitive in the primary and get on TV, otherwise there is no way we would have won,” said Dave Pantos, the finance director for Eastman’s campaign. “We were definitely not the mainstream candidate, and we didn’t have access to donor lists that more establishment candidates have.” Eastman ended up losing the general election, earning 49 percent of the vote, but has already announced that she’s jumping back in the fray for 2020.

Grassroots Analytics says it wants to level the playing field and to make it easier for candidates to run who don’t already have a built-in network of wealthy family, friends, and co-workers. Using an algorithm to clean and sort publicly available data spread across the internet, the company provides campaigns with customized lists of donors who they believe are most likely to support them. If you’re involved in the world of political fundraising, a thought has probably occurred to you just now: Wait, isn’t that illegal? Hold that thought.

Establishment groups like the Democratic National Committee, the DCCC, and EMILY’s List have largely given the firm the cold shoulder, despite its goals and the fact that it worked with 137 campaigns in the last cycle. Not even mainstream progressive organizations like Our Revolution or Justice Democrats would return Grassroots Analytics’s entreaties to work together.


Danny Hogenkamp, founder and director of Grassroots Analytics, in Washington, D.C.

Photo: Justin T. Gellerson for The Intercept

Danny Hogenkamp, the 24-year-old founder and director of Grassroots Analytics, wasn’t expecting to end up in this kind of business. He had no background in politics; he studied Arabic at the University of North Carolina at Chapel Hill and assumed he’d end up doing foreign policy or refugee resettlement work after college.

But after graduating in 2016, with no job yet to speak of, he decided to go crash with some relatives in Syracuse, New York, where he was born, and try his hand in a congressional campaign. He enlisted with first-time candidate Colleen Deacon, a 39-year-old single mother who had worked as Sen. Kirsten Gillibrand’s regional aide in upstate New York. Deacon, who previously lived on Medicaid and food stamps, campaigned on putting herself through college with minimum wage jobs and student loans.

Hogenkamp was placed on the finance team, where he was charged with raising money and managing a team of 20 unpaid interns. It was there that he first encountered the opaque world of political fundraising — a world that even many organizers, pundits, and journalists can hardly grasp.

“I had no idea what campaigns were like, and it turns out that literally what candidates actually do to raise money, unless you’re really well-connected and famous, is sit in a room and call rich, old people to beg for $1,500, $2,000, or preferably [the federal maximum] of $2,700,” he said.

To run a competitive House race, Deacon’s campaign knew it needed to raise between $1.5 million and $2 million. Syracuse is one of the poorer metropolitan areas in New York, and after the campaign exhausted all the local prospective donors it could think of, the next step was the big open secret in political campaigning: finding similar candidates in other states and races and then researching who donated to their campaigns. So, for example, Deacon staffers would search for similar candidates — like Monica Vernon, who was running for Congress at the same time in Iowa — and then try and track down the contact information for the donors listed on their Federal Election Commission reports.

“Our interns would literally just Google people and try to find their phone numbers,” Hogenkamp said. “But donors change their numbers all the time, and they’re hard to find.”

The whole thing was invariably slow and disorganized. “It was the stupidest process,” he said. “It’s not digitized; there’s no math; it’s just random and stupid.”

Hogenkamp, still pretty much an idealistic novice, was convinced that there had to be a better way, some obvious step he was missing. So, from his perch as a relatively high-level finance staffer on Deacon’s team, he reached out to everyone he could think of — like the DCCC, EMILY’s List, liberal consulting firms, and other politicians — to find out how to make this fundraising process easier. “No one had any good answers; they said, ‘Well, this is just how you do it,’” he said. Hogenkamp recalled Gillibrand’s team telling him about its personal wealthy contacts in New York and how fundraising for the campaign meant going to those people and asking each of them to go out and find 10 more donors within their own networks.

Eventually, Hogenkamp connected with David Chase, a Democratic political operative who was then managing the campaign for Rubén Kihuen in Nevada’s 4th Congressional District. Chase offered a bit of help: He had developed a very rudimentary tool to aid his team’s fundraising efforts.

“Using OpenSecrets, I built some product that allowed you to search through all the federal and state contributions,” Chase told The Intercept. “It was very simple — I don’t have any advanced technological skills — but I wrote a script that allowed you to upload a list and it spit back the stats on the amount of times someone had given to state races and their average contributions.” In other words, for someone looking to discover who had given $500 or so to multiple candidates, Chase’s tool provided a way to more quickly glean that information.
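
A rough reconstruction of that kind of tool might look like the sketch below: it takes an uploaded list of names and a CSV of contribution records, then reports how many times each person gave and their average contribution. The column names and file paths are assumptions for the example, not details of Chase's actual script.

```python
# Rough sketch of the tool described above: given a list of names and a CSV of
# contribution records, report how many times each person gave and their average
# contribution. Column names and file paths are assumptions for illustration.
import csv
from collections import defaultdict

def contribution_stats(names_path: str, contributions_path: str) -> dict:
    with open(names_path) as f:
        names = {line.strip().lower() for line in f if line.strip()}

    totals = defaultdict(lambda: {"count": 0, "sum": 0.0})
    with open(contributions_path, newline="") as f:
        # Assumed columns: contributor_name, amount
        for row in csv.DictReader(f):
            name = row["contributor_name"].strip().lower()
            if name in names:
                totals[name]["count"] += 1
                totals[name]["sum"] += float(row["amount"])

    return {
        name: {"count": t["count"], "average": round(t["sum"] / t["count"], 2)}
        for name, t in totals.items()
    }

if __name__ == "__main__":
    for name, stats in contribution_stats("upload.txt", "state_contributions.csv").items():
        print(name, stats)
```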

Chase explained his tool, and Hogenkamp realized that there was a lot more he could do with an idea like that. During college, he had interned at the Consumer Financial Protection Bureau, where he learned to model how likely students were to default on their student loans. “I just randomly had a background in R and Python and zero-inflated negative binomial regressions from my time at the CFPB, so it was really just serendipitous that I actually knew what to do,” he said. Following that conversation, Hogenkamp went back and recruited a bunch of Syracuse University computer science students to help him build out his vision.

The result was effectively what he calls a “cleaner” of publicly available data, scraped from across the internet, that analyzes and sorts information for more than 14.5 million Democratic donors over the last 15 years. The tool would generate lists of individuals most likely to support a candidate given shared characteristics and shared views — ranging from race and ethnicity to a passion for yoga or universal health care.

“We know where you live; where you used to live; what issues you care about; if you’re trending Republican or Democrat; what other kinds of candidates you like to support; and contact information,” he explained.
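
To make the idea concrete, here is a hypothetical sketch of how a cleaned donor table could be filtered and ranked against a candidate's profile. The attributes, scoring and sample rows are invented for illustration and are not Grassroots Analytics's actual schema or algorithm.

```python
# Hypothetical sketch: rank donors in a cleaned table by overlap with a
# candidate's profile (party lean, location, issues). Columns, weights and
# sample rows are invented for illustration only.
import pandas as pd

donors = pd.DataFrame([
    {"name": "A. Donor", "state": "NE", "issues": "healthcare;education", "party_lean": "D"},
    {"name": "B. Donor", "state": "IA", "issues": "healthcare;guns", "party_lean": "D"},
    {"name": "C. Donor", "state": "NE", "issues": "taxes", "party_lean": "R"},
])

candidate_profile = {"state": "NE", "issues": {"healthcare", "education"}, "party_lean": "D"}

def affinity(row) -> int:
    # Simple additive score: party match counts most, then location, then issues.
    score = 2 if row["party_lean"] == candidate_profile["party_lean"] else 0
    score += 1 if row["state"] == candidate_profile["state"] else 0
    score += len(candidate_profile["issues"] & set(row["issues"].split(";")))
    return score

donors["score"] = donors.apply(affinity, axis=1)
print(donors.sort_values("score", ascending=False)[["name", "score"]])
```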

The lists aren’t perfect or fully comprehensive. They exclude some websites for legal reasons, and when I asked to see my own donor profile, recalling a $25 donation I gave in college to an Ohio Democrat, Grassroots Analytics had no record of it, because I’ve never given above the $200 reporting minimum to a federal candidate, and only some states and localities disclose small-dollar donations. Had I donated $5 to Stacey Abrams’s gubernatorial campaign, by contrast, I would have shown up in their system.

Nevertheless, the tool offers candidates — especially insurgent and working-class ones who lack rolodexes of wealthy friends — a real window into what is arguably the most important part of any political campaign: early-stage fundraising. The unspoken rule of viability in federal campaigning is that if you haven’t amassed at least $250,000 by your first quarter financial report, you’re probably not a candidate who people will take too seriously. EMILY’s List, an acronym for “Early Money Is Like Yeast,” was founded precisely to help female pro-choice Democrats compete against men who have long received the bulk of political contributions from the heavily white and male political donor class. Yeast makes the dough rise.

Connor Farrell, the finance director for Abdul El-Sayed, a left-wing former candidate who ran for Michigan governor this past cycle, credits Grassroots Analytics with fast-tracking his campaign’s fundraising, allowing the team to target progressives and doctors across the country. (El-Sayed campaigned on his credentials as a physician and public health expert.) “The applications of this new tool were valuable for our call-time operation, building for events, and some digital solicitations,” Farrell told The Intercept. “Grassroots saved us enormous research time, while allowing us to pivot quickly to new avenues of research. For a bootstrapped campaign, saving time and being flexible in your finance department is critical.”


Campaign posters decorate the wall of Danny Hogenkamp’s office in Washington, D.C.

Photo: Justin T. Gellerson for The Intercept


Is Grassroots Analytics legal? And moreover, in the age when big tech companies are under fire for sharing personal information — not to mention Cambridge Analytica, the political consulting firm, hired by the Trump campaign in 2016, which gained access to more than 50 million Facebook users’ private data — is it ethical?

Depends on who you ask. Federal law prohibits “any information copied from” Federal Election Commission reports from being “sold or used by any person for the purposes of soliciting contributions or for commercial purposes.” Subsequent regulation prohibits “information copied, or otherwise obtained, from any [FEC] report or statement, or any copy, reproduction, or publication thereof” from being sold or used for soliciting contributions. But because these laws date back to before the advent of the internet, and campaigns across the country already scour through FEC lists for leads, Grassroots Analytics says it’s effectively just simplifying the process that hordes of interns and finance staffers already do every day when they set out to research donor prospects.

To comply with the legal prohibition, Grassroots Analytics bars its algorithm from scraping the FEC website and websites like OpenSecrets that aggregate data directly from the FEC. Instead, Grassroots Analytics collects campaign contribution data only from public record caches, newspaper articles, nonprofit reports, and secondary websites. However, there’s little question that most of the campaign finance information they do collect originated at some point from FEC reports.

The company, in other words, exploits an ambiguity in the law, which is whether they have in fact “obtained” information from FEC reports. How many layers removed does information have to be in the age of the internet to pass legal muster? In an advisory opinion produced at the request of Grassroots Analytics that was reviewed by The Intercept, an attorney with one of California’s top boutique firms specializing in political and election law determined that existing law, court cases, and the FEC’s enforcement history “provide no clear answer to this question.” But because Grassroots Analytics takes steps to omit FEC data and sites that aggregate directly from the FEC, the attorney hired to assess their legal status determined that the company has a “legally defensible” position that its products and services do not violate federal law and that in their expert opinion, the FEC, especially with a Republican majority, is unlikely to conclude that the firm or its clients are breaking the law.

“I obviously didn’t go into tens of thousands of dollars of credit card debt to get this thing going without getting extensive legal advice from multiple law firms,” said Hogenkamp. “I’m a little rash sometimes, but I’m not that stupid.”

But should donor information, even if it’s technically public, be made so easily accessible?

“They’re donor pimps, that’s all they are,” said one fundraiser. “If you don’t know people, if your staff doesn’t know people, then you actually shouldn’t run for office. You’re not actually a good candidate.”

Others shrug off the critics, saying that while the FEC and secretaries of state should work to clarify campaign finance rules in the age of the internet, including for political advertisements, right now it’s no secret that most campaigns utilize FEC data in some fashion for solicitation purposes. “Most people think it’s fine to use those lists to research people a little further, to get a better picture of their donor history, and then turn them into leads,” one senior finance director told me.

“I think part of the debate is that the folks who’ve traditionally run the finance side of our party tend to be a little older, more focused on relationships and identifying event hosts and bundlers,” said Chase, who now works for a Democratic consulting firm. “But I think from the last cycle or two, you’ve seen a pretty dramatic shift in the way our party raises money. Digital fundraising exploded, and with that came folks like Danny who said, ‘Well, maybe we can do some of this stuff better than the traditional way of just calling rich folks and trying to get nice checks.’”

The debate also stems partly from confusion over what Grassroots Analytics is or actually does. Some suspect they’re just farming out lists of rich people to clients and engaging in another disapproved practice that’s rampant in the campaign industry — taking donor data from one campaign to another. Trading rich donor contact information is also not unusual among senior finance staffers.

Hogenkamp understands the mistrust. “Don’t get me wrong: I’m so skeptical of everyone in this industry. I totally understand how very smart people would think we’re just some kids with a list of like a hundred thousand donors and that we just make money off that same list,” he said. “But it’s like, no, we have more than 14 million people.”

There is another data analytics company that bills itself as helping candidates (and nonprofits and universities) become more strategic in their fundraising efforts. RevUp, which promises to “revolutionize your fundraising,” was started in 2013 by a top Obama fundraiser and Silicon Valley investor named Steve Spinner. It’s a software company that works with both Republicans and Democrats, helping campaigns to analyze their existing social networks, like their email contacts or LinkedIn connections, to more efficiently find new prospects to hit up. (Grassroots Analytics also analyzes clients’ LinkedIn data for donor prospects.) In October, RevUp, which has won several campaign industry awards for fundraising and innovation, announced a new $7.5 million round of investment.

A key difference between RevUp and Grassroots Analytics is that the former doesn’t expand the universe of donor prospects beyond your own network — it just helps you navigate and analyze the contacts in your existing universe more efficiently. From one vantage point, that’s more respectful, and skirts the thorny questions of legality and ethics. From another, it doesn’t do much to change the problem of connected people hoarding access to connected people.

“At RevUp, we believe successful fundraising is all about respecting prospective donors,” Spinner told The Intercept. “Through our data analytics software, RevUp uniquely allows a candidate, staff, or volunteer to reach out to the right person, at the right time, with the right ask. Our mission is to expand the donor universe and grow the pie beyond the ‘low-hanging fruit’ — the 25,000 major donors that get constantly called.”


Danny Hogenkamp, right, works with his colleague Derrick Flakoll, technical director of Grassroots Analytics, in their coworking space in Washington, D.C.

Photo: Justin T. Gellerson for The Intercept


When Hogenkamp first developed Grassroots Analytics, he hoped someone in the Democratic establishment would recognize the potential of this technology, buy him out, and give him the institutional support to make it grow. But despite his persistent appeals, almost no one would return his emails.

Yet while no groups would publicly associate with Grassroots Analytics, staffers for some major Democratic political organizations were discreetly referring their candidates to the company throughout the 2018 cycle. Two emails reviewed by The Intercept showed an EMILY’s List campaign operative connecting Grassroots Analytics to Sol Flores’s primary campaign in Illinois and to Veronica Escobar’s race in Texas. “Thanks again for all your work with all o[f] EMILYs List candidates,” they wrote.

Other emails showed Democratic consultants setting up deals with Grassroots Analytics, explaining that the DCCC would be the organization actually writing the check on behalf of their clients. (This was the case with Linda Coleman’s unsuccessful bid for Congress.)

The DCCC and EMILY’s List did not return multiple requests for comment. When the DNC rolled out its “I Will Run” program in April 2018, which was essentially a list of vetted technology companies that it recommended campaigns hire, DNC Tech Manager Sally Marx announced the committee had “surveyed the progressive tech ecosystem looking for tools that campaigns and state parties can use to upgrade their work.” Their list, the DNC said, was a “curated compilation of the best-in-class tools currently used by campaigns.”

RevUp was on there for recommended fundraising companies, but Grassroots Analytics was not. The DNC declined to make Marx available for comment, but in a statement provided by a spokesperson, the party committee claimed Grassroots Analytics “was not on our radar until recently. As we head into this cycle, we look forward to re-evaluating and potentially adding new vendors to our I Will Run marketplace.”

Spokespersons for Our Revolution and Justice Democrats also confirmed that they do not have relationships with Grassroots Analytics and have not referred their candidates to the company.

One person who did show an early interest and helped Hogenkamp break into the field was Molly Allen, a political consultant who runs the political action committee for Blue Dog Democrats. She met with him in 2017 and referred Grassroots to their first three clients.

“I’ve only met Danny a few times and haven’t formally worked with them more than a short-term one-off, so I can’t speak to their work in details, but Danny seems great and I respect his start-up idea and success!” wrote Allen in an email.

I asked Hogenkamp how he felt about his company breaking out by representing Blue Dogs, when he had envisioned it as a fix for the barriers blocking progressives from running for office.

“It was weird, but I was desperate,” he said.

Over the course of the last two years, though, Grassroots Analytics has decided to work with anyone running in the Democratic caucus, a decision Hogenkamp says was made to avoid positioning the firm as an arbiter of the left. (This could also just be a handy rationale to bolster its client list and profit margins.) But Grassroots Analytics, Hogenkamp adds, does have some red lines for clients; he says the firm turned down someone last year whose ties to charter school backers it considered too strong.


Data analyst Josh Townsend, right, goes over donor research with Danny Hogenkamp in a common area of the coworking space where Grassroots Analytics operates in Washington, D.C.

Photo: Justin T. Gellerson for The Intercept


But if this all could be started by a young person with barely any political experience, why hadn’t it been done before?

Multiple people interviewed for this article chalked the problem up to monopoly in the political industry and the disincentives to innovate that come with monopolies.

Sean Adler, a New York-based software engineer, said he ran into this problem when he tried to insert some innovation into campaigning four years ago. A friend of Adler’s had mounted a bid for Congress in New Jersey, and Adler, then 23 years old, started developing phone banking tools for the campaign’s volunteers. He ended up co-founding a company to sell the technology and called it Partic.

“I wrote the whole thing from scratch. We took the whole data file of voters, and it would distribute lists and show a script and do all sorts of custom assignments,” he explained. “But one thing that ended up being a bummer was the DCCC had their own thing they forced campaigns to use, so we ended up only getting the real scrappy campaigns, the real mega-underdogs.” Adler was referring to VAN, an omnipresent software company that provides what they describe as “an integrated platform of the best fundraising, compliance, field, organizing, digital, and social networking products.”

Adler built a host of new design features and capabilities to make phone banking more usable and successful than he felt VAN’s technology allowed. Partic worked with eight campaigns in the 2016 cycle but has since ceased operations, citing the enormous barriers new companies face in trying to compete effectively.

“No one can enter this market without a lot of connections, and that’s fair, but the customer base of this market also completely goes away every two years, so the only people who can sustain that are people who are already in it,” Adler told The Intercept. “I think I was too much of an idealistic liberal,” he continued. “I thought, ‘Oh sure, the Democratic Party might not be as technologically advanced as Google, but they certainly wouldn’t try to shut out people with better ideas and products in order to protect their friends.’ For the state of the world to change, people like Danny have to succeed. The establishment monopoly not only screws over local candidates no one has ever heard of, but it also screws over candidates at the very top.”

Ultimately, Hogenkamp says he wouldn’t mind being put out of business, citing the new bill introduced in the House this month for publicly financed elections.

“I sort of stumbled into this. I think the whole campaign fundraising system is stupid, and you know, if our country gets serious about publicly funded elections, I would so gladly shut down the business and go work in the State Department like I had planned,” he said. “I don’t care enough about this; the whole campaign finance system needs to be completely overhauled, but until that happens, the only way you’re ever going to do it is helping Democrats raise money to win competitive elections.”

The post A Democratic Firm Is Shaking Up the World of Political Fundraising appeared first on The Intercept.

What Your Cloud Business Needs to Know About SOC 2 Certification

A guide to SOC 2 compliance for SaaS developers and other cloud services providers

As cyber threats present greater risks to enterprises of all sizes and in all industries, more are requiring that their SaaS providers and other cloud services vendors have an SOC 2 certification. Let’s examine what an SOC 2 certification is and why your cloud services business should get one.


What is an SOC 2 report?

The SOC 2 is part of the American Institute of Certified Public Accountants (AICPA) Service Organization Control (SOC) reporting framework, utilizes the AT-101 professional standard, and is based on the five AICPA Trust Services Principles. Companies undergo SOC 2 audits to assure their clients that their organizations have implemented specific controls to effectively mitigate operational and compliance risks.

SOC 1 vs SOC 2 vs SOC 3

An SOC 1 report utilizes the SSAE 18 standard and reports on internal controls over financial reporting (ICFR), while an SOC 2 attestation is performed in accordance with AT-101 and addresses non-financial controls. The SOC 2 was developed to meet the needs of technology service providers, so that they could attest to their adherence to comprehensive data security control procedures and practices. Distribution of SOC 1 and 2 reports is restricted to certain stakeholders, such as compliance officers, auditors, or business partners.

The SOC 3 is a simplified version of an SOC 2. It reports on the same information, but the report is shorter, contains fewer details, and is meant for a general audience. Distribution of SOC 3 reports is unrestricted; they can be shared with anyone, including via posting on the company’s website.

What are the AICPA Trust Services Principles?

Companies undergoing an SOC 2 audit must attest to their compliance with one or more of the AICPA Trust Services Principles:

  • Security: Information and systems are protected against unauthorized access, unauthorized disclosure of information, and damage to systems that could compromise the availability, integrity, confidentiality, and privacy of information or systems and affect the entity’s ability to meet its objectives.
  • Availability: Information and systems are available for operation and use to meet the entity’s objectives.
  • Processing integrity: System processing is complete, valid, accurate, timely, and authorized to meet the entity’s objectives.
  • Confidentiality: Information designated as confidential is protected to meet the entity’s objectives.
  • Privacy: Personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives.

Reporting organizations are not required to address all five Trust Service Principles; SOC 2 attestations can be limited to the principles that are relevant to the services being provided.

SOC Type 1 vs Type 2

An SOC 2 Type 1 audit provides a snapshot of an organization’s controls at a point in time, while an SOC 2 Type 2 audit examines them over a specified period. Because the Type 2 is far more rigorous, this is the certification most companies will want their SaaS and cloud providers to have.

The benefits of an SOC 2 Type 2 attestation

Unlike regulatory standards such as PCI DSS and HIPAA, SOC 2 attestations are not required by law. However, they are well worth the investment. Companies that undergo SOC 2 Type 2 audits demonstrate to their customers that they have comprehensive internal security controls in place and that these controls have been tested over time and proven to work. This is a major competitive differentiator in an increasingly dangerous cyber threat environment: given a choice between two cloud services vendors, one with an SOC 2 Type 2 and one without, most companies will choose the vendor with the certification.

How much does an SOC 2 audit cost?

The cost of an SOC 2 audit depends on your organization’s size, data environment, and current security controls, as well as the method your auditor uses to perform your SOC 2 audit. Lazarus Alliance utilizes Continuum GRC’s IT Audit Machine (ITAM), the number-one-ranked IRM GRC audit software solution for AICPA SOC audits, which allows us to get our clients from start to compliant quickly and effectively while dramatically reducing their costs.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.

The post What Your Cloud Business Needs to Know About SOC 2 Certification appeared first on .

The danger of stolen data: credential stuffing attacks


When we talk about cyberattacks on companies, one word normally comes to mind: malware, every computer’s nightmare, which can infect systems and carry off not only a company’s most sensitive information but also that of its users, clients, providers, and employees.

However, malware isn’t always a cybercriminal’s tool of choice; in fact, in 2017 it started to give way to other kinds of attacks that are proving similarly successful at the same goal: breaking through victims’ corporate cybersecurity.

What is credential stuffing?

A credential stuffing attack is a kind of cyberattack in which the perpetrator takes username and password pairs gathered from a data breach and replays them, at scale, against another platform’s login page until some of the combinations unlock valid user accounts.

To carry out an attack of this kind, the cybercriminal must first obtain, steal, or buy a database of user accounts with their login names and passwords. The next step is to try to log in to the targeted platform using those details. Since there is no guarantee that any given pair will work, the strategy is to automate large volumes of login attempts, typically distributed across specialized botnets so that the traffic looks like it comes from legitimate users rather than a single source. Each successful login makes the credential stuffing attack a success.
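
One common defensive control against this pattern is a velocity check on failed logins. The snippet below is a minimal sketch in Python; the window size and threshold are arbitrary assumptions, and a real deployment would combine checks like this with IP reputation, device fingerprinting, and screening against known-breached passwords.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 300   # assumed sliding window: the last five minutes
    MAX_FAILURES = 20      # assumed threshold before a source looks automated

    failed_logins = defaultdict(deque)  # source IP -> timestamps of failed attempts

    def record_failed_login(source_ip, now=None):
        """Record a failed login and return True if the source should be throttled."""
        now = time.time() if now is None else now
        attempts = failed_logins[source_ip]
        attempts.append(now)
        # Discard attempts that have aged out of the sliding window.
        while attempts and now - attempts[0] > WINDOW_SECONDS:
            attempts.popleft()
        return len(attempts) > MAX_FAILURES

    # A scripted burst from one address quickly crosses the threshold.
    for i in range(25):
        throttle = record_failed_login("203.0.113.7", now=1000.0 + i)
    print("throttle this source?", throttle)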

The victims: Dunkin Donuts, Yahoo…

These cyberattacks are affecting an increasing number of companies. The latest victim was Dunkin Donuts. In November, the company detected the theft of credentials and their subsequent use in an attack on users of DD Perks, its loyalty and rewards program. The credentials stemmed from a data breach, although Dunkin Donuts stated that the breach didn’t happen on its own systems but on those of a supplier, which gave third parties access. Specifically, the user information came from a previous leak, and the cybercriminals used it both to access DD Perks accounts and to log in to other platforms that used the same credentials.

But one incident, unfortunately, takes the crown for credential stuffing attacks: in 2016, around 500 million Yahoo accounts were seriously compromised following the earlier leak of a vast amount of account information in another data breach. In this case, the breach had one more outcome: when Yahoo went public with the incident, many users received emails from people claiming to belong to the company, containing a link that supposedly resolved the problem. These emails were, in fact, a phishing attempt by another group of cybercriminals.

Success rate and how to avoid them

When it comes to evaluating the potential damage of credential stuffing, it is important to get some perspective. According to a Shape Security study carried out in 2018, their success rate is usually, at best, 1%, a figure that may make this attack seem insignificant.


However, we must bear in mind that these cyberattacks usually draw on databases containing the credentials of several million users. Even a 1% hit rate against a list of five million credentials yields roughly 50,000 compromised accounts, so the success rate, though modest in relative terms, is large enough in absolute terms to seriously damage the affected company’s reputation by exposing weaknesses in its corporate cybersecurity.

Companies must therefore take appropriate steps to avoid both data breaches and possible credential stuffing attacks.

1.- Two-factor authentication. Two-factor authentication (2FA) is one of the most commonly used methods for companies and platforms that want to ensure a secure login for their users. However, as we have already seen, 2FA is not infallible, since it can be defeated by tricking users into entering their details on fake portals.

2.- Cybersecurity solutions. A company’s security cannot rely 100% on users managing their passwords correctly, especially since the attack very often comes first: data breaches are frequently the consequence of poor corporate cybersecurity management rather than of poor password management by users. This is where Panda Adaptive Defense comes in: its data protection module, Panda Data Control, monitors data in all its states, including at rest, so the solution knows at all times what processes are running and what data is being used.

3.- Employee awareness. Companies must also instill a series of prevention measures in their employees, who are often the easiest point of entry for cybercrime. Employees must remain alert, avoid giving out their credentials via email (to guard against phishing, tech support scams, and BEC scams), and report any suspected incident to the company’s head of IT.

The post The danger of stolen data: credential stuffing attacks appeared first on Panda Security Mediacenter.

Recorded Future Adds Third-Party Risk to Threat Intelligence Platform

Over the last few years, the supply chain has emerged as a primary attack vector for both criminal gangs and nation-state groups. Attackers often compromise smaller, less well-defended suppliers in order to gain access to larger primary targets. The problem is getting worse with the ongoing digital transformation of business around the world: more companies are dealing with each other electronically than ever before.


Protecting Critical Infrastructure and Roadways: How Smart Cities Create New Risks

Advanced technology has changed countless facets of everyday life, from internal enterprise processes to consumer pursuits and beyond. Even the design, management, and support of large and small cities have shifted thanks to innovative smart city systems.

While advanced components to support utilities, critical infrastructure, traffic and more can bring numerous benefits, these solutions also open both urban and rural areas to new risks and cyber threats.

Here, we take a closer look at city infrastructure and roadways, including energy and water utilities and highway transportation systems; the changes being made in these areas; and how new technologies must be balanced against proper risk assessment.

Upgrading water and energy infrastructure

There’s simply no doubt that access to water and energy resources is among the most essential services for residents. In many areas, city managers and officials are looking to upgrade their existing systems, some of which are decades-old legacy installations, with updated, intelligent technology.

As Trend Micro pointed out, such systems are able to run in the background, helping to manage and maintain water and energy infrastructures with little human interaction. This, in turn, boosts efficiency and, in theory, helps reduce the chances of long-term outages that result from inclement weather or other critical infrastructure issues.

At the same time, though, upgrading water and energy systems with smart technologies could, as Trend Micro researchers noted, “come at a cost.” Putting intelligent platforms in place where there previously were none could create significant risks that must be considered and mitigated ahead of time.

“Using Shodan and other tools, Trend Micro researchers looked into the possible weaknesses of exposed industrial control systems (ICS) across the energy and water industries,” researchers explained. “The results give a glimpse of security gaps found in ICS and human machine interfaces (HMIs) … that could lead to bigger problems due to the interdependent nature of critical infrastructure sectors and, more importantly, the natural dependence of people on these infrastructures.”
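
To give a sense of what that kind of survey looks like in practice, here is a minimal sketch using the official shodan Python package; the API key is a placeholder, the query string is purely illustrative, and searches like this should only be run as part of authorized research.

    import shodan  # official Shodan Python library (pip install shodan); requires a valid API key

    API_KEY = "YOUR_API_KEY"  # placeholder
    api = shodan.Shodan(API_KEY)

    # Purely illustrative query; real surveys use vendor- and product-specific terms.
    results = api.search('"Niagara Web Server"')

    print("Exposed hosts found:", results["total"])
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("port"), match.get("org") or "unknown org")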

In many instances, the security risks that could potentially impact water utilities overlap with those that threaten access to energy resources:

Cyberattacks

Unsurprisingly, a leading concern here is the possibility of cyberattacks that could prevent access to these resources, or create situations of extended downtime. A long-term power outage or inability to access running water could have severe consequences for small and large cities alike, creating panic and potential public health impacts among residents. The ways in which attackers might achieve a successful intrusion and cyberattack differ, and are delved into more deeply below, but the potential for this risk is clear across utility sectors.

Exposed devices

As Trend Micro explained in its report, “Exposed and Vulnerable Critical Infrastructure: Water and Energy Industries,” researchers discovered that several devices – including human machine interfaces, remote desktop protocol endpoints, virtual network computing systems and other components – are currently exposed on the internet. These exposed devices provide an ideal point of attack for cybercriminals looking to support an intrusion.

Researchers found different levels of exposure and different reasons behind the issue, including improper setup of remote access functions, unsecured access granted to third parties, and incorrectly configured network settings. These weaknesses make it possible for attackers to access exposed devices and leverage them to steal sensitive personally identifiable customer information; to gain entry to the network and support sabotage or fraud; or to run illegal operations over the network, including DDoS attacks, botnets, cryptocurrency mining and other malicious activity.

Once an exposed device has been identified, the potential for misuse by attackers leading to other security issues and attacks is nearly limitless. Worse still, this issue impacts all different types of energy and utility plants, including those for oil and gas, solar energy, hydroelectric plants, water treatment, and other industrial facilities.

Example of a real-world threat scenario

Within the report, Trend Micro researchers look into several potential real-world threat scenarios that could take place thanks to exposed human machine interfaces and other devices within the industrial sector.

“One of the greatest concerns for organizations in this sector is the possible effect of direct cyberattacks on their operations, thereby leading to a disruption of supply to and from the plant,” Trend Micro researchers explained. “This is especially true for water facilities that either purify water for distribution or use water in their operations.”

A water treatment plant, for instance, could be attacked through exposed human machine interface controls that are reachable over the public internet. Controls that are not properly secured could provide the ideal opening for an attack that interrupts operations and prevents the plant from supplying drinking water.

Attacks on highway infrastructure

As Trend Micro researchers noted in the report, “Cyberattacks Against Intelligent Transportation Systems: Assessing Future Threats to ITS,” intelligent transportation systems pose risks similar to those of smart infrastructure.

Successful attacks on transportation systems can have numerous malicious consequences, including vehicular accidents; traffic jams that impact service delivery, the movement of freight, and daily commutes; and ripple effects that create financial losses for businesses, individuals, or cities.

The intelligent systems that could be affected here include autonomous vehicles, as well as connected vehicles equipped with LAN or Wi-Fi connections. Roadway reporting systems, encompassing elements like lane cameras, roadway weather stations and other platforms, fall under this risk umbrella, as do traffic flow controls like traffic signals, message signs and toll collection systems.

The potential risk of attack here differs depending on the scenario, but as Trend Micro pointed out in its report, several real-world attacks have already taken place. In one instance, an individual hijacked a dynamic traffic sign and changed its message to “Drive Crazy Y’all” as a prank. Surprisingly, this attack was made possible through default login credentials that were easy to guess.

In a more damaging example, the San Francisco Municipal Transportation Agency was hit in 2016 by ransomware that shut down internal and commuter systems. Fare payment machines became inaccessible, displaying “OUT OF SERVICE” messages and preventing riders from paying their fares. In response, the agency had to allow free rides on its light rail until the issue was resolved.

As this scenario shows, an attack on transportation infrastructure can be highly disruptive and carry significant financial repercussions. Other incidents might affect emergency services or other crucial transportation-dependent needs.

These issues highlight the critical responsibility on the part of utility providers and organizations involved with transportation management. These groups must be sure they are aware of these potential threats and are working proactively to mitigate them.

To find out more and to read about other potential and actual attack scenarios involving critical infrastructures, check out Trend Micro’s reports, “Exposed and Vulnerable Critical Infrastructure: Water and Energy Industries,” and “Cyberattacks Against Intelligent Transportation Systems.”

The post Protecting Critical Infrastructure and Roadways: How Smart Cities Create New Risks appeared first on .

Chinese Hacker Publishes PoC for Remote iOS 12 Jailbreak On iPhone X

Here we have great news for all iPhone jailbreak lovers, and concerning news for the rest of iPhone users. A Chinese cybersecurity researcher has today revealed technical details of critical vulnerabilities in the Apple Safari web browser and iOS that could allow a remote attacker to jailbreak and compromise victims’ iPhone X devices running iOS 12.1.2 and earlier. To do so, all an attacker needs to

Quantify Third-Party Risk in Real Time With Our New Module

At Recorded Future, our mission has been to empower our users to defend themselves against cyber threats at the speed and scale of the internet. Empowerment means giving you the capabilities necessary to understand and manage your own risk environment — and the Recorded Future® Platform helps you measure and understand your own risk environment in real time, with full transparency to original sources of risk data. First-party risk reduction remains our first and foremost goal, and in today’s world, that means managing third-party risk, as well.

Leading companies in every industry today are undergoing digital transformation. They are driving more online and mobile access, more transparency, more interconnection of processes across their businesses, all with faster cycle times. These changes further blur the lines between an organization and its partners, suppliers, vendors, and other third parties. Interconnection creates advantages but also expands attack surfaces. Now more than ever before, the state of our security is only as strong as the weakest link.

That’s why Recorded Future is introducing our new Third-Party Risk module. An add-on to our core platform, Third-Party Risk helps you quantify the threat environments of your business partners. It is a powerful complement to traditional risk management processes focused on compliance frameworks, reviews, and audits. For organizations where risk management and security teams work together to identify and reduce risks, the Third-Party Risk module generates the threat intelligence they need to understand the risks stemming from their third-party associates.

Digital Transformation and Risk

Countless organizations are implementing digital technologies to transform the way they gather, store, and analyze data. More and more data is stored in the cloud, moving data control systems and processes from in-house to third-party providers. In industrial settings, businesses are using internet-connected sensors to gather vast amounts of operational data. Customer service is moving from phone to chat to automated self-service, creating a more robust, data-driven understanding of each customer’s support experience.

This digitization is happening rapidly across sectors, in many cases using solutions that follow security best practices inconsistently, whether on the supplier’s side or the buyer’s. Can we be confident that every one of the organizations we work with is as rigorous about its own security as we are?

Managing Third-Party Risk

Traditional approaches to managing third-party risk often involve these three steps:

  1. Attempt to understand your organization’s business relationship with the third party, getting a grasp on the nature and degree of your exposure to risk.
  2. Based on that understanding, identify the right frameworks to evaluate that third party’s financial health, corporate controls, and IT security and hygiene, and how they relate to your organization’s own approach to security.
  3. Use those frameworks to assess the third party, usually through risk reporting, evaluations of compliance with security standards such as SOC 2 or FISMA, or investigations such as a financial audit.

These remain essential steps in evaluating third-party risk. But they don’t tell the whole story.

What Recorded Future’s Third-Party Risk module does differently is provide transparency into the threat environment of the companies you work with. Being able to quantify risk will help you determine the right course of action from an educated standpoint and ask the right questions when evaluating business partners. We’ll look a little more closely at how this can work in the next section.

Our module does this through key features such as:

  • Intelligence Cards: Tens of thousands of company Intelligence Cards provide an easy-to-read overview of company risk, all in one place and updated in real time.
  • Real-Time Risk Scoring: Risk scores are dynamically determined from real-time data with transparent sourcing and risk rules, allowing security professionals to look at evidence behind triggered risk rules and set up automatic alerts on changes to risk severity.
  • Integration Into the Complete Solution: Access to third-party risk data from directly within our threat intelligence platform makes pivoting into investigations seamless and keeps all of your alerts in one place, making it easy to monitor new and emerging threats.

One of the greatest values provided by the machine learning and automation that drives the Recorded Future platform is the speed of real-time data and updates at scale. Knowing how and when the threat environment changes can mean the difference between knowing you’re exposed to a vulnerability in your supply chain and getting attacked through a vector you weren’t even aware of. Automation expands your ability to monitor the threat landscape without adding more to your workload.

Asking the Right Questions

We believe open communication between different teams is the key to a flourishing security function at any organization. The Third-Party Risk module gives security teams another way to help risk management and procurement teams apply threat intelligence to their work.

This goes beyond the technology risk management capabilities of the core Recorded Future platform. Alongside questions like, “What are my assets, what are their vulnerabilities, and how am I patching them?” security professionals can now also ask the right questions of the third parties they work with and get ahead of threats.

Let’s say a huge set of credentials leaked from a business partner in a new data dump on the dark web. Through alerts set up in the Recorded Future platform, your security analysts are immediately notified about this leak, which might expose your own organization. With this information, your team can immediately take the proper precautions, like resetting passwords and more closely monitoring some accounts. Without alerting through our Third-Party Risk module, your organization may have to wait until your partners choose to disclose the leak before taking action.
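
As a rough illustration of how such an alert could feed an automated response, consider the sketch below. The payload shape, field names, and helper functions are hypothetical assumptions, not the Recorded Future API; a real integration would use the platform’s own alerting interfaces together with your IAM and SIEM tooling.

    # Hypothetical alert payload; field names are assumptions for illustration only.
    leaked_credentials_alert = {
        "rule": "third_party_credential_leak",
        "partner_domain": "examplepartner.com",
        "leaked_accounts": ["jane@examplepartner.com", "ops@examplepartner.com"],
    }

    # Accounts provisioned in our own systems for partner staff (also an assumption).
    partner_accounts_in_our_directory = {"jane@examplepartner.com"}

    def force_password_reset(account):
        print("[IAM] password reset triggered for", account)    # placeholder call

    def add_to_watchlist(account):
        print("[SIEM] added to high-risk watchlist:", account)  # placeholder call

    def handle_alert(alert):
        """Reset and monitor any of our accounts that appear in the partner leak."""
        if alert["rule"] != "third_party_credential_leak":
            return
        for account in alert["leaked_accounts"]:
            if account in partner_accounts_in_our_directory:
                force_password_reset(account)
                add_to_watchlist(account)

    handle_alert(leaked_credentials_alert)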

As the digital realm expands and our security processes become interdependent on those of our partners, suppliers, and other third parties, evaluating third-party risk through threat intelligence is an increasingly essential part of any threat analysis and risk mitigation program. By integrating third-party risk into our universal threat intelligence platform, Recorded Future provides the most comprehensive solution for threat intelligence teams.

Learn More About Third-Party Risk

ESG took a close look at how companies are managing their third-party risk today and concluded that many current processes are lagging behind security requirements: 44 percent of IT organizations, for example, said they lacked sufficient resources to audit the security of third parties, and 39 percent said their data collection and analysis was also insufficient.

Download this new report from ESG to see why real-time threat intelligence like that offered by the Recorded Future platform is so critical for monitoring third-party risk.

The post Quantify Third-Party Risk in Real Time With Our New Module appeared first on Recorded Future.

     

The App Approval Workflow Keeps Enterprise Security in Check Without Disrupting Productivity

Mobile applications have become a part of our everyday lives. We use them to get where we’re going, stay in constant communication with others and get the information we need to be productive. Apps are no longer a novelty for today’s workforce; they’re a necessity. And with that necessity comes risk. Just like any enterprise technology, it’s crucial to take security measures to prevent data loss, threats and breaches.

But in the context of the enterprise, where apps are used to drive business outcomes, increase efficiency and improve worker productivity, how do they impact enterprise security? What can IT and security leaders do to ensure that the apps being pushed out to hundreds or even thousands of corporate devices meet security standards?

Security should always be a top priority in the enterprise, especially in today’s malware landscape. Chief information officers (CIOs) and chief information security officers (CISOs) are already taking proactive approaches to stay safe from attackers and combat exposures. With the help of a unified endpoint management (UEM) solution, mobile app security only takes a few steps, and it’s easier than you think.

Do Your Due Diligence Before App Deployment

Security teams must implement processes to prepare applications for enterprise use. To guarantee that apps follow the proper security protocols, IT must ask the following questions:

  • Were the apps developed with security in mind?
    With the abundance of available apps on the market, IT leaders should ensure that the apps they deploy were developed without security flaws that could put critical enterprise systems and data at risk.
  • Have the apps been properly vetted? What steps and tools have been implemented to ensure the apps IT pushes to end users are, in fact, safe? This examination process helps IT leaders confirm apps are secure and can be approved for deployment.
  • Are existing tools and technologies being used to scan for malicious code and irregularities? Out of all the available tools for IT teams, it’s best to find and use a solution that offers a built-in approach, rather than trying to make multiple tools communicate in a productive manner.

These questions are important to the enterprise at large because they will help guarantee the overall security of mobile applications before they’re distributed to end users.

Register for the Feb. 7 webinar to learn more

A New Framework for App Review and Approval

To get the most out of your apps while ensuring their predeployment security, your IT teams must follow the app approval workflow. It’s now easier to deploy enterprise apps so that every stakeholder — including security officers, IT administrators and development teams — has an opportunity to engage at the right stage of the process and weigh in to verify that the apps are secure and ready for deployment.

The approval workflow follows a logical sequence to make sure every precaution and test is completed to get the app approved for distribution. Third-party vendors have security and malware checks in place to review private enterprise apps. Working in conjunction with a UEM solution, it is now easier to upload, check and deploy enterprise apps to your fleet of devices.

Once the workflow is completed, IT and security leaders can rest assured that they’ve taken all the necessary steps to secure their apps before users even download them.

Follow These Steps for Total Enterprise Security

The app approval framework is now available to all IBM MaaS360 with Watson administrators to help them securely deploy their enterprise apps while using existing technology.

An example of the app approval workflow proceeds as follows (a minimal code sketch of the same sequence appears after the list):

  1. App upload: The UEM admin uploads the enterprise app to the portal, but does not yet deploy it. Instead, the admin goes to the app approval menu.
  2. Vendor integration: UEM integration must be completed on the security vendor’s site before any approval workflow can begin.
  3. App review: The admin chooses a security vendor for the application approval and submits the app for review.
  4. Results: An email containing the results of the scan is sent to an app approver, such as a security officer who is a UEM admin, for review. The app approver provides a quality check of the results and shares them with internal stakeholders. If the app doesn’t pass enterprise security criteria, it must be patched or coded and resubmitted for review.
  5. App deployment: Once the app is fully approved, it can be deployed to the entire fleet of devices within the UEM portal.
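
The sketch referenced above models this sequence as a simple state machine in Python; the state names and the review function are illustrative assumptions, not the MaaS360 API.

    from enum import Enum, auto

    class AppState(Enum):
        UPLOADED = auto()
        SUBMITTED_FOR_REVIEW = auto()
        APPROVED = auto()
        REJECTED = auto()
        DEPLOYED = auto()

    def run_approval_workflow(scan_passed):
        """Walk an uploaded app through vendor review to deployment or rejection."""
        state = AppState.UPLOADED                 # step 1: admin uploads, no deployment yet
        state = AppState.SUBMITTED_FOR_REVIEW     # steps 2-3: vendor integration and review
        if scan_passed:
            state = AppState.APPROVED             # step 4: approver signs off on scan results
            state = AppState.DEPLOYED             # step 5: app pushed to the device fleet
        else:
            state = AppState.REJECTED             # step 4 (fail): patch and resubmit
        return state

    print(run_approval_workflow(scan_passed=True))    # AppState.DEPLOYED
    print(run_approval_workflow(scan_passed=False))   # AppState.REJECTED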

App Approval Workflow Diagram

By having an all-encompassing solution that focuses on desktop, mobile and web apps, IT and security leaders can save time and resources and get their apps reviewed, approved and deployed in no time. This process can also prevent the headache of a potential security breach, which can be a costly endeavor to fix.


The post The App Approval Workflow Keeps Enterprise Security in Check Without Disrupting Productivity appeared first on Security Intelligence.

How Former Bomb Disposal Expert and Lighting Designer Shaked Vax Pivoted Toward a Cybersecurity Career

There’s no doubt that a cybersecurity breach can blow up a business, but it’s still surprising to hear Shaked Vax, worldwide technical sales leader at IBM Security, compare some aspects of his cybersecurity career to his time with the Israeli Army’s bomb disposal unit.

“One of the key things you are taught when approaching an improvised explosive device (IED) to dismantle it is to avoid coming from the obvious direction — the direction the attacker assumed you will come from,” Shaked explained. “Come from the back, from the side, from the top — however you can approach that is unpredictable.”

The same advice applies to cybersecurity, especially when it comes to the ways in which attackers target the users in their sights. The best way to identify them or launch a counterattack is by using the most innovative tools and approaching from the most unpredictable angle. According to Shaked, that’s how we can use attackers’ own methodologies against them.

Walking on Wires — and Cutting Them

Another link between Shaked’s two lives is caution. He believes, and has learned from experience, that being afraid actually helps to protect you because it makes you more alert. When you are bold and overconfident, that’s when mistakes may happen — whether that means using the wrong approach to dismantle a bomb, or being complacent with your company’s cybersecurity protocols.

“Newsflash: Stuff can hurt you, and you should be super alert when handling it,” the former bomb disposal expert advised. “Being cautious, on your toes and thinking of it as a rivalry allows you to be more in tune, and that’s something I took forward to in my role in cybersecurity. It’s how I operate and think now. It becomes ingrained in your veins and it really gets to be part of you.”

Shining a Light on Cybersecurity

Despite these strong threads between his past and present lives, a career in cybersecurity was not always in Shaked’s vision. He studied theater design at university and later went on to design lighting for rock concerts, operas, theater productions and TV studios.

While studying for his master’s degree, Shaked was offered a job working in an Israeli technology company that created lighting control boards — similar to the soundboards you see at concerts, but used to control the light show.

It was a great springboard for the budding lighting designer because he was hands-on in quality assurance and involved in new features and designs. A chance promotion saw him move into product and marketing management at the company, where he got even more engaged and started leading new offerings and feature designs.

“It was exciting because going to visit a customer meant I was going to meet lighting designers and lighting operators in a rock concert or an opera house or a disco club, which was awesome,” he recalled. “It was a great way to do market research.”

This area of theater design is “very, very technological,” Shaked explained. “You can imagine how much computing power is required to manage hundreds of lights that move and morph in real time, and how many innovative UI concepts need to go into a system to allow the operator to really interact with the show.”

So while he was working with his first love, he was developing another — technology — and becoming fascinated with how it interacts with our world. The dot-com bubble and the rise of the Israeli startup scene in the 2000s excited Shaked, and he wanted to push his technology career further, outside of lighting design. Colleagues recommended him for a role at cybersecurity firm Check Point, and thus his passion for lighting became just a passion again; his career was now cybersecurity.

Shaked moved up the ladder again at Check Point, where he worked in research and development and helped to innovate new security information and event management (SIEM) and Secure Sockets Layer virtual private network (SSL VPN) products, and later jumped around the tech scene as a product manager. He arrived at Trusteer just a few months before it was acquired by IBM Security in 2013.

“Trusteer got acquired by IBM, which gave me a great career path,” he said. “I got to expand in offering management, learning a lot about how a big business manages products and portfolios, and many more business perspectives.”

Shaked Vax approached his cybersecurity career from an unexpected angle

A Positive Spin on Fraud Prevention

As a product manager, Shaked had always been focused on the technology, the customers and the sellers. At IBM, he got to learn the business perspective of what he was doing.

He moved from Israel to Boston with his family three years ago to take on a strategic role, looking to expand the Trusteer business to new markets and solve new problems with the advanced fraud prevention technology. Although it was traditionally focused on banking and financial fraud, Trusteer’s technology is branching out.

“We call it trusted digital identity instead of fraud prevention,” said Shaked. “We’re looking more positively at how we enable businesses to do digital transformation and engage better with their customers over digital channels.”

Shifting focus from the negative implications of fraud and into more positive trust-based messaging is a market evolution, Shaked explained. Many technologies previously used for fraud detection are becoming increasingly intertwined with identity and access management (IAM) tools because identity fraud prevention centers on transparently ensuring that users are who they say they are.

Taking Identity Trust to New Places

“At the end of the day, authentication solutions were designed to correlate and prove digital identities,” said Shaked. “However, what was initially created as fraud solutions does that transparently. It does this without asking you anything, which is where everyone wants to be — passwordless, frictionless.”

Shaked now leads Trusteer’s technical sellers across the world as part of his mission to take the identity fraud prevention technology to new places. Although it’s a relatively new role, he is building the team and driving improvements in how it operates, ensuring that sellers have the tools and knowledge they need across the entire portfolio.

And if you’re wondering, yes, Shaked still occasionally has his hands in lighting design. The bomb disposal work, though, has stayed firmly in the past. These days, he’s focused on keeping businesses from blowing up.


The post How Former Bomb Disposal Expert and Lighting Designer Shaked Vax Pivoted Toward a Cybersecurity Career appeared first on Security Intelligence.

More Than Half of PC Applications Installed Worldwide Are Out-of-Date

Avast's PC Trends Report 2019 found [PDF] that users are making themselves vulnerable by not implementing security patches and keeping outdated versions of popular applications on their PCs. From a news report: The applications where updates are most frequently neglected include Adobe Shockwave (96%), VLC Media Player (94%) and Skype (94%). The report, which uses anonymized and aggregated data from 163 million devices across the globe, also found that Windows 10 is now installed on 40% of all PCs globally, which is fast approaching the 43% share held by Windows 7. However, 15% of all Windows 7 users and 9% of all Windows 10 users worldwide are running older and no longer supported versions of their product, for example, the Windows 7 Release to Manufacturing version from 2009 or the Windows 10 Spring Creators Update from early 2017.

Read more of this story at Slashdot.

Cryptocurrency and Blockchain Networks: Facing New Security Paradigms

On Jan. 22, FireEye participated in a panel focused on cryptocurrencies and blockchain technology during the World Economic Forum. The panel addressed issues raised in a report developed by FireEye, together with our partner Marsh & McLennan (a global professional services firm) and Circle (a global crypto finance company). The report touched on some of the security considerations around crypto-assets – today and in the future, and in this blog post, we delve deeper into the security paradigms surrounding cryptocurrencies and blockchain networks.

First, some background that will provide context for this discussion.

Cryptocurrencies – A Primer

By its simplest definition, cryptocurrency is digital money that operates on its own decentralized transaction network. When defined holistically, many argue that cryptocurrencies and their distributed ledger (blockchain) technology is powerful enough to radically change the basic economic pillars of society and fundamentally alter the way our systems of trust, governance, trade, ownership, and business function. However, the technology is new, subject to change, and certain headwinds related to scalability and security still need to be navigated. It is safe to assume that the ecosystem we have today will evolve. Since the final ecosystem is yet to be determined, as new technology develops and grows in user adoption, the associated risk areas will continually shift – creating new cyber security paradigms for all network users to consider, whether you are an individual user of cryptocurrency, a miner, a service-provider (e.g., exchange, trading platform, or key custodian), a regulator, or a nation-state with vested political interest.

Malicious actors employ a wide variety of tactics to steal cryptocurrencies. These efforts can target users and their wallets, exchanges and/or key custodial services, and underlying networks or protocols supporting cryptocurrencies. FireEye has observed successful attacks that steal from users and cryptocurrency exchanges over the past several years. And while less frequent, attacks targeting cryptocurrency networks and protocols have also been observed. We believe cryptocurrency exchanges and/or key custodial services are, and will continue to be, attractive targets for malicious operations due to the potentially large profits, their often-lax physical and network security, and the lack of regulation and oversight.

This blog post will highlight some of the various risk areas to consider when developing and adopting cryptocurrency and blockchain technology.

Wallet & Key Management

Public and Private Keys

There are two types of keys associated with each wallet: a public key and a private key. Each of these keys provides a different function, and it is the security of the private key that is paramount to securing cryptocurrency funds.

The private key is a randomly generated number used to sign transactions and spend funds within a specific wallet, and the public key (which is derived from the private key) is used to generate a wallet address at which the user can receive funds.


Figure 1: Private key, public key, and address generation flow
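
To make the flow in Figure 1 concrete, the following is a minimal sketch (not taken from the report) of deriving a public key from a random private key and hashing it toward an address, using Python's standard hashlib and the third-party ecdsa package. A real Bitcoin address additionally prepends a version byte and applies a checksum and Base58Check encoding, which are omitted here; RIPEMD-160 availability also depends on the local OpenSSL build.

import hashlib

from ecdsa import SECP256k1, SigningKey  # third-party package: pip install ecdsa

# 1. Private key: a randomly generated number, kept secret.
private_key = SigningKey.generate(curve=SECP256k1)

# 2. Public key: derived from the private key via elliptic-curve multiplication.
public_key = private_key.get_verifying_key().to_string()

# 3. Address material: a hash of the public key (SHA-256, then RIPEMD-160).
sha256_digest = hashlib.sha256(public_key).digest()
print("address payload (hex):", hashlib.new("ripemd160", sha256_digest).hexdigest())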

The private key must be kept secret at all times and, unfortunately, revealing it to third-parties (or allowing third-parties to manage and store private keys) increases convenience at the expense of security. In fact, some of the most high-profile exchange breaches have occurred in large part due to a lack of operational controls relating to the storage of private keys. Maintaining the confidentiality, integrity, and availability of private keys requires fairly robust controls.

However, from an individual user perspective, a large number of user-controlled software wallet solutions store the private and public keys in a wallet file on the user’s hard drive that is located in a well-known directory, making it an ideal target for actors that aim to steal private keys. Easily available tools such as commercial keyloggers and remote access tools (RATs) can be used to steal funds by stealing (or making copies of) a user’s wallet file. FireEye has observed myriad malware families, traditionally aimed at stealing banking credentials, incorporate the ability to target cryptocurrency wallets and online services. FireEye Intelligence subscribers may be familiar with this already, as we’ve published about these malware families’ use in targeting cryptocurrency assets on our FireEye Intelligence Portal. The following are some of the more prominent crimeware families we have observed incorporating such functionality:

  • Atmos
  • Dridex
  • Gozi/Ursnif
  • Ramnit
  • Terdot
  • Trickbot
  • ZeusPanda/PandaBot
  • IcedID
  • SmokeLoader
  • Neptune EK
  • BlackRuby Ransomware
  • Andromeda/Gamarue
  • ImminentMonitor RAT
  • jRAT
  • Neutrino
  • Corebot

Wallet Solutions

By definition, cryptocurrency wallets are used to store a user’s keys, which can be used to unlock access to the funds residing in the associated blockchain entry (address). Several types of wallets exist, each with their own level of security (pros) and associated risks (cons). Generally, wallets fall into two categories: hot (online) and cold (offline).

Hot Wallets

A wallet stored on a general computing device connected to the internet is often referred to as a “hot” wallet. This type of storage presents the largest attack surface and is, consequently, the riskiest way to store private keys. Types of hot wallets typically include user-controlled and locally stored wallets (also referred to as desktop wallets), mobile wallets, and web wallets. If remote access on any hot wallet device occurs, the risk of theft greatly increases. As stated, many of these solutions store private keys in a well-known and/or unencrypted location, which can make for an attractive target for bad actors. While many of these wallet types offer the user high levels of convenience, security is often the trade-off.

Wallet Type

Examples

Desktop

  • Bitcoin Core
  • Atomic
  • Exodus
  • Electrum
  • Jaxx

Mobile

  • BRD
  • Infinito
  • Jaxx
  • Airbitz
  • Copay
  • Freewallet

Web

  • MyEtherWallet
  • MetaMask
  • Coinbase
  • BTC Wallet
  • Blockchain.info

Table 1: Types of hot wallets

If considering the use of hot wallet solutions, FireEye recommends some of the following ways to help mitigate risk:

  • Use two-factor authentication when available (as well as fingerprint authentication where applicable).
  • Use strong passwords.
  • Ensure that your private keys are stored encrypted (if possible); a minimal sketch of one way to do this follows this list.
  • Consider using an alternative or secondary device to access funds (like a secondary mobile device or computer not generally used every day) and kept offline when not in use.
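
As a minimal illustration of the third recommendation above (keys encrypted at rest), the sketch below derives a symmetric key from a passphrase and encrypts raw private-key bytes using the third-party cryptography package. It is illustrative only; real wallet software uses its own storage formats, and the file and variable names here are made up.

import base64
import os

from cryptography.fernet import Fernet                     # pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_key_material(private_key_bytes: bytes, passphrase: bytes) -> bytes:
    # Derive a symmetric key from the passphrase; keep the random salt with the blob.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    fernet_key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    return salt + Fernet(fernet_key).encrypt(private_key_bytes)

# Hypothetical usage: the plaintext key should never be written to disk.
with open("wallet.key.enc", "wb") as f:
    f.write(encrypt_key_material(b"example-private-key-bytes", b"a long, unique passphrase"))
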
Cold Wallets

Offline wallets, also called cold wallets, are those that generate and store private keys offline on an air-gapped computer without network interfaces or connections to the outside internet. Cold wallets work by taking the unsigned transactions that occur online, transferring those transactions offline to be verified and signed, and then pushing the transactions back online to be broadcast onto the Bitcoin network. Managing private keys in this way is considered to be more secure against threats such as hackers and malware. These types of offline vaults used for storing private keys are becoming the industry security standard for key custodians such as Coinbase, Bittrex, and other centralized cryptocurrency companies. Recently, Fidelity Investments released a statement regarding its intention to play an integral part in Bitcoin’s custodial infrastructure landscape.

"Fidelity Digital Assets will provide a secure, compliant, and institutional-grade omnibus storage solution for bitcoin, ether and other digital assets. This consists of vaulted cold storage, multi-level physical and cyber controls – security protocols that have been created leveraging Fidelity’s time-tested security principles and best practices combined with internal and external digital asset experts."

-Fidelity Investments                                
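
The offline-signing flow described above can be sketched as follows. This is not any particular wallet's API; the transaction serialization and broadcast steps are placeholders, and only the division of labor (keys stay on the air-gapped machine, the network-facing machine only ever sees signed transactions) reflects the process described.

from ecdsa import SECP256k1, SigningKey   # third-party package: pip install ecdsa

# Offline (air-gapped) side: the private key never leaves this machine.
signing_key = SigningKey.generate(curve=SECP256k1)

def sign_offline(serialized_tx: bytes) -> bytes:
    return signing_key.sign(serialized_tx)

# Online side: builds unsigned transactions and broadcasts signed ones.
def build_unsigned_tx() -> bytes:
    return b"spend utxo-1 -> recipient-address"   # placeholder serialization

def broadcast(serialized_tx: bytes, signature: bytes) -> None:
    print("broadcasting:", serialized_tx, signature.hex())   # placeholder for a node RPC call

tx = build_unsigned_tx()            # built online, carried offline (e.g., via removable media)
broadcast(tx, sign_offline(tx))     # signed offline, carried back online, then broadcast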

While more security-conscious exchanges employ this type of key storage for their users, cold wallets are still susceptible to exploitation:

  • In November 2017, ZDNet published an article describing four methods hackers use to steal data from air-gapped computers through what they call “covert channels.” These channels can be broken down into four groups:
    • Electromagnetic
    • Acoustic
    • Thermal
    • Optical
  • In addition to those four types of attacks, WikiLeaks revealed, as part of its ongoing Vault 7 leak, a tool suite (dubbed Brutal Kangaroo, formerly EZCheese) allegedly used by the CIA for targeting air-gapped networks.
  • In February 2018, security researchers with the Cybersecurity Research Center at Israel's Ben-Gurion University made use of a proof-of-concept (PoC) malware that allowed for the exfiltration of data from computers placed inside a Faraday cage (an enclosure used to block electromagnetic fields). According to their research, attackers can exfiltrate data from any infected computer, regardless of whether it is air-gapped or inside a Faraday cage. The same group of researchers also revealed additional ways to exploit air-gapped computers:
    • aIR-Jumper attack that steals sensitive information from air-gapped computers with the help of infrared-equipped CCTV cameras that are used for night vision
    • USBee attack that can be used to steal data from air-gapped computers using radio frequency transmissions from USB connectors
    • DiskFiltration attack that can steal data using sound signals emitted from the hard disk drive (HDD) of the targeted air-gapped computer
    • BitWhisper that relies on heat exchange between two computer systems to stealthily siphon passwords or security keys
    • AirHopper that turns a computer's video card into an FM transmitter to exfiltrate keystrokes
    • Fansmitter technique that uses noise emitted by a computer fan to transmit data
    • GSMem attack that relies on cellular frequencies
    • PowerHammer, a malware that leverages power lines to exfiltrate data from air-gapped computers.

Hardware Wallets

Hardware wallets are typically small peripheral devices (similar to USB drives) used to generate and store keys, as well as verify and sign transactions. The device signs transactions internally and only transmits the signed transactions to the network when connected to a networked computer. It is this separation of the private keys from the vulnerable online environment that allows a user to transact on the blockchain with reduced risk.

However, hardware wallets are susceptible to exploitation as well, such as man-in-the-middle (MitM) supply chain attacks, wherein a compromised device is purchased. Such an event ostensibly occurred in early 2018, when an individual purchased a compromised Ledger Nano off of eBay and consequently lost $34,000 USD worth of cryptocurrency stored on the device, as the attacker had generated their own recovery seed and used it to later retrieve the funds stored on the device. In order to trick the victim, the attacker included a fake recovery seed form inside the compromised device packaging (as seen in Figure 2).


Figure 2: Fraudulent recovery seed document for Ledger Nano (image source: Reddit)

To help mitigate the risk of such an attack, FireEye recommends only purchasing a hardware wallet from the manufacturer directly or through authorized resellers.

In addition to supply-chain attacks, security researchers with Wallet.fail have recently disclosed two vulnerabilities in the Ledger Nano S device. One of these vulnerabilities allows an attacker to execute arbitrary code from the boot menu, and the other allows physical manipulation without the user knowing due to a lack of tamper evidence. In both cases, physical access to the device is required, and thus deemed less likely to occur if proper physical security of the device is maintained and unauthorized third-party purchasing is avoided.

Paper Wallets

Typically, wallet software solutions hide the process of generating, using, and storing private keys from the user. However, a paper wallet involves using an open-source wallet generator like BitAddress[.]org and WalletGenerator[.]net to generate the user’s public and private keys. Those keys are then printed to a piece of paper. While many view this form of key management as more secure because the keys do not reside on a digital device, there are still risks.

Because the private key is printed on paper, theft, loss, and physical damage present the highest risk to the user. Paper wallets are one of the only forms of key management that outwardly display the private key in such a way and should be used with extreme caution. It is also known that many printers keep a cache of printed content, so the possibility of extracting printed keys from exploited printers should also be considered.

Exchanges & Key Custodians

According to recent Cambridge University research, in 2013 there were approximately 300,000 to 1.3 million users of cryptocurrency. By 2017 there were between 2.9 million and 5.8 million users. To facilitate this expedited user growth, a multitude of companies have materialized that offer services enabling user interaction with the various cryptocurrency networks. A majority of these businesses function as exchanges and/or key custodians. Consequently, these organizations are ideal candidates for intrusion activity, whether it be spear phishing, distributed denial of service (DDoS) attacks, ransomware, or extortion threats (from both internal and external sources).

Many cryptocurrency exchanges and services around the world have reportedly suffered breaches and thefts in recent years that resulted in substantial financial losses and, in many cases, closures (Figure 3). One 2013 study found that out of 40 bitcoin exchanges analyzed, over 22 percent had experienced security breaches, forcing 56 percent of affected exchanges to go out of business.


Figure 3: Timeline of publicly reported cryptocurrency service compromises

Some of the more notable cryptocurrency exchange attacks that have been observed are as follows:

Time Frame

Entity

Description

July 2018

Bancor

Bancor admitted that unidentified actors compromised a wallet that was used to upgrade smart contracts. The actors purportedly withdrew 24,984 ETH tokens ($12.5 million USD) and 229,356,645 NPXS (Pundi X) tokens (approximately $1 million USD). The hackers also stole 3,200,000 of Bancor's own BNT tokens (approximately $10 million USD). Bancor did not comment on the details of the compromise or security measures it planned to introduce.

June 2018

Bithumb

Attackers stole cryptocurrencies worth $30 million USD from South Korea's largest cryptocurrency exchange, Bithumb. According to Cointelegraph Japan, the attackers hijacked Bithumb's hot (online) wallet.

June 2018

Coinrail

Coinrail admitted there was a "cyber intrusion" in its system and an estimated 40 billion won ($37.2 million USD) worth of coins were stolen. Police are investigating the breach, but no further details were released.

February 2018

BitGrail

BitGrail claimed $195 million USD worth of customers' cryptocurrency in Nano (XRB) was stolen.

January 2018

Coincheck

Unidentified attackers stole 523 million NEM coins (approximately $534 million USD) from the exchange's hot wallet. Coincheck stated that NEM coins were kept on a single-signature hot wallet rather than a more secure multi-signature wallet and confirmed that stolen coins belonged to Coincheck customers.

July 2017

Coindash

Unidentified actors reportedly stole $7.4 million USD from users attempting to invest during a Coindash (app platform) ICO. Coindash, which offers a trading platform for ether, launched its ICO by posting an Ethereum address to which potential investors could send funds. However, malicious actors compromised the website and replaced the legitimate address with their own ether wallet address. Coindash realized the manipulation and warned users only three minutes after the ICO began, but multiple individuals had already sent funds to the wrong wallet. This incident was the first known compromise of an ICO, which indicates the persistent creativity of malicious actors in targeting cryptocurrencies.

June 2017

Bithumb

Bithumb, a large exchange for ether and bitcoin, admitted that malicious actors stole a user database from a computer of an employee that allegedly includes the names, email addresses, and phone numbers of more than 31,800 customers. Bithumb stated that its internal network was not compromised. Bithumb suggested that actors behind this compromise used the stolen data to conduct phishing operations against the exchange's users in an attempt to steal currency from its wallets, allegedly stealing cryptocurrency worth more than $1 million USD.

April 2017

Yapizon

Unidentified actor(s) reportedly compromised four hot wallets belonging to a South Korean Bitcoin exchange, Yapizon, and stole more than 3,816 bitcoins (approximately $5 million USD). The identity of the responsible actor(s) and the method used to access the wallets remain unknown. However, Yapizon stated that there was no insider involvement in this incident.

August 2016

Bitfinex

Malicious actor(s) stole almost 120,000 bitcoins ($72 million USD at the time) from clients' accounts at Bitfinex, an exchange platform in Hong Kong. How the breach occurred remains unknown, but the exchange made some changes to its systems after regulatory scrutiny. However, some speculate that complying with the regulators' recommendations made Bitfinex vulnerable to theft.

May 2016

Gatecoin

The Hong Kong-based Gatecoin announced that as much as $2 million USD in ether and bitcoin were lost following an attack that occurred over multiple days. The company claimed that a malicious actor altered its system so ether deposit transfers went directly to the attacker's wallet during the breach.

February 2015

KipCoin

The Chinese exchange KipCoin announced that an attacker gained access to its server in 2014 and downloaded the wallet.dat file. The malicious actor stole more than 3,000 bitcoins months later.

February 2015

BTER

BTER announced via its website that it lost 7,170 bitcoins ($1.75 million USD at the time). The company claimed that the bitcoins were stolen from its cold wallet.

January 2015

Bitstamp

Bitstamp reported that multiple operational wallets were compromised, which resulted in the loss of 19,000 bitcoins. The company received multiple phishing attempts in the months prior to the theft. One employee allegedly downloaded a malicious file that gave the attacker access to servers that contained the wallet.dat file and passphrase for the company's hot wallet.

August 2014

BTER

The China-based exchange BTER claimed that an attacker stole 50 million NXT ($1.65 million USD at the time). The company claims the theft was possible following an attack on one of its hosting servers. The company reportedly negotiated the return of 85 percent of the stolen funds from the attacker.

July 2014

MintPal

MintPal admitted that an attacker accessed 8 million VeriCoins ($1.8 million USD) in the company's hot wallet. The attackers exploited a vulnerability in its withdrawal system that allowed them to bypass security controls to withdraw the funds.

Early 2014

Mt. Gox

Mt. Gox, one of the largest cryptocurrency exchanges, filed for bankruptcy following a theft of 850,000 bitcoins (approximately $450 million USD at the time) and more than $24 million USD from its bank accounts. A bug in the exchange's system that went unidentified for years allegedly enabled this compromise. Additionally, some speculated that an insider could have conducted the theft. Notably, recent reports revolving around the arrest of the founder of BTC-e (Alexander Vinnik) suggest he was responsible for the attack on Mt. Gox.

Table 2: Sample of observed exchange breaches

As little oversight is established for cryptocurrency exchanges and no widely accepted security standards exist for them, such incidents will likely persist. Notably, while these incidents may involve outsiders compromising exchanges' and services' systems, many of the high-profile compromises have also sparked speculations that insiders have been involved.

Software Bugs

While there has yet to be an in-the-wild attack that has caused significant harm to the Bitcoin network itself, remember the Bitcoin software is just that: software. Developers have identified 30 common vulnerabilities and exposures (CVEs) since at least 2010, many of which could have caused denial of service attacks on the network, exposure of user information, degradation of transaction integrity, or theft of funds.

The most recent software bug was a transaction validation bug that affected the consensus rules: it essentially allowed miners to create transactions that weren’t properly validated and contained a duplicate input – which could ultimately have been exploited to create an amount of bitcoin from nothing. This vulnerability went unnoticed for two years, and fortunately was responsibly disclosed.
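
For context, the class of check involved (widely reported as CVE-2018-17144) is simple to state: transaction validation must reject a transaction that spends the same input twice, or the same coins can be counted more than once. The sketch below is illustrative only and is not the Bitcoin Core implementation.

def inputs_are_unique(tx_inputs) -> bool:
    """Reject transactions that reference the same (txid, vout) outpoint twice."""
    seen = set()
    for outpoint in tx_inputs:
        if outpoint in seen:
            return False          # duplicate input: the transaction must be rejected
        seen.add(outpoint)
    return True

print(inputs_are_unique([("abc", 0), ("def", 1)]))   # True: valid shape
print(inputs_are_unique([("abc", 0), ("abc", 0)]))   # False: would let value be created from nothing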

Running any peer-to-peer (P2P) or decentralized and distributed software is risky because each individual user has the responsibility to upgrade software when bugs are found. The more people who fail to update their software in a timely manner, the greater the chance of those nodes being exploited or used to attack the network.

Scaling & Attack Surface

At the time of this post, scaling blockchain networks to the size required to support a truly global payment system still presents a problem for the new technology and is an area of contention among developers and industry players. To address this, many developers are working on various scaling solutions. The following are some of the proposed solutions and the risks associated with each:

On-chain Scaling

One proposed suggestion is to increase the block size, which consequently shifts the cost of scaling to miners and those who operate nodes. Some argue that this could introduce the risk of centralization, because only larger organizations that can meet the bandwidth and storage demands of ever-increasing block sizes would be able to support this type of solution.

Off-chain Scaling

Some of the more popular blockchain scaling solutions for crypto-assets often depend on layering networks and system architectures on top of the base protocol – also referred to as “layer two” (L2) scaling. This allows users to conduct transactions “off-chain” and only occasionally synchronize them with the Bitcoin blockchain. Many argue that this is similar to how legal contracts are enforced; you don’t need to go to court each time a legal contract is written, agreed upon, and executed. And this is something that already occurs frequently in Bitcoin, as the vast majority of transactions happen offline and off-chain within large exchanges’ and merchant providers’ cold storage solutions.

However, two choices for off-chain scaling exist:

Off-chain Private Databases

This solution involves pushing transactions off-chain to a privately managed database where transactions can be settled and then occasionally synced with the Bitcoin blockchain. However, in creating this second layer of private “off-chain” transaction processing, an element of trust is introduced to the system, which unfortunately introduces risk. When transactions occur “off-chain” in a centralized private database, there is a risk of improperly secured centralized ledgers that can be falsified or targeted for attack.

Off-chain Trustless Payment Channels

Another L2 solution would be to push transactions off-chain – not onto a private database, but to a trustless decentralized routing network. There are two primary L2 solutions being developed: the Lightning Network (for Bitcoin) and Raiden (for Ethereum).

However, a critique of this type of scaling solution is that the accounts used on this layer are considered hot wallets, which presents the largest attack surface. This makes it the riskiest way to store funds while also creating a valuable target for hackers. If an attacker is able to identify and access a user’s L2 node and associated wallet, they could transmit all funds out of the user’s wallet.

Lightning and Raiden as scaling solutions are still relatively new and experimental, so it’s unknown whether they will be globally accepted as the preferred industry scaling solution. Additionally, because this layered development is still new and not widely implemented, at the time of this post there has not yet been an instance or proof-of-concept attack against L2 networks.

Network & Protocol Attacks

Actors may also attempt to directly exploit a cryptocurrency P2P network or cryptographic protocol to either steal cryptocurrency or disrupt a cryptocurrency network. Albeit rare, successful attacks of this nature have been observed. Examples of attack vectors that fall into this category include the following:

51% Attack

The 51% attack refers to the concept that if a single malicious actor or cohesive group of miners controlled more than 50 percent of the computing capability validating a cryptocurrency's transactions, they could reverse their own transactions or prevent transactions from being validated. While previously considered theoretical, 51% attacks have been recently observed:

  • In early April 2018, the cryptocurrency Verge reportedly suffered a 51% attack, which resulted in the attacker being able to mine 1,560 Verge coins (XVG) every second for a duration of three hours.
  • In May 2018, developers notified various cryptocurrency exchanges of a 51% attack on Bitcoin Gold. According to a report by Bitcoinist, the attack cost exchanges nearly $18 million.
  • Following the Bitcoin Gold attack, in June 2018, ZenCash became another target of the 51% attack, in which attackers siphoned $550,000 USD worth of currency from exchanges.

Companies such as NiceHash offer a marketplace for cryptocurrency cloud mining in which individuals can rent hashing power. Couple that with the information available from sites like Crypto51, which calculates the cost of performing 51% attacks, and it presents an attractive option for criminals seeking to disrupt cryptocurrency networks. While these types of attacks have been observed, and are no longer theoretical, they have historically posed the most risk to various alt-coins with lower network participation and hash rate. Larger, more robust proof-of-work (PoW) networks are less likely to be affected, as the cost to perform the attack outweighs the potential profit.
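
To illustrate why hash-rate share is the deciding factor, the short calculation below reproduces the attacker-success estimate from Section 11 of the original Bitcoin whitepaper: the probability that an attacker controlling fraction q of the hash rate eventually overtakes the honest chain after a merchant waits z confirmations. It is a back-of-the-envelope model, not a prediction for any specific network.

import math

def attacker_success_probability(q: float, z: int) -> float:
    """Nakamoto's catch-up probability for an attacker with hash-rate share q and z confirmations."""
    p = 1.0 - q
    if q >= p:
        return 1.0                     # majority hash rate: the attacker always catches up
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

print(attacker_success_probability(0.10, 6))   # ~0.0002: a small miner is quickly priced out
print(attacker_success_probability(0.51, 6))   # 1.0: past 50%, confirmations offer no protection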

We anticipate that as long as the cost to perform the 51% attack and the likelihood of getting caught remains low, while the potential profit remains high, actors will continue showing interest in these types of attacks across less-robust cryptocurrency networks. 

Sybil Attack

A Sybil attack occurs when a single node claims to be multiple nodes on the P2P network, which many see as one of the greatest security risks among all large-scale, peer-to-peer networks. A notable Sybil attack (in conjunction with a traffic confirmation attack) against the Tor anonymity network occurred in 2014, spanned the course of five months, and was conducted by unknown actors.

As it pertains to cryptocurrency networks in particular, attackers performing this type of attack could perform the following:

  • Block honest users from the network by outnumbering honest nodes and refusing to receive or transmit blocks.
  • Change the order of transactions, prevent them from being confirmed, or even reverse transactions that can lead to double spending by controlling a majority of the network computing power in large-scale attacks.

As described by Microsoft researcher John Douceur, many P2P networks rely on redundancy to help lower the dependence on potential hostile nodes and reduce the risk of such attacks. However, this method of mitigation falls short if an attacker impersonates a substantial fraction of the network nodes, rendering redundancy efforts moot. The suggested solution to avoiding Sybil attacks in P2P networks, as presented in the research, is to implement a logically centralized authority that can perform node identity/verification. According to the research, without implementing such a solution, Sybil attacks will always remain a threat “except under extreme and unrealistic assumptions of resource parity and coordination among entities.”

Eclipse Attack

An eclipse attack involves an attacker or group controlling a significant number of nodes and then using those nodes to monopolize inbound and outbound connections to other victim nodes, effectively obscuring the victim node’s view of the blockchain and isolating it from other legitimate peers on the network. According to security researchers, aside from disrupting the network and filtering the victim node’s view of the blockchain, eclipse attacks can be useful in launching additional attacks once successfully executed. Some of these attacks include:

  • Engineered Block Races: Block races occur in mining when two miners discover blocks at the same time. Generally, one block will be added to the chain, yielding mining rewards, while the other block is orphaned and ignored, yielding no mining reward. If an attacker can successfully eclipse attack miners, the attacker can engineer block races by hoarding blocks until a competing block has been found by non-eclipsed miners – effectively causing the eclipsed miners to waste efforts on orphaned blocks.
  • Splitting Mining Power: An attacker could use eclipse attacks to effectively cordon off fractions of miners on a network, thereby eliminating their hashing power from the network. Removing hashing power from a network allows for easier 51% attacks to occur given enough miners are effectively segmented from the network to make a 51% attack profitable.

On Jan. 5, 2019, the cryptocurrency company Coinbase detected a possible eclipse + 51% attack affecting the Ethereum Classic (ETC) blockchain. The attack involved malicious nodes surrounding Coinbase nodes, presenting them with several deep chain reorganizations and multiple double spends – totaling 219,500 ETC (worth roughly $1.1 million USD at the time of this reporting).

While eclipse attacks are difficult to mitigate across large-scale P2P networks, some fixes can make them more difficult to accomplish. FireEye recommends implementing the following, where applicable, to help reduce the risk of eclipse attacks:

  • Randomized node selection when establishing connections.
  • Retain information on other nodes previously deemed honest, and implement preferential connection to those nodes prior to randomized connections (this increases the likelihood of connecting to at least one honest node); a rough sketch of this idea follows.
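
A rough sketch of that preference, with entirely made-up names, might look like the following: reserve a few outbound slots for peers previously deemed honest, then fill the rest at random so an attacker must control far more addresses to monopolize a victim's connections.

import random

def choose_outbound_peers(known_honest, candidates, slots=8, honest_quota=2):
    """Pick outbound peers, reserving a few slots for previously honest nodes."""
    chosen = random.sample(known_honest, min(honest_quota, len(known_honest)))
    remaining = [peer for peer in candidates if peer not in chosen]
    random.shuffle(remaining)
    chosen.extend(remaining[:slots - len(chosen)])
    return chosen

print(choose_outbound_peers(["peerA", "peerB"], [f"node{i}" for i in range(20)]))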

How the Public and Private Sector Can Help Mitigate Risk

Public Sector Priorities

As blockchain technology continues to develop, and issues like scaling, security, and identity management are addressed, it is safe to assume the ecosystem we have today will not look like the ecosystem of tomorrow. Due to this, the public sector has generally maintained a hands-off approach to allow the space to mature and innovate before implementing firm regulations. However, in the future, there are likely to be certain key areas of regulation the public sector could focus on:

  • Virtual Currencies (tax implications, asset classification)
  • Data encryption
  • Privacy
  • Identity Management (KYC and FCC)

Private Sector’s Role

Because of the public sector’s wait-and-see approach to regulation, it could be argued that the private sector should have a more active role in securing the technology as it continues to mature. Private sector leaders in software and network development, hardware manufacturing, and cyber security all have the ability to weigh in on blockchain development as it progresses to ensure user security and privacy are top priorities. Universities and independent research groups should continue to study this emerging technology as it develops.

While no widely promoted and formal security standards exist for cryptocurrency networks at the time of this post, the Cryptocurrency Certification Consortium (C4) is actively developing the Cryptocurrency Security Standard (CCSS), a set of requirements and a framework to complement existing information security standards as they relate to cryptocurrencies, including exchanges, web applications, and cryptocurrency storage solutions.

Cyber Security Community

From a cyber security perspective, we should learn from the vulnerabilities of TCP/IP development in the early days of the internet, which focused more on usability and scale than on security and privacy – and insist that, if blockchain technology is to help revolutionize the way business and trade are conducted, those two areas of focus (security and privacy) are held at the forefront of blockchain innovation and adoption. This can be achieved through certain self-imposed (and universally agreed upon) industry standards, including:

  • Forced encryption of locally stored wallet files (instead of opt-in options).
  • Code or policy rule that requires new wallet and key generation when user performs password changes.
  • Continued development and security hardening of multi-sig wallet solutions.
  • Emphasis on and clear guidelines for responsible bug disclosure.
  • Continued security research and public reporting on security implications of both known and hypothetical vulnerabilities regarding blockchain development.
    • Analyzing protocols and implementations to determine what threats they face, and providing guidance on best practices.

Outlook

While blockchain technology offers the promise of enhanced security, it also presents its own challenges. Greater responsibility for security is often put into the hands of the individual user, and while some of the security challenges facing exchanges and online wallet providers can be addressed through existing best practices in cyber security, linking multiple users, software solutions, and integration into complex legacy financial systems creates several new cyber security paradigms.

To maintain strong network security, the roles and responsibilities of each type of participant in a blockchain network must be clearly defined and enforced, and the cyber security risks posed by each type of participant must be identified and managed. It is also critical that blockchain development teams understand the full range of potential threats that arise from interoperating with third parties and layering protocols and applications atop the base protocols.

The value and popularity of cryptocurrencies have grown significantly in recent years, making these types of currencies a very attractive target for financially motivated actors. Many of the aforementioned attack vectors can be of high utility in financially motivated operations. We expect cyber crime actors will continue to demonstrate high interest in targeting cryptocurrencies and their underlying network protocols for the foreseeable future.

Hacker threatened a family using a Nest Camera to broadcast a fake missile attack alert

Nest has recommended that owners of its security cameras use enhanced authentication to avoid being hacked, as happened to a family living in the US.

Over the weekend, a family living in California was terrified by a hoax nuclear missile attack alert.

The couple explained to the local media that hackers compromised the Nest security camera sitting atop their television and used it to issue a warning of an imminent impact of missiles launched from North Korea.

After the initial fright, the family realized that they had been the victims of a hack: the attackers had taken control of their device, and in particular of the camera's built-in speaker, which allowed them to listen to and talk with the victims.

Nest camera

According to Nest, the hackers used passwords obtained from other data breaches.

“Nest, which is owned by Google-parent Alphabet, told AFP that incidents of commandeered camera control in recent months were the result of hackers using passwords stolen from other online venues.” reported AFP.

“Nest was not breached,” confirmed Google, which owns the vendor.

“These recent reports are based on customers using compromised passwords – exposed through breaches on other websites.”

This isn’t an isolated incident; similar hacks have made headlines in recent months. Media reported the case of a hacker that threatened to kidnap a baby.

Experts and consumers are asking Nest to implement two-factor authentication to prevent these kinds of attacks.

Nest is checking that the credentials used for its users’ accounts are not included in data leaked online following the numerous recent data breaches.

If the credentials match ones present in a dump available online, the company prompts the user to change their password.
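
Nest has not published the details of its checks, but the general technique is well established. The sketch below queries the public Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity: only the first five hex characters of the password's SHA-1 hash ever leave the machine.

import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    request = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-hygiene-check"})
    with urllib.request.urlopen(request) as response:
        for line in response.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)    # how many times this password appears in known dumps
    return 0

print(times_seen_in_breaches("password123"))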

Last week, the popular cyber security expert Troy Hunt announced the discovery of a massive data leak he called ‘Collection #1’ that included 773 million records.

Someone has collected a huge trove of data that can be used for credential stuffing: the ‘Collection #1’ archive is a set of email addresses and passwords totalling 2,692,818,238 rows drawn from thousands of different sources.

According to Hunt, there are 1,160,253,228 unique combinations of email addresses and passwords – an excellent resource for a hacker searching for valid credentials for security cameras and other devices.

Pierluigi Paganini

(SecurityAffairs – Nest, security cameras)

The post Hacker threatened a family using a Nest Camera to broadcast a fake missile attack alert appeared first on Security Affairs.

Trade Recommendation: Ravencoin

Ravencoin (RVN/BTC) is a market that looks ready to rally. It dropped to as low as 0.00000321 on January 14, 2019. At that point, Ravencoin breached the line in the sand of 0.00000344. We were curious whether the market would eventually recover the support or continue to trend lower. We got a response […]

The post Trade Recommendation: Ravencoin appeared first on Hacked: Hacking Finance.

Tron (TRX) and Litecoin (LTC) Extend Gains as Binance Adds New Stablecoin Pairs

Tron (TRX) and Litecoin (LTC) stood out as top performers among major altcoins leading into Wednesday, both adding around 6% to their value. Both coins were the subject of this morning’s announcement by Binance that LTC and TRX would gain new stablecoin pairs on its platform. Newly formed TRX/PAX and TRX/USDC pairs will go […]

The post Tron (TRX) and Litecoin (LTC) Extend Gains as Binance Adds New Stablecoin Pairs appeared first on Hacked: Hacking Finance.

DataBreachToday.com RSS Syndication: DHS Issues More Urgent Warning on DNS Hijacking

Government Agencies Should Audit DNS Settings Within 10 Days
The U.S. Department of Homeland Security says executive branch agencies are being targeted by attacks aimed at modifying Domain Name System records, which are critical for locating websites and services. The warning comes as security companies have noticed a rise in DNS attacks.

DataBreachToday.com RSS Syndication

Crypto Intervention

Hi Everyone, The act of intervention often has negative connotations, but sometimes intervention can save lives, as any surgeon will probably tell you. The Bank for International Settlements (BIS) has put out a new research paper that explores the economics of an intervention into bitcoin’s blockchain. To be clear, the BIS is the bank of central banks […]

The post Crypto Intervention appeared first on Hacked: Hacking Finance.

Netflix Becomes First Streaming Company To Join the MPAA

An anonymous reader quotes a report from Hollywood Reporter: Netflix has joined the membership ranks of the Motion Picture Association of America alongside the six major Hollywood studios, the top lobbying group said Tuesday. The unprecedented move -- coming on the same day that the streamer landed its first Oscar nomination for best picture -- was endorsed by Disney, Fox, Paramount, Sony, Universal and Warner Bros. It is the first time in history that a non-studio has been granted entry. It also is a defining moment for MPAA chairman-CEO Charles Rivkin 18 months into his tenure. The Netflix-MPAA union coincides with the streamer becoming a card-carrying member of the Oscar race after securing an unprecedented 15 nominations on Tuesday morning. Netflix CEO Reed Hastings and chief content officer Ted Sarandos are intent on upping the company's profile as a legitimate force in the movie business, and joining the MPAA will further that goal. Additionally, once Fox is merged with Disney, the MPAA will have one less member, meaning a loss of as much as $10 million to $12 million in annual dues. Sources say the MPAA is courting other new members as well (Amazon could be a candidate). Prior to joining the MPAA, Netflix "departed from the Internet Association -- a major industry trade group representing tech companies including Google, Amazon, and Facebook," Engadget notes. "Netflix had been a member of the internet association since 2013."

Read more of this story at Slashdot.

Fuzzing an API with DeepState (Part 2)

Alex Groce, Associate Professor, School of Informatics, Computing and Cyber Systems, Northern Arizona University

Mutation Testing

Introducing one bug by hand is fine, and we could try it again, but “the plural of anecdote is not data.” However, this is not strictly true. If we have enough anecdotes, we can probably call it data (the field of “big multiple anecdotes” is due to take off any day now). In software testing, creating multiple “fake bugs” has a name, mutation testing (or mutation analysis). Mutation testing works by automatically generating lots of small changes to a program, in the expectation that most such changes will make the program incorrect. A test suite or fuzzer is better if it detects more of these changes. In the lingo of mutation testing, a detected mutant is “killed.” The phrasing is a bit harsh on mutants, but in testing a certain hard-heartedness towards bugs is in order. Mutation testing was once an academic niche topic, but is now in use at major companies, in real-world situations.

There are many tools for mutation testing available, especially for Java. The tools for C code are less robust, or more difficult to use, in general. I (along with colleagues at NAU and other universities) recently released a tool, the universalmutator, that uses regular expressions to allow mutation for many languages, including C and C++ (not to mention Swift, Solidity, Rust, and numerous other languages previously without mutation-testing tools). We’ll use the universalmutator to see how well our fuzzers do at detecting artificial red-black tree bugs. Besides generality, one advantage of universalmutator is that it produces lots of mutants, including ones that are often equivalent but can sometimes produce subtle distinctions in behavior — that is, hard to detect bugs — that are not supported in most mutation systems. For high-stakes software, this can be worth the additional effort of analyzing and examining the mutants.
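
To give a feel for the regular-expression approach (these are not universalmutator's actual rules, just a toy illustration), each pattern below swaps one operator per occurrence without parsing the target language, which is exactly why many generated mutants fail to compile:

import re

MUTATION_RULES = [(r"\+", "-"), (r"==", "!="), (r"<=", "<")]   # toy operator swaps

def generate_mutants(source_line: str):
    for pattern, replacement in MUTATION_RULES:
        for match in re.finditer(pattern, source_line):
            yield source_line[:match.start()] + replacement + source_line[match.end():]

line = "if (left_black_cnt == right_black_cnt) total = left_black_cnt + 1;"
for mutant in generate_mutants(line):
    print(mutant)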

Installing universalmutator and generating some mutants is easy:

pip install universalmutator
mkdir mutants
mutate red_black_tree.c --mutantDir mutants

This will generate a large number of mutants, most of which won’t compile (the universalmutator doesn’t parse, or “know” C, so it’s no surprise many of its mutants are not valid C). We can discover the compiling mutants by running “mutation analysis” on the mutants, with “does it compile?” as our “test”:

analyze_mutants red_black_tree.c "make clean; make" --mutantDir mutants

This will produce two files: killed.txt, containing mutants that don’t compile, and notkilled.txt, containing the 1120 mutants that actually compile. To see if a mutant is killed, the analysis tool just determines whether the command in quotes returns a non-zero exit code, or times out (the default timeout is 30 seconds; unless you have a very slow machine, this is plenty of time to compile our code here).
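
In other words, the kill rule is just an exit-code-or-timeout check on whatever command you supply. A minimal sketch of that rule (not universalmutator's actual implementation) looks like this:

import subprocess

def mutant_killed(test_cmd: str, timeout_s: int = 30) -> bool:
    try:
        result = subprocess.run(test_cmd, shell=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return True                  # a hung mutant counts as killed
    return result.returncode != 0    # non-zero exit code: the "test" failed, mutant killed

print(mutant_killed("make clean; make"))   # "does it compile?" used as the test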

If we copy the notkilled.txt file containing valid (compiling) mutants to another file, we can then do some real mutation testing:

cp notkilled.txt compile.txt
analyze_mutants red_black_tree.c "make clean; make fuzz_rb; ./fuzz_rb" --mutantDir mutants --verbose --timeout 120 --fromFile compile.txt

Output will look something like:

ANALYZING red_black_tree.c
COMMAND: ** ['make clean; make fuzz_rb; ./fuzz_rb'] **
#1: [0.0s 0.0% DONE]
  mutants/red_black_tree.mutant.2132.c NOT KILLED
  RUNNING SCORE: 0.0
...
Assertion failed: (left_black_cnt == right_black_cnt), function checkRepHelper, file red_black_tree.c, line 702.
/bin/sh: line 1: 30015 Abort trap: 6           ./fuzz_rb
#2: [62.23s 0.09% DONE]
  mutants/red_black_tree.mutant.1628.c KILLED IN 1.78541398048
  RUNNING SCORE: 0.5
...

Similar commands will run mutation testing on the DeepState fuzzer and libFuzzer. Just change make fuzz_rb; ./fuzz_rb to make ds_rb; ./ds_rb --fuzz --timeout 60 --exit_on_fail to use the built-in DeepState fuzzer. For libFuzzer, to speed things up, we’ll want to set the environment variable LIBFUZZER_EXIT_ON_FAIL to TRUE, and pipe output to /dev/null since libFuzzer’s verbosity will hide our actual mutation results:

export LIBFUZZER_EXIT_ON_FAIL=TRUE
analyze_mutants red_black_tree.c "make clean; make ds_rb_lf; ./ds_rb_lf -use_value_profile=1 -detect_leaks=0 -max_total_time=60 >& /dev/null" --mutantDir mutants --verbose --timeout 120 --fromFile compile.txt

The tool generates 2,602 mutants, but only 1,120 of these actually compile. Analyzing those mutants with a test budget of 60 seconds, we can get a better idea of the quality of our fuzzing efforts. The DeepState brute-force fuzzer kills 797 of these mutants (71.16%). John’s original fuzzer kills 822 (73.39%). Fuzzing the mutants not killed by these fuzzers another 60 seconds doesn’t kill any additional mutants. The performance of libFuzzer is strikingly similar: 60 seconds of libFuzzer (starting from an empty corpus) kills 797 mutants, exactly the same as DeepState’s brute force fuzzer – the same mutants, in fact.

“There ain’t no such thing as a free lunch” (or is there?)

DeepState’s native fuzzer appears, for a given amount of time, to be less effective than John’s “raw” fuzzer. This shouldn’t be a surprise: in fuzzing, speed is king. Because DeepState is parsing a byte-stream, forking in order to save crashes, and producing extensive, user-controlled logging (among other things), it is impossible for it to generate and execute tests as quickly as John’s bare-bones fuzzer.

libFuzzer is even slower; in addition to all the services (except forking for crashes, which is handled by libFuzzer itself) provided by the DeepState fuzzer, libFuzzer determines the code coverage and computes value profiles for every test, and performs computations needed to base future testing on those evaluations of input quality.

Is this why John’s fuzzer kills 25 mutants that DeepState does not? Well, not quite. If we examine the 25 additional mutants, we discover that every one involves changing an equality comparison on a pointer into an inequality. For example:

<   if ( (y == tree->root) ||
---
>   if ( (y <= tree->root) ||

The DeepState fuzzer is not finding these because it runs each test in a fork. The code doesn’t allocate enough times to use enough of the address space to cause a problem for these particular checks, since most allocations are in a fork! In theory, this shouldn’t be the case for libFuzzer, which runs without forking. And, sure enough, if we give the slow-and-steady libFuzzer five minutes instead of 60 seconds, it catches all of these mutants, too. No amount of additional fuzzing will help the DeepState fuzzer. In this case, the bug is strange enough and unlikely enough we can perhaps ignore it. The issue is not the speed of our fuzzer, or the quality (exactly), but the fact that different fuzzing environments create subtle differences in what tests we are actually running.

After we saw this problem, we added an option to DeepState to make the brute force fuzzer (or test replay) run in a non-forking mode: --no_fork. Unfortunately, this is not a complete solution. While we can now detect these bugs, we can’t produce a good saved test case for them, since the failure depends on all the mallocs that have been issued, and the exact addresses of certain pointers. However, it turns out that --no_fork has a more important benefit: it dramatically speeds up fuzzing and test replay on mac OS – often by orders of magnitude. While we omit it in our examples because it complicates analyzing failure causes, you should probably use it for most fuzzing and test replay on mac OS.

We can safely say that, for most intents and purposes, DeepState is as powerful as John’s “raw” fuzzer, as easy to implement, and considerably more convenient for debugging and regression testing.

Examining the Mutants

This takes care of the differences in our fuzzers’ performances. But how about the remaining mutants? None of them are killed by five minutes of fuzzing using any of our fuzzers. Do they show holes in our testing? There are various ways to detect equivalent mutants (mutants that don’t actually change the program semantics, and so can’t possibly be killed), such as comparing the binaries generated by an optimizing compiler. For our purposes, we will just examine a random sample of the 298 unkilled mutants, to confirm that at least most of the unkilled mutants are genuinely uninteresting.

  • The first mutant changes a <= in a comment. There’s no way we can kill this. Comparing compiled binaries would have proven it.
  • The second mutant modifies code in the InorderTreePrint function, which John’s fuzzer (and thus ours) explicitly chooses not to test. This would not be detectable by comparing binaries, but it is common sense. If our fuzzer never covers a piece of code, intentionally, it cannot very well detect bugs in that code.
  • The third mutant changes the assignment to temp->key on line 44, in the RBTreeCreate function, so it assigns a 1 rather than a 0. This is more interesting. It will take some thought to convince ourselves this does not matter. If we follow the code’s advice and look at the comments on root and nil in the header file, we can see these are used as sentinels. Perhaps the exact data values in root and nil don’t matter, since we’ll only detect them by pointer comparisons? Sure enough, this is the case.
  • The fourth mutant removes the assignment newTree->PrintKey= PrintFunc; on line 35. Again, since we never print trees, this can’t be detected.
  • The fifth mutant is inside a comment.
  • The sixth mutant changes a pointer comparison in an assert.
    686c686
    <     assert (node->right->parent == node);
    ---
    >     assert (node->right->parent >= node);

    If we assume the assert always held for the original code, then changing == to the more permissive >= obviously cannot fail.

  • The seventh mutant lurks in a comment.
  • The eighth mutant removes an assert. Again, removing an assert can never cause previously passing tests to fail, unless something is wrong with your assert!
  • The ninth mutant changes a red assignment:
    243c243
    <       x->parent->parent->red=1;
    ---
    >       x->parent->parent->red=-1;

    Since we don’t check the exact value of the red field, but use it to branch (so all non-zero values are the same) this is fine.

  • The tenth mutant is again inside the InorderTreePrint function.

At this point if we were really testing this red-black tree as a critical piece of code, we would probably:

  • Make a tool (like a 10-line Python script, not anything heavyweight!) to throw out all mutants inside comments, inside the InorderTreePrint function, or that remove an assertion (a rough sketch of such a script follows this list).
  • Compile all the mutants and compare binaries with each other and the original file, to throw out obvious equivalent mutants and redundant mutants. This step can be a little annoying. Compilers don’t always produce equivalent binaries, due to timestamps generated at compile time, which is why we skipped over it in the discussion above.
  • Examine the remaining mutants (maybe 200 or so) carefully, to make sure we’re not missing anything. Finding categories of “that’s fine” mutants often makes this process much easier than it sounds off hand (things like “assertion removals are always ok”).
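
As an example of the first item, here is a rough sketch of such a filter. It assumes mutants live in mutants/ with the naming scheme seen in the output above, uses crude line-level heuristics for “inside a comment” and “removes an assertion,” and leaves the InorderTreePrint check out for brevity.

import difflib
import glob

original = open("red_black_tree.c").readlines()

def looks_uninteresting(mutant_path: str) -> bool:
    mutant = open(mutant_path).readlines()
    changed = [line for line in difflib.unified_diff(original, mutant, n=0)
               if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
    removed = [line[1:].strip() for line in changed if line.startswith("-")]
    added = [line[1:].strip() for line in changed if line.startswith("+")]
    # Change confined to comment lines.
    if changed and all(line.startswith(("/*", "//", "*")) for line in removed + added if line):
        return True
    # Assertion removed (or commented out) without a replacement assertion.
    if any("assert" in line for line in removed) and all(
            line.startswith(("/*", "//")) for line in added if "assert" in line):
        return True
    return False

survivors = [m for m in glob.glob("mutants/red_black_tree.mutant.*.c")
             if not looks_uninteresting(m)]
print(len(survivors), "mutants left to examine by hand")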

The process of (1) making a test generator then (2) applying mutation testing and (3) actually looking at the surviving mutants and using them to improve our testing can be thought of as a falsification-driven testing process. For highly critical, small pieces of code, this can be a very effective way to build an effective fuzzing regimen. It helped Paul E. McKenney discover real bugs in the Linux kernel’s RCU module.

Just Fuzz it More

Alternatively, before turning to mutant investigation, you can just fuzz the code more aggressively. Our mutant sample suggests there won’t be many outstanding bugs, but perhaps there are a few. Five minutes is not that extreme a fuzzing regimen. People expect to run AFL for days. If we were really testing the red-black tree as a critical piece of code, we probably wouldn’t give up after five minutes.

Which fuzzer would be best for this? It’s hard to know for sure, but one reasonable approach would be to first use libFuzzer to generate a large corpus of tests that achieve high coverage on the un-mutated red-black tree, to seed fuzzing. Then, we could try a longer fuzzing run on each mutant, using the seeds to make sure we’re not spending most of the time just “learning” the red-black tree API.

After generating a corpus on the original code for an hour, we ran libFuzzer, starting from that corpus, for ten minutes. The tests we generated this way can be found here. How many additional mutants does this kill? We can already guess it will be fewer than 30, based on our 3% sample. A simple script, as described above, brings the number of interesting, unkilled, mutants to analyze down to 174 by removing comment mutations, print function mutations, and assertion removals. In fact, this more aggressive (and time-consuming) fuzzing kills zero additional mutants over the ones already killed by John’s fuzzer in one minute and libFuzzer in five minutes. Even an hour-long libFuzzer run with the hour-long corpus kills only three additional mutants, and those are not very interesting. One new kill removes a free call, and the memory leak eventually kills libFuzzer; the other two kills are just more pointer comparisons. Is this solid evidence that our remaining mutants (assuming we haven’t examined them all yet) are harmless? We’ll see.

What About Symbolic Execution?

[Note: this part doesn’t work on Mac systems right now, unless you know enough to do a cross compile, and can get the binary analysis tools working with that. I ran it on Linux inside docker.]

DeepState also supports symbolic execution, which, according to some definitions, is just another kind of fuzzing (white box fuzzing). Unfortunately, at this time, neither Manticore nor angr (the two binary analysis engines we support) can scale to the full red-black tree or file system examples with a search depth anything like 100. This isn’t really surprising, given the tools are trying to generate all possible paths through the code! However, simply lowering the depth to a more reasonable number is also insufficient. You’re likely to get solver timeout errors even at depth three. Instead, we use symex.cpp, which does a much simpler insert/delete pattern, with comparisons to the reference, three times in a row.

clang -c red_black_tree.c container.c stack.c misc.c
clang++ -o symex symex.cpp -ldeepstate red_black_tree.o stack.o misc.o container.o -static -Wl,--allow-multiple-definition,--no-export-dynamic
deepstate-manticore ./symex --log_level 1

The result will be tests covering all paths through the code, saved in the out directory. This may take quite some time to run, since each path can take a minute or two to generate, and there are quite a few paths. If deepstate-manticore is too slow, try deepstate-angr (or vice versa). Different code is best suited for different symbolic execution engines. (This is one of the purposes of DeepState – to make shopping around for a good back-end easy.)

INFO:deepstate.mcore:Running 1 tests across 1 workers
TRACE:deepstate:Running RBTree_TinySymex from symex.cpp(65)
TRACE:deepstate:symex.cpp(80): 0: INSERT:0 0x0000000000000000
TRACE:deepstate:symex.cpp(85): 0: DELETE:0
TRACE:deepstate:symex.cpp(80): 1: INSERT:0 0x0000000000000000
TRACE:deepstate:symex.cpp(85): 1: DELETE:0
TRACE:deepstate:symex.cpp(80): 2: INSERT:0 0x0000000000000000
TRACE:deepstate:symex.cpp(85): 2: DELETE:-2147483648
TRACE:deepstate:Passed: RBTree_TinySymex
TRACE:deepstate:Input: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
TRACE:deepstate:Saved test case in file out/symex.cpp/RBTree_TinySymex/89b9a0aba0287935fa5055d8cb402b37.pass
TRACE:deepstate:Running RBTree_TinySymex from symex.cpp(65)
TRACE:deepstate:symex.cpp(80): 0: INSERT:0 0x0000000000000000
TRACE:deepstate:symex.cpp(85): 0: DELETE:0
TRACE:deepstate:symex.cpp(80): 1: INSERT:0 0x0000000000000000
TRACE:deepstate:symex.cpp(85): 1: DELETE:0
TRACE:deepstate:symex.cpp(80): 2: INSERT:0 0x0000000000000000
TRACE:deepstate:symex.cpp(85): 2: DELETE:0
TRACE:deepstate:Passed: RBTree_TinySymex
...

We can see how well the 583 generated tests perform using mutation analysis as before. Because we are just replaying the tests, not performing symbolic execution, we can now add back in the checkRep and RBTreeVerify checks that were removed in order to speed symbolic execution, by compiling symex.cpp with -DREPLAY, and compile everything with all of our sanitizers. The generated tests, which can be run (on a correct red_black_tree.c) in less than a second, kill 428 mutants (38.21%). This is considerably lower than for fuzzing, and worse than the 797 (71.16%) killed by the libFuzzer one hour corpus, which has a similar < 1s runtime. However, this summary hides something more interesting: five of the killed mutants are ones not killed by any of our fuzzers, even in the well-seeded ten minute libFuzzer runs:

703c703
<   return left_black_cnt + (node->red ? 0 : 1);
---
>   return left_black_cnt / (node->red ? 0 : 1);
703c703
<   return left_black_cnt + (node->red ? 0 : 1);
---
>   return left_black_cnt % (node->red ? 0 : 1);
703c703
<   return left_black_cnt + (node->red ? 0 : 1);
---
>   /*return left_black_cnt + (node->red ? 0 : 1);*/
701c701
<   right_black_cnt = checkRepHelper (node->right, t);
---
>   /*right_black_cnt = checkRepHelper (node->right, t);*/
700c700
<   left_black_cnt = checkRepHelper (node->left, t);
---
>   /*left_black_cnt = checkRepHelper (node->left, t);*/

These bugs are all in the checkRep code itself, which was not even targeted by symbolic execution. While these bugs do not involve actual faulty red-black tree behavior, they show that our fuzzers could allow subtle flaws to be introduced into the red-black tree’s tools for checking its own validity. In the right context, these could be serious faults, and certainly show a gap in the fuzzer-based testing. In order to see how hard it is to detect these faults, we tried using libFuzzer on each of these mutants, with our one hour corpus as seed, for one additional hour of fuzzing on each mutant. It was still unable to detect any of these mutants.

While generating tests with symbolic execution takes more computational power and, perhaps, more human effort, the very thorough (if limited in scope) tests that result can detect bugs that even aggressive fuzzing may miss. Such tests are certainly a powerful addition to a regression test suite for an API. DeepState makes it easy to mix fuzzing and symbolic execution in your testing: even if symbolic execution requires a separate harness, that harness looks like, and can share code with, most of your fuzzing-based tests. A major long-term goal for DeepState is to increase the scalability of symbolic execution for API sequence testing, using high-level strategies that do not depend on the underlying engine, so you can use the same harness more often.

See the DeepState repo for more information on how to use symbolic execution.

What About Code Coverage?

We didn’t even look at code coverage in our fuzzing. The reason is simple: if we’re willing to go to the effort of applying mutation testing, and examining all surviving mutants, there’s not much additional benefit in looking at code coverage. Under the hood, libFuzzer and the symbolic execution engines aim to maximize coverage, but for our purposes mutants work even better. After all, if we don’t cover mutated code, we can hardly kill it. Coverage can be very useful, of course, in early stages of fuzzer harness development, where mutation testing is expensive, and you really just want to know if you are even hitting most of the code. But for intensive testing, when you have the time to do it, mutation testing is much more thorough. Not only do you have to cover the code, you actually have to test what it does. In fact, at present, most scientific evidence for the usefulness of code coverage relies on the greater usefulness of mutation testing.
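
If you do want a quick “am I even hitting the code?” check during early harness development, clang’s source-based coverage is one lightweight option. This sketch reuses the hypothetical replay binary from above; the flags shown are standard clang/LLVM coverage tooling, but the binary and directory names are again assumptions:

# Build the replay harness with coverage instrumentation
clang++ -DREPLAY -fprofile-instr-generate -fcoverage-mapping -o rb_cov symex.cpp red_black_tree.c -ldeepstate
# Run the saved tests, then summarize which parts of red_black_tree.c were exercised
LLVM_PROFILE_FILE=rb.profraw ./rb_cov --input_test_files_dir out/symex.cpp/RBTree_TinySymex
llvm-profdata merge -sparse rb.profraw -o rb.profdata
llvm-cov report ./rb_cov -instr-profile=rb.profdata red_black_tree.c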

Further Reading

For a more involved example using DeepState to test an API, see the TestFs example, which tests a user-level, ext3-like file system, or the differential tester that compares behavior of Google’s leveldb and Facebook’s rocksdb. For more details on DeepState in general, see our NDSS 2018 Binary Analysis Research Workshop paper.

The State confronted with the cyber talent war (3/6) – Acteurs publics | Renseignements Stratégiques, Investigations & Intelligence Economique

scoop.it - In a digital society that sees the State’s control over technologies and equipment dwindling away, the attractiveness of cybersecurity careers is more than ever …



Bitcoin Cash Price Analysis: BCH/USD Bulls Force a Breakout Above a Huge Resistance Trend Line

Bitcoin Cash continues upside momentum, running at its second consecutive session in the green. BCH/USD bulls make a big push above a long-running descending trend line.

BCH/USD Price Behavior

The Bitcoin Cash price was seen trading in positive territory again in the early part of trading on Wednesday. BCH/USD is running at its second consecutive session within […]

The post Bitcoin Cash Price Analysis: BCH/USD Bulls Force a Breakout Above a Huge Resistance Trend Line appeared first on Hacked: Hacking Finance.

The Evolution of Darknets

This is interesting:

To prevent the problems of customer binding, and losing business when darknet markets go down, merchants have begun to leave the specialized and centralized platforms and instead ventured to use widely accessible technology to build their own communications and operational back-ends.

Instead of using websites on the darknet, merchants are now operating invite-only channels on widely available mobile messaging systems like Telegram. This allows the merchant to control the reach of their communication better and be less vulnerable to system take-downs. To further stabilize the connection between merchant and customer, repeat customers are given unique messaging contacts that are independent of shared channels and thus even less likely to be found and taken down. Channels are often operated by automated bots that allow customers to inquire about offers and initiate the purchase, often even allowing a fully bot-driven experience without human intervention on the merchant's side.

[...]

The other major change is the use of "dead drops" instead of the postal system, which has proven vulnerable to tracking and interception. Now goods are hidden in publicly accessible places like parks, and the location is given to the customer on purchase. The customer then goes to the location and picks up the goods. Delivery thus becomes asynchronous for the merchant, who can hide a lot of product in different locations for future, not-yet-known purchases. For the client, the time to delivery is significantly shorter than waiting for a letter or parcel shipped by traditional means: he has the product in his hands in a matter of hours instead of days. Furthermore, this method does not require the customer to give any personally identifiable information to the merchant, who in turn no longer has to safeguard it. Less data means less risk for everyone.

The use of dead drops also significantly reduces the merchant's risk of being discovered through tracking within the postal system. He does not have to visit any easily surveilled post office or letter box; instead, the whole public space becomes his hiding territory.

Cryptocurrencies are still the main means of payment, but due to the tighter customer binding and the merchant's vetting process, escrows are seldom employed. Usually only multi-party transactions between customer and merchant are established, and often not even that.

[...]

Besides allowing much more secure and efficient business for both sides of the transaction, this has also led to changes in the organizational structure of merchants:

Instead of the flat hierarchies witnessed with darknet markets, merchants today employ hierarchical structures again. These consist of a procurement layer, a sales layer, and a distribution layer. The people constituting each layer usually do not know the identities of those in the higher layers, nor are they ever in personal contact with them. All interaction is digital: messaging systems and cryptocurrencies again, and product moves only through dead drops.

The procurement layer purchases product wholesale and smuggles it into the region. It is then sold for cryptocurrency to select people who operate the sales layer. After that transaction, the risks of the procurement and sales layers are isolated from each other.

The sales layer divides the product into smaller units and gives the location of those dead drops to the distribution layer. The distribution layer then divides the product again and places typical sales quantities into new dead drops. The location of these dead drops is communicated to the sales layer which then sells these locations to the customers through messaging systems.

To prevent theft by the distribution layer, the sales layer randomly tests dead drops by tasking different members of the distribution layer with picking up product from a dead drop and hiding it somewhere else, after verification of the contents. Usually each unit of product is tagged with a piece of paper containing a unique secret word, which is used to prove to the sales layer that a dead drop was found. Members of the distribution layer have to post security, in the form of cryptocurrency, to the sales layer, and they lose part of that security with every dead drop that fails the testing and with every dead drop they fail to test. So far, no reports of violence being used to ensure the performance of members of these structures have become known.

This concept of using messaging, cryptocurrency, and dead drops even within the merchant structure allows the members of each layer to be completely isolated from each other, knowing nothing about the higher layers at all. There is no trace to follow if a distribution-layer member is captured while servicing a dead drop; he will often not even be distinguishable from a regular customer. This makes these structures extremely secure against infiltration, takeover, and capture. They are inherently resilient.

[...]

It is because of the use of dead drops and hierarchical structures that we call this kind of organization a Dropgang.

DHS Issues Emergency Directive on DNS Infrastructure Tampering

The Department of Homeland Security (DHS) has issued an emergency directive that requires federal agencies to mitigate the threat of Domain Name System (DNS) infrastructure tampering. In “Emergency Directive 19-01,” DHS explains that it’s been working with the Cybersecurity and Infrastructure Security Agency (CISA) to track a campaign of DNS infrastructure tampering. A hijack in […]

The post DHS Issues Emergency Directive on DNS Infrastructure Tampering appeared first on The State of Security.

Apple delivers security patches, plugs an RCE achievable via FaceTime

Apple has released a new set of updates for its various products, plugging a wide variety of vulnerabilities.

WatchOS, tvOS, Safari and iCloud

Let’s start with the “lightest” security updates: iCloud for Windows 7.10 brings fixes for memory corruption, logic, and type confusion issues in the WebKit browser engine, all of which can be triggered via maliciously crafted web content and most of which may lead to arbitrary code execution. The update also carries patches for …

The post Apple delivers security patches, plugs an RCE achievable via FaceTime appeared first on Help Net Security.

Adobe Released Another Patch – This Time For Adobe Experience Manager

This month, Adobe released patches for various products multiple times. However, it seems the vulnerabilities continue to appear in Adobe …

Adobe Released Another Patch – This Time For Adobe Experience Manager on Latest Hacking News.