Category Archives: Executive Perspectives

House Actions on Election Security Bode Well for 2020

As a U.S. cybersecurity company, McAfee supports legislation that aims to safeguard U.S. election security. The 2016 election underscored for us the importance of improving and preserving election security; we even offered free security tools to local election boards prior to the 2018 elections and released educational research on how localities can best protect themselves in future elections. As the 2020 primary elections quickly approach, it is more important than ever that the federal government take steps to ensure our election infrastructure is secure and that states and localities have the resources they need to quickly upgrade and secure their systems.

The U.S. House of Representatives recently passed H.R. 2722, the Securing America’s Federal Elections (SAFE) Act, legislation introduced by Rep. Zoe Lofgren (D-CA) that would allocate $600 million for states to secure critical election infrastructure. The bill would require cybersecurity safeguards for hardware and software used in elections, prevent the use of wireless communication devices in election systems and require electronic voting machines to be manufactured in the United States. The SAFE Act is a key step to ensuring election security and integrity in the upcoming 2020 election.

Earlier this year, the House also passed H.R. 1, the For the People Act. During a House Homeland Security Committee hearing prior to the bill’s passage, the committee showed commitment to improving the efficiency of election audits and continuing to incentivize the patching of election systems in preparation for the 2020 elections. H.R. 1 and the SAFE Act demonstrate the government’s prioritization of combating election interference. It is encouraging to see the House recognize election security for what it is: a multifaceted process that is vital to our nation’s democracy.

McAfee applauds the House for keeping its focus on election security and prioritizing the allocation of resources to states. We hope that Senate leadership will take up meaningful, comprehensive election security legislation so our country can fully prepare for a secure 2020 election.

The post House Actions on Election Security Bode Well for 2020 appeared first on McAfee Blogs.

Expanding Our Vision to Expand the Cybersecurity Workforce

I recently had the opportunity to testify before Congress on how the United States can grow and diversify the cyber talent pipeline. It’s great that members of Congress have this issue on their radar, but at the same time, it’s concerning that we’re still having these discussions. A recent (ISC)² study puts the global cybersecurity workforce shortage at 2.93 million. Solving this problem is challenging, but I offered some recommendations to the House Homeland Security Committee’s Subcommittee on Cybersecurity, Infrastructure Protection and Innovation.

Increase the NSF CyberCorps Scholarships for Service Program

The National Science Foundation (NSF), together with the Department of Homeland Security (DHS), designed a program to attract more college students to cybersecurity, and it’s working. Ten to twelve juniors and seniors at each of the approximately 70 participating institutions across the country receive free tuition for up to two years plus annual stipends. Once they’ve completed their cybersecurity coursework and an internship, they go to work for the federal government for the same amount of time they’ve been in the program. Afterwards, they’re free to remain federal employees or move elsewhere, and fortunately, a good number of them choose to stay.

Congress needs to increase the funding for this program (which has been flat since 2017) from $55 million to at least $200 million. Today the scholarships are available only at the roughly 70 participating institutions; the program needs to be opened up to more universities and colleges across the country.

Expand CyberCorps Scholarships to Community Colleges

Community colleges attract a wide array of students – a fact that is good for the cybersecurity profession. Some community college attendees are recent high school graduates, but many are more mature, working adults or returning students looking for a career change or skills training. A strong security operation requires differing levels of skills, so having a flexible scholarship program at a community college could not only benefit graduates but also provide the profession with necessary skills.

Furthermore, not everyone in cybersecurity needs a four-year degree. In fact, they don’t need to have a traditional degree at all. Certificate programs provide valuable training, and as employers, we should change our hiring requirements to reflect that reality.

Foster Diversity of Thinking, Recruiting and Hiring

Cybersecurity is one of the greatest technical challenges of our time, and we need to be as creative as possible to meet it. In addition to continually advancing technology, we need to identify people from diverse backgrounds – and not just in the standard sense of the term. We need to diversify the talent pool in terms of race, ethnicity, gender and age, all of which help create inclusive teams that deliver better results. But we also should seek out gamers, veterans, people working on technical certificates, and retirees from computing and from other fields such as psychology, the liberal arts and engineering. There is no one background required to be a cybersecurity professional. We absolutely need people with deep technical skills, but we also need teams with diverse perspectives, capabilities and levels of professional maturity.

Public-Private Sector Cross Pollination

We also must develop creative approaches to enabling the public and private sectors to share talent, particularly during significant cybersecurity events. We should design a mechanism for cyber professionals – particularly analysts or those who are training to become analysts – to move back and forth between the public and private sector so that government organizations would have a continual refresh of expertise. This type of cross-pollination would help everyone share best practices on technology, business processes and people management.

One way to accomplish this would be for DHS to partner with companies and other organizations such as universities to staff a cadre of cybersecurity professionals – operators, analysts and researchers – who are credentialed to move freely between public and private sector service. These professionals, particularly those in the private sector, could be on call to help an impacted entity and the government respond to a major attack in a timely way. Much like the National Guard, a flexible staffing approach to closing the skills gap could become a model of excellence.

We’re Walking the Talk

McAfee is proud to support the community by establishing programs that provide skills to help build the STEM pipeline, fill related job openings, and close gender and diversity gaps. These programs include an Online Safety Program, onsite training programs and internships for high school students. Our employees also volunteer in schools to help educate students on both cybersecurity risks and opportunities. Through volunteer-run programs across the globe, McAfee has educated more than 500,000 children to date.

As part of McAfee’s new pilot Achievement & Excellence in STEM Scholarship program, we’ll make three awards of $10,000 for the 2019-2020 school year. Twelve students from each of the three partner schools will be invited to apply, in coordination with each partner institution’s respective college advisor. Target students are college-bound high school seniors with a demonstrated passion for STEM fields who are seeking a future in a STEM-related path. This type of program can easily be replicated by other companies and used to support the growth and expansion of the workforce.

We’re Supporting Diversity

While we recognize there is still more to do in fostering diversity, we’re proud to describe the strides we’re making at McAfee. We believe we have a responsibility to our employees, customers and communities to ensure our workplace reflects the world in which we live. Having a diverse, inclusive workforce is the right thing to do, and after we became an independent, standalone cybersecurity company in 2017, we made and have kept this a priority.

The steps we’re taking include:

  • We achieved pay parity between women and men employees in April 2019, making us the first pure-play cybersecurity company to do so.
  • In 2018, 27.1% of all global hires were female and 13% of all U.S. hires were underrepresented minorities.
  • In June 2018, we launched our “Return to Workplace” program for men and women who have paused their careers to raise children, care for loved ones or serve their country. The 12-week program offers the opportunity to reenter the tech space with the support and resources needed to successfully relaunch careers.
  • Last year, we established the Diversity & Culture Council, a volunteer-led global initiative focused on creating an infrastructure for the development and maintenance of an integrated strategy for diversity and workplace culture.
  • McAfee CEO Chris Young joined CEO Action for Diversity & Inclusion, the largest group of CEOs and presidents committed to driving an inclusive workforce. By taking part in CEO Action, Young personally commits to advancing diversity and inclusion, starting with fostering safe and open workplaces.

Looking to the Future

While I’d love to see a future where fewer cybersecurity professionals are needed, I know that for the foreseeable future we’ll need not only great technology but also talented people. Given that reality, we in the industry need to expand our vision and definition of what constitutes cybersecurity talent. The workforce shortage is such that we have to expand our concepts and hiring requirements. In addition, the discipline itself will benefit from a population that brings more experiences, skills and diversity to bear on a field that is constantly changing.

A Robust Federal Cybersecurity Workforce Is Key To Our National Security

The Federal government has long struggled to close the cybersecurity workforce gap. The problem continues to worsen as threats increase against our networks, critical infrastructure, intellectual property, and the millions of IoT devices we use in our homes, offices and infrastructure. Without a robust cyber workforce, federal agencies will continue to struggle to develop and execute the policies needed to combat these ongoing issues.

The recent executive order on developing the nation’s cybersecurity workforce was a key step to closing that gap and shoring up the nation’s cyber posture. The widespread adoption of NIST’s cybersecurity workforce framework, the development of a rotational program for federal employees to expand their cybersecurity expertise and the “President’s Cup” competition are all crucial to retaining and growing the federal cyber workforce. If we are to get serious about closing the federal workforce gap, we have to encourage our current professionals to stay in federal service and grow their expertise to defend against the threats of today and prepare for the threats of tomorrow.

Further, we must do more to bring individuals into the field by eliminating barriers to entry and increasing the educational opportunities available to people, so that there can be a strong, diverse and growing cybersecurity workforce in both the federal government and the private sector. Expanding scholarship programs through the National Science Foundation (NSF) and Department of Homeland Security (DHS) for students who agree to work for federal and state agencies will go a long way toward bringing new, diverse individuals into the industry. Additionally, these programs should be expanded to include many types of educational institutions, including community colleges. Community colleges attract a different type of student than a four-year institution, increasing diversity within the federal workforce while also tapping into a largely untapped pipeline for cyber talent.

The administration’s prioritization of this issue is a positive step forward, and there has been progress made on closing the cyber skills gap in the U.S., but there is still work to be done. If we want to create a robust, diverse cyber workforce, the private sector, lawmakers and the administration must work together to come up with innovative solutions that build upon the recent executive order.

McAfee Playing an Ever Growing Role in Tackling Disinformation and Ensuring Election Security

As Europe heads to the polls this weekend (May 23-26) to elect Members of the European Parliament (“MEPs”) representing the 28 EU Member States, the threat of disinformation campaigns aimed at voters looms large in the minds of politicians. Malicious players have every reason to try to undermine trust in established politicians and push voters towards the political fringes, in an effort to destabilise European politics and weaken the EU’s clout in a tense geopolitical environment.

Disinformation campaigns are of course not a new phenomenon, and have been a feature of public life since the invention of the printing press. But the Internet and social media have given peddlers of fake news a whole new toolbox, offering bad actors unprecedented abilities to reach straight into the pockets of citizens via their mobile phones, while increasing their ability to hide their true identity.

This means that the tools to fight disinformation need to be upgraded in parallel. There is no doubt that more work is needed to tackle disinformation, but credit should also go to the efforts already being made to protect citizens from misinformation during elections. The European Commission has engaged the main social media players in better reporting around political advertising and in preventing the spread of misinformation, as a complement to the broader effort to tackle illegal content online. The EU’s foreign policy agency, the External Action Service, has also deployed a Rapid Alert System involving academics, fact-checkers, online platforms and partners around the world to help detect disinformation activities and share information among member states about disinformation campaigns and methods, helping them stay on top of the game. The EU has also launched campaigns to make citizens more aware of disinformation and improve their cyber hygiene, inoculating them against such threats.

But adding cybersecurity research, analysis and intelligence tradecraft to the mix is a vital element of an effective public policy strategy. Recently published research by Safeguard Cyber is a good example of how cybersecurity companies can help policymakers get to grips with disinformation.

The recent engagement between the European Commission’s think tank, the EPSC, and Safeguard Cyber is a good example of how policymakers and cyber experts can work together, and we encourage more such collaboration and exchange of expertise in the months and years ahead. McAfee Fellow and Chief Scientist Raj Samani told more than 50 senior-ranking EU officials in early May that recent disinformation campaigns are “direct, deliberate attacks on our way of life” that seek to disrupt and undermine the integrity of the election process. He urged policymakers to use cyber intelligence and tradecraft to understand the adversary, so that our politicians can make informed decisions on how best to combat the very real threat this represents to our democracies. In practice this means close collaboration between best-in-class cybersecurity researchers, policymakers and social media players to gain a deeper understanding of the modus operandi of misinformation actors and to respond more quickly.

The spectre of disinformation is not going away, so we need a better understanding of the actors involved, their motivations and, most importantly, the rapidly changing technical tools they use to undermine democracy. Each new insight into tackling disinformation will be put to good use in elections later this year in Denmark, Norway, Portugal, Bulgaria, Poland, Croatia and Austria.

Why AI Innovation Must Reflect Our Values in Its Infancy

In my last blog, I explained that while AI possesses the mechanics of humanness, we need to train the technology to make the leap from mimicking humanness with logic, rationale and analytics to emulating humanness with common sense. If we evolve AI to make this leap, the impact will be monumental, but it will require our global community to take a more disciplined approach to pervasive AI proliferation. Historically, our enthusiasm for and consumption of new technology have outpaced society’s ability to evolve legal, political, social and ethical norms.

I spend most of my time thinking about AI in the context of how it will change the way we live – how it will change the way we interact, impact our social systems, and influence our morality. These technologies will permeate society, and the ubiquity of their usage will have far-reaching implications. We are already seeing evidence of how they change the way we live and interact with the world around us.

Think Google. It excites our curiosity and puts information at our fingertips. What is tripe – should I order it off the menu? Why do some frogs squirt blood from their eyes? What does exculpatory mean?

AI is weaving the digital world into the fabric of our lives and making information instantaneously available at our fingertips.

AI-enabled technology is also capable of anticipating our needs. Think Alexa. As a security professional I am a holdout on this technology, but its allure is indisputable. It makes the digital world accessible with a voice command. It also understands more than we may want it to. Did someone tell Alexa to order coffee pods and toilet tissue, and if not, how did Alexa know to order toilet tissue? Maybe some things I just don’t want to know.

I also find it a bit creepy when my phone assumes (and gets it right) that I am going straight home from the grocery store letting me know, unsolicited, that it will take 28 minutes with traffic. How does it know I am going home? I could be going to the gym. It’s annoying that it knows I have no intention of working out. A human would at least have the decency to give me the travel time to both, allowing me to maintain the illusion that the gym was an equal possibility.

On a more serious note, AI-enabled technology will also impact our social, political and legal systems. As we incorporate it into more products and systems, issues related to privacy, morality and ethics will need to be addressed.

These questions are being asked now, but in anticipation of AI becoming embedded in everything we interact with, it is critical that we begin to evolve our societal structures to address both the opportunities and the threats that will come with it.

The opportunities associated with AI are exciting. AI shows incredible promise in the medical world, where it is already being used in some areas: there are tools in use today that leverage machine learning to help doctors identify disease-related patterns in imaging, and research is underway using AI to help fight cancer.

For example, in May 2018, The Guardian reported that skin cancer research using a convolutional neural network (CNN, based on AI) detected skin cancer 95% of the time, compared to human dermatologists who detected it 86.6% of the time. Additionally, facial recognition in concert with AI may someday be commonplace in diagnosing rare genetic disorders that today may take months or years to diagnose.

But what happens when the diagnosis made by a machine is wrong? Who is liable legally? Do AI-based medical devices also need malpractice insurance?

The same types of questions arise with autonomous vehicles. Today it is always assumed a human is behind the wheel in control of the vehicle. Our laws are predicated on this assumption.

How must laws change to account for vehicles that do not have a human driver? Who is liable? How does our road system and infrastructure need to change?

The recent Uber accident case in Arizona determined that Uber was not liable for the death of a pedestrian killed by one of its autonomous vehicles. However, the safety driver, who was watching TV rather than the road, may be charged with manslaughter. How does this change when the car’s occupants are no longer safety drivers but simply passengers in fully autonomous vehicles? How will laws need to evolve at that point for cars and other types of AI-based “active and unaided” technology?

There are also risks to be considered in adopting pervasive AI. Legal and political safeguards need to be considered, either in the form of global guidelines or laws. Machines do not have a moral compass. Given that the definition of morality may differ depending on where you live, it will be extremely difficult to train morality into AI models.

Today most AI models lack the ability to determine right from wrong, ill intent from good intent, morally acceptable outcomes from morally reprehensible outcomes. AI does not understand whether the person asking the questions, providing it data or giving it direction has malicious intent.

We may find ourselves on a moral precipice with AI. The safeguards or laws I mention above need to be considered before AI becomes more ubiquitous than it already is. AI will enable humankind to move forward in ways previously unimagined. It will also provide a powerful conduit through which humankind’s greatest shortcomings may be amplified.

The implications of technology that can profile entire segments of a population with little effort are disconcerting in a world where genocide has been a tragic reality, where civil obedience is coerced using social media, and where trust is undermined by those who use misinformation to sow political and societal discontent.

There is no doubt that AI will make this a better world. It gives us hope on so many fronts where technological impasses have impeded progress. Science may advance more rapidly, medical research may progress beyond current roadblocks, and daunting societal challenges around transportation and energy conservation may be solved. It is another tool in our technological arsenal, and the odds are overwhelmingly in favor of it improving the global human condition.

But realizing its advantages while mitigating its risks will require commitment and hard work from many conscientious minds from different quarters of our society. We as the technology community have an obligation to engage key stakeholders across the legal, political, social and scientific community to ensure that as a society we define the moral guardrails for AI before it becomes capable of defining them, for or in spite of, us.

Like all technology before it, AI’s social impacts must be anticipated and balanced against the values we hold dear.  Like parents raising a child, we need to establish and insist that the technology reflect our values now while its growth is still in its infancy.

I am an AI Neophyte

I am an Artificial Intelligence (AI) neophyte. I’m not a data scientist or a computer scientist or even a mathematician. But I am fascinated by AI’s possibilities, enamored with its promise and at times terrified of its potential consequences.

I have the good fortune to work in the company of amazing data scientists who seek to harness AI’s possibilities. I wonder at their ability to make artificial intelligence systems “almost” human. And I use that term very intentionally.

I mean “almost” human because, to date, AI systems lack the fundamentals of humanness. They possess the mechanics of humanness – qualities like logic, rationale and analytics – but that is far from what makes us human. Their most human trait is one we would prefer they not inherit: a propensity to perpetuate bias. To be human is to have consciousness. To be sentient. To have common sense. And to be able to use these qualities, and the life experience that informs them, to interpret successfully not just the black and white of our world but the millions of shades of grey.

While data scientists are grappling with many technical challenges associated with AI, there are two I find particularly interesting: bias and the lack of common sense.

AI’s propensity to bias is a monster of our own making. Since AI is largely a slave to the data it is given to learn from, its outputs will reflect all aspects of that data, bias included. We have already seen situations where applications leveraging AI have perpetuated human bias unintentionally but with disturbing consequences.

For example, many states have started to use risk assessment tools that leverage AI to predict probable rates of recidivism for criminal defendants. These tools produce a score that is then used by a judge for determining a defendant’s sentencing. The problem is not the tool itself but the data that is used to train it. There is evidence that there has historically been significant racial bias in our judicial systems, so when that data is used to train AI, the resulting output is equally biased.
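To make the mechanism concrete, here is a toy sketch in Python (the groups and numbers are entirely invented for illustration; this is not how any real risk-assessment tool is built): a "model" that simply learns flag rates from historical records will reproduce whatever bias those records contain.

```python
from collections import defaultdict

def train(records):
    """Learn the historical high-risk rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [times flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Hypothetical history in which group "A" was flagged high-risk twice as
# often as group "B" -- a bias in the labels, not in actual behavior.
history = ([("A", True)] * 40 + [("A", False)] * 60 +
           [("B", True)] * 20 + [("B", False)] * 80)

model = train(history)
print(model)  # the learned rates mirror the biased labels: A at 0.4, B at 0.2
```

Nothing in the training step can distinguish a pattern in the world from a pattern in the labeling; if the labels encode bias, the learned scores encode it too.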

A report by ProPublica in 2016 found that algorithmic assessment tools are likely to falsely flag African American defendants as future criminals at nearly twice the rate of white defendants*. For any of you who saw the Tom Cruise movie Minority Report, it is disturbing to consider the similarities between the fictional technology used in the movie to predict future criminal behavior and this real-life application of AI.

The second challenge is how to train artificial intelligence to be as good at interpreting nuance as humans are. It is straightforward to train AI to do something like identify an image as a hippopotamus: you provide it with hundreds or thousands of images or descriptions of a hippo, and eventually it gets it right most if not all of the time.

The accuracy percentage is likely to go down for things that are perhaps more difficult to distinguish – such as a picture of a field of sheep versus a picture of popcorn on a green blanket – but with enough training even this is a challenge that can be overcome.
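As an illustration only (a real image classifier would be a deep neural network, not this), a nearest-centroid sketch over made-up two-number "features" shows the basic idea: training is little more than averaging labeled examples, and more examples make the averages more reliable.

```python
def train(examples):
    """Average the feature vectors of each label to get one centroid per class."""
    centroids = {}
    for label, vectors in examples.items():
        dim, n = len(vectors[0]), len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n for i in range(dim)]
    return centroids

def classify(centroids, vector):
    """Pick the label whose centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vector))

# Invented 2-D "features" standing in for real image data.
examples = {
    "hippo":   [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],
    "popcorn": [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]],
}
model = train(examples)
print(classify(model, [0.8, 0.1]))  # -> hippo
```

When the two classes' features overlap (sheep versus popcorn), the centroids sit closer together and more examples are needed before new inputs reliably land on the right side.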

The interesting thing is that the challenge is not limited to things that lack distinguishing characteristics. In fact, the things that are so obvious that they never get stated or documented can be equally difficult for AI to process.

For example, we humans know that a hippopotamus cannot ride a bicycle. We inherently know that if someone says “Jimmy played with his boat in the swimming pool” that, except in very rare instances likely involving eccentric billionaires, the boat was a toy boat and not a full-size catamaran.

No one told us these things – it’s just common sense. The common-sense aspects of interpreting these situations could be lost on AI. The technology also lacks the ability to infer emotion or intent from data. If we see someone buying flowers, we can mentally infer why – a romantic dinner, or somebody’s in the doghouse. Not only can we guess why they are buying flowers, but when I say somebody’s in the doghouse, you know exactly what I mean: it’s not that they are literally in a doghouse, but that someone did something stupid and the flowers are an attempt at atonement.

That leap is too big for AI today. When you add cultural differences to the mix, the complexity increases exponentially. If a British person says to put something in the boot, it is likely going to be groceries; if it is an American, it will likely be a foot. Teaching AI common sense is a difficult task and one that will take significant research and effort on the part of experts in the field.

But the leap from logic, rationale and analytics to common sense is a leap we need AI to make for it to truly become the tool we need it to be, in cybersecurity and in every other field of human endeavor.

In my next blog, I’ll discuss the importance of ensuring that this profoundly impactful technology reflects our human values in its infancy, before it starts influencing and shaping them itself.

*ProPublica, Machine Bias, May 23, 2016

Celebrating Mother’s Day: How McAfee Supports Expecting & Working Mothers

Mother. It’s one of the best, hardest, most rewarding, challenging and unpredictable jobs a woman can have.

As we approach Mother’s Day in the U.S., I’m reminded of the immense happiness motherhood brings me. I’m also reminded of my own mother. As a child, I distinctly remember watching her get ready for work. I remember what it stirred in me. In a word? Pride. My mother’s commitment to her career inspired me. I wanted a career I would be passionate about and, in turn, to inspire my own children.

That’s why this Mother’s Day I’m appreciating working for a company where I can be a mother and a business professional in a role I truly love. I’m also reflecting on how critical the strides we’re making in workplace culture, policies, and programs are to better serve working mothers and parents.

In an industry made up of just 24% women, we can’t afford to miss out on the perspectives and innovation we unlock when we ensure our workplace mirrors the world in which we live. And considering our current cybersecurity talent shortage, an inclusive workplace is critical to bridging our workforce gap.

To encourage more women to bring (and keep!) their valuable and highly sought-after skills in the workplace, we can’t just talk a good game when it comes to championing inclusion and diversity; we have to walk the walk. Here are three ways McAfee is doing just that when it comes to supporting mothers:

Supporting You as Your Family Grows

Welcoming a child is an exciting time in your life. We want to help you take the time you need and to celebrate, bond, and adjust to new life with the newest addition to your family. Whether it’s offering extended leave with your new baby, providing the convenience of bringing your kids to the office or flexible working schedules, our parent initiatives recognize, celebrate, and accommodate your life’s big moments.

Offering Comfort and Convenience in the Workplace

Coming back to work after having a baby can be a big transition for many, which is why McAfee helps support mothers returning to the workplace after leave. For example, if you’re a nursing mother who travels throughout the U.S., we offer a Milk Stork delivery program to give you peace of mind and convenience to get your baby’s nourishment delivered in a safe and speedy manner.

In an ever-growing number of McAfee offices, we offer Mother’s Rooms to provide a private and convenient way for mothers or mothers-to-be to enjoy a quiet and comfortable space while providing for their infant (and let’s be honest, sometimes that’s the only 20 minutes or so of quiet time a new working mother might have!). And for expecting or new mothers, Stork Parking provides reserved parking spaces. Fun fact: a pregnant woman’s lungs become increasingly compressed as the baby grows, which means getting from A to B is no longer a simple task. We recognize this at McAfee. We know that the small things count.

Reintroducing Mothers to the Workforce

We know careers aren’t always linear, and parents may choose to pause their careers to care for their families. McAfee’s Return to Workplace program taps into the potential of those who may have taken a career break with the support, guidance, and resources needed to successfully rejoin the workforce. This global initiative was launched in our Bangalore, Cork, and Plano offices last year. I’m proud to share that 80 percent of program participants were offered a full-time position at McAfee.

Being a working mother is a strength. It only adds to the varying perspectives and experiences that drive innovative solutions. At McAfee, I’m so proud of the ways we’re recognizing and supporting mothers – and all of our team members – in being successful at home and at the workplace.

To learn more about the ways we support working mothers and our efforts to build an inclusive workplace where all can belong, read our first-ever Inclusion & Diversity Report.

Ready to join a company that helps you achieve your best at work and at home? We’re hiring.

The post Celebrating Mother’s Day: How McAfee Supports Expecting & Working Mothers appeared first on McAfee Blogs.

Why Data Security Is Important

The Increasing Regulatory Focus on Privacy

The ongoing trend of data breaches and the increasing privacy risks associated with social media continue to be a national and international concern. These issues have prompted regulators to seriously explore the need for new and stronger regulations to protect consumer privacy. Some of the regulatory solutions focus on U.S. federal-level breach and privacy laws, while individual U.S. states are also looking to strengthen and broaden their privacy laws.

The focus on stronger consumer privacy has already sparked new regulations like Europe’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA). Many customers of U.S. companies are covered by GDPR’s broad privacy protections, which protect the rights of residents of the European Economic Area. As U.S. states increasingly pass their own privacy laws, the legal environment is becoming more fragmented and complex. This has led to an increased focus on potentially creating a U.S. federal privacy law, perhaps along the lines of the GDPR, or otherwise protecting individuals’ information more broadly than the sectoral laws now in place. Although it is not clear whether effective national legislation will pass in the immediate future, the continued focus on regulatory solutions to strengthen consumer data privacy appears certain.

Privacy is Important to McAfee

For technology to be effective, individuals and corporations must be able to trust it. McAfee believes that trust in the integrity of systems – whether a corporate firewall or a child’s cell phone – is essential to enabling people to get the most possible out of their technologies. Fundamental to that trust is privacy and the protection of data. McAfee is committed to enabling the protection of customer, consumer and employee data by providing robust security solutions.

Why Privacy Matters to McAfee
  • Protecting our customers’ personal data and intellectual property, and their consumer and corporate products, is a core value.
  • Robust Privacy and Security solutions are fundamental to McAfee’s strategic vision, products, services and technology solutions.
  • Privacy and Security solutions enable our corporate and government customers to more efficiently and effectively comply with applicable regulatory requirements.
  • McAfee believes privacy and security are necessary prerequisites for individuals to have trust in the use of technology.

Effective Consumer Privacy Also Requires Data Security

Today, electronic systems are everywhere in government, business and consumers’ lives. Many types of electronic systems and connected devices are used for a wide variety of purposes, from essential services to entertainment. Common to all of these systems is the use of data, some of which may be confidential information, personal data and/or sensitive data.

A reliable electronic system must have adequate security to protect the data the system is entrusted to process and use. Data leaks and security breaches threaten the ability of customers to trust businesses and their products. Flawed or inadequate data security puts consumers’ privacy at risk.

Too often, privacy and information security are thought of as separate and potentially opposing concerns. However, there are large areas of interdependency between these two important policy areas. Privacy and information security must work in harmony and support each other to achieve the goal of consumer privacy. Privacy requires that consumers have the capacity to decide what data about them is collected and processed, and the data must have safeguards driven by appropriately secure technologies and processes.

Data security is the process of protecting data from unauthorized access and corruption throughout its lifecycle. Privacy is an individual’s right or desire to be left alone and/or to control his or her own data. Data security also enables the effective implementation of protective digital privacy measures that prevent unauthorized access to computers, databases and websites. Data security and privacy must be aligned to effectively implement consumer privacy protections.

An effective risk-based privacy and security framework should apply to all collection of personal data. This does not mean that all framework solutions are equal. The risks of collecting and processing personal data must be weighed against the benefits of using the data. Transparency, choice and reasonable notice should always be part of the way data is collected. The specific solutions of a framework may vary based on the risk and the specific types of data. The key is to have in place a proactive evaluation (Privacy and Security by Design principles) that provides the most effective protection for the specific application and data use.

Examples Where Privacy Regulations Require or Enable Robust Data Security

Breach Notification Safe Harbor for Encrypted Data in U.S. State Privacy Laws

Data breach notification laws require organizations to notify affected persons or regulatory authorities when an unauthorized acquisition of personal data occurs as defined by the applicable law or regulation. Many U.S. state laws provide a “safe harbor” for data breach notice obligations if the data was encrypted. A safe harbor may be defined as a “provision of a statute or a regulation that reduces or eliminates a party’s liability under the law, on the condition that the party performed its actions in good faith or in compliance with defined standards.”

Security safe harbor provisions may be used to encourage entities and organizations to proactively protect sensitive or restricted data by employing good security practices. Encrypting data may protect an organization from costly public breach notifications: encrypted data may be excluded from breach requirements, or unauthorized access to encrypted data may not be considered a “breach” as defined in the statute. To be protected by an encryption safe harbor exemption, the breached organization must have encrypted the data in compliance with the state statute. State statutes may also require that the organization retain control of the encryption keys in order to claim the safe harbor.
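To make the safe-harbor logic concrete, here is a minimal sketch of the kind of decision these statutes encode. The field names and rules below are illustrative assumptions, not any specific state’s law (and certainly not legal advice); note the common condition that the encryption keys must not also have been acquired in the breach:

```python
from dataclasses import dataclass

@dataclass
class BreachedRecordSet:
    contains_personal_data: bool
    encrypted: bool
    keys_compromised: bool  # did the attacker also obtain the encryption keys?

def notification_required(records: BreachedRecordSet) -> bool:
    """Simplified illustration of an encryption safe-harbor test.

    Real statutes vary by state. Many exempt encrypted data from
    notification only if the encryption keys were not also acquired.
    """
    if not records.contains_personal_data:
        return False  # no personal data, no notification duty
    if records.encrypted and not records.keys_compromised:
        return False  # safe harbor may apply
    return True

# Unencrypted personal data: notification obligations are triggered.
print(notification_required(BreachedRecordSet(True, False, False)))  # True
# Properly encrypted, keys retained by the organization: safe harbor may apply.
print(notification_required(BreachedRecordSet(True, True, False)))   # False
```

The point of the sketch is the incentive structure: the only path to avoiding notification runs through good security practice, i.e. encrypting the data and controlling the keys.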

GDPR Security Requirements

The General Data Protection Regulation (GDPR) went into effect in the European Economic Area (EEA) in 2018, further enhancing the privacy rights of EEA residents. In addition to giving EEA residents access to the personal data collected about them, the GDPR requires companies handling this data to perform risk analyses to determine how to secure it appropriately. The GDPR lays out basic security requirements in Article 32, Security of processing, which requires entities to “ensure the ongoing confidentiality, integrity, availability, and resilience of processing systems and services.”

Controllers of personal data must also have appropriate technical and organizational measures to satisfy the GDPR. Business processes that handle personal data must be designed and implemented to meet the GDPR security principles and to provide adequate safeguards to protect personal data.

Implementing a robust security framework to meet the GDPR requirements means the organization should proactively evaluate its data security policies, business practices and security technologies, and the organization must develop security strategies that adequately protect personal data.

Next Steps:

Federal policymakers need to pass uniform privacy legislation into law. A key part of this effort must include sufficiently strong cybersecurity provisions, which are imperative to protecting data, as evidenced by the GDPR and thoughtful state breach notification laws. Instead of relying on hard regulations to incent organizations to implement strong security, policymakers should include a liability incentive – a rebuttable presumption or a safe harbor – in privacy legislation. Such an approach, ideally aligned to NIST’s flexible Cybersecurity Framework, would enable policymakers to promote the adoption of strong security measures without resorting to a “check the box” compliance model that could burden customers and discourage innovation in cybersecurity markets.

The post Why Data Security Is Important appeared first on McAfee Blogs.

Federal, State Cyber Resiliency Requires Action

It is no shock that our state and local infrastructures are among the most sought-after targets for foreign and malicious cyber attackers; the real surprise lies in the lack of preventive measures in place to stop them. Major attention has been drawn to the critical gaps that have opened as a result of an ever-expanding attack surface, making aging system architectures an increasing liability.

Recently, the city of Albany, New York became a victim of a ruthless ransomware attack, which created a series of municipal service interruptions. Residents weren’t able to use the city’s services to obtain birth certificates, death certificates or marriage licenses, and the police department’s networks were rendered inoperable for an entire day. This resulted in an enormous disruption of the city’s functionality and made clear that the threat to infrastructure is more real than ever. Bolstering state and local digital defenses should be of the utmost priority, especially as we near the 2020 presidential elections when further attacks on election infrastructure are expected. We must take the necessary precautions to mitigate cyberattack risk.

The reintroduction of the State Cyber Resiliency Act by Senators Mark Warner (D-VA) and Cory Gardner (R-CO), and Representatives Derek Kilmer (D-WA) and Michael McCaul (R-TX), does just that. The legislation represents a critical bipartisan effort to ensure that state, local and tribal governments have a robust capacity to strengthen their defenses against cybersecurity threats and vulnerabilities through the Department of Homeland Security (DHS). States have made clear that they lack the resources to deal not only with increasingly sophisticated attacks but also with the most basic ones, which proper safeguards and baseline protection could stop. This bill works to strategically address the challenges posed by that lack of resources in the face of emerging threats.

The possibility of cyber warfare must not be taken lightly and has long gone ignored. This bill shows that the status quo of kicking the can further down the road will no longer stand as a “strategy” in today’s political and cybersecurity landscape. Action is necessary to better secure our national security and the systems upon which every sector of our economy relies, from utilities to banking to emergency first responders to hospital networks to election infrastructure. It is our responsibility to create and support the safeguards against bad actors looking for gaps in our infrastructure.

The bill makes states eligible for grants to implement comprehensive, flexible cybersecurity plans that address continuous vulnerability monitoring, protection for critical infrastructure systems and a resilient cybersecurity workforce. States would also be able to distribute funds to various local and tribal governments. In addition, the bill would establish a 15-person committee to review the proposed plans and track the spending of state and local governments. This committee would help states and localities formulate and deliver annual reports to Congress detailing the program’s progress. The specific funding level has not been disclosed, but the effort showcases the timeliness of the issue and why action is imperative at this stage.

We must take basic steps to ensure the security of our state and local systems, enabling those systems to be patched, maintained and protected from outside threats. This bill is a welcome and needed effort by lawmakers to address the challenges that state and local governments and infrastructures deal with every day. As adversaries become increasingly sophisticated and targeted in their attack strategies, we have a responsibility to equip states and localities with the tools they need to close gaps and mitigate risk.

We at McAfee are committed to partnering with federal, state and local governments to equip them with the best strategies to create a better and more secure cybersecurity future.

The post Federal, State Cyber Resiliency Requires Action appeared first on McAfee Blogs.

McAfee CTO @ RSA: Catching Lightning in a Bottle or Burning Bridges to the Future?

I spoke last week at the RSA Conference in San Francisco on the subject of AI-related threats and opportunities in the cybersecurity field. I asserted that innovations such as AI can strengthen our defenses but can also enhance the effectiveness of a cyber attacker. I also looked at some examples of underlying fragility in AI that give attackers an opportunity to evade AI-based defenses. Successfully unlocking the potential of AI in cybersecurity requires that we in the cybersecurity industry answer the question of how we can nurture the sparks of AI innovation while recognizing its limitations and how it can be used against us.

We should look to the history of key technological advances to better understand how technology can bring both benefits and challenges. Consider flight in the 20th century. The technology has changed every aspect of our lives, allowing us to move between continents in hours, instead of weeks. Businesses, supply chains, and economies operate globally, and our ability to explore the world and the universe has been forever changed.

But this exact same technology also fundamentally changed warfare. In World War II alone, the strategic bombing campaigns of the Allied and Axis powers killed more than two million people, many of them civilians.

Flight rests on principles such as Bernoulli’s, which helps explain why an airplane wing creates lift. Of course, the physics in play has no knowledge of whether the airplane wing is connected to a ‘life-flight’ rescue mission or to a plane carrying bombs to be dropped on civilian targets.

When Orville Wright was asked in 1948, after the devastation wrought by air power during World War II, whether he regretted inventing the airplane, he answered:

“No, I don’t have any regrets about my part in the invention of the airplane, though no one could deplore more than I do the destruction it has caused. We dared to hope we had invented something that would bring lasting peace to the earth. But we were wrong. I feel about the airplane much the same as I do in regard to fire. That is, I regret all the terrible damage caused by fire, but I think it is good for the human race that someone discovered how to start fires, and that we have learned how to put fire to thousands of important uses.”

Orville’s insight was that technology does not comprehend morality, and that any advance in technology can be used for both beneficial and troubling purposes. This dual use of technology is something our industry has struggled with for years.

Cryptography is a prime example. The exact same algorithm can be used to protect data from theft, or to hold an individual or organization for ransom. This matters more than ever given that we now encrypt 75% of the world’s web traffic, protecting over 150 exabytes of data each month.  At the same time, organizations and individuals are enduring record exploitation through ransomware.

The RSA Conference itself was at the epicenter of a debate during the 1990s over whether it was possible to conditionally use strong encryption only in desirable places, or only for desirable functions. At the time, the U.S. government classified strong encryption as a munition subject to strict export restrictions. But encryption is ultimately just math, and it’s not possible to stop someone from doing math. We must be intellectually honest about our technologies: how they work, what the preconditions for using them are, and when, how and whether they should be contained.

Our shared challenge in cybersecurity is to capture lightning in a bottle: to seize the promise of advances like flight while remaining aware of the risks that come with technology. Let’s take a closer look at that challenge.

History repeats itself

Regardless of how you define it, AI is without a doubt the new foundation for cybersecurity defense. The entire industry is tapping into the tremendous power this technology offers to better defend our environments. It enables better detection of threats beyond what we’ve seen in the past and helps us out-innovate our cyber adversaries. The combination of threat intelligence and artificial intelligence, working together as human-machine teaming, provides far better security outcomes, and faster, than either capability on its own.

Not only does AI enable us to build stronger cyber defense technology, it also helps us solve other key issues, such as our talent shortage. We can now delegate many routine tasks to machines, freeing our human security professionals to focus on the most critical and complex aspects of defending our organizations.

“It’s just math…”

Like encryption, AI is just math: it can enhance criminal enterprises in addition to serving beneficial purposes. McAfee Chief Data Scientist Celeste Fralick joined me on stage during the keynote to run through some examples of how this math can be applied for good or ill (visit here to view the keynote). From machine-learning-fueled crime-spree predictors to DeepFake videos to highly effective attack obfuscation, we touch on them all.

It’s important to understand that the cybersecurity industry is very different from other sectors that use AI and machine learning. For a start, in many other industries, there isn’t an adversary trying to confuse the models.

AI is also extremely fragile, which is why one focus area of the data science group at McAfee is adversarial machine learning, where we work to better understand how attackers could try to evade or poison machine learning models. We are developing models that are more resilient to attack, using techniques such as feature reduction, noise addition and distillation.
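One of the techniques named above, noise addition, can be sketched in a few lines. The idea is to train on noise-perturbed copies of each sample so the model becomes less sensitive to the small, crafted perturbations used in evasion attacks. This is a generic illustration of the technique, not McAfee’s actual implementation; the noise scale and copy count are assumed values:

```python
import random

def add_gaussian_noise(features, sigma=0.05, seed=None):
    """Return a copy of a feature vector with Gaussian noise added.

    sigma controls the perturbation scale; an assumed value here.
    """
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in features]

def augment(dataset, copies=3, sigma=0.05):
    """Expand a dataset of (features, label) pairs with noisy variants.

    Each sample is kept and joined by `copies` perturbed versions,
    all carrying the original label.
    """
    out = []
    for features, label in dataset:
        out.append((features, label))
        out.extend((add_gaussian_noise(features, sigma), label)
                   for _ in range(copies))
    return out

# Toy dataset: two samples with binary labels.
data = [([0.1, 0.9, 0.4], 1), ([0.8, 0.2, 0.3], 0)]
augmented = augment(data, copies=2)
print(len(augmented))  # 2 originals + 4 noisy copies = 6
```

A model fit on the augmented set must learn decision boundaries that hold up under small input perturbations, which is exactly the property an evasion attacker tries to exploit.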

AI and False Positives: A Warning

We must recognize that this technology, while incredibly powerful, is also very different from what many cybersecurity defenders have worked with historically. To deal with issues such as evasion, models need to be tuned to high levels of sensitivity, and that sensitivity makes false positives inherent: they are something we must fully work into our methodology for using the technology.
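The sensitivity tradeoff can be illustrated with a toy detector: lowering the alert threshold catches more real attacks but inevitably raises false alarms. The scores and labels below are made-up illustrative numbers, not output from any real detection model:

```python
def confusion_counts(scores, labels, threshold):
    """Count outcomes for a detector at a given alert threshold.

    scores: model suspicion scores in [0, 1]; labels: 1 = malicious.
    Returns (true positives, false positives, false negatives).
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp, fp, fn

# Hypothetical scores for seven events, three of them genuinely malicious.
scores = [0.95, 0.90, 0.60, 0.55, 0.40, 0.20, 0.10]
labels = [1,    1,    1,    0,    0,    0,    0]

# A strict threshold raises no false alarms but misses a real attack...
print(confusion_counts(scores, labels, 0.8))  # (2, 0, 1)
# ...while a sensitive threshold catches every attack at the cost of a false positive.
print(confusion_counts(scores, labels, 0.5))  # (3, 1, 0)
```

Tuned sensitively enough to resist evasion, the detector will alert on some benign events; the operational methodology has to budget for triaging those alerts rather than assuming they can be eliminated.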

False positives can have catastrophic results. For an excellent example, watch the video of the keynote here if you haven’t seen it yet; I talk through the quintessential example of how a false positive almost started World War III and nuclear Armageddon.

The Take-Away

As with fire and flight, how we manage new innovations is the real story. Recognizing that technology does not have a moral compass is key. Our adversaries will use the technology to make their attacks more effective, and we must move forward with our eyes wide open to all aspects of how the technology will be used: its benefits, its limitations and how it will be used against us.

 

Please see the video recording of our RSA Conference 2019 keynote speech: https://www.rsaconference.com/events/us19/presentations/keynote-mcafee

 

The post McAfee CTO @ RSA: Catching Lightning in a Bottle or Burning Bridges to the Future? appeared first on McAfee Blogs.