
Expanding Our Vision to Expand the Cybersecurity Workforce

I recently had the opportunity to testify before Congress on how the United States can grow and diversify the cyber talent pipeline. It’s great that members of Congress have this issue on their radar, but at the same time, it’s concerning that we’re still having these discussions. A recent (ISC)² study puts the global cybersecurity workforce shortage at 2.93 million. Solving this problem is challenging, but I offered some recommendations to the House Homeland Security Committee’s Subcommittee on Cybersecurity, Infrastructure Protection and Innovation.

Increase the NSF CyberCorps Scholarships for Service Program

The National Science Foundation (NSF), together with the Department of Homeland Security (DHS), designed a program to attract more college students to cybersecurity, and it’s working. Ten to twelve juniors and seniors at each of the approximately 70 participating institutions across the country receive free tuition for up to two years plus annual stipends. Once they’ve completed their cybersecurity coursework and an internship, they go to work for the federal government for the same amount of time they were in the program. Afterwards, they’re free to remain federal employees or move elsewhere, and fortunately, a good number of them choose to stay.

Congress needs to increase the funding for this program (which has been flat since 2017) from $55 million to at least $200 million. Today the scholarships are available at only about 70 institutions; the program needs to be opened up to more universities and colleges across the country.

Expand CyberCorps Scholarships to Community Colleges

Community colleges attract a wide array of students – a fact that is good for the cybersecurity profession. Some community college attendees are recent high school graduates, but many are more mature – working adults or returning students looking for a career change or skills training. A strong security operation requires a range of skill levels, so a flexible scholarship program at community colleges could not only benefit graduates but also supply the profession with needed skills.

Furthermore, not everyone in cybersecurity needs a four-year degree. In fact, they don’t need to have a traditional degree at all. Certificate programs provide valuable training, and as employers, we should change our hiring requirements to reflect that reality.

Foster Diversity of Thinking, Recruiting and Hiring

Cybersecurity is one of the greatest technical challenges of our time, and we need to be as creative as possible to meet it. In addition to continually advancing technology, we need to identify people from diverse backgrounds – and not just in the standard sense of the term. We need to diversify the talent pool in terms of race, ethnicity, gender and age, all of which help create an inclusive team that delivers better results. However, we should also seek out gamers, veterans, people working toward technical certificates, and retirees from computing and from other fields such as psychology and the liberal arts as well as engineering. There is no one background required to be a cybersecurity professional. We absolutely need people with deep technical skills, but we also need teams with diverse perspectives, capabilities and levels of professional maturity.

Public-Private Sector Cross Pollination

We also must develop creative approaches to enabling the public and private sectors to share talent, particularly during significant cybersecurity events. We should design a mechanism for cyber professionals – particularly analysts or those who are training to become analysts – to move back and forth between the public and private sector so that government organizations would have a continual refresh of expertise. This type of cross-pollination would help everyone share best practices on technology, business processes and people management.

One way to accomplish this would be for DHS to partner with companies and other organizations such as universities to staff a cadre of cybersecurity professionals – operators, analysts and researchers – who are credentialed to move freely between public and private sector service. These professionals, particularly those in the private sector, could be on call to help an impacted entity and the government respond to a major attack in a timely way. Much like the National Guard, this flexible staffing approach to closing the skills gap could become a model of excellence.

We’re Walking the Talk

McAfee is proud to support the community by establishing programs that provide skills to help build the STEM pipeline, fill related job openings, and close gender and diversity gaps. These programs include an Online Safety Program, onsite training programs and internships for high school students. Our employees also volunteer in schools to help educate students on both cybersecurity risks and opportunities. Through volunteer-run programs across the globe, McAfee has educated more than 500,000 children to date.

As part of McAfee’s new pilot Achievement & Excellence in STEM Scholarship program, we’ll make three awards of $10,000 for the 2019-2020 school year. Twelve students from each of the three partner schools will be invited to apply, in coordination with each partner institution’s respective college advisor. Target students are college-bound high school seniors with a demonstrated passion for STEM fields who are seeking a future in a STEM-related path. This type of program can easily be replicated by other companies and used to support the growth and expansion of the workforce.

We’re Supporting Diversity

While we recognize there is still more to do in fostering diversity, we’re proud of the strides we’re making at McAfee. We believe we have a responsibility to our employees, customers and communities to ensure our workplace reflects the world in which we live. Having a diverse, inclusive workforce is the right thing to do, and after we became an independent, standalone cybersecurity company in 2017, we made it a priority and have kept it one.

The steps we’re taking include:

  • Achieving pay parity between women and men employees in April 2019, making us the first pureplay cybersecurity company to do so.
  • Diversifying our hiring: in 2018, 27.1% of all global hires were female, and 13% of all U.S. hires were underrepresented minorities.
  • Launching our “Return to Workplace” program in June 2018 for men and women who have paused their careers to raise children, care for loved ones or serve their country. The 12-week program offers the opportunity to reenter the tech space with the support and resources needed to successfully relaunch a career.
  • Establishing the Diversity & Culture Council last year, a volunteer-led global initiative focused on creating an infrastructure for the development and maintenance of an integrated strategy for diversity and workplace culture.
  • Joining CEO Action for Diversity & Inclusion, the largest coalition of CEOs and presidents committed to advancing diversity and inclusion in the workplace. By taking part, McAfee CEO Chris Young personally commits to the coalition’s three-pronged approach, beginning with fostering safe workplaces.

Looking to the Future

While I’d love to see a future where fewer cybersecurity professionals are needed, I know that for the foreseeable future, we’ll need not only great technology but also talented people. Given that reality, we in the industry need to expand our vision and definition of what constitutes cybersecurity talent. The workforce shortage is such that we have to expand our concepts and hiring requirements. In addition, the discipline itself will benefit from a population that brings more experiences, skills and diversity to bear on a field that is constantly changing.


Why AI Innovation Must Reflect Our Values in Its Infancy

In my last blog, I explained that while AI possesses the mechanics of humanness, we need to train the technology to make the leap from mimicking humanness with logic, rationale and analytics to emulating humanness with common sense. If we evolve AI to make this leap, the impact will be monumental, but it will require our global community to take a more disciplined approach to pervasive AI proliferation. Historically, our enthusiasm for and consumption of new technology has outpaced society’s ability to evolve legal, political, social, and ethical norms.

I spend most of my time thinking about AI in the context of how it will change the way we live – how it will change the way we interact, impact our social systems, and influence our morality. These technologies will permeate society, and the ubiquity of their usage will have far-reaching implications. We are already seeing evidence of how AI is changing how we live and interact with the world around us.

Think Google. It excites our curiosity and puts information at our fingertips. What is tripe – should I order it off the menu? Why do some frogs squirt blood from their eyes? What does exculpatory mean?

AI is weaving the digital world into the fabric of our lives, making information instantaneously available at our fingertips.

AI-enabled technology is also capable of anticipating our needs. Think Alexa. As a security professional, I am a holdout on this technology, but its allure is indisputable. It makes the digital world accessible with a voice command. It understands more than we may want it to – did someone tell Alexa to order coffee pods and toilet tissue, and if not, how did Alexa know to order toilet tissue? Maybe some things I just don’t want to know.

I also find it a bit creepy when my phone assumes (and gets it right) that I am going straight home from the grocery store letting me know, unsolicited, that it will take 28 minutes with traffic. How does it know I am going home? I could be going to the gym. It’s annoying that it knows I have no intention of working out. A human would at least have the decency to give me the travel time to both, allowing me to maintain the illusion that the gym was an equal possibility.

On a more serious note, AI-enabled technology will also impact our social, political and legal systems. As we incorporate it into more products and systems, issues related to privacy, morality and ethics will need to be addressed.

These questions are being asked now, but in anticipation of AI becoming embedded in everything we interact with, it is critical that we begin to evolve our societal structures to address both the opportunities and the threats that will come with it.

The opportunities associated with AI are exciting. AI shows incredible promise in the medical world, where it is already in use in some areas. There are tools today that leverage machine learning to help doctors identify disease-related patterns in imaging, and research is underway using AI to help fight cancer.

For example, in May 2018, The Guardian reported that skin cancer research using a convolutional neural network (CNN), an AI-based technique, detected skin cancer 95% of the time, compared to human dermatologists, who detected it 86.6% of the time. Additionally, facial recognition in concert with AI may someday be commonplace in diagnosing rare genetic disorders that today may take months or years to diagnose.

But what happens when the diagnosis made by a machine is wrong? Who is liable legally? Do AI-based medical devices also need malpractice insurance?

The same types of questions arise with autonomous vehicles. Today it is always assumed a human is behind the wheel in control of the vehicle. Our laws are predicated on this assumption.

How must laws change to account for vehicles that do not have a human driver? Who is liable? How does our road system and infrastructure need to change?

In the recent Uber accident case in Arizona, prosecutors determined that Uber was not criminally liable for the death of a pedestrian killed by one of its autonomous vehicles. However, the safety driver, who was watching TV rather than the road, may be charged with manslaughter. How does this change when a car’s occupants are no longer safety drivers but simply passengers in fully autonomous vehicles? How will laws need to evolve at that point for cars and other types of AI-based “active and unaided” technology?

There are also risks to be weighed in adopting pervasive AI. Legal and political safeguards need to be considered, whether in the form of global guidelines or laws. Machines do not have a moral compass. Given that the definition of morality may differ depending on where you live, it will be extremely difficult to train morality into AI models.

Today most AI models lack the ability to determine right from wrong, ill intent from good intent, morally acceptable outcomes from morally reprehensible ones. AI does not understand whether the person asking it questions, providing it data or giving it direction has malicious intent.

We may find ourselves on a moral precipice with AI. The safeguards or laws I mention above need to be considered before AI becomes more ubiquitous than it already is. AI will enable humankind to move forward in ways previously unimagined. It will also provide a powerful conduit through which humankind’s greatest shortcomings may be amplified.

The implications of technology that can profile entire segments of a population with little effort are disconcerting in a world where genocide has been a tragic reality, where civil obedience is coerced using social media, and where trust is undermined by those who use misinformation to sow political and societal discontent.

There is no doubt that AI will make this a better world. It gives us hope on so many fronts where technological impasses have impeded progress. Science may advance more rapidly, medical research may progress beyond current roadblocks, and daunting societal challenges around transportation and energy conservation may be solved. It is another tool in our technological arsenal, and the odds are overwhelmingly in favor of it improving the global human condition.

But realizing its advantages while mitigating its risks will require commitment and hard work from many conscientious minds across different quarters of our society. We in the technology community have an obligation to engage key stakeholders across the legal, political, social and scientific communities to ensure that as a society we define the moral guardrails for AI before it becomes capable of defining them for, or in spite of, us.

Like all technology before it, AI’s social impacts must be anticipated and balanced against the values we hold dear. Like parents raising a child, we need to insist that the technology reflect our values now, while its growth is still in its infancy.


I am an AI Neophyte

I am an Artificial Intelligence (AI) neophyte. I’m not a data scientist or a computer scientist or even a mathematician. But I am fascinated by AI’s possibilities, enamored with its promise and at times terrified of its potential consequences.

I have the good fortune to work in the company of amazing data scientists who seek to harness AI’s possibilities. I wonder at their ability to make artificial intelligence systems “almost” human. And I use that term very intentionally.

I mean “almost” human because, to date, AI systems lack the fundamentals of humanness. They possess the mechanics of humanness – qualities like logic, rationale, and analytics – but that is far from what makes us human. Their most human trait is one we would prefer they not inherit: a propensity to perpetuate bias. To be human is to have consciousness. To be sentient. To have common sense. And to be able to use these qualities, and the life experience that informs them, to successfully interpret not just the black and white of our world but the millions of shades of grey.

While data scientists are grappling with many technical challenges associated with AI, there are two I find particularly interesting: the first is bias, and the second is a lack of common sense.

AI’s propensity for bias is a monster of our own making. Since AI is largely a slave to the data it is given to learn from, its outputs will reflect all aspects of that data, bias included. We have already seen situations where applications leveraging AI have perpetuated human bias unintentionally but with disturbing consequences.

For example, many states have started to use risk assessment tools that leverage AI to predict probable rates of recidivism for criminal defendants. These tools produce a score that a judge then uses in determining a defendant’s sentence. The problem is not the tool itself but the data used to train it. There is evidence of significant historical racial bias in our judicial systems, so when that data is used to train AI, the resulting output is equally biased.

A report by ProPublica in 2016 found that algorithmic assessment tools are likely to falsely flag African American defendants as future criminals at nearly twice the rate of white defendants*. For anyone who saw the Tom Cruise movie Minority Report, it is disturbing to consider the similarities between the fictional technology used in the movie to predict future criminal behavior and this real-life application of AI.
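
To make that mechanism concrete, here is a minimal, entirely hypothetical sketch in Python. Nothing in it comes from a real risk assessment tool – the features, the bias rate and the data are all synthetic inventions for illustration – but it shows how a model trained on skewed historical labels reproduces the skew in its own predictions:

```python
# Hypothetical sketch: a model trained on biased historical labels
# reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic defendants: a protected group attribute plus one
# legitimate predictive feature (number of prior offenses).
group = rng.integers(0, 2, n)
prior_offenses = rng.poisson(2, n)

# Ground truth depends only on prior offenses, never on group...
true_recidivism = (prior_offenses + rng.normal(0, 1, n)) > 2.5

# ...but the historical labels used for training over-flag group 1.
biased_labels = true_recidivism | ((group == 1) & (rng.random(n) < 0.3))

X = np.column_stack([group, prior_offenses])
model = LogisticRegression().fit(X, biased_labels)
predicted = model.predict(X)

# False positive rate per group: non-recidivists flagged as high risk.
for g in (0, 1):
    non_recidivists = (group == g) & ~true_recidivism
    rate = predicted[non_recidivists].mean()
    print(f"group {g} false positive rate: {rate:.2f}")
```

The model never sees anyone’s true behavior, only the distorted historical record, so it flags one group’s non-recidivists far more often – the same pattern of unequal false positives that ProPublica documented.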

The second challenge is how to train artificial intelligence to be as good at interpreting nuance as humans are. It is straightforward to train AI to do something like identify an image as a hippopotamus. You provide it with hundreds or thousands of images or descriptions of a hippo, and eventually it gets it right most, if not all, of the time.

The accuracy percentage is likely to go down for things that are perhaps more difficult to distinguish – such as a picture of a field of sheep versus a picture of popcorn on a green blanket – but with enough training even this is a challenge that can be overcome.
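
For the curious, here is roughly what that training process looks like in code – a minimal Python/PyTorch sketch, not a production classifier. The folder layout (data/train with one subfolder per label, such as hippo and not_hippo), the model choice and every hyperparameter are assumptions made for the example:

```python
# Minimal sketch of supervised image classification. Paths, classes
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # uniform input size
    transforms.ToTensor(),
])

# Hundreds or thousands of labeled examples, one subfolder per class.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None, num_classes=2)  # learn from scratch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # penalize wrong guesses
        loss.backward()                        # compute gradients
        optimizer.step()                       # nudge weights toward correct
```

Notice what the loop rewards: agreeing with the labels, nothing more. Nowhere does the model acquire any notion of what a hippo is or what it can do – which is exactly the gap the next paragraphs describe.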

The interesting thing is that the challenge is not limited to things that lack distinguishing characteristics. In fact, things that are so obvious they never get stated or documented can be equally difficult for AI to process.

For example, we humans know that a hippopotamus cannot ride a bicycle. We inherently know that if someone says “Jimmy played with his boat in the swimming pool,” then – except in very rare instances likely involving eccentric billionaires – the boat was a toy boat and not a full-size catamaran.

No one told us these things – it’s just common sense. The common-sense aspects of interpreting these situations could be lost on AI. The technology also lacks the ability to infer emotion or intent from data. If we see someone buying flowers, we can mentally infer why – a romantic dinner, or somebody’s in the doghouse. We can not only guess why they are buying flowers, but when I say somebody’s in the doghouse, you know exactly what I mean. It’s not that they are literally in a doghouse; someone did something stupid, and the flowers are an attempt at atonement.

That leap is too big for AI today. When you add cultural differences to the mix, the complexity increases exponentially. If a British person says to put something in the boot, it is likely to be groceries; if an American says it, it will likely be a foot. Teaching AI common sense is a difficult task, and one that will take significant research and effort on the part of experts in the field.

But the leap from logic, rationale and analytics to common sense is a leap we need AI to make for it to truly become the tool we need it to be, in cybersecurity and in every other field of human endeavor.

In my next blog, I’ll discuss the importance of ensuring that this profoundly impactful technology reflects our human values in its infancy, before it starts influencing and shaping them itself.

* ProPublica, “Machine Bias,” May 23, 2016
