Category Archives: AI

NVIDIA’s Latest AI Software Turns Rough Doodles Into Realistic Landscapes

An anonymous reader quotes a report from The Verge: AI is going to be huge for artists, and the latest demonstration comes from Nvidia, which has built prototype software that turns doodles into realistic landscapes. Using a type of AI model known as a generative adversarial network (GAN), the software gives users what Nvidia is calling a "smart paint brush." This means someone can make a very basic outline of a scene (drawing, say, a tree on a hill) before filling in their rough sketch with natural textures like grass, clouds, forests, or rocks. The results are not quite photorealistic, but they're impressive all the same. The software generates AI landscapes instantly, and it's surprisingly intuitive. For example, when a user draws a tree and then a pool of water underneath it, the model adds the tree's reflection to the pool. Nvidia didn't say if it has any plans to turn the software into an actual product, but it suggests that tools like this could help "everyone from architects and urban planners to landscape designers and game developers" in the future. The company has published a video showing off the imagery it handles particularly well.

Bill Gates Talked With Google Employees About Using AI To Analyze Ultrasound Images of Unborn Children

Bill Gates said he talked to Google researchers about the application of artificial-intelligence technology in healthcare. "While Microsoft and Google are arch-competitors in many areas, including cloud computing and artificial intelligence research, the visit is an example of how Gates' broad interest in technology trumps Microsoft's historical rivalries with other tech companies," reports CNBC. From the report: Gates talked about the use of AI in weapons systems and autonomous vehicles before arriving at the subject of health care. "In the medical field, you know, we just don't have doctors. Most people are born and die in Africa without coming near to a doctor," said Gates, who is co-chair of the nonprofit Bill & Melinda Gates Foundation, which concerns itself with improving global health among other things. "We're doing a lot of work with analyzing ultrasound, and we can do things like sex-blind the output, because we're not having anybody actually see the image. We can tell you what's going on without revealing the gender, which is, of course -- when you do that, it drives gendercide. And yet, we're doing the analysis, the medical understanding, in a much deeper way, and that's an example where it's all done with a lot of machine learning." "I was meeting with the guys at Google who are helping us with this, this morning, and there's some incredible promise in that field, where, in the primary health-care system, the amount of sophistication to do diagnosis and understand, for example, 'Is this a high-risk pregnancy?' 'Yes.' 'Let's escalate that person to go to the hospital level,' even though you couldn't afford to do that on a widespread basis. So this stuff is going to be very domain-specific."

NVIDIA’s $99 Jetson Nano is an AI Computer for DIY Enthusiasts

Sophisticated AI generally isn't an option for homebrew devices, since mini computers can rarely handle much more than the basics. NVIDIA thinks it can do better -- it's unveiling an entry-level AI computer, the Jetson Nano, that's aimed at "developers, makers and enthusiasts." From a report: NVIDIA claims that the Nano's 128-core Maxwell-based GPU and quad-core ARM Cortex-A57 processor can deliver 472 gigaflops of processing power for neural networks, high-res sensors and other robotics features while still consuming a miserly 5W. On the surface, at least, it could hit the sweet spot if you're looking to build your own robot or smart speaker. The kit can run Linux out of the box, and supports a raft of AI frameworks (including, of course, NVIDIA's own). It comes equipped with 4GB of RAM, gigabit Ethernet and the I/O you'd need for cameras and other attachments.

Orgs Say Yes to AI Use But Ask “What Is It?”

Organizations across the US and Japan have plans to increase their use of artificial intelligence (AI) and machine learning (ML) this year, yet many don’t really understand the technology, according to

Breaking the Bank: Weakness in Financial AI Applications

Currently, threat actors possess limited access to the technology required to conduct disruptive operations against financial artificial intelligence (AI) systems, and the risk of this type of targeting remains low. However, there is a high risk of threat actors leveraging AI as part of disinformation campaigns to cause financial panic. As AI financial tools become more commonplace, adversarial methods to exploit these tools will also become more available, and operations targeting the financial industry will be increasingly likely in the future.

AI Compounds Both Efficiency and Risk

Financial entities increasingly rely on AI-enabled applications to streamline daily operations, assess client risk, and detect insider trading. However, researchers have demonstrated how exploiting vulnerabilities in certain AI models can adversely affect the final performance of a system. Cyber threat actors can potentially leverage these weaknesses for financial disruption or economic gain in the future.

Recent advances in adversarial AI research highlight the vulnerabilities in some AI techniques used by the financial sector. Data poisoning attacks, or manipulating a model's training data, can affect the end performance of a system by leading the model to generate inaccurate outputs or assessments. Manipulating the data used to train a model can be particularly powerful if it remains undetected, since "finished" models are often trusted implicitly. It should be noted that adversarial AI research demonstrates that anomalies in a model do not necessarily point users toward an obviously wrong answer; rather, they redirect users away from the more correct output. Additionally, some cases of compromise require threat actors to obtain a copy of the model itself, through reverse engineering or by compromising the machine learning pipeline of the target. The following are some vulnerabilities that assume this white-box knowledge of the models under attack (a toy sketch follows the list):

  • Classifiers are used for detection and identification, such as object recognition in driverless cars and malware detection in networks. Researchers have demonstrated how these classifiers can be susceptible to evasion, meaning objects can be misclassified due to inherent weaknesses in the model (Figure 1).


Figure 1: Examples of classifier evasion where AI models identified a 6 as a 2

  • Researchers have highlighted how data poisoning can influence the outputs of AI recommendation systems. By changing reward pathways, adversaries can make a model suggest a suboptimal output, such as reckless trades that result in substantial financial losses. Additionally, groups have demonstrated a data-poisoning attack in which the attackers did not have control over how the training data was labeled.
  • Natural language processing applications can analyze text and generate a basic understanding of the opinions expressed, also known as sentiment analysis. Recent papers highlight how users can inject corrupted text examples into a sentiment analysis model's training data to degrade the model's overall performance and guide it to misunderstand a body of text.
  • Compromises can also occur when the threat actor has limited access to, and understanding of, the model's inner workings. Researchers have demonstrated how open access to the prediction functions of a model, as well as knowledge transfer, can also facilitate compromise.
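
To make the classifier-evasion bullet concrete, here is a minimal white-box sketch in Python. It is purely illustrative: the synthetic data, feature count, and epsilon value are assumptions standing in for the image classifiers discussed above, not an attack from the report. The perturbation steps the input against the sign of the model's weights, the intuition behind fast-gradient-style evasion.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic two-class data standing in for, e.g., image features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = (X @ rng.normal(size=20) > 0).astype(int)
    clf = LogisticRegression().fit(X, y)

    x = X[0].copy()
    pred = clf.predict([x])[0]
    print("original prediction:", pred)

    # White-box evasion: step the input against the model's weight vector,
    # the gradient of the logistic score. Epsilon is exaggerated for the toy.
    w, eps = clf.coef_[0], 1.0
    x_adv = x - eps * np.sign(w) if pred == 1 else x + eps * np.sign(w)
    print("adversarial prediction:", clf.predict([x_adv])[0])

With a large enough step the prediction flips, mirroring the 6-read-as-2 example in Figure 1.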

How Financial Entities Leverage AI

AI can process large amounts of information very quickly, and financial institutions are adopting AI-enabled tools to make accurate risk assessments and streamline daily operations. As a result, threat actors likely view financial service AI tools as an attractive target to facilitate economic gain or financial instability (Figure 2).


Figure 2: Financial AI tools and their weaknesses

Sentiment Analysis

Use

Branding and reputation are variables that help analysts plan future trade activity and examine potential risks associated with a business. News and online discussions offer a wealth of resources to examine public sentiment. AI techniques, such as natural language processing, can help analysts quickly identify public discussions referencing a business and examine the sentiment of these conversations to inform trades or help assess the risks associated with a firm.

Potential Exploitation

Threat actors can potentially insert fraudulent data that generates erroneous analyses of a publicly traded firm. For example, threat actors could distribute false, negative information about a company, which could adversely affect the firm's future trade activity or lead to a damaging risk assessment. As noted above, such poisoning is particularly powerful if it remains undetected, since "finished" models are often trusted implicitly. The toy sketch below illustrates the idea.
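
The following is a hypothetical illustration, not FireEye tooling: the phrases, labels, and the "profits" trigger word are invented. Flipping the labels on a handful of training examples is enough to invert the sentiment a simple model assigns to a probe headline.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great earnings beat", "strong growth outlook", "record profits",
             "fraud investigation announced", "missed revenue targets",
             "ceo resigns amid losses"] * 10
    labels = [1, 1, 1, 0, 0, 0] * 10          # 1 = positive, 0 = negative

    def train(y):
        return make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, y)

    clean = train(labels)

    # Poison the training set: flip the label on every example mentioning "profits".
    poisoned = train([0 if "profits" in t else y for t, y in zip(texts, labels)])

    probe = ["record profits expected"]
    print("clean model:   ", clean.predict(probe)[0])     # 1 (positive)
    print("poisoned model:", poisoned.predict(probe)[0])  # 0 (negative)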

Threat Actors Using Disinformation to Cause Financial Panic

FireEye assesses with high confidence that there is a high risk of threat actors spreading false information that triggers AI-enabled trading and causes financial panic. Additionally, threat actors can leverage AI techniques to generate manipulated multimedia, or "deep fakes," to facilitate such disruption.

False information can have considerable market-wide effects. Malicious actors have a history of distributing false information to facilitate financial instability. For example, in April 2013, the Syrian Electronic Army (SEA) compromised the Associated Press (AP) Twitter account and announced that the White House was attacked and President Obama sustained injuries. After the false information was posted, stock prices plummeted.


Figure 3: Tweet from the Syrian Electronic Army (SEA) after compromising Associated Press's Twitter account

  • Malicious actors distributed false messaging that triggered bank runs in Bulgaria and Kazakhstan in 2014. In two separate incidents, criminals sent emails, text messages, and social media posts suggesting bank deposits were not secure, causing customers to withdraw their savings en masse.
  • Threat actors can use AI to create manipulated multimedia videos or "deep fakes" to spread false information about a firm or market-moving event. Threat actors can also use AI applications to replicate the voice of a company's leadership to conduct fraudulent trades for financial gain.
  • We have observed one example where a manipulated video likely impacted the outcome of a political campaign.

Portfolio Management

Use

Several financial institutions are employing AI applications to select stocks for investment funds, or in the case of AI-based hedge funds, automatically conduct trades to maximize profits. Financial institutions can also leverage AI applications to help customize a client's trade portfolio. AI applications can analyze a client's previous trade activity and propose future trades analogous to those already found in a client's portfolio.
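
As a simple illustration of that last idea, consider a nearest-neighbor-style suggestion over made-up holdings; the tickers and feature vectors below are entirely hypothetical.

    import numpy as np

    # Hypothetical feature vectors: [volatility, dividend yield, tech exposure]
    universe = {"AAA": np.array([0.20, 0.01, 0.9]),
                "BBB": np.array([0.10, 0.04, 0.1]),
                "CCC": np.array([0.25, 0.00, 0.8])}
    portfolio = ["AAA"]

    # Profile the client as the mean of their current holdings.
    profile = np.mean([universe[t] for t in portfolio], axis=0)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Recommend the unheld ticker most similar to the client's profile.
    candidates = {t: cosine(profile, v) for t, v in universe.items()
                  if t not in portfolio}
    print(max(candidates, key=candidates.get))   # "CCC"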

Potential Exploitation

Actors could influence recommendation systems to redirect a hedge fund toward irreversible bad trades, causing the company to lose money (e.g., flooding the market with trades that can confuse the recommendation system and cause the system to start trading in a way that damages the company).

Moreover, many of the automated trading tools used by hedge funds operate without human supervision and conduct trade activity that directly affects the market. This lack of oversight could leave future automated applications more vulnerable to exploitation as there is no human in the loop to detect anomalous threat activity.

Threat Actors Conducting Suboptimal Trades

We assess with moderate confidence that manipulating trade recommendation systems poses a moderate risk to AI-based portfolio managers.

  • The diminished human involvement with trade recommendation systems, coupled with the irreversibility of trade activity, suggests that adverse recommendations could quickly escalate to a large-scale impact.
  • Additionally, operators can influence recommendation systems without access to sophisticated AI technologies, instead using knowledge of the market and mass trades to degrade the application's performance.
  • We have previously observed malicious actors targeting trading platforms and exchanges, as well as compromising bank networks to conduct manipulated trades.
  • Both state-sponsored and financially motivated actors have incentives to exploit automated trading tools to generate profit, destabilize markets, or weaken foreign currencies.
  • Russian hackers reportedly leveraged Corkow malware to place $500M worth of trades at non-market rates, briefly destabilizing the dollar-ruble exchange rate in February 2015. Future criminal operations could leverage vulnerabilities in automated trading algorithms to disrupt the market with a flood of automated bad trades.

Compliance and Fraud Detection

Use

Financial institutions and regulators are leveraging AI-enabled anomaly detection tools to ensure that traders are not engaging in illegal activity. These tools can examine trade activity, internal communications, and other employee data to ensure that workers are not capitalizing on advanced knowledge of the market to engage in fraud, theft, insider trading, or embezzlement.

Potential Exploitation

Sophisticated threat actors can exploit the weaknesses in classifiers to alter an AI-based detection tool and mischaracterize anomalous illegal activity as normal activity. Manipulating the model helps insider threats conduct criminal activity without fear of discovery.

Threat Actors Camouflaging Insider Threat Activity

Currently, threat actors possess limited access to the kind of technology required to evade these fraud-detection systems, and we therefore assess with high confidence that the threat of this type of activity remains low. However, as AI financial tools become more commonplace, adversarial methods to exploit these tools will also become more available, and insider threats leveraging AI to evade detection will likely increase in the future.

Underground forums and social media posts demonstrate there is a market for individuals with insider access to financial institutions. Insider threats could exploit weaknesses in AI-based anomaly detectors to camouflage nefarious activity, such as external communications, erratic trades, and data transfers, as normal activity.

Trade Simulation

Use

Financial entities can use AI tools that leverage historical data from previous trade activity to simulate trades and examine their effects. Quant-fund managers and high-speed traders can use this capability to strategically plan future activity, such as the optimal time of the day to trade. Additionally, financial insurance underwriters can use these tools to observe the impact of market-moving activity and generate better risk assessments.

Potential Exploitation

By exploiting inherent weaknesses in an AI model, threat actors could lull a company into a false sense of security regarding the way a trade will play out. Specifically, threat actors could find out when a company is training its model and inject corrupt data into the dataset being used for training; the end application would then generate an incorrect simulation of potential trades and their consequences. These models are regularly retrained on the latest financial information to improve a simulation's performance, providing threat actors with multiple opportunities for data poisoning attacks. Additionally, some high-speed traders speculate that threat actors could flood the market with fake sell orders to confuse trading algorithms and potentially cause the market to crash. The sketch below shows, in miniature, how a few fabricated data points can skew what a simulator learns about market behavior.
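
This is a deliberately tiny, hypothetical illustration (synthetic volumes and price impacts, a plain linear fit standing in for a real simulator) of how appended fake trades bias a trained model:

    import numpy as np

    # True relationship: selling volume pushes price down.
    rng = np.random.default_rng(7)
    volume = np.linspace(1, 10, 50)
    impact = -0.5 * volume + rng.normal(0, 0.1, 50)

    def fit_slope(x, y):
        return np.polyfit(x, y, 1)[0]   # slope of a linear fit

    print("clean slope:    %.2f" % fit_slope(volume, impact))      # ~ -0.50

    # Attacker appends fabricated large trades that show no price impact.
    x_p = np.concatenate([volume, [9.0, 9.5, 10.0] * 10])
    y_p = np.concatenate([impact, [0.0] * 30])
    print("poisoned slope: %.2f" % fit_slope(x_p, y_p))            # biased toward 0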

FireEye Threat Intelligence has previously examined how financially motivated actors can leverage data manipulation for profit through pump and dump scams and stock manipulation.

Threat Actors Conducting Insider Trading

FireEye assesses with moderate confidence that the current risk of threat actors leveraging these attacks is low, as exploiting trade simulations requires sophisticated technology as well as additional insider intelligence regarding when a financial company is training its model. Despite these limitations, as financial AI tools become more popular, adversarial methods to exploit them are also likely to become more commonplace on underground forums and via state-sponsored threats. Future financially motivated operations could monitor or manipulate trade simulation tools as another means of gaining advanced knowledge of upcoming market activity.

Risk Assessment and Modeling

Use

AI can help the financial insurance sector's underwriting process by examining client data and highlighting features that it considers vulnerable prior to market-moving actions (joint ventures, mergers & acquisitions, research & development breakthroughs, etc.). Creating an accurate insurance policy ahead of market catalysts requires a risk assessment to highlight a client's potential weaknesses.

Financial services can also employ AI applications to improve their risk models. Advances in generative adversarial networks can help risk management by stress-testing a firm's internal risk model to evaluate performance or highlight potential vulnerabilities in a firm's model.

Potential Exploitation

If a country is conducting market-moving events with a foreign business, state-sponsored espionage actors could use data poisoning attacks to cause AI models to over- or underestimate the value or risk associated with a firm, gaining a competitive advantage ahead of planned trade activity. For example, espionage actors could feasibly use this knowledge to help turn a joint venture into a hostile takeover or eliminate a competitor in a bidding process. Additionally, threat actors can exploit weaknesses in financial AI tools as part of larger third-party compromises against high-value clients.

Threat Actors Influencing Trade Deals and Negotiations

With high confidence, we consider the current threat risk to trade activity and business deals to be low, but as more companies leverage AI applications to help prepare for market-moving catalysts, these applications will likely become an attack surface for future espionage operations.

In the past, state-sponsored actors have employed espionage operations during collaborations with foreign companies to ensure favorable business deals. Future state-sponsored espionage activity could leverage weaknesses in financial modeling tools to help nations gain a competitive advantage.

Outlook and Implications

Businesses adopting AI applications should be aware of the risks and vulnerabilities introduced with these technologies, as well as the potential benefits. It should be noted that AI models are not static; they are routinely updated with new information to make them more accurate. This constant model training frequently leaves them vulnerable to manipulation. Companies should remain vigilant and regularly audit their training data to eliminate poisoned inputs. Additionally, where applicable, AI applications should incorporate human supervision to ensure that erroneous outputs or recommendations do not automatically result in financial disruption.

AI's inherent limitations also pose a problem as the financial sector increasingly adopts these applications for its operations. The lack of transparency in how a model arrived at its answer is problematic for analysts who are using AI recommendations to conduct trades. Without an explanation for its output, it is difficult to determine liability when a trade has negative outcomes. This lack of clarity can lead analysts to mistrust an application and eventually refrain from using it altogether. Additionally, the rise of data privacy laws may accelerate the need for explainable AI in the financial sector. Europe's General Data Protection Regulation (GDPR) stipulates that companies employing AI applications must be able to explain decisions made by their models.

Some financial institutions have begun addressing this explainability problem by developing AI models that are inherently more transparent. Researchers have also developed self-explaining neural networks, which provide understandable explanations for the outputs generated by the system.

Why Social Network Analysis Is Important

I got into social network analysis purely for nerdy reasons – I wanted to write some code in my free time, and python modules that wrap Twitter's API (such as tweepy) allowed me to do simple things with just a few lines of code. I started off with toy tasks (like mapping the time of day that @realDonaldTrump tweets) and then moved onto creating tools to fetch and process streaming data, which I used to visualize trends during some recent elections.
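
For a flavor of how little code that first toy task needs, here is a rough sketch using tweepy's classic OAuth flow; the credentials are placeholders, and the timestamps come back in UTC.

    import tweepy
    from collections import Counter

    # Placeholder credentials from a Twitter developer account.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    # Tally the last 500 tweets by hour of day (UTC).
    hours = Counter()
    for tweet in tweepy.Cursor(api.user_timeline,
                               screen_name="realDonaldTrump").items(500):
        hours[tweet.created_at.hour] += 1

    for hour in sorted(hours):
        print("%02d:00 %s" % (hour, "#" * hours[hour]))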

The more I work on these analyses, the more I’ve come to realize that there are layers upon layers of insights that can be derived from the data. There’s data hidden inside data – and there are many angles you can view it from, all of which highlight different phenomena. Social network data is like a living organism that changes from moment to moment.

Perhaps some pictures will help explain this better. Here's a visualization of conversations about Brexit that happened between the 3rd and 4th of December, 2018. Each dot is a user, and each line represents a reply, mention, or retweet.

Tweets supportive of the idea that the UK should leave the EU are concentrated in the orange-colored community at the top. Tweets supportive of the UK remaining in the EU are in blue. The green nodes represent conversations about UK’s Labour party, and the purple nodes reflect conversations about Scotland. Names of accounts that were mentioned more often have a larger font.
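
The machinery behind pictures like these is simple to sketch. Assuming standard Twitter API tweet dictionaries (not my exact pipeline), the graph can be built with networkx and the colored communities recovered with a stock algorithm:

    import networkx as nx

    def add_tweet(graph, tweet):
        """Add one tweet's interactions: users are nodes, replies/mentions/
        retweets are edges."""
        source = tweet["user"]["screen_name"]
        for mention in tweet["entities"]["user_mentions"]:
            graph.add_edge(source, mention["screen_name"], kind="mention")
        if "retweeted_status" in tweet:
            original = tweet["retweeted_status"]["user"]["screen_name"]
            graph.add_edge(source, original, kind="retweet")

    G = nx.DiGraph()
    # for tweet in fetched_tweets: add_tweet(G, tweet)

    # The colored clusters correspond to communities, found with e.g.:
    # nx.algorithms.community.greedy_modularity_communities(G.to_undirected())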

Here’s what the conversation space looked like between the 14th and 15th of January, 2019.

Notice how the shape of the visualization has changed. Every snapshot produces a different picture that reflects the opinions, issues, and participants in that particular conversation space at the moment it was recorded. Here's one more – this time from the 20th to 21st of January, 2019.

Every interaction space is unique. Here’s a visual representation of interactions between users and hashtags on Twitter during the weekend before the Finnish presidential elections that took place in January of 2018.

And here’s a representation of conversations that happened in the InfoSec community on Twitter between the 15th and 16th of March, 2018.

I’ve been looking at Twitter data on and off for a couple of years now. My focus has been on finding scams, social engineering, disinformation, sentiment amplification, and astroturfing campaigns. Even though the data is readily available via Twitter’s API, and plenty of the analysis can be automated, oftentimes finding suspicious activity just involves blind luck – the search space is so huge that you have to be looking in the right place, at the right time, to find it.

One approach is, of course, to think like the adversary. Social networks run on recommendation algorithms that can be probed and reverse engineered. Once an adversary understands how those underlying algorithms work, they’ll game them to their advantage. These tactics share many analogies with search engine optimization methodologies. One approach to countering malicious activities on these platforms is to devise experiments that simulate the way attackers work, and then design appropriate detection methods, or countermeasures, against these.

Ultimately, it would be beneficial to have automation that can trace suspicious activity back through time, to its source, visualize how the interactions propagated through the network, and provide relevant insights (that can be queried using natural language). Of course, we’re not there yet.

The way social networks present information to users has changed over time. In the past, Twitter feeds contained a simple, sequential list of posts published by the accounts a user followed. Nowadays, Twitter feeds are made up of recommendations generated by the platform’s underlying models – what they understand about a user, and what they think the user wants to see.

A potentially dystopian outcome of social networks was outlined in a blog post written by François Chollet in May 2018, in which he describes social media becoming a “psychological panopticon”.

The premise for his theory is that the algorithms that drive social network recommendation systems have access to every user’s perceptions and actions. Algorithms designed to drive user engagement are currently rather simple, but if more complex algorithms (for instance, based on reinforcement learning) were used to drive these systems, they might end up creating optimization loops for human behavior, in which the recommender observes the current state of each target (user) and keeps tuning the information that is fed to them until the algorithm starts observing the opinions and behaviors it wants to see. In essence, the system will attempt to optimize its users. Here are some ways these algorithms may attempt to “train” their targets (a toy simulation follows the list):

  • The algorithm may choose to only show its target content that it believes the target will engage or interact with, based on the algorithm’s notion of the target’s identity or personality. Thus, it will cause a reinforcement of certain opinions or views in the target, based on the algorithm’s own logic. (This is partially true today)
  • If the target publishes a post containing a viewpoint that the algorithm doesn’t wish the target to hold, it will only share it with users who would view the post negatively. The target will, after being flamed or down-voted enough times, stop sharing such views.
  • If the target publishes a post containing a viewpoint the algorithm wants the target to hold, it will only share it with users that would view the post positively. The target will, after some time, likely share more of the same views.
  • The algorithm may place a target in an “information bubble” where the target only sees posts from friends that share the target’s views (that are desirable to the algorithm).
  • The algorithm may notice that certain content it has shared with a target caused their opinions to shift towards a state (opinion) the algorithm deems more desirable. As such, the algorithm will continue to share similar content with the target, moving the target’s opinion further in that direction. Ultimately, the algorithm may itself be able to generate content to those ends.
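
To see the shape of such a loop, here is a deliberately crude, hypothetical simulation: an epsilon-greedy recommender that maximizes nothing but engagement. The topics and engagement rates are invented; the point is only that the feed converges on whatever the simulated user engages with most.

    import random

    random.seed(1)
    topics = ["sports", "politics", "conspiracy"]
    engagement_rate = {"sports": 0.30, "politics": 0.40, "conspiracy": 0.55}

    shown = {t: 0 for t in topics}      # times each topic was recommended
    engaged = {t: 0 for t in topics}    # engagements observed

    for step in range(10000):
        if random.random() < 0.1:       # occasionally explore
            topic = random.choice(topics)
        else:                           # otherwise exploit the best estimate
            topic = max(topics,
                        key=lambda t: engaged[t] / shown[t] if shown[t] else 0)
        shown[topic] += 1
        if random.random() < engagement_rate[topic]:
            engaged[topic] += 1

    print(shown)   # the highest-engagement topic ends up dominating the feed

Even this crude loop converges on the highest-engagement topic, which foreshadows the YouTube example discussed below.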

Chollet goes on to mention that, although social network recommenders may start to see their users as optimization problems, a bigger threat still arises from external parties gaming those recommenders in malicious ways. The data available about users of a social network can already be used to predict when a user is suicidal or when a user will fall in love or break up with their partner, and content delivered by social networks can be used to change users’ moods. We also know that this same data can be used to predict which way a user will vote in an election, and how likely that user is to vote at all.

If this optimization problem seems like a thing of the future, bear in mind that, at the beginning of 2019, YouTube made changes to its recommendation algorithms precisely because of the problems they were causing for certain members of society. Guillaume Chaslot posted a Twitter thread in February 2019 that described how YouTube’s algorithms favored recommending conspiracy theory videos, guided by the behaviors of a small group of hyper-engaged viewers. Fiction is often more engaging than fact, especially for users who spend all day, every day watching YouTube. As such, the conspiracy videos watched by this group of chronic users received high engagement, and thus were pushed up by the recommendation system. Driven by these high engagement numbers, the makers of these videos created more and more content, which was, in turn, viewed by this same group of users. YouTube’s recommendation system was optimized to pull more and more users into a hole of chronic YouTube addiction. Many of the users sucked into this hole have since become indoctrinated with right-wing extremist views. One such user actually became convinced that his brother was a lizard, and killed him with a sword. Chaslot has since created a tool that allows users to see which of these types of videos are being promoted by YouTube.

Social engineering campaigns run by entities such as the Internet Research Agency, Cambridge Analytica, and the far-right demonstrate that social media advert distribution platforms (such as those on Facebook) have provided malicious actors with a weapon that is incredibly powerful and damaging to society. The disruption caused by their recent political campaigns has created divides in popular thinking and opinion that may take generations to repair. Now that the effectiveness of these social engineering techniques is apparent, I expect what we’ve seen so far is just an omen of what’s to come.

The disinformation we hear about is only a fraction of what’s actually happening. It requires a great deal of time and effort for researchers to find evidence of these campaigns. As I already noted, Twitter data is open and freely available, and yet it can still be extremely tedious to find evidence of disinformation campaigns on that platform. Facebook’s targeted ads are only seen by the users who were targeted in the first place. Unless those who were targeted come forward, it is almost impossible to determine what sort of ads were published, who they were targeted at, and what the scale of the campaign was. Although social media platforms now enforce transparency on political ads, the source of these ads must still be determined in order to understand who’s being targeted, and by what content.

Many individuals on social networks share links to “clickbait” headlines that align with their personal views or opinions (sometimes without having read the content behind the link). Fact checking is uncommon, and often difficult for people who don’t have a lot of time on their hands. As such, inaccurate or fabricated news, headlines, or “facts” propagate through social networks so quickly that even if they are later refuted, the damage is already done. This mechanism forms the very basis of malicious social media disinformation. A well-documented example of this was the UK’s “Leave” campaign that was run before the Brexit referendum. Some details of that campaign are documented in the recent Channel 4 film: “Brexit: The Uncivil War”.

It’s not just the engineers of social networks who need to understand how they work and how they might be abused. Social networks are a relatively new form of human communication, and have only been around for a few decades. But they’re part of our everyday lives, and obviously they’re here to stay. Social networks are a powerful tool for spreading information and ideas, and an equally powerful weapon for social engineering, disinformation, and propaganda. As such, research into these systems should be of interest to governments, law enforcement, cyber security companies, and organizations that seek to understand human communications, culture, and society.

The potential avenues of research in this field are numerous. Whilst my research with Twitter data has largely focused on graph analysis methodologies, I’ve also started experimenting with natural language processing techniques, which I feel have a great deal of potential.

The Orville, “Majority Rule”. A vote badge worn by all citizens of the alien world Sargus 4, allowing the wearer to receive positive or negative social currency. Source: youtube.com

We don’t yet know how much further social networks will integrate into society. Perhaps the future will end up looking like the “Majority Rule” episode of The Orville, or the “Nosedive” episode of Black Mirror, both of which depict societies in which each individual’s social “rating” determines what they can and can’t do and where a low enough rating can even lead to criminal punishment.

Inspired and powered by partners

$32 billion in revenue. That’s an incredible number that Satya Nadella and Amy Hood shared during the Q2 earnings call last week. Just as impressive is the commercial cloud revenue increase of 48 percent year-over-year to $9 billion. Did you know that 95 percent of Microsoft’s commercial revenue flows directly through our partner ecosystem? With more than 7,500 partners joining that ecosystem every month, partner growth and partner innovation are directly fueling our commercial cloud growth. One accelerant, the IP co-sell program, now has thousands of co-sell ready partners that generated an incredible $8 billion in contracted partner revenue since the program began in July 2017.

It’s exciting to see the success of our partners, and to know we are collaborating with businesses of all types and sizes wherever there is opportunity. We’re working together with partners old and new to help them build their own digital capability to compete and grow. We’ve doubled down on our partnership with Accenture and Avanade, creating the new Accenture Microsoft Business Group to help customers overcome disruption and lead transformation in their industries. We’re partnering in new ways with customers like Kroger, bringing their new Retail as a Service solution, built on Azure, to their stores – and selling it to other retailers.

Part of Microsoft’s digital transformation is moving beyond transactional reselling via partners, to a true partnership philosophy where we’re working together to develop and sell each other’s technology and solutions. Our partners are building on our technology, collaborating with partners across borders to build repeatable solutions, and creating new revenue opportunities that didn’t exist in the past. We focus as much on selling third-party solutions as our own, and the speed of the cloud enables all of us to accelerate value to our customers.

I want to share more with you about how hundreds of thousands of Microsoft partners are powering customer innovation, and how we are evolving our partnership strategy in order to drive tech intensity for customers around the world.

Partner success and momentum

With hundreds of thousands of partners across the world, our partner ecosystem is stronger than ever.

CSP: Through our Cloud Solution Provider (CSP) program, our fastest-growing licensing model, partners are embedding Microsoft technologies into their own solutions and delivering more differentiated, long-term value for customers. The number of partners transacting through CSP is up 52 percent, and they are serving more than 2 million customers.

Azure Expert MSP: The Azure Expert MSP program has grown to 43 partners that deliver consistent, repeatable, high-fidelity managed services on Azure and are driving more than $100,000 per month in Azure consumption. A big part of this volume is in migration services, as SQL Server 2008 phases out this summer, followed by Windows Server 2008 a year from now. The opportunity for partners can’t be overstated. Our estimates put the opportunity around $50 billion for partners to help customers move their existing on-premises workloads to Azure and start capitalizing on the benefits of the cloud.

IP Co-Sell: Our industry-leading IP co-sell program that rewards Microsoft sellers for selling third-party solutions is a runaway success, generating $8 billion in contracted partner revenue since July. Our partners are reaping the benefits, seeing co-sell deals that close nearly three times faster, projects that are nearly six times larger, and six times more Azure consumption.

Building the largest commercial marketplace

Gartner estimates the opportunity for business applications will be $133 billion this year, with independent software vendors (ISVs) driving more than half of that. So we are upping our commitment to ISVs by investing in Microsoft’s marketplaces, Azure Marketplace and AppSource, to build the largest commercial marketplace in the industry. Our marketplace provides a frictionless selling and buying experience that brings parity to first- and third-party solutions and meets the needs of both IP builders and software purchasers. Partners with solutions in our marketplace can sell directly to more than a billion customers and partners, and they benefit from lower deployment costs and flexible procurement models for software. Through the marketplace go-to-market services, we’ve seen partners achieve an average 40 percent reduction in cost per lead, and twice the lead-to-sale conversion rate compared to industry averages.

New capabilities are coming soon to AppSource and Azure Marketplace. One of the biggest developments is the ability for partners to offer their solutions to our partner ecosystem through the CSP program with a single click. We’re also improving the user experience and interface with natural language and recommendations features. And by setting up private marketplaces, partners will be able to customize the terms for any specific customer, billing or metering their services on a per-user, per-app, per-month, or per-day basis to meet customer needs. And soon we’ll be offering curated portfolios of IP and services solutions that leverage Azure, Dynamics, Power BI, Power Apps, and Office.

AI for enterprise

IDC estimates that global spending on cognitive and artificial intelligence systems will more than triple between 2018 and 2022, from $24 billion to $77.6 billion. And just like Microsoft transformed the way people work and live by making personal computing widely accessible in the 1980s and 1990s, we plan to do the same with artificial intelligence. Our aim is to make AI accessible to and valuable for everyone. We’ll do it by focusing on AI innovations that extend and empower human capabilities, while keeping people in control. Our partners are finding huge success and growth in the AI space. Through our AI Inner Circle Partner program, partners provide custom services and enhanced AI solutions to customers and have seen more than 200 percent growth in their AI practices year-over-year.

As we encourage partners to go all-in on AI, we need to make sure they have substantial resources and training. So we’ve developed AI Practice Development Workshops and advanced education, with trainings in the classroom, online, and at events. Since July, more than 29,000 people have been trained across Microsoft’s data and AI portfolios. Our popular AI Partner Development Playbook and library of online resources (collectively more than 1 million downloads) have put answers at the fingertips of partners launching and expanding their AI services.

New HR skills playbook and tools

The latest in our series of Cloud Practice Development Playbooks, released today, is an outstanding human resources guide for partners and customers. We collected input from more than 700 partners to develop “Recruit, Hire, Onboard & Retain Talent.” It is a hands-on guide to walk partners through the HR process of recruiting, hiring, and onboarding employees. Alongside the playbook, we’re launching a new learning portal on MPN that simplifies partner training, and a new Partner Transformation Assessment Tool to help partners map resources and investments against solution areas and workloads.

Partner opportunities ahead

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. And we know that partners make more possible. As a customer-first, partner-led company, we start with the needs of our customers and work with our partners to deliver the best outcomes for each organization. We look forward to continued evolution in the Microsoft-partner relationship this year—with more innovation in AI, more co-selling opportunities, and more ways to connect partners to customers and to other partners through Azure Marketplace and AppSource. I invite you to learn more about how Microsoft leaders from the Azure, Dynamics, and ISV teams are supporting our partners, and how partners can capitalize on the opportunities ahead.

From shopping to car design, our customers and partners spark innovation across every industry

Judson Althoff visits Kroger’s QFC store in Redmond, WA, one of two pilot locations featuring connected customer experiences powered by Microsoft Azure and AI. Also pictured, Wesley Rhodes, Vice President of Technology Transformation at Kroger.

Computing is embedded all around us. Devices are increasingly more connected, and the availability of data and information is greater now than it has ever been. To grow, compete and respond to customer demands, all companies are becoming digital. In this new reality, enterprise technology choices play an outsized role in how businesses operate, influencing how employees collaborate, how organizations ensure data security and privacy, and how they deliver compelling customer experiences.

This is what we mean when we talk about digital transformation. As our CEO Satya Nadella described it recently, it is how organizations with tech intensity adopt faster, best-in-class technology and simultaneously build their own unique digital capabilities. I see this trend in every industry where customers are choosing Microsoft’s intelligent cloud and intelligent edge to power their transformation.

Over just the past two months, customers as varied as Walmart, Gap Inc., Nielsen, Mastercard, BP, BlackRock, Fruit of the Loom and Brooks Running have shared how technology is reshaping all aspects of our lives — from the way we shop to how we manage money and save for retirement. At the Consumer Electronics Show (CES) earlier this month, Microsoft customers and partners highlighted how the Microsoft cloud, the Internet of Things (IoT) and artificial intelligence (AI) play an ever-expanding role in driving consumer experiences, from LGE’s autonomous vehicle and infotainment systems, to Visteon’s use of Azure to develop autonomous driving development environments, to ZF’s fleet management and predictive maintenance solutions. More recently, at the National Retail Federation (NRF) conference, Microsoft teamed up with retail industry leaders like Starbucks that are reimagining customer and employee experiences with technology.

In fact, there is no shortage of customer examples of tech intensity. They span all industries, including retail, healthcare, automotive manufacturing, maritime research, education and government. Here are just a few of my favorite examples:

Together with Microsoft, Kroger – America’s biggest supermarket chain – opened two pilot stores offering new connected experiences with Microsoft Azure and AI and announced a Retail as a Service (RaaS) solution on Azure. This partnership with Kroger resonates strongly with me because I first met with the company’s CEO in 2013 soon after joining Microsoft. Since then, I have witnessed the Kroger-Microsoft relationship grow and mature beyond measure. The pilot stores feature “digital shelves” which can show ads and change prices on the fly, along with a network of sensors that keep track of products and help speed shoppers through the aisles. Kroger may eventually roll out the Microsoft cloud-powered system in all its 2,780 supermarkets.

In the healthcare industry, earlier this month, we announced a seven-year strategic cloud partnership with Walgreens Boots Alliance (WBA). Through the partnership, WBA will harness the power of Microsoft Azure cloud and AI technology, Microsoft 365, health industry investments and new retail solutions with WBA’s customer reach, convenient locations, outpatient health care services and industry expertise to make health care delivery more personal, affordable and accessible for people around the world.

Walgreens Boots Alliance will harness the power of Microsoft Azure cloud and AI technology and Microsoft 365 to help improve health outcomes and lower overall costs.

Customers tell us that one of the biggest advantages of working with Microsoft is our partner ecosystem. That ecosystem has brought together BevMo!, a wine and liquor store, and Fellow Inc., a Microsoft partner. Today, BevMo! is using Fellow Robots to connect supply chain efficiency with customer delight. Power BI, Microsoft Azure, and AI enable the Fellow Robots to pinpoint product locations using image recognition and to offer customers different types of products by integrating point-of-sale interactions. BevMo! is also using Microsoft’s intelligent cloud solutions to empower its store associates to deliver better customer service.

Fellow Robots from partner Fellow, Inc. are helping BevMo! connect supply chain efficiency and better customer service. The robots are powered by Microsoft Azure, AI and Machine Learning.

In automotive, companies like Toyota are breaking new ground in mixed reality. With its HoloLens solution, Toyota can now project existing 3D CAD data used in the vehicle design process directly onto the vehicle for measurements, optimizing existing processes and minimizing errors. In addition, Toyota is trialing Dynamics 365 Layout to improve machinery layout within its facilities and Dynamics 365 Remote Assist to provide workers with expert support from off-site designers and engineers. Also, Toyota has deployed Surface devices, enabling designers and engineers to fluidly connect in real time as part of a company-wide investment to accelerate innovation through collaboration.

A Toyota engineer uses Microsoft HoloLens to perform a process called “film coating thickness inspection” to manage the thickness of the paint for consistent coating quality on every vehicle.

Digital transformation is also changing the way we learn. For example, in the education space, the Law School Admission Council (LSAC), a non-profit organization devoted to law and education worldwide, announced its selection of the Windows platform on Surface Go devices to digitize the Law School Admission Test (LSAT) for more than 130,000 LSAT test takers each year. In addition to the Digital LSAT, Microsoft is working with LSAC on several initiatives to improve and expand access to legal education.

One of the thousands of Microsoft Surface Go devices running Windows 10 and proprietary software to facilitate the modern and efficient Digital LSAT starting in July 2019.

Beyond manufacturing and retail, organizations are adopting the cloud and AI to reimagine environmental conservation. Fish may not be top of mind when thinking about innovation, but Australia’s Northern Territory is building its own technology to ensure the sustainable management of fisheries resources for future generations. For marine biologists, a seemingly straightforward task like counting fish becomes significantly more challenging or even dangerous when visibility in marine environments is low and when large predators (think: saltwater crocodiles) live in those environments. That is where AI comes in. Scientists use the technology to automatically identify and count fish photographed by underwater cameras. Over time, the AI solution becomes more accurate with each new fish analyzed. Greater availability of this technology may soon help other areas of the world improve their understanding of aquatic resources.

Shane Penny, Fisheries Research Scientist, and his team using baited underwater cameras as part of Australia’s Northern Territory Fisheries artificial intelligence project with Microsoft to fuel insights in marine science.

With almost 13,000 post offices and more than 134,000 employees, Poste Italiane is Italy’s largest distribution network. The organization delivers traditional mail and parcels but also operates at the digital frontier through innovation in financial and insurance services as well as mobile and digital payments solutions. Poste Italiane selected Dynamics 365 for its CRM, creating the largest online deployment in Italy. The firm sees the deployment as a critical part of its strategy to support growth, contain costs and deliver a better, richer customer experience.

Poste Italiane’s selection of Microsoft is part of its digital transformation program, which aims to reshape the retail sales approach and increase cross-selling revenues and the profitability of its subsidiaries BancoPosta and PosteVita.

These examples only scratch the surface of how digital transformation and digital capabilities are bringing together people, data and processes in a way that generates value, competitive advantage and powers innovation across every industry. I am incredibly humbled that our customers and partners have chosen Microsoft to support their digital journey.

Is AI the Answer to Never-Ending Cybersecurity Problems?

Paul German, CEO, Certes Networks, talks about the impact and benefits of Artificial Intelligence (AI)-driven cybersecurity, and how AI adoption is helping organisations stay ahead in the never-ending game that is cybersecurity.

Artificial Intelligence (AI) isn’t going anywhere anytime soon. With 20% of the C-suite already using machine learning and 41% of consumers believing that AI will improve their lives, wide-scale adoption is imminent across every industry - and cybersecurity is no exception. A lot has changed in the cyber landscape over the past few years, and AI is being pushed to the forefront of conversations. It’s becoming more than a buzzword and is delivering true business value. Its ability to aid the cybersecurity industry is increasingly being debated; some argue it has the potential to revolutionise cybersecurity, whilst others insist that the drawbacks outweigh the benefits.

With several issues facing the current cybersecurity landscape - a disappearing IT perimeter, a widening skills gap, increasingly sophisticated cyber attacks, and data breaches continuing to hit the headlines - a remedy is needed. The nature of stolen data has also changed - CVV and passport numbers are now being compromised - so, coupled with regulations such as GDPR, organisations are facing a minefield.

Research shows that 60% think AI has the ability to find attacks before they do damage. But is AI the answer to the never-ending cybersecurity problems facing organisations today?

The Cost-Benefit Conundrum

On one hand, AI could provide a substantial benefit to the overall framework of cybersecurity defences. On the other, the reality that it could equally be a danger under certain conditions cannot be ignored. Hackers are fast gaining the ability to foil security algorithms by targeting the very data AI technology trains on. Inevitably, this could have devastating consequences.

AI can be deployed by both sides - by the attackers and the defenders. It does have a number of benefits, such as the ability to learn and adapt to its environment and the threat landscape. If deployed correctly, AI could consistently collect intelligence about new threats, attempted attacks, successful data breaches, and blocked or failed attacks, and learn from all of it, fulfilling its purpose of defending the digital assets of an organisation. By immediately reacting to attempted breaches and mitigating and addressing the threat, cybersecurity could truly reach the next level, as the technology would be constantly learning to detect and protect.

Additionally, AI technology can pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cybersecurity or IT team could; AI’s ability to understand ‘normal’ behaviour allows it to bring attention to potentially malicious behaviour from suspicious or abnormal user or device activity.
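
As a minimal sketch of that idea (synthetic numbers, not a product implementation), an off-the-shelf anomaly detector can learn what ‘normal’ looks like and flag departures from it:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Train on synthetic "normal" activity: login hour and MB transferred.
    rng = np.random.default_rng(42)
    normal = np.column_stack([rng.normal(9, 1, 500),      # logins around 9am
                              rng.normal(200, 50, 500)])  # ~200 MB moved
    model = IsolationForest(random_state=0).fit(normal)

    events = np.array([[10, 220],     # ordinary workday activity
                       [3, 9000]])    # 3am, 9 GB out: suspicious
    print(model.predict(events))      # 1 = normal, -1 = anomaly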

As with most new technologies, for each positive there is an equal negative. AI could be configured by hackers to learn the specific defences and tools that it runs up against, which would give way to larger and more successful data breaches. Viruses could be created to host this type of AI, producing more malware that can bypass even more advanced security implementations. This approach would likely be favoured by hackers, as they don’t even need to tamper with the data itself - they could work out the features of the code a model is using and mirror it with their own. In this particular case, the tables would be turned, and organisations could find themselves in sticky situations if they can’t keep up with hackers.

Organisations must be wary that they don’t adopt AI technology in cybersecurity ‘just because.’ As attack surfaces expand and hackers become more sophisticated, cybersecurity strategies must evolve to keep up. AI contributes to this expanding attack surface so when it comes down to deployment, the benefits must be weighed up against the potential negatives. A robust, defence-in-depth Information Assurance strategy is still needed to form the basis of any defence strategy to keep data safe.



Paul German, CEO, Certes Networks

AI & Your Family: The Wows and Potential Risks

Am I the only one? When I hear or see the word Artificial Intelligence (AI), my mind instantly defaults to images from sci-fi movies I’ve seen like I, Robot, Matrix, and Ex Machina. There’s always been a futuristic element — and self-imposed distance — between AI and myself.

But AI is anything but futuristic or distant. AI is here, and it’s now. And, we’re using it in ways we may not even realize.

AI has been woven throughout our lives for years in various expressions of technology. AI is in our homes, workplaces, and our hands every day via our smartphones.

Just a few everyday examples of AI:

  • Cell phones with built-in smart assistants
  • Toys that listen and respond to children
  • Social networks that determine what content you see
  • Social networking apps with fun filters
  • GPS apps that help you get where you need to go
  • Movie apps that predict what show you’d enjoy next
  • Music apps that curate playlists that echo your taste
  • Video games that deploy bots to play against you
  • Advertisers who follow you online with targeted ads
  • Refrigerators that alert you when food is about to expire
  • Home assistants that carry out voice commands
  • Flights you take that operate via an AI autopilot

The Technology

While AI sounds a little intimidating, it’s not when you break it down. AI is technology that can be programmed to accomplish a specific set of goals without assistance. In short, it’s a computer’s ability to be predictive — to process data, evaluate it, and take action.

AI is being implemented in education, business, manufacturing, retail, transportation, and just about any other sector of industry and culture you can imagine. It’s the smarter, faster, more profitable way to accomplish manual tasks.

And there’s tons of AI-generated good going on. Instagram — the #2 most popular social network — is now using AI technology to detect and combat cyberbullying in both comments and photos.

No doubt, AI is having a significant impact on everyday life and is positioned to transform the future.

Still, there are concerns. The self-driving cars. The robots that malfunction. The potential jobs lost to AI robots.

So, as quickly as this popular new technology is being applied, now is a great time to talk with your family about both the exciting potential of AI and the risks that may come with it.

Talking points for families

Fake videos, images. AI is making it easier for people to face swap within images and videos. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. This has led to the rise in “deep fake” videos that appear remarkably realistic (many of which go viral). Tip: Talk to your family about the power of AI technology and the responsibility and critical thinking they must exercise as they consume and share online content.

Privacy breaches. The 2018 Cambridge Analytica/Facebook scandal, in which AI technology was allegedly used unethically to collect Facebook user data, reminded us of those out to gather our private (and public) information for financial or political gain. Tip: Discuss locking down privacy settings on social networks and encourage your kids to be hyper-mindful about the information they share in the public feed. That information includes likes and comments on other content — all of which AI technology can piece together into a broader digital picture for misuse.

Cybercrime. As outlined in McAfee’s 2019 Threats Predictions Report, AI technology will likely make it easier for hackers to bypass network security measures undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activity. Additionally, AI-generated phishing emails are scamming people into handing over sensitive data. Tip: Bogus emails can be highly personalized and can trick even intelligent users into clicking malicious links. Discuss the sophistication of AI-related scams and warn your family to think about every click — even those that appear to come from friends.

IoT security. With homes becoming “smarter” and equipped with AI-powered IoT products, the opportunity for hackers to get into these devices and steal sensitive data is growing. According to McAfee’s Threats Predictions Report, voice-activated assistants are especially vulnerable as a point of entry for hackers. Also at risk, say security experts, are routers, smartphones, and tablets. Tip: Be sure to keep all devices updated. Secure all of your connected devices and your home internet at its source — the network. Consider replacing the default router supplied by your ISP (Internet Service Provider), since these routers are often less secure. And be sure to change default passwords and protect both your primary network and your guest network with strong passwords.

The post AI & Your Family: The Wows and Potential Risks appeared first on McAfee Blogs.

Ethics In Artificial Intelligence: Introducing The SHERPA Consortium

In May of this year, Horizon 2020 SHERPA project activities kicked off with a meeting in Brussels. F-Secure is a partner in the SHERPA consortium – a group consisting of 11 members from six European countries – whose mission is to understand how the combination of artificial intelligence and big data analytics will impact ethics and human rights issues today, and in the future (https://www.project-sherpa.eu/).

As part of this project, one of F-Secure’s first tasks will be to study security issues, dangers, and implications of the use of data analytics and artificial intelligence, including applications in the cyber security domain. This research project will examine:

  • ways in which machine learning systems are commonly mis-implemented (and recommendations on how to prevent this from happening)
  • ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks; a minimal sketch of one such attack follows this list)
  • how artificial intelligence and data analysis methodologies might be used for malicious purposes
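To make the second bullet concrete, here is a minimal sketch of an adversarial evasion attack in the spirit of the fast gradient sign method (FGSM). Everything in it (the linear "detector", the weights, the attack budget) is an assumed toy setup for illustration, not output from the SHERPA work itself.

```python
# Minimal adversarial-evasion sketch (illustrative assumptions throughout).
# For a linear model the gradient of the logit with respect to the input is
# just the weight vector w, so an FGSM-style step of epsilon * sign(w) moves
# the score as fast as possible under a max-per-feature (L-infinity) budget.
import numpy as np

rng = np.random.default_rng(1)

# Assume a trained logistic-regression "detector": P(malicious) = sigmoid(w.x + b).
w = rng.normal(size=20)
b = 0.0

def p_malicious(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

# A sample the detector currently flags as malicious.
x = rng.normal(size=20)
if w @ x + b < 0:            # make sure we start on the "malicious" side
    x = -x
print(f"before: P(malicious) = {p_malicious(x):.3f}")

# Smallest sign-aligned step that crosses the decision boundary (with margin).
# A real attacker would fix a budget in advance; we solve for it to keep
# the demo deterministic.
logit = w @ x + b
epsilon = (abs(logit) + 1.0) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)   # step against the gradient to lower the score
print(f"after:  P(malicious) = {p_malicious(x_adv):.3f}")
print(f"L-infinity change per feature: {epsilon:.3f}")
```

The same sign-of-the-gradient trick scales to deep models, where the gradient is obtained by backpropagation rather than being the weight vector itself.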

We’ve already done a fair bit of this research*, so expect to see more articles on this topic in the near future!

 

As strange as it sounds, I sometimes find PowerPoint a good tool for arranging my thoughts, especially before writing a long document. As an added bonus, I have a presentation ready to go, should I need it.

 

 

Some members of the SHERPA project recently attended WebSummit in Lisbon – a four-day event with over 70,000 attendees and over 70 dedicated discussions and panels. Topics related to artificial intelligence were prevalent this year, ranging from tech presentations on how to develop better AI, to existential debates on the implications of AI for the environment and humanity. The event attracted a wide range of participants, including many technologists, politicians, and NGOs.

During WebSummit, SHERPA members participated in the Social Innovation Village, where they joined forces with projects and initiatives such as Next Generation Internet, CAPPSI, MAZI, DemocratieOuverte, grassroots radio, and streetwize to push for “more social good in technology and more technology in social good”. Here, SHERPA researchers showcased the work they’ve already done to deepen the debate on the implications of AI in policing, warfare, education, health and social care, and transport.

The presentations attracted the keen interest of representatives from more than 100 large and small organizations and networks in Europe and further afield, including the likes of Founder’s Institute, Google, and Amazon, and also led to a public commitment by Carlos Moedas, the European Commissioner for Research, Science and Innovation. You can listen to the highlights of the conversation here.

To get a preview of SHERPA’s scenario work and take part in the debate, click here.

 


* If you’re wondering why I haven’t blogged in a long while, it’s because I’ve been hiding away, working on a bunch of AI-related research projects (such as this). Down the road, I’m hoping to post more articles and code – if and when I have results to share 😉

AI and the future of cybersecurity work

In February 2014, journalist Martin Wolf wrote a piece for the Financial Times[1] titled “Enslave the robots and free the poor.” He began the piece with the following quote:

“In 1955, Walter Reuther, head of the US car workers’ union, told of a visit to a new automatically operated Ford plant. Pointing to all the robots, his host asked: How are you going to collect union dues from those guys? Mr. Reuther replied: And how are you going to get them to buy Fords?”

Most attacks against energy and utilities occur in the enterprise IT network

The United States has not been hit by a paralyzing cyberattack on critical infrastructure like the one that sidelined Ukraine in 2015. That attack took down part of Ukraine's power grid, leaving more than 700,000 people in the dark.

But the enterprise IT networks inside energy and utilities companies have been infiltrated for years. Based on an analysis by the U.S. Department of Homeland Security (DHS) and the FBI, these networks have been compromised since at least March 2016 by nation-state actors who perform reconnaissance, looking for industrial control system (ICS) designs and blueprints to steal.

Near and long-term directions for adversarial AI in cybersecurity

The frenetic pace at which artificial intelligence (AI) has advanced in the past few years has begun to have transformative effects across a wide variety of fields. Coupled with an increasingly (inter)-connected world in which cyberattacks occur with alarming frequency and scale, it is no wonder that the field of cybersecurity has now turned its eye to AI and machine learning (ML) in order to detect and defend against adversaries.

The use of AI in cybersecurity not only expands the scope of what a single security expert is able to monitor, but importantly, it also enables the discovery of attacks that would have otherwise been undetectable by a human. Just as it was nearly inevitable that AI would be used for defensive purposes, it is undeniable that AI systems will soon be put to use for attack purposes.

Choosing an optimal algorithm for AI in cybersecurity

In the last blog post, we alluded to the No-Free-Lunch (NFL) theorems for search and optimization. While NFL theorems are criminally misunderstood and misrepresented in the service of crude generalizations intended to make a point, I intend to deploy a crude NFL generalization to make just such a point.

You see, NFL theorems (roughly) state that given a universe of problem sets where an algorithm’s goal is to learn a function that maps a set of input data X to a set of target labels Y, for any subset of problems where algorithm A outperforms algorithm B, there will be a subset of problems where B outperforms A. In fact, averaging their results over the space of all possible problems, the performance of algorithms A and B will be the same.
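For the formally inclined, the optimization flavour of the theorem, due to Wolpert and Macready (1997), is usually written roughly as follows (a paraphrase of their Theorem 1, not a new result):

```latex
% No-Free-Lunch for search/optimization (Wolpert & Macready, 1997), paraphrased.
% d_m^y is the sequence of cost values observed after m evaluations of f;
% A and B are any two (non-repeating) search algorithms.
\sum_{f} P\left(d_m^y \,\middle|\, f, m, A\right)
  \;=\;
\sum_{f} P\left(d_m^y \,\middle|\, f, m, B\right)
```

Summed over all possible objective functions f, the probability of observing any particular sequence of results does not depend on which algorithm produced it, which is exactly the "averaged over all problems, A and B tie" claim above.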

With some hand-waving, we can construct an NFL theorem for the cybersecurity domain: over the set of all possible attack vectors that could be employed by a hacker, no single detection algorithm can outperform all others across the full spectrum of attacks.
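To see the averaging argument work in miniature, here is a self-contained toy simulation (an illustration under assumed conditions, not a claim about any real detector). It enumerates every possible labeling of a tiny input space and shows that two opposite learning rules achieve exactly the same average accuracy on points outside their training data.

```python
# Toy No-Free-Lunch demonstration (illustrative assumptions only):
# enumerate every possible target function on a tiny input space and show
# that two opposite learning rules have identical average accuracy on
# inputs they were not trained on.
from itertools import product

inputs = list(range(6))          # a six-point input space
train_points = inputs[:3]        # the learner sees labels for these
test_points = inputs[3:]         # ...and is scored on these

def majority_rule(train_labels):
    """Predict the label seen most often during training (ties go to 1)."""
    return 1 if sum(train_labels) * 2 >= len(train_labels) else 0

def minority_rule(train_labels):
    """Predict the opposite of whatever the majority rule would say."""
    return 1 - majority_rule(train_labels)

def average_test_accuracy(rule):
    total, count = 0.0, 0
    # Enumerate all 2^6 = 64 possible target functions on the input space.
    for labeling in product([0, 1], repeat=len(inputs)):
        train_labels = [labeling[i] for i in train_points]
        prediction = rule(train_labels)   # one guess, applied to every test point
        hits = sum(1 for i in test_points if labeling[i] == prediction)
        total += hits / len(test_points)
        count += 1
    return total / count

print(f"majority rule average accuracy: {average_test_accuracy(majority_rule):.3f}")
print(f"minority rule average accuracy: {average_test_accuracy(minority_rule):.3f}")
# Both print 0.500: averaged over the space of all possible problems,
# neither rule has an edge; this is the NFL result in miniature.
```

The practical upshot for security teams is the one the author draws: gains against one class of attacks are paid for somewhere else, so layered, complementary detection beats betting everything on a single algorithm.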