Category Archives: Technology

Rich Roth on LinkedIn: “Kevin Murray posted this, and like most of what he does, it hits a very critical area of espionage law. One point I would add is that everyone is concentrating on data and cyber storage, but the same is true of paper files with employee information as well as business secrets. The business owner who collects them has a duty to protect them, and this has been true for years. Most human resources staff understand that paper files carry the same legal requirement to protect the data, like Social Security numbers or health files, and this is true of business secrets as well. If you do not make the effort to protect business secrets, they have at times been considered not secret anymore. The same is true of electronic data: if it is open on your website for all to see, you will be hard put to claim anyone has stolen it. In doing counter-espionage surveys, one of the biggest culprits in letting secrets out tends to be the marketing staff. They often want to get the new product out there for the world to see, and give too much away, or too soon. Counter-espionage sweeps are helpful, and at times needed, but counter-espionage surveys can be what is really needed. Trash dumpsters are still one of the biggest and richest hauls for espionage teams.”

Tweeted by @RichRoth1

McAfee report suggests hackers hitting global government and defense firms with cyber espionage campaign – Hackerpost - Hackers invaded many organizations around the globe with cutting edge malicious software that extracted data from their frameworks, as per McAfee. Research released by the cybersecurity firm on Wedne…

Tweeted by @HackerPost2

McAfee: a coordinated cyber espionage campaign focused on defense organizations sent malware via targeted phishing messages to at least 87 firms in 24 countries - Issie Lapowsky / @issielapowsky: I transcribed this whole exchange between Rep. Cicilline and Sundar Pichai on China, because Pichai's evasive responses are so, so telling. #googlehearing http://twit…

Tweeted by @netmobz

What is cyber warfare? - Cyber warfare is a rather scary-sounding title that refers to the use of cyber technologies to launch a virtual war against countries, governments and citizens. Although it’s really just a name rathe…

Tweeted by @cloudsparkuk

New Trends Take Off in the Cybermarket - IRAN, desperate to boost revenue following the return of sanctions imposed by the United States, has encouraged its hackers to pursue ransomware attacks on individuals and organizations, according to…

Tweeted by @TrajectoryCaptl

What is the difference between a VPN and a proxy?

As you dig into the networking settings on your computer or smartphone, you’ll often see options labelled “VPN” or “Proxy”. Although they do similar jobs, they are also very different. This article will help you understand what the difference is, and when you might want to use one.

What is a proxy?

Normally when you browse the web, your computer connects to a website directly and begins downloading the pages for you to read. This process is simple and direct.

When you use a proxy server, your computer sends all web traffic to the proxy first. The proxy forwards your request to the target website, downloads the relevant information, and then passes it back to you.
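This forwarding step can be sketched with Python’s standard library. The proxy address below is a made-up placeholder, not a real server:

```python
import urllib.request

# Hypothetical proxy address, for illustration only.
PROXY = "203.0.113.10:8080"

# A ProxyHandler tells urllib to send HTTP and HTTPS requests
# to the proxy, which fetches the page on our behalf.
proxy_handler = urllib.request.ProxyHandler({
    "http": f"http://{PROXY}",
    "https": f"http://{PROXY}",
})

# Build an opener that routes its requests through the proxy;
# opener.open("https://example.com") would now go via the proxy.
opener = urllib.request.build_opener(proxy_handler)

print(proxy_handler.proxies["https"])
```

Because every request made through `opener` is addressed to the proxy first, the destination website only ever sees the proxy’s IP address.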
Why would you do this? There are a couple of reasons:

  • You want to browse a website anonymously – all traffic appears to come from the proxy server, not your computer.
  • You need to bypass a content restriction. Famously, your UK Netflix subscription won’t work in the USA. But if you connect to a UK proxy server it looks like you are watching TV from the UK and everything works as expected.

Although they work very well, there’s also a few problems with proxies:

  • All of the web traffic that passes through a proxy can be seen by the server owner. Do you know the proxy owner? Can they be trusted?
  • Web traffic between your computer and the proxy, and between the proxy and the website, is unencrypted, so a skilled hacker can intercept sensitive data in transit and steal it.

What is a VPN?

A VPN is quite similar to a proxy. Your computer is configured to connect to another server, and your web traffic may be routed through that server. But where a proxy server can only redirect web requests, a VPN connection is capable of routing and anonymising all of your network traffic.

But there is one significant advantage of the VPN – all traffic is encrypted. This means that hackers cannot intercept data between your computer and the VPN server, so your sensitive personal information cannot be compromised.
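To see why encryption matters, here is a toy sketch in Python. Real VPNs use vetted protocols such as TLS, IPsec or WireGuard rather than this simple XOR scheme; the point is only that an eavesdropper between you and the server sees scrambled bytes:

```python
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with a repeating key.
    Applying it twice with the same key restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)          # known only to you and the VPN server
request = b"GET /account/settings HTTP/1.1"

ciphertext = xor_stream(request, key)  # what an interceptor would capture
print(ciphertext != request)           # the bytes on the wire are scrambled
print(xor_stream(ciphertext, key) == request)  # the VPN server recovers them
```

Without the key, the intercepted traffic is useless; that is the property a real VPN tunnel provides, with much stronger cryptography than this demonstration.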

VPNs are the most secure choice

By encrypting and routing all of your network traffic, the VPN has a distinct advantage over a proxy server. And beyond simply anonymising your web activities, a VPN can offer additional functionality too.

Take the Panda Dome VPN service. Not only does it anonymise your internet traffic and help you circumvent geographic filters, but traffic is also carefully inspected and filtered. Our VPN servers check every request and block anything that is known to be dangerous, like websites that host malware.
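A filtering step like this can be sketched as a simple domain blocklist check. The domains and function below are invented for illustration; this is not Panda’s actual implementation:

```python
from urllib.parse import urlparse

# Made-up blocklist entries; a real service maintains a constantly
# updated database of known-malicious hosts.
BLOCKED_DOMAINS = {"malware.example", "phishing.example"}

def is_allowed(url: str) -> bool:
    """Allow a request unless its hostname is on the blocklist."""
    host = urlparse(url).hostname or ""
    return host not in BLOCKED_DOMAINS

print(is_allowed("https://example.com/news"))        # a normal site passes
print(is_allowed("http://malware.example/payload"))  # a blocked host is refused
```

Because the check happens on the VPN server, every device routing traffic through it gets the protection without installing anything extra.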

Routing your web traffic through an advanced VPN helps you avoid malware infections, phishing scams and fake websites. And because Panda’s servers are constantly updated, you are protected around the clock from sophisticated cybercrime attacks.

You can get started with the Panda VPN now – for free – here. And for more help and advice about staying safe online, take a look at the practical tips in the Panda Security blog.

Download Panda FREE VPN

The post What is the difference between a VPN and a proxy? appeared first on Panda Security Mediacenter.

SN 693: Internal Bug Discovery

  • Australia's recently passed anti-encryption legislation
  • Details of a couple more mega-breaches including a bit of Marriott follow-up
  • A welcome call for legislation from Microsoft
  • A new twist on online advertising click fraud
  • The DHS is interested in deanonymizing cryptocurrencies beyond Bitcoin
  • The changing landscape of TOR funding
  • An entirely foreseeable disaster with a new Internet IoT-oriented protocol
  • Google finds bugs in Google+ and acts responsibly -- again -- what that suggests for everyone else

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Google CEO Hammered by Members of Congress on China Censorship Plan

Google CEO Sundar Pichai came under fire from lawmakers on Tuesday over the company’s secretive plan to launch a censored search engine in China.

During a hearing held by the House Judiciary Committee, Pichai faced sustained questions over the China plan, known as Dragonfly, which would blacklist broad categories of information about democracy, human rights, and peaceful protest.

The hearing began with an opening statement from Rep. Kevin McCarthy, R-Calif., who said launching a censored search engine in China would “strengthen China’s system of surveillance and repression.” McCarthy questioned whether it was the role of American companies to be “instruments of freedom or instruments of control.”

Pichai read prepared remarks, stating “even as we expand into new markets, we never forget our American roots.” He added: “I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests.”

The lawmakers questioned Pichai on a broad variety of subjects. Several Republicans on the committee complained that Google displayed too many negative stories about them in its search results, and claimed that there was “bias against conservatives” on the platform. They also asked about recent revelations of data leaks affecting millions of Google users, Android location tracking, and Google’s work to combat white supremacist content on YouTube.

It was not until Pichai began to face questions on China that he began to look at times uncomfortable.

Rep. David Cicilline, D-R.I., told Pichai that the Dragonfly plan seemed to be “completely inconsistent” with Google’s recently launched artificial intelligence principles, which state that the company will not “design or deploy” technologies whose purpose “contravenes widely accepted principles of international law and human rights.”

“It’s hard to imagine you could operate in the Chinese market under the current government framework and maintain a commitment to universal values, such as freedom of expression and personal privacy,” Cicilline said.

Pichai repeatedly insisted that Dragonfly was an “internal effort” and that Google currently had “no plans to launch a search service in China.” Asked to confirm that the company would not launch “a tool for surveillance and censorship in China,” Pichai declined to answer, instead saying that he was committed to “providing users with information, and so we always — we think it’s ideal to explore possibilities. … We’ll be very thoughtful, and we will engage widely as we make progress.”

Pichai’s claim that the company does not have a plan to launch the search engine in China contradicted a leaked transcript from a private meeting inside the company. In the transcript, the company’s search chief Ben Gomes discussed an aim to roll out the service between January and April 2019. For Pichai’s statement to Congress to be truthful, there is only one possibility: that the company has put the brakes on Dragonfly since The Intercept first exposed the project in August.

During a separate exchange, Rep. Keith Rothfus, R-Pa., probed Pichai further on China. Rothfus asked Pichai how many months the company had been working to develop the censored search engine and how many employees had worked on it. Pichai seemed caught off guard and stumbled with his response. “We have had the project underway for a while,” he said, admitting that “at one point, we had over 100 people on it.” (According to sources who worked on Dragonfly, there have been closer to 300 people developing the plan.)

Rep. Tom Marino, R-Pa., quizzed Pichai on what user information the company would share with Chinese authorities. Pichai did not directly answer, stating, “We would look at what the conditions are to operate … [and we would] explore a wide range of possibilities.” Pichai said that he would be “transparent” with lawmakers on the company’s China plan going forward. He did not acknowledge that Dragonfly would still be secret — and he would not have been discussing it in Congress — had it not been for the whistleblowers inside the company who decided to leak information about the project.

At one point during the hearing, the proceedings were interrupted by a protester who entered the room carrying a placard that showed the Google logo altered to look like a China flag. The man was swiftly removed by Capitol Police. A handful of Tibetan and Uighur activists gathered in the hall outside the hearing, where they held a banner that stated “stop Google censorship.”

“We are protesting Google CEO Sundar Pichai to express our grave concern over Google’s plan to launch Project Dragonfly, a censored search app in China which will help Chinese government’s brutal human right abuses,” said Dorjee Tseten, executive director of Students for a Free Tibet. “We strongly urge Google to immediately drop Project Dragonfly. With this project, Google is serving to legitimize the repressive regime of the Chinese government and authorities to engage in censorship and surveillance.”

Earlier on Tuesday, more than 60 leading human rights groups sent a letter to Pichai calling on him to cancel the Dragonfly project. If the plan proceeds, the groups wrote, “there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations.”

The post Google CEO Hammered by Members of Congress on China Censorship Plan appeared first on The Intercept.

Rights Groups Turn Up Pressure on Google Over China Censorship Ahead of Congressional Hearing

Google is facing a renewed wave of criticism from human rights groups over its controversial plan to launch a censored search engine in China.

A coalition of more than 60 leading groups from countries across the world have joined forces to blast the internet giant for failing to address concerns about the secretive China project, known as Dragonfly. They come from countries including China, the United States, the United Kingdom, Argentina, Bolivia, Chile, France, Kazakhstan, Mexico, Norway, Pakistan, Palestine, Romania, Syria, Tibet, and Vietnam.

A prototype for the censored search engine was designed to blacklist broad categories of information about human rights, democracy, and peaceful protest. It would link Chinese users’ searches to their personal cellphone number and store people’s search records inside the data centers of a Chinese company in Beijing or Shanghai, which would be accessible to China’s authoritarian Communist Party government.

If the plan proceeds, “there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations,” the human rights groups wrote in a letter that will be sent to Google’s leadership on Tuesday.

The letter highlights mounting anger and frustration within the human rights community that Google has rebuffed concerns about Dragonfly, concerns that have been widely raised both inside and outside the company since The Intercept first revealed the plan in August. The groups say in their 900-word missive that Google’s China strategy is “reckless,” piling pressure on CEO Sundar Pichai, who is due to appear Tuesday before the House Judiciary Committee, where he will likely face questions on Dragonfly.

The groups behind the letter include Amnesty International, the Electronic Frontier Foundation, Access Now, Human Rights Watch, Reporters Without Borders, the Center for Democracy and Technology, Human Rights in China, the International Campaign for Tibet, and the World Uyghur Congress. They have been joined in their campaign by several high-profile individual signatories, such as former National Security Agency contractor Edward Snowden and Google’s former head of free expression in Asia, Lokman Tsui.

In late August, some of the same human rights groups had contacted Google demanding answers about the censored search plan. Google’s policy chief Kent Walker responded to them in October, the groups revealed on Monday. In a two-page reply, Walker appeared to make the case for launching the search engine, saying that “providing access to information to people around the world is central to our mission.”

Walker did not address specific human rights questions on Dragonfly and instead claimed that the company is “still not close to launching such a product and whether we would or could do so remains unclear,” contradicting a leaked transcript from Google search chief Ben Gomes, who stated that the company aimed to launch the search engine between January and April 2019 and instructed employees to have it ready to be “brought off the shelf and quickly deployed.”

Walker agreed in his letter that Google would “confer” with human rights groups ahead of launching any search product in China, and said that the company would “carefully consider” feedback received. “While recognizing our obligations under the law in each jurisdiction in which we operate, we also remain committed to promoting access to information as well as protecting the rights to freedom of expression and privacy for our users globally,” Walker wrote.

The human rights groups were left unsatisfied with Walker’s comments. They wrote in a new letter of reply, to be sent Tuesday, that he “failed to address the serious concerns” they had raised. “Instead of addressing the substantive issues,” they wrote, Walker’s response “only heightens our fear that the company may knowingly compromise its commitments to human rights and freedom of expression, in exchange for access to the Chinese search market.”

The groups added: “We welcome that Google has confirmed the company ‘takes seriously’ its responsibility to respect human rights. However, the company has so far failed to explain how it reconciles that responsibility with the company’s decision to design a product purpose-built to undermine the rights to freedom of expression and privacy.”

Separately, former Google research scientist Jack Poulson, who quit the company in protest over Dragonfly, has teamed up with Chinese, Tibetan, and Uighur rights groups to launch an anti-Dragonfly campaign. In a press conference on Monday, Poulson said it was “time for Google to uphold its own principles and publicly end this regressive experiment.”

Teng Biao, a Chinese human rights lawyer who said he had been previously detained and tortured by the country’s authorities for his work, recalled how he had celebrated in 2010 when Google decided to pull its search services out of China, with the company citing concerns about the Communist Party’s censorship and targeting of activists. Teng said he had visited Google headquarters in Beijing and laid flowers outside the company’s doors to thank the internet giant for its decision. He was dismayed by the company’s apparent reversal on its anti-censorship stance, he said, and called on “every one of us to stop Google from being an accomplice in China’s digital totalitarianism.”

Lhadon Tethong, director of the Tibet Action Institute, said there is currently a “crisis of repression unfolding across China and territories it controls.” Considering this, “it is shocking to know that Google is planning to return to China and has been building a tool that will help the Chinese authorities engage in censorship and surveillance,” she said. “Google should be using its incredible wealth, talent, and resources to work with us to find solutions to lift people up and help ease their suffering, not assisting the Chinese government to keep people in chains.”

Google did not respond to a request for comment.

The post Rights Groups Turn Up Pressure on Google Over China Censorship Ahead of Congressional Hearing appeared first on The Intercept.

Top Penetration Testing Certifications - The OSCE exam is a doozy. It takes place over 48 hours and is an advanced certificate offered by Offensive Security. This test will really challenge you and prove that you are ready to do good work i…

Tweeted by @IvoDQtNRAOzr19l

5G and cybersecurity

Keeping you safe in an increasingly connected world

If you’ve upgraded your smartphone in the last few years, there’s a very good chance your handset supports 4G mobile networks. 4G, short for ‘fourth generation’, offers super-fast data download speeds allowing you to stream more video, share larger high-resolution pictures and to browse the web more quickly.

But even these speeds are not enough to keep pace with demand. Everything is getting bigger – 4K video file sizes are enormous, and augmented reality animations are bandwidth hungry.

There is a second issue that needs to be addressed too – the sheer number of wireless devices connecting to mobile networks. Smart devices are an increasingly important part of modern life, at home and in the workplace.

We need more bandwidth and faster connectivity to deal with the changing demands on mobile networks. We are already outgrowing the possibilities of 4G networking.

Welcome to 5G networking

The good news is that the fifth generation (5G) of mobile network technologies has been designed and is undergoing testing. Mobile operators across the world are in the advanced stages of planning how these systems will be rolled out to consumers and businesses in the UK and beyond.

The introduction of 5G technologies will allow us to do more than ever before with our devices – but it also highlights a serious challenge. More and more devices are being connected to mobile networks, and each represents a potential security risk.

Smart sensors used in factories allow manufacturers to monitor assembly lines in real time for instance. But if poorly configured, or insufficiently secured, these devices could be used to hack into the company network to steal other data.

Other devices, like self-driving cars, are completely reliant on mobile network connections to work properly. These vehicles are permanently connected, uploading and downloading data to the cloud to make split-second decisions. If these decisions are interrupted – by hackers breaking through the network security, for instance – the car could be involved in an accident, perhaps even killing people in the process.

More security required

With more devices connected to mobile networks, the need for security increases. Every single device needs to be protected against cyberattack, which means that security systems need to be present everywhere.

To cope with the increasing number of devices, security systems will also have to get smarter. Artificial intelligence will become more important, monitoring network activity to identify – and block – suspicious behaviour automatically. This approach is quicker, and more effective, than traditional IT security provisions, especially as security software does not necessarily have to be installed directly on each device.
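As a rough illustration of behaviour-based monitoring, the sketch below flags devices whose traffic is far above the fleet average. The device names and numbers are invented, and real systems use far richer models than a standard-deviation threshold:

```python
import statistics

# Hypothetical per-device traffic totals (MB in the last hour).
traffic_mb = {
    "sensor-01": 12, "sensor-02": 15, "sensor-03": 11,
    "sensor-04": 14, "sensor-05": 13, "camera-07": 480,
}

mean = statistics.mean(traffic_mb.values())
stdev = statistics.pstdev(traffic_mb.values())

def suspicious(device: str, threshold: float = 2.0) -> bool:
    """Flag devices more than `threshold` standard deviations above the mean."""
    return traffic_mb[device] > mean + threshold * stdev

flagged = [name for name in traffic_mb if suspicious(name)]
print(flagged)  # the compromised-looking camera stands out
```

The appeal of this network-side approach is that the analysis runs centrally: no security software needs to be installed on the sensors themselves.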

Play your part

Businesses will have to take care of their own smart sensors, but consumers need to get involved. The increased number of devices and network traffic presents a risk to you too – so you must ensure your smartphone is properly protected.

Even if you don’t have a 5G connection, you can take steps to protect yourself now. Download and install a free copy of Panda Security Antivirus for Android today and you’ll be fully prepared to overcome the future challenges of next generation mobile networks.

The post 5G and cybersecurity appeared first on Panda Security Mediacenter.

17 Technology, IT and Engineering Scholarships for Women in 2019-2020

Technology, IT and engineering are male-dominated industries. However, multiple companies and organizations are aiming to introduce more diversity by providing the education and training women need to enter these fields. Scholarships and grants can open doors typically closed to many women, especially with the rising costs of BA, Masters and PhD courses in the UK […]

The post 17 Technology, IT and Engineering Scholarships for Women in 2019-2020 appeared first on The State of Security.

Best 29 Tech Companies To Work For In The U.S. In 2019

Top 29 U.S. tech companies to work for in 2019, according to Glassdoor

Glassdoor, the renowned career job site, has released its annual report of the 100 best places to work in the U.S. in 2019 under the name “Employees’ Choice Awards.” The compiled list of 100 companies includes 29 that belong to the technology sector.

Organizations were ranked based on employee ratings across several factors, such as overall satisfaction, career opportunities, corporate culture and values, transparency, compensation and benefits, work-life balance, and business outlook.

Based on employees’ reviews, companies received overall ratings on a scale of one to five, with five signifying the most satisfied employees.

It has been a rather tough year for tech big-wigs like Facebook, Google, Tesla, and Salesforce, which were marred by scandals and controversies. This hampered their overall rankings: Facebook fell from first position last year to seventh overall, and the search giant Google dropped from second to eighth.

While you can check Glassdoor’s entire list here, let’s have a look at the top 29 tech companies to work for in 2019:

Zoom Video Communications

Overall ranking: #2

Company rating: 4.5

About: Zoom Video Communications provides remote conferencing services using cloud computing.

An employee says: “Honestly, this is the best company I have ever worked for, hands down. Our CEO practices what he preaches with Delivering Happiness. Management is great, no micromanaging, and always willing to help. The sales floor is competitive, but not toxic at all. Everyone is more than happy to help each other. We realize we are all a part of something special right now, and that this should be a team effort.” — Zoom Account Executive (Santa Barbara, California).

Procore Technologies

Overall ranking: #4

Company rating: 4.5

About: Procore Technologies is a construction project management software company.

An employee says: “From the day I started working at Procore, I’ve felt welcome, engaged, and energized. The work is challenging and fast-paced, but the people and culture make coming to the office something I look forward to every day. Procore truly lives its values of Ownership, Optimism, and Openness.” — Anonymous Procore Employee (Carpinteria, California).


LinkedIn

Overall ranking: #6

Company rating: 4.5

About: LinkedIn Corporation is a social networking website specifically designed for people in professional jobs and for recruiting professionals.

An employee says: “Linkedin has a very strong emphasis on employee wellness and goes the extra mile to care how employees feel and does everything to make them more productive in their daily work. Linkedin offers best quality food, continuous wellness programs, clear paths to advance careers and everything else necessary for happiness.” — Anonymous LinkedIn Employee (Mountain View, California).


Facebook

Overall ranking: #7

Company rating: 4.5

About: Facebook is the world’s biggest online social networking service and website.

What employees say: “Learning from great software engineers and/or researchers in AI. Too many political debates inside the company.” — Facebook Research Scientist (New York, New York).


Google

Overall ranking: #8

Company rating: 4.4

About: Google Inc. is an American multinational corporation that is best known for running one of the largest search engines on the World Wide Web (WWW) and also creates cloud computing, hardware, and software products, and more.

An employee says: “If you’re a software engineer, you’re among the kings of the hill at Google. It’s an engineer-driven company without a doubt (that *is* changing, but it’s still very engineer-focused).” — Google Software Engineer (New York, New York).


Salesforce

Overall ranking: #11

Company rating: 4.4

About: Salesforce is an American cloud computing company that helps companies manage their sales, marketing, and application programming projects.

An employee says: “Supportive and inclusive environment, clear and reasonable expectations, challenging environment, awesome corporate mission, lots of room and support for professional growth.” — Salesforce Solutions Engineer (Cincinnati, Ohio).


HubSpot

Overall ranking: #16

Company rating: 4.4

About: HubSpot is a developer and marketer of software products for inbound marketing and sales.

An employee says: “I’ve been at HubSpot now for almost 4 years and there’s nowhere else I’ve even thought about working in that time. Why? HubSpot is a great place to work. I feel like I’m valued. I have a lot of autonomy in how and when and where I work. I feel strongly about the mission of the company.” — Anonymous HubSpot Employee (Cambridge, Massachusetts).


DocuSign

Overall ranking: #17

Company rating: 4.4

About: DocuSign provides electronic signature technology and digital transaction management services for facilitating electronic exchanges of contracts and signed documents.

An employee says: “We’re on a good path with no signs of slowing down and a lot of untapped market potential. This is great news. Because the company is growing fast, there’s a lot of opportunity to grow your career and step up into new roles.” — DocuSign Enterprise Corporate Sales (San Francisco, California).

Ultimate Software

Overall ranking: #18

Company rating: 4.4

About: Ultimate Software is an American technology company that develops and sells UltiPro, a cloud-based human capital management (HCM) solution for businesses.

An employee says: “Amazing company. It’s the only payroll / HCM organization that truly cares about the customer – and while it’s not easy – the organization has maintained an amazing culture all in an effort to provide the best support to the customer. I love that.” — Ultimate SDM


Paylocity

Overall ranking: #20

Company rating: 4.4

About: Paylocity is a provider of cloud-based payroll and human capital management software solutions for medium-sized organizations.

An employee says: “Great company culture. People that really believe in what we do, and investment in technology to push the envelope.” — Paylocity Account Executive (Tampa, Florida).

Fast Enterprises

Overall ranking: #26

Company rating: 4.4

About: Fast Enterprises provides software and information technology consulting services.

An employee says: “Fast, even at the 1000+ size it is, still cares deeply about each and every employee. Their benefits, even the way they help people move, the way they bring individuals AND their spouses/families into the culture; it’s super impressive and I love that about Fast.” — Fast Enterprises Implementation Consultant (Boston, Massachusetts).


SAP

Overall ranking: #27

Company rating: 4.4

About: SAP is a multinational software corporation that develops enterprise software to help manage business operations and customer relations.

An employee says: “We have yoga and meditation classes, mindfulness workshops. Many invited guests from technology industries provide you with information. Leadership women workshops, global coaching, mentoring programs, and a flexible work environment. It is truly a top-notch company that will give back to their employees.” — SAP Manager (Montreal, Québec).


Adobe

Overall ranking: #30

Company rating: 4.4

About: Adobe Inc. is an American multinational computer software company best known for its design and photo-editing solutions.

An employee says: “Relentless commitment to customer success. This is the core of most day to day decisions and the North Star for all activity. This makes it a place to be proud to work. Incredible products. Amazing benefits and culture that draws incredibly talented individuals.” — Adobe Learning Specialist (San Jose, California).


Compass

Overall ranking: #32

Company rating: 4.4

About: Compass is a real estate agency and platform for buying, selling, and renting a home.

An employee says: “Having recently joined Compass, all I can say about the company, its mission, and the people in it is… ‘simply amazing.’ Compass is a unicorn. It is that rare company that combines passion, focus, execution, vision, and has a heart and a soul.” — Anonymous Compass Employee (San Francisco, California).


Microsoft

Overall ranking: #34

Company rating: 4.4

About: Microsoft Corporation (MS) is an American multinational technology company that develops, manufactures, licenses, supports and sells computer software, consumer electronics, personal computers, and related services.

An employee says: “Respect for the individual, constant stressing of core cultural values of letting everyone be heard, etc. Decent work/life balance, though it’s hugely dependent on the individual to enforce. Individuals are encouraged to engage with managers at any level (for example with your manager’s manager’s manager…). There’s a general high level of passion for the products we make.” — Senior Microsoft Electrical Engineer (Redmond, Washington).


Nvidia Corporation

Overall ranking: #36

Company rating: 4.3

About: Nvidia Corporation designs graphics processing units (GPUs) for the gaming, cryptocurrency, and professional markets, as well as system on a chip units (SoCs) for the mobile computing and automotive markets.

An employee says: “I’ll be up front and say that it has always been my dream to work here. With that in mind, I came in telling myself to look at this place as objectively as possible to not cloud my judgement. After working here for over a year, I must say, the hype is real.” — Senior Nvidia Systems Engineer (Santa Clara, California).


Intuit Inc.

Overall ranking: #38

Company rating: 4.3

About: Intuit Inc. is a business and financial software company that develops and sells financial, accounting, and tax preparation software and related services for small businesses, accountants, and individuals.

An employee says: “Incredible company that has market dominance, yet also has so much room to grow. Management constantly preaches disruption, and it’s reflected in our priorities and work.” — Intuit Data Scientist.


TaskUs

Overall ranking: #40

Company rating: 4.3

About: TaskUs is a global outsourcing company that provides back-office support and customer care solutions.

An employee says: “TaskUs puts their people first, they understand that their people are the ones who make their company! I have gone through many interviews with other companies and TaskUs is the first one who truly shows it!” — TaskUs Digital Content Moderator (San Antonio, Texas).


Cengage

Overall ranking: #41

Company rating: 4.3

About: Cengage is an educational content, technology, and services company for the higher education, K-12, professional, and library markets worldwide.

An employee says: “The leadership of the company has been jaw-droppingly motivated, visionary, and transparent. They have turned a company haunted by downturns in the market into a trendsetter that is adapting profitably. Along the way, they have been committed to employee growth and job satisfaction. I am thrilled with what we are doing for learning.” — Senior Systems Analyst (Rapid City, South Dakota).

Kronos Incorporated

Overall ranking: #44

Company rating: 4.3

About: HR, payroll, recruiting, and timekeeping software.

An employee says: “The culture is positive. Employees are hard working and care. Leadership cares for employees and their experience. The company also cares for their customers.” — Anonymous Kronos Employee (Denver, Colorado).


VMware, Inc.

Overall ranking: #51

Company rating: 4.3

About: VMware, Inc. is a subsidiary of Dell Technologies that provides cloud computing and platform virtualization software and services.

An employee says: “Lots of smart and talented coworkers who are happy to share information; you will learn a lot in a short amount of time but are expected to contribute. Slackers need not apply. If you’re a slacker you won’t survive the high stress and fast pace.” — VMware Technical Support (Broomfield, Colorado).


AppDynamics

Overall ranking: #58

Company rating: 4.3

About: AppDynamics is an application performance management (APM) and IT operations analytics (ITOA) company that focuses on managing the performance and availability of applications across cloud computing environments as well as inside the data center.

An employee says: “Great encouraging and supportive leadership. Promotional opportunities every quarter. Family atmosphere, where everyone has a genuine interest in you as an individual and employee.” — AppDynamics Business Development Representative (Dallas, Texas).


Paycom

Overall ranking: #62

Company rating: 4.3

About: Paycom is an online payroll and human resource technology provider.

An employee says: “This is honestly the best job I think I’ll ever have. The benefits are amazing and the pay is more than I ever thought I could get. BE WARNED this job is hard. Never in my life have I had so much stress. That’s the reason why it pays so well. Be prepared to be stressed every day and have heavy daily workloads and have new procedures constantly thrown at you from management. But guess what it’s your job so you either adapt or you don’t make it.” — Paycom Specialist (Oklahoma City, Oklahoma).

Cisco Systems

Overall ranking: #69

Company rating: 4.3

About: Cisco Systems, Inc. is an American multinational technology company that develops, manufactures and sells networking hardware, telecommunications equipment, and other high-technology services and products.

An employee says: “Military Friendly Culture empowers and gives transitioning veterans the opportunity to learn and develop themselves to their full potential. As a Military Retiree I feel there could not have been a better company to transition to than Cisco, and the leadership team is very understanding and appreciative of what we bring to the table.” — Cisco Program Manager (Austin, Texas).


Apple Inc.

Overall ranking: #71

Company rating: 4.3

About: Apple Inc. is an American multinational technology company that designs, develops, and sells consumer electronics, computer software, and online services.

An employee says: “The company is AMAZING. There are limitless advancement opportunities. You work with some very cool people and the leadership cares about your development. You may get coaching but you never get battered or belittled.” — Apple At Home Advisor (Lakewood, Colorado).


NetApp, Inc.

Overall ranking: #82

Company rating: 4.2

About: NetApp, Inc. is a hybrid cloud data services and data management company that offers services for managing applications and data across cloud and on-premises environments.

An employee says: “Great team chemistry. Interesting work. This company cares about its employees a lot and there are numerous events at work and outside work which show this.” — NetApp HPC Solutions Architect (Sunnyvale, California).

HP Inc.

Overall ranking: #87

Company rating: 4.2

About: HP Inc. is a provider of a wide variety of hardware and software components.

An employee says: “HP’s global footprint makes it unique in allowing you to have a BIG impact. Senior leaders are quality execs who’ve proven their mettle. Lots of opportunity to contribute given the size of the businesses.” — Anonymous HP Employee.

Expedia Group

Overall ranking: #92

Company rating: 4.2

About: Expedia Group is an American global travel technology company.

An employee says: “Expedia is the best place to work. I have been here for 11 months and am enjoying every single day. The culture is upbeat, leadership is transparent and clear on direction, and it is a very well-organized, process-oriented company. Awesome work-life balance.” — Expedia Software Engineering Manager (Chicago, Illinois).

World Wide Technology

Overall ranking: #99

Company rating: 4.2

About: World Wide Technology, Inc. (WWT) provides technology and supply chain services with a focus on the enterprise commercial, public and telecom service provider sectors. The company provides planning, procurement and deployment of IT products and solution selling.

An employee says: “Bar none, THE BEST place I have ever worked.” — World Wide Technology Senior Consultant (Denver, Colorado).

The post Best 29 Tech Companies To Work For In The U.S. In 2019 appeared first on TechWorm.

Facebook ‘could threaten democracy’ - Facebook could become a threat to democracy without tougher regulation, the former head of intelligence agency GCHQ has said. Robert Hannigan told the BBC the social media giant was more interested i…

Tweeted by @annetteashley61

12 Security Tech Terms Everyone Must Know

With tons of new technology coming out in the 21st century, it’s extremely important that you know what each of these terms means. You don’t want to be looking like a complete fool when your tech-savvy buddies are going around using these terms and you have no idea what they are talking about.

Even if you don’t like technology that much, knowing these insider terms can get you into a lot of new conversations and they will help you connect with many new people.

Read the terms, write them down, make a Quizlet out of them. Do what you’ve got to do to learn them. Just know them.

1. Asset

An object that is valuable to a person or an organization.

2. Authentication

A process used to verify that people are who they claim to be.
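The idea behind password-based authentication can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation; the function names `hash_password` and `verify_password` are invented for the example:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash from a password (PBKDF2-HMAC-SHA256)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash and compare in constant time to avoid timing leaks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

# The site stores only the salt and digest, never the password itself.
salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The key point: the system never needs to keep your actual password, only enough information to check a claimed one.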

3. Brainjacking

Brainjacking is the term used for hacking into brain implants. In the future we may all have brain implants, but they come with the risk of being hijacked, which could alter pain levels and even behavior.

4. Certification Body

A separate, independent organization that provides certification services to its customers.

5. Machine Bias

Machine bias occurs when an algorithm learns from data that reflects the biases and prejudices of the people it was collected from, so the system ends up reproducing those prejudices in its output.
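A toy illustration of how bias in historical data carries straight into a model. All the data and numbers here are invented, and the "model" is deliberately naive, but the mechanism is the same one that affects real systems:

```python
from collections import defaultdict

# Hypothetical historical hiring decisions. Candidates in groups "A" and "B"
# are equally qualified, but past reviewers approved "B" far less often.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

# A naive "model" that simply learns the historical approval rate per group.
rates = defaultdict(list)
for group, approved in history:
    rates[group].append(approved)
model = {g: sum(v) / len(v) for g, v in rates.items()}

print(model)  # {'A': 0.75, 'B': 0.25} -- the bias in the data becomes the model
```

Nothing in the code is "prejudiced"; the skew comes entirely from the training data, which is why auditing the data matters as much as auditing the algorithm.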

6. Crowdturfing

Crowdturfing is when you hire someone to write fake reviews or posts for you. Attackers now use it at large scale, which makes it very hard to separate what is fake from what is real.

7. AI Cyber-Attacks

AI chatbots are being used to trick people into giving up private information like their credit card information, private documents, and passwords.

8. Computer Hallucinations

Self-driving cars and AI technology rely on cameras and algorithms to figure out what things are and how to navigate around obstacles. The problem is that the AI doesn’t always recognize things properly and it is possible that cars with computer vision could get attacked.

9. Encryption

A process that allows one to convert data into a secure form to hide its content.
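The core idea can be shown with a toy one-time pad: XOR the message with a random key of the same length, and XOR again with the same key to get it back. This is only an illustration of "converting data into a secure form"; real systems use vetted algorithms and libraries, not hand-rolled code like this:

```python
import secrets

def xor_bytes(data, key):
    # XOR each message byte with the matching key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # random key as long as the message

ciphertext = xor_bytes(message, key)     # unreadable without the key
recovered = xor_bytes(ciphertext, key)   # XOR with the same key undoes it

print(recovered == message)  # True
```

Whoever holds the key can recover the message; without it, the ciphertext is just noise.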

10. Instant Messaging

Allows conversations between multiple people through typing on devices.

11. Keyboard Logger

Software that records every key pressed, usually used to steal credit card details and other private information.

Keyloggers can be extremely dangerous because they are usually installed alongside trojans that further compromise your computer.

Make sure you check out the conclusion to this article to learn how to defend against these kinds of dangerous malware with the best antivirus solution.

12. Macro Virus

A macro virus is a special type of malware that hides in the macros of ordinary spreadsheet and word-processing documents and uses them to infect a computer.

These Terms are Cool But A Quality Antivirus is Superior

A quality antivirus product has multiple layers of protection and can defend your computer from the malware listed above.

When it comes to antivirus protection, Norton Antivirus is a well-known name in cybersecurity; it has won many awards for its protection.

The post 12 Security Tech Terms Everyone Must Know appeared first on TechWorm.

How To Be Invisible On The Internet - Everywhere you look, concerns are mounting about internet privacy. Although giving up your data was once an afterthought when gaining access to the newest internet services such as Facebook and Uber,…

Tweeted by @copperpeony

Cyber Warfare - Nation-states and their proxies are regularly spying and attacking in cyberspace across national borders. Western societies that are being targeted should do three things: Be less vulnerable, be able…

Tweeted by @IdeaGov

Steven Wilson on LinkedIn - Steven Wilson, Cybercrime and Digital Investigator / Data Privacy and Protection / Cyber Security / Technology Keynote Speaker / Author

Tweeted by @globalfraudchat

Why Huawei Should Worry America - Eli Lake is a Bloomberg Opinion columnist covering national security and foreign policy. He was the senior national security correspondent for the Daily Beast and covered national security and intell…

Tweeted by @mobilemaui

The Wired Guide to Data Breaches - Another week, another massive new corporate security breach that exposes your personal data. Names, email addresses, passwords, Social Security numbers, dates of birth, credit card numbers, banking d…

Tweeted by @adeyasecure

A look back at cybercrime in 2018 - Last year IBM predicted that: These were very accurately predicted as areas of great impact! Symantec’s 2018 cybersecurity attacks report found that IOT experienced a 60…

Tweeted by @SecurEcomConslt

6 Critical Website Elements You Need to Review

By Carolina

Just like a car needs regular maintenance, your business’s website needs a review to ensure that it is performing to the best of its capabilities. Whether you have noticed a reduction in visitor numbers, or are responding to feedback from customers, reviewing your website is a business necessity. Here are 6 critical elements of your […]

Read the original post: 6 Critical Website Elements You Need to Review

Likely North Korean Cyberespionage Group Uses Chrome Extension to Infect Academic Institutions With Spyware in ‘Stolen Pencil’ Campaign - The Drum: Is it time to unfriend Zuckerberg and Sandberg? Chicago Tribune: Facebook allegedly offered advertisers special access to users data, activities TIME: Lawmakers Say Facebook Struck Deals Ov…

Tweeted by @Metacurity

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Facial recognition has quickly shifted from techno-novelty to fact of life for many, with millions around the world at least willing to put up with having their faces scanned by software at the airport, their iPhones, or Facebook’s server farms. But researchers at New York University’s AI Now Institute have issued a strong warning against not only ubiquitous facial recognition, but its more sinister cousin: so-called affect recognition, technology that claims it can find hidden meaning in the shape of your nose, the contours of your mouth, and the way you smile. If that sounds like something dredged up from the 19th century, that’s because it sort of is.

AI Now’s 2018 report is a 56-page record of how “artificial intelligence” — an umbrella term that includes a myriad of both scientific attempts to simulate human judgment and marketing nonsense — continues to spread without oversight, regulation, or meaningful ethical scrutiny. The report covers a wide expanse of uses and abuses, including instances of racial discrimination, police surveillance, and how trade secrecy laws can hide biased code from an AI-surveilled public. But AI Now, which was established last year to grapple with the social implications of artificial intelligence, expresses in the document particular dread over affect recognition, “a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces.” The thought of your boss watching you through a camera that uses machine learning to constantly assess your mental state is bad enough, while the prospect of police using “affect recognition” to deduce your future criminality based on “micro-expressions” is exponentially worse.

“The ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims.”

That’s because “affect recognition,” the report explains, is little more than the computerization of physiognomy, a thoroughly disgraced and debunked strain of pseudoscience from another era that claimed a person’s character could be discerned from their bodies — and their faces, in particular. There was no reason to believe this was true in the 1880s, when figures like the discredited Italian criminologist Cesare Lombroso promoted the theory, and there’s even less reason to believe it today. Still, it’s an attractive idea, despite its lack of grounding in any science, and data-centric firms have leapt at the opportunity to not only put names to faces, but to ascribe entire behavior patterns and predictions to some invisible relationship between your eyebrow and nose that can only be deciphered through the eye of a computer. Two years ago, students at a Shanghai university published a report detailing what they claimed to be a machine learning method for determining criminality based on facial features alone. The paper was widely criticized, including by AI Now’s Kate Crawford, who told The Intercept it constituted “literal phrenology … just using modern tools of supervised machine learning instead of calipers.”

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads. “The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling or what type of person they intrinsically are is proving attractive to both corporations and governments, even though the scientific justifications for such claims are highly questionable, and the history of their discriminatory purposes well-documented.”

In an email to The Intercept, Crawford, AI Now’s co-founder and distinguished research professor at NYU, along with Meredith Whittaker, co-founder of AI Now and a distinguished research scientist at NYU, explained why affect recognition is more worrying today than ever, referring to two companies that use appearances to draw big conclusions about people. “From Faception claiming they can ‘detect’ if someone is a terrorist from their face to HireVue mass-recording job applicants to predict if they will be a good employee based on their facial ‘micro-expressions,’ the ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims,” said Crawford.

Faception has purported to determine from appearance if someone is “psychologically unbalanced,” anxious, or charismatic, while HireVue has ranked job applicants on the same basis.

As with any computerized system of automatic, invisible judgment and decision-making, the potential to be wrongly classified, flagged, or tagged is immense with affect recognition, particularly given its thin scientific basis: “How would a person profiled by these systems contest the result?” Crawford added. “What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings? Some of these products cite deeply controversial theories that are long disputed in the psychological literature, but are being treated by AI startups as fact.”

What’s worse than bad science passing judgment on anyone within camera range is that the algorithms making these decisions are kept private by the firms that develop them, safe from rigorous scrutiny behind a veil of trade secrecy. AI Now’s Whittaker singles out corporate secrecy as confounding the already problematic practices of affect recognition: “Because most of these technologies are being developed by private companies, which operate under corporate secrecy laws, our report makes a strong recommendation for protections for ethical whistleblowers within these companies.” Such whistleblowing will continue to be crucial, wrote Whittaker, because so many data firms treat privacy and transparency as a liability, rather than a virtue: “The justifications vary, but mostly [AI developers] disclaim all responsibility and say it’s up to the customers to decide what to do with it.” Pseudoscience paired with state-of-the-art computer engineering and placed in a void of accountability. What could go wrong?

The post Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History” appeared first on The Intercept.

Windows Defender ATP device risk score exposes new cyberattack, drives Conditional access to protect networks – Microsoft Secure - Several weeks ago, the Windows Defender Advanced Threat Protection (Windows Defender ATP) team uncovered a new cyberattack that targeted several high-profile organizations in the energy and food and …

Tweeted by @EliShlomo

Here’s Facebook’s Former “Privacy Sherpa” Discussing How to Harm Your Facebook Privacy

In 2015, rising star, Stanford University graduate, winner of the 13th season of “Survivor,” and Facebook executive Yul Kwon was profiled by the news outlet Fusion, which described him as “the guy standing between Facebook and its next privacy disaster,” guiding the company’s engineers through the dicey territory of personal data collection. Kwon described himself in the piece as a “privacy sherpa.” But the day it published, Kwon was apparently chatting with other Facebook staffers about how the company could vacuum up the call logs of its users without the Android operating system getting in the way by asking the user for specific permission, according to confidential Facebook documents released today by the British Parliament.

“This would allow us to upgrade users without subjecting them to an Android permissions dialog.”

The document, part of a larger 250-page parliamentary trove, shows what appears to be a copied-and-pasted recap of an internal chat conversation between various Facebook staffers and Kwon, who was then the company’s deputy chief privacy officer and is currently working as a product management director, according to his LinkedIn profile.

The conversation centered on an internal push to change which data Facebook’s Android app had access to: to grant the software the ability to record a user’s text messages and call history, to interact with Bluetooth beacons installed by physical stores, and to offer better customized friend suggestions and news feed rankings. This would be a momentous decision for any company, to say nothing of one with Facebook’s privacy track record and reputation, even in 2015, of sprinting through ethical minefields. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it,” Michael LeBeau, a Facebook product manager, is quoted in the document as saying of the change.

Crucially, LeBeau commented, according to the document, such a privacy change would require Android users to essentially opt in; Android, he said, would present them with a permissions dialog soliciting their approval to share call logs when they were to upgrade to a version of the app that collected the logs and texts. Furthermore, the Facebook app itself would prompt users to opt in to the feature, through a notification referred to by LeBeau as “an in-app opt-in NUX,” or new user experience. The Android dialog was especially problematic; such permission dialogs “tank upgrade rates,” LeBeau stated.

But Kwon appeared to later suggest that the company’s engineers might be able to upgrade users to the log-collecting version of the app without any such nagging from the phone’s operating system. He also indicated that the plan to obtain text messages had been dropped, according to the document. “Based on [the growth team’s] initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,”  he stated. Users would have to click to effect the upgrade, he added, but, he reiterated, “no permissions dialog screen.”

It’s not clear if Kwon’s comment about “no permissions dialog screen” applied to the opt-in notification within the Facebook app. But even if the Facebook app still sought permission to share call logs, such in-app notices are generally designed expressly to get the user to consent and are easy to miss or misinterpret. Android users rely on standard, clear dialogs from the operating system to inform them of serious changes in privacy. There’s good reason Facebook would want to avoid “subjecting” its users to a screen displaying exactly what they’re about to hand over to the company.

It’s not clear how this specific discussion was resolved, but Facebook did eventually begin obtaining call logs and text messages from users of its Messenger and Facebook Lite apps for Android. This proved highly controversial when revealed in press accounts and by individuals posting on Twitter after receiving data Facebook had collected on them; Facebook insisted it had obtained permission for the phone log and text message collection, but some users and journalists said it had not.

It’s Facebook’s corporate stance that the documents released by Parliament “are presented in a way that is very misleading without additional context.” The Intercept has asked both Facebook and Kwon personally about what context is missing here, if any, and will update with their response.

The post Here’s Facebook’s Former “Privacy Sherpa” Discussing How to Harm Your Facebook Privacy appeared first on The Intercept.

The CyberWire Daily Briefing 12.5.18 - WannaCry, NotPetya, ransomware-as-a-service, and fileless attacks abounded. And, that’s not everything. The victims of cybercrime ranged from private businesses to the fundamental practices of democr…

Tweeted by @thecyberwire

Drawing the line for cyber warfare

Tweeted by @DrSonamsharma

What’s ahead for cybersecurity in 2019? |IT News Africa – Up to date technology news, IT news, Digital news, Telecom news, Mobile news, Gadgets news, Analysis and Reports - What lies ahead for companies, governments and individuals regarding cybersecurity in 2019? Will we see the EU government forcing US data centers to hand over data? Will the European Union issue its …

Tweeted by @GetCalCISO

SN 692: GPU RAM Image Leakage

  • Another Lenovo SuperFish-style local security certificate screw up
  • The Marriott breach and several other new, large and high-profile secure breach incidents
  • The inevitable evolution of exploitation of publicly exposed UPnP router services
  • The emergence of "Printer Spam"
  • How well does ransomware pay? We have an idea now.
  • The story of two iOS scam apps
  • Progress on the DNS over HTTPS front
  • Rumors that Microsoft is abandoning their EdgeHTML engine in favor of Chromium
  • A Cyber Security related Humble Book Bundle just in time for Christmas
  • Some new research that reveals that it's possible to recover pieces of web browser page images that have been previously viewed.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Why 5G will be disruptive - Every next-generation mobile phone standard is greeted by some skepticism. Fifth-generation technology has certainly endured its share. Critics say it’s just a collection of incremental improvements,…

Tweeted by @baruchproforum

Imprisoned Hacktivist Jeremy Hammond Bumped a Guard With a Door — and Got Thrown in Solitary Confinement

Last month, a famed hacker who has been serving a 10-year prison sentence since 2012 was accused by a guard at a federal detention center of “minor assault,” landing the so-called hacktivist in solitary confinement, according to advocates. The guard at Michigan’s Federal Correctional Institute-Milan made the accusation against Jeremy Hammond — the activist associated with hacking groups Anonymous and LulzSec and best known for hacking private intelligence firm Stratfor and leaking documents to WikiLeaks — on either November 19 or 20. Hammond has been held in solitary confinement ever since, according to the Jeremy Hammond Support Network.

The guard claims that Hammond hit him with a door, “stood his ground,” and pushed his shoulder into the guard. The head of Hammond’s support network said the prison guard’s account is overblown. “Jeremy says that he was exiting his unit through a door that has no windows and could not see the guard on the other side, and as he’s exiting, bumped the guard with the door,” Grace North told The Intercept. “The guard immediately grabbed Jeremy and threw him up against the wall and dragged him down to solitary, with no handcuffs, without calling for backup, which is against prison protocol, and Jeremy has been there ever since.”

North’s version of events also portrays the guard as overly aggressive: After the guard was hit with the door, North said, he asked Hammond if he “wanted to go.”

“It’s absurd to classify being bumped with a door as assault and to think that an appropriate response is to subject the person who bumped you to torture.”

Hammond, who pleaded guilty to violating one count of the Computer Fraud and Abuse Act in a noncooperating plea deal, had never been part of any physical altercation since his arrest in Chicago on March 5, 2012. In 2013, Hammond pleaded guilty to hacking the private intelligence firm Stratfor Global Intelligence and other targets. The Stratfor hack led to numerous revelations, including that the firm spied on activists for major corporations on several occasions.

Hammond’s run-in with the guard could have severe implications for his time in prison, disrupting his studies toward a higher-education degree and potentially precipitating a move from the minimum-security Milan facility to a medium-security prison.

“It’s absurd to classify being bumped with a door as assault and to think that an appropriate response is to subject the person who bumped you to torture,” said North. “This is yet another example of the wildly unchecked systems of power and abuse that are endemic to American prisons, and it illustrates the need not just for reform, but the complete abolition of the entire prison-industrial complex.”

This week will mark the start of Hammond’s third week in a so-called segregated housing unit — more commonly known as solitary confinement. The United Nations has said that confinement of such length could be considered torture. “Considering the severe mental pain or suffering solitary confinement may cause,” U.N. Special Rapporteur on Torture Juan Méndez said in 2011, “it can amount to torture or cruel, inhuman, or degrading treatment or punishment.” He added that prolonged isolation for more than 15 days — around the length of Hammond’s current stint in solitary — should be absolutely prohibited because scientific studies have established that it can lead to lasting mental damage.

The charge that led to Hammond’s move to solitary confinement was upheld in a disciplinary hearing last week, which Hammond attended over the phone because he was barred from attending in person. North said that the “minor assault” charge against him is a disciplinary matter — as opposed to criminal — so Hammond was not allowed to have a lawyer. “He’s not entitled to representation of any kind,” North said. North added that Hammond was left unaware if any evidence against him was presented at the hearing, such as video of the incident. “It’s a prison, obviously there’s video of every corner of the building,” North said. “So we’re not aware if there was video shown, or if it was just the word of the guard.” The recommendation from the hearing is to transfer Hammond from FCI Milan, a low-security federal prison in Michigan, to a medium-security federal prison, according to North. (A spokesperson for FCI Milan declined to comment, citing the Privacy Act of 1974 that prohibits them from releasing information about any incarcerated people without their written permission.)

The “minor assault” charge is severely disrupting Hammond’s life in prison. Hammond has been taking college classes through a local community college that has a prison education program and was expecting to earn an associate’s degree in general studies next semester, making him part of the first class of incarcerated people to receive a college degree through the program. Since he’s been in solitary confinement, however, he has missed his classes, been unable to turn in assignments, and is unable to take his finals. “He greatly enjoys his studies, he greatly enjoys the classes he’s been taking,” North said. “Most prisons don’t offer a prison education program; Milan is one of the few that do. It would almost certainly be guaranteed that whatever prison he was transferred to would not offer the program that Milan offers.”

In 2004, while Hammond was a freshman at the University of Illinois at Chicago on a full scholarship, he hacked into the website of the computer science department, told the department about it, and offered to help fix the vulnerability. In the cybersecurity industry, this is called responsible disclosure, but university administrators expelled him for it, and he never finished his degree.

If he gets transferred to a medium-security prison, Hammond will enjoy fewer freedoms than he currently does at Milan. He’ll also be farther from friends and family who right now are able to visit him frequently.

In 2011, hacktivists affiliated with Anonymous and LulzSec, including Hammond and FBI informant Hector Monsegur, also known as “Sabu,” hacked Stratfor and leaked seven and a half years of the company’s emails to WikiLeaks. At the time, Stratfor — which describes itself as “the world’s leading geopolitical intelligence platform” — had clients ranging from military agencies and defense contractors to global corporations that wanted to spy on activists.

Among other things, the hack and leak exposed how Dow Chemical hired Stratfor to spy on the culture-jamming activist group the Yes Men; Coca-Cola, a sponsor of the 2010 Winter Olympics in Vancouver, Canada, hired the firm to spy on activists associated with animal rights organization PETA, worried that they might be planning direct action against the corporation during the games; and American Petroleum Institute, the U.S. oil and gas industry lobby group, hired Stratfor to spy on Pulitzer Prize-winning investigative journalist outfit ProPublica, which in 2008 broke the first news stories about the environmental and health risks posed by fracking.

Monsegur, who was often referred to as the leader of LulzSec, was secretly arrested by the FBI on June 7, 2011. Immediately after his arrest, he began working closely with the FBI as an informant, building a case against Hammond and the other hackers associated with LulzSec and Anonymous. With Monsegur’s help, the FBI was aware of — and helped fund and participate in — the hacking of Stratfor and other targets. Monsegur provided Hammond with an FBI-owned server to exfiltrate emails and documents to during the Stratfor hack.

In a statement during his sentencing hearing, Hammond referred to his hacking as “acts of civil disobedience and direct action,” describing “an obligation to use my skills to expose and confront injustice and to bring the truth to light.” He says he had never heard of Stratfor until Monsegur — who was already an FBI informant at the time — brought it to his attention. “Why the FBI would introduce us to the hacker who found the initial vulnerability and allow this hack to continue remains a mystery,” he said at the sentencing.

Hammond is currently scheduled for release in February 2020.

The post Imprisoned Hacktivist Jeremy Hammond Bumped a Guard With a Door — and Got Thrown in Solitary Confinement appeared first on The Intercept.

Want to Be A Hacker? Go to Dallas. - A man named Tinker clutches an HDMI cable as he bumps through the crowd milling in front of the stage at Family Karaoke, a joint on Dallas’s east side. A former Marine, Tinker has a solid but not ove…

Tweeted by @InfoSecSherpa

Facing Boycotts, China’s Huawei Sees the Geopolitical Trap Closing In on Its Future – - The explosive growth of Chinese telecom equipment maker Huawei is now running up against geopolitical obstacles, as some Western and Asian players refuse to let it gain a foothold in…

Tweeted by @ECONEWSN1


Is strategic cyber-warfare feasible today? - The term "Cyber Warfare" is largely nonsense. There just isn't enough there to make a prolonged exchange of hostilities likely, not on the order of magnitude that you could call a "war". However, as …

Tweeted by @StackSecurity

Securing Your Site like It’s 1999 - Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, th…

Tweeted by @tutorialsonline

60 Cybersecurity Predictions For 2019 - I’ve always been a loner, avoiding crowds as much as possible, but last Friday I found myself in the company of 500 million people. The breach of the personal accounts of Marriott and Starwood custom…

Tweeted by @threat_x_inc

Homeland Security Will Let Computers Predict Who Might Be a Terrorist on Your Plane — Just Don’t Ask How It Works

You’re rarely allowed to know exactly what’s keeping you safe. When you fly, you’re subject to secret rules, secret watchlists, hidden cameras, and other trappings of a plump, thriving surveillance culture. The Department of Homeland Security is now complicating the picture further by paying a private Virginia firm to build a software algorithm with the power to flag you as someone who might try to blow up the plane.

The new DHS program will give foreign airports around the world free software that teaches itself who the bad guys are, continuing society’s relentless swapping of human judgment for machine learning. DataRobot, a northern Virginia-based automated machine learning firm, won a contract from the department to develop “predictive models to enhance identification of high risk passengers” in software that should “make real-time prediction[s] with a reasonable response time” of less than one second, according to a technical overview that was written for potential contractors and reviewed by The Intercept. The contract assumes the software will produce false positives and requires that the terrorist-predicting algorithm’s accuracy should increase when confronted with such mistakes. DataRobot is currently testing the software, according to a DHS news release.

The contract also stipulates that the software’s predictions must be able to function “solely” using data gleaned from ticket records and demographics — criteria like origin airport, name, birthday, gender, and citizenship. The software can also draw from slightly more complex inputs, like the name of the associated travel agent, seat number, credit card information, and broader travel itinerary. The overview document describes a situation in which the software could “predict if a passenger or a group of passengers is intended to join the terrorist groups overseas, by looking at age, domestic address, destination and/or transit airports, route information (one-way or round trip), duration of the stay, and luggage information, etc., and comparing with known instances.”
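
The contract doesn’t describe the model itself, but the inputs listed above suggest the general shape of such a scorer. The sketch below is purely hypothetical: the feature names, weights, and bias are invented for illustration and come from neither DHS nor DataRobot. It only shows how ticket-record fields might be reduced to numbers and scored within the sub-second budget the overview specifies.

```python
import math

# Hypothetical illustration only: these features and weights are invented,
# not drawn from any DHS or DataRobot document.

def featurize(passenger):
    """Map a ticket record to a numeric feature vector."""
    return [
        1.0 if passenger["route"] == "one-way" else 0.0,
        passenger["stay_days"] / 30.0,           # normalized trip duration
        1.0 if passenger["checked_bags"] == 0 else 0.0,
        passenger["age"] / 100.0,
    ]

def risk_score(features, weights, bias=-3.0):
    """Logistic score in (0, 1); trivially fast enough for real-time use."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

weights = [1.2, -0.4, 0.8, -0.5]  # illustrative, not learned from data
p = risk_score(featurize({"route": "one-way", "stay_days": 3,
                          "checked_bags": 0, "age": 25}), weights)
print(round(p, 3))  # a probability, not a verdict
```

The point of the sketch is how little it takes: a handful of demographic and itinerary fields, a weighted sum, and a threshold somewhere downstream deciding who gets pulled aside.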

DataRobot’s bread and butter is turning vast troves of raw data, which all modern businesses accumulate, into predictions of future action, which all modern companies desire. Its clients include Monsanto and the CIA’s venture capital arm, In-Q-Tel. But not all of DataRobot’s clients are looking to pad their revenues; DHS plans to integrate the code into an existing DHS offering called the Global Travel Assessment System, or GTAS, a toolchain that has been released as open source software and which is designed to make it easy for other countries to quickly implement no-fly lists like those used by the U.S.

According to the technical overview, DHS’s predictive software contract would “complement the GTAS rule engine and watch list matching features with predictive models to enhance identification of high risk passengers.” In other words, the government has decided that it’s time for the world to move beyond simply putting names on a list of bad people and then checking passengers against that list. After all, an advanced computer program can identify risky fliers faster than humans could ever dream of and can also operate around the clock, requiring nothing more than electricity. The extent to which GTAS is monitored by humans is unclear. The overview document implies a degree of autonomy, listing as a requirement that the software should “automatically augment Watch List data with confirmed ‘positive’ high risk passengers.”

The document does make repeated references to “targeting analysts” reviewing what the system spits out, but the underlying data-crunching appears to be almost entirely the purview of software, and it’s unknown what ability said analysts would have to check or challenge these predictions. In an email to The Intercept, Daniel Kahn Gillmor, a senior technologist with the American Civil Liberties Union, expressed concern with this lack of human touch: “Aside from the software developers and system administrators themselves (which no one yet knows how to automate away), the things that GTAS aims to do look like they could be run mostly ‘on autopilot’ if the purchasers/deployers choose to operate it in that manner.” But Gillmor cautioned that even including a human in the loop could be a red herring when it comes to accountability: “Even if such a high-quality human oversight scheme were in place by design in the GTAS software and contributed modules (I see no indication that it is), it’s free software, so such a constraint could be removed. Countries where labor is expensive (or controversial, or potentially corrupt, etc) might be tempted to simply edit out any requirement for human intervention before deployment.”

“Countries where labor is expensive might be tempted to simply edit out any requirement for human intervention.”

For the surveillance-averse, consider the following: Would you rather a group of government administrators, who meet in secret and are exempt from disclosure, decide who is unfit to fly? Or would it be better for a computer, accountable only to its own code, to make that call? It’s hard to feel comfortable with the very concept of profiling, a practice that so easily collapses into prejudice rather than vigilance. But at least with uniformed government employees doing the eyeballing, we know who to blame when, say, a woman in a headscarf is needlessly hassled, or a man with dark skin is pulled aside for an extra pat-down.

If you ask DHS, this is a categorical win-win for all parties involved. Foreign governments are able to enjoy a higher standard of security screening; the United States gains some measure of confidence about the millions of foreigners who enter the country each year; and passengers can drink their complimentary beverage knowing that the person next to them wasn’t flagged as a terrorist by DataRobot’s algorithm. But watchlists, among the most notorious features of post-9/11 national security mania, are of questionable efficacy and dubious legality. A 2014 report by The Intercept pegged the U.S. Terrorist Screening Database, an FBI data set from which the no-fly list is excerpted, at roughly 680,000 entries, including some 280,000 individuals with “no recognized terrorist group affiliation.” That same year, a U.S. district court judge ruled in favor of an ACLU lawsuit, declaring the no-fly list unconstitutional. The list could only be used again if the government improved the mechanism through which people could challenge their inclusion on it — a process that, at the very least, involved human government employees, convening and deliberating in secret.


Diagram from a Department of Homeland Security technical document illustrating how GTAS might visualize a potential terrorist onboard during the screening process.

Document: DHS

But what if you’re one of the inevitable false positives? Machine learning and behavioral prediction is already widespread; The Intercept reported earlier this year that Facebook is selling advertisers on its ability to forecast and pre-empt your actions. The consequences of botching consumer surveillance are generally pretty low: If a marketing algorithm mistakenly predicts your interest in fly fishing where there is none, the false positive is an annoying waste of time. The stakes at the airport are orders of magnitude higher.

What happens when DHS’s crystal ball gets it wrong — when the machine creates a prediction with no basis in reality and an innocent person with no plans to “join a terrorist group overseas” is essentially criminally defamed by a robot? Civil liberties advocates not only worry that such false positives are likely, possessing a great potential to upend lives, but also question whether such a profoundly damning prediction is even technologically possible. According to DHS itself, its predictive software would have relatively little information upon which to base a prognosis of impending terrorism.

Even from such mundane data inputs, privacy watchdogs cautioned that prejudice and biases always follow — something only worsened under the auspices of self-teaching artificial intelligence. Faiza Patel, co-director of the Brennan Center’s Liberty and National Security Program, told The Intercept that giving predictive abilities to watchlist software will present only the veneer of impartiality. “Algorithms will both replicate biases and produce biased results,” Patel said, drawing a parallel to situations in which police are algorithmically allocated to “risky” neighborhoods based on racially biased crime data, a process that results in racially biased arrests and a checkmark for the computer. In a self-perpetuating bias machine like this, said Patel, “you have all the data that’s then affirming what the algorithm told you in the first place,” which creates “a kind of cycle of reinforcement just through the data that comes back.” What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

Indeed, DHS’s system stands to deliver a computerized turbocharge to the bias that is already endemic to the American watchlist system. The overview document for the Delphic profiling tool made repeated references to the fact that it will create a feedback loop of sorts. The new system “shall automatically augment Watch List data with confirmed ‘positive’ high risk passengers,” one page read, with quotation marks doing some very real work. The software’s predictive abilities “shall be able to improve over time as the system feeds actual disposition results, such as true and false positives,” said another section. Given that the existing watchlist framework has ensnared countless thousands of innocent people, the notion of “feeding” such “positives” into a machine that will then search even harder for that sort of person is downright dangerous. It also becomes absurd: When the criteria for who is “risky” and who isn’t are kept secret, it’s quite literally impossible for anyone on the outside to tell what is a false positive and what isn’t. Even for those without civil libertarian leanings, the notion of an automatic “bad guy” detector that uses a secret definition of “bad guy” and will learn to better spot “bad guys” with every “bad guy” it catches would be comical were it not endorsed by the federal government.
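
The dynamics of that feedback loop are easy to demonstrate with a toy model. The simulation below is entirely hypothetical — a deterministic expected-value calculation, not anything from the GTAS documents — but it shows how a list that starts out skewed toward one group keeps flagging that group at the same skewed rate even when both groups pose identical (here, zero) actual risk:

```python
# Toy model of the "augment Watch List data with confirmed positives" loop.
# Two groups of equal size and identical real risk; the only difference is
# that the initial watch list contains nine times as many entries for B.

watchlist = {"A": 1.0, "B": 9.0}    # initial list skews toward group B
group_size = {"A": 500, "B": 500}   # same size, same (zero) actual risk
flagged = {"A": 0.0, "B": 0.0}      # cumulative expected flags per group

for _ in range(5):                  # five screening rounds
    total = watchlist["A"] + watchlist["B"]
    for g in ("A", "B"):
        # "Resemblance to the list" drives the prediction: flag probability
        # is proportional to the group's share of watch-list entries.
        new = 0.05 * (watchlist[g] / total) * group_size[g]
        flagged[g] += new
        watchlist[g] += new         # "confirmed" positives fed back in

print(flagged["B"] / flagged["A"])  # the initial 9:1 skew never decays
```

After five rounds the model has flagged exactly nine times as many people from group B as from group A, purely because the initial list contained nine times as many B entries; the “positives” fed back in preserve the skew indefinitely, with no behavioral difference between the groups at all.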

For those troubled that this system is not only real but currently being tested by an American company, the refusal of both the government and DataRobot to reveal any details of the program is perhaps the most troubling part of all. When asked where the predictive watchlist prototype is being tested, the DHS tech directorate spokesperson, John Verrico, told The Intercept, “I don’t believe that has been determined yet,” and stressed that the program was meant for use with foreigners. Verrico referred further questions about test location and which “risk criteria” the algorithm will be trained to look for back to DataRobot. Libby Botsford, a DataRobot spokesperson, initially told The Intercept that she had “been trying to track down the info you requested from the government but haven’t been successful,” and later added, “I’m not authorized to speak about this. Sorry!” Subsequent requests sent to both DHS and DataRobot were ignored.

Verrico’s assurance — that the watchlist software is an outward-aiming tool provided to foreign governments, not a means of domestic surveillance — is an interesting feint given that Americans fly through non-American airports in great numbers every single day. But it obscures ambitions much larger than GTAS itself: The export of opaque, American-style homeland security to the rest of the world and the hope of bringing every destination in every country under a single, uniform, interconnected surveillance framework. Why go through the trouble of sifting through the innumerable bodies entering the United States in search of “risky” ones when you can move the whole haystack to another country entirely? A global network of terrorist-scanning predictive robots at every airport would spare the U.S. a lot of heavy, politically ugly lifting.

“Automation will exacerbate all of the worst aspects of the watchlisting system.”

Predictive screening further shifts responsibility. The ACLU’s Gillmor explained that making these tools available to other countries may mean that those external agencies will prevent people from flying so that they never encounter DHS at all, which makes DHS less accountable for any erroneous or damaging flagging, a system he described as “a quiet way of projecting U.S. power out beyond U.S. borders.” Even at this very early stage, DHS seems eager to wipe its hands of the system it’s trying to spread around the world: When Verrico brushed off questions of what the system would consider “risky” attributes in a person, he added in his email that “the risk criteria is being defined by other entities outside the U.S., not by us. I would imagine they don’t want to tell the bad guys what they are looking for anyway. ;-)” DHS did not answer when asked whether there were any plans to implement GTAS within the United States.

Then there’s the question of appeals. Those on DHS’s current watchlists may seek legal redress; though the appeals system is generally considered inadequate by civil libertarians, it offers at least a theoretical possibility of removal. The documents surrounding DataRobot’s predictive modeling contract make no mention of an appeals system for those deemed risky by an algorithm, nor is there any requirement in the DHS overview document that the software must be able to explain how it came to its conclusions. Accountability remains a fundamental problem in the fields of machine learning and computerized prediction, with some computer scientists adamant that an ethical algorithm must be able to show its work, and others objecting on the grounds that such transparency compromises the accuracy of the predictions.

Gadeir Abbas, an attorney with the Council on American-Islamic Relations, who has spent years fighting the U.S. government in court over watchlists, saw the DHS software as only more bad news for populations already unfairly surveilled. The U.S. government is so far “not able to generate a single set of rules that have any discernible level of effectiveness,” said Abbas, and so “the idea that they’re going to automate the process of evolving those rules is another example of the technology fetish that drives some amount of counterterrorism policy.”

The entire concept of making watchlist software capable of terrorist predictions is mathematically doomed, Abbas added, likening the system to a “crappy Minority Report. … Even if they make a really good robot, and it’s 99 percent accurate,” the fact that terror attacks are “exceedingly rare events” in terms of naked statistics means you’re still looking at “millions of false positives. … Automation will exacerbate all of the worst aspects of the watchlisting system.”
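
The arithmetic behind Abbas’s point is straightforward base-rate math. The figures below are illustrative, not his: run a detector with 99 percent sensitivity and a 1 percent false-positive rate across a billion passengers of whom only 100 are actual threats, and it produces roughly 10 million false alarms — while the chance that any given flag is a real threat is about one in 100,000.

```python
# Illustrative numbers only — not from the article, Abbas, or DHS.
passengers = 1_000_000_000      # rough order of annual air travelers screened
actual_threats = 100            # terror attacks are exceedingly rare events
sensitivity = 0.99              # the "99 percent accurate" detector
false_positive_rate = 0.01     # flags 1% of innocent travelers

caught = actual_threats * sensitivity
false_alarms = (passengers - actual_threats) * false_positive_rate
precision = caught / (caught + false_alarms)  # P(real threat | flagged)

print(f"innocent people flagged: {int(false_alarms):,}")
print(f"chance a flag is real:   {precision:.8f}")
```

No plausible improvement in model accuracy closes that gap: with a base rate this low, the false alarms swamp the true positives by five orders of magnitude, which is exactly why “99 percent accurate” is close to meaningless here.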

The ACLU’s Gillmor agreed that this mission is simply beyond what computers are even capable of:

For very-low-prevalence outcomes like terrorist activity, predictive systems are simply likely to get it wrong. When a disease is a one-in-a-million likelihood, the surest bet is a negative diagnosis. But that’s not what these systems are designed to do. They need to “diagnose” some instances positively to justify their existence. So, they’ll wrongly flag many passengers who have nothing to do with terrorism, and they’ll do it on the basis of whatever meager data happens to be available to them.

Predictive software is not just the future, but the present. Its expansion into the way we shop, the way we’re policed, and the way we fly will soon be commonplace, even if we’re never aware of it. Designating enemies of the state based on a crystal ball locked inside a box represents a grave, fundamental leap in how societies appraise danger. The number of active, credible terrorists-in-waiting is an infinitesimal slice of the world’s population. The number of people placed on watchlists and blacklists is significant. Letting software do the sorting — no matter how smart and efficient we tell ourselves it will be — will likely do much to worsen this inequity.

The post Homeland Security Will Let Computers Predict Who Might Be a Terrorist on Your Plane — Just Don’t Ask How It Works appeared first on The Intercept.

State-sponsored espionage and sabotage to shape 15 cybersecurity threats to beware in 2019 – IFSEC Global | Security and Fire News and Resources - A raft of cyber-threat trends to expect in 2019 also suggests that hackers will seek to capitalise on GDPR, probe cloud security for vulnerabilities and expand use of ‘multi-homed’ malware attacks. A…

Tweeted by @ifsecglobal