
E Hacking News – Latest Hacker News and IT Security News: Back Up Your Google+ Data Before It Is Deleted Forever

Fresh e-mails have been floating around warning users to back up their Google+ data by April 2, 2019, before it is deleted forever.

In October last year, Google announced that the platform would be shut down.

The reasons outlined were low usage, the discovery of API bugs that leaked user information, and the later discovery of further bugs.

A few days ago, Google began sending out these emails, urging users to back up their data and save it before it's too late.

Per the email, the shutdown will not in any way affect other Google products, including YouTube, Google Photos, and Google Drive.

The following steps could be followed to go about the Backup:

1. Go to the Google+ download page.

2. Select the data categories you want to save by placing a check mark next to them.

3. Click on "Next Step".

4. Choose how you'd like to retrieve the archive: via an email link, or saved directly to Dropbox, Google Drive, or OneDrive.

5. You can also decide how large you want the archive files to be.

6. Once done, click "Create Archive" and Google will start creating an archive for you.

After following all of the above steps, you'll be presented with an archive containing all your saved material as HTML files and images.

To be on the safe side, Google advises starting this process soon and completing it by March 31, 2019.

This will ensure that the archive is prepared in good time.

E Hacking News - Latest Hacker News and IT Security News

SolarWinds MSP Blog: How Do You Manage Impatient Clients?

Clients come in many shapes, sizes, and attitudes. Handling impatient clients is just one of the many struggles managed services providers (MSPs) have to deal with every day. It’s hardly surprising, given that everyone’s work product depends so much on their technology working properly. So what can you do to help mitigate the anxiety and frustration behind an impatient customer? 

Set expectations

Read More

SolarWinds MSP Blog

Radware Blog: The Intersections between Cybersecurity and Diversity

Cybersecurity and diversity are high-value topics that are most often discussed in isolation. Both topics resonate with individuals and organizations alike. However, the intersections between cybersecurity and diversity are often overlooked. As nations and organizations seek to protect their critical infrastructures, it’s important to cultivate relationships between the two areas. Diversity is no longer only […]

The post The Intersections between Cybersecurity and Diversity appeared first on Radware Blog.

Radware Blog

Data Security Blog | Thales eSecurity: Crafting the Perfect Pipeline in GitLab

When using a traditional single-server continuous integration (CI), fast, incremental builds are simple. Each time the CI builds your pipeline, it’s able to use the same workspace, preserving the state from the previous build. But what if you are using Kubernetes runners to execute your pipelines? These can be spun up or down on demand, and each pipeline execution is not guaranteed to be executed on the same runner, same machine, or even in the same country as the previous execution.

In this case, the most basic configuration, one that just clones and compiles the code, will always have to do a full rebuild. For larger projects, compiling could take a while, and you would lose the quick feedback loop. Could you use the features of GitLab CI to avoid this? Let's explore.


The first step to tackle is the build dependencies (compilers, third-party libraries and other tools). You don’t want to have to install the dependencies every time you build the project, and since we cannot depend on them being installed globally on any runner, the most natural solution is to use a container. Since containers may be used for deployment, the simplest approach is to have a single Docker container that installs the compile-time and run-time dependencies, as well as copies, builds and executes the code. The downside is that any single change to the code will cause the entire copy layer to be executed again, resulting in a full rebuild.

Crafting the Perfect Pipeline in GitLab

A better solution is to use two separate containers. A builder container that includes the compile-time dependencies, and a runner container that only includes the run-time dependencies.
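As a sketch of that split, the builder image might look like the following Dockerfile. This is not the post's own example: the base image and package names are illustrative assumptions, chosen only to show the shape of a builder image that deliberately avoids copying source code.

```dockerfile
# Dockerfile.builder (sketch): compile-time dependencies only.
# Base image and packages are placeholders for your project's real toolchain.
FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential cmake \
    && rm -rf /var/lib/apt/lists/*
# Note: no COPY of the source here. The CI job clones the source itself,
# so code changes never invalidate this image's layers.
```

The runner container follows the same pattern but installs only run-time packages and does copy the built binaries in.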

Docker & GitLab CI

So, how do we achieve this within GitLab CI? It is generally a good idea to split the build into several stages. This will provide better feedback regarding which stages have failed, and it allows for better load-balancing of stages between runners when multiple pipelines are running simultaneously. Stages should be defined for building the builder container, the project itself, and the runner container.

GitLab includes its own Docker registry that can be used for storing images between stages, or you could use an external Docker registry if preferred. Either way you must log in to the preferred registry, as well as set up stages and some variables that are used later to tag the images.
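A minimal sketch of the top of such a `.gitlab-ci.yml`, assuming GitLab's built-in registry, might look like this. `CI_REGISTRY`, `CI_REGISTRY_IMAGE`, `CI_COMMIT_REF_SLUG`, `CI_COMMIT_SHA`, and `CI_JOB_TOKEN` are predefined GitLab CI variables; the stage names are illustrative.

```yaml
# Sketch: stages, image-tag variables, and registry login.
stages:
  - build-builder-container
  - build
  - build-runner-container

variables:
  # One tag per branch (layer caching across pipelines),
  # one tag per commit (shared between stages of this pipeline).
  BUILDER_BRANCH_IMAGE: $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_REF_SLUG
  BUILDER_COMMIT_IMAGE: $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_SHA

before_script:
  # Log in to the GitLab registry using the job's ephemeral token.
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
```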

Defining the stage that will build the builder container comes next. Since this could be executed anywhere, you cannot rely on the cache of previous builds. Instead, you must explicitly pull down any previous image and use Docker’s cache-from option to instruct it to use this image for any cache checks. You tag the image both with a tag specific to this pipeline that will be used for caching in the next execution of this pipeline, as well as with a tag specific to this hash of this commit that will be used in subsequent stages of this pipeline. If you use only the first of these, multiple pipelines running at the same time could interfere with each other.
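A hedged sketch of that builder stage, using Docker's `--cache-from` option and the two tags described above (the Dockerfile name and stage name are assumptions):

```yaml
build-builder-container:
  stage: build-builder-container
  script:
    # Pull the image from the previous pipeline on this branch, if one exists.
    - docker pull $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_REF_SLUG || true
    # Build using that image as the layer cache; tag with the branch tag
    # (for the next pipeline's cache) and the commit tag (for later stages).
    - docker build --cache-from $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_REF_SLUG -t $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_REF_SLUG -t $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_SHA -f Dockerfile.builder .
    - docker push $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_REF_SLUG
    - docker push $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_SHA
```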

GitLab CI Build Cache

The next step is to use the builder container to achieve incremental builds of the project itself, which requires caching of the build output between executions. The default caching will not actually work with Kubernetes runners due to their distributed nature, so you must first configure GitLab to use a central cache, such as S3.
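The central cache is configured in the runner's `config.toml` rather than in `.gitlab-ci.yml`. A sketch under the assumption of an S3 bucket (bucket name and region are placeholders; credentials can come from keys or an IAM role):

```toml
# GitLab Runner config.toml (sketch): use S3 as a shared, distributed cache.
[[runners]]
  [runners.cache]
    Type   = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress  = "s3.amazonaws.com"
      BucketName     = "my-ci-cache-bucket"   # placeholder
      BucketLocation = "eu-west-1"            # placeholder
```

Individual jobs then opt in with a `cache:` section listing a key and the paths to persist.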

Caching the build-output alone is also not sufficient to achieve incremental builds. Each time the stage is run, the repository is cloned, which resets all the file-modification times to the current time (this causes everything to be rebuilt anyway). For more information on how I have found a way to achieve an incremental build, please visit my blog post on our Horizons research website.


Next, you need to transfer the build artefacts to the next stage, so they can be copied into the runner container. I recommend using the install feature of your build system to copy only the necessary binaries into an install folder.

You then configure your build stage to treat these files as artefacts. When you create the container-builder step, add the builder step as a dependency, which will cause it to automatically pull across any related artefacts.
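The artefact hand-off might be sketched as below. The `make install` command, the `install/` directory, and the stage and job names are assumptions standing in for your build system's real install step.

```yaml
build:
  stage: build
  # Builder image produced by the earlier stage of this pipeline.
  image: $CI_REGISTRY_IMAGE/builder:$CI_COMMIT_SHA
  script:
    - make install DESTDIR=install   # hypothetical install step
  artifacts:
    paths:
      - install/        # only the binaries needed by the runner container
    expire_in: 1 hour

build-runner-container:
  stage: build-runner-container
  dependencies:
    - build             # pulls the install/ artefacts into this job
  script:
    - docker build -t $CI_REGISTRY_IMAGE/runner:$CI_COMMIT_SHA -f Dockerfile.runner .
    - docker push $CI_REGISTRY_IMAGE/runner:$CI_COMMIT_SHA
```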

The building of the container itself is similar to the building of the builder container, using a Dockerfile that copies in the binaries from the install directory.
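Such a runner Dockerfile might look like the following sketch; the base image, package, and paths are illustrative, not the post's actual configuration.

```dockerfile
# Dockerfile.runner (sketch): run-time dependencies plus the built binaries.
FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# install/ is the artefact directory produced by the build stage.
COPY install/ /opt/app/
ENTRYPOINT ["/opt/app/bin/app"]
```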

Running Tests & Multi-Project Pipelines

To run a suite of tests as part of the CI pipeline, containers for these can be built similarly to the runner container except with the test binary as the entry-point.

For larger projects you may wish to split the CI build into multiple pipelines. Since our previous steps have each pushed their containers to a central Docker registry, the downstream pipelines can pull down and launch the containers that they require. If you want to be able to trigger these downstream pipelines automatically, you’ll need GitLab Premium for its “Multi-Project Pipelines” feature.
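With that Premium feature available, a downstream pipeline can be triggered declaratively. A sketch, where the project path and the `downstream` stage are hypothetical (the stage would need to be added to the pipeline's `stages:` list):

```yaml
# Kick off another project's pipeline once the containers are pushed.
trigger-system-tests:
  stage: downstream
  trigger:
    project: my-group/system-tests   # hypothetical downstream project
    branch: master
```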

More information on some things to keep in mind when running test containers and multi-project pipelines can be found in my blog post on our Horizons research website.


By combining GitLab CI and Kubernetes runners with Docker and the techniques described above, you can achieve the scalability of Kubernetes while maintaining some of the speed of incremental builds. There will inevitably still be some slow-down, as the pushing and pulling of containers takes some additional time, but build times (especially on larger projects) can be improved enough to retain the fast feedback loop, one of the benefits of using continuous integration.

Visit our Horizons research portal for more information and follow Thales eSecurity on Twitter, LinkedIn and Facebook.

The post Crafting the Perfect Pipeline in GitLab appeared first on Data Security Blog | Thales eSecurity.

Data Security Blog | Thales eSecurity

I had a backup. Really.: Amazon AWS / .Net SDK: "MissingMethodException"

I spent way too much time digging into this, and the advice on the internet seems a little vague.

Amazon offers nice PowerShell support for working with its AWS components, such as S3 (storage) and DynamoDB (NoSQL database), and they make it really easy to prototype an idea. I've been using them on a customer project, but we needed higher integration and performance than the stock cmdlets, so I resorted to writing my own cmdlets in C#.

Writing directly in .NET requires the Amazon AWS SDK for .NET, and Visual Studio is happy to retrieve this package via nuget so you can use it directly in your project.

While attempting to use some Async methods in a cmdlet, this happens:

Update-MyStuff : Method not found: 'System.Threading.Tasks.Task`1<Amazon.S3.Model.ListObjectsV2Response>
Amazon.S3.AmazonS3Client.ListObjectsV2Async(Amazon.S3.Model.ListObjectsV2Request, System.Threading.CancellationToken)'.
At C:\Testing\Run-Test.ps1:27 char:1
+ Update-MyStuff -LocalDb $ldb -Worker $worker -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Update-Stuff], MissingMethodException
    + FullyQualifiedErrorId : System.MissingMethodException,Unixwiz.Cmdlet.Update_MyStuff

Here, Update-MyStuff is my own cmdlet provided in a DLL, and it compiles and links just fine against the AWS SDK, but now there's a missing method.

The internet shows this happens with different random methods, not just this one. What's going on?

The problem appears to be a conflict between the "same" .NET assemblies provided by the AWS PowerShell tools and those provided by the AWS SDK for .NET.

The AWS PowerShell tools installed on my system to a version-numbered subdirectory under C:\Program Files\WindowsPowerShell\Modules\AWSPowerShell\, while the SDK components were fetched by nuget under my project's packages\AWSSDK.S3.version\lib\net45\ directory, and the net45\ part provides our first clue.

Using Microsoft's ildasm .NET disassembler tool, we can compare the PowerShell version with the SDK version side by side, showing that some methods are found in the SDK version but not in the PowerShell version:

The first time you use an Amazon-provided cmdlet in a PowerShell session, it automatically loads the older PowerShell-provided version of that assembly (such as AWSSDK.S3.dll), and it properly services whatever the cmdlet means to do.

When you later call your own cmdlet, even though it depends on features of the newer assemblies, PowerShell sees that the assembly in question—AWSSDK.S3.dll—has already been loaded, so it believes that request has been satisfied. The missing-method error follows.
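One way to confirm this diagnosis is to ask the CLR which copy of each assembly is actually loaded. A quick check along these lines (run in the session after the AWS cmdlets have been used; if Location points into the AWSPowerShell module directory, you've got the older copy):

```powershell
# Show the version and on-disk location of every loaded AWSSDK assembly
[System.AppDomain]::CurrentDomain.GetAssemblies() |
    Where-Object { $_.GetName().Name -like 'AWSSDK.*' } |
    ForEach-Object {
        '{0}  {1}  {2}' -f $_.GetName().Name, $_.GetName().Version, $_.Location
    }
```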


How to resolve

As noted above, the net45\ component of the SDK version suggests that the SDK is meant for modern systems with the latest .NET, while the AWS-provided PowerShell components are a bit older and can be used by a wider range of installations. It's not just the Task-based Async functions missing, it's anything that's been added to the library since the older version was created.

Don't use the Amazon-provided AWS PowerShell cmdlets
Though this would prevent the wrong DLLs from getting loaded, it's a painful step because you'd be missing out on a lot of useful facilities: you would have to re-invent the wheel for common functions, such as credential management and support.
It would also make it more difficult to experiment with aspects of the AWS ecosystem you've not yet created your own code for.
Explicitly load the proper DLLs prior to invoking the AWS cmdlets
If you manually run Import-Module on the AWSSDK.Core and AWSSDK.S3 DLLs before you invoke the first AWS cmdlet, the good DLLs will already be loaded, preventing the older ones from being loaded.
# start of my PowerShell test script, runs in the $(PROJECT)\testing\ directory

Import-Module ..\packages\AWSSDK.Core.\lib\net45\AWSSDK.Core.dll
Import-Module ..\packages\AWSSDK.S3.\lib\net45\AWSSDK.S3.dll
Import-Module ..\My.Cmdlet\bin\Debug\My.Cmdlet.dll
This definitely works, though it's tedious and inconvenient to keep track of where these DLLs are, especially as packages get updated.
Update the AWS PowerShell module with the newer DLLs.
This seems like a terrible idea, but it works and creates the least hassle, letting you get into a groove of software development without having to work around this all the time.
Be sure all PowerShell windows are closed so the AWS modules are not in use, then open a Run-As-Administrator PowerShell session:
PS> cd "C:\Program Files\WindowsPowerShell\Modules\AWSPowerShell\3.3.462.0"
PS> mkdir .old
PS> mv AWSSDK.Core.dll .old
PS> mv AWSSDK.S3.dll .old
PS> copy ...\packages\AWSSDK.Core.\lib\net45\AWSSDK.Core.dll
PS> copy ...\packages\AWSSDK.S3.\lib\net45\AWSSDK.S3.dll
You will have to remember you did this every time you update your modules.
Ask Amazon to provide newer cmdlets
I'm not sure how one even does this

Disclaimer: I'm not at all strong on this ecosystem with respect to mixing and matching of .NET assemblies built for different runtimes, or how much this applies broadly rather than to my own particular situation.

I have tested all this on my Windows 10 desktop system, and PowerShell reports itself as:

PS> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      5.1.17134.590
PSEdition                      Desktop
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.17134.590
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3

And the app I'm building targets .NET 4.6.1.

I had a backup. Really.

Bromium: Bromium Secure Platform Powers HP Sure Click Advanced

  • Following a successful launch of HP Sure Click in 2017, Bromium is expanding our relationship with HP Inc., with the introduction of HP Sure Click Advanced
  • HP Sure Click Advanced – the enterprise-ready extension of HP Sure Click, which comes pre-installed on select HP PCs – is powered by Bromium Secure Platform and is part of the HP Device as a Service (DaaS) Proactive Security offering
  • A growing collaboration with an industry leader like HP Inc. further validates Bromium's technology market leadership and the mounting momentum behind the application isolation approach to endpoint security

Expanding Our Relationship with HP

Earlier today, Bromium announced an expansion of our relationship with HP Inc., with the introduction of HP Sure Click Advanced, which is part of the HP DaaS Proactive Security Service. HP Sure Click Advanced is an extension of HP Sure Click – a product of successful HP and Bromium technology collaboration originally announced in 2017. Powered by Bromium’s hardware-enforced application isolation, HP Sure Click Advanced provides turnkey protection against risky activities like opening email attachments, clicking on links in email, browsing, or downloading files.

Bromium isolates risky activities from the host, enabling the HP DaaS Proactive Security service to deliver the world’s most advanced isolation security service for files and browsing on Windows 10 PCs, regardless of whether the endpoints come from HP or another manufacturer.

All malware and threats are trapped inside micro virtual machines, allowing security teams to safely watch the threats play out and respond more quickly to security incidents, using complete and easily shareable forensics analysis. When integrated into a wider security stack, Bromium threat intelligence helps HP make their customers’ environments more protected.

Most Trust Hardware To Protect the Enterprise

HP Sure Start, HP Sure Run, HP Sure Recover, HP Sure Click, and now HP Sure Click Advanced are all part of HP’s commitment to create the world’s most secure and manageable PC. By building trust directly into the hardware platform, HP allows enterprises to shift cyber security decisions away from their end-users, while providing detailed, near real-time threat intelligence and high-fidelity telemetry to their security operations teams.

From the moment HP users power on their devices, to the moment they click on a potentially malicious document, HP leverages modern hardware features to isolate risk and prevent attacks. No other hardware manufacturer has gone to such lengths to protect the enterprise.

Strengthening Threat Intelligence

For Bromium customers, collaboration with an industry leader like HP will drive additional threat intelligence data, helping to further classify malware and reduce false positives. All threat intelligence is shared with customers, including HP Sure Click users, improving the experience for both employees and administrators.

Our growing technology collaboration with HP is a great endorsement for Bromium’s strategic technology leadership position and the growing market popularity and acceptance of our approach to endpoint security.

Learn more about our relationship with HP Inc.

Learn More About HP Sure Click Advanced

HP Sure Click Advanced will be available in more than 50 countries starting in April 2019, as part of the HP Device as a Service (DaaS) offering – a complete managed solution that combines hardware, security services, analytics, proactive management, and device lifecycle services.

Learn more about HP DaaS Proactive Security Service.

The post Bromium Secure Platform Powers HP Sure Click Advanced appeared first on Bromium.


Blog | Avast EN: Identify fake news and prevent its sharing | Avast

While the internet is a wonderful tool for expressing your opinion instantly, it's becoming increasingly apparent that people are quick to confuse opinions with hard, proven facts, thus spreading fake news. Many websites are fueled by clicks given as an almost blind reaction. Furthermore, have you ever shared a piece that you hadn't even entirely read? When you decide to share, or "like" a story, you should maybe think twice about what it's talking about and whether it's worth sharing. Our web is only as good as the content we put out and propagate.

Blog | Avast EN

liquid thoughts: View Switching

I am very concerned about how to extract metadata from academic PDFs in order to allow a semi-automated information flow for the user's citation documents. I think it would be foolhardy to build extraction systems myself, and what I have found online does not seem maintained or robust.

I therefore think I should focus my PhD on what I originally planned: the 'switching between views', more than the views themselves or data extraction. I will of course still focus on how to augment a student's ability to do their Literature Review, but as a deliberate act of the student adding citations to their document and more manually pointing out the connections which are useful to them.

I will support people and organisations, documents and the general category of concepts. There will be many issues but the key issue is how to fluidly switch between the basic word processing view and the dynamic view(s) which will be spawnable from different spark points (clicking on glossary terms in the document and pinch out etc.). The research question then becomes:

Based on the work in the field of symbol manipulation, hypertext and spatial hypertext in particular,
and novel interaction design and testing,
this project will investigate optimal ways for an academic user to change the view of their (primarily textual) documents to augment the process of doing a Literature Review,
with a clear design objective that the user should feel that the information is tangible and has a shape they can grasp,
in order to generate a richly connected knowledge space and then to linearise the multidimensional knowledge into a traditional academic thesis, which will support the student's aim of demonstrating to their thesis examiner that they have done a thorough job of examining the knowledge space, while preserving the full interactivity for the reader who wishes to access it.

Now that I have the basics of the dynamic view in Author I think I have a space to start experimenting and testing based on what I have learnt in my own literature review on how to most effectively manage the view shifting process.

three by the pool. Hegland, 2019.

It’s liquid. But not as we know it.

liquid thoughts

liquid thoughts: Les Meeting 13 March

Get ethics approval for focus group.

General Plan

WAIS presentation

PhD student Survey word

What questions do you ask when doing your work?
What would you like to see visually?
What are the problems with the tools you currently use?

Go through my two page paper for schedule.

“Word” (Millard, Gibbins, Michaelides, Weal, 2005)

For the lit review for me zippered list network diagram, what can help show the literature review space?

How can my work help the student get a clue if they have done a thorough job? How can that be judged?

liquid thoughts

E Hacking News – Latest Hacker News and IT Security News: Hacker Puts Up For Sale the Data of Six Companies, Totalling 26.42 Million User Records

Gnosticplayers, the hacker best known for putting more than 840 million user records up for sale last month, has made yet another appearance, returning with a fourth round of hacked data that he's selling on a dark web marketplace.

Since February 11, the hacker has put data from 32 companies up for sale in three rounds on Dream Market, a dark web marketplace. This time, Gnosticplayers is offering the data of six companies, totalling 26.42 million user records, for which he's asking 1.2431 bitcoin, approximately $4,940.

The difference between this Round 4 and the past three rounds is that five of the six databases Gnosticplayers put up for sale were obtained in hacks that occurred only a month ago, in February 2019. It is also worth noting that many of the companies whose data Gnosticplayers sold in the past three rounds have already confirmed breaches.

The six new companies targeted this time are game development platform GameSalad, Brazilian book store Estante Virtual, online task manager and scheduling apps Coubic and LifeBear, Indonesian e-commerce giant Bukalapak, and Indonesian student career site YouthManual.

"I got upset because I feel no one is learning," the hacker said in an online chat. "I just felt upset at this particular moment, because seeing this lack of security in 2019 is making me angry."

He says he put the data up for sale essentially because these companies had neglected to protect their passwords with strong hashing algorithms like bcrypt.

Only last month, the hacker said that he wanted to hack and put up for sale more than one billion records, then retire and vanish with the cash. But in a recent conversation, he said this is no longer his objective, as he discovered that various other hackers had already accomplished a similar objective before him.

Gnosticplayers likewise revealed that not all of the data he acquired from hacked companies has been put on sale. A few companies gave in to extortion demands and paid fees so that the breaches would stay private.

E Hacking News - Latest Hacker News and IT Security News

E Hacking News – Latest Hacker News and IT Security News: Most of the Antivirus Android Apps Ineffective and Unreliable

In a report published by AV-Comparatives, an Austrian antivirus testing company, it has been found out that the majority of anti-malware and antivirus applications for Android are untrustworthy and ineffective.

While surveying 250 antivirus applications for Android, the company discovered that only 80 of them detected more than 30% of the 2,000 harmful apps they were tested with. Moreover, a lot of them showed considerably high false alarm rates.

The detailed version of the report shows that the officials at AV-Comparatives selected 138 companies that provide anti-malware applications on Google Play. The list included some of the most well-known names, such as Google Play Protect, Falcon Security Lab, McAfee, Avast, AVG, Symantec, BitDefender, VSAR, DU Master, ESET, and various others.

ZDNet noted that the security researchers at AV-Comparatives resorted to manually testing all 250 apps chosen for the study instead of employing an emulator. The process of downloading and installing infectious apps on an Android device was repeated 2,000 times, leading the researchers to their conclusion: the majority of those applications are not reliable or effective at detecting malware.

However, the study conducted by AV-Comparatives also highlighted that some of the offered antivirus applications can potentially block malicious apps.

Because some vendors did not bother to add their own package names to their whitelists, the associated antivirus apps detected themselves as infectious. Meanwhile, some antivirus applications used wildcards to allow any package whose name starts with a prefix such as "com.adobe", which can easily be exploited by hackers to breach security.

On the safer side, Android is guarded by Google Play Protect, which provides protection from malicious apps by default. Despite that, some users opt for anti-malware apps from third-party app stores or other unknown sources, which puts the safety of their devices at risk.

The presence of malicious apps on Google Play has also been noticed in the past, and findings like these add to concerns about Android's safety as a mobile platform.

E Hacking News - Latest Hacker News and IT Security News

OpenDNS Umbrella Blog: Introducing Threat Busters: A Game of Threat Intelligence

We’ve been on a mission to protect the world from internet-based threats since the launch of our enterprise security product, Cisco Umbrella (formerly OpenDNS), in 2012. We talk a lot about what our product can do and the threats it’ll block you from, but we don’t talk enough about the research team that powers our product and how they do it.

Today, we’re changing that. Introducing Threat Busters: A new digital adventure where you can access our team’s latest security research and hunt down threats in a retro, underground cyberworld while you do it. If you’re feeling competitive, find as many “Easter eggs” as you can to boost your score and join our Leaderboard.

The site is live with content on malicious cryptomining, ransomware and phishing and the cyberattacks XBash, DanaBot and Roaming Mantis. We’ll continue to add new threat and attack content monthly, based on what we see happening in the security space.

Here's a sneak peek of what is live:

Threat Trend Graphs

With 16,000+ enterprise customers in over 160 countries, we have a unique view of corporate internet traffic. For both malicious cryptomining and phishing, we'll show you traffic by company size, vertical, and geography, as well as the overall traffic trend for December 2018 through February 2019. Above is a pie chart showing top phishing traffic by vertical for the same period. Traffic trend graphs for ransomware are coming soon.

How Cisco Umbrella blocks threats

It might be enough for you to know that Umbrella blocks these threats and attacks, but have you ever wondered how it’s actually done? For each threat and attack featured we’ll tell you how our team blocks the threat in question, from using open-source intelligence (OSINT) to algorithms and everything in-between.

We also include a list of Indicators of Compromise (IOCs) on the attack briefing pages. We do this so that any member of the information security community can use them to identify potentially malicious activity on their own systems or networks and improve early detection of future attack attempts using intrusion detection systems (IDS) and security information and event management (SIEM) systems.

What cyber attacks are roaming the internet?

We’ll handpick current attacks that we see roaming the internet and give you background on the threat, how Umbrella blocks it and illustrate how the attack works.

Cisco Umbrella & Talos Security Intelligence

Cisco Umbrella also benefits from the Talos Security Intelligence and Research Group. We leverage their threat intelligence to help detect, analyze, and protect against both known and emerging threats.

Take the first step to making your organization more secure.

Happy exploring!

The post Introducing Threat Busters: A Game of Threat Intelligence appeared first on OpenDNS Umbrella Blog.

OpenDNS Umbrella Blog

SolarWinds MSP Blog: The Ins and Outs of Security Awareness Training

One of your customers’ employees logs into their computer. They get an email from someone claiming to be their IT service provider, saying they must reset their password immediately (even though there wasn’t any warning beforehand). They click a link without checking the destination URL, go to a phishing site, and enter the credentials for their email. The criminal now has access to their email credentials and starts a spear-phishing campaign. 

Read More

SolarWinds MSP Blog

liquid thoughts: Glossary dialog for Flow

This is the same as the previous post but without all the discussion, only the implementation.

The glossary creation dialog in Liquid | Flow will work as before, with changes to the grey text and layout, as shown below, and with a large change to the way the user indicates a relationship to the previous term:

first part. Hegland, 2019.

Once the user chooses a Glossary Entry, an additional request appears below, asking the user to add a further relationship with the same two terms, but the other way around. The little [+] now only appears in case the user wants to add further relationships.

If feasible, this additional relationship will be posted to the other term (Liquid Information Environment, not Liquid | Flow), appended at the bottom of that term's entry.

second part. Hegland, 2019.

Dynamic View

This way the dynamic graph view can support showing both Liquid | Flow and Liquid Information in the centre:

flow in the centre. Hegland, 2019.

And once the user clicks on ‘Liquid Information’ it moves to the centre, showing relationships from its perspective:

liquid information in the centre. Hegland, 2019.

liquid thoughts

Bromium: Webinar: Social Media Platforms and the Cybercrime Economy

  • New “Social Media Platforms and the Cybercrime Economy” report explores the role of social media in enabling cybercrime
  • The author of the report, Dr. Mike McGuire, will discuss the key findings and recommendations in a live webinar on Wednesday, March 20 at 15:00 GMT | 11am EDT | 8am PDT
  • Download the report and join us for Wednesday’s webinar

Register Now: Download the report and you’ll automatically be enrolled in the webinar

Join us for a special Q&A webinar in which Dr. Michael McGuire, a researcher from the University of Surrey who specializes in criminality, will share the key findings of the next chapter in his landmark "Web of Profit" study. The new report, "Social Media Platforms and the Cybercrime Economy", explores the role of social media in facilitating cybercrime and other criminal activity, including money laundering, extortion, and drug sales.

This report is the result of a six-month-long academic study that looks deeply into the systems that support cybercrime, and specifically zeroes in on the role that social media platforms play in promoting the spread of malware and enabling other criminal operations.

Dr. McGuire found that cybercriminals earn over $3bn per year from social media-enabled activities, and that so far, individuals, organizations, social media companies, and law enforcement agencies have no clear strategies for stopping them or protecting their own private information, data, and assets from being targeted.

Join us on Wednesday, March 20 at 15:00 GMT | 11am EDT | 8am PDT to chat with Dr. McGuire as he discusses the key findings of this groundbreaking research.

Register Now: Download the report and you’ll automatically be enrolled in the webinar

The post Webinar: Social Media Platforms and the Cybercrime Economy appeared first on Bromium.


liquid thoughts: Showing relationships based on glossary

To define relationships for the graph view we have the problem of arrows, of direction. If one entry records, for example, that Liquid | Flow was inspired by Doug Engelbart, then the Doug Engelbart entry should ideally also record that he inspired Liquid | Flow.

When a node is in the centre of the view it should be able to link out to entries which are not listed in its own WordPress entry, but in the other entries.

There are two ways to tackle this. The other entry could automatically have the new relationship appended, but that would require semantic analysis to change ‘inspired’ to ‘inspired by’, or ‘works for’ to ‘is the boss of’, and so on. Therefore I think the solution needs to be one where the user enters the relationship in one node and then enters it in reverse for the other node. In this interface the user will enter ‘was developed by’ and then choose ‘Frode Hegland’ from the previous Glossary Entry popup:

glossary term for graph view. Hegland, 2019.

When the user clicks OK the dialog below is presented, asking the user to enter the reverse, which will then be appended to the other term’s WordPress entry:

reverse. Hegland, 2019.

This should then be able to support graph views. However, the problem in the graph view is then which version to choose: incoming or outgoing? Maybe let the user choose between the two options by clicking on the arrow to reverse its polarity and description?

Hegland, 2019.
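The double-entry scheme described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual Liquid | Flow implementation; all class and method names are my own assumptions. The key point is that because the user supplies both the forward and the reverse label explicitly, no semantic analysis (‘inspired’ → ‘inspired by’) is ever needed, and the graph view can show either direction.

```python
# Hypothetical sketch of the double-entry glossary relationship scheme.
# The user types both directions, so each term's entry ends up holding
# the relationship phrased from its own perspective.

class Glossary:
    def __init__(self):
        # term name -> list of (label, other term) tuples
        self.entries = {}

    def add_relationship(self, term, label, other, reverse_label):
        """Record the forward relationship on `term` and append the
        user-supplied reverse to `other`, as described in the post."""
        self.entries.setdefault(term, []).append((label, other))
        self.entries.setdefault(other, []).append((reverse_label, term))

    def outgoing(self, term):
        """Relationships shown when `term` is in the centre of the graph."""
        return self.entries.get(term, [])


g = Glossary()
g.add_relationship("Liquid | Flow", "was developed by", "Frode Hegland",
                   reverse_label="developed")

print(g.outgoing("Liquid | Flow"))   # [('was developed by', 'Frode Hegland')]
print(g.outgoing("Frode Hegland"))   # [('developed', 'Liquid | Flow')]
```

With both phrasings stored, the arrow-click interaction is just a matter of swapping which tuple is displayed.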

liquid thoughts

E Hacking News – Latest Hacker News and IT Security News: Beto O’Rourke Was A Former Hacking Group Member In His Teen Days!

Beto O’Rourke, who is better known for his bid for the Democratic presidential nomination, has been revealed to have been part of an eminent hacking group in his teen days.

Recently, in an interview for an upcoming book, O’Rourke confirmed that during his days in El Paso he was a member of a hacking group by the name of “Cult of the Dead Cow”.

His major activities while in the group included stealing long-distance phone service, participating stealthily in electronic discussions, and related offenses.

While in the group he also took to writing online essays under the pen name “Psychedelic Warlord”.

The essays ranged from fiction written from the perspective of a killer to a piece mocking a neo-Nazi.

According to the article, the ex-congressman is one of the most renowned former hackers in American politics.

The book goes by the name of “Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World.”

The book also includes the first-time naming of the group’s members, after they finally agreed to be identified.

There is neither evidence nor insinuation that O’Rourke took part in illegal hacking activities such as writing malicious code.

In the 1980s the group became known for hijacking other people’s machines, which was rather controversial.

With a past like this, the presidential candidate finds himself in a somewhat questionable light.

He was born into a prominent El Paso family, but he also played in a punk band before starting his small technology business and stepping into local politics.

O’Rourke’s national profile rose during his closely fought Senate campaign against Texas Republican Sen. Ted Cruz.

On the brighter side, O’Rourke’s involvement shows a profound degree of technological comprehension and a strong will for change.

E Hacking News - Latest Hacker News and IT Security News

liquid thoughts: What can you do with it?

Someone I know who is a very positive person but not at all very technical asked me a while ago about my Apple Watch: “What can you do with it?” And my first thought was “well, probably not very much for you because you don’t like to learn how to do things with tools”. I didn’t say this of course, I just mentioned a few of the main features. However, this got me thinking about something fundamental:

The pencil and paper, what does it do? This is a question more about the user’s skill level and use case. The properties of the pencil-and-paper medium are not that hard to describe; the power comes from skilled interaction.

The smartphone, what does it do? This depends very much on the user’s ability and interest in using the available apps, so the answer could be anything from just making calls to running your full digital life.

The point is that the capability = the tool + the user.

Personally, I have to try to answer the question of what a graph view of text does for an author. It’s not a simple question, since there are specific ‘affordances’ which need to be built into the system for certain things to be possible. When making a CGI movie it is often remarked how everything in the world has to be thought about and designed–there is no background or set to put the actors into. It is similar with games–in Battlefield I can easily blow up a building, but I can’t tie my shoelaces.

I read once (and cannot for the life of me remember where, but I suspect it was in Edge) that a lead developer of the successful Crysis game remarked that making the AI for the player’s adversaries work well was a matter of making sure that everything in the game knows what it is and what its characteristics are: a piece of wood needs to know what force is required to shatter it, and so on. This matters greatly, since a digital environment may look like a flat screen with colours, but that is only a two-dimensional slice of a multidimensional space of interactions. Even a paint program does not simply add colour to the screen based on the user’s mouse, trackpad or digital pen–it adds the marks based on the pen’s and the virtual paper’s characteristics.

In the graph view what is being moved around is text and lines, but the text represents specific text, since I have already taken it as a starting point that simply letting the user take any words from their document and move them around is not as useful as having different text carry semantically interactive characteristics.

The basic way to do this is to have some sort of list of which words are ‘special’, and a way for the user to visually state that other text is also special. By assigning text as a heading you are saying it has a special role–and I am referring to roles within the work of authoring a document, particularly an academic document–that of indicating a high-level view of the organisation of the document. Headings have been referred to as structural links, as though they do not have a semantic meaning, but they do have a semantic meaning: there is a reason why one chapter or section comes before another in the flow of a linearised argument. This is the very essence of headings: showing sections of a linear flow. To me, at this point, that should be respected, and therefore headings shown by themselves, collapsed into a table of contents or outline, should be editable in sequence through drag and drop, but should not appear in a graph view, since that defeats the function of headings. This can be disputed, but there it is for me for now. They can have value as markers in other views, but not for themselves.

Other text in a list. This refers to what I am working on with what was originally called the hyperGlossary and then Liquid Glossary, but which I think I will just call a glossary, though Chris might not agree. Anyway, it’s a list, but a list where each item has attributes (as in Crysis) to create an environment for useful interactions. Each glossary term can easily be linked to other terms to create explicit connections, allowing for construction:

And there we have it. I wrote the above sentence using the word ‘structure’ instead of the final ‘construction’, since I thought that ‘structure’ seemed a bit too final. So I used Liquid | Flow to look up ‘construction’ in Wikipedia–not useful–then the etymology, and then it became clear that what I wanted to say was that this allows for construction; it is not a structure, and this is the key.

The glossary as I am designing it now for Liquid Author’s dynamic view has these types:

  • Document for anything the user cites
  • Authors/people in general
  • Institutions of people
  • Concepts for anything else

This is for the use case of a student, of course; the last item, ‘concepts’, is quite general, but users will be able to type in anything they choose. Likely ‘document’ will be auto-assigned when the user downloads an academic document with the Liquid Browser.

There is probably room for improving this list, particularly outside of the initial use case, but categorising is useful for filtering views and for doing basic citation analysis, for example. Naming things is a big issue. Confucius is said to have said: “If I were the ruler, the first thing I would do would be to make sure everything is named correctly”: “If names be not correct, language is not in accordance with the truth of things.”


Walking in the early, dark morning of Singapore to find a toilet, this Starbucks does not have one and it’s the only 24 hour one. Lots of police cars outside Orchard Towers. I hope they are only there for the unruly. Anyway, to ION and back and with a new perspective.


Human language does not allow for one correct label for one thing. For the purposes of this work, though, I will remove ‘institutions’ from the basic list and have buttons for ‘Document’ and ‘Person’, with freeform entry for anything else, which will automatically be put under the meta-tag of ‘Concept’. This should serve literature reviews well, since documents are a core unit and are addressable items, and persons are out-of-system but the reason for the documents. Anything else can be labelled should the user want to, from idea to building, but they will… Stop, this does not really make sense. Let’s start again:

Buttons for Document (they carry citation information) and Person. Anything else will be typed in, but recent items will stay available for clicking on, using recollection to encourage the reuse of the same tags. Here it is mocked up:

basic types. Hegland, 2019.

This means that we can support nice citation flows through documents, their authors and any associated concepts as well as let the user add any terms they know.
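The two-buttons-plus-freeform scheme could be modelled roughly as below. This is a hedged sketch of the idea only; the class and field names are my own assumptions, not Liquid Author’s actual data model.

```python
# Rough sketch of the basic glossary types: Document and Person get
# dedicated buttons, anything else becomes a freeform Concept, and
# recently used freeform tags stay available to encourage reuse.

class GlossaryTerm:
    BUTTON_TYPES = ("Document", "Person")  # Document carries citation info

    def __init__(self, name, term_type="Concept"):
        if term_type not in self.BUTTON_TYPES:
            term_type = "Concept"  # freeform entries fall under Concept
        self.name = name
        self.term_type = term_type


class TermEntryDialog:
    """Tracks recent freeform tags so the user can re-click them."""

    def __init__(self, max_recent=5):
        self.recent_tags = []
        self.max_recent = max_recent

    def create_term(self, name, term_type="Concept"):
        term = GlossaryTerm(name, term_type)
        if term.term_type == "Concept" and name not in self.recent_tags:
            self.recent_tags.insert(0, name)        # most recent first
            del self.recent_tags[self.max_recent:]  # keep the list short
        return term


dialog = TermEntryDialog()
doc = dialog.create_term("Web of Profit", "Document")
idea = dialog.create_term("hypertext")
print(doc.term_type, idea.term_type, dialog.recent_tags)
# Document Concept ['hypertext']
```

Keeping a short recency list, rather than a full tag taxonomy, matches the stated goal of nudging the user towards consistent tags without forcing a vocabulary on them.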

So what can you do with it? Anything you like in terms of visualisation, we hope, over time. Initially, though, we are focused on supporting citation views and concept views to help the student ‘map out’ their understanding of a knowledge space and communicate this understanding to other readers, particularly the examiners of their thesis.


Headings are for linearising. Glossary terms are for enabling the construction of relationships. And this is how we will focus the development of the Liquid Views.

liquid thoughts

liquid thoughts: Conversations with Edgar

I started recording answers to expected questions from Edgar in the far, distant future. I worked on an app idea for this a few years ago and there are all kinds of wrinkles, but I think I should just record some answer videos and then see if I actually do enough and if they seem useful. They are on a public but unlisted playlist on YouTube:

What prompted this is that I am in Singapore without Edgar or Emily, revisiting pasts and thinking about futures. Right now we only have FaceTime video, so why not give him the potential to have ‘facetime’ chats with the dad who loves him so much, in the future?


liquid thoughts

liquid thoughts: the dream. update

(Semi-personal diary note/update, written to a friend)

As you know, my dream is to build a thinking and presentation space which really starts to set the written word free from the constraints of the past, free for rich interaction and deep visualisation. I really believe that this has serious potential to augment our ability to interact with knowledge in powerful ways.

I have just spent another $10,000 on Author, though, and that is of course not sustainable. Most of this went to support import and export of Word, which is only partly done; we can’t yet export links, headings or images. Ugh, Microsoft has made this very hard…

The Dynamic View

I am looking at adding a major feature to Author, which would also be part of my PhD (yes, I’m hanging in there, barely): We already have the ability to pinch in to collapse the document into an outline and now it will be possible to pinch out to ‘explode’ it into a ‘dynamic’ view.

This is the view we have been thinking about, discussing and to an extent testing in collaboration with Chris as the Webleau/Liquid Space: and which I made video demo tests for way back when: and again today, to show an early transition from the word processing view into the dynamic view. This is live software BTW, not a mockup:

I have written a list of basic capabilities here:

Glossary Terms

The crucial difference between the initial dynamic view tests and what I am looking at now is that it is not the headings which become the nodes, but the terms which the user has defined in their personal glossary, as either a concept, a person, an institution or a document. These terms have been defined using Liquid | Flow, which makes it easy to create a structured entry which then has relationships in it. I put up a few screenshots here for you: and I have of course blogged about the process on the same blog in general:

Promoting Author

I am not standing completely still with promoting Author, though I desperately need support:

In order to promote Author I am adding the Rich PDF export, so that a full, original copy of the Author document is embedded into the PDF. This means that anyone with only a PDF reader can read it, but if it is opened in Author you get all the interactions of a full Author document. I hope this will make Author documents more ‘viral’, since they will make it clear that opening the document in Author gives the reader a better experience.

I also think it would be a great way to send the proposal document to Apple, with a top sentence in the ‘PDF’ saying ‘Please open this document in Author’.

I am also sponsoring (only £1,000) a JATS conference on the 20th of May. JATS is the up-and-coming academic document format we are working on supporting:


So what to do now? The goal is to get Apple to consistently feature Author on the App Store, which should generate enough revenue to make it self-sustaining. I need $5,000 for the next few months to add JATS and rich PDF, fix a myriad of smaller issues like making import and export work with all attributes, and most importantly add the Dynamic View feature.

Dynamic Views

I hope to have something more impressive and real to share with you for the Dynamic Views over the next few days…

liquid thoughts

Blog | Avast EN: Gearbest Data Breach Puts Millions at Risk | Avast

White hat hackers scanning the web for system holes and data leaks stumbled upon an unsecured ElasticSearch server containing millions of Gearbest customer records. Gearbest is an Amazon-style e-commerce site with a focus on tech and Chinese brands. It ships to over 250 countries and publishes 18 subdomains in different languages. Under parent company Globalegrow, Gearbest is a billion-dollar business, but while its privacy policy states that the company encrypts any and all customer info it retains, the unsecured server found online proves that this is not true. Hundreds of thousands of customers are putting themselves at risk daily, adding their info to the growing repository of customer data accumulating for anyone to access.

Blog | Avast EN