
How to build a Kali HVM AMI From Scratch


INTRO

At Blackfin Security, we use Kali Linux extensively across multiple projects, and we also use Amazon EC2. However, as of this writing, the official Kali AMIs from Offensive Security are a few revisions behind the latest release and only support paravirtual (PV) virtualization. We wanted to run Kali inside Amazon VPCs, which requires an AMI built with HVM support.

Our approach was to use the latest official Debian AMI with HVM support and convert it into a working Kali AMI. We are happy with the result and hope that documenting our steps might be useful to the community.

GETTING STARTED

You’ll need to have a working Packer install as well as an AWS account to make use of this. If you do, then all you need to do is update a few variables with your AWS credentials and run ‘packer build kali.json’. Sit back and wait a bit and then you’ll have a fully updated Kali HVM AMI.

You may use our Unofficial Kali AMI here – ami-c45a71ac (debian2kali-1427320319), but you use it at your own risk.

That said, you’ll most likely want to build your own. We’re starting with an official Debian Wheezy AMI. You can see the whole list here.

REVERSE ENGINEERING THE KALI BUILD PROCESS

Since Offensive Security already offers an old paravirtual Kali AMI, we started by looking at that code to see what steps they took. Luckily a lot of it was unnecessary, since it involved making the VM AWS compliant – something we didn’t need to worry about because we’re starting from a Debian AMI that is already AWS compliant.

Next, we looked at the kali-linux-preseed repo to see what configurations they set when they build a fresh installation.

Between those two repos and the Kali Custom Image forums we’re pretty confident that we’ve successfully ‘ported’ AMI creation over to Packer.

The short answer we came up with is pretty simple: Debian + Kali Repos + Kali Kernel = Kali.

We’ll go into a bit more detail below, documenting our steps; you can follow along with the code here if you’re interested: https://github.com/ctarwater/packer-debian2kali-ec2

PACKER / AWS SETTINGS

Mandatory Settings

Packer requires a few settings in order to successfully build an AMI in AWS – these can be found in the kali.json file here.

These should all be pretty self-explanatory, but we’ll cover them quickly just in case. Your access/secret keys are required for authentication so Packer can access the AWS API. The security group id controls network access to the temporary build instance – Packer needs to be able to reach it over SSH. The subnet id is required for HVM machines since they’re used with VPCs.

  • aws_access_key
  • aws_secret_key
  • security_group_id
  • subnet_id (required for HVM only)
  • instance_type
  • region
  • source_ami

Build Settings

Next you have the VM settings, or “block_device_mappings”. These specify the “hardware” of your target AMI and shouldn’t be changed unless you know what you’re doing. Basically we’re telling it to use a 20GB gp2 (SSD) volume mounted at /dev/xvda, and that if the instance is terminated the volume should be deleted as well.

  • “volume_size”: 20,
  • “volume_type”: “gp2”,
  • “device_name”: “/dev/xvda”,
  • “delete_on_termination”: “true”
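Put together, the variable and builder settings above land in kali.json roughly like this. This is a sketch with placeholder values, not the exact file from the repo, and key names (for example, the block device mapping key) can vary slightly between Packer versions:

```json
{
  "variables": {
    "aws_access_key": "YOUR-ACCESS-KEY",
    "aws_secret_key": "YOUR-SECRET-KEY",
    "security_group_id": "sg-xxxxxxxx",
    "subnet_id": "subnet-xxxxxxxx"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "m3.medium",
    "security_group_id": "{{user `security_group_id`}}",
    "subnet_id": "{{user `subnet_id`}}",
    "ssh_username": "admin",
    "ami_name": "debian2kali-{{timestamp}}",
    "ami_block_device_mappings": [{
      "device_name": "/dev/xvda",
      "volume_size": 20,
      "volume_type": "gp2",
      "delete_on_termination": "true"
    }]
  }]
}
```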

KALI BASE

Now we get to the main part – installing Kali with the ‘base.sh’ script.

First, it overwrites the default Debian ‘sources.list’ file with Kali’s repos, then it installs Kali’s gpg key and keyring.

Next it pre-configures some software settings so that we can skip any setup during install and fully automate the process. These configs are taken directly from the kali-preseed repo.

Then it installs the default Kali metapackages as well as the full Kali Linux suite and GNOME desktop. The resulting packages are exactly what you’d find on the official Kali ISO.

Finally, it runs a dist-upgrade to make sure that we end up with the most recent version of Kali and its packages.
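A minimal sketch of those steps – the real base.sh lives in the repo linked above; the repo URLs, key ID, and preseed line here are illustrative of the Kali 1.x era rather than copied from it:

```shell
#!/bin/sh
# Hedged sketch of the base.sh provisioning flow (Kali 1.x on a Debian
# Wheezy base; run as root on the build instance).
set -e

point_apt_at_kali() {
  # Overwrite Debian's sources.list with Kali's repos
  cat > /etc/apt/sources.list <<'EOF'
deb http://http.kali.org/kali kali main non-free contrib
deb http://security.kali.org/kali-security kali/updates main contrib non-free
EOF
  # Trust the Kali archive signing key, then install the keyring package
  apt-key adv --keyserver hkp://keys.gnupg.net --recv-keys ED444FF07D8D0BF6
  apt-get update
  apt-get -y install kali-archive-keyring
}

preseed_and_install() {
  # Pre-answer debconf prompts so the install is fully unattended
  # (illustrative line; the real answers come from the kali-linux-preseed repo)
  echo "samba-common samba-common/dhcp boolean false" | debconf-set-selections
  export DEBIAN_FRONTEND=noninteractive
  # Metapackages + desktop: the same package set as the official Kali ISO
  apt-get -y install kali-linux-full kali-defaults gnome-core desktop-base
  # Finish with a dist-upgrade so everything is current
  apt-get -y dist-upgrade
}

# On a real build these run in order:
# point_apt_at_kali
# preseed_and_install
```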

KALI GRUB

The one place where we know we’re not 100% identical to a default Kali install is right here: Kali uses the GRUB 2 bootloader by default, but our source AMI uses Syslinux.

What this script does is install Grub2 alongside Syslinux so everyone is happy.
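The step boils down to something like this – a hedged sketch rather than the actual script from the repo, with /dev/xvda matching the block device mapping used earlier:

```shell
#!/bin/sh
# Hedged sketch of the GRUB step: install GRUB 2 alongside the Syslinux
# setup the Debian AMI ships with. Run as root on the build instance.
set -e

install_grub2() {
  export DEBIAN_FRONTEND=noninteractive
  apt-get -y install grub-pc    # GRUB 2 packages
  grub-install /dev/xvda        # write GRUB to the boot device
  update-grub                   # generate /boot/grub/grub.cfg
}

# On a real build:
# install_grub2
```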

CLEANUP

Finally, we remove any SSH keys, clear out all of our logs, and clear our bash history – all as suggested by Amazon if you plan on creating a public AMI.
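Amazon’s public-AMI guidance boils down to just a few commands; here’s a hedged sketch (the real cleanup script is in the repo – the TARGET parameter is our addition so the script can be dry-run against a scratch directory):

```shell
#!/bin/sh
# Hedged sketch of the cleanup step, following Amazon's guidance for
# public AMIs: drop SSH keys, empty the logs, wipe shell history.
# TARGET defaults to / on a real build; point it at a scratch directory
# to dry-run safely.
TARGET="${TARGET:-}"

cleanup() {
  # Remove SSH host keys and any authorized_keys files
  rm -f "$TARGET"/etc/ssh/ssh_host_*
  rm -f "$TARGET"/root/.ssh/authorized_keys "$TARGET"/home/*/.ssh/authorized_keys
  # Truncate every log file rather than deleting the files themselves
  find "$TARGET"/var/log -type f -exec sh -c ': > "$1"' _ {} \; 2>/dev/null
  # Wipe shell history
  rm -f "$TARGET"/root/.bash_history "$TARGET"/home/*/.bash_history
}

# On a real build: cleanup
```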

CONCLUSION

While the process we followed is straightforward and we believe it is complete, please open an issue in the Github repo if you notice anything we’ve overlooked. If this is useful to you, please let us know in the comments. Thanks!

Docker Swarm with TLS authentication

This article is part 2 of a series, you can read part 1 here – Getting Started with Docker Swarm

My last post discussed a quick-and-dirty guide to getting started with Docker Swarm. Today I’m covering how to set up TLS authentication between Docker, Swarm, and you.

The big thing to remember is that Docker Swarm requires all of the nodes to be running a Docker daemon bound to a TCP port. Since I was testing on AWS, that meant everything was available on the internet and anyone who knew my IP:Port could send Docker commands to my machine. Not cool.

To get around this we’ll set up TLS/SSL and require all parties involved to use TLS authentication.

Please note these instructions are still only for getting a dev environment for testing/playing with and definitely aren’t recommended for production. TLS/SSL is a huge topic, one I’m mostly glossing over for the sake of just ‘making it work’ for this article. You’ve been warned.

Up until about a week ago Docker Swarm required you to use subjectAltName (SAN) IPs in your certificates which made things a bit more of a hassle than normal. Luckily, thanks to a recent update to the Swarm code that’s no longer the case as long as you use hostnames instead of IPs – which is what we’re going to be doing here.

Hostname Resolution

Swarm Host
Now that we’re using hostnames we need to add them to our /etc/hosts file. Open the /etc/hosts file on your Swarm host and add:
xx.xx.xx.xx node1
xx.xx.xx.xx node2

Where xx.xx.xx.xx are replaced by the respective IPs of your Docker nodes.

Test connectivity*: ping node1 -c 3

Local
Add the hostname of your Swarm host to your local /etc/hosts file.
xx.xx.xx.xx swarm
Where xx.xx.xx.xx is replaced by the IP of your Swarm host.

Test connectivity*: ping swarm -c 3

*I’m not covering firewall settings here so I’m assuming that your machines can talk to each other and that ICMP isn’t blocked.

Basic Steps

Here’s a quick rundown of what we’re going to be doing:

  • Create a minimal openssl.cnf file with required settings
  • Create a Certificate Authority keypair
  • Use the CA to create a keypair for your Swarm Host
  • Use the CA to create a keypair for your Nodes
  • Use the CA to create a keypair for your local machine
  • Transfer the keys
  • Configure Docker Nodes/Swarm/Local to use TLS
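The three keypair sections below all repeat the same pattern: generate a private key, make a CSR whose CN is the hostname, and sign it with the CA. A small helper function (hypothetical – not part of any repo mentioned here) captures that pattern:

```shell
#!/bin/sh
# Hypothetical helper wrapping the per-host keypair pattern used below.
# Assumes ca.pem, CAkey.pem, and openssl.cnf already exist in the
# working directory (created in the CA step).
set -e

make_keypair() {
  name="$1"   # file prefix, e.g. "swarm" or "node01"
  cn="$2"     # hostname baked into the cert, e.g. "swarm" or "node1"
  # 1. private key
  openssl genrsa -out "${name}KEY.pem" 2048
  # 2. CSR with the hostname as CN
  openssl req -subj "/CN=${cn}" -new -key "${name}KEY.pem" -out "${name}.csr"
  # 3. sign with the CA, applying the v3_req extensions
  openssl x509 -req -days 3650 -in "${name}.csr" -CA ca.pem -CAkey CAkey.pem \
    -CAcreateserial -out "${name}CRT.pem" -extensions v3_req -extfile openssl.cnf
  # strip any passphrase (a no-op for keys generated this way)
  openssl rsa -in "${name}KEY.pem" -out "${name}KEY.pem"
}
```

Usage would be `make_keypair swarm swarm`, `make_keypair node01 node1`, and so on; the individual commands are also spelled out step by step below.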

Create an openssl.cnf file

My file is listed below:

[ req ]
default_bits = 4096
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
default_md = sha1
string_mask = nombstr
req_extensions = v3_req
prompt = no

[req_distinguished_name]
countryName = US
stateOrProvinceName = NY
localityName = Transmetropolitan
organizationalUnitName = Filthy Assistants

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth,serverAuth
subjectKeyIdentifier = hash

[ v3_ca ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
basicConstraints = CA:true

[ crl_ext ]
authorityKeyIdentifier=keyid:always

Create a Certificate Authority keypair

  1. Create our private CA Key
    openssl genrsa -out CAkey.pem 2048
  2. Use the private key to sign our certificate
    openssl req -config openssl.cnf -new -key CAkey.pem -x509 -days 3650 -out ca.pem

Use the CA to create a keypair for your Swarm Host

  1. Create our private Swarm key
    openssl genrsa -out swarmKEY.pem 2048
  2. Create our Certificate Signing Request (CSR)
    openssl req -subj "/CN=swarm" -new -key swarmKEY.pem -out swarm.csr
  3. Sign our certificate
    openssl x509 -req -days 3650 -in swarm.csr -CA ca.pem -CAkey CAkey.pem -CAcreateserial -out swarmCRT.pem -extensions v3_req -extfile openssl.cnf
    openssl rsa -in swarmKEY.pem -out swarmKEY.pem

Use the CA to create a keypair for your Nodes

NODE1

  1. Create your private key
    openssl genrsa -out node01KEY.pem 2048
  2. Create your Certificate Signing Request (CSR)
    openssl req -subj "/CN=node1" -new -key node01KEY.pem -out node01.csr
  3. Sign your certificate
    openssl x509 -req -days 3650 -in node01.csr -CA ca.pem -CAkey CAkey.pem -CAcreateserial -out node01CRT.pem -extensions v3_req -extfile openssl.cnf
    openssl rsa -in node01KEY.pem -out node01KEY.pem

NODE2

  1. Create your private key
    openssl genrsa -out node02KEY.pem 2048
  2. Create your Certificate Signing Request (CSR)
    openssl req -subj "/CN=node2" -new -key node02KEY.pem -out node02.csr
  3. Sign your certificate
    openssl x509 -req -days 3650 -in node02.csr -CA ca.pem -CAkey CAkey.pem -CAcreateserial -out node02CRT.pem -extensions v3_req -extfile openssl.cnf
    openssl rsa -in node02KEY.pem -out node02KEY.pem

Use the CA to create a keypair for your local machine

  1. Create your private key
    openssl genrsa -out localKEY.pem 2048
  2. Create your CSR (Make sure to change ‘CN=HOSTNAME’ to the hostname of your local machine that will be sending commands to the Swarm host)
    openssl req -subj "/CN=HOSTNAME" -new -key localKEY.pem -out local.csr
  3. Sign your certificate
    openssl x509 -req -days 3650 -in local.csr -CA ca.pem -CAkey CAkey.pem -CAcreateserial -out localCRT.pem -extensions v3_req -extfile openssl.cnf
    openssl rsa -in localKEY.pem -out localKEY.pem

Transfer the keys

I used SCP to move all of the keys to their respective machines.

Swarm Host
ca.pem, swarmCRT.pem, swarmKEY.pem, all moved to ~/.ssh on the Swarm Host
scp /path/to/ca.pem user@xx.xx.xxx.xxx:~/.ssh/ca.pem
etc.

Node1
ca.pem, node01CRT.pem, node01KEY.pem, all moved to ~/.ssh on Node1
scp /path/to/ca.pem user@xx.xx.xxx.xxx:~/.ssh/ca.pem
etc.

Node2
ca.pem, node02CRT.pem, node02KEY.pem, all moved to ~/.ssh on Node2
scp /path/to/ca.pem user@xx.xx.xxx.xxx:~/.ssh/ca.pem
etc.

Local
ca.pem, localCRT.pem, localKEY.pem, all moved to ~/.docker on my laptop (where I generated everything to begin with)
cp /path/to/ca.pem ~/.docker/ca.pem
etc.

Configure Docker Nodes to use TLS

At this point you’ll need to make a few adjustments to make sure your Docker nodes are using TLS authentication.

Nodes
Update your Docker daemon settings to use TLS. I’m using Project Atomic Fedora AMIs so my file is at /etc/sysconfig/docker.
# vi /etc/sysconfig/docker
Add the following to your options:
OPTIONS='--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/node0xCRT.pem --tlskey=/path/to/node0xKEY.pem -H 0.0.0.0:2376 -H fd://'

Make sure you point to the correct pathname for each of the certs on your box. The other thing to note is that we’re no longer using port 2375, since Docker uses port 2376 for TLS.

Reload and restart Docker or just restart your machines.

Putting it all together

Docker Nodes
Make sure your Docker nodes are up and running with TLS authentication.

Swarm Host
Create your swarm (make sure to note the Swarm token):
~/gocode/bin/swarm create

Add your nodes to the swarm – making sure to use the hostname ‘node1’ or ‘node2’ instead of the IP in the --addr flag:
~/gocode/bin/swarm join --addr=node1:2376 token://YOUR-SWARM-TOKEN
~/gocode/bin/swarm join --addr=node2:2376 token://YOUR-SWARM-TOKEN

Start the Swarm daemon like this:
~/gocode/bin/swarm -debug manage --tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/Swarm-cert.pem --tlskey=/path/to/Swarm-key.pem --host=0.0.0.0:2376 token://YOUR-SWARM-TOKEN

Make sure you point to the correct pathname for each of the certs on your box, also note that we’re using port 2376 here as well. You should probably replace ‘YOUR-SWARM-TOKEN’ with the actual Swarm token you’re using.

Local
Send a command to the Swarm host, making sure you point to the correct pathname for each of the certs on your box.

docker --tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/local-cert.pem --tlskey=/path/to/local-key.pem -H swarm:2376 run blackfinsecurity/tha-kali

The above command should launch a container on one of your hosts. Pretty cool.

Now to test it one last time, try to send the command without the TLS stuff and watch it get rejected:
docker -H swarm:2376 run blackfinsecurity/tha-kali

Which should give you the following error:

FATA[0000] Post http://xx.xx.xxx.xxx:2376/v1.16/containers/create: malformed HTTP response "\x15\x03\x01\x00\x02\x02\x16". Are you trying to connect to a TLS-enabled daemon without TLS?

Getting Started With Docker Swarm


I’ve recently had the chance to play around with Docker’s new clustering tool – Docker Swarm, and I like it. Since the tool is still in pre-alpha, things are changing quickly and it can be a bit difficult to figure out how to get started, so I’m sharing my notes for anyone else interested in playing with Docker Swarm. Please note that while this is meant to get someone up-and-running, it is by no means a deep dive into the capabilities of Swarm, and I highly recommend that you study their documentation at https://github.com/docker/swarm.

DEFINITIONS

First I’m going to quickly define a couple of terms that I make use of in this article.

  • Docker Node – a machine running the Docker daemon
  • Swarm Host – a machine running the Swarm daemon
  • Swarm – a series of Docker nodes

WHAT IS DOCKER SWARM?

Docker Swarm is a native clustering system for Docker. It allows you to define your cluster (swarm) and create/control Docker images and containers throughout the cluster via the Docker Swarm daemon.

WHY USE DOCKER SWARM?

Docker Swarm is a relatively simple tool for optimizing your container workloads across your cluster while using the standard Docker commands you’re already familiar with.

OK, HOW DO WE USE DOCKER SWARM?

Basic Infrastructure
I’m sure there are many ways to set this up; here’s how I did it: 1 Swarm Host and 2 Docker Nodes.

SWARM HOST
The only requirement for a Swarm host is that it has Docker Swarm installed.

I started with a t2.micro instance on AWS running Ubuntu 14.04. From there you’ll need to install Docker Swarm, and since it’s still pre-alpha they’re not offering binaries yet.

Install Docker Swarm
$ sudo apt-get install golang git
$ mkdir ~/gocode; export GOPATH=~/gocode
$ cd ~/gocode
$ go get -u github.com/docker/swarm

Test Docker Swarm
$ ~/gocode/bin/swarm --help

DOCKER NODE
Any node in the swarm requires Docker to be installed, with the Docker daemon running and bound to a TCP socket – this is how the Swarm Host communicates with the nodes.

For my project I started with two t2.micro Project Atomic Fedora AMI instances; you can use any node you’d like so long as the following steps are taken.

Install Docker
Docker is changing very quickly; I highly recommend installing the latest version directly from them.

Bind the Docker daemon to a tcp port
You can manually add the `-H` flag every time you run the daemon, or you can edit your Docker settings so that it runs with it by default; I chose the latter:

# vi /etc/sysconfig/docker
Then add `-H 0.0.0.0:2375` to OPTIONS

Reload the daemon config so the new settings take effect, then restart Docker
# systemctl daemon-reload
# systemctl restart docker

If you don’t want to edit your default Docker behavior then just make sure that you start your Docker daemon on every node with the following flag and options: `-H 0.0.0.0:2375`
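On the Project Atomic images, the resulting file ends up looking something like this (illustrative – the path and the stock options vary by distro; the part that matters is the `-H 0.0.0.0:2375` binding):

```shell
# /etc/sysconfig/docker (illustrative -- stock options differ by distro;
# the important part is binding the daemon to TCP port 2375)
OPTIONS='--selinux-enabled -H fd:// -H 0.0.0.0:2375'
```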

Putting it all Together

Now that you have the infrastructure in place it’s time to put it all together.

All of the following commands will be issued from the Swarm Host.

Create your swarm
$ ~/gocode/bin/swarm create

This command will spit out your ‘swarm_id’ that looks something like this: `2a0e3b11ce210ae859a141b473abdd34`. Don’t lose it, this is the token used to identify your swarm.

Add nodes to your swarm
$ ~/gocode/bin/swarm join --addr=<ip-of-node>:2375 token://<swarm_id>

Issue the above command for every node you want to add to the swarm, changing the IP accordingly but using the same swarm_id.

Note that this command does not currently return, so once the IP has been added to the swarm you can Ctrl-C out. The join command does not need to keep running for the node to stay joined.

List the nodes in your swarm
$ ~/gocode/bin/swarm list token://<swarm_id>

Start the Swarm manager (daemon)
$ ~/gocode/bin/swarm -debug manage --host=0.0.0.0:2375 token://<swarm_id>

I like running with the -debug flag for more info.

Once this command is running the Swarm manager daemon will continue to listen for incoming Docker commands to execute on your swarm. It needs to stay running for Swarm to work.

The MAGIC of Docker Swarm
Now that everything is in place, let’s see what we can do with it.

Log into a new machine, any machine, that has Docker installed and is able to access the IP of your Swarm Host.

Launch a Docker container
$ docker -H <ip-of-swarm-host>:2375 run blackfinsecurity/tha-kali

This command tells Docker to connect to the Swarm host on port 2375; the Swarm daemon then relays a standard docker run command.

Since Swarm is listening it will receive the command, do some calculations about the status of your nodes in the swarm, and then issue the docker run command on the node. That’s right, you can now issue standard Docker commands via your Swarm host and they will be issued throughout your swarm.

FINAL THOUGHTS

Swarm offers a decent amount of power in controlling how it places containers within your swarm. For further information on how it works and the settings available you should check out their documentation on scheduler filters and scheduler strategies.

Another thing to keep in mind is that if you followed this guide, you now have 3 AWS instances that are visible to the internet, and anyone with their IP and port can issue Docker commands to them. Needless to say that’s insecure, and I wouldn’t recommend leaving them running when you’re not using them unless you’ve taken steps to secure them.

Continue reading part 2 of this series – Docker Swarm with TLS authentication

Blackfin Security Included in Ravello Case Study

Blackfin has been using Ravello as our cloud and virtualization partner for just over a year, and we chose to do so because they were able to handle our requirements for launching our very dynamic virtual training spaces and threat simulation environments. The majority of this collaboration is seen in our technical training solution, The Hacker Academy. Recently, we worked with them on developing a case study outlining how exactly we utilize their services. They just published the results of this study which goes into depth on how the labs are put into production and the various needs of our organization. Many Hacker Academy members are currently using the interactive labs, but may not realize how great the technology behind them really is.

We’ve highlighted some key components of the case study below, but it can be found in its entirety here: http://www.ravellosystems.com/blog/training-labs-aws-google-blackfin/

Case Study Highlights

Blackfin Security offers self-paced online security training through ‘The Hacker Academy’ and cyber warfare threat simulations as a service. Blackfin uses Ravello to handle its lab environments for Hacker Academy students as well as to host immersive threat simulation events.

By directly leveraging Ravello’s API, Blackfin has been able to provide a much richer user experience to its students, and scale on-demand as Ravello runs on AWS and Google Cloud (Tier 1 cloud providers).

With Ravello, Blackfin has seen an increase in customer engagement, measured by an increase in on-demand lab usage as well as an increase in average training session duration. Further, it has reduced the cost of training delivery: provisioning time to import and publish updated training VMs into live service fell by 83%, and Blackfin saved over 75% in monthly costs relative to the boutique cloud providers.

“Ravello saves us over 75% in monthly costs relative to the boutique cloud providers.” – Brad Geesaman, CTO, Blackfin Security

When we provide live, on-demand security training environments to our students, we need them to operate identically to a datacenter environment. By integrating with Ravello, we gained access to the scalable compute resources of the public cloud, software defined networking, direct VMware importing of systems, and predictable, usage-based pricing. We no longer have to worry about the logistics and costs of bursting to handle large capacity events. Ravello is an ideal fit for us.

The Hacker Academy Lab & Immersive Threat Simulation Environments

The Hacker Academy’s online virtual labs typically comprise 2-4 virtual machines (totaling 4 vCPUs and 8GB RAM), with labs for the advanced courses needing more resources. The labs consist of a Kali Linux system (a Debian-based Linux distribution that is specifically designed for digital forensics and penetration testing) and one or more purpose-built systems designed and configured for each lab objective. Students can quickly launch their own private lab environment and get right to practicing their web, server, and network attacks.

In order to handle Immersive Threat Simulation events ranging from 10 to 1,500+ participants, Blackfin Security uses Ravello’s API to programmatically generate comprehensive distributed environments in minutes. The environments consist of 10 to 350 VMs running a range of operating systems, VPNs, firewalls, and intrusion detection systems – with load distributed to ensure the optimal user experience. By leveraging Ravello’s ability to run environments in multiple geographic locations, Blackfin can bring the event even closer to the participants around the world.


Ravello – A Perfect Match for Blackfin Security’s Requirements

When Ravello was launched as a public beta, Blackfin Security tried it and found it to be an ideal match for its requirements. Over time, Blackfin has transitioned all of its virtual lab environments to Ravello. Here is how Ravello delivered on Blackfin’s requirements:


No CapEx Investments
Ravello’s solution allowed Blackfin Security to deploy their virtual training labs on Google Cloud and AWS, eliminating the need to build their own Data Center.

Scale On-Demand
With Ravello, Blackfin was able to spin up as many environments as needed to absorb peak loads. Since Ravello runs on AWS and Google Cloud (Tier 1 cloud providers), there are never any capacity shortages, quota limits, or overage concerns. Additionally, with the application blueprint feature, Blackfin was able to take a snapshot of each virtual lab environment and clone it to deploy new instances of the environment through Ravello’s API as needed.

Zero Change Deployment
Ravello’s high-performance nested hypervisor (HVX) and Software Defined Networking (SDN) ensured that VMware VMs in Blackfin’s environment could run on Google Cloud and AWS without needing any modifications.

System Development Fidelity
Ravello’s SDN & Blueprint features ensured that Blackfin’s production training lab mirrored the course developer’s local VMware environment – there was no loss in fidelity (same configuration, setup, networking and storage).
Launch Environments On-Demand
With Ravello’s API support, Blackfin was able to integrate the creation of new virtual training lab instances at a student’s click, and the shutdown of instances once the training objectives were completed.

Consistent User Experience
With Ravello’s API integration into Blackfin’s environment, Blackfin was able to keep the user on Blackfin’s portal until the virtual lab was up and running. Furthermore, Blackfin was able to provide a richer user experience through a progress-bar that indicated the time left before virtual training lab was deployed and available for use.

Usage-based Costs
Ravello is a Software as a Service (SaaS) offering, and Blackfin only gets charged based on actual service usage.

Results with Ravello

Blackfin Security has benefited in several ways since moving to Ravello for its virtual training labs. The tight integration between Hacker Academy and Ravello through APIs has led to a richer user experience for students, resulting in a higher level of student engagement. This is evidenced by an increase in the number of virtual lab launches, and also an increase in average session duration across its user-base.

Deployment of virtual training labs using Ravello has also reduced operational overhead for Blackfin Security. Before deploying on Ravello, Blackfin would spend 6+ hours on average to ‘pack’ and ‘publish’ a new virtual training lab. By switching to Ravello, they have been able to reduce it to less than an hour. Further, cloning through blueprints has made it easier for Blackfin to make incremental changes to the virtual labs and deploy them to production.

Finally, with Ravello’s usage based pricing, Blackfin has saved over 75% relative to boutique cloud providers while increasing their ability to scale as needed with consistent pricing. In some cases, because their previous cloud providers charged for fixed capacity and overage rates, Ravello saved them up to 90%.

Encouraged by this success, Blackfin has expanded its immersive threat simulation offering to also be available on Ravello, as Ravello can handle running both small and very large, highly complex, on-demand threat scenarios, while offering the same benefits as it did to The Hacker Academy individual training lab environments.

Blackfin to Participate in Cyber Safety Twitter Chat with Digital Citizens Alliance

A new survey of Americans’ online security habits shows large numbers of Americans are putting their devices and personal information at risk. With October marking National Cybersecurity Awareness Month, join the Digital Citizens Alliance and Blackfin Security for an engaging Twitter chat on October 24th. We’ll be discussing how Americans can stay safer and more secure online. Join us, share your thoughts and questions, and get answers from security experts in real time.

Some of the topics discussed will include best security practices for:

  • Social Media
  • Passwords
  • Wireless Connections
  • Downloading Movies and Music
  • Email
  • Resources for Additional Information

Is there a topic of particular interest to you or something you would like to chat about? Let us know.

We look forward to hearing from you on October 24th!

Time: 1pm-2pm EDT

Follow: @4SaferInternet, @getblackfin and use hashtag #CyberSafeChat.


Blackfin and Core Security Partner to Make Core Impact® Pro Training and Certification Available Online

The Core Impact Certified Professional (CICP) program, which helps customers maximize the value of Core Security’s penetration testing tool, is now being offered as an on-demand course.

Boston – October 22, 2014 – Core Security®, a leading provider of attack intelligence solutions, today announced its Core Impact Certified Professional (CICP) program is now available online and on-demand through its partner, The Hacker Academy. Previously offered exclusively as a two day, in person course, this addition to Core Security’s training options is designed to ensure customers who cannot accommodate that structure can still maximize the value of Core Impact® Pro. Like the original program, this online option combines in-depth product training with guidance on planning, conducting and reporting on highly targeted penetration tests.

Over the course of 16 hours, this self-paced program enables participants to conduct hands-on penetration testing against lab environments specifically designed to reinforce and deepen the understanding of principles introduced in the curriculum. Upon passing a competency examination, attendees receive a certificate confirming they are a Core Impact Certified Professional. By completing CICP training, Certified Information Systems Security Professionals (CISSPs) also receive credits toward maintaining their certificates, recognized by (ISC)².

“We have always received excellent feedback on our in-person training course, but as our customer base became more globally dispersed, we found it necessary to create a flexible and budget-conscious alternative to the traditional program,” said Michael Hurley, Senior Manager, Account Management for Core Security. “We want to ensure customers are able to utilize 100 percent of what Core Impact Pro has to offer, regardless of what cost or travel restrictions they might face.”

“Collaboration in our industry is vital,” said Aaron Cohen, Chief Operating Officer and Co-Founder of The Hacker Academy, a part of Blackfin Security. “This partnership with Core Security allows us to offer a top notch product certification course, utilizing our world class online platform; a perfect match.”

Core Security is offering an Early Adopter discount (25 percent off of the list price) for those who enroll before the end of 2014. The registration page and additional information about the course can be found here: https://hackeracademy.com/masterclass/core-cicp-promo.

About Core Security:

Core Security provides the industry’s first comprehensive attack intelligence platform. With Core Security, enterprises can focus on the most likely threats to their critical business assets by modeling, simulating and testing what an actual attacker would do. Core Security helps more than 1,400 customers worldwide identify the most vulnerable areas of their IT environments to improve the effectiveness of remediation efforts and ultimately secure the business. Our patented, proven, award-winning enterprise products and solutions are backed by more than 15 years of applied expertise from Core Labs research and Core Security Consulting Services. For more information, visit www.coresecurity.com.

About The Hacker Academy:

A part of Blackfin Security, The Hacker Academy is the platform used for the development, delivery and distribution of information security training. Our easy-to-use and robust platform allows administrators to easily manage and deploy information security training content, as well as track and report on user progress, within one fully hosted cloud environment. The Hacker Academy delivers third party content and certifications, as well as technical security training, in an interactive, on-demand environment where participants are able to practice concepts as they progress through real world simulations.  Our training platform and content has been used by organizations around the world since 2005.

Poll: High Number of Americans Leave Themselves Vulnerable to Hackers

A new survey of Americans’ personal online security habits shows large numbers of Americans are putting their devices and personal information at risk.

The Zogby Analytics poll, commissioned by the Digital Citizens Alliance and Blackfin Security, shows that Americans open their devices up to unknown entities, download files of unknown origin at high rates, and even ignore best practices when they know they should do otherwise.

As part of National Cyber Security Awareness Month, the Digital Citizens Alliance and Blackfin Security have created a Personal Threat Assessment – a series of ten questions that Americans can administer themselves to see if they are taking some basic steps to ensure their devices and data are both secure. To take the quiz yourself, go to this page on the Digital Citizens Alliance website: http://www.digitalcitizensalliance.org/cac/alliance/content.aspx?page=cyberquiz

“The hackings of Home Depot, Target, and other large retailers may be lulling Americans into thinking that it’s big corporations that are rogue operators’ prime targets, but that’s a mistake,” said Adam Benson, Deputy Executive Director of the Digital Citizens Alliance. “Hackers want personal data – credit card numbers, passwords, social security numbers. They’ll look for open windows – and the online behavior we see reflected in this survey tells us that millions of Americans are leaving the windows open, the doors unlocked, and even giving some hackers the key to get in.”

“There may not be a lot that we as individuals can do to stop the next data breach of a large corporation, but there is definitely room for improvement in how we handle our personal data,” explains Josh Larsen, co-founder and CEO of Blackfin Security. “As we become more connected in nearly every facet of our lives, we have to take more precautions and be aware that nearly every connection we make online presents an opportunity for opportunistic cyber criminals to take advantage.”

Some of the major findings from the poll include:

  • Nearly one-third of Americans don’t change their passwords enough – going as long as a year without updating them;
  • More than one-third use public WiFi that doesn’t require a password sometimes or even “always”;
  • 16 percent said that using two-factor authentication (which requires the user to have two types of credentials before being able to access an account) makes signing on too much of a burden, while another 23 percent didn’t know what two-factor authentication is;
  • 62 percent said they didn’t always check or weren’t sure if their downloaded movies, music, games, or books were legally authorized. Previous Digital Citizens Alliance research has shown that pirated content is a widely used delivery mechanism for malware;
  • More than 35 percent of all Americans like/follow/connect with people they barely know or don’t know on social media. While that can often be with a celebrity or influential figure, in some cases people might be connecting with someone more interested in their habits than in their safety.
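The two-factor authentication mentioned in the findings above typically pairs a password with a one-time code from the user's phone. As a rough illustration (not part of the survey, and simplified relative to a production implementation), the standard TOTP scheme from RFC 6238 can be sketched in a few lines of Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1, 30 s steps)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The service and the user's authenticator app share the secret; only
# someone holding the enrolled device can produce the current code,
# which is what makes it a second factor on top of the password.
```

Because the code changes every 30 seconds, a stolen password alone is no longer enough to log in, which is the precaution the poll found so few respondents taking.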

Blackfin’s Marketing Manager, Megan Horner, was not surprised by some of the results. “The 16% of individuals who are indicating that enabling two-factor authentication creates a burden are validating the constant battle between convenience and security.” She adds that, “It’s our job as a security industry to inform people how simple taking these precautions can be, and of the benefits they’ll see in the long run.”

Benson added: “These numbers show that Americans need to think a bit more about what they do online and how they go about doing it. Our quiz includes both the poll numbers and short videos from Blackfin that provide explanations to users about why they should take extra steps. Security isn’t easy. Hackers have hit the wealthy, the powerful, and the brilliant. There isn’t a magic bullet, but small steps could deter hackers from hitting you and your computer. This quiz is designed just to get people thinking about what more they can do.”


Blackfin Security CEO featured on FOX 5

In light of National Cyber Security Awareness Month, Blackfin Security Group’s CEO, Josh Larsen, was featured this morning on FOX 5, Washington D.C.’s local station.

“Between the massive cyber-attacks on stores like Home Depot and Target and the recent celebrity nude photo hacks, a lot of people are second-guessing their own cyber security. Adam Benson, deputy executive director of the Digital Citizens Alliance, and Josh Larsen, co-founder and CEO of Blackfin Security joined us with more.”

http://www.myfoxdc.com/Clip/10663721/cybersecurity-how-safe-are-you


Take your “Personal Threat Assessment” here: http://www.digitalcitizensalliance.org/cac/alliance/content.aspx?page=cyberquiz

Information Security Training Company, Blackfin Security Group, Launches an Integrated Platform to Help Reduce Phishing Attacks in the Workplace

Blackfin Security Group, innovator in the information security training space, has announced the release of a new anti-phishing and training platform that empowers organizations to deliver effective and engaging security training to their employees. The anti-phishing platform builds on Blackfin’s existing portfolio of security training products to provide a robust and integrated solution for the enterprise.

RESTON, Va., Sept. 24, 2014 — Blackfin Security Group, a security training company, was formed after a recent spin-off from security integrator MAD Security. Blackfin Security Group was created to bring a higher level of focus to the development and release of security training products and services. Building on established enterprise security solutions in the areas of security training and threat simulation, Blackfin continues to enhance its unique and effective platform.

“Focus is critical when it comes to meeting the needs of our customers,” says Blackfin Security Group co-founder and COO Aaron Cohen. “The threat landscape is changing constantly. Security training needs to stay ahead of the curve so we can combat attackers at every vector. The release of the Blackfin Anti-Phishing Platform gives our customers another powerful tool to do this effectively.”

Blackfin solutions are purpose-built around the concept of creating a more secure enterprise through training activities involving employees at all levels. The Hacker Academy, Blackfin’s technical training platform, has grown over the last eight years into a premier resource for security professionals to access on-demand content and gain hands-on technical experience through lab environments and threat simulation activities. Blackfin’s custom developed Security Awareness Training introduced an engaging and effective medium to train employees across the enterprise based on role and job function. The newest addition, the Blackfin Anti-Phishing Platform, will provide a similar level of simulation training for end users by mimicking a real-life phishing attack against an organization’s employees.

“Too often, security training in the enterprise is relegated to nothing more than a compliance exercise,” explains Josh Larsen, Blackfin Security Group co-founder and CEO. “The Blackfin Security Platform gives organizations a cost effective way to create a systemic and ongoing security education campaign that allows training and assessment activities to be delivered when they will have the greatest impact as opposed to just once a year.”

The Blackfin Anti-Phishing Platform is the only solution that gives organizations a dedicated, privately hosted environment to manage their security training initiatives. Unlike traditional SaaS solutions, Blackfin customers enjoy increased data privacy by not sharing an application environment with other customers. Other features of Blackfin’s Anti-Phishing Platform include:

  • Unlimited Assessments
  • Unlimited Users
  • Multiple Assessment Types
  • Automated Incident Reporting
  • Rate Limiting and Scheduling
  • Integrated User Training
  • Automated Follow-on Training
  • Detailed Reporting Features

About Blackfin Security Group

Blackfin Security Group builds enterprise security solutions that show real ROI. Organizations must take a holistic approach to securing the enterprise through securing technology, process and people. When it comes to securing the human element, Blackfin is here to help, enabling organizations to effectively educate and train their employees through an Anti-Phishing Platform, Role-Based Security Awareness Training, and On-Demand Technical Security Training. Blackfin’s customers include both commercial and government organizations in numerous industries. For more information, visit http://www.blackfin.co

Media Contact: Megan Horner, Blackfin Security Group, 717-451-3598, marketing@blackfin.co

Blackfin Security Co-Founders to Speak at ISSA International Conference

ISSAwebconfbanner2014

Blackfin Security Co-Founders Aaron Cohen and Josh Larsen will both be speaking at next month’s ISSA International Conference in sunny Orlando, Florida.

The 2014 ISSA Conference’s self-proclaimed goal is to include “solution-oriented, proactive and innovative sessions focused on cybersecurity as a vital part of today’s businesses.”

Panel Details

The Future of Information Security Awareness

In the last year the effectiveness of information security awareness has been the subject of vigorous debate. In this panel, leading experts will discuss the causes for dissatisfaction with historical awareness techniques and how awareness has evolved in the last decade. Topics such as metrics, surrogate outcomes and the latest research will all be discussed.

Panelists
Aaron Cohen: COO, Blackfin Security
Ira Winkler: President, Information Systems Security Association

Moderator
Kelley Archer: Director Information Security, AIMIA Inc.

Date: October 23rd
Time: 3:40 – 4:30 pm
Location: Nutcracker Room 1

Accounting for the Humans

Panelists
Josh Larsen: CEO, Blackfin Security
Robert Ivey: Chief Technology Officer, GCA Technology Services

Moderator
Samantha Manke: Executive Vice President, Secure Mentem

Date: October 22nd
Time: 2:10 – 3:00 pm
Location: Fantasia Room A & B

Drop us a line!

Need real ROI out of your security training efforts? Contact us to see how we can help.