

With computers it’s always a step by step process. There is no magic. There is no cheating a way around something. Everything is a process. Figure out the process and get the data you wish.

Computers and the applications they run are meant to take input and produce output. That’s their job. Even with fancy names like AI or smart computing, everything they do is input produces output. Even errors. Especially errors.

Take a challenge at Hack the Box I was working on today.

The goal of the challenge is to get the info from a web page and submit it – and do it really fast. Faster than I could do the math myself. Faster than I could run quick algorithms in my head. I’m learning that’s a pretty normal thing for a capture the flag or hacking challenge.

Doing it that fast means I can’t do it myself. I need to script it. Also, being that it’s still live, I can’t really say what I did (even if only 3 people read this write up), but I can walk through my process – because a process will get the answer.

First up

What data is it looking for? Look at the description and then the web page. Make a few deductions.


What am I going to use to script this? Me? Python. It’s quick, dirty, and down to business. There are modules to connect to the web site and to do what the challenge wants me to do to complete the task.

Now I said earlier there is a process to everything. That is true. That doesn’t mean there is only one way to do it though. I am choosing Python because I know it and I know how to use it to get what I want. There are a myriad of other scripting languages that can do this too, which other people are more versed in – and they can probably do it quicker and cleaner than I can, but this is my drug of choice.

Next Next

Is there a pattern here that can help me out? What is it? How do I produce it and reproduce it?

Next Next Next

Where do they want the data once I get it? Cool. Cool. Cool. I see where they want it.

Next Next Next Next

Put it all together. This is the messy, test-everything, put-in-a-lot-of-comments, make-changes, and remember-to-close-parentheses portion of the program.

It’s also the part where I say, hey – this should work. Why isn’t it working? They did something wrong. But I know they didn’t do something wrong. It was me who forgot to run the data transformation function. Finally I got the session info in there and Voila! – the result I was looking for.
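Because the challenge is still live I’ll keep the details out, but a skeleton of that fetch-solve-submit loop might look something like this. The URL, the "answer" form field, and the simple addition pattern are all hypothetical stand-ins, and only the standard library is used:

```python
# Hedged sketch of the fetch-solve-submit loop. The URL, the "answer"
# form field, and the a + b pattern are hypothetical stand-ins.
import re
import urllib.parse
import urllib.request
import http.cookiejar

def solve(page_text):
    """Pull a simple 'a + b' expression out of the page and compute it."""
    match = re.search(r"(\d+)\s*\+\s*(\d+)", page_text)
    if not match:
        raise ValueError("no expression found on the page")
    return int(match.group(1)) + int(match.group(2))

def run(url="https://example.com/challenge"):
    # A cookie-aware opener keeps the same session between the GET and the
    # POST -- the session info I forgot the first time around.
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    page = opener.open(url).read().decode()
    answer = solve(page)
    data = urllib.parse.urlencode({"answer": answer}).encode()
    return opener.open(url, data=data).read().decode()
```

The cookie jar is the important bit: it carries the session between the request that shows the data and the request that submits the answer.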

Plug it in and get a challenge owned.

Process found. Process followed. Process complete.

Now, plumbing, electricity, and how cars work… that’s magic!

I was trolling around on the internet and what do I come across? A poll! Thanks for the inspiration for the post, Spencer.

A Poll

What did I answer? Other (the check mark should have given that away).

Why did I answer it that way?

A long time ago in another life I was an elementary school teacher. I heard this old tale of what happened in Japanese schools. The tale goes that once a teacher becomes a principal, every ten years they go back to being a teacher for a year. It’s done to ground the principal, bring them back to their roots.

This is not an Undercover Boss scenario where they are given a fake mustache and made to eat lunch with those who work the hardest for the company for a day. The principal goes back to teaching, works in the classroom for the year, gets paid as a teacher, (gets the health benefits of a teacher?). This is meant to keep the principal’s perspective fresh on what those who report to her work with every day and to help the principal make the hard decisions.

Years on I don’t know if that tale is true. True or not I do like the idea behind it. We all should be reminded of what it takes to do what got us to where we are.

When I was a ‘senior’ network engineer I didn’t want to talk to end users. That was kinda why I worked my tail off to get the goods to do that. I wanted to work on the complex problems, not the day to day user problems. I did learn that I too needed to talk to the user to see where the real problem is. At my level it may not be where the user is really having the problem. It was – and still is – humbling to learn that I may not be seeing the problem from the perspective of the person having it. Would I want to go back to the hours/pay of first tier tech support at that time? Honestly, probably not (except I would be able to leave it all behind at the end of the day), but that doesn’t mean it wouldn’t be good for me.

So, why did I answer that the CISO should report to the person with boots on the ground?

Same reason. The CISO should jump back in, not in the nerdy fun Mr. Robot way.

The C*O should get the perspective of the workers they are responsible for. They should see the work that is done over time, and what works and what doesn’t. It’s not just about the process that can be fixed, which is really important. It’s also about the people doing the work. What can really be done? What is really working well? What they really need is people who aren’t going to just smile and tell them how great their ideas are, but who see what is really happening in the corporation/institution.

It’s only a year. It could be a vacation from what the C*O does, right?

Or – BrrCon 2019 – what I got out of it

TL;DR – a lot for a one day conference.

Long version: Well, that’s why I have this blog.

The community and the atmosphere of BrrCon is one that enticed me to look for more conferences last year. Last year was my first year attending security conferences, and the we-can-do-it attitude, the we-need-to-fix-things attitude, and the let’s-help-each-other vibe really got me into the community.

Now, I know the security world is not all roses – but like all other worlds it is also about who I surround myself with. I found some talks that looked interesting and will talk about them in a minute, but first the opening keynote.

This was the second year that I saw Dave Kennedy speak. Forgive me. I don’t remember if he was the opening speaker last year or not. His is the first of the three talks I will discuss – and the ‘rockstar’ (i.e. the one I had heard of before the talks).

When he spoke, he spoke of how things are getting better because his job of breaking things/into things is getting harder. He spoke of how things are not all right yet, but we can work to make them better by working together. There was no blaming X or Y. There was only “this is what we can do to help,” which I think is a fantastic idea – and hearing it from someone who, want it or not, has influence and status is great to see.

This was the second time I remember hearing him say that the industry has its problems, needs to own them, and work them out. I don’t hear that much – and I appreciate hearing it and seeing it taken into account, encouraging people to be the best they can be by lifting each other up, not tearing others down.

Next is a woman who I started to follow on Twitter maybe 20 minutes into her talk. Yolonda Smith hit us with an excellent presentation on working with others.

Are you starting to see a theme here?

The talk was called Empathy for the (Devel)oper: Lessons Learned Building An Application Security Module and the abstract can be read here. I hope the slides go up soon. Working with our DevOps team on a project has brought me back to being a sys admin when someone comes and says ‘Hey this needs to be this way’. The presentation highlighted how words matter – and how taking the time to talk can usually cut out a lot of misunderstanding and BS.

My kids like to read and be read to. Words matter, and taking the time to talk is a really big theme in the stories for young ones. We should revisit that more often as adults, I think.

Did I mention the presentation was also themed around The Good Place? Pretty funny show that my wife and I enjoy – and that I just read is in its final season.

I walked in as the last presentation I am writing about was starting, because Gabe and I were talking about something we may submit for a presentation. Leo Bastidas had a talk at 2:30 on Host Hunting on a Budget. This was one of the technical talks I went to.

He was excited. He wanted to share. He wanted others to know about the cool things he was learning about. Everything I feel giddy about when I find a cool tool. Everything I like about going to a conference.

I got to attend an awesome one day conference. I saw one of the first people who got me excited about this journey speak again. I got to hear from a new (to me) person who really sparked my ideas about what I/we can do when I get to the office on Monday to make things a little better for all. And I got to see a new speaker with a great passion – one I hope to be able to display if I end up talking to a group of eager folks.

Thanks BrrCon for putting together another awesome conference bridging tech and people.

Tenable IO Scan Scripts v3 – Get ’em here.

First things first, I need to learn how to properly use GitHub to version my stuff. This being the first project I am working on where someone outside of me may use it, I need to get better at this. I committed 3/4 of my repo with the same commit message.

My comments…

The message though is a small step to a larger plan. I have added two new additions to the repo.

I have added a few canned filters in the ioFiles/ folder. We’ve needed to find out why some scans aren’t running all plugins, so a list of those was created. Why not share that and others? The files are broken up by the type and what is in them. More will be added.

Scans can now call the plugin family to get data.

This is where the small things create big changes. Our biggest headache is that there is no 3rd party that can robustly take in compliance data from TenableIO (including Tenable’s own Security Center) so we need to pull it and parse it.

To add to that, a fun project we are going to need to embark on is to write our own audit files to check for middleware since IO won’t be accepting custom .nasl files anytime soon.

This call will get all compliance data from a scan (which includes custom audit files):

python -scan ScanNametoSearch -o csv -q pluginfamily -f ioFiles/pluginfamilyCompliance

We can now work to incorporate the Nessus Parser into the scripts, or port the functionality from Perl to python.
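For the Perl-to-Python route, a first step might be a minimal standard-library parser over the .nessus XML (ReportHost/ReportItem elements). This is only a sketch of the idea, not the Nessus Parser’s actual feature set:

```python
# A minimal stdlib sketch of parsing a .nessus (v2 XML) export:
# walk ReportHost elements and collect each ReportItem's attributes.
import xml.etree.ElementTree as ET

def parse_nessus(xml_text):
    """Return (host, pluginID, pluginName, severity) tuples from a .nessus file."""
    root = ET.fromstring(xml_text)
    findings = []
    for host in root.iter("ReportHost"):
        name = host.get("name")
        for item in host.iter("ReportItem"):
            findings.append((name,
                             item.get("pluginID"),
                             item.get("pluginName"),
                             int(item.get("severity", "0"))))
    return findings

# Tiny inline example in the .nessus v2 shape:
sample = ('<NessusClientData_v2><Report name="demo">'
          '<ReportHost name="web01">'
          '<ReportItem port="0" severity="0" pluginID="19506" '
          'pluginName="Nessus Scan Information"/>'
          '</ReportHost></Report></NessusClientData_v2>')
```

From tuples like these it is a short hop to CSV, a database, or whatever the remediation team wants.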

That, plus many possibilities are now available from a few small changes. Now, to learn to use GitHub a bit better.

This year I was able to attend Edge, the User’s Conference for Tenable, the product I use most at work. I didn’t really know what to expect – and to be honest, I hid from all vendor sponsored sales talks. I think this may be why I had such a great time. It may be why I learned so much in my three days in Atlanta.

I got some real good technical information from the talks, but more than that I got excellent people information. This post is tech lite (okay, non-existent).

Team Mates

First up, my team mates. When you work 100% remote it’s kinda exciting and scary to meet the people you work with face to face. Spending time with them, I got to learn about them more as people, as well as have long conversations to work through the technical issues we all work on – together. There really is a lot to be said for face to face time.


Face time. It’s important. I understand it costs money and every vendor is there to make money, but the face time with people is important and can save that time that so many people think they are saving with emails and support tickets.

We sat in a room for an hour. The vendors and a few of my teammates. We hashed out issues, talked about moving forward, and then when we saw each other later – we talked about it more. We talked about expectations – from all sides and it may just be the after show glow, but it seems a lot can be done when sitting down with someone, or getting on the phone (like we finally got support to do while I was at the airport).

We have a relationship moving forward.


I am an introvert with verbal diarrhea when people start asking me questions. I am very aware of it, and work to curb the flow because I become more and more uncomfortable as I speak.

But while there I had to meet people. I had to meet those who are doing what we are doing. People were there from far and wide, from different industries, and I was able to learn a lot from them. The people in my home/work town and I will be getting together to work on issues we all come across, and also to come to the vendor as a united front on things that are needed to make our lives easier.

I was able to meet the developers and talk. It’s an important thing to be able to do. Speaking with them, hearing about their decisions, and sharing what’s happening sets up a lot of understanding. It leaves a lot to think about.

The Airport

Finally the conference ended. There were a lot of Minnesota goodbyes. I then got lucky. I got to the airport with a lot of time to spare. I walked all the way through to my gate, checking out the art at the Atlanta airport. I was hungry. I made the decision to eat sushi in an airport and I was not disappointed.

The sushi was good, but the real cool thing was meeting up with Krista. See, I had just spent three days with one of our brilliant techs, who happens to be a woman in our field – and she is really needed here. At the end of the conference we spoke about leadership and being a woman in our field/organization. Me, being a white male can’t ever relate to what she will experience, but that doesn’t mean I can’t advocate.

Krista is/was having a much tougher time in the field than the tech I spent the week with. This was where I put my stuff to the side – my uncomfortable feelings when someone is telling me things and I want to talk to fix it. Instead I listened (something my wife told me is a good thing). I listened until she asked me a question. I only asked her one question at the end: “How can I make the new woman on my team feel welcome, and not like you are feeling?”

She told me to listen. She told me to acknowledge the work that’s done. She told me not to try to intimidate, but to reason.

I wish her the best of luck in her new adventures. I think what I learned in the end from all of the experiences I had at the conference, was we all really need to listen, acknowledge the work when it’s done, and reason with people – not intimidate.

I wonder if things will be the same when at Defcon?

One of the things I like about Linux tools is many of them don’t try to be the kitchen sink. They do one thing and do it well.

Me, as a human I stink at multi-tasking. Many people say they can do it. Because my world revolves around me and I can’t, I don’t believe them.

Today I again proved to myself that I should do one thing at a time and focus on all the steps, otherwise we break production. Lucky for me, production was this web site, and not a production system at my job (we’ve all borked production at the job, right?).

It started simple enough – with an email.

Time to test that the job to update my certs is working. Follow directions for a dry run.

sudo certbot renew --dry-run

After a few errors, I remembered that some things need to be changed with CloudFlare.

I had to turn on Development Mode under Caching and my errors went away. CloudFlare is great. I just need to remember to read the documentation.
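Flipping Development Mode by hand works, but it could also be scripted against CloudFlare’s v4 API. This is a sketch that only builds the request; the zone ID and token are placeholders you would fill from your account:

```python
# Sketch: toggling CloudFlare Development Mode through the v4 API instead
# of clicking the dashboard. The zone ID and API token are placeholders;
# the endpoint path follows CloudFlare's zone-settings API.
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def dev_mode_request(zone_id, token, on=True):
    """Build (but don't send) the PATCH request that flips Development Mode."""
    body = json.dumps({"value": "on" if on else "off"}).encode()
    return urllib.request.Request(
        f"{API}/zones/{zone_id}/settings/development_mode",
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# urllib.request.urlopen(dev_mode_request("my-zone-id", "my-token")) would send it.
```

Wrapped around the certbot run, that would mean one less thing for future me to remember to read the documentation about.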

After that I thought it would be good to write a short blog post on it (a bit longer than above).

Logging into WordPress I see that an update is available.

The squirrel ran by and I decided to pick this up instead of writing about how I worked through the issues of updating my certificates.

I can’t run the auto update for this due to my server configuration (I’m not letting FTP through on my secret lair of a server). That’s okay. I can read the instructions on how to do it manually.

To be fair, it does not say ‘backup your files’. But really, I’ve been doing this long enough that I should back up my files. Of course I didn’t back up my files.

I copied over the right files. I did what I was supposed to do (except backing up my files). I still got a 500 Error when looking at the site.

What did I do wrong? Did I mess up permissions because I used FTP when I usually copy and mv over?

I checked the permissions, updated them, and still a 500. I tried the tried and true – re-upload the files after deleting the wrong ones. Still a 500.

Did I check the error logs? No. Why am I forgetting to do all these things I should normally do?

I check the error logs:

logs logs logs

What do I see here? An uncaught error:

Uncaught Error: Call to undefined function wp_recovery_mode()

Google being my friend (with people asking about the same error 5 days ago) shows the best bet right now is to downgrade back to 5.1 – and to back up next time I do my upgrade.

My friend Google

I shall wait for a not so nice day out to do that.

Ah, spring is here

Time for version 2…

The ch ch changes:

Some changes are big. There are now 3 main scripts and I have scrapped the need to download separately. This should make setting up the two switch-based scripts as a cron job or scheduled task easier. This was first done when I made the interactive script. I moved it to the switch ones once I saw it worked well. Time helps make things better (I hope).

Some changes are small. Spelling, grammar, and consistency between scripts have been checked and updated.

Each file has a ReadMe in the docs/ folder. Below is a brief overview.

Search the scans by answering some questions. This one is good for one off reports. We all get them don’t we? Check it out.

Use the optional switches to queue up and download what you need. This was originally just the search, until the download was rolled into it. Check it out below.

So, you really want to download a lot of data and have the bandwidth and time to do it? This one is for you. It’s a modified search and download where you provide the scan and output type and it will get the Critical, High, Medium, and Scan Info data for you. I did not provide a walk through because it would just take a really long time.

In my last post(Consistency {code and APIs}) I was working out how to get data that was available into a tool I was working on. Working with the team at the vendor we were able to push some API improvements to make it all work out.

I’m happy to say I was able to put together a scripted tool that can be used in house at my job, as well as by anyone else who is using Tenable IO.

Many people use a free Nessus scanner to check for vulnerabilities. Many companies use Tenable’s Security Center on premises, and like all things, it’s moving to the cloud with Tenable IO.

Moving to a new platform brings about challenges. But, that is what I am here for, the challenges. The new platform is not as mature as the ones it is based off of. Data is robust in the new platform and pulling it into a data management tool has been… let’s stick with the word challenging.

How do things work?

Scans run on IO. Data is there. Someone needs to see it to act on it.

This should be very simple. Nothing is simple. Let the fun begin!

What is the goal?

Give the remediation team the data they need to get to work!

What do they need most to get this done?

For us it’s: hostname, pluginid, vulnerability plugin names, risk factor, and compliance names. Your mileage may vary, and since the tool is free to download and use, you can update it for your needs.

Let’s see this in action now (Hello World!).

Head on over to my TenableIO github repo to get some searching on. Clone the repo. and are the scripts we are using. Fill in environment variables noted in the ReadMe and we are good to go.

A good test search is pluginid 19506 because that returns results on the scan itself.

python3 -scan "Scan Name" -o csv -q pluginid -d 19506

Now pull down the results. Depending on the amount of data being asked for the report that IO writes can take some time, so I broke this out.


We can now see that IO has put together a handy spreadsheet of data for us to review, hand off, or do something else with.
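From there, a few lines of stdlib Python can turn that spreadsheet into per-host counts for the remediation team. The column names ("Host", "Risk") are assumptions about the export layout, not guaranteed headers:

```python
# Hedged sketch: group the exported CSV's findings per host so the
# remediation team gets counts instead of a raw dump. The "Host" and
# "Risk" column names are assumptions about the export layout.
import csv
import io
from collections import Counter

def risk_counts_per_host(csv_text):
    """Return {host: Counter({risk: count})} from an exported scan CSV."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts.setdefault(row["Host"], Counter())[row["Risk"]] += 1
    return counts
```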

Update — added interactive functionality.

Not everyone likes to remember switches, so I added an interactive option that works like a question/answer to get your searches on. Downloading of the data is also included in the script.

Expanding on the simple search

The -search option does not need to be one scan specific. If you have five scans with the name Vulnerability in it, a -search Vulnerability will provide results for all five of the scans.

As I noted above, we can search for the following as a one-shot, or as a list of each from a text file, creating a much more robust report.

  • plugin id
  • plugin name
  • hostname
  • risk factor
  • Compliance name*

*Vulnerability and Compliance data is stored differently so searching on a plugin name will not give you a compliance result. See Consistency {code and APIs}.

Data can also be written up for download in the native .nessus format for import into any other tool.

And finally, because it’s out on GitHub for anyone to use, fork, fix – a user is not stuck only searching for what I say. The dictionary of plugins to search is there to update as needed. Just choose what is important to your team from the documentation and add as needed.
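The text-file side of that can be as simple as one term per line. This sketch assumes a format with optional # comments and blank lines; the real ioFiles/ layout may differ:

```python
# Sketch of the text-file-to-search step: one term per line, with blank
# lines and # comments skipped. The file format and the "pluginid" field
# name are assumptions for illustration, not the repo's exact layout.
def load_terms(text):
    """Parse an ioFiles/-style list into clean search terms."""
    terms = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            terms.append(line)
    return terms

def build_filters(field, terms):
    """Pair each term with its query field, ready for the search call."""
    return [(field, term) for term in terms]
```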

Don’t want to deal with all these fancy switches and just need to download scan data that needs attention on a schedule? I have one for you too. is what you are looking for. Queue these up in a batch job and the data is yours when you want it.

%python3 --scan "scan name" --type nessus or csv

I’m hoping these scripts help others since many have written tools that help me.

I plan to keep the whole repo updated as I work more and more with IO and need to get data/repeat tasks.

Comments, questions, fixes, and pretzels are always welcome!

My job is to get people consistent data that they can rely on to make decisions that they tell me cost a lot of money. More precisely I (and so many others) do my job by hacking together solutions that vendors promise and rarely deliver in the glossy sales decks.

Over the past year I have spent a lot of time working with an external API. I’ve learned the beauty of being able to send calls and get data consistently. I’ve learned the limitations on my skills and work to improve them.

Consistency is important when working with an API. When the API says it will do x, it’s pretty important that it doesn’t do y. That’s wholly different data. When the API is the only way to get the data – because there are no built-in functions for a user to get the data, or to import it – I need to make the tool to do that.

That’s cool. That’s my job. That’s what I like to do. I like to hack things together to work. I like to solve problems that weren’t there until someone wanted something a little more from the program. These people thinking outside the box makes me think outside the box.

I write more and more code to do this. My code is not always the prettiest or most elegant. There are probably many other ways to do what I am writing, so I can’t hold others to a bar so high that I can’t reach it myself.

What I can do though is ask, nay, say, that when providing an API be consistent.

API says it’s possible to search on a field. Oh, let’s say a description field.

We know the field is returned, because it works and is filled when we search on a pluginID in another script.

*This is probably a good time to note that part of the way I work when I am trying to add functionality to a script is to take the working script, write a new one with some new functionality to prove it can work without breaking the first script – then merge. This is all happening on the second script I want to merge.

Consistency says we see a description field returned. We see the API documentation says it’s a searchable field. Searching it for data we see is returned in another search should return us results, just based on that field.

So, why doesn’t it?

Working with testing and support I come to learn there is not much consistency in the way the API is working.

The description field is referencing a reference field when searching a compliance audit rather than a vulnerability scan, which is not referenced in the documentation (the reference field or that it searches different fields and mushes them together for the final output). That’s a lot of references to what seems to me a big limitation of the documentation.

It may take a time or two to read the above paragraph. I understand.
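Until the API itself is consistent, the workaround on my side looks something like a shim that checks both fields. The key names here are illustrative, not the API’s exact schema:

```python
# Hedged sketch of a client-side shim: a compliance result keeps its text
# under a reference-style field while a vulnerability result uses
# description, so normalize before filtering. Key names are illustrative,
# not the API's exact schema.
def searchable_text(record):
    """Return the best available description-like text from a result record."""
    for key in ("description", "reference"):
        value = record.get(key)
        if value:
            return value
    return ""

def filter_by_text(records, needle):
    """Case-insensitive substring match across whichever field is populated."""
    return [r for r in records if needle.lower() in searchable_text(r).lower()]
```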

What to do?

We can’t just be here – have an issue and not fix it? My data readers still need their data. I still need to get it to them.

I am happy to work with a MacGyver-watching support engineer who comes up with some pretty good ideas. Right now I may have to end up using them, based on turnaround time in the past. Happy about it? Nope. But people want what they want and it’s my job to get it to them.

What have I learned?

I’ve learned that my code isn’t that bad and I was/am on the right track. There’s a bug that needs to be squashed, and I can’t do that myself. I’m pretty sure when I get this working someone, not just me, will be happy.

On a final note – I’ve been questioning my ‘hacker’ cred as of late. Maybe it’s Twitter, maybe it’s walls that I run into. Then something like this comes along, and I remember why I do what I do, why people pay me to do what I do, and what I am doing is hacking these systems to do what people want them to do.

The scripts I am working on can be found at my GitHub repo… all sanitized for others who work to get software running as sold. When this one is working, it will be added to the repo. Hopefully sooner rather than later.

I documented putting this site together. I go back to it at times to look at a few things because I do want to set things up with Let’s Encrypt again and I don’t remember it all.

I also found out that over the past couple of weeks my domain would go away. I use CloudFlare for DNS and would check what was going on. It always gave this .dev domain a different IP than my other domain, which is weird because they are running on the same server, behind the same IP.

I would run my ddclient script and it would tell me things were set properly and that the domain was being skipped.

Then I would go to CloudFlare and see that the IP for one domain was set differently than the other. The IP looked familiar too, but I couldn’t put two and two together.

Why was one updating correctly and not the other?

I find in cases like this, it’s usually me, not the system. Yes, the system or automation is great to shake my fist at, but it’s how I use it that is what matters.

Do you remember testing the ddclient script on my always VPN’d box? Me neither, but I did write about it, because it bit me during testing.

Yup. I ran into the issue when I was running it on my VPN’d machine for testing. I knew it would bite me if I had to do it again, so I tried to save myself and maybe anyone who caught the post a bit of pain. Go me!

But, I did forget to kill it on my other machine so I am thinking I was having a whole bunch of competing updates to my DNS. One from the machine serving the web site and the other from where I tested it. I’ve disabled and removed the script from the machine that shouldn’t be doing the updating and all has been working as expected.

What did I learn? Automation works. Documentation saves one’s butt. And when troubleshooting – check the user first… even (especially) if that user is me.