What are we doing here? (and where is here?)

Here – in a secret basement buried in snow, somewhere in the Midwest, with no redundant power supply, on a Raspberry Pi hosting another site or two… behind CloudFlare… in hopes of keeping things going… we are building a website dedicated to trying things, breaking things, learning things, and documenting it all!

When the lights are on I will be writing up tests I am doing, issues I run into while testing security software, or just learning how to hack things a little better. I find WordPress a better solution for this than my homegrown site.

I have also set up a collaborative site for people to share what they are doing to help ourselves and to help others. Same server. Let’s see if it all holds!

I hope you and I both find this site useful.

One of the things I like about Linux tools is that many of them don’t try to be the kitchen sink. They do one thing and do it well.

As a human, I stink at multitasking. Many people say they can do it. Because my world revolves around me and I can’t, I don’t believe them.

Today I again proved to myself that I should do one thing at a time and focus on all the steps, otherwise we break production. Lucky for me, production was this web site, and not a production system at my job (we’ve all borked production at the job, right?).

It started simple enough – with an email.

Time to test that the job to update my certs is working. Follow directions for a dry run.

sudo certbot renew --dry-run

After a few errors, I remembered that some things need to be changed with CloudFlare.

I had to turn on Development Mode under Caching and my errors went away. CloudFlare is great. I just need to remember to read the documentation.
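If I want to skip the dashboard clicking next time, CloudFlare’s v4 API can flip Development Mode from a script. This is just a rough sketch I haven’t wired into anything – the zone ID, email, and API key are placeholders:

import requests

ZONE_ID = "your-zone-id"  # placeholder
AUTH = {"X-Auth-Email": "you@example.com", "X-Auth-Key": "your-api-key"}  # placeholders

# Turn Development Mode on before the certbot dry run; flip "value" to "off" when done
resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/settings/development_mode",
    headers=AUTH,
    json={"value": "on"},
)
print(resp.json())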

After that I thought it would be good to write a short blog post on it (a bit longer than above).

Logging into WordPress I see that an update is available.

The squirrel ran by and I decided to pick this up instead of writing about how I worked through the issues updating my certificates.

I can’t run the auto update for this due to my server configuration (I’m not letting FTP through on my secret lair for the server). That’s okay. I can read the instructions on how to do it manually.

To be fair, it does not say ‘back up your files’. But really, I’ve been doing this long enough that I should back up my files. Of course I didn’t back up my files.

I copied over the right files. I did what I was supposed to do (except backing up my files). I still got a 500 error when looking at the site.

What did I do wrong? Did I mess up permissions because I used FTP when I usually copy and mv over?

I checked the permissions, updated them, and still a 500. I tried the tried and true – delete the bad files and re-upload. Still got a 500.

Did I check the error logs? No. Why am I forgetting to do all these things I should normally do?

I check the error logs:

logs logs logs

What do I see here? An uncaught error:

Uncaught Error: Call to undefined function wp_recovery_mode()

Google being my friend (with people asking about the same error five days ago) shows the best bet right now is to downgrade back to 5.1 – and to back up next time I do an upgrade.

My friend Google

I shall wait for a not so nice day out to do that.
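When that day comes, the downgrade should look roughly like this – back everything up first this time, then swap the 5.1 core files back into place. The paths and database name are guesses at what’s on the Pi, so adjust as needed; wp-content and wp-config.php stay put since they hold the themes, plugins, and settings:

# back up files and database before touching anything
tar -czf ~/wp-files-backup.tar.gz /var/www/html
mysqldump -u wp_user -p wp_database > ~/wp-db-backup.sql
# grab the 5.1 core and swap it in, leaving wp-content and wp-config.php alone
wget https://wordpress.org/wordpress-5.1.tar.gz
tar -xzf wordpress-5.1.tar.gz
rm -rf /var/www/html/wp-admin /var/www/html/wp-includes
cp -a wordpress/wp-admin wordpress/wp-includes /var/www/html/
cp wordpress/*.php /var/www/html/   # core root files; wp-config.php is not in the tarball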

Ah, spring is here

Time for version 2… https://github.com/m0nkeyplay/TenableIO

The ch ch changes:

Some changes are big. There are now three main scripts, and I have scrapped the need to download separately. This should make it easier to set up the two switch-based scripts as a cron job or scheduled task. This was first done when I made the interactive script; I moved it to the switch-based ones once I saw it worked well. Time helps make things better (I hope).

Some changes are small. Spelling, grammar, and consistency between scripts have been checked and updated.

Each file has a ReadMe in the docs/ folder. Below is a brief overview.

ioInteractiveScanSearch.py

Search the scans by answering some questions. This one is good for one-off reports. We all get them, don’t we? Check it out.

ioSearchDownloadScans.py

Use the optional switches to queue up and download what you need. This was originally ioSearchScansQueue3.py until the download was rolled into it. Check it out below.

ioDownloadScans.py

So, you really want to download a lot of data and have the bandwidth and time to do it? This one is for you. It’s a modified search and download: you provide the scan and the output type, and it will get the Critical, High, Medium, and Scan Info data for you. I did not provide a walkthrough because it would just take a really long time.

In my last post (Consistency {code and APIs}) I was working out how to get data that was available into a tool I was working on. Working with the team at the vendor, we were able to push some API improvements to make it all work out.

I’m happy to say I was able to put together a scripted tool that can be used in house at my job as well as for anyone else who is using Tenable.io.

Many people use a free Nessus scanner to check for vulnerabilities. Many companies use Tenable’s Security Center on premises and like all things, it’s moving to the cloud in Tenable.io.

Moving to a new platform brings challenges. But that is what I am here for, the challenges. The new platform is not as mature as the ones it is based on. Data is robust in the new platform, and pulling it into a data management tool has been… let’s stick with the word challenging.

How do things work?

Scans run on IO. Data is there. Someone needs to see it to act on it.

This should be very simple. Nothing is simple. Let the fun begin!

What is the goal?

Give the remediation team the data they need to get to work!

What do they need most to get this done?

For us it’s hostname, plugin ID, vulnerability plugin names, risk factor, and compliance names. Your mileage may vary, and since the tool is free to download and use, you can update it for your needs.

Let’s see this in action now (Hello World!).

Head on over to my TenableIO GitHub repo to get some searching on. Clone the repo. ioSearchScansQueue3.py and ioExportDownload3.py are the scripts we are using. Fill in the environment variables noted in the ReadMe and we are good to go.

A good test search is pluginid 19506 because that returns results on the scan itself.

python3 ioSearchScansQueue3.py -scan "Scan Name" -o csv -q pluginid -d 19506

Now pull down the results. Depending on the amount of data being asked for, the report that IO writes can take some time, so I broke this out into its own step.

python3 ioExportDownload3.py

We can now see that IO has put together a handy spreadsheet of data for us to review, hand off, or do something else with.
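Under the hood, the reason for the two steps is that IO builds the report asynchronously: you request an export, poll until it is ready, then download the file. A rough sketch of that flow (endpoint paths per the public API docs; the scan ID, file ID, and keys are placeholders):

import time
import requests

BASE = "https://cloud.tenable.com"
HEADERS = {"X-ApiKeys": "accessKey=ACCESS_KEY;secretKey=SECRET_KEY"}  # placeholders
scan_id, file_id = 42, 1234  # returned by the earlier export request

# wait for IO to finish rendering the report
while True:
    status = requests.get(f"{BASE}/scans/{scan_id}/export/{file_id}/status",
                          headers=HEADERS).json()
    if status.get("status") == "ready":
        break
    time.sleep(10)  # big scans can take a while

# pull the finished report down
report = requests.get(f"{BASE}/scans/{scan_id}/export/{file_id}/download",
                      headers=HEADERS)
with open("scan_results.csv", "wb") as f:
    f.write(report.content)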

Update — added interactive functionality.

Not everyone likes to remember switches, so I added an interactive option that works like a question/answer to get your searches on. Downloading of the data is also included in the script.

Expanding on the simple search

The -search option does not need to be specific to one scan. If you have five scans with the name Vulnerability in them, a -search Vulnerability will provide results for all five of the scans.

As I noted above, we can search for the following as a one-shot, or as a list of each from a text file, creating a much more robust report.

  • plugin id
  • plugin name
  • hostname
  • risk factor
  • Compliance name*

*Vulnerability and Compliance data is stored differently so searching on a plugin name will not give you a compliance result. See Consistency {code and APIs}.

Data can also be written out for download in the native .nessus format for import into any other tool.

And finally, because it’s out on GitHub for anyone to use, fork, or fix – a user is not stuck only searching for what I say. The dictionary of plugins to search is there to update as needed. Just choose what is important to your team from the documentation and add as needed.
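For illustration only – I am not quoting the repo here – the idea is a simple mapping from a friendly search name to the field IO expects, so adding a new option for your team is a one-line change:

# hypothetical sketch of a search-option dictionary; names are illustrative
search_options = {
    "pluginid": "plugin.id",
    "pluginname": "plugin.name",
    "hostname": "host.hostname",
    "riskfactor": "risk_factor",
    "compliance": "audit_file",   # compliance data lives in its own fields
}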

Don’t want to deal with all these fancy switches and just need to download scan data that needs attention on a schedule? I have one for you too. ioExportScanQueue3.py is what you are looking for. Queue these up in a batch job and the data is yours when you want it.

python3 ioExportScanQueue3.py --scan "scan name" --type nessus or csv
python3 ioExportDownload3.py
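For the scheduling part, a crontab sketch might look like the following – the path, scan name, and times are just examples, not anything the repo dictates:

# example schedule: export Monday at 2am, download two hours later
0 2 * * 1 cd /opt/TenableIO && python3 ioExportScanQueue3.py --scan "Weekly Vuln Scan" --type csv
0 4 * * 1 cd /opt/TenableIO && python3 ioExportDownload3.py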

I’m hoping these scripts help others since many have written tools that help me.

I plan to keep the whole repo updated as I work more and more with IO and need to get data/repeat tasks.

Comments, questions, fixes, and pretzels are always welcome!

My job is to get people consistent data that they can rely on to make decisions that, they tell me, cost a lot of money. More precisely, I (and so many others) do my job by hacking together the solutions that vendors promise in glossy sales decks and rarely deliver.

Over the past year I have spent a lot of time working with an external API. I’ve learned the beauty of being able to send calls and get data consistently. I’ve learned the limitations of my skills and work to improve them.

Consistency is important when working with an API. When the API says it will do x, it’s pretty important that it doesn’t do y. That’s wholly different data. When the API is the only way to get the data – because there are no built-in functions for a user to get the data, or to import it – I need to make the tool to do that.

That’s cool. That’s my job. That’s what I like to do. I like to hack things together to work. I like to solve problems that weren’t there until someone wanted something a little more from the program. These people thinking outside the box makes me think outside the box.

I write more and more code to do this. My code is not always the prettiest or most elegant. There are probably many other ways to do what I am writing, so I can’t hold everyone to a bar so high that I can’t reach it myself.

What I can do, though, is ask – nay, say – that when providing an API, be consistent.

The API says it’s possible to search on a field. Oh, let’s say a description field.

We know the field is returned, because it works and is filled when we search on a pluginID in another script.

*This is probably a good time to note that part of the way I work when I am trying to add functionality to a script is to take the working script, write a new one with the new functionality to prove it can work without breaking the first script – then merge. All of this is happening in the second script I want to merge.

Consistency says we see a description field returned. We see the API documentation says it’s a searchable field. Searching it for data we know is returned in another search should give us results based on just that field.
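Concretely, the request in question looks roughly like this – a scan export filtered on the description field. The filter keys follow the API documentation; the scan ID, API keys, and search text are placeholders:

import requests

BASE = "https://cloud.tenable.com"
HEADERS = {"X-ApiKeys": "accessKey=ACCESS_KEY;secretKey=SECRET_KEY"}  # placeholders

payload = {
    "format": "csv",
    "filter.search_type": "and",
    "filter.0.filter": "description",  # documented as searchable
    "filter.0.quality": "match",
    "filter.0.value": "text we already see returned by a pluginID search",
}

# 42 is a placeholder scan ID; the response includes a file ID to poll and download
resp = requests.post(f"{BASE}/scans/42/export", headers=HEADERS, json=payload)
print(resp.json())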

So, why doesn’t it?

Working through testing and support, I came to learn there is not much consistency in the way the API is working.

The description field is referencing a reference field when searching a compliance audit rather than a vulnerability scan, which is not referenced in the documentation (the reference field or that it searches different fields and mushes them together for the final output). That’s a lot of references to what seems to me a big limitation of the documentation.

It may take a time or two to read the above paragraph. I understand.

What to do?

We can’t just sit here – have an issue and not fix it. My data readers still need their data. I still need to get it to them.

I am happy to work with a MacGyver-watching support engineer who comes up with some pretty good ideas. Right now I may have to end up using them, based on turnaround time in the past. Happy about it? Nope. But people want what they want, and it’s my job to get it to them.

What have I learned?

I’ve learned that my code isn’t that bad and I was/am on the right track. There’s a bug that needs to be quashed, and I can’t do that myself. I’m pretty sure that when I get this working, someone – not just me – will be happy.

On a final note – I’ve been questioning my ‘hacker’ cred as of late. Maybe it’s Twitter, maybe it’s walls that I run into. Then something like this comes along, and I remember why I do what I do, why people pay me to do what I do, and what I am doing is hacking these systems to do what people want them to do.

The scripts I am working on can be found at my GitHub repo… all sanitized for others who work to get software running as sold. When this one is working, it will be added to it. Hopefully sooner rather than later.

https://github.com/m0nkeyplay/TenableIO

I documented putting this site together. I go back to it at times to look at a few things because I do want to set things up with Let’s Encrypt again and I don’t remember it all.

I also found out that over the past couple of weeks my domain would go away. I use CloudFlare for DNS and would check what was going on. It always gave this .dev domain a different IP than https://youat.dev, which is weird because they are running on the same server, behind the same IP.

I would run my ddclient script and it would tell me things were set properly and the update was being skipped.

Then I would go to CloudFlare and see that the IP for themonkeyplayground.dev was set differently than youat.dev. The IP looked familiar too, but I couldn’t put two and two together.

Why was one updating correctly and not the other?

I find in cases like this, it’s usually me, not the system. Yes, the system or automation is great to shake my fist at, but it’s how I use it that is what matters.

Do you remember testing the ddclient script on my always VPN’d box? Me neither, but I did write about it, because it bit me during testing.

Yup. I ran into the issue when I was running it on my VPN’d machine for testing. I knew it would bite me if I had to do it again, so I tried to save myself and maybe anyone who caught the post a bit of pain. Go me!

But I did forget to kill it on my other machine, so I am thinking I was having a whole bunch of competing updates to my DNS – one from the machine serving the web site and the other from where I tested it. I’ve disabled and removed the script from the machine that shouldn’t be doing the updating, and all has been working as expected.
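If it helps anyone else with the same self-inflicted problem: on a systemd box, killing the stray updater is quick (this assumes ddclient is installed as a service – adjust for however you run it):

sudo systemctl disable --now ddclient
sudo rm /etc/ddclient.conf   # or just comment out the zone it should no longer touch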

What did I learn? Automation works. Documentation saves one’s butt, and when troubleshooting – check the user first… even (especially) if that user is me.

I’ve been playing around in HTB and root-me.org to help me learn the skills I need to do my job. And also, because figuring things out is fun! I have people that I work on these with to learn what I don’t know and get better at what I do know.

These are all great places. The issue I am coming up against is I can’t share what I have done to work through most of these challenges. Yes, when working on a team, we can share to get the results we need, but when I am working it out, I can’t write it out for others for later. This is what we do in the real world when we get a good solution.

Root-me does give the option to share how it was done afterward, which I think is a great idea. I can even see what others had done to get the answer. I learned the other day, after using Dominic Breuker’s stego-toolkit, that just running the file through strings would have gotten me the results with far fewer resources and less time. I can’t complain, because I learned a lot and will come back to use the toolkit. Plus, it looked real cool to me when it did work with the toolkit.

Tonight I worked for a few hours on an HTB web challenge that would probably take most people a good 10-15 minutes. I used three different tools, googling, and looking back on notes about the options to use – but I got the result I was looking for. I can’t say much past that, though, without breaking the rules.

And that is where the problem is starting to lie. I need to find a way to share what I have done – without breaking the rules of the game. Even though the game is about breaking the rules.

I can write it up for myself. I will write it up for myself, because I will forget it if I don’t. But, I didn’t do it all on my own. No, I didn’t get the answer from a web site. I got direction from other people’s posts or man pages based on info gathering from the challenge.

Let’s see if I can break it down a bit – simply.

The name of the challenge is usually a good hint at some way to start. Googling the name led me to a tool to use — a brute force password tool.

Before I could use that tool, though, I needed to know what I was looking at. That led me to try Burp Suite. I can say I used Burp Suite because with a web attack I think that’s the go-to tool. One I don’t have a lot of experience with, but here’s a good chance to get some.

Got my parameters and got my data; now on to the tool with so many options. Struggling to get the parameters correct took the longest amount of time.

Get that! I’m golden.

Nope. I’m not. It even tells me so. I get the password for the site – but just get told I am too slow. The site password is not the flag.

Fair enough, I’ve been there before. I’ve faced off against Candy in the root-me.org programming challenges. I think I can outsmart this if I need to be quick. That just means don’t do it manually.

A few minutes later, with a refresher on -d 'param=value' -X POST and I get the flag.
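The general shape of it – with a made-up URL and parameter name, since the real ones would give the challenge away – is nothing fancier than:

curl -s -X POST -d 'password=the_recovered_password' http://challenge.example/submit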

It’s a small victory, but a learning experience. A learning experience I can only vaguely share about before giving away too much in an information gathering challenge.

Of course I have.

So have you. In some way or another a company has flubbed and let someone else get your information. That information may be credentials or personal data, but it’s out there somewhere, usually tied to our email address. A quick search on Google News will show how often it happens.

Have I Been Pwned? is a great resource to check what data and what services are leaked so we can fix it. It’s kinda like the WebMD of personal info. Lots of info. Much concern after looking at it.

The site provides an API for a geek like me (or a system admin in need of a quick way to regularly check on users) to look up the emails we use to sign in to so many sites and see if they have been compromised. I have used recon-ng, which has a great module for checking against HIBP, but I really don’t want to fire up my VM just to do some checking of the family emails.

This is where hibp_quickCheck was born. I put together a Python script that will query the API for a single email or a batch of them, check a breach or a paste, and let the runner know where each email stands.

The simplest check is for one email and a breach:

./hibp_check.py breach -e [email protected]

This will come back with any breach info on the email provided. An important thing to note is the Breach Date. Was the breach last week or 5 years ago? What can I do to help myself? Am I still using the account? Have I changed the password?

Checking a paste is just as important as a breach. A paste is telling us that a password or hash may actually be out there, not just that it happened.

Now, if I want to use either of these with a list of emails – for my family, let’s say (or an org) – I just put the list in a file, one email per line, and run:

./hibp_check.py breach -f /path/to/file

We will get a list for each email in the file. The API has a rate limit, so if there are a lot of emails in the list, it can take some time. I have tried to take this into account when a file is being run through.
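As a rough sketch of the idea (not a copy of the script itself, and assuming the keyed v3 endpoint), it boils down to one request per email with a pause in between:

import sys
import time
import requests

API = "https://haveibeenpwned.com/api/v3/breachedaccount/"
HEADERS = {"hibp-api-key": "YOUR_API_KEY", "user-agent": "hibp-sketch"}  # key is a placeholder

with open(sys.argv[1]) as f:
    for email in (line.strip() for line in f if line.strip()):
        r = requests.get(API + email, headers=HEADERS, params={"truncateResponse": "false"})
        if r.status_code == 404:
            print(f"{email}: no breaches found")
        elif r.ok:
            for breach in r.json():
                print(f"{email}: {breach['Name']} ({breach['BreachDate']})")
        else:
            print(f"{email}: HTTP {r.status_code}")
        time.sleep(2)  # stay under the API rate limit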

My hope is that this can help out geek families and a sysadmin or two.

Grab the script from my GitHub repo: https://github.com/m0nkeyplay/hibp_quickCheck