Introduction
In the previous blog post, I showed a Python tool that can initiate a Veracode scan and wait for the results. The program then produces both terminal console output and a JSON representation of the results. It is a good idea to save the results of this type of scanning: the most important reason for storing scan data is that we can then write programs that consume the stored data. If we want a graph showing historical trends of findings across scans, we must be able to retrieve the data from somewhere. In the previous post, I showed an example JSON scan result, as well as some Python code that performed the database write operations.
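As a reminder of the shape of that write code, here is a minimal sketch using pymongo. The database and collection names ('veracode' and 'scans') and the added timestamp field are assumptions for this post, not necessarily the exact code from the previous one; the timestamp is what later makes "most recent scan per project" queries possible.

```python
# Minimal sketch of a MongoDB write helper using pymongo.
# 'veracode'/'scans' are placeholder database/collection names.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
collection = client["veracode"]["scans"]

def write_to_mongo(data):
    record = dict(data)  # copy so insert_one's added '_id' does not leak back
    record["timestamp"] = datetime.now(timezone.utc)  # enables trend/latest queries
    collection.insert_one(record)
```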
For this to work, you will need a running MongoDB server.
While you can easily get MongoDB to work in a Docker container, I found it easier to install a MongoDB server on my local machine; there are fewer networking issues that way. I'm working on a Mac, so I used

```sh
brew install mongodb-community && brew services start mongodb-community
```

to get the server going. I used Studio 3T (formerly Robo 3T) to examine the database. Studio 3T is not totally straightforward to use, so I am including the screenshot below, which shows how to make the data appear.
Then you can use the UI to look into your collection:
Note that the GUI option is not available on servers, so it is advisable to learn how to connect to MongoDB from the command line. The program you will want to install and use is mongosh; it comes with the server install, but it can also be installed as a stand-alone client on another machine to access the server over the network.
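If you prefer to stay in Python, pymongo can serve the same purpose as a quick check that the server is up and that records are landing where you expect. The snippet below is a minimal sketch; the database and collection names ('veracode' and 'scans') are the placeholders I use throughout this post. In mongosh itself, show dbs followed by use veracode and db.scans.findOne() would accomplish the same inspection.

```python
# Sanity-check the local MongoDB server and peek at stored scan records.
# Assumes a default localhost install and placeholder 'veracode'/'scans' names.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=2000)
client.admin.command("ping")  # raises ServerSelectionTimeoutError if unreachable

collection = client["veracode"]["scans"]
print(collection.count_documents({}))  # how many scan records are stored
print(collection.find_one())           # inspect the shape of one record
```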
To test web page table generation, we will want more than one record, so it is time to write some code that generates dummy Veracode scan data.
```python
import json
from random import randrange

def write_dummy_records_to_mongo():
    # project names to generate one dummy scan record for each
    projects = ["admin", "analytics", "browser", "cloud-os", "data-store",
                "devenv", "enrichment", "experiment-manager", "hcluster",
                "hmdb", "image-server", "insight", "ion-library", "keycloak",
                "kmeans", "library", "licensing", "multivariate", "omics",
                "pathways", "pathways-sbgn-to-json", "pca-pcvg", "pcada",
                "peptide-fragmenter", "ppm", "ppm-pi", "ppm-processor",
                "proteinpilot", "result-manager", "search", "ttest",
                "uniprot", "visualization", "workflow"]
    # load json from file and use it as a template record
    with open('veracode_report.json') as json_file:
        data = json.load(json_file)
    for project in projects:
        data['application_name'] = project
        # random finding counts per severity
        low = randrange(20)
        high = randrange(15)
        medium = randrange(10)
        informational = randrange(15)
        data['low'] = str(low)
        data['high'] = str(high)
        data['medium'] = str(medium)
        data['informational'] = str(informational)
        data['total'] = str(low + high + medium + informational)
        write_to_mongo(data)
```
The code above reads 'veracode_report.json' as a template and, for each project, fills in random counts for the low, medium, high, and informational findings, plus their total, before writing each record to the database. With those database records in place, we can move on to the next step: publishing the most recent information from the database for each project to a web server. That program will publish to Atlassian Confluence (standalone server, not cloud), a wiki we use at work; the query at its heart, fetching the latest record per project, is sketched below.
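Fetching the newest record per project maps naturally onto a MongoDB aggregation pipeline. The sketch below is one way to express it with pymongo; it assumes each record was stamped with a timestamp field at write time (as in the write_to_mongo sketch earlier) and reuses the placeholder 'veracode'/'scans' names, so it is an illustration, not the exact code from the next post.

```python
# Sketch: latest scan record per project via an aggregation pipeline.
# Assumes a 'timestamp' field on each record and the placeholder names above.
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017/")["veracode"]["scans"]

pipeline = [
    {"$sort": {"timestamp": -1}},        # newest records first
    {"$group": {                         # one bucket per project...
        "_id": "$application_name",
        "latest": {"$first": "$$ROOT"},  # ...keeping only the newest record
    }},
    {"$sort": {"_id": 1}},               # alphabetical, ready for a table
]
latest_scans = [doc["latest"] for doc in collection.aggregate(pipeline)]
```

You can read the next part of the series here: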
Part 7: Posting the latest Veracode scan information to a Dashboard