Docker makes it easy for developers to package up and push out application changes, and spin up run-time environments on their own. Maybe too easy.
With Docker, developers can make their own decisions on how to configure and package applications. But this also means that they can make simple but dangerous mistakes that will leave the system unsafe without anyone noticing until it is too late.
Fortunately, there are some good tools that can catch many of these problems early, as part of your build pipelines and run-time configuration checks. Toni de la Fuente maintains a helpful list of Docker security and auditing tools here.
Unfortunately, many of the open source projects in this list have been shelved or orphaned. So, I want to put together a short list of the essential open source tools that are available today to help you secure your Docker environment.
Check your container configuration settings
As part of your build process and continuous run-time checks, it is important that you enforce safe and consistent configuration defaults for containers and the hosts that they run on.
The definitive guide for setting up Docker safely is the CIS Docker Benchmark, which lists over 100 recommendations and best practices for hardening the host configuration and Docker daemon configuration (including Swarm configuration settings), file permissions rules, container images and build file management, container runtime settings, and operations practices.
The Docker security team has provided a free tool, Docker Bench for Security, that checks Docker containers against this hardening guide (although the tests are organized a bit differently – the Swarm checks are all run together in a separate section for example). Docker Bench is updated for each release of the CIS benchmark guide, which is updated with each release of Docker, although there tends to be a brief lag.
Docker Bench ships as a small container which runs with high privilege, and executes a set of tests against all containers that it can find. Tests return PASS or WARN (clear fail) status, or INFO (for findings that need to be manually reviewed to see if they match expected results). NOTEs are printed for manual checks that need to be done separately.
After you run Docker Bench, you will need to work through the detailed findings and decide what makes sense for your environment. Docker Bench is an auditing tool, designed to be run and reviewed manually. Docker Bench Test shows how you can run Docker Bench in an automated test pipeline by wrapping it inside the Bats test framework, although unfortunately it hasn’t been updated in a couple of years.
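If you do want to gate a pipeline on Docker Bench yourself, one lightweight approach is to run the container, capture its output, and fail the build when WARN findings appear. Here is a minimal Python sketch, assuming the standard `[PASS]`/`[WARN]`/`[INFO]`/`[NOTE]` line tags in the tool's output; the parsing helper and the threshold policy are my own:

```python
import re
from collections import Counter

# Docker Bench prints one line per check, tagged [PASS], [WARN], [INFO], or [NOTE].
# Note: real output may include ANSI color codes, which would need stripping first.
TAG_RE = re.compile(r"^\[(PASS|WARN|INFO|NOTE)\]")

def summarize_bench_output(output):
    """Count the PASS/WARN/INFO/NOTE tags in Docker Bench output."""
    counts = Counter()
    for line in output.splitlines():
        match = TAG_RE.match(line.strip())
        if match:
            counts[match.group(1)] += 1
    return counts

def build_should_fail(output, max_warns=0):
    """Fail the build when WARN findings exceed an agreed threshold."""
    return summarize_bench_output(output)["WARN"] > max_warns

sample = """\
[PASS] 1.1  - Ensure a separate partition for containers has been created
[WARN] 2.1  - Ensure network traffic is restricted between containers
[INFO] 4.5  - Ensure Content trust for Docker is enabled
"""
print(summarize_bench_output(sample))
print(build_should_fail(sample))  # True
```

In a CI job you would feed it the captured output of the `docker/docker-bench-security` container and exit nonzero when `build_should_fail` returns True.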
Another free auditing tool from the Docker security team is Actuary. According to Diogo Monica at Docker, Actuary checks the same rules as Docker Bench (for now), but runs across all nodes in a Docker Swarm. Actuary is positioned as a future replacement for Docker Bench: it is written in Go (instead of Bash scripts) and is more extensible, using configurable templates for checking and testing.
Image scanning and policy enforcement
In addition to making sure that your container run-time is configured correctly, you need to ensure that all of the image layers in a container are free from known vulnerabilities. This is done by static scanning of “cold images” in repos, or before they are pushed to a repo, as part of your image build process.
Commercial Docker customers can take advantage of Docker Security Scanning (DSS), formerly known as Nautilus, to automatically and continuously check images in private registries on Docker Hub or Docker Cloud for known vulnerabilities. DSS is also used to scan Official Repositories on Docker Hub.
If you’re using open source Docker, you’ll need to do your own checking. There are a few good open source tools available, all of which work basically the same way:
Scan the image (generally a binary scan), pull apart the layers, and build a detailed manifest or bill of materials of the contents
Take a snapshot of OS and software package vulnerability data
Compare the contents of the image manifest against the list of known vulnerabilities and report any matches
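The matching step in that workflow can be sketched in a few lines of Python. The package names, feed shape, and naive version comparison below are invented for illustration; real scanners match on distro, package name, and distro-aware version semantics:

```python
def version_key(version):
    """Naive dotted-version comparison; real tools use distro-aware rules."""
    return tuple(int(part) for part in version.split("."))

def find_vulnerabilities(manifest, feed):
    """Report feed entries whose package appears in the manifest at an
    affected version (here: anything older than the fixed version)."""
    findings = []
    for vuln in feed:
        installed = manifest.get(vuln["package"])
        if installed is not None and version_key(installed) < version_key(vuln["fixed_in"]):
            findings.append({"package": vuln["package"],
                             "installed": installed,
                             "cve": vuln["cve"],
                             "fixed_in": vuln["fixed_in"]})
    return findings

# Manifest extracted from the image layers, and a snapshot of a vuln feed.
manifest = {"openssl": "1.0.1", "curl": "7.61.0"}
feed = [{"cve": "CVE-2014-0160", "package": "openssl", "fixed_in": "1.0.2"}]

vulns = find_vulnerabilities(manifest, feed)
print(vulns)
```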
The effectiveness of these security scanning tools depends on:
Depth and completeness of static analysis – the scanner’s ability to see inside image layers and the contents of those layers (packages and files)
Quality of vulnerability feeds – coverage, and how up to date the vulnerability lists are
How results are presented – is it clear what the problem is, where to find it, and what to do about it
De-duplication and whitelisting capabilities to reduce noise
First, there is Clair from CoreOS, the scanning engine used in the Quay.io public container registry (an alternative to Docker Hub). Clair is a static analysis tool for Docker and appc containers, which scans an image and compares the vulnerabilities found against a whitelist to see if they have already been reviewed and accepted. It can be controlled through a JSON API or CLI.
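As a sketch of driving Clair programmatically, the snippet below builds the JSON body for Clair's v1 layer-indexing endpoint using only the standard library. The field names follow Clair's v1 API as I understand it, so verify them against the Clair version you deploy; the host, layer name, and path here are placeholders:

```python
import json
from urllib import request

CLAIR_URL = "http://localhost:6060"  # assumed local Clair instance

def layer_payload(name, path, parent=None):
    """Build the JSON body Clair's v1 API expects when indexing a layer.
    (Field names per the v1 layer API; check your Clair version.)"""
    layer = {"Name": name, "Path": path, "Format": "Docker"}
    if parent:
        layer["ParentName"] = parent
    return {"Layer": layer}

def index_layer(name, path, parent=None):
    """POST the layer to Clair for analysis (not executed in this sketch)."""
    body = json.dumps(layer_payload(name, path, parent)).encode()
    req = request.Request(f"{CLAIR_URL}/v1/layers", data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # would raise if Clair is unreachable

print(layer_payload("sha256:abc", "https://registry.example/layer.tar"))
```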
If you’re using OpenSCAP, there is the oscap-docker utility, which can be used to scan Docker images and running containers for CVEs and for compliance violations against SCAP policy guides.
Anchore is a powerful and flexible automated scanning and policy enforcement engine that is easy to integrate into your CI/CD build pipelines to check for CVEs – and much more – in Docker images. You can create whitelists (to suppress findings that you’ve determined are not exploitable) and blacklists (for required packages or banned packages, and prohibited content such as source code or secrets), as well as custom checks on container or application configuration rules, etc.
Anchore is available as a free SaaS online Navigator for public registries, and as an open source engine for on-premises scanning. The scanning engine can be wired into your CI/CD pipelines using the CLI, the REST API, or a Jenkins plug-in, to automatically analyze images as changes are checked in, and fail the build if checks don’t pass. A nice overview of running Anchore can be found here.
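However you wire it in, the gating logic itself is simple: suppress whitelisted findings and break the build on anything left. The findings and whitelist shapes below are invented for illustration; Anchore's CLI and API return their own result formats:

```python
# Hypothetical scan result; a real pipeline would parse this from the
# scanner's JSON output.
findings = [{"cve": "CVE-2014-0160", "package": "openssl"},
            {"cve": "CVE-2018-0732", "package": "openssl"}]
whitelist = {"CVE-2014-0160"}  # reviewed and accepted as not exploitable

def gate(findings, whitelist):
    """Return only the findings that should break the build."""
    return [f for f in findings if f["cve"] not in whitelist]

blocking = gate(findings, whitelist)
for f in blocking:
    print(f"BLOCKING: {f['cve']} in {f['package']}")
# A CI wrapper would exit nonzero here whenever `blocking` is non-empty.
```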
Anchore comes with a built-in set of security and compliance policies, analysis functions and decision gates. You can write your own analysis modules and policies, reports and certification workflows in a high-level language, or extend the analysis engine with custom plugins.
You can also integrate the Anchore scanning engine with Anchore Navigator, so that you can define policies and whitelists using Navigator’s graphical editor. Anchore will subscribe to updates so that you will be automatically notified of new CVEs, or updates to images in public registries.
Anchore (the company) offers premium support subscriptions, and enterprise solutions to discover, explore and analyze images, with additional analysis modules and policies, data feeds, tooling, and workflow integration options.
Another new and ambitious open source container scanner is Dagda. It builds a consolidated vulnerability database from snapshots of CVE information in NIST’s NVD, publicly reported security bugs in the SecurityFocus Bugtraq database, and known exploits from the Offensive Security database, and it uses OWASP Dependency Check and Retire.JS to analyze dependencies, all to identify known security vulnerabilities in Docker images. Dagda can be controlled through the command line or its REST API, and it keeps a history of all checks for auditing and trend analysis.
It also runs ClamAV against Docker images to check for trojans and other malware, and integrates with Sysdig’s powerful (and free) Falco run-time anomaly checker to monitor containers on Linux hosts. Falco is installed as an agent on each host, which taps into kernel syscalls and filters against rules in a signature database to identify suspicious activity and catch attacks or operational problems on the host and inside containers.
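To give a feel for how Falco's signature rules look, here is a minimal rule sketch modeled on Falco's stock "shell in container" rule; the exact condition syntax and available fields depend on your Falco version, so treat this as illustrative:

```yaml
- rule: Terminal shell in container
  desc: Alert when an interactive shell is spawned inside a container
  condition: container.id != host and proc.name = bash
  output: "Shell spawned in a container (user=%user.name container=%container.name)"
  priority: WARNING
```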
Dagda throws everything but the kitchen sink at container security. It is a lot of work to set this up and keep all of it working, but it shows you how far you can go without having to roll out a commercial container protection solution like Twistlock or AquaSec.
Don’t leave container security up to chance
What makes Docker so compelling is also what makes it dangerous: it takes work and decisions out of ops’ hands and gives them to developers, who may not understand (or care about) the details or why they matter. Using Docker moves responsibility for packaging and configuring application run-times from ops (who are responsible for making sure that this is done carefully and safely) to developers (who want to get it done quickly and simply).
This is why it is so important to add checks that can be run continuously to catch mistakes and known vulnerabilities in dependencies, and to enforce security and compliance policies when changes are made. The tools listed here can help you to reduce operational risks, without getting in the way of teams getting valuable work done.
When using any new CSS, the question of browser support has to be addressed. This is even more of a consideration when new CSS is used for layout as with Flexbox and CSS Grid, rather than things we might consider an enhancement.
In this article, I explore approaches to dealing with browser support today. What are the practical things we can do to allow us to use new CSS now and still give a great experience to the browsers that don't support it?
If you manage to create amazing API documentation and ensure that developers have a positive experience implementing your API, they will sing the praises of your product. Continuously improving your API documentation is an investment, but it can have a huge impact. Great documentation builds trust, differentiates you from your competition, and provides marketing value.
I’ve shared some best practices for creating good API documentation in my article “The Ten Essentials for Good API Documentation.” In this article, I delve into some research studies and show how you can both improve and fine-tune different aspects of your API documentation. Some of these extras, like readability, are closer to essentials, while others are more of a nice-to-have, like personality. I hope they give you some ideas for building the best possible docs for your product.
Whoever visits your API documentation needs to be able to decide at first glance whether it is worth exploring further. You should clearly show:
what your API offers (i.e., what your products do);
how it works;
how it integrates;
and how it scales (i.e., usage limits, pricing, support, and SLAs).
An overview page targets all visitors, but it is especially helpful for decision-makers. They have to see the business value: explain to them directly why a company would want to use your API.
Developers, on the other hand, want to understand the purpose of the API and its feature set, so they tend to turn to the overview page for conceptual information. Show them the architecture of your API and the structure of your docs. Include an overview of different components and an introduction into the request-response behavior (i.e., how to integrate, how to send requests, and how to process responses). Provide information on the platforms on which the API is running (e.g., Java) and possible interactions with other platforms.
As the study “The role of conceptual knowledge in API usability” found, without conceptual knowledge, developers struggle to formulate effective queries and to evaluate the relevance or meaning of content they find. That’s why API documentation should not only include detailed examples of API use, but also thorough introductions to the concepts, standards, and ideas in an API’s data structures and functionality. The overview page can be an important component to fulfill this role.
For some developers, examples play a more important role in getting started with an API than the explanations of calls and parameters.
The study found that after conducting an initial overview of what the API does and how it works, developers approach learning about the API in two distinct ways: some follow a top-down approach, where they try to build a thorough understanding of the API before starting to implement specific use cases, while others prefer to follow a bottom-up approach, where they start coding right away.
This latter group has a code-oriented learning strategy; they start learning by trying and extending code examples. Getting into an API is most often connected with a specific task. They look for an example that has the potential of serving as a basis to solve their problem, but once they’ve found the solution they were looking for, they usually stop learning.
Examples are essential for code-oriented learners, but all developers benefit from them. The study showed that developers often trust examples more than documentation, because if they work, they can’t be outdated or wrong. Developers often struggle with finding out where to start and how to begin with a new API—examples can become good entry points in this case. Many developers can grasp information more easily from code than text, and they can re-use code in examples for their own implementation. Examples also play other roles that are far from obvious: they automatically convey information about dependencies and prerequisites, they help identify relevant sections in the documentation when developers are scanning the page, and they intuitively show developers how code that uses the API should look.
Improve your examples
Because examples are such a crucial component in API documentation, better examples mean better docs.
To ensure the quality of your examples, make sure they are complete, programmed professionally, and working correctly. Because examples convey so much more than the actual use case, make sure to follow the style guidelines of the respective community and show best-practice approaches. Add brief, informative explanations; although examples can be self-explanatory, comments included with sample code help comprehension.
Add concrete, real-life examples whenever you can. If you don’t have real examples, make sure they at least look real: use realistic variable names and functions instead of abstract ones.
When including examples, you have a variety of formats and approaches to choose from: auto-generated examples, sample applications, client libraries, and examples in multiple languages.
Autodoc tools like Swagger Codegen and API Blueprint automatically generate documentation from your source code and keep it up-to-date as the code changes. Use them to generate reference libraries and sample code snippets, but be aware that what you produce this way is only skeleton—not fleshed out—documentation. You will still have to add explanations, conceptual information, quick-start guides, and tutorials, and you should still pay attention to other aspects like UX and good-quality copy.
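Autodoc tools generally work from a machine-readable API description. For context, a minimal OpenAPI 3.0 fragment of the kind these generators consume looks like the following; the endpoint and fields are invented for illustration:

```yaml
openapi: "3.0.0"
info:
  title: Example Orders API
  version: "1.0.0"
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
```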
Working applications that use the API are a great way to show how everything works together and how the API integrates with different platforms and technologies. They are different from sample code snippets, because they are stand-alone solutions that show the big picture. As such, they are helpful to developers who would like to see how a full implementation works and get an overall understanding of how everything in the API ties together. On the other hand, they are real products that showcase the services and quality of your API to decision makers. Apple’s iOS Developer Portal offers buildable, executable source examples of how to accomplish a task using a particular technology in a wide variety of categories.
Client libraries are chunks of code that developers can add to their own development projects. They are usually available in various programming languages, and cover basic functionality for an application to be able to interact with the API. Providing them is an extra feature that requires ongoing investment from the API provider, but doing so helps developers jump-start their use of the API. GitHub follows the practical approach of offering client libraries for the languages that are used the most with their API, while linking to unsupported, community-built libraries written in other, less popular languages.
Examples in multiple languages
API documentation is a tool that helps developers and other stakeholders do their job. You should adapt it to the way people use it, and make it as easy to use as possible. Consider the following factors:
Copy and paste: Developers copy and paste code examples to use them as a starting point for their own implementation. Make this process easier with a copy button next to relevant sections, or by making sections easy to highlight and copy.
Sticky navigation: When implemented well, fixing the table of contents and other navigation to the page view can prevent users from getting lost and having to scroll back up.
Clicking: Minimize clicking by keeping related topics close to each other.
Language selector: Developers should be able to see examples in the language of their choice. Put a language selector above the code examples section, and make sure the page remembers what language the user has selected.
URLs: Single page views can result in very long pages, so make sure people can link to certain sections of the page. If, however, a single page view doesn’t work for your docs, don’t sweat it: just break different sections into separate pages.
Stripe’s API reference demonstrates another best practice: the language selector also changes the URL, so URLs link to the right location in the right language.
Collaboration: Consider allowing users to contribute to your docs. If you see your users edit your documentation, it indicates there might be room for improvement—in those parts of your docs or even in your code. Additionally, your users will see that issues are addressed and the documentation is frequently updated. One way to facilitate collaboration is to host your documentation on GitHub, but be aware that this will limit your options of interactivity, as GitHub hosts static files.
Providing an option for users to interact with your API through the documentation will greatly improve the developer experience and speed up learning.
First, provide a working test API key or, even better, let your users log in to your documentation site and insert their own API key into sample commands and code. This way they can copy, paste, and run the code right away.
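The key-substitution idea can be as simple as templating the code samples before rendering them. Here is a sketch in Python, where the sample command, the `$api_key` marker, and the fallback test key are all invented for illustration:

```python
from string import Template

# A code sample with a placeholder for the reader's API key.
SAMPLE = Template(
    'curl -H "Authorization: Bearer $api_key" https://api.example.com/v1/orders'
)

def personalize(sample, user_key=None):
    """Fill in the signed-in user's key, or fall back to a shared test key."""
    return sample.substitute(api_key=user_key or "test_key_123")

print(personalize(SAMPLE))                 # renders with the shared test key
print(personalize(SAMPLE, "sk_user_abc"))  # renders with the reader's own key
```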
As a next step, allow your users to make API calls directly from the site itself. For example, let them query a sample database, modify their queries, and see the results of these changes.
A more sophisticated way to make your documentation more interactive is by providing a sandbox—a controlled environment where users can test calls and functions against known resources, manipulating data in real-time. Developers learn through the experience of interacting with your API in the sandbox, rather than by switching between reading your docs and trying out code examples themselves. Nordic APIs explains the advantages of sandboxing, discusses the role of documentation in a sandboxed environment, and shows a possible implementation. To see a sandbox in action, try out the one on Dwolla’s developer site.
The study exploring how software developers interact with API documentation also explored how developers look for help. In a natural work environment, they usually turn to their colleagues first. Then, however, many of them tend to search the web for answers instead of consulting the official product documentation. This means you should ensure your API documentation is optimized for search engines and will turn up relevant results in search queries.
To make sure you have the necessary content available for self-support, include FAQs and a well-organized knowledge base. For quick help and human interaction, provide a contact form, and—if you have the capacity—a help-desk solution right from your docs, e.g., a live chat with support staff.
The study also pointed at the significant role Stack Overflow plays: most developers interviewed mentioned the site as a reliable source of self-help. You can also support your developers’ self-help efforts and sense of community by adding your own developer forum to your developer portal or by providing an IRC or Slack channel.
As with all software, APIs change and are regularly updated with new features, bug fixes, and performance improvements.
When a new version of your API comes out, you have to inform the developers working with your API about the changes so they can react to them accordingly. A changelog, also called release notes, includes information about current and previous versions, usually ordered by date and version number, along with associated changes.
If a new version contains changes that can break existing uses of the API, put warnings at the top of the relevant changelog entries, or even at the top of your release notes page. You can also draw attention to these changes by highlighting or permanently marking them.
To keep developers in the loop, offer an RSS feed or newsletter subscription where they can be notified of updates to your API.
Besides the practical aspect, a changelog also serves as a trust signal that the API and its documentation are actively maintained, and that the information included is up-to-date.
Analytics and feedback
You can do some research by getting to know your current and potential clients, talking to people at conferences, exploring your competition, and even conducting surveys. Still, you will have to go with a lot of assumptions when you first build your API docs.
When your docs are up, however, you can start collecting usage data and feedback to learn how you can improve them.
Find out about the most popular use cases through analytics. See which endpoints are used the most and make sure to prioritize them when working on your documentation. Get ideas for tutorials, and see which use cases you haven’t covered yet with a step-by-step walkthrough from developer community sites like Stack Overflow or your own developer forums. If a question regarding your API pops up on these channels and you see people actively discussing the topic, you should check if it’s something that you need to explain in your documentation.
Collect information from your support team. Why do your users contact them? Are there recurring questions that they can’t find answers for in the docs? Improve your documentation based on feedback from your support team and see if you have been successful: have users stopped asking the questions you answered?
Listen to feedback and evaluate how you could improve your docs based on it. Feedback can come through many different channels: workshops, trainings, blog posts and comments about your API, conferences, interviews with clients, or usability studies.
Readability is a measure of how easily a reader can understand a written text—it includes visual factors like the look of fonts, colors, and contrast, and contextual factors like the length of sentences, wording, and jargon. People consult documentation to learn something new or to solve a problem. Don’t make the process harder for them with text that is difficult to understand.
While generally you should aim for clarity and brevity from the get-go, there are some specific aspects you can work on to improve the readability of your API docs.
Audience: Expect that not all of your users will be developers and that even developers will have vastly different skills and background knowledge about your API and business domain. To cater to the different needs of different groups in your target audience, explain everything in detail, but provide ways for people already familiar with the functionality to quickly find what they are looking for, e.g., add a logically organized quick reference.
Wording: Explain everything as simply as you can. Use short sentences, and make sure to be consistent with labels, menu names, and other textual content. Include a clear, straightforward explanation for each call. Avoid jargon if possible, and if not, link to domain-related definitions the first time you use them. This way you can make sure that people unfamiliar with your business domain get the help they need to understand your API.
Fonts: Both the font size and the font type influence readability. Have short section titles and use title case to make it easier to scan them. For longer text, use sans serif fonts. In print, serif fonts make reading easier, but online, serif characters can blur together. Opt for fonts like Arial, Helvetica, Trebuchet, Lucida Sans, or Verdana, which was designed specifically for the web. Contrast plays an important role as well: the higher the contrast, the easier the text is to read. Consider using a slightly larger font size and different typeface for code than for text to help your users’ visual orientation when switching back and forth between their code editor and your documentation.
Structure: API documentation should cater to newcomers and returning visitors alike (e.g., developers debugging their implementation). A logical structure that is easy to navigate and that allows for quick reference works for both. Have a clear table of contents and an organized list of resources, and make sections, subsections, error cases, and display states directly linkable.
Scannability: As Steve Krug claims in his book about web usability, Don’t Make Me Think, one of the most important facts about web users is that they don’t read, they scan. To make text easier to scan, use short paragraphs, highlight relevant keywords, and use lists where applicable.
Accessibility: Strive to make your API docs accessible to all users, including users who access your documentation through assistive technology (e.g., screen readers). Be aware that screen readers may often struggle with reading code and may handle navigation differently, so explore how screen readers read content. Learn more about web accessibility, study Web Content Accessibility Guidelines, and do your best to adhere to them.
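A crude, scriptable way to keep an eye on the "Wording" advice above is to flag doc pages whose average sentence length creeps too high. Real readability formulas (e.g., Flesch) also weigh syllable counts; this sketch only counts words per sentence, and the threshold is my own:

```python
import re

def avg_sentence_length(text):
    """Average number of words per sentence, split on . ! ? punctuation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    words = sum(len(s.split()) for s in sentences)
    return words / len(sentences)

def too_dense(text, max_words_per_sentence=25):
    """Flag text whose sentences are, on average, too long to scan easily."""
    return avg_sentence_length(text) > max_words_per_sentence

doc = "Use short sentences. They are easier to scan."
print(avg_sentence_length(doc))  # 4.0
print(too_dense(doc))            # False
```

Run over every page in your docs, this gives a quick dashboard of where a copy edit would pay off.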
You’ve worked hard to get to know your audience and follow best practices to leave a good impression with your API docs. Now, as a finishing touch, you can make sure your docs “sound” and look in tune with your brand.
Although API documentation and technical writing in general don’t provide much room for experimentation in tone and style, you can still instill some personality into your docs:
Use your brand book and make sure your API docs follow it to the letter.
A friendly tone and simple style can work wonders. Remember, people are here to learn about your API or solve a problem. Help them by talking to them in a natural manner that is easy to understand.
Add illustrations that help your readers understand any part of your API. Show how different parts relate to each other; visualize concepts and processes.
Select your examples carefully so that they reflect on your product the way you want them to. Playful implementations of your API will create a different impression than more serious or enterprise use cases. If your brand allows, you can even have some fun with examples (e.g., funny examples and variable names), but don’t go overboard.
You can insert some images (beyond illustrations) where applicable, but make sure they add something to your docs and don’t distract readers.
Think developer portal—and beyond
Although where you draw the line between API documentation and developer portal is still up for debate, most people working in technical communication seem to agree that a developer portal is an extension of API documentation. Originally, API documentation meant strictly the reference docs only, but then examples, tutorials, and guides for getting started became part of the package; yet we still called them API docs. As the market for developer communication grows, providers strive to extend the developer experience beyond API documentation to a full-fledged developer portal.
In fact, some of the ideas discussed above—like a developer forum or sandboxes—already point in the direction of building a developer portal. A developer portal is the next step in developer communication, where besides giving developers all the support they need, you start building a community. Developer portals can include support beyond docs, like a blog or videos. If it fits into the business model, they can also contain an app store where developers submit their implementations and the store provides a framework for them to manage the whole sales process. Portals connected to an API often also contain a separate area with landing pages and showcases where you can directly address other stakeholders, such as sales and marketing.
Even if you’re well into building your developer portal, you can still find ways to learn more and extend your reach. Attend and present at conferences like DevRelCon, Write The Docs or API The Docs to get involved in developer relations. Use social media: tweet, join group discussions, or send a newsletter. Explore the annual Stack Overflow Developer Survey to learn more about your main target audience. Organize code and documentation sprints, trainings, and workshops. Zapier has a great collection of blogs and other resources you can follow to keep up with the ever-changing API economy—you will surely find your own sources of inspiration as well.
"Angular and React are solving the same problems with different approaches."
"Angular is a full-stack framework that has solutions for almost each and every aspect of frontend development. React, on the other hand, is used mainly for building components and displaying them properly and efficiently. "
"In Angular, you pay for safety by sacrificing flexibility."
"Teams with a strong Java background usually feel more comfortable using Angular."
Sometimes writing code that just runs is not enough. We might want to know what goes on internally such as how memory is allocated, consequences of using one coding approach over another, implications of concurrent executions, areas to improve performance, etc. We can use profilers for this.
A Java Profiler is a tool that monitors Java bytecode constructs and operations at the JVM level. These code constructs and operations include object creation, iterative executions (including recursive calls), method executions, thread executions, and garbage collections.
Like most profilers, JProfiler can be used for both local and remote applications. This means that it’s possible to profile Java applications running on remote machines without having to install anything on them.
JProfiler also provides advanced profiling for both SQL and NoSQL databases. It provides specific support for profiling JDBC, JPA/Hibernate, MongoDB, Cassandra, and HBase databases.
The below screenshot shows the JDBC probing interface with a list of current connections:
If we are keen on learning about the call tree of interactions with our database and see connections that may be leaked, JProfiler nicely handles this.
Live Memory is one feature of JProfiler that allows us to see current memory usage by our application. We can view memory usage for object declarations and instances or for the full call tree.
In the case of the allocation call tree, we can choose to view the call tree of live objects, garbage-collected objects, or both. We can also decide if this allocation tree should be for a particular class or package or all classes.
The screen below shows the live memory usage by all objects with instance counts:
YourKit also comes in handy when we want to profile thrown exceptions. We can easily find out what types of exceptions were thrown and the number of times each exception occurred.
YourKit has an interesting CPU profiling feature that allows focused profiling on certain areas of our code such as methods or subtrees in threads. This is very powerful as it allows for conditional profiling through its what-if feature.
Figure 5 shows an example of the thread-profiling interface:
We can also profile SQL, and NoSQL database calls with YourKit. It even provides a view for actual queries that were executed.
Though this is not a technical consideration, the permissive licensing model of YourKit makes it a good choice for multi-user or distributed teams, as well as for single-license purchases.
4. Java VisualVM
Java VisualVM is a simplified yet robust profiling tool for Java applications. By default, this tool is bundled with Sun’s distribution of the Java Development Kit (JDK). Its operation relies on other standalone tools provided in the JDK, such as JConsole, jstat, jstack, jinfo, and jmap.
Below, we can see a simple overview interface of an ongoing profiling session using Java VisualVM:
One interesting advantage of Java VisualVM is that we can extend it to develop new functionalities as plugins. We can then add these plugins to Java VisualVM’s built-in update center.
Java VisualVM supports local and remote profiling, as well as memory and CPU profiling. Connecting to remote applications requires providing credentials (hostname/IP and password as necessary), but it does not provide support for ssh tunneling. We can also enable real-time profiling with instant updates (typically every two seconds).
Below, we can see the memory outlook of a Java application profiled using Java VisualVM:
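Because VisualVM builds on scriptable JDK tools like jstat, you can also pull the same numbers into your own tooling. Below is a Python sketch that parses `jstat -gc <pid>` output (a header line followed by a value line) into a dict; the sample columns are a subset from a Java 8 JDK, and the exact column set varies across JDK versions:

```python
def parse_jstat_gc(output):
    """Turn jstat's header/value pair into a {column: value} dict."""
    lines = [l for l in output.strip().splitlines() if l.strip()]
    headers = lines[0].split()
    values = [float(v) for v in lines[1].split()]
    return dict(zip(headers, values))

# Abbreviated sample output of `jstat -gc <pid>` (Java 8 column names).
sample = """\
 S0C    S1C    S0U    S1U      EC       EU        OC         OU       YGC     YGCT    FGC    FGCT     GCT
512.0  512.0   0.0   320.0   4416.0   1473.3   10944.0     6397.9      5     0.046     1     0.148    0.194
"""
stats = parse_jstat_gc(sample)
print(stats["YGC"], stats["FGC"])  # young- and full-GC counts: 5.0 1.0
```

In practice you would capture the output with `subprocess.run(["jstat", "-gc", pid], ...)` and feed it to the parser on a timer.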
NetBeans Profiler is also a good choice for lightweight development and profiling. It provides a single window for configuring and controlling the profiling session and displaying the results. It also offers a unique feature: showing how often garbage collection occurs.
6. Other Solid Profilers
Some honorable mentions here are Java Mission Control, New Relic, and Prefix (from Stackify). These have less market share overall, but they definitely deserve a mention. For example, Stackify’s Prefix is an excellent lightweight profiling tool, well suited for profiling not only Java applications but other web applications as well.
In this write-up, we discussed profiling and Java profilers. We looked at the features of each profiler and what informs the choice of one over another.
There are many Java profilers available, some with unique characteristics. As we’ve seen in this article, the choice of which Java profiler to use depends mostly on a developer’s preferred tools, the level of analysis required, and the features of the profiler.