
Infix Functions in Kotlin


1. Introduction

Kotlin is a language that adds many fresh features to allow writing cleaner, easier-to-read code.

This, in turn, makes our code significantly easier to maintain and allows for a better end result from our development. Infix notation is one such feature.

2. What Is Infix Notation?

Kotlin allows some functions to be called without using the dot and parentheses. These are called infix functions, and their use can result in code that reads much more like natural language.

This is most commonly seen in the inline Map definition:

mapOf(
  1 to "one",
  2 to "two",
  3 to "three"
)

“to” might look like a special keyword, but in this example it is the to() function being called with infix notation, returning a Pair<A, B>.
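To make that concrete, a minimal sketch (standard library only) showing that an infix call is just ordinary method-call syntax:

```kotlin
fun main() {
    // "to" is defined in the stdlib roughly as:
    // infix fun <A, B> A.to(that: B): Pair<A, B> = Pair(this, that)
    val infixPair = 1 to "one"     // infix notation
    val normalPair = 1.to("one")   // equivalent explicit call
    println(infixPair == normalPair)  // true
    println(infixPair.first)          // 1
    println(infixPair.second)         // one
}
```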

3. Common Standard Library Infix Functions

Apart from the to() function, used to create Pair<A, B> instances, there are some other functions that are defined as infix.

For example, the various numeric classes – Byte, Short, Int, and Long – all define the bitwise functions and(), or(), shl(), shr(), ushr(), and xor(), allowing some more readable expressions:

val color = 0x123456
val red = (color and 0xff0000) shr 16
val green = (color and 0x00ff00) shr 8
val blue = (color and 0x0000ff) shr 0

The Boolean class defines the and(), or(), and xor() logical functions in a similar way. Note that, unlike && and ||, these infix forms do not short-circuit: both operands are always evaluated:

if ((targetUser.isEnabled and !targetUser.isBlocked) or currentUser.admin) {
    // Do something if the current user is an Admin, or the target user is active
}

The String class also defines the matches and zip functions as infix, allowing some simple-to-read code. Keep in mind that matches tests the regex against the entire string:

"Hello, World" matches "^Hello.*".toRegex()
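The infix zip function pairs up the characters of two strings by position, which also reads naturally; a small sketch:

```kotlin
fun main() {
    // zip is an infix extension on CharSequence: it pairs characters
    // positionally, stopping at the end of the shorter operand
    val pairs = "abc" zip "xyz"
    println(pairs)  // [(a, x), (b, y), (c, z)]
}
```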

There are some other examples that can be found throughout the standard library, but these are possibly the most common.

4. Writing Custom Infix Methods

Often, we’re going to want to write our own infix methods. These can be especially useful, for example, when writing a Domain Specific Language for our application, allowing the DSL code to be much more readable.

Several Kotlin libraries already use this to great effect.

For example, the mockito-kotlin library defines some infix functions — doAnswer, doReturn, and doThrow — for use when defining mock behavior.

Writing an infix function is a simple case of following three rules:

  1. The function is either defined on a class or is an extension method for a class
  2. The function takes exactly one parameter
  3. The function is defined using the infix keyword

As a simple example, let’s define a straightforward Assertion framework for use in tests. We’re going to allow expressions that read nicely from left to right using infix functions:

class Assertion<T>(private val target: T) {
    infix fun isEqualTo(other: T) {
        Assert.assertEquals(other, target)
    }

    infix fun isDifferentFrom(other: T) {
        Assert.assertNotEquals(other, target)
    }
}

This looks simple and doesn’t seem any different from any other Kotlin code. However, the presence of the infix keyword allows us to write code like this:

val result = Assertion(5)

result isEqualTo 5 // This passes
result isEqualTo 6 // This fails the assertion
result isDifferentFrom 5 // This also fails the assertion

Immediately, this is cleaner to read and easier to understand.

Note that infix functions can also be written as extension methods to existing classes. This can be powerful, as it allows us to augment existing classes from elsewhere — including the standard library — to fit our needs.

For example, let’s add a function to a String to pull out all of the substrings that match a given regex:

infix fun String.substringMatches(r: Regex) : List<String> {
    return r.findAll(this)
      .map { it.value }
      .toList()
}

val matches = "a bc def" substringMatches ".*? ".toRegex()
Assert.assertEquals(listOf("a ", "bc "), matches)

5. Summary

This quick tutorial shows some of the things that can be done with infix functions, including how to make use of some existing ones and how to create our own to make our code cleaner and easier to read.

As always, code snippets can be found over on GitHub.


Inspecting Docker container network traffic


When developing dockerized services with communication endpoints other than a browser client, you soon need ways to capture and debug network traffic from containers. Here are some tools and tips I’ve been using.

Capturing traffic

Docker routes all traffic through a network bridge, and by default containers use the bridge named docker0. However, if you are using docker-compose, which by default creates its own bridge for each configuration, or you have otherwise customized Docker networking, the bridge you want to capture will be different.

Use the docker network ls command to list the available Docker networks, and bridge link or ip addr show to find the correct interface for your use case. The rest of this post uses the default docker0.
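For example (the network name my_app_default is illustrative, and the br- interface naming for compose-created bridges should be verified on your own host):

```shell
# List the Docker networks on this host
docker network ls

# For a compose-created network, the host interface is typically named
# "br-" + the first 12 characters of the network ID
docker network inspect --format '{{.Id}}' my_app_default | cut -c1-12

# Confirm the interface exists and see its addresses
ip addr show docker0
```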

The Docker documentation for container networking has more details and information on custom configurations.

Using tcpdump

Tcpdump is a versatile command-line tool for capturing and analyzing network traffic. Try the following to listen to your containers:

tcpdump -i docker0

Or record traffic to a file:

tcpdump -i docker0 -w packets.cap

You could also use Wireshark, a GUI tool for analyzing traffic, which can also be used to view the output from tcpdump.
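To cut down the noise, tcpdump accepts capture filters; a couple of sketches (the interface assumes the default docker0, and the container IP is illustrative):

```shell
# Only HTTP traffic, with numeric addresses and ports (-n disables name resolution)
tcpdump -i docker0 -n 'tcp port 80'

# Only traffic to or from a single container's IP address
tcpdump -i docker0 -n 'host 172.17.0.2'

# Read back a previously recorded capture file (or open it in Wireshark)
tcpdump -n -r packets.cap
```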

There’s still one problem, though. Any sane service handling personal data encrypts its communication, which prevents debugging with simple packet capturing. To view encrypted TLS traffic, we need a man-in-the-middle that transparently decrypts and re-encrypts the traffic.

Setting up transparent HTTP(S) proxy

For a man-in-the-middle setup we need the following:

  1. Proxy
  2. IP packet forwarding to redirect traffic to proxy
  3. The proxy’s CA certificate configured as trusted by the service we are examining

Mitmproxy is a perfect tool for this job.

Packet forwarding and Mitmproxy setup

  1. See the Mitmproxy documentation for installation options, or run it using the official Docker images
  2. Enable packet forwarding in your host system with sysctl:

    sysctl -w net.ipv4.ip_forward=1
    
  3. Use iptables to forward the interesting traffic from the bridge to the proxy. The following forwards HTTP (default port 80) and HTTPS (default port 443) to the proxy listening on port 8080, Mitmproxy’s default.

    iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 80 -j REDIRECT --to-port 8080
    iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 443 -j REDIRECT --to-port 8080
    
  4. Run mitmproxy in transparent mode:

   mitmproxy -T --host

   (In newer Mitmproxy releases, the equivalent invocation is mitmproxy --mode transparent --showhost.)

Configure CA certificates

Now HTTPS clients configured to verify server certificates will fail to connect, which looks like the following in the Mitmproxy event log:

Mitmproxy event log

Mitmproxy generates its CA in the directory $HOME/.mitmproxy, which can be mounted as a volume in your Docker container. If you are running Mitmproxy itself in Docker, mount the volumes from that container.

docker run --volume $HOME/.mitmproxy:/usr/share/ca-certificates/custom some-image

The rest depends on the Linux distribution and the service implementation you are targeting:

  • For Unix system tools in Alpine Linux based containers:

    1. Mount the custom certificates under some directory, e.g. custom, at /usr/share/ca-certificates

      docker run -v ~/.mitmproxy:/usr/share/ca-certificates/custom ...
      
    2. Add custom/mitmproxy-ca-cert.pem to /etc/ca-certificates.conf in your container

      echo custom/mitmproxy-ca-cert.pem >> /etc/ca-certificates.conf
      
    3. Update trusted root certificates by running:

      update-ca-certificates
      
  • NodeJS has supported the NODE_EXTRA_CA_CERTS environment variable since v7.3.0:

  docker run --volume $HOME/.mitmproxy:/opt/extra-ca -e NODE_EXTRA_CA_CERTS=/opt/extra-ca/mitmproxy-ca-cert.pem nodejs
  • Ruby OpenSSL uses the system root certs, or they can be overridden with the SSL_CERT_FILE and SSL_CERT_DIR environment variables
  • Implementations based on, for example, Go, Elixir, or Python use the system root certificates
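Putting the Alpine steps together, a one-shot sketch (the alpine:3 image and example.com target are illustrative; mitmproxy-ca-cert.pem is the file Mitmproxy generates):

```shell
# Sketch: trust the Mitmproxy CA inside an Alpine-based container, then test a TLS fetch
docker run -v $HOME/.mitmproxy:/usr/share/ca-certificates/custom alpine:3 sh -c '
  apk add --no-cache ca-certificates &&
  echo custom/mitmproxy-ca-cert.pem >> /etc/ca-certificates.conf &&
  update-ca-certificates &&
  wget -qO /dev/null https://example.com && echo "TLS OK"'
```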

After these steps it is possible to examine TLS traffic.

Mitmproxy showing response headers from TLS encrypted communication

Extra tricks with mitmproxy

The possibilities with Mitmproxy are not limited to inspection. For example, see the official documentation for how to edit a request or response before the proxy passes it on to the client or server.


Essential (and free) security tools for Docker


Docker makes it easy for developers to package up and push out application changes, and spin up run-time environments on their own. Maybe too easy.

With Docker, developers can make their own decisions on how to configure and package applications. But this also means that they can make simple but dangerous mistakes that will leave the system unsafe without anyone noticing until it is too late.

Fortunately, there are some good tools that can catch many of these problems early, as part of your build pipelines and run-time configuration checks. Toni de la Fuente maintains a helpful list of Docker security and auditing tools here.

Unfortunately, many of the open source projects in this list have been shelved or orphaned. So, I want to put together a short list of the essential open source tools that are available today to help you secure your Docker environment.

Check your container configuration settings

As part of your build process and continuous run-time checks, it is important that you enforce safe and consistent configuration defaults for containers and the hosts that they run on.

The definitive guideline for setting up Docker safely is the CIS Docker Benchmark, which lists over 100 recommendations and best practices for hardening the host configuration and Docker daemon configuration (including Swarm configuration settings), file permissions rules, container images and build file management, container runtime settings, and operations practices.

The Docker security team has provided a free tool, Docker Bench for Security, that checks Docker containers against this hardening guide (although the tests are organized a bit differently – the Swarm checks are all run together in a separate section for example). Docker Bench is updated for each release of the CIS benchmark guide, which is updated with each release of Docker, although there tends to be a brief lag.

Docker Bench ships as a small container which runs with high privilege, and executes a set of tests against all containers that it can find. Tests return PASS or WARN (clear fail) status, or INFO (for findings that need to be manually reviewed to see if they match expected results). NOTEs are printed for manual checks that need to be done separately.
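The invocation looks roughly like the following (mount points vary between releases, so check the project README before running; the container needs broad host access in order to audit the daemon):

```shell
# Run Docker Bench for Security against the local Docker daemon.
# Host namespaces and read-only mounts give the tool the visibility it needs.
docker run --rm -it \
  --net host --pid host \
  --cap-add audit_control \
  -v /etc:/etc:ro \
  -v /var/lib:/var/lib:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --label docker_bench_security \
  docker/docker-bench-security
```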

After you run Docker Bench, you will need to work through the fussy, detailed findings and decide what makes sense for your environment. Docker Bench is an auditing tool, designed to be run and reviewed manually. Docker Bench Test shows how you can run Docker Bench in an automated test pipeline, by wrapping it inside the Bats test framework, although unfortunately it hasn’t been updated for a couple of years.

Another free auditing tool from the Docker security team is Actuary. According to Diogo Monica at Docker, Actuary checks the same rules as Docker Bench (for now), but runs across all nodes in a Docker Swarm. Actuary is positioned as a future replacement for Docker Bench: it is written in Go (instead of Bash scripts) and is more extensible, using configurable templates for checking and testing.

Image scanning and policy enforcement

In addition to making sure that your container run-time is configured correctly, you need to ensure that all of the image layers in a container are free from known vulnerabilities. This is done by static scanning of “cold images” in repos, or before they are pushed to a repo, as part of your image build process.

Commercial Docker customers can take advantage of Docker Security Scanning (DSS), formerly known as Nautilus, to automatically and continuously check images in private registries on Docker Hub or Docker Cloud for known vulnerabilities. DSS is also used to scan Official Repositories on Docker Hub.

If you’re using open source Docker, you’ll need to do your own checking. There are a few good open source tools available, all of which work basically the same way:

  • Scan the image (generally a binary scan), pull apart the layers, and build a detailed manifest or bill of materials of the contents
  • Take a snapshot of OS and software package vulnerability data
  • Compare the contents of the image manifest against the list of known vulnerabilities and report any matches

The effectiveness of these security scanning tools depends on:

  1. Depth and completeness of static analysis – the scanner’s ability to see inside image layers and the contents of those layers (packages and files)
  2. Quality of vulnerability feeds – coverage, and how up to date the vulnerability lists are
  3. How results are presented – is it clear what the problem is, where to find it, and what to do about it
  4. De-duplication and whitelisting capabilities to reduce noise
  5. Scanning speed

First, there is Clair from CoreOS, the scanning engine used in the Quay.io public container registry (an alternative to Docker Hub). Clair is a static analysis tool for Docker and appc containers, which scans an image and compares the vulnerabilities found against a whitelist to see if they have already been reviewed and accepted. It can be controlled through a JSON API or CLI.

If you’re using OpenSCAP there is the oscap-docker util which can be used to scan Docker images and running containers for CVEs, and compliance violations against SCAP policy guides.

Anchore is a powerful and flexible automated scanning and policy enforcement engine that is easy to integrate into your CI/CD build pipelines to check for CVEs – and much more – in Docker images. You can create whitelists (to suppress findings that you’ve determined are not exploitable) and blacklists (for required packages or banned packages, and prohibited content such as source code or secrets), as well as custom checks on container or application configuration rules, etc.

Anchore is available as a free SaaS online Navigator for public registries, and an open source engine for on-prem scanning. The scanning engine can be wired into your CI/CD pipelines using the CLI, the REST API, or a Jenkins plugin, to automatically analyze images as changes are checked in, and fail the build if checks don’t pass. A nice overview of running Anchore can be found here.
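As a sketch, a CI step using the anchore-cli client against a running Anchore Engine might look like this (the image name is illustrative, and exact subcommands may differ between versions):

```shell
# Submit an image for analysis and gate the build on the policy result
anchore-cli image add docker.io/library/nginx:latest
anchore-cli image wait docker.io/library/nginx:latest    # block until analysis completes
anchore-cli image vuln docker.io/library/nginx:latest os # list OS-package CVEs
anchore-cli evaluate check docker.io/library/nginx:latest # non-zero exit fails the build
```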

Anchore comes with a built-in set of security and compliance policies, analysis functions and decision gates. You can write your own analysis modules and policies, reports and certification workflows in a high-level language, or extend the analysis engine with custom plugins.

You can also integrate the Anchore scanning engine with Anchore Navigator, so that you can define policies and whitelists using Navigator’s graphical editor. Anchore will subscribe to updates so that you will be automatically notified of new CVEs, or updates to images in public registries.

Anchore (the company) offers premium support subscriptions, and enterprise solutions to discover, explore and analyze images, with additional analysis modules and policies, data feeds, tooling, and workflow integration options.

Another new and ambitious open source container scanner is Dagda. Dagda builds a consolidated vulnerability database, taking snapshots of CVE information from NIST’s NVD, publicly-reported security bugs in the SecurityFocus Bugtraq database, and known exploits from the Offensive Security database, and uses OWASP Dependency Check and Retire.JS to analyze dependencies, to identify known security vulnerabilities in Docker images. Dagda can be controlled through the command line or its REST API, and keeps a history of all checks for auditing and trend analysis.

It also runs ClamAV against Docker images to check for trojans and other malware, and integrates with Sysdig’s powerful (and free) Falco run-time anomaly checker to monitor containers on Linux hosts. Falco is installed as an agent on each host, which taps into kernel syscalls and filters against rules in a signature database to identify suspicious activity and catch attacks or operational problems on the host and inside containers.

Dagda throws everything but the kitchen sink at container security. It is a lot of work to set this up and keep all of it working, but it shows you how far you can go without having to roll out a commercial container protection solution like Twistlock or AquaSec.

Don’t leave container security up to chance

What makes Docker so compelling is also what makes it dangerous: it takes work and decisions out of ops hands, and gives it to developers who may not understand (or care about) the details or why they are important. Using Docker moves responsibility for packaging and configuring application run-times from ops (who are responsible for making sure that this is done carefully and safely) to developers (who want to get it done quickly and simply).

This is why it is so important to add checks that can be run continuously to catch mistakes and known vulnerabilities in dependencies, and to enforce security and compliance policies when changes are made. The tools listed here can help you to reduce operational risks, without getting in the way of teams getting valuable work done.


Using CSS Grid: Supporting Browsers Without Grid



When using any new CSS, the question of browser support has to be addressed. This is even more of a consideration when new CSS is used for layout as with Flexbox and CSS Grid, rather than things we might consider an enhancement.


In this article, I explore approaches to dealing with browser support today. What are the practical things we can do to allow us to use new CSS now and still give a great experience to the browsers that don't support it?

The post Using CSS Grid: Supporting Browsers Without Grid appeared first on Smashing Magazine.


Ten Extras for Great API Documentation


If you manage to create amazing API documentation and ensure that developers have a positive experience implementing your API, they will sing the praises of your product. Continuously improving your API documentation is an investment, but it can have a huge impact. Great documentation builds trust, differentiates you from your competition, and provides marketing value.

I’ve shared some best practices for creating good API documentation in my article “The Ten Essentials for Good API Documentation.” In this article, I delve into some research studies and show how you can both improve and fine-tune different aspects of your API documentation. Some of these extras, like readability, are closer to essentials, while others are more of a nice-to-have, like personality. I hope they give you some ideas for building the best possible docs for your product.

Overview page

Whoever visits your API documentation needs to be able to decide at first glance whether it is worth exploring further. You should clearly show:

  • what your API offers (i.e., what your products do);
  • how it works;
  • how it integrates;
  • and how it scales (i.e., usage limits, pricing, support, and SLAs).
Screenshot: The homepage of Spotify's API documentation.
Spotify’s API documentation clearly states what the API does and how it works, and it provides links to guides and API references organized in categories.

An overview page targets all visitors, but it is especially helpful for decision-makers. They have to see the business value: explain to them directly why a company would want to use your API.

Developers, on the other hand, want to understand the purpose of the API and its feature set, so they tend to turn to the overview page for conceptual information. Show them the architecture of your API and the structure of your docs. Include an overview of different components and an introduction to the request-response behavior (i.e., how to integrate, how to send requests, and how to process responses). Provide information on the platforms on which the API is running (e.g., Java) and possible interactions with other platforms.

As the study “The role of conceptual knowledge in API usability” found, without conceptual knowledge, developers struggle to formulate effective queries and to evaluate the relevance or meaning of content they find. That’s why API documentation should not only include detailed examples of API use, but also thorough introductions to the concepts, standards, and ideas in an API’s data structures and functionality. The overview page can be an important component to fulfill this role.

Screenshot: Braintree's API overview page has an illustration showing how it works.
Braintree’s API overview page provides a clear overview of their SDKs, along with a visual step-by-step explanation of how their API works.

Examples

For some developers, examples play a more important role in getting started with an API than the explanations of calls and parameters.

A recent study, “Application Programming Interface Documentation—What Do Software Developers Want?,” explored how software developers interact with API documentation: what their goals are, how they learn, where they look for information, and how they judge the quality of API docs.

The role of examples

The study found that after conducting an initial overview of what the API does and how it works, developers approach learning about the API in two distinct ways: some follow a top-down approach, where they try to build a thorough understanding of the API before starting to implement specific use cases, while others prefer to follow a bottom-up approach, where they start coding right away.

This latter group has a code-oriented learning strategy; they start learning by trying and extending code examples. Getting into an API is most often connected with a specific task. They look for an example that has the potential of serving as a basis to solve their problem, but once they’ve found the solution they were looking for, they usually stop learning.

Examples are essential for code-oriented learners, but all developers benefit from them. The study showed that developers often trust examples more than documentation, because if they work, they can’t be outdated or wrong. Developers often struggle with finding out where to start and how to begin with a new API—examples can become good entry points in this case. Many developers can grasp information more easily from code than text, and they can re-use code in examples for their own implementation. Examples also play other roles that are far from obvious: they automatically convey information about dependencies and prerequisites, they help identify relevant sections in the documentation when developers are scanning the page, and they intuitively show developers how code that uses the API should look.

Improve your examples

Because examples are such a crucial component in API documentation, better examples mean better docs.

To ensure the quality of your examples, they should be complete, be programmed professionally, and work correctly. Because examples convey so much more than the actual use case, make sure to follow the style guidelines of the respective community and show best-practice approaches. Add brief, informative explanations; although examples can be self-explanatory, comments included with sample code help comprehension.

Add concrete, real-life examples whenever you can. If you don’t have real examples, make sure they at least look real: use realistic variable names and functions instead of abstract ones.

When including examples, you have a variety of formats and approaches to choose from: auto-generated examples, sample applications, client libraries, and examples in multiple languages.

Auto-generated examples

Autodoc tools like Swagger Codegen and API Blueprint automatically generate documentation from your source code and keep it up-to-date as the code changes. Use them to generate reference libraries and sample code snippets, but be aware that what you produce this way is only skeleton—not fleshed out—documentation. You will still have to add explanations, conceptual information, quick-start guides, and tutorials, and you should still pay attention to other aspects like UX and good-quality copy.
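For instance, generating a static HTML reference from an OpenAPI spec with the Swagger Codegen CLI might look like this (the spec filename is illustrative):

```shell
# Generate static HTML API reference docs from an OpenAPI/Swagger spec
swagger-codegen generate -i api-spec.yaml -l html2 -o docs/
```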

On the Readme blog, they explore autodoc tools and their limitations in more depth through a couple of real-world examples.

Sample applications

Working applications that use the API are a great way to show how everything works together and how the API integrates with different platforms and technologies. They are different from sample code snippets, because they are stand-alone solutions that show the big picture. As such, they are helpful to developers who would like to see how a full implementation works and to have an overall understanding of how everything in the API ties together. On the other hand, they are real products that showcase the services and quality of your API to decision makers. Apple’s iOS Developer Portal offers buildable, executable source examples of how to accomplish a task using a particular technology in a wide variety of categories.

Client libraries

Client libraries are chunks of code that developers can add to their own development projects. They are usually available in various programming languages, and cover basic functionality for an application to be able to interact with the API. Providing them is an extra feature that requires ongoing investment from the API provider, but doing so helps developers jump-start their use of the API. GitHub follows the practical approach of offering client libraries for the languages that are used the most with their API, while linking to unsupported, community-built libraries written in other, less popular languages.

Examples in multiple languages

APIs are platform- and language-independent by nature. Developers can use an API’s services with the language of their choice, but this means good documentation should cover at least the most popular languages used with that particular API (e.g., C#, Java, JavaScript, Go, Objective-C, PHP, Python, Ruby, and Swift). Not only should you provide sample code and sample applications in different languages, but also these samples should reflect the best-practice approach for each language.

Usability

API documentation is a tool that helps developers and other stakeholders do their job. You should adapt it to the way people use it, and make it as easy to use as possible. Consider the following factors:

  • Copy and paste: Developers copy and paste code examples to use them as a starting point for their own implementation. Make this process easier with either a copy button next to relevant sections or by making sections easy to highlight and copy.
  • Sticky navigation: When implemented well, fixing the table of contents and other navigation to the page view can prevent users from getting lost and having to scroll back up.
  • Clicking: Minimize clicking by keeping related topics close to each other.
  • Language selector: Developers should be able to see examples in the language of their choice. Put a language selector above the code examples section, and make sure the page remembers what language the user has selected.
  • URLs: Single page views can result in very long pages, so make sure people can link to certain sections of the page. If, however, a single page view doesn’t work for your docs, don’t sweat it: just break different sections into separate pages.
    Screenshot: A specific section of the Stripe API documents with the location bar showing that the URL has changed.
    Great usability: Stripe’s API documentation changes the URL dynamically as you scroll through the page.

    Another best practice from Stripe: the language selector also changes the URL, so URLs link to the right location in the right language.

  • Collaboration: Consider allowing users to contribute to your docs. If you see your users edit your documentation, it indicates there might be room for improvement—in those parts of your docs or even in your code. Additionally, your users will see that issues are addressed and the documentation is frequently updated. One way to facilitate collaboration is to host your documentation on GitHub, but be aware that this will limit your options of interactivity, as GitHub hosts static files.

Interactivity

Providing an option for users to interact with your API through the documentation will greatly improve the developer experience and speed up learning.

First, provide a working test API key or, even better, let your users log in to your documentation site and insert their own API key into sample commands and code. This way they can copy, paste, and run the code right away.

As a next step, allow your users to make API calls directly from the site itself. For example, let them query a sample database, modify their queries, and see the results of these changes.

A more sophisticated way to make your documentation more interactive is by providing a sandbox—a controlled environment where users can test calls and functions against known resources, manipulating data in real-time. Developers learn through the experience of interacting with your API in the sandbox, rather than by switching between reading your docs and trying out code examples themselves. Nordic APIs explains the advantages of sandboxing, discusses the role of documentation in a sandboxed environment, and shows a possible implementation. To see a sandbox in action, try out the one on Dwolla’s developer site.

Help

The study exploring how software developers interact with API documentation also explored how developers look for help. In a natural work environment, they usually turn to their colleagues first. Then, however, many of them tend to search the web for answers instead of consulting the official product documentation. This means you should ensure your API documentation is optimized for search engines and will turn up relevant results in search queries.

To make sure you have the necessary content available for self-support, include FAQs and a well-organized knowledge base. For quick help and human interaction, provide a contact form, and—if you have the capacity—a help-desk solution right from your docs, e.g., a live chat with support staff.

The study also pointed at the significant role Stack Overflow plays: most developers interviewed mentioned the site as a reliable source of self-help. You can also support your developers’ self-help efforts and sense of community by adding your own developer forum to your developer portal or by providing an IRC or Slack channel.

Changelog

As with all software, APIs change and are regularly updated with new features, bug fixes, and performance improvements.

When a new version of your API comes out, you have to inform the developers working with your API about the changes so they can react to them accordingly. A changelog, also called release notes, includes information about current and previous versions, usually ordered by date and version number, along with associated changes.

If there are changes in a new version that can break old use of an API, put warnings on top of relevant changelogs, even on top of your release notes page. You can also bring attention to these changes by highlighting or marking them permanently.

To keep developers in the loop, offer an RSS feed or newsletter subscription where they can be notified of updates to your API.

Besides the practical aspect, a changelog also serves as a trust signal that the API and its documentation are actively maintained, and that the information included is up-to-date.
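As an illustration, a changelog entry might follow a "Keep a Changelog"-style format, with breaking changes flagged at the top. (The version numbers, dates, and endpoints below are invented for the example.)

```markdown
## [2.1.0] - 2018-03-14

### Breaking changes
- The `/v1/users` endpoint now returns paginated results; clients that
  expect the full list in a single response must follow the `next` link.

### Added
- New `filter` query parameter on `/v1/transactions`.

### Fixed
- `POST /v1/payments` no longer fails when the `memo` field is empty.
```

Grouping entries by change type (added, fixed, breaking) lets developers scan a release for exactly the category of change that affects them.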

Analytics and feedback

You can do some research by getting to know your current and potential clients, talking to people at conferences, exploring your competition, and even conducting surveys. Still, you will have to go with a lot of assumptions when you first build your API docs.

When your docs are up, however, you can start collecting usage data and feedback to learn how you can improve them.

Find out about the most popular use cases through analytics. See which endpoints are used the most and make sure to prioritize them when working on your documentation. Get ideas for tutorials, and see which use cases you haven’t covered yet with a step-by-step walkthrough from developer community sites like Stack Overflow or your own developer forums. If a question regarding your API pops up on these channels and you see people actively discussing the topic, you should check if it’s something that you need to explain in your documentation.

Collect information from your support team. Why do your users contact them? Are there recurring questions that they can’t find answers for in the docs? Improve your documentation based on feedback from your support team and see if you have been successful: have users stopped asking the questions you answered?

Listen to feedback and evaluate how you could improve your docs based on it. Feedback can come through many different channels: workshops, training sessions, blog posts and comments about your API, conferences, interviews with clients, or usability studies.

Readability

Readability is a measure of how easily a reader can understand a written text—it includes visual factors like the look of fonts, colors, and contrast, and contextual factors like the length of sentences, wording, and jargon. People consult documentation to learn something new or to solve a problem. Don’t make the process harder for them with text that is difficult to understand.

While generally you should aim for clarity and brevity from the get-go, there are some specific aspects you can work on to improve the readability of your API docs.

Audience: Expect that not all of your users will be developers and that even developers will have vastly different skills and background knowledge about your API and business domain. To cater to the different needs of different groups in your target audience, explain everything in detail, but provide ways for people already familiar with the functionality to quickly find what they are looking for, e.g., add a logically organized quick reference.

Wording: Explain everything as simply as you can. Use short sentences, and make sure to be consistent with labels, menu names, and other textual content. Include a clear, straightforward explanation for each call. Avoid jargon if possible, and if not, link to domain-related definitions the first time you use them. This way you can make sure that people unfamiliar with your business domain get the help they need to understand your API.

Fonts: Both the font size and the font type influence readability. Have short section titles and use title case to make it easier to scan them. For longer text, use sans serif fonts. In print, serif fonts make reading easier, but online, serif characters can blur together. Opt for fonts like Arial, Helvetica, Trebuchet, Lucida Sans, or Verdana, which was designed specifically for the web. Contrast plays an important role as well: the higher the contrast, the easier the text is to read. Consider using a slightly larger font size and different typeface for code than for text to help your users’ visual orientation when switching back and forth between their code editor and your documentation.

Structure: API documentation should cater to newcomers and returning visitors alike (e.g., developers debugging their implementation). A logical structure that is easy to navigate and that allows for quick reference works for both. Have a clear table of contents and an organized list of resources, and make sections, subsections, error cases, and display states directly linkable.
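In practice, making sections and error cases directly linkable usually comes down to giving each heading a stable anchor. A minimal sketch in HTML (the heading text, URL, and error case are hypothetical examples):

```html
<!-- A stable id makes this error-case section directly linkable,
     e.g. https://example.com/docs/errors#rate-limiting -->
<h3 id="rate-limiting">Rate limiting</h3>
<p>Requests beyond the allowed rate return <code>429 Too Many Requests</code>.</p>

<!-- Elsewhere in the docs, deep-link straight to that section: -->
<a href="/docs/errors#rate-limiting">See the rate-limiting error case</a>
```

Stable anchors let developers bookmark and share the exact subsection they need instead of pointing colleagues at a whole page.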

[Screenshot: hovering over specific arguments in Stripe's API reference reveals a link icon. Caption: Great usability—linkability demonstrated on Stripe's API documentation.]

Scannability: As Steve Krug claims in his book about web usability, Don’t Make Me Think, one of the most important facts about web users is that they don’t read, they scan. To make text easier to scan, use short paragraphs, highlight relevant keywords, and use lists where applicable.

Accessibility: Strive to make your API docs accessible to all users, including users who access your documentation through assistive technology (e.g., screen readers). Be aware that screen readers may often struggle with reading code and may handle navigation differently, so explore how screen readers read content. Learn more about web accessibility, study Web Content Accessibility Guidelines, and do your best to adhere to them.

Personality

You’ve worked hard to get to know your audience and follow best practices to leave a good impression with your API docs. Now, as a finishing touch, you can make sure your docs “sound” and look in tune with your brand.

Although API documentation and technical writing in general don’t provide much room for experimentation in tone and style, you can still instill some personality into your docs:

  • Use your brand book and make sure your API docs follow it to the letter.
  • A friendly tone and simple style can work wonders. Remember, people are here to learn about your API or solve a problem. Help them by talking to them in a natural manner that is easy to understand.
  • Add illustrations that help your readers understand any part of your API. Show how different parts relate to each other; visualize concepts and processes.
  • Select your examples carefully so that they reflect on your product the way you want them to. Playful implementations of your API will create a different impression than more serious or enterprise use cases. If your brand allows, you can even have some fun with examples (e.g., funny examples and variable names), but don’t go overboard.
  • You can insert some images (beyond illustrations) where applicable, but make sure they add something to your docs and don’t distract readers.

Think developer portal—and beyond

Although where you draw the line between API documentation and developer portal is still up for debate, most people working in technical communication seem to agree that a developer portal is an extension of API documentation. Originally, API documentation meant strictly the reference docs only, but then examples, tutorials, and guides for getting started became part of the package; yet we still called them API docs. As the market for developer communication grows, providers strive to extend the developer experience beyond API documentation to a full-fledged developer portal.

In fact, some of the ideas discussed above—like a developer forum or sandboxes—already point in the direction of building a developer portal. A developer portal is the next step in developer communication, where besides giving developers all the support they need, you start building a community. Developer portals can include support beyond docs, like a blog or videos. If it fits into the business model, they can also contain an app store where developers submit their implementations and the store provides a framework for them to manage the whole sales process. Portals connected to an API often also contain a separate area with landing pages and showcases where you can directly address other stakeholders, such as sales and marketing.

Even if you’re well into building your developer portal, you can still find ways to learn more and extend your reach. Attend and present at conferences like DevRelCon, Write The Docs, or API The Docs to get involved in developer relations. Use social media: tweet, join group discussions, or send a newsletter. Explore the annual Stack Overflow Developer Survey to learn more about your main target audience. Organize code and documentation sprints, training sessions, and workshops. Zapier has a great collection of blogs and other resources you can follow to keep up with the ever-changing API economy—you will surely find your own sources of inspiration as well.

I hope “The Ten Essentials for Good API Documentation” and this article gave you valuable insight into creating and improving your API documentation and inspired you to get started or keep going.

walokra
107 days ago
Good overview of what great API documentation should cover.

Angular 5 vs. React – Who’s one step ahead?

1 Comment

Deciding which JavaScript framework is best for your web application is never easy, especially if you have to choose between Angular and React. We talked with Dr. Marius Hofmeister and Stephan Rauh about the advantages of Angular and React compared to other frameworks, and about when to use them.

The post Angular 5 vs. React – Who’s one step ahead? appeared first on JAXenter.

walokra
115 days ago
"Angular and React are solving the same problems with different approaches."

"Angular is a full-stack framework that has solutions for almost each and every aspect of frontend development. React, on the other hand, is used mainly for building components and displaying them properly and efficiently. "

"In Angular, you pay for safety by sacrificing flexibility."

"Does your team have years of experience in JavaScript? Then React is probably the best choice."

"Teams with a strong Java background usually feel more comfortable using Angular."

"There really are problems when using existing JavaScript libraries with TypeScript."