JSON, Homoiconicity, and Database Access

During a recent review of an internal web application based on the Node.js platform, we discovered that combining JavaScript Object Notation (JSON) and database access (database query generators or object-relational mappers, ORMs) creates interesting security challenges, particularly for JavaScript programming environments.

To see why, we first have to examine traditional SQL injection.

Traditional SQL injection

Most programming languages do not track where strings and numbers come from. Looking at a string object, it is not possible to tell if the object corresponds to a string literal in the source code, or input data which was read from a network socket. Combined with certain programming practices, this lack of discrimination leads to security vulnerabilities. Early web applications relied on string concatenation to construct SQL queries before sending them to the database, using Perl constructs like this to load a row from the users table:

# WRONG: SQL injection vulnerability
  $dbh->selectrow_hashref(
    "SELECT * FROM users WHERE users.user = '$user'");

But if the externally supplied value for $user is "'; DROP TABLE users; --", instead of loading the user, the database may end up deleting the users table, due to SQL injection. Here’s the effective SQL statement after expansion of such a value:

  SELECT * FROM users WHERE users.user = ''; DROP TABLE users; --'

Because the provenance of strings is not tracked by the programming environment (as explained above), the SQL database driver only sees the entire query string and cannot easily reject such crafted queries.

Experience showed again and again that simply trying to avoid pasting untrusted data into query strings did not work. Too much data which looks trustworthy at first glance turns out to be under external control. This is why current guidelines recommend employing parametrized queries (sometimes also called prepared statements), where the SQL query string is (usually) a string literal, and the variable parameters are kept separate, combined only in the database driver itself (which has the necessary database-specific knowledge to perform any required quoting of the variables).
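To make the contrast concrete, here is a minimal Node.js sketch. No real database is involved; we only construct the strings, and the client.query call mentioned in the comment assumes the node-postgres driver's placeholder style:

```javascript
// Demonstration of the difference between concatenation and parameters.
const user = "'; DROP TABLE users; --";   // externally supplied value

// WRONG: concatenation splices the attacker-controlled text into the
// statement itself, changing its structure.
const unsafe = "SELECT * FROM users WHERE users.user = '" + user + "'";

// RIGHT (in spirit): a parametrized API keeps the statement constant and
// passes the values separately; the driver performs any required quoting.
// With the node-postgres driver this would be invoked roughly as:
//   client.query("SELECT * FROM users WHERE users.user = $1", [user]);
const statement = "SELECT * FROM users WHERE users.user = $1";
const params = [user];
```

The attacker-controlled text never becomes part of the statement; it only ever travels in the parameter list.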

Homoiconicity and Query-By-Example

Query-By-Example is a way of constructing database queries based on example values. Consider a web application as an example. It might have a users table, containing columns such as user_id (a serial primary key), name, password (we assume the password is stored in the clear, although this practice is highly questionable), a flag that indicates whether the user is an administrator, a last_login column, and several more.

We could describe a concrete row in the users table like this, using JavaScript Object Notation (JSON):

  {
    "user_id": 1,
    "name": "admin",
    "password": "secret",
    "is_admin": true,
    "last_login": 1431519292
  }

The query-by-example style of writing database queries takes such a row descriptor, omits some unknown parts, and treats the rest as the column values to match. We could check user name and password during a login operation like this:

  {
    "name": "admin",
    "password": "secret"
  }

If the database returns a row, we know that the user exists, and that the login attempt has been successful.

But we can do better. With some additional syntax, we can even express query operators. We could select the regular users who have logged in today (“1431475200” refers to midnight UTC, and "$gte" stands for “greater or equal”) with this query:

  {
    "last_login": {"$gte": 1431475200},
    "is_admin": false
  }

This is in fact the query syntax used by Sequelize, an object-relational mapping (ORM) tool for Node.js.

This achieves homoiconicity: roughly speaking, a property of a programming environment in which code (here: database queries) and data look very much alike and can be manipulated with similar programming language constructs. Homoiconicity is often hailed as a primary design achievement of the programming language Lisp. It makes query construction with the Sequelize toolkit particularly convenient. But it also means that there are no clear boundaries between code and data, similar to the old way of constructing SQL query strings using string concatenation, as explained above.
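To illustrate the point with a minimal sketch (not actual Sequelize code): because such query fragments are plain JavaScript objects, they can be assembled with ordinary data-manipulation operations, and nothing marks where "code" ends and data begins. The loggedInSince helper is hypothetical:

```javascript
// A query filter is just an object; operators like $gte are just keys.
const base = { is_admin: false };

// Hypothetical helper that adds a "logged in since ts" condition.
function loggedInSince(ts) {
  return { last_login: { $gte: ts } };
}

// Ordinary object merging produces the complete query-by-example filter:
const where = Object.assign({}, base, loggedInSince(1431475200));
// where: { is_admin: false, last_login: { $gte: 1431475200 } }
```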

Getting JSON To The Database

Some server-side programming frameworks, notably those for Node.js, automatically decode bodies of POST requests of content type application/json into JavaScript objects. In the case of Node.js, these objects are indistinguishable from other objects created by the application code. In other words, there is no marker class or other attribute which makes it possible to tell apart objects which come from inputs and objects which were created by (for example) object literals in the source.
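This indistinguishability can be shown in a few lines of plain Node.js, with no framework involved:

```javascript
// An object decoded from an (attacker-controlled) request body...
const fromNetwork = JSON.parse('{"name": {"$gte": ""}}');

// ...and an object written as a literal in the application source:
const fromLiteral = { name: { $gte: "" } };

// Both are plain objects with the same prototype and the same contents;
// no property or class records that the first one arrived over the network.
```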

Here is a simple example of a hypothetical login request. When Node.js processes the POST request shown first, it assigns a JavaScript object to the req.body field in exactly the same way the application code shown after it does.

POST request:

  POST /user/auth HTTP/1.0
  Content-Type: application/json

  {"name": "admin", "password": "secret"}

Application code:

  req.body = {
    name: "admin",
    password: "secret"
  };

In a Node.js application using Sequelize, the application would first define a model User, and then use it as part of the authentication procedure, in code similar to this (for the sake of this example, we still assume the password is stored in plain text; the reason for that will become clear immediately):

  User.findOne({
    where: {
      name: req.body.name,
      password: req.body.password
    }
  }).then(function (user) {
    if (user) {
      // We got a user object, which means that login was successful.
    } else {
      // No user object, login failure.
    }
  });

The query-by-example part is the object passed as the where: value.

However, this construction has a security issue which is very difficult to fix. Suppose that the POST request looks like this instead:

POST /user/auth HTTP/1.0
Content-Type: application/json

  {
    "name": {"$gte": ""},
    "password": {"$gte": ""}
  }

This means that Sequelize will be invoked with this query (the operator objects under name and password are exactly the data that came from the POST request, but nothing about them tells the Sequelize code that):

  User.findOne({
    where: {
      name: {"$gte": ""},
      password: {"$gte": ""}
    }
  })

Sequelize will translate this into a query similar to this one:

SELECT * FROM users WHERE name >= '' AND password >= '';

Any string is greater than or equal to the empty string, so this query will find any user in the system, regardless of the user name or password. Unless there are other constraints imposed by the application, this allows an attacker to bypass authentication.

What can be done about this? Unfortunately, not much. Validating POST request contents and checking that all the values passed to database queries are of the expected type (string, number, or Boolean) can mitigate individual injection issues, but the experience with SQL injection mentioned at the beginning of this post suggests that this is not likely to work out in practice, particularly in Node.js, where so much data is exposed as JSON objects. Another option would be to break homoiconicity and mark in the query syntax where the query structure ends and the data begins. Getting this right is a bit tricky. Other Node.js database frameworks do not describe query structure in terms of JSON objects at all; Knex.js and Bookshelf.js are in this category.
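A sketch of the per-field validation approach (the requireString helper is hypothetical, not part of any framework):

```javascript
// Reject anything that is not a plain string before it reaches the
// query builder; a crafted {"$gte": ""} operator object fails this test.
function requireString(value) {
  if (typeof value !== "string") {
    throw new TypeError("expected a string, got " + typeof value);
  }
  return value;
}

// Intended use (req stands for a hypothetical request object):
//   where: {
//     name: requireString(req.body.name),
//     password: requireString(req.body.password)
//   }
```

The difficulty, as noted above, is that every single query site needs this treatment, and a single forgotten field reopens the hole.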

Due to the prevalence of JSON, such issues are most likely to occur within Node.js applications and frameworks. However, already in July 2014, Kazuho Oku described a JSON injection issue in the SQL::Maker Perl package, discovered by his colleague Toshiharu Sugiyama.

Update (2015-05-26): After publishing this blog post, we learned that a very similar issue has also been described in the context of MongoDB: Hacking NodeJS and MongoDB.

Other fixable issues in Sequelize

Sequelize overloads the findOne method with a convenience feature for primary-key based lookup. This encourages programmers to write code like this:

User.findOne(req.body.user_id).then(function (user) {
  … // Process results.
});

This allows attackers to ship a complete query object (with the “{where: …}” wrapper) in a POST request. Even with strict query-by-example queries, this can be abused to probe the values of normally inaccessible table columns. This can be done efficiently using comparison operators (with one bit leaking per query) and binary search.
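The probing technique can be sketched abstractly: treat each query as an oracle that answers "is the hidden value greater than or equal to x?", then binary-search. The probe function below stands in for a network round trip; everything here is purely illustrative.

```javascript
// Recover a hidden integer in [lo, hi] given an oracle probe(x) that
// answers whether the hidden value is >= x (one bit per "query").
function recoverSecret(probe, lo, hi) {
  while (lo < hi) {
    const mid = Math.ceil((lo + hi) / 2);
    if (probe(mid)) {
      lo = mid;        // hidden value is >= mid
    } else {
      hi = mid - 1;    // hidden value is < mid
    }
  }
  return lo;
}
```

Each call to probe corresponds to one {"$gte": x} query that either matches or does not, so a 32-bit column value leaks in roughly 32 requests.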

But there is another issue. This construct

  User.findOne({
    where: "user_id IN (SELECT user_id " +
      "FROM blocked_users WHERE unblock_time IS NULL)"
  }).then(function (user) {
    … // Process results.
  });

pastes the where string directly into the generated SQL query (here it is used to express something that would be difficult to do directly in Sequelize, say, because the blocked_users table is not modeled). With the “findOne(req.body.user_id)” example above, a POST request such as

POST /user/auth HTTP/1.0
Content-Type: application/json

{"user_id":{"where":"0=1; DROP TABLE users;--"}}

would result in a generated query, with the 0=1; DROP TABLE users;-- fragment coming from the request:

SELECT * FROM users WHERE 0=1; DROP TABLE users;--;

(This will not work with some databases and database drivers which reject multi-statement queries. In such cases, fairly efficient information leaks can be created with sub-queries and a binary search approach.)

This is not a defect in Sequelize; it is a deliberate feature. Perhaps it would be better if this functionality were not reachable with plain JSON objects. Sequelize already supports marker objects for including literals, and a similar marker object could be used for verbatim SQL.
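A sketch of the marker-object idea (the SqlLiteral class and acceptWhere function are hypothetical; Sequelize's actual literal marker differs):

```javascript
// Verbatim SQL is only honored when wrapped in a dedicated class.
// JSON.parse can only ever produce plain objects, arrays, and
// primitives, so decoded request bodies can never forge this marker.
class SqlLiteral {
  constructor(sql) { this.sql = sql; }
}

function acceptWhere(where) {
  if (where instanceof SqlLiteral) {
    return where.sql;   // deliberate, in-source verbatim SQL
  }
  if (typeof where !== "object" || where === null) {
    throw new TypeError("raw SQL strings are not accepted");
  }
  return where;         // ordinary query-by-example object
}
```

Because the marker is a class instance rather than a JSON shape, the boundary between code and data that homoiconicity erased is restored at exactly one point.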

The Sequelize upstream developers have mitigated the first issue in version 3.0.0. A new method, findById (with an alias, findByPrimary), has been added which queries exclusively by primary keys (“{where: …}” queries are not supported). At the same time, the search-by-primary-key automation has been removed from findOne, forcing applications to choose explicitly between primary key lookup and full JSON-based query expressions. This explicit choice means that the second issue (although not completely removed from version 3.0.0) is no longer directly exposed. But as expected, altering the structure of a query by introducing JSON constructs (as with the “$gte” example) is still possible, and to prevent that, applications have to check the JSON values that they put into Sequelize queries.


JSON-based query-by-example expressions can be an intuitive way to write database queries. However, this approach, when taken further and enhanced with operators, can lead to a reemergence of injection issues which are reminiscent of SQL injection, something these tools try to avoid by operating at a higher abstraction level. If you, as an application developer, decide to use such a tool, then you will have to make sure that data passed into queries has been properly sanitized.

VENOM, don’t get bitten.

CC BY-SA CrowdStrike


QEMU is a generic and open source machine emulator and virtualizer and is incorporated in some Red Hat products as a foundation and hardware emulation layer for running virtual machines under the Xen and KVM hypervisors.

CVE-2015-3456 (aka VENOM) is a security flaw in QEMU’s Floppy Disk Controller (FDC) emulation. It can be exploited by a malicious guest user with access to the FDC I/O ports by issuing specially crafted FDC commands to the controller. It can result in guest-controlled execution of arbitrary code in, and with the privileges of, the corresponding QEMU process on the host. In the worst-case scenario, this can be a guest-to-host escape with root privileges.

This issue affects all x86 and x86-64 based HVM Xen and QEMU/KVM guests, regardless of their machine type, because both PIIX and ICH9 based QEMU machine types create an ISA bridge (ICH9 via LPC) and make the FDC accessible to the guest. It is also exposed regardless of the presence of any floppy-related QEMU command line options, so even guests without a floppy disk explicitly enabled in the libvirt or Xen configuration files are affected.

We believe that code execution is possible but we have not yet seen any working reproducers that would allow this.

This flaw arises because of an unrestricted indexed write access to the fixed-size FIFO memory buffer that the FDC emulation layer uses to store commands and their parameters. The FIFO buffer is accessed with byte granularity (the equivalent of an FDC data I/O port write), and the current index is incremented afterwards. After each issued and processed command, the FIFO index is reset to 0, so during normal processing the index cannot become out-of-bounds.

For certain commands (such as FD_CMD_READ_ID and FD_CMD_DRIVE_SPECIFICATION_COMMAND), though, the index is either not reset for a certain period of time (FD_CMD_READ_ID) or there are code paths that don’t reset the index at all (FD_CMD_DRIVE_SPECIFICATION_COMMAND). In these cases, subsequent FDC data port writes result in sequential FIFO buffer memory writes that can be out-of-bounds of the allocated memory. The attacker has full control over the values that are stored and also almost fully controls the length of the write. Depending on how the FIFO buffer is defined, the attacker might also have a little control over the index, as in the case of the Red Hat Enterprise Linux 5 Xen QEMU package, where the index variable is stored after the memory designated for the FIFO buffer.

Depending on the location of the FIFO memory buffer, this can either result in stack or heap overflow. For all of the Red Hat Products using QEMU the FIFO memory buffer is allocated from the heap.

Red Hat has issued security advisories to fix this flaw, and instructions for applying the fix are available in the knowledge base.


The sVirt and seccomp functionalities used to restrict the host’s QEMU process privileges and resource access might mitigate the impact of successful exploitation of this issue. A possible policy-based workaround is to avoid granting untrusted users administrator privileges within guests.

Explaining Security Lingo

This post aims to clarify certain terms often used in the security community. Let’s start with the easiest one: vulnerability. A vulnerability is a flaw in a given system that allows an attacker to compromise the security of that particular system. The consequence of such a compromise can impact the confidentiality, integrity, or availability of the attacked system (these three aspects are also the base metrics of the CVSS v2 scoring system used to rate vulnerabilities). ISO/IEC 27000, IETF RFC 2828, NIST, and others have very specific definitions of the term vulnerability, each differing slightly. A vulnerability’s attack vector is the actual method of using the discovered flaw to cause harm to the affected software; it can be thought of as the entry point to the system or application. A vulnerability without an attack vector is normally not assigned a CVE number.

When a vulnerability is found, an exploit can be created that makes use of this vulnerability. Exploits can be thought of as a way of utilizing one or more vulnerabilities to compromise the targeted software; they can come in the form of an executable program, or a simple set of commands or instructions. Exploits can be local, executed by a user on a system that they have access to, or remote, executed to target certain vulnerable services that are exposed over the network.

Once an exploit is available for a vulnerability, this presents a threat to the affected software and, ultimately, to the person or business operating the affected software. ISO/IEC 27000 defines a threat as “A potential cause of an incident, that may result in harm of systems and organization”. Assessing threats is a crucial part of the threat management process that should be a part of every company’s IT risk management policy. Microsoft has defined a useful threat assessment model, STRIDE, that is used to assess every threat in several categories: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. Each of these categories correlates to a particular security property of the affected software; for example, if a vulnerability allows the attacker to tamper with the system (Tampering), the integrity of that system is compromised. A targeted threat is a type of threat that is specific to a particular application or system; such threats usually involve malware designed to utilize a variety of known vulnerabilities in specific applications with a large user base, for example, Flash, WordPress, or PHP.

A related term often considered when assessing a threat is a vulnerability window. This is the time from the moment a vulnerability is published, regardless of whether an exploit exists, up to the point when a fix or a workaround is available that can be used to mitigate the vulnerability. If a vulnerability is published along with a fix, then the vulnerability window can also represent the time it takes to patch that particular vulnerability.

A zero-day vulnerability is a subclass of vulnerability that is published while the affected software has no available patch that would mitigate the issue. Similarly, a zero-day exploit is an exploit that uses a vulnerability that has not yet been patched. Edit: Alternatively, the term zero-day can be used to refer to a vulnerability that has not yet been published publicly or semi-publicly (for example, on a closed mailing list). The term zero-day exploit would then refer to an exploit for an undisclosed vulnerability. The two differing definitions of the term zero-day may be influenced by the recent media attention that security issues have received. The media, perhaps unknowingly, have used the term zero-day to represent critical issues that are disclosed without being immediately patched. Nevertheless, zero-day as a term is not strictly defined and should be used with care to avoid ambiguity in communication.

Unpatched vulnerabilities can allow malicious users to conduct an attack. Attacking a system or an application is the act of using a vulnerability’s exploit to compromise the security policy of the attacked asset. Attacks can be categorized as either active, which directly affect the integrity or availability of the system, or passive, which compromise the confidentiality of the system without affecting the system itself. An example of an ongoing active attack is a distributed denial of service attack that targets a particular website with the intention of compromising its availability.

The terminology described above is only the tip of the iceberg when it comes to the security world. IETF RFC 2828, for example, consists of 191 pages of definitions and 13 pages of references strictly relevant to IT security. However, knowing the difference between terms such as threat and exploit can be crucial when assessing and communicating a vulnerability within a team or a community.

Container Security: Just The Good Parts

Security is usually a matter of trade-offs. Questions like “Is X secure?” often don’t have direct yes or no answers. A technology can mitigate certain classes of risk even as it exacerbates others.

Containers are one such recent technology, and their security impact is complex. Although some of the common risks of containers are beginning to be understood, many of their upsides are yet to be widely recognized. To emphasize the point, this post will highlight three advantages of containers that sysadmins and DevOps teams can use to make installations more secure.

Example Application

To give this discussion focus, we will consider an example application: a simple imageboard application. This application allows users to create and respond in threads of anonymous image and text content. Original posters can control their posts via “tripcodes” (which are basically per-post passwords). The application consists of the following “stack”:

  • nginx to serve static content, reverse proxy the active content, act as a cache-layer, and handle SSL
  • node.js to do the heavy lifting
  • mariadb to enable persistence

The Base Case

The base-case for comparison is the complete stack being hosted on a single machine (or virtual machine). It is true that this is a simple case, but this is not a straw man. A large portion of the web is served from just such unified instances.

The Containerized Setup

The stack naturally splits into three containers:

  • container X, hosting nginx
  • container J, hosting node.js
  • container M, hosting mariadb

Additionally, three /var locations are created on the host: (1) one for static content (a blog, theming, etc.), (2) one for the actual images, and (3) one for database persistence. The node.js container will have a mount for the image-store, the mariadb container will have a mount for the database, and the nginx container will have mounts for both the image-store and the static content.

Advantage #1: Isolated Upgrades

Let’s look at an example patch Tuesday under both setups.

The Base Case

The sysadmin has prepared a second staging instance for testing the latest patches from her distribution. Among the updates is a critical one for SSL that prevents a key-leak from a specially crafted handshake. After applying all updates, she starts her automatic test suite. Everything goes well until the test for tripcodes. It turns out that the node.js code uses the SSL library to hash the tripcodes for storage and the fix either changed the signature or behavior of those methods. This puts the sysadmin in a tight spot. Does she try to disable tripcodes? Hold back the upgrade?

The Contained Case

Here the sysadmin has more work to do. Instead of updating and testing a single staging instance, she will update and test each individual container, promoting them to production on a container-by-container basis. The nginx and mariadb containers succeed, and she replaces them in production. Her keys are safe. As with the base case, the tripcode tests don’t succeed. Unlike the base case, the sysadmin has the option of holding back just the node.js container’s SSL library, and the nature of the flaw (key exposure at handshake) means that this is not an emergency requiring her to rush developers for a fix.

The Advantage

Of course, isolated upgrades aren’t unique to containers. node.js provides them itself, in the form of npm. So—depending on code specifics—the base case sysadmin might have been able to hold back the SSL library used for tripcodes. However, containers grant all application frameworks isolated upgrades, regardless of whether they provide them themselves. Further, they easily provide them to bigger portions of the stack.

Containers also simplify isolated upgrades. Technologies like rubygems or python virtualenvs create reliance on yet another curated collection of dependencies. It’s easy for sysadmins to be in a position where they need three or more such curated collections to update before their application is safe from a given vulnerability. Container-driven isolated upgrades let sysadmins lean on single collections, such as Linux distributions. These are much more likely to have—for example—paid support or guaranteed SLAs. They also unify dependency management under the underlying distribution’s update mechanism.

Containers can also make existing isolation mechanisms easier to manage. While the above case might have been handled via node.js’s npm mechanism, containers would have allowed the developers to deal with that complexity, simply handing an updated container to the sysadmin.

Of course, isolated upgrades are not always an advantage. In large-use environments the resource savings from shared images/memory may make it worth the additional headaches to move all applications forward in lock-step.

Advantage #2: Containers Simplify Real Isolation

“Containers do not contain.” However, what containers do well is group related processes and create natural (if undefended) trust boundaries. This—it turns out—simplifies the task of providing real containment immensely. SELinux, cgroups, iptables, and kernel capabilities have a—mostly undeserved—reputation of being complicated. Complemented with containers, these technologies become much simpler to leverage.

The Base Case

A sysadmin trying to lock down their installation in the traditional case faces a daunting task. First, they must identify which processes should be allowed to do what. Does node.js as used in this application use /tmp? What kernel capabilities does mariadb need? The need to answer these questions is one of the reasons technologies such as SELinux are considered complicated. They require a deep understanding of the behavior of not just the application code, but the application runtime and the underlying OS itself. The tools available to troubleshoot these issues are often limited (e.g. strace).

Even if the sysadmin is able to nail down exactly which processes in her stack need which capabilities (kernel or otherwise), the question of how to actually bind the application by those restrictions is still a complicated one. How will the processes be transitioned to the correct SELinux context? The correct cgroup?

The Contained Case

In contrast, a sysadmin trying to secure a container has four advantages:

  1. It is trivial (and usually automatic) to transition an entire container into a particular SELinux context and/or cgroup (Docker has --security-opt, OpenShift has PID-based groups, etc.).
  2. Operating system behavior need not be locked down, only the container/host relationship.
  3. The container is—usually—placed on a virtual network and/or interface (often the container runtime environment even has supplemental lock-down capabilities).
  4. Containers naturally provide for experimentation. You can easily launch a container with a varying set of kernel capabilities.

Most frameworks for launching containers do so with sensible “base” SELinux types. For example, both Docker and systemd-nspawn (when using SELinux under RHEL or Fedora) launch all containers with variations of svirt types based on previous work with libvirt. Additionally, many container launchers also borrow libvirt’s philosophy of giving each launched container unique Multi-Category Security (MCS) labels that can optionally be set by the admin. Combined with read-only mounting and the fact that an admin only needs to worry about container/host interactions, this MCS functionality can go a long way towards restricting an application’s behavior.

For this application, it is straight-forward to:

  • Label the static, image, and database stores with unique MCS labels (e.g. c1, c2, and c3).
  • Launch the nginx container with labels and binding options (i.e. :ro) appropriate for reading only the image and static stores (-v /path:/path:ro and --security-opt=label:level:s0:c1,c2 for Docker).
  • Launch the node.js container binding the image store read/write and with a label giving it access only to that store.
  • Launch the mariadb container with only the data persistence store mounted read/write and with a label giving it access only to that store.

Should you need to go beyond what MCS can offer, most container frameworks support launching containers with specific SELinux types. Even when working with derived or original SELinux types, containers make everything easier as you need only worry about the interactions between the container and host.

With containers, there are many tools for restricting inter-container communication. Alternatively, for all container frameworks that give each container a unique IP, iptables can also be applied directly. With iptables—for example—it is easy to:

  • Block the nginx container from doing anything but speaking HTTP to the node.js container and HTTPS to the outside world.
  • Block the node.js container from doing anything but speaking HTTP to the nginx container and using the database port of the mariadb container.
  • Block the mariadb container from doing anything but receiving requests from the node.js container on its database port.

For preventing DDoS or other resource-based attacks, we can use the container launcher’s built-in tools (e.g. Docker’s ulimit options) or cgroups directly. Either way it is easy to—for example—restrict the node.js and mariadb containers to some hard resource limit (40% of RAM, 20% of CPU, and so on).

Finally, container frameworks combined with unit tests are a great way for finding a restricted set of kernel capabilities with which to run an application layer. Whether the framework encourages starting with a minimal set and building up (systemd-nspawn) or with a larger set and letting you selectively drop (Docker), it’s easy to keep launching containers until you find a restricted—but workable—collection.

The configuration to isolation ratio of the above work is extremely high compared to “manual” SELinux/cgroup/iptables isolation. There is also much less to “go wrong” as it is much easier to understand the container/host relationship and its needs than it is to understand the process/OS relationship. Among other upsides, the above configuration: prevents a compromised nginx from altering any data on the host (including the image-store and database), prevents a compromised mariadb from altering anything other than the database, and—depending on what exact kernel capabilities are absolutely required—may go a long way towards prevention of privilege escalation.

The Advantage

While containers do not allow for any forms of isolation not already possible, in practice they make configuring isolation much simpler. They limit isolation to container/host instead of process/OS. By binding containers to virtual networks or interfaces, they simplify firewall rules. Container implementations often provide sensible SELinux or other protection defaults that can be easily extended.

The trade-off is that containers expose an additional container/container attack-surface that is not as trivial to isolate.

Advantage #3: Containers Have More Limited and Explicit Dependencies

The Base Case

Containers are meant to eliminate “works for me” problems. A common cause of “works for me” problems in traditional installations is hidden dependencies. An example is a software component depending on a common command line utility without the developer knowing it. Besides creating instability across installation types, this is a security issue. A sysadmin cannot protect against a vulnerability in a component they do not know is being used.

The flip-side of unknown dependencies and of much greater concern is extraneous or cross-over components. Components needed by one portion of the stack can actually make other components not designed with them in mind extremely dangerous. Many privilege escalation flaws involve abusing access to suid programs that, while essential to some applications, are extraneous to others.

The Contained Case

Obviously, container isolation helps prevent component dependency cross-over, but containers also help to minimize extraneous dependencies. Containers are not virtual machines. Containers do not have to boot, they do not have to support interactive usage, they are usually single user, and they can be simpler than a full operating system in any number of ways. Thus containers can eschew service launchers, shells, sensitive configuration files, and other cruft that (from an application perspective) serves only as attack surface.

Truly minimal custom containers will more or less look like just the top few layers of their RPM/Deb/pkg “pyramid” without any of the bottom layers. Even “general” purpose containers are undergoing a healthy “race to the bottom” to have as minimal a starting footprint as possible. The Docker version of RHEL 7, an operating system not exactly famous for minimalism, is itself less than 155 megs uncompressed.

The Advantage

Container isolation means that when a portion of your application stack has a dependency, that dependency’s attack surface is available only to that portion of your application. This is in stark contrast to traditional installations where attack surfaces are always additive. Exploitation almost always involves chaining multiple vulnerabilities, so this advantage may be one of containers’ most powerful.

A common security complaint regarding containers is that in many ways they are comparable to statically linked binaries. The flip side is that this puts pressure on developers and maintainers to minimize the size of these blobs, which minimizes their attack surface. Shellshock is a good example of the kind of vulnerability this mitigates. It is nearly impossible for a traditional installation not to have a highly complex shell, but many containers ship without a shell of any kind.

Beyond containers themselves this pressure has resulted in the rise of the minimal host operating system (e.g. Atomic, CoreOS, RancherOS). This has brought a reduced attack surface (and in the case of Atomic a certain degree of immutability) to the host as well as the container.

Containers Is As Containers Do

Other security advantages of containers include working well in immutable and/or stateless paradigms, good content auditability (especially compared to virtual machines), and—potentially—good verifiability. A single blog post can’t cover all of the upsides of containers, much less the upsides and downsides. Ultimately, a large part of understanding the security impact of containers is coming to terms with the fact that containers are neither degenerate virtual machines nor superior jails. They are a unique technology whose impact needs to be assessed on its own.