A RESTful API spec is important for vulnerability scanners

APIGit

2023-07-27

api-vulnerability

How do you find vulnerabilities in your web application?

Nowadays more and more companies provide web APIs to access their services, and these usually follow the REST style. Such a RESTful web service looks like a regular web application: it accepts an HTTP request, does some magic, and then replies with an HTTP response. One of the main differences is that the reply doesn’t normally contain HTML to be rendered in a web browser. Instead, it usually contains data in a format (for example, JSON or XML) that is easier for another application to process.

Unfortunately, since a RESTful web service is still a web application, it may contain typical web application security vulnerabilities such as SQL injection, XXE, and so on. One way to identify security issues in web applications is to use a web security scanner. Fortunately, since a RESTful web service is still a web application, we can use web security scanners to look for security issues in web APIs as well.

There are several well-known web security scanners. One of them is w3af.

How does w3af find vulnerabilities?

w3af is a Web Application Attack and Audit Framework. The project’s goal is to create a framework to help you secure your web applications by finding and exploiting all web application vulnerabilities. The framework contains hundreds of plugins that help find vulnerabilities.

Crawl plugins use different techniques to identify new URLs, forms, and any other resource that might be of use during the audit and bruteforce phases. In other words, these plugins browse the application and try to discover entry points to test.

Audit plugins use the knowledge created by crawl plugins to find vulnerabilities on the remote web application and web server. These plugins test the discovered entry points for vulnerabilities.

By enabling some plugins you can find some vulnerabilities, but not many. Here is the result of a scan of my Spring Boot web project. A web scanner usually tries to browse a web site to find all available pages and parameters, which can then be tested for vulnerabilities. For a typical web site, a scanner often starts from the home page and looks for links to other pages in the HTML code. But this approach doesn’t work with web APIs, because API endpoints usually don’t serve HTML, and APIs usually don’t refer to each other in their replies.

{
  "items": [
    {
      "href": "/scans/1/kb/0",
      "id": 0,
      "name": "Strange HTTP Reason message",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/1/kb/1",
      "id": 1,
      "name": "Cross site tracing vulnerability",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/1/kb/2",
      "id": 2,
      "name": "Omitted server header",
      "url": null
    },
    {
      "href": "/scans/1/kb/3",
      "id": 3,
      "name": "Allowed HTTP methods",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/1/kb/4",
      "id": 4,
      "name": "Click-Jacking vulnerability",
      "url": null
    }
  ]
}
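
The listing above is the knowledge base returned by w3af’s REST API (GET /scans/&lt;id&gt;/kb/). Here is a minimal sketch of fetching it with Python, assuming the w3af API server is running locally on its default port 5000 and the scan shown above has id 1:

import requests

# Assumes w3af_api is running locally on its default port (5000)
# and that the scan shown above has id 1.
API = "http://127.0.0.1:5000"

# The knowledge base lists every finding collected during the scan.
resp = requests.get(f"{API}/scans/1/kb/")
resp.raise_for_status()

for item in resp.json()["items"]:
    print(f'{item["id"]:>3}  {item["name"]}  ({item["url"]})')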

A RESTful API spec is important for a scanner

How do we get a list of API endpoints and parameters to scan? There are two main ways:

  • #1 RESTful API spec. A web service may have an OpenAPI specification which describes all endpoints, parameters, responses, authentication schemes, etc. Such a specification is normally provided by the developers.
  • #2 Proxy. Using a proxy, we can capture the HTTP requests that a client sends to the API endpoints, then parse the captured requests and extract information about the parameters (see the sketch after this list).
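
A minimal sketch of approach #2, assuming mitmproxy is used as the intercepting proxy (the original doesn’t name a specific tool). First record the client’s traffic with mitmdump -p 8080 -w api_requests.flow, then read the captured requests back:

from mitmproxy import io
from mitmproxy.http import HTTPFlow

# Flows recorded earlier with: mitmdump -p 8080 -w api_requests.flow
with open("api_requests.flow", "rb") as f:
    for flow in io.FlowReader(f).stream():
        if isinstance(flow, HTTPFlow):
            req = flow.request
            # Each captured request reveals an endpoint and its parameters.
            print(req.method, req.path, dict(req.query))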

Option #1 looks much better than option #2. In a perfect world, every web service has an OpenAPI specification which is always available and up to date. But in the real world, that doesn’t seem to happen very often. A developer may change the APIs but forget to update the spec, or for some reason the spec isn’t made publicly available. In most cases, publicly available REST APIs have human-readable docs, which is nice, but hard to use in an automated way.

  • But APIGIT can make this easier.

APIGIT is a collaboration platform that stands out for its native Git support, which simplifies the API development process and version control, enabling users to easily design, document, mock, test, and share APIs. The platform's visual OpenAPI editor, combined with its native Git support, makes it easy for teams to collaborate and share their work in a seamless and efficient manner.

How does w3af find vulnerabilities according to a RESTful API spec?

Here is part of my RESTful API spec.

"paths": {
    "/find_pet": {
      "get": {
        "summary": "List pet",
        "operationId": "listPet",
        "tags": [
          "pet"
        ],
        "parameters": [
          {
            "name": "version",
            "in": "query",
            "description": "Get pet by version",
            "required": true,
            "type": "string"
          }
        ],
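
From this fragment the scanner learns that GET /find_pet takes a required query parameter named version, which is everything it needs to build well-formed requests and fuzz the parameter value. A minimal sketch of the kind of requests it generates (the payloads are only illustrations):

import requests

BASE = "http://192.168.1.65:8181"  # the target host from the scans above

# The spec supplies the endpoint, the method, and the required
# "version" query parameter; the scanner then mutates the value.
for payload in ["1", "'", "<script>alert(1)</script>"]:
    r = requests.get(f"{BASE}/find_pet", params={"version": payload})
    print(payload, "->", r.status_code)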

By enabling the related plugins and feeding the scanner your RESTful API spec:

[crawl.open_api]
custom_spec_location = /var/log/wvs/openapi2.json
no_spec_validation = True
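
One convenient way to run a scan with this configuration is through w3af’s REST API: POST the full text of a profile (including the [crawl.open_api] section above) together with the target URL. A minimal sketch, where my_api_scan.pw3af is a hypothetical profile file containing those settings:

import requests

API = "http://127.0.0.1:5000"  # default address of w3af_api

# "my_api_scan.pw3af" is a hypothetical w3af profile (INI format)
# containing the [crawl.open_api] settings shown above plus the
# audit plugins you want to run.
with open("my_api_scan.pw3af") as f:
    profile = f.read()

resp = requests.post(f"{API}/scans/", json={
    "scan_profile": profile,
    "target_urls": ["http://192.168.1.65:8181/"],
})
resp.raise_for_status()
print("Scan started:", resp.json())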

We can now find more vulnerabilities:

{
  "items": [
    {
      "href": "/scans/0/kb/0",
      "id": 0,
      "name": "Strange HTTP Reason message",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/0/kb/1",
      "id": 1,
      "name": "Omitted server header",
      "url": null
    },
    {
      "href": "/scans/0/kb/2",
      "id": 2,
      "name": "Cross site tracing vulnerability",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/0/kb/3",
      "id": 3,
      "name": "Open API specification found",
      "url": "file:///var/log/wvs/openapi2.json"
    },
    {
      "href": "/scans/0/kb/4",
      "id": 4,
      "name": "Allowed HTTP methods",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/0/kb/5",
      "id": 5,
      "name": "Cross site scripting vulnerability",
      "url": "http://192.168.1.65:8181/find_pet"
    },
    {
      "href": "/scans/0/kb/6",
      "id": 6,
      "name": "Unhandled error in web application",
      "url": "http://192.168.1.65:8181/_pet"
    },
    {
      "href": "/scans/0/kb/7",
      "id": 7,
      "name": "Unhandled error in web application",
      "url": "http://192.168.1.65:8181/find_"
    },
    {
      "href": "/scans/0/kb/8",
      "id": 8,
      "name": "Unhandled error in web application",
      "url": "http://192.168.1.65:8181/"
    },
    {
      "href": "/scans/0/kb/9",
      "id": 9,
      "name": "Strange HTTP response code",
      "url": "http://192.168.1.65:8181/%2Fbin%2Fcat+%2Fetc%2Fpasswd_pet"
    },
    {
      "href": "/scans/0/kb/10",
      "id": 10,
      "name": "Click-Jacking vulnerability",
      "url": null
    }
  ]
}
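
Each item’s href can be followed to get the full details of a finding. A minimal sketch, fetching the cross-site scripting finding (id 5) from the scan above, again assuming the API server runs locally on port 5000:

import requests

API = "http://127.0.0.1:5000"

# Follow the "href" of a knowledge-base item to see the full finding:
# description, severity, affected URL, and so on.
detail = requests.get(f"{API}/scans/0/kb/5").json()
print(detail["name"])
print(detail.get("severity"), detail.get("url"))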

Let’s talk about the crawl.open_api plugin. The first version of the plugin tried to find an OpenAPI specification at one of the well-known locations, such as:

/api/openapi.json

/api/v2/swagger.json

/api/v1/swagger.yaml

and so on. This is a very nice feature: if you have multiple web services to test, and their API specs are available at well-known locations, then you just need to feed the scanner the host names, and that’s all. The scanner will find all the API endpoints by itself and then test them. Unfortunately, web services sometimes don’t provide API specifications, so the custom_spec_location parameter allows you to set a local path to an OpenAPI specification. A user can build the spec by hand from the API docs; in other cases, specifications are publicly available but not at a well-known location.
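
The idea behind that first version is easy to reproduce: probe a list of well-known paths and check whether any of them serves something that parses as a spec. Here is a minimal sketch, assuming the candidate list mirrors the examples above (a real scanner parses and validates the document instead of the crude substring check used here):

import requests

# A subset of the well-known spec locations mentioned above.
CANDIDATES = [
    "/api/openapi.json",
    "/api/v2/swagger.json",
    "/api/v1/swagger.yaml",
]

def find_spec(host):
    """Return the first well-known URL that serves something spec-like."""
    for path in CANDIDATES:
        url = host.rstrip("/") + path
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue
        # Crude check: a real scanner would parse and validate the document.
        if resp.ok and ("swagger" in resp.text or "openapi" in resp.text):
            return url
    return None

print(find_spec("http://192.168.1.65:8181"))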