Link Scraper API

Status: Online | Credit Usage: 10 per call | Live Data
Latency: avg 1906ms | p50 1734ms | p75 2020ms | p90 2363ms | p99 3050ms

Link Scraper is a simple tool for extracting the links from a web page. It returns every link found on the page.

The Link Scraper API provides reliable, fast access to scraped link data through a simple REST interface. It is built for developers who need consistent, high-quality results with minimal setup time.

To use Link Scraper, you need an API key. You can get one by creating a free account and visiting your dashboard.

POST Endpoint

URL
https://api.apiverve.com/v1/linkscraper

Code Examples

Here are examples of how to call the Link Scraper API in different programming languages:

cURL Request
curl -X POST \
  "https://api.apiverve.com/v1/linkscraper" \
  -H "X-API-Key: your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
  "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
  "maxlinks": 20,
  "includequery": false
}'
JavaScript (Fetch API)
const response = await fetch('https://api.apiverve.com/v1/linkscraper', {
  method: 'POST',
  headers: {
    'X-API-Key': 'your_api_key_here',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "maxlinks": 20,
    "includequery": false
})
});

const data = await response.json();
console.log(data);
Python (Requests)
import requests

headers = {
    'X-API-Key': 'your_api_key_here',
    'Content-Type': 'application/json'
}

payload = {
    "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "maxlinks": 20,
    "includequery": false
}

response = requests.post('https://api.apiverve.com/v1/linkscraper', headers=headers, json=payload)

data = response.json()
print(data)
Node.js (Native HTTPS)
const https = require('https');

const options = {
  method: 'POST',
  headers: {
    'X-API-Key': 'your_api_key_here',
    'Content-Type': 'application/json'
  }
};

const postData = JSON.stringify({
  "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
  "maxlinks": 20,
  "includequery": false
});

const req = https.request('https://api.apiverve.com/v1/linkscraper', options, (res) => {
  let data = '';
  res.on('data', (chunk) => data += chunk);
  res.on('end', () => console.log(JSON.parse(data)));
});

req.write(postData);
req.end();
PHP (cURL)
<?php

$ch = curl_init();

curl_setopt($ch, CURLOPT_URL, 'https://api.apiverve.com/v1/linkscraper');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST');
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'X-API-Key: your_api_key_here',
    'Content-Type: application/json'
]);

curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode([
    'url' => 'https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html',
    'maxlinks' => 20,
    'includequery' => false
]));

$response = curl_exec($ch);
curl_close($ch);

$data = json_decode($response, true);
print_r($data);

?>
Go (net/http)
package main

import (
    "fmt"
    "io"
    "net/http"
    "bytes"
    "encoding/json"
)

func main() {
    payload := map[string]interface{}{
        "url":          "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
        "maxlinks":     20,
        "includequery": false,
    }

    jsonPayload, _ := json.Marshal(payload)
    req, _ := http.NewRequest("POST", "https://api.apiverve.com/v1/linkscraper", bytes.NewBuffer(jsonPayload))

    req.Header.Set("X-API-Key", "your_api_key_here")
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
Ruby (Net::HTTP)
require 'net/http'
require 'json'

uri = URI('https://api.apiverve.com/v1/linkscraper')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

payload = {
  "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
  "maxlinks": 20,
  "includequery": false
}

request = Net::HTTP::Post.new(uri)
request['X-API-Key'] = 'your_api_key_here'
request['Content-Type'] = 'application/json'

request.body = payload.to_json

response = http.request(request)
puts JSON.pretty_generate(JSON.parse(response.body))
C# (HttpClient)
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("X-API-Key", "your_api_key_here");

        var jsonContent = @"{
            ""url"": ""https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"",
            ""maxlinks"": 20,
            ""includequery"": false
        }";
        var content = new StringContent(jsonContent, Encoding.UTF8, "application/json");

        var response = await client.PostAsync("https://api.apiverve.com/v1/linkscraper", content);
        response.EnsureSuccessStatusCode();

        var responseBody = await response.Content.ReadAsStringAsync();
        Console.WriteLine(responseBody);
    }
}

Authentication

The Link Scraper API requires authentication via API key. Include your API key in the request header:

Required Header
X-API-Key: your_api_key_here

Learn more about authentication →

Interactive API Playground

Test the Link Scraper API directly in your browser with live requests and responses.

Parameters

The following parameters are available for the Link Scraper API:

Some Link Scraper parameters marked with Premium are available exclusively on paid plans. View pricing →

Scrape Links

Parameter | Type | Required | Description | Default | Example
url | string | required | The URL of the web page to scrape links from (format: url) | - | https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
maxlinks (Premium) | number | required | Maximum number of links to scrape and return | 50 | 20
includequery | boolean | optional | Include query strings in the scraped links | - | false

Response

The Link Scraper API returns responses in JSON, XML, YAML, and CSV formats:

Example Responses

JSON Response
200 OK
{
  "status": "ok",
  "error": null,
  "data": {
    "url": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "linkCount": 16,
    "externalLinkCount": 13,
    "internalLinkCount": 3,
    "links": [
      {
        "text": "Documentation",
        "href": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html",
        "external": false
      },
      {
        "text": "Amazon EC2 Instance Types Guide",
        "href": "https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html",
        "external": true
      },
      {
        "text": "Amazon EC2 Auto Scaling",
        "href": "https://docs.aws.amazon.com/autoscaling/",
        "external": true
      }
    ],
    "uniqueDomains": [
      "docs.aws.amazon.com",
      "aws.amazon.com"
    ],
    "maxLinksReached": false
  }
}
XML Response
200 OK
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <status>ok</status>
  <error xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
  <data>
    <url>http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html</url>
    <linkCount>16</linkCount>
    <externalLinkCount>13</externalLinkCount>
    <internalLinkCount>3</internalLinkCount>
    <links>
      <link>
        <text>Documentation</text>
        <href>http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html</href>
        <external>false</external>
      </link>
      <link>
        <text>Amazon EC2 Instance Types Guide</text>
        <href>https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html</href>
        <external>true</external>
      </link>
      <link>
        <text>Amazon EC2 Auto Scaling</text>
        <href>https://docs.aws.amazon.com/autoscaling/</href>
        <external>true</external>
      </link>
    </links>
    <uniqueDomains>
      <uniqueDomain>docs.aws.amazon.com</uniqueDomain>
      <uniqueDomain>aws.amazon.com</uniqueDomain>
    </uniqueDomains>
    <maxLinksReached>false</maxLinksReached>
  </data>
</response>
YAML Response
200 OK
status: ok
error: null
data:
  url: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
  linkCount: 16
  externalLinkCount: 13
  internalLinkCount: 3
  links:
    - text: Documentation
      href: >-
        http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html
      external: false
    - text: Amazon EC2 Instance Types Guide
      href: https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html
      external: true
    - text: Amazon EC2 Auto Scaling
      href: https://docs.aws.amazon.com/autoscaling/
      external: true
  uniqueDomains:
    - docs.aws.amazon.com
    - aws.amazon.com
  maxLinksReached: false
CSV Response
200 OK
key | value
url | http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
linkCount | 16
externalLinkCount | 13
internalLinkCount | 3
links | [{text:Documentation,href:http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html,external:false},{text:Amazon EC2 Instance Types Guide,href:https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html,external:true},{text:Amazon EC2 Auto Scaling,href:https://docs.aws.amazon.com/autoscaling/,external:true}]
uniqueDomains | [docs.aws.amazon.com,aws.amazon.com]
maxLinksReached | false

Response Structure

All API responses follow a consistent structure with the following fields:

Field | Type | Description | Example
status | string | Indicates whether the request was successful ("ok") or failed ("error") | ok
error | string or null | Contains the error message if status is "error", otherwise null | null
data | object or null | Contains the API response data if successful, otherwise null | {...}
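
This envelope can be unwrapped with a small Python helper (a sketch using the field names from the table above; the helper itself is illustrative, not part of an official SDK):

```python
def unwrap(envelope):
    """Return the data payload from the response envelope,
    raising if the API reported an error."""
    if envelope.get("status") != "ok":
        raise RuntimeError(envelope.get("error") or "unknown API error")
    return envelope["data"]

# Example with a minimal successful response:
sample = {"status": "ok", "error": None, "data": {"linkCount": 16}}
print(unwrap(sample)["linkCount"])  # 16
```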

Learn more about response formats →

Response Data Fields

When the request is successful, the data object contains the following fields:

Response fields marked with Premium are available exclusively on paid plans. View pricing →

Field | Type | Sample Value | Description
url | string | "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html" | -
linkCount | number | 16 | -
externalLinkCount | number | 13 | Number of external links
internalLinkCount | number | 3 | Number of internal links
links | array of objects | [3 items] | -
└ text | string | "Documentation" | -
└ href | string | "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html" | -
└ external | boolean | false | -
uniqueDomains (Premium) | array | ["docs.aws.amazon.com", ...] | List of unique external domains found
maxLinksReached | boolean | false | -

Headers

Required and optional headers for Link Scraper API requests:

Header Name | Required | Example Value | Description
X-API-Key | required | your_api_key_here | Your APIVerve API key. Found in your dashboard under API Keys.
Accept | optional | application/json | Specify response format: application/json (default), application/xml, or application/yaml
User-Agent | optional | MyApp/1.0 | Identifies your application for analytics and debugging purposes
X-Request-ID | optional | req_123456789 | Custom request identifier for tracking and debugging requests
Cache-Control | optional | no-cache | Control caching behavior for the request and response
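
The header set above can be assembled in Python like this (a sketch; only X-API-Key is required, and the uuid-based request id is an illustrative format, not an API requirement):

```python
import uuid

def build_headers(api_key, fmt="application/json", app="MyApp/1.0"):
    """Assemble the required header plus the optional headers
    listed in the table above."""
    return {
        "X-API-Key": api_key,
        "Content-Type": "application/json",
        "Accept": fmt,  # application/json (default), application/xml, or application/yaml
        "User-Agent": app,
        "X-Request-ID": f"req_{uuid.uuid4().hex}",  # illustrative id format
    }

# Pass the result as `headers=` to your HTTP client, e.g. requests.post(...)
headers = build_headers("your_api_key_here", fmt="application/xml")
print(headers["Accept"])  # application/xml
```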

Learn more about request headers →

GraphQL AccessALPHA

Access Link Scraper through GraphQL to combine it with other API calls in a single request. Query only the link scraper data you need with precise field selection, and orchestrate complex data fetching workflows.

Test Link Scraper in the GraphQL Explorer to confirm availability and experiment with queries.

Credit Cost: Each API called in your GraphQL query consumes its standard credit cost.

GraphQL Endpoint
POST https://api.apiverve.com/v1/graphql
GraphQL Query Example
query {
  linkscraper(
    input: {
      url: "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"
      maxlinks: 20
      includequery: false
    }
  ) {
    url
    linkCount
    externalLinkCount
    internalLinkCount
    links
    uniqueDomains
    maxLinksReached
  }
}

Note: Authentication is handled via the x-api-key header in your GraphQL request, not as a query parameter.
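
The query above is sent as an ordinary JSON POST body; a minimal Python sketch of building it (the commented-out `requests` call shows where the x-api-key header goes and is illustrative):

```python
import json

# Build the GraphQL request body; the input field names mirror
# the REST parameters (url, maxlinks, includequery).
query = """
query {
  linkscraper(input: {
    url: "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    maxlinks: 20,
    includequery: false
  }) {
    linkCount
    uniqueDomains
  }
}
"""
body = json.dumps({"query": query})

# POST `body` to https://api.apiverve.com/v1/graphql with the key
# in the x-api-key header (not a query parameter), e.g.:
#   requests.post("https://api.apiverve.com/v1/graphql",
#                 headers={"x-api-key": "your_api_key_here",
#                          "Content-Type": "application/json"},
#                 data=body)
```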

CORS Support

The Link Scraper API supports Cross-Origin Resource Sharing (CORS) with wildcard configuration, allowing you to call Link Scraper directly from browser-based applications without proxy servers.

CORS Header | Value | Description
Access-Control-Allow-Origin | * | Accepts requests from any origin
Access-Control-Allow-Methods | * | Accepts any HTTP method
Access-Control-Allow-Headers | * | Accepts any request headers

Browser Usage: You can call Link Scraper directly from JavaScript running in the browser without encountering CORS errors. No proxy server or additional configuration needed.

Learn more about CORS support →

Rate Limiting

Link Scraper API requests are subject to rate limiting based on your subscription plan. These limits ensure fair usage and maintain service quality for all Link Scraper users.

Plan | Rate Limit | Description
Free | 5 requests/min | Hard rate limit enforced; exceeding it returns 429 errors
Starter | No limit | Production ready; standard traffic priority
Pro | No limit | Production ready; preferred traffic priority
Mega | No limit | Production ready; highest traffic priority

Learn more about rate limiting →

Rate Limit Headers

When rate limits apply, each Link Scraper response includes headers to help you track your usage:

Header | Description
X-RateLimit-Limit | Maximum number of requests allowed per time window
X-RateLimit-Remaining | Number of requests remaining in the current window
X-RateLimit-Reset | Unix timestamp when the rate limit window resets
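
On the Free plan, these headers can be read off any response to decide whether to pause; a small Python sketch (the parsing helper is illustrative):

```python
import time

def rate_limit_status(headers):
    """Extract rate-limit state from the response headers listed above.
    Returns (limit, remaining, seconds_until_reset)."""
    limit = int(headers.get("X-RateLimit-Limit", 0))
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    reset_at = int(headers.get("X-RateLimit-Reset", 0))
    wait = max(0, reset_at - int(time.time()))
    return limit, remaining, wait

# e.g. after `response = requests.post(...)`, pass `response.headers`;
# here we use a constructed example:
limit, remaining, wait = rate_limit_status({
    "X-RateLimit-Limit": "5",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": str(int(time.time()) + 30),
})
print(remaining, wait)  # 0 remaining, roughly 30 seconds to reset
```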

Handling Rate Limits

Free Plan: When you exceed your rate limit, Link Scraper returns a 429 Too Many Requests status code. Your application should implement appropriate backoff logic to handle this gracefully.

Paid Plans: No rate limiting or throttling applied. All paid plans (Starter, Pro, Mega) are production-ready.
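
The backoff logic mentioned above can be sketched in Python (the delay constants and retry count are illustrative choices, not API requirements):

```python
import random
import time

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with jitter for 429 responses."""
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def post_with_retry(call_api, max_attempts=5):
    """Retry loop around `call_api`, a zero-argument function that
    performs the request and returns an object with .status_code."""
    for attempt in range(max_attempts):
        response = call_api()
        if response.status_code != 429:
            return response
        time.sleep(backoff_delay(attempt))  # wait before retrying
    raise RuntimeError(f"rate limited after {max_attempts} attempts")
```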

Best Practices for Link Scraper:

  • Monitor the rate limit headers to track your Link Scraper usage (Free plan only)
  • Cache link scraper responses where appropriate to reduce API calls
  • Upgrade to Pro or Mega for guaranteed no-throttle Link Scraper performance

Note: Link Scraper rate limits are separate from credit consumption. You may have credits remaining but still hit rate limits when using Link Scraper on Free tier.

Error Codes

The Link Scraper API uses standard HTTP status codes to indicate success or failure:

Code | Message | Description | Solution
200 | OK | Request successful, data returned | No action needed
400 | Bad Request | Invalid request parameters or malformed request | Check required parameters and ensure values match the expected formats
401 | Unauthorized | Missing or invalid API key | Include the X-API-Key header with a valid API key from your dashboard
403 | Forbidden | API key lacks permission or insufficient credits | Check your credit balance in the dashboard or upgrade your plan
429 | Too Many Requests | Rate limit exceeded (Free: 5 req/min) | Implement request throttling or upgrade to a paid plan
500 | Internal Server Error | Server error occurred | Retry after a few seconds; contact support if it persists
503 | Service Unavailable | API temporarily unavailable | Wait and retry; check the status page for maintenance updates

Learn more about error handling →

Need help? Contact support with your X-Request-ID for assistance.

Integrate Link Scraper with SDKs

Get started quickly with official Link Scraper SDKs for your preferred language. Each library handles authentication, request formatting, and error handling automatically.

Available for Node.js, Python, C#/.NET, and Android/Java. All SDKs are open source and regularly updated.

Integrate Link Scraper with No-Code API Tools

Connect the Link Scraper API to your favorite automation platform without writing code. Build workflows that leverage link scraper data across thousands of apps.

All platforms use your same API key to access Link Scraper. Visit our integrations hub for step-by-step setup guides.

Frequently Asked Questions

How do I get an API key for Link Scraper?
Sign up for a free account at dashboard.apiverve.com. Your API key will be automatically generated and available in your dashboard. The same key works for Link Scraper and all other APIVerve APIs. The free plan includes 1,000 credits plus a 500 credit bonus.
How many credits does Link Scraper cost?

Each successful Link Scraper API call consumes credits based on plan tier. Check the pricing section above for the exact credit cost. Failed requests and errors don't consume credits, so you only pay for successful link scraper lookups.

Can I use Link Scraper in production?

The free plan is for testing and development only. For production use of Link Scraper, upgrade to a paid plan (Starter, Pro, or Mega) which includes commercial use rights, no attribution requirements, and guaranteed uptime SLAs. All paid plans are production-ready.

Can I use Link Scraper from a browser?
Yes! The Link Scraper API supports CORS with wildcard configuration, so you can call it directly from browser-based JavaScript without needing a proxy server. See the CORS section above for details.
What happens if I exceed my Link Scraper credit limit?

When you reach your monthly credit limit, Link Scraper API requests will return an error until you upgrade your plan or wait for the next billing cycle. You'll receive notifications at 80% and 95% usage to give you time to upgrade if needed.

What's Next?

Continue your journey with these recommended resources
