Link Scraper API
Link Scraper is a simple tool that scrapes a web page and returns all of the links it contains.
The Link Scraper API provides reliable, fast access to this data through a simple REST interface, built for developers who need consistent, high-quality results with minimal setup time.
To use Link Scraper, you need an API key. You can get one by creating a free account and visiting your dashboard.
POST Endpoint
https://api.apiverve.com/v1/linkscraper

Code Examples
Here are examples of how to call the Link Scraper API in different programming languages:

cURL:

```bash
curl -X POST \
  "https://api.apiverve.com/v1/linkscraper" \
  -H "X-API-Key: your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "maxlinks": 20,
    "includequery": false
  }'
```

JavaScript (fetch):

```javascript
const response = await fetch('https://api.apiverve.com/v1/linkscraper', {
  method: 'POST',
  headers: {
    'X-API-Key': 'your_api_key_here',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "maxlinks": 20,
    "includequery": false
  })
});
const data = await response.json();
console.log(data);
```

Python:

```python
import requests

headers = {
    'X-API-Key': 'your_api_key_here',
    'Content-Type': 'application/json'
}
payload = {
    "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "maxlinks": 20,
    "includequery": False
}
response = requests.post('https://api.apiverve.com/v1/linkscraper', headers=headers, json=payload)
data = response.json()
print(data)
```

Node.js (https):

```javascript
const https = require('https');

const options = {
  method: 'POST',
  headers: {
    'X-API-Key': 'your_api_key_here',
    'Content-Type': 'application/json'
  }
};
const postData = JSON.stringify({
  "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
  "maxlinks": 20,
  "includequery": false
});
const req = https.request('https://api.apiverve.com/v1/linkscraper', options, (res) => {
  let data = '';
  res.on('data', (chunk) => data += chunk);
  res.on('end', () => console.log(JSON.parse(data)));
});
req.write(postData);
req.end();
```

PHP:

```php
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://api.apiverve.com/v1/linkscraper');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST');
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'X-API-Key: your_api_key_here',
    'Content-Type: application/json'
]);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode([
    'url' => 'https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html',
    'maxlinks' => 20,
    'includequery' => false
]));
$response = curl_exec($ch);
curl_close($ch);
$data = json_decode($response, true);
print_r($data);
?>
```

Go:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := map[string]interface{}{
		"url":          "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
		"maxlinks":     20,
		"includequery": false,
	}
	jsonPayload, _ := json.Marshal(payload)
	req, _ := http.NewRequest("POST", "https://api.apiverve.com/v1/linkscraper", bytes.NewBuffer(jsonPayload))
	req.Header.Set("X-API-Key", "your_api_key_here")
	req.Header.Set("Content-Type", "application/json")
	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

Ruby:

```ruby
require 'net/http'
require 'json'

uri = URI('https://api.apiverve.com/v1/linkscraper')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
payload = {
  "url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
  "maxlinks": 20,
  "includequery": false
}
request = Net::HTTP::Post.new(uri)
request['X-API-Key'] = 'your_api_key_here'
request['Content-Type'] = 'application/json'
request.body = payload.to_json
response = http.request(request)
puts JSON.pretty_generate(JSON.parse(response.body))
```

C#:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("X-API-Key", "your_api_key_here");
        var jsonContent = @"{
            ""url"": ""https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"",
            ""maxlinks"": 20,
            ""includequery"": false
        }";
        var content = new StringContent(jsonContent, Encoding.UTF8, "application/json");
        var response = await client.PostAsync("https://api.apiverve.com/v1/linkscraper", content);
        response.EnsureSuccessStatusCode();
        var responseBody = await response.Content.ReadAsStringAsync();
        Console.WriteLine(responseBody);
    }
}
```

Authentication
The Link Scraper API requires authentication via API key. Include your API key in the request header:

```
X-API-Key: your_api_key_here
```

Interactive API Playground
Test the Link Scraper API directly in your browser with live requests and responses.
Parameters
The following parameters are available for the Link Scraper API:
Scrape Links
| Parameter | Type | Required | Description | Default | Example |
|---|---|---|---|---|---|
| url | string | required | The URL of the web page to scrape links from (format: url) | - | https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html |
| maxlinks (Premium) | number | required | Maximum number of links to scrape and return | - | 20 |
| includequery | boolean | optional | Include query strings in the scraped links | - | false |
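It can help to validate these parameters client-side before spending a request. A minimal sketch, assuming the parameter table above; the `build_payload` helper is our own name, not part of the API:

```python
from urllib.parse import urlparse

def build_payload(url, maxlinks=20, includequery=False):
    """Validate Link Scraper parameters and return a request-body dict.

    Mirrors the parameter table: url (required, URL format),
    maxlinks (number), includequery (boolean).
    """
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        raise ValueError(f"url must be an absolute http(s) URL, got: {url!r}")
    if not isinstance(maxlinks, int) or maxlinks < 1:
        raise ValueError("maxlinks must be a positive integer")
    return {"url": url, "maxlinks": maxlinks, "includequery": bool(includequery)}

payload = build_payload("https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html")
print(payload)
```

The returned dict can be passed directly as the JSON body in any of the code examples above.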
Response
The Link Scraper API returns responses in JSON, XML, YAML, and CSV formats:
Example Responses

JSON:

```json
{
  "status": "ok",
  "error": null,
  "data": {
    "url": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html",
    "linkCount": 16,
    "externalLinkCount": 13,
    "internalLinkCount": 3,
    "links": [
      {
        "text": "Documentation",
        "href": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html",
        "external": false
      },
      {
        "text": "Amazon EC2 Instance Types Guide",
        "href": "https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html",
        "external": true
      },
      {
        "text": "Amazon EC2 Auto Scaling",
        "href": "https://docs.aws.amazon.com/autoscaling/",
        "external": true
      }
    ],
    "uniqueDomains": [
      "docs.aws.amazon.com",
      "aws.amazon.com"
    ],
    "maxLinksReached": false
  }
}
```

XML:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <status>ok</status>
  <error xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
  <data>
    <url>http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html</url>
    <linkCount>16</linkCount>
    <externalLinkCount>13</externalLinkCount>
    <internalLinkCount>3</internalLinkCount>
    <links>
      <link>
        <text>Documentation</text>
        <href>http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html</href>
        <external>false</external>
      </link>
      <link>
        <text>Amazon EC2 Instance Types Guide</text>
        <href>https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html</href>
        <external>true</external>
      </link>
      <link>
        <text>Amazon EC2 Auto Scaling</text>
        <href>https://docs.aws.amazon.com/autoscaling/</href>
        <external>true</external>
      </link>
    </links>
    <uniqueDomains>
      <uniqueDomain>docs.aws.amazon.com</uniqueDomain>
      <uniqueDomain>aws.amazon.com</uniqueDomain>
    </uniqueDomains>
    <maxLinksReached>false</maxLinksReached>
  </data>
</response>
```

YAML:

```yaml
status: ok
error: null
data:
  url: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
  linkCount: 16
  externalLinkCount: 13
  internalLinkCount: 3
  links:
    - text: Documentation
      href: >-
        http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html
      external: false
    - text: Amazon EC2 Instance Types Guide
      href: https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html
      external: true
    - text: Amazon EC2 Auto Scaling
      href: https://docs.aws.amazon.com/autoscaling/
      external: true
  uniqueDomains:
    - docs.aws.amazon.com
    - aws.amazon.com
  maxLinksReached: false
```

CSV (rendered here as key/value rows):

| key | value |
|---|---|
| url | http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html |
| linkCount | 16 |
| externalLinkCount | 13 |
| internalLinkCount | 3 |
| links | [{text:Documentation,href:http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html/index.html,external:false},{text:Amazon EC2 Instance Types Guide,href:https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-types.html,external:true},{text:Amazon EC2 Auto Scaling,href:https://docs.aws.amazon.com/autoscaling/,external:true}] |
| uniqueDomains | [docs.aws.amazon.com,aws.amazon.com] |
| maxLinksReached | false |
Response Structure
All API responses follow a consistent structure with the following fields:
| Field | Type | Description | Example |
|---|---|---|---|
| status | string | Indicates whether the request was successful ("ok") or failed ("error") | ok |
| error | string or null | Contains the error message if status is "error", otherwise null | null |
| data | object or null | Contains the API response data if successful, otherwise null | {...} |
Learn more about response formats →
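In practice you check this envelope before touching `data`. A minimal sketch using the documented structure; the `unwrap` helper and `ApiError` exception are our own names, not part of the API:

```python
class ApiError(Exception):
    """Raised when the envelope reports status 'error'."""

def unwrap(envelope):
    """Return envelope['data'], raising ApiError if the request failed."""
    if envelope.get("status") != "ok":
        raise ApiError(envelope.get("error") or "unknown error")
    return envelope["data"]

# Sample envelope taken from the documented example response.
sample = {
    "status": "ok",
    "error": None,
    "data": {"linkCount": 16, "externalLinkCount": 13, "internalLinkCount": 3},
}
data = unwrap(sample)
print(data["linkCount"])  # 16
```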
Response Data Fields
When the request is successful, the data object contains the following fields:
| Field | Type | Sample Value | Description |
|---|---|---|---|
| url | string | http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html | The URL that was scraped |
| linkCount | number | 16 | Total number of links found |
| externalLinkCount | number | 13 | Number of external links |
| internalLinkCount | number | 3 | Number of internal links |
| links | array | [...] | The scraped links (nested fields below) |
| └ text | string | Documentation | Anchor text of the link |
| └ href | string | https://docs.aws.amazon.com/autoscaling/ | The link URL |
| └ external | boolean | false | Whether the link points to another domain |
| uniqueDomains (Premium) | array | [docs.aws.amazon.com, aws.amazon.com] | List of unique external domains found |
| maxLinksReached | boolean | false | Whether the maxlinks limit was reached |
Headers
Required and optional headers for Link Scraper API requests:
| Header Name | Required | Example Value | Description |
|---|---|---|---|
| X-API-Key | required | your_api_key_here | Your APIVerve API key. Found in your dashboard under API Keys. |
| Accept | optional | application/json | Specify response format: application/json (default), application/xml, or application/yaml |
| User-Agent | optional | MyApp/1.0 | Identifies your application for analytics and debugging purposes |
| X-Request-ID | optional | req_123456789 | Custom request identifier for tracking and debugging requests |
| Cache-Control | optional | no-cache | Control caching behavior for the request and response |
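Putting the table together, a request sets the required key plus any optional headers. A sketch using only the standard library; the `build_request` helper is our own name, and the request is constructed but deliberately not sent:

```python
import json
import urllib.request

def build_request(api_key, payload, accept="application/json", request_id=None):
    """Build an (unsent) urllib request for the Link Scraper endpoint."""
    headers = {
        "X-API-Key": api_key,               # required
        "Content-Type": "application/json",
        "Accept": accept,                   # optional: application/xml, application/yaml
    }
    if request_id:
        headers["X-Request-ID"] = request_id  # optional tracing identifier
    return urllib.request.Request(
        "https://api.apiverve.com/v1/linkscraper",
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )

req = build_request("your_api_key_here",
                    {"url": "https://example.com", "maxlinks": 20},
                    request_id="req_123456789")
# urllib.request.urlopen(req)  # uncomment with a valid API key to actually send
print(req.get_method(), req.full_url)
```

Note that urllib normalizes stored header names (e.g. `X-api-key`); the server treats header names case-insensitively either way.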
GraphQL Access (Alpha)
Access Link Scraper through GraphQL to combine it with other API calls in a single request. Query only the link scraper data you need with precise field selection, and orchestrate complex data fetching workflows.
Credit Cost: Each API called in your GraphQL query consumes its standard credit cost.
POST https://api.apiverve.com/v1/graphql

```graphql
query {
  linkscraper(
    input: {
      url: "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"
      maxlinks: 20
      includequery: false
    }
  ) {
    url
    linkCount
    externalLinkCount
    internalLinkCount
    links
    uniqueDomains
    maxLinksReached
  }
}
```

Note: Authentication is handled via the x-api-key header in your GraphQL request, not as a query parameter.
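On the wire, the GraphQL call is an ordinary POST whose JSON body carries the query string. A sketch assuming the standard GraphQL-over-HTTP body shape (`{"query": ...}`); the live call is left commented out so no key is needed:

```python
import json

QUERY = """
query {
  linkscraper(
    input: {
      url: "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"
      maxlinks: 20
      includequery: false
    }
  ) {
    url
    linkCount
    maxLinksReached
  }
}
"""

body = json.dumps({"query": QUERY})
headers = {"x-api-key": "your_api_key_here", "Content-Type": "application/json"}
# With a valid key, POST it, e.g.:
# requests.post("https://api.apiverve.com/v1/graphql", data=body, headers=headers)
print(len(body), "bytes of GraphQL request body")
```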
CORS Support
The Link Scraper API supports Cross-Origin Resource Sharing (CORS) with wildcard configuration, allowing you to call Link Scraper directly from browser-based applications without proxy servers.
| CORS Header | Value | Description |
|---|---|---|
| Access-Control-Allow-Origin | * | Accepts requests from any origin |
| Access-Control-Allow-Methods | * | Accepts any HTTP method |
| Access-Control-Allow-Headers | * | Accepts any request headers |
Browser Usage: You can call Link Scraper directly from JavaScript running in the browser without encountering CORS errors. No proxy server or additional configuration needed.
Rate Limiting
Link Scraper API requests are subject to rate limiting based on your subscription plan. These limits ensure fair usage and maintain service quality for all Link Scraper users.
| Plan | Rate Limit | Description |
|---|---|---|
| Free | 5 requests/min | Hard rate limit enforced - exceeding will return 429 errors |
| Starter | No Limit | Production ready - standard traffic priority |
| Pro | No Limit | Production ready - preferred traffic priority |
| Mega | No Limit | Production ready - highest traffic priority |
Learn more about rate limiting →
Rate Limit Headers
When rate limits apply, each Link Scraper response includes headers to help you track your usage:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum number of requests allowed per time window |
| X-RateLimit-Remaining | Number of requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit window resets |
Handling Rate Limits
Free Plan: When you exceed your rate limit, Link Scraper returns a 429 Too Many Requests status code. Your application should implement appropriate backoff logic to handle this gracefully.
Paid Plans: No rate limiting or throttling applied. All paid plans (Starter, Pro, Mega) are production-ready.
Best Practices for Link Scraper:
- Monitor the rate limit headers to track your Link Scraper usage (Free plan only)
- Cache link scraper responses where appropriate to reduce API calls
- Upgrade to Pro or Mega for guaranteed no-throttle Link Scraper performance
Note: Link Scraper rate limits are separate from credit consumption. You may have credits remaining but still hit rate limits when using Link Scraper on Free tier.
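On the Free plan, the X-RateLimit-Reset header tells you how long to back off after a 429. A minimal sketch; the `seconds_until_reset` helper is our own name:

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds to wait before retrying, from the X-RateLimit-Reset header.

    Returns 0 when the header is absent or the window has already reset.
    """
    now = time.time() if now is None else now
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(0, reset - now)

# Example: the window resets 30 seconds after "now".
wait = seconds_until_reset({"X-RateLimit-Reset": "1700000030"}, now=1700000000)
print(wait)  # 30
```

In a retry loop you would sleep for `wait` seconds after a 429 response, then reissue the request.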
Error Codes
The Link Scraper API uses standard HTTP status codes to indicate success or failure:
| Code | Message | Description | Solution |
|---|---|---|---|
| 200 | OK | Request successful, data returned | No action needed - request was successful |
| 400 | Bad Request | Invalid request parameters or malformed request | Check required parameters and ensure values match expected formats |
| 401 | Unauthorized | Missing or invalid API key | Include x-api-key header with valid API key from dashboard |
| 403 | Forbidden | API key lacks permission or insufficient credits | Check credit balance in dashboard or upgrade plan |
| 429 | Too Many Requests | Rate limit exceeded (Free: 5 req/min) | Implement request throttling or upgrade to paid plan |
| 500 | Internal Server Error | Server error occurred | Retry request after a few seconds, contact support if persists |
| 503 | Service Unavailable | API temporarily unavailable | Wait and retry, check status page for maintenance updates |
Learn more about error handling →
Need help? Contact support with your X-Request-ID for assistance.
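The table above reduces to a simple policy: 429, 500, and 503 are worth retrying, while 4xx client errors need a fixed request first. A sketch; the `classify` helper is our own name:

```python
RETRYABLE = {429, 500, 503}  # per the error-code table: throttling and server-side errors

def classify(status_code):
    """Map a Link Scraper HTTP status code to a coarse handling action."""
    if status_code == 200:
        return "ok"
    if status_code in RETRYABLE:
        return "retry"        # back off, then try the request again
    if status_code in (400, 401, 403):
        return "fix-request"  # bad parameters, invalid key, or insufficient credits
    return "error"            # anything unexpected: surface it

print(classify(429))  # retry
```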
Integrate Link Scraper with SDKs
Get started quickly with official Link Scraper SDKs for your preferred language. Each library handles authentication, request formatting, and error handling automatically.
Available for Node.js, Python, C#/.NET, and Android/Java. All SDKs are open source and regularly updated.
Integrate Link Scraper with No-Code API Tools
Connect the Link Scraper API to your favorite automation platform without writing code. Build workflows that leverage link scraper data across thousands of apps.
All platforms use your same API key to access Link Scraper. Visit our integrations hub for step-by-step setup guides.
Frequently Asked Questions
How do I get an API key for Link Scraper?
Create a free account, then open your dashboard: your API key is listed there under API Keys.
How many credits does Link Scraper cost?
Each successful Link Scraper API call consumes credits based on plan tier. Check the pricing section above for the exact credit cost. Failed requests and errors don't consume credits, so you only pay for successful link scraper lookups.
Can I use Link Scraper in production?
The free plan is for testing and development only. For production use of Link Scraper, upgrade to a paid plan (Starter, Pro, or Mega) which includes commercial use rights, no attribution requirements, and guaranteed uptime SLAs. All paid plans are production-ready.
Can I use Link Scraper from a browser?
Yes. The API responds with wildcard CORS headers, so you can call it directly from browser-based JavaScript without a proxy server.
What happens if I exceed my Link Scraper credit limit?
When you reach your monthly credit limit, Link Scraper API requests will return an error until you upgrade your plan or wait for the next billing cycle. You'll receive notifications at 80% and 95% usage to give you time to upgrade if needed.