@sharpapi/sharpapi-node-parse-resume

SharpAPI.com Node.js SDK for parsing resumes

Source: npm · Version: 1.0.0 · Maintainers: 1 · Weekly downloads: 1

Resume/CV Parser API for Node.js

📄 Parse resumes and extract structured data with AI — powered by SharpAPI.


SharpAPI Resume Parser extracts structured information from resume/CV documents in various formats (PDF, DOC, DOCX, TXT). Perfect for ATS systems, recruitment platforms, and HR automation tools.

📋 Table of Contents

  • Requirements
  • Installation
  • Usage
  • API Documentation
  • Examples
  • Use Cases
  • Extraction Capabilities
  • API Endpoint
  • License
  • Support

Requirements

  • Node.js >= 16.x
  • npm or yarn

Installation

Step 1. Install the package via npm:

npm install @sharpapi/sharpapi-node-parse-resume

Step 2. Get your API key

Visit SharpAPI.com to get your API key.

Usage

const { SharpApiParseResumeService } = require('@sharpapi/sharpapi-node-parse-resume');
const fs = require('fs');

const apiKey = process.env.SHARP_API_KEY;
const service = new SharpApiParseResumeService(apiKey);

async function parseResume() {
  try {
    // Read resume file
    const resumeFile = fs.readFileSync('/path/to/resume.pdf');

    // Submit parsing job
    const statusUrl = await service.parseResume(resumeFile, 'resume.pdf', 'English');
    console.log('Job submitted. Status URL:', statusUrl);

    // Fetch results (polls automatically until complete)
    const result = await service.fetchResults(statusUrl);
    const parsedData = result.getResultJson();

    console.log('Candidate:', parsedData.result.name);
    console.log('Email:', parsedData.result.email);
    console.log('Skills:', parsedData.result.skills.join(', '));
  } catch (error) {
    console.error('Error:', error.message);
  }
}

parseResume();

API Documentation

Methods

parseResume(fileBuffer: Buffer, fileName: string, language?: string): Promise<string>

Parses a resume/CV file and extracts structured information.

Parameters:

  • fileBuffer (Buffer, required): The resume file content as a Buffer
  • fileName (string, required): Original filename with extension (e.g., 'resume.pdf')
  • language (string, optional): Expected resume language (default: 'English')

Supported File Formats:

  • PDF (.pdf)
  • Microsoft Word (.doc, .docx)
  • Plain Text (.txt)
  • Rich Text Format (.rtf)
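The format list above can be enforced client-side before a file ever reaches the API. The helper below is an illustrative sketch based on that list; it is not part of the SDK:

```javascript
// Supported extensions, per the format list above
const SUPPORTED_EXTENSIONS = ['.pdf', '.doc', '.docx', '.txt', '.rtf'];

// Returns true if the filename carries a supported resume extension.
// Illustrative helper — not part of the SDK.
function isSupportedResumeFormat(fileName) {
  const dot = fileName.lastIndexOf('.');
  if (dot === -1) return false;
  return SUPPORTED_EXTENSIONS.includes(fileName.slice(dot).toLowerCase());
}
```

Calling this before `parseResume()` lets you reject unsupported uploads without spending an API request.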

Returns:

  • Promise&lt;string&gt;: Status URL for polling the job result

Response Format

The API returns comprehensive structured data extracted from the resume:

{
  "data": {
    "type": "api_job_result",
    "id": "5de4887a-0dfd-49b6-8edb-9280e468c210",
    "attributes": {
      "status": "success",
      "type": "hr_parse_resume",
      "result": {
        "name": "John Doe",
        "email": "john.doe@email.com",
        "phone": "+1-555-123-4567",
        "location": {
          "city": "San Francisco",
          "state": "California",
          "country": "United States"
        },
        "summary": "Experienced software engineer with 8+ years in full-stack development, specializing in cloud-native applications and microservices architecture.",
        "work_experience": [
          {
            "job_title": "Senior Software Engineer",
            "company": "Tech Corp",
            "location": "San Francisco, CA",
            "start_date": "2020-03",
            "end_date": "Present",
            "description": "Led development of microservices architecture, improved system performance by 40%, mentored junior developers."
          },
          {
            "job_title": "Software Engineer",
            "company": "StartupXYZ",
            "location": "San Francisco, CA",
            "start_date": "2017-06",
            "end_date": "2020-02",
            "description": "Developed RESTful APIs and implemented CI/CD pipelines."
          }
        ],
        "education": [
          {
            "degree": "Bachelor of Science in Computer Science",
            "institution": "University of California",
            "location": "Berkeley, CA",
            "graduation_date": "2017-05"
          }
        ],
        "skills": [
          "JavaScript",
          "Node.js",
          "React",
          "AWS",
          "Docker",
          "Kubernetes",
          "PostgreSQL",
          "MongoDB",
          "CI/CD",
          "Agile"
        ],
        "certifications": [
          {
            "name": "AWS Certified Solutions Architect",
            "issuer": "Amazon Web Services",
            "date": "2021-08"
          }
        ],
        "languages": [
          {
            "language": "English",
            "proficiency": "Native"
          },
          {
            "language": "Spanish",
            "proficiency": "Intermediate"
          }
        ]
      }
    }
  }
}
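Individual resumes may lack some of these sections, so downstream code should read the payload defensively. A minimal sketch assuming the response shape shown above (the helper name is ours, not the SDK's):

```javascript
// Flattens the parsed-resume payload shown above into a compact summary.
// Optional chaining guards against sections missing from a given resume.
function summarizeCandidate(payload) {
  const r = payload?.data?.attributes?.result ?? {};
  return {
    name: r.name ?? null,
    email: r.email ?? null,
    city: r.location?.city ?? null,
    positions: r.work_experience?.length ?? 0,
    topSkills: (r.skills ?? []).slice(0, 5),
  };
}
```

With the example payload above, this yields `name: 'John Doe'`, `positions: 2`, and the first five skills; with an empty object it degrades to nulls and zeros instead of throwing.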

Examples

Parse Single Resume

const { SharpApiParseResumeService } = require('@sharpapi/sharpapi-node-parse-resume');
const fs = require('fs');

const service = new SharpApiParseResumeService(process.env.SHARP_API_KEY);

async function parseAndDisplay(resumePath) {
  const fileBuffer = fs.readFileSync(resumePath);
  const fileName = resumePath.split('/').pop();

  const statusUrl = await service.parseResume(fileBuffer, fileName);
  const result = await service.fetchResults(statusUrl);
  const data = result.getResultJson().result;

  console.log('📋 Resume Summary:');
  console.log(`Name: ${data.name}`);
  console.log(`Email: ${data.email}`);
  console.log(`Phone: ${data.phone}`);
  console.log(`Location: ${data.location.city}, ${data.location.country}`);
  console.log(`\nExperience: ${data.work_experience.length} positions`);
  console.log(`Education: ${data.education.length} degrees`);
  console.log(`Skills: ${data.skills.join(', ')}`);
}

parseAndDisplay('./resumes/john_doe_resume.pdf');

Batch Resume Processing

const { SharpApiParseResumeService } = require('@sharpapi/sharpapi-node-parse-resume');
const fs = require('fs');
const path = require('path');

const service = new SharpApiParseResumeService(process.env.SHARP_API_KEY);

async function processBatchResumes(resumeDirectory) {
  const files = fs.readdirSync(resumeDirectory);
  const candidates = [];

  for (const file of files) {
    if (file.match(/\.(pdf|docx|doc)$/i)) {
      const filePath = path.join(resumeDirectory, file);
      const fileBuffer = fs.readFileSync(filePath);

      try {
        const statusUrl = await service.parseResume(fileBuffer, file);
        const result = await service.fetchResults(statusUrl);
        const data = result.getResultJson().result;

        candidates.push({
          fileName: file,
          name: data.name,
          email: data.email,
          skills: data.skills,
          experience: data.work_experience.length,
          parsedSuccessfully: true
        });
      } catch (error) {
        candidates.push({
          fileName: file,
          parsedSuccessfully: false,
          error: error.message
        });
      }
    }
  }

  return candidates;
}

processBatchResumes('./incoming_resumes').then((allCandidates) => {
  console.log(`Processed ${allCandidates.length} resumes`);
  console.log(`Successful: ${allCandidates.filter(c => c.parsedSuccessfully).length}`);
});

ATS Integration

const { SharpApiParseResumeService } = require('@sharpapi/sharpapi-node-parse-resume');

const service = new SharpApiParseResumeService(process.env.SHARP_API_KEY);

async function addCandidateToATS(resumeBuffer, fileName, jobId) {
  // Parse resume
  const statusUrl = await service.parseResume(resumeBuffer, fileName);
  const result = await service.fetchResults(statusUrl);
  const parsedData = result.getResultJson().result;

  // Structure for ATS
  const candidateProfile = {
    jobApplicationId: jobId,
    personalInfo: {
      fullName: parsedData.name,
      email: parsedData.email,
      phone: parsedData.phone,
      location: `${parsedData.location.city}, ${parsedData.location.country}`
    },
    professionalSummary: parsedData.summary,
    totalExperience: calculateTotalExperience(parsedData.work_experience),
    skills: parsedData.skills,
    education: parsedData.education,
    workHistory: parsedData.work_experience,
    certifications: parsedData.certifications || [],
    languages: parsedData.languages || [],
    resumeSource: 'automated_parsing',
    parsedAt: new Date().toISOString()
  };

  return candidateProfile;
}

function calculateTotalExperience(workHistory) {
  // Calculate years of experience
  let totalMonths = 0;
  workHistory.forEach(job => {
    const start = new Date(job.start_date);
    const end = job.end_date === 'Present' ? new Date() : new Date(job.end_date);
    totalMonths += (end.getFullYear() - start.getFullYear()) * 12 +
                   (end.getMonth() - start.getMonth());
  });
  return `${Math.floor(totalMonths / 12)} years, ${totalMonths % 12} months`;
}

// resumeBuffer is a Buffer holding the uploaded resume file
addCandidateToATS(resumeBuffer, 'resume.pdf', 'JOB-12345').then((candidateProfile) => {
  console.log('Candidate added to ATS:', candidateProfile.personalInfo.fullName);
});

Use Cases

  • Applicant Tracking Systems: Automatically parse incoming resumes
  • Recruitment Platforms: Extract candidate information for matching
  • HR Automation: Streamline resume screening processes
  • Talent Databases: Build searchable candidate databases
  • Job Boards: Enable resume upload and parsing features
  • Recruitment Agencies: Process high volumes of resumes efficiently
  • Career Portals: Parse user-uploaded resumes for profile creation

Extraction Capabilities

The parser extracts:

  • Personal Information: Name, email, phone, location
  • Professional Summary: Career objective or summary statement
  • Work Experience: Job titles, companies, dates, descriptions
  • Education: Degrees, institutions, graduation dates
  • Skills: Technical and soft skills
  • Certifications: Professional certifications and credentials
  • Languages: Spoken languages and proficiency levels
  • Projects: Notable projects (when available)
  • Publications: Academic or professional publications
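Since several of these sections are optional (projects and publications only appear when available), it can be useful to know which ones a given result actually contains. A small illustrative helper, assuming the result shape documented above:

```javascript
// Section keys the parser may return, per the capability list above
const SECTION_KEYS = ['summary', 'work_experience', 'education', 'skills',
                      'certifications', 'languages', 'projects', 'publications'];

// Lists which optional sections a parsed result actually contains.
// Illustrative helper, assuming the response shape documented above.
function presentSections(result) {
  return SECTION_KEYS.filter(key => {
    const v = result[key];
    return Array.isArray(v) ? v.length > 0 : Boolean(v);
  });
}
```

This makes it easy to, say, flag resumes with no detected work experience for manual review.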

API Endpoint

POST /hr/parse_resume (multipart/form-data)
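For reference, a raw multipart request against the endpoint above could be assembled with Node 18+'s built-in `fetch`/`FormData`. Note this is a sketch: the field names (`file`, `language`), the base URL, and the Bearer auth scheme are our assumptions — verify them against the official SharpAPI documentation before relying on this instead of the SDK.

```javascript
// Builds (but does not send) a multipart request for POST /hr/parse_resume.
// Field names, base URL, and auth scheme are assumptions — check the SharpAPI docs.
function buildParseResumeRequest(fileBuffer, fileName, apiKey, language = 'English') {
  const form = new FormData(); // global in Node 18+
  form.append('file', new Blob([fileBuffer]), fileName);
  form.append('language', language);
  return {
    url: 'https://sharpapi.com/api/v1/hr/parse_resume', // assumed base URL
    options: {
      method: 'POST',
      headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth scheme
      body: form,
    },
  };
}

// Usage: const { url, options } = buildParseResumeRequest(buf, 'resume.pdf', apiKey);
// then: const response = await fetch(url, options);
```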

For detailed API specifications, refer to the API documentation at SharpAPI.com.

License

This project is licensed under the MIT License. See the LICENSE.md file for details.

Support

Powered by SharpAPI - AI-Powered API Workflow Automation

Keywords

sharpapi

Package last updated on 09 Jan 2026