Every SOAP service you've touched, every RSS feed you've consumed, every SVG you've manipulated: they're all XML. Browsers can parse it natively with DOMParser (and query it with XPath via document.evaluate()), and Node.js has a solid npm library. The tricky part isn't the parsing itself; it's navigating the resulting DOM, handling namespaces, and not getting bitten by the quirks that catch everyone the first time. Let's go through the real patterns.

DOMParser — Parsing XML in the Browser

The browser's built-in DOMParser API converts an XML string into a DOM document. Use the MIME type 'application/xml' (not 'text/html') so the parser applies strict XML rules:

js
const xmlString = `<?xml version="1.0" encoding="UTF-8"?>
<library>
  <book isbn="978-0-13-110362-7">
    <title>The C Programming Language</title>
    <authors>
      <author>Brian W. Kernighan</author>
      <author>Dennis M. Ritchie</author>
    </authors>
    <year>1988</year>
    <price currency="USD">45.99</price>
  </book>
  <book isbn="978-0-201-63361-0">
    <title>The Pragmatic Programmer</title>
    <authors>
      <author>Andrew Hunt</author>
      <author>David Thomas</author>
    </authors>
    <year>1999</year>
    <price currency="USD">52.00</price>
  </book>
</library>`;

const parser = new DOMParser();
const doc = parser.parseFromString(xmlString, 'application/xml');

// Always check for parse errors first
const parseError = doc.querySelector('parsererror');
if (parseError) {
  throw new Error('XML parse failed: ' + parseError.textContent);
}

console.log(doc.documentElement.tagName); // library

Always check for parsererror. Unlike JSON.parse(), which throws on bad input, DOMParser returns a document containing a <parsererror> element when parsing fails; it doesn't throw an exception. If you skip the error check, you'll silently operate on a malformed document and get confusing results downstream.

Navigating the DOM — getElementsByTagName vs querySelector

Once you have a parsed document, you have two main APIs for finding elements. Both work, but they have different strengths:

js
// getElementsByTagName — returns a live HTMLCollection
const books = doc.getElementsByTagName('book');
console.log(books.length); // 2

// querySelector / querySelectorAll — CSS selector syntax, returns NodeList
const firstTitle = doc.querySelector('title').textContent;
console.log(firstTitle); // The C Programming Language

// Get all titles
const titles = [...doc.querySelectorAll('title')].map(el => el.textContent);
console.log(titles);
// ['The C Programming Language', 'The Pragmatic Programmer']

// Reading attributes
const firstBook = doc.querySelector('book');
const isbn = firstBook.getAttribute('isbn');
console.log(isbn); // 978-0-13-110362-7

// Reading the currency attribute from price
const priceEl = firstBook.querySelector('price');
console.log(priceEl.textContent);           // 45.99
console.log(priceEl.getAttribute('currency')); // USD

I prefer querySelector for targeted lookups — the CSS selector syntax is familiar and concise. Use getElementsByTagName when you need all elements with a given tag and want a live collection (though in practice, a spread NodeList is usually cleaner).

Extracting Structured Data — A Practical Pattern

Here's how to map an XML document into a clean JavaScript array of objects — the pattern you'll use when consuming a real XML API response:

js
function parseLibraryXml(xmlString) {
  const parser = new DOMParser();
  const doc = parser.parseFromString(xmlString, 'application/xml');

  if (doc.querySelector('parsererror')) {
    throw new Error('Invalid XML');
  }

  return [...doc.querySelectorAll('book')].map(book => ({
    isbn: book.getAttribute('isbn'),
    title: book.querySelector('title').textContent.trim(),
    authors: [...book.querySelectorAll('author')].map(a => a.textContent.trim()),
    year: parseInt(book.querySelector('year').textContent, 10),
    price: {
      amount: parseFloat(book.querySelector('price').textContent),
      currency: book.querySelector('price').getAttribute('currency')
    }
  }));
}

const books = parseLibraryXml(xmlString);
console.log(books[0].title);          // The C Programming Language
console.log(books[0].authors);        // ['Brian W. Kernighan', 'Dennis M. Ritchie']
console.log(books[0].price.amount);   // 45.99

Handling Namespaced XML

Namespaces are where most developers hit a wall. SOAP responses, Atom feeds, and SVG all use XML namespaces — and a naive querySelector('body') will return null on a SOAP document because the element is actually soap:Body. Here's how to handle it correctly:

js
const soapResponse = `<?xml version="1.0"?>
<soap:Envelope
  xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:m="http://www.example.com/orders">
  <soap:Header/>
  <soap:Body>
    <m:GetOrderResponse>
      <m:OrderId>ORD-5521</m:OrderId>
      <m:Status>Shipped</m:Status>
      <m:Total currency="EUR">289.50</m:Total>
    </m:GetOrderResponse>
  </soap:Body>
</soap:Envelope>`;

const parser = new DOMParser();
const doc = parser.parseFromString(soapResponse, 'application/xml');

// Option 1: getElementsByTagNameNS — explicit namespace URI
const SOAP_NS = 'http://schemas.xmlsoap.org/soap/envelope/';
const ORDER_NS = 'http://www.example.com/orders';

const body = doc.getElementsByTagNameNS(SOAP_NS, 'Body')[0];
const orderId = body.getElementsByTagNameNS(ORDER_NS, 'OrderId')[0].textContent;
const status = body.getElementsByTagNameNS(ORDER_NS, 'Status')[0].textContent;

console.log(orderId); // ORD-5521
console.log(status);  // Shipped

// Option 2: XPath with namespace resolver (more flexible)
function nsResolver(prefix) {
  const namespaces = {
    soap: 'http://schemas.xmlsoap.org/soap/envelope/',
    m: 'http://www.example.com/orders'
  };
  return namespaces[prefix] || null;
}

const xpathResult = doc.evaluate(
  '//m:OrderId',
  doc,
  nsResolver,
  XPathResult.STRING_TYPE,
  null
);
console.log(xpathResult.stringValue); // ORD-5521

XPath Queries with evaluate()

XPath is a query language for XML documents. The browser exposes it via document.evaluate(). It's more powerful than CSS selectors for XML — you can query by attribute value, position, text content, and ancestry. See the MDN XPath docs for the full expression syntax:

js
// Using our library XML document from earlier
function xpath(doc, expression, contextNode = doc) {
  const result = doc.evaluate(
    expression,
    contextNode,
    null,  // namespace resolver — null for non-namespaced XML
    XPathResult.ANY_TYPE,
    null
  );
  return result;
}

// Get all book titles
const titlesResult = xpath(doc, '//book/title');
const titles = [];
let node;
while ((node = titlesResult.iterateNext())) {
  titles.push(node.textContent);
}
console.log(titles);
// ['The C Programming Language', 'The Pragmatic Programmer']

// Get the book with a specific ISBN
const bookResult = doc.evaluate(
  '//book[@isbn="978-0-13-110362-7"]/title',
  doc, null,
  XPathResult.STRING_TYPE,
  null
);
console.log(bookResult.stringValue); // The C Programming Language

// Get books priced over $50
const expensiveResult = xpath(doc, '//book[price > 50]/title');
let expensiveNode;
while ((expensiveNode = expensiveResult.iterateNext())) {
  console.log(expensiveNode.textContent); // The Pragmatic Programmer
}

Node.js — fast-xml-parser (the Best Option)

Node.js doesn't have DOMParser, and it ships no XML parser at all. You can pull in a DOM implementation like @xmldom/xmldom, or a streaming SAX library like sax, but for most use cases fast-xml-parser is the right choice: it's fast, zero-dependency, and returns plain JavaScript objects:

bash
npm install fast-xml-parser
js
import { XMLParser } from 'fast-xml-parser';

const xmlString = `<?xml version="1.0"?>
<library>
  <book isbn="978-0-13-110362-7">
    <title>The C Programming Language</title>
    <year>1988</year>
    <price currency="USD">45.99</price>
  </book>
  <book isbn="978-0-201-63361-0">
    <title>The Pragmatic Programmer</title>
    <year>1999</year>
    <price currency="USD">52.00</price>
  </book>
</library>`;

const parser = new XMLParser({
  ignoreAttributes: false,     // include XML attributes
  attributeNamePrefix: '@_',   // prefix attributes to distinguish from elements
  isArray: (tagName) => tagName === 'book'  // always treat <book> as an array
});

const result = parser.parse(xmlString);
const books = result.library.book;

books.forEach(book => {
  console.log(book.title);       // The C Programming Language
  console.log(book['@_isbn']);   // 978-0-13-110362-7
  console.log(book.price['#text']);      // 45.99
  console.log(book.price['@_currency']); // USD
});

The isArray option is crucial. If your XML has a list element that sometimes contains one item and sometimes many, fast-xml-parser will give you an object for one item and an array for many. The isArray option forces consistent array behaviour for named tags — always use it for elements you know can repeat.

Error Handling in Node.js

js
import { XMLParser, XMLValidator } from 'fast-xml-parser';

function parseXmlSafely(xmlString) {
  // Validate first — returns true or an error object
  const validation = XMLValidator.validate(xmlString);
  if (validation !== true) {
    throw new Error(`Invalid XML: ${validation.err.msg} at line ${validation.err.line}`);
  }

  const parser = new XMLParser({ ignoreAttributes: false, attributeNamePrefix: '@_' });
  return parser.parse(xmlString);
}

try {
  const data = parseXmlSafely(xmlString);
  console.log(data);
} catch (err) {
  console.error('XML parsing failed:', err.message);
}

Related Tools

When working with XML in JavaScript projects: XML Formatter to pretty-print minified responses, XML Validator to check well-formedness before parsing, XML XPath Tester to experiment with XPath queries, and XML to JSON if you want to convert to a simpler structure.

Wrapping Up

In the browser, DOMParser with 'application/xml' is your go-to — just remember to check for parsererror. For namespaced XML, use getElementsByTagNameNS or XPath with a namespace resolver. In Node.js, fast-xml-parser gives you clean JavaScript objects without the DOM overhead. The patterns here cover 95% of real-world XML parsing scenarios — SOAP responses, RSS feeds, configuration files, and more.