{"id":5386,"date":"2025-07-13T09:57:25","date_gmt":"2025-07-13T08:57:25","guid":{"rendered":"https:\/\/www.utilewebsites.nl\/knowledgebase\/draai-een-large-language-model-llm-lokaal-met-ollama\/"},"modified":"2025-07-13T10:16:26","modified_gmt":"2025-07-13T09:16:26","slug":"draai-een-large-language-model-llm-lokaal-met-ollama","status":"publish","type":"wz_knowledgebase","link":"https:\/\/www.utilewebsites.nl\/en\/knowledgebase\/draai-een-large-language-model-llm-lokaal-met-ollama\/","title":{"rendered":"Run a Large Language Model (LLM) locally with Ollama"},"content":{"rendered":"<h3 class=\"wp-block-heading\">Introduction<\/h3>\n<p>Want to experiment with Large Language Models (LLMs) without relying on cloud services? With <strong>Ollama<\/strong> you can run powerful open-source language models directly on your own computer. This not only guarantees your privacy, but also gives you full control over your data and the models you use. In this article we explain step by step how to install Ollama, use an LLM locally, and how to integrate it with popular developer tools such as LangChain and Visual Studio Code.<\/p>\n<p data-wp-editing=\"1\"><a href=\"https:\/\/www.utilewebsites.nl\/wp-content\/uploads\/2025\/07\/ollama-lokale-AI-e1752397806395.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5379\" src=\"https:\/\/www.utilewebsites.nl\/wp-content\/uploads\/2025\/07\/ollama-lokale-AI-e1752397806395.jpg\" alt=\"\" width=\"1024\" height=\"736\" srcset=\"https:\/\/www.utilewebsites.nl\/wp-content\/uploads\/2025\/07\/ollama-lokale-AI-e1752397806395.jpg 1024w, https:\/\/www.utilewebsites.nl\/wp-content\/uploads\/2025\/07\/ollama-lokale-AI-e1752397806395-300x216.jpg 300w, https:\/\/www.utilewebsites.nl\/wp-content\/uploads\/2025\/07\/ollama-lokale-AI-e1752397806395-768x552.jpg 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 
class=\"wp-block-heading\">What is Ollama?<\/h3>\n<p>Ollama is a tool that greatly simplifies the process of downloading, setting up, and running LLMs, such as Llama 3. It packages model weights and configurations into a single file, similar to how Docker works for applications. This makes it easy for both developers and enthusiasts to get started with LLMs.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">System Requirements<\/h3>\n<p>Before you begin, it's important to check that your system meets the minimum requirements. To run smaller models (around 7 billion parameters), the following is recommended:<\/p>\n<ul>\n<li><strong>RAM:<\/strong> At least <strong>8 GB<\/strong>, but <strong>16 GB<\/strong> is recommended for better performance.<\/li>\n<li><strong>Storage:<\/strong> Sufficient free disk space for the models, which can be several gigabytes in size.<\/li>\n<\/ul>\n<p>For larger models, you'll need significantly more RAM and possibly a powerful graphics card (GPU).<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">Installation<\/h3>\n<p>Ollama is available for Windows, macOS and Linux.<\/p>\n<ol>\n<li><strong>Download Ollama:<\/strong> Go to the official Ollama website at <a href=\"https:\/\/ollama.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/ollama.com\/<\/a> and download the installer for your operating system.<\/li>\n<li><strong>Install Ollama:<\/strong> Run the downloaded file and follow the installation instructions. 
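Once the installer finishes, the Ollama server runs as a background service and, by default, listens on `http://localhost:11434`. The following Python sketch confirms the server is reachable; the `server_running` helper is illustrative (not part of Ollama), and the URL assumes you have not overridden the `OLLAMA_HOST` environment variable:

```python
from urllib import error, request

# Default address of the Ollama background server; adjust if you set OLLAMA_HOST.
OLLAMA_URL = "http://localhost:11434"

def server_running(base_url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url, False otherwise."""
    try:
        with request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: no server reachable.
        return False

print("Ollama server reachable:", server_running())
```

If this prints `False`, start the Ollama application (or re-run the installer) before continuing.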
After installation, Ollama runs in the background.<\/li>\n<\/ol>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">Downloading and Running a Model<\/h3>\n<ol>\n<li><strong>Open the terminal:<\/strong>\n<ul>\n<li><strong>Windows:<\/strong> Open the Start menu, type <code>cmd<\/code> or <code>Terminal<\/code>, and press Enter.<\/li>\n<li><strong>macOS:<\/strong> Open the <code>Terminal<\/code> app from the Utilities folder.<\/li>\n<li><strong>Linux:<\/strong> Open your preferred terminal emulator.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Download a model:<\/strong> Choose a model from the Ollama library (found at <a href=\"https:\/\/ollama.com\/library\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/ollama.com\/library<\/a>). A popular and powerful model to start with is <strong>Llama 3.1<\/strong>. Download it with the following command:<\/li>\n<\/ol>\n<pre class=\"wp-block-code\"><code>ollama pull llama3.1<\/code><\/pre>\n<p>This may take a while, depending on the size of the model and your internet speed.<\/p>\n<ol start=\"3\">\n<li><strong>Run the model:<\/strong> Once the download is complete, you can use the model directly in your terminal with the following command:<\/li>\n<\/ol>\n<pre class=\"wp-block-code\"><code>ollama run llama3.1<\/code><\/pre>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">Interaction and Useful Commands<\/h3>\n<p>After running the <code>run<\/code> command, you can immediately start asking questions or giving commands to the model. 
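Besides the interactive terminal, the background server also exposes a REST API on port 11434, which you can call from your own scripts. A minimal sketch of a one-shot request to the `/api/generate` endpoint, assuming the default port and a pulled `llama3.1` model (`build_payload` and `generate` are illustrative helper names, not part of Ollama):

```python
import json
from urllib import request

def build_payload(prompt: str, model: str = "llama3.1") -> bytes:
    # stream=False asks the server for one complete JSON reply instead of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(prompt: str, model: str = "llama3.1",
             host: str = "http://localhost:11434") -> str:
    """POST a prompt to the local Ollama server and return the generated text."""
    req = request.Request(host + "/api/generate", data=build_payload(prompt, model),
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With the server running and llama3.1 pulled:
# print(generate("What is the capital of the Netherlands?"))
```

The same server also offers a `/api/chat` endpoint for multi-turn conversations; the one-shot form above is the simplest starting point.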
You essentially chat with the LLM in your terminal.<\/p>\n<ul>\n<li><code>ollama list<\/code>: Shows a list of all models you have downloaded locally.<\/li>\n<li><code>ollama rm &lt;model-name&gt;<\/code>: Removes a specific model to free up disk space.<\/li>\n<li><code>\/bye<\/code>: Closes the current chat session with a model.<\/li>\n<\/ul>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">Integration with LangChain<\/h3>\n<p><strong>LangChain<\/strong> is a popular framework for building applications with LLMs. You can easily integrate your locally running Ollama model into both Python and JavaScript\/TypeScript projects.<\/p>\n<h4 class=\"wp-block-heading\">Python<\/h4>\n<ol>\n<li><strong>Install the Python package:<\/strong><\/li>\n<\/ol>\n<pre class=\"wp-block-code\"><code>pip install langchain-ollama<\/code><\/pre>\n<ol start=\"2\">\n<li><strong>Use in your code:<\/strong><\/li>\n<\/ol>\n<pre class=\"wp-block-code\"><code class=\"language-python\" lang=\"python\">from langchain_ollama import ChatOllama\n\nllm = ChatOllama(model=\"llama3.1\")\nresponse = llm.invoke(\"What is the capital of the Netherlands?\")\nprint(response.content)<\/code><\/pre>\n<h4 class=\"wp-block-heading\">Node.js (JavaScript\/TypeScript)<\/h4>\n<p>LangChain is also available for JavaScript\/TypeScript, ideal for back-ends (Node.js) or front-end frameworks (such as Vue.js, React or Svelte).<\/p>\n<ol>\n<li><strong>Install via npm or yarn:<\/strong><\/li>\n<\/ol>\n<pre class=\"wp-block-code\"><code># For npm\nnpm install @langchain\/ollama\n\n# For yarn\nyarn add @langchain\/ollama<\/code><\/pre>\n<ol start=\"2\">\n<li><strong>Use in your code:<\/strong><\/li>\n<\/ol>\n<pre class=\"wp-block-code\"><code class=\"language-javascript\" lang=\"javascript\">import { ChatOllama } from \"@langchain\/ollama\";\n\nasync function main() {\n  const llm = new ChatOllama({ model: \"llama3.1\" });\n  const response = await llm.invoke(\"What is the 
capital of the Netherlands?\");\n  console.log(response.content);\n}\n\nmain();<\/code><\/pre>\n<p>This way, you can seamlessly switch between cloud providers and your own local Ollama instance in both your Python backend and your JavaScript stack.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">Integration with Visual Studio Code<\/h3>\n<p>You can also use your locally running LLM as a complement to your development environment in <strong>Visual Studio Code<\/strong>. This gives you the ability to generate code and ask questions using your own, locally hosted model.<\/p>\n<ol>\n<li><strong>Ensure Ollama is running:<\/strong> The Ollama process must be active in the background.<\/li>\n<li><strong>Install a compatible extension:<\/strong> Search the VS Code Marketplace for an extension that offers Ollama integration, such as <strong>Continue<\/strong>.<\/li>\n<li><strong>Configure the extension:<\/strong> Follow the extension's instructions to set Ollama as the provider and select the model you want to use (for example <code>llama3.1<\/code>).<\/li>\n<\/ol>\n<p>Now you can call your local model for code suggestions and other programming tasks in the extension's chat interface, entirely within your own environment.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n<p>Ollama makes running LLMs locally accessible to a wide audience. Whether you're a developer looking to build an AI application with LangChain, or want to improve your programming workflow in VS Code, Ollama lets you get started quickly and easily. With an active community and ongoing development, Ollama is an excellent choice for anyone looking to explore the world of <strong>local AI<\/strong>.<\/p>\n","protected":false},"author":2,"featured_media":5380,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"wzkb_category":[93],"wzkb_tag":[],"class_list":["post-5386","wz_knowledgebase","type-wz_knowledgebase","status-publish","has-post-thumbnail","hentry","wzkb_category-ai-artificial-intelligence"]}