{"id":10759,"date":"2026-05-15T12:45:54","date_gmt":"2026-05-15T11:45:54","guid":{"rendered":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/"},"modified":"2026-05-15T12:45:58","modified_gmt":"2026-05-15T11:45:58","slug":"i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup","status":"publish","type":"post","link":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/","title":{"rendered":"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026"},"content":{"rendered":"<div class=\"anp-pro-entry\">\n<p class=\"anp-pro-lead\">Ever since I jumped headfirst into the local LLM rabbit hole, I\u2019ve grown fond of revitalizing old PCs by turning them into reliable AI workstations. With the right tweaks, I\u2019ve even managed to run powerful LLMs that rival their cloud-based counterparts on something as outdated as a 10-year-old rig. That said, most of my hardcore LLM experiments involve full-fledged x86 gaming systems with dedicated graphics cards and plenty of RAM.<\/p>\n<p class=\"anp-pro-p\">Meanwhile, a Raspberry Pi 5 can handle models of up to 4B parameters without buckling under the load, which makes it a surprisingly decent option for hosting embedding models and simple chatbots. But since I wanted to run models that wouldn\u2019t otherwise fit on this SBC, I figured I could try clustering some spare boards. 
And well, it\u2019s probably one of the most cursed projects I\u2019ve worked on (but it still has a sliver of utility).<\/p>\n<figure class=\"anp-pro-inline-figure\" style=\"margin:1.75em auto;text-align:center;max-width:100%\"><img decoding=\"async\" class=\"anp-pro-inline-img\" src=\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-8-failed.jpg\" alt=\"\" style=\"margin:0 auto;max-width:100%;width:auto;height:auto;object-fit:contain;object-position:center\" loading=\"lazy\"><\/figure>\n<p class=\"anp-pro-p\">Starting with the SBCs I wanted to use as the guinea pigs for this project, I\u2019d initially planned to spin up a cluster of three devices. However, I quickly realized that most of my ARM boards were already engaged in some experiment or another, leaving a Raspberry Pi 5, a Libre Computer Alta, and a La Frite as the only viable options. Unfortunately, the La Frite is far too weak for this project, and its USB 2.0 socket and 100Mbps Ethernet would end up bottlenecking an already feeble setup. So, I went with a 2-node cluster comprising a Raspberry Pi 5 (8GB) and a Libre Computer Alta (4GB), with llama.cpp\u2019s RPC backend splitting inference between the two systems.<\/p>\n<p class=\"anp-pro-p\">Fortunately, the setup process was a lot simpler than I\u2019d anticipated, even though I had to compile llama.cpp from scratch. Once I\u2019d armed both systems with a CLI distro (an older version of Ubuntu on the Alta and Raspberry Pi OS Lite on you-know-what) and configured openssh-server, I logged into them via PuTTY and installed the prerequisite packages by running sudo apt install -y git build-essential cmake pkg-config. Then, I cloned the llama.cpp repo with git clone https:\/\/github.com\/ggml-org\/llama.cpp.git and switched to its freshly created directory via the cd llama.cpp command. 
Finally, I created a build folder called build-rpc via mkdir -p build-rpc, switched to it, and compiled llama.cpp with its RPC backend enabled.<\/p>\n<p class=\"anp-pro-p\">Since I wanted the Alta SBC to act as the secondary server, I ran .\/bin\/rpc-server -H 0.0.0.0 -p 50052 on it and left the RPC server active. After using scp to move some LLMs from my main PC to the Raspberry Pi node, I ran the .\/bin\/llama-server -m \/home\/ayush\/models\/Qwen3.5-2B-Q4_K_M.gguf --rpc 192.168.0.150:50052 --host 0.0.0.0 --port 8080 command and waited for it to finish loading the model.<\/p>\n<p class=\"anp-pro-p\">Since I was using the fairly lightweight Gemma 3 4B, I expected my cluster to perform somewhat better than the Raspberry Pi alone. However, running a couple of prompts via the llama-server web UI proved otherwise. And I\u2019m not talking about complex prompts or inference tasks involving MCP servers, either. For something as simple as \u201cTell me something cool,\u201d the cluster struggled to hit 2.20 tokens\/second. So, I restarted my Raspberry Pi and ran the llama-server command once again, this time without the --rpc flag. Sure enough, the inference engine managed 4.37 t\/s, almost twice as fast as the clustered setup!<\/p>\n<figure class=\"anp-pro-inline-figure\" style=\"margin:1.75em auto;text-align:center;max-width:100%\"><img decoding=\"async\" class=\"anp-pro-inline-img\" src=\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-9.jpg\" alt=\"\" style=\"margin:0 auto;max-width:100%;width:auto;height:auto;object-fit:contain;object-position:center\" loading=\"lazy\"><\/figure>\n<p class=\"anp-pro-p\">In theory, the cluster should either hit higher token generation rates or, at the very least, match a Raspberry Pi-only setup. 
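The compile-and-launch sequence above follows llama.cpp\u2019s documented RPC workflow. Here\u2019s a sketch of the commands involved, assuming a recent llama.cpp checkout (where the CMake option is GGML_RPC); the IP address, ports, and model path are the ones from my setup and will differ on other networks:

```shell
# On both nodes, from inside the build-rpc directory:
# compile llama.cpp with the RPC backend enabled
cmake .. -DGGML_RPC=ON
cmake --build . --config Release -j

# On the secondary node (the Alta): expose it as an RPC worker
./bin/rpc-server -H 0.0.0.0 -p 50052

# On the primary node (the Raspberry Pi): load the model and
# offload part of it to the worker over the network
./bin/llama-server -m /home/ayush/models/Qwen3.5-2B-Q4_K_M.gguf \
  --rpc 192.168.0.150:50052 --host 0.0.0.0 --port 8080
```

Once the worker is up, the web UI on port 8080 behaves just like a single-node llama-server instance; the RPC split is invisible to the client.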
But the slowdown makes perfect sense once I factor the network and storage bottlenecks into the equation. You see, both SBCs feature a 1GbE connection, which is slow for the constant tensor traffic that split inference generates. Worse still, I\u2019d run out of SSDs in my home lab, so I had to make do with microSD cards, which certainly didn\u2019t help. Toss in the fact that LLM inference is very sensitive to latency, and it\u2019s clear why my cluster performs so poorly. I was about to label this project a failure and wrap things up, but I wanted to try one last experiment before dissolving the cluster\u2026<\/p>\n<p class=\"anp-pro-p\">While its lackluster performance was a total buzzkill, my main objective behind this wacky project was to run large models that a Raspberry Pi with only 8GB of RAM wouldn\u2019t be able to host. So, I spun llama-server up once again without the RPC flag and began working my way up in model parameter size. Qwen 3.5 (9B) is where llama-server crashed, as the SBC couldn\u2019t accommodate the large model.<\/p>\n<p class=\"anp-pro-p\">But when I ran it with the RPC flag pointing to the Alta, llama-server was able to load the LLM with relative ease. Just to satisfy my curiosity, I opened the web UI and began prompting the LLM. Well, it definitely worked, though it could only generate 1.27 tokens per second. That\u2019s nowhere near a feasible number for my productivity tasks and coding workloads. But it\u2019s still somewhat usable for automated tasks like generating tags for bookmarks or performing OCR scans on documents, especially considering that I can leave my SBCs running all day without worrying about their energy consumption.<\/p>\n<p class=\"anp-pro-p\">And to be brutally honest, I\u2019d figured I\u2019d end up measuring seconds per token instead of tokens per second. 
So, a token generation rate of 1.27 t\/s is a pleasant surprise, especially for a model that a single SBC couldn\u2019t even load in the first place. While I probably won\u2019t keep this cluster around for everyday inference, llama.cpp\u2019s RPC backend definitely sounds useful. In fact, I might just try it on my current LXC-based LLM-hosting workstations, which feature full-fledged 10G NICs.<\/p>\n<p class=\"anp-pro-p\">Llama.cpp is an open-source framework that runs large language models locally on your computer.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The topic I ran this bulky LLM on an SBC cluster, and it&#8217;s the most unhinged &hellip; <a title=\"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026\" class=\"hm-read-more\" href=\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\"><span class=\"screen-reader-text\">I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026<\/span>Read 
more<\/a><\/p>\n","protected":false},"author":0,"featured_media":10760,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[1816,1881,816,170,867],"class_list":["post-10759","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-innovate","tag-cluster","tag-llama","tag-models","tag-raspberry","tag-setup"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026 - innovatenews.site<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026 - innovatenews.site\" \/>\n<meta property=\"og:description\" content=\"The topic I ran this bulky LLM on an SBC cluster, and it&#8217;s the most unhinged &hellip; I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026Read more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\" \/>\n<meta property=\"og:site_name\" content=\"innovatenews.site\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-15T11:45:54+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-15T11:45:58+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1600\" \/>\n\t<meta property=\"og:image:height\" content=\"900\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\"},\"author\":{\"name\":\"\",\"@id\":\"\"},\"headline\":\"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged 
setup\u2026\",\"datePublished\":\"2026-05-15T11:45:54+00:00\",\"dateModified\":\"2026-05-15T11:45:58+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\"},\"wordCount\":1151,\"image\":{\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg\",\"keywords\":[\"Cluster\",\"Llama\",\"Models\",\"Raspberry\",\"Setup\"],\"articleSection\":[\"Innovate\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\",\"url\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\",\"name\":\"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026 - 
innovatenews.site\",\"isPartOf\":{\"@id\":\"https:\/\/innovatenews.site\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg\",\"datePublished\":\"2026-05-15T11:45:54+00:00\",\"dateModified\":\"2026-05-15T11:45:58+00:00\",\"author\":{\"@id\":\"\"},\"breadcrumb\":{\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage\",\"url\":\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg\",\"contentUrl\":\"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg\",\"width\":1600,\"height\":900},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/innovatenews.site\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged 
setup\u2026\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/innovatenews.site\/#website\",\"url\":\"https:\/\/innovatenews.site\/\",\"name\":\"innovatenews.site\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/innovatenews.site\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026 - innovatenews.site","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/","og_locale":"en_US","og_type":"article","og_title":"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026 - innovatenews.site","og_description":"The topic I ran this bulky LLM on an SBC cluster, and it&#8217;s the most unhinged &hellip; I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026Read more","og_url":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/","og_site_name":"innovatenews.site","article_published_time":"2026-05-15T11:45:54+00:00","article_modified_time":"2026-05-15T11:45:58+00:00","og_image":[{"width":1600,"height":900,"url":"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#article","isPartOf":{"@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/"},"author":{"name":"","@id":""},"headline":"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026","datePublished":"2026-05-15T11:45:54+00:00","dateModified":"2026-05-15T11:45:58+00:00","mainEntityOfPage":{"@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/"},"wordCount":1151,"image":{"@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage"},"thumbnailUrl":"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg","keywords":["Cluster","Llama","Models","Raspberry","Setup"],"articleSection":["Innovate"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/","url":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/","name":"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged setup\u2026 - 
innovatenews.site","isPartOf":{"@id":"https:\/\/innovatenews.site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage"},"image":{"@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage"},"thumbnailUrl":"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg","datePublished":"2026-05-15T11:45:54+00:00","dateModified":"2026-05-15T11:45:58+00:00","author":{"@id":""},"breadcrumb":{"@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#primaryimage","url":"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg","contentUrl":"https:\/\/innovatenews.site\/wp-content\/uploads\/2026\/05\/llama-cpp-raspberry-pi-cluster-11.jpg","width":1600,"height":900},{"@type":"BreadcrumbList","@id":"https:\/\/innovatenews.site\/index.php\/2026\/05\/15\/i-ran-this-bulky-llm-on-an-sbc-cluster-and-its-the-most-unhinged-setup\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/innovatenews.site\/"},{"@type":"ListItem","position":2,"name":"I ran this bulky LLM on an SBC cluster, and it&#039;s the most unhinged 
setup\u2026"}]},{"@type":"WebSite","@id":"https:\/\/innovatenews.site\/#website","url":"https:\/\/innovatenews.site\/","name":"innovatenews.site","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/innovatenews.site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/posts\/10759","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/comments?post=10759"}],"version-history":[{"count":1,"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/posts\/10759\/revisions"}],"predecessor-version":[{"id":10764,"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/posts\/10759\/revisions\/10764"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/media\/10760"}],"wp:attachment":[{"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/media?parent=10759"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/categories?post=10759"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/innovatenews.site\/index.php\/wp-json\/wp\/v2\/tags?post=10759"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}