diff --git a/content/posts/supercharge-your-bash-scripts-with-multiprocessing.md b/content/posts/supercharge-your-bash-scripts-with-multiprocessing.md
index 43e1a8c..dcb4b51 100644
--- a/content/posts/supercharge-your-bash-scripts-with-multiprocessing.md
+++ b/content/posts/supercharge-your-bash-scripts-with-multiprocessing.md
@@ -3,7 +3,7 @@ title = "Supercharge Your Bash Scripts with Multiprocessing"
date = "2021-05-05T17:08:12+03:00"
author = "Yigit Colakoglu"
authorTwitter = "theFr1nge"
-cover = ""
+cover = "images/supercharge-your-bash-scripts-with-multiprocessing.png"
tags = ["bash", "scripting", "programming"]
keywords = ["bash", "scripting"]
description = "Bash is a great tool for automating tasks and improving your workflow. However, it is ***SLOW***. Adding multiprocessing to the scripts you write can improve the performance greatly."
@@ -35,7 +35,7 @@ process you ran the command on, if you change a variable that the command in the
background uses while it runs, it will not be affected. Here is a simple
example:
-```bash
+{{< code language="bash" id="1" expand="Show" collapse="Hide" isCollapsed="false" >}}
foo="yeet"
function run_in_background(){
@@ -47,7 +47,7 @@ run_in_background & # Spawn the function run_in_background in the background
foo="YEET"
echo "The value of foo changed to $foo."
wait # wait for the background process to finish
-```
+{{< /code >}}
This should output:
@@ -67,14 +67,14 @@ efficient route first before moving on to the big boy implementation. Let's open
First of all, let's write a very simple function that allows us to easily test
our implementation:
-```bash
+{{< code language="bash" id="2" expand="Show" collapse="Hide" isCollapsed="false" >}}
function tester(){
# A function that takes an int as a parameter and sleeps
echo "$1"
sleep "$1"
echo "ENDED $1"
}
-```
+{{< /code >}}
Now that we have something to run in our processes, we need to spawn several
of them in a controlled manner. Controlled being the keyword here. That's because
@@ -82,7 +82,7 @@ each system has a maximum number of processes that can be spawned (You can find
that out with the command `ulimit -u`). In our case, we want to limit the
processes being run to the variable `num_processes`. Here is the implementation:
-```bash
+{{< code language="bash" id="3" expand="Show" collapse="Hide" isCollapsed="false" >}}
num_processes=$1
pcount=0
for i in {1..10}; do
@@ -90,7 +90,7 @@ for i in {1..10}; do
((pcount++==0)) && wait
tester $i &
done
-```
+{{< /code >}}
This loop takes the number of processes you would like to
spawn as an argument and runs `tester` in that many processes at a time. Go ahead and test it out!
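To watch the batching happen, here is a self-contained variant of the loop above (written with an explicit `if` so it also survives `set -e`; the temp file and the `sleep` job body are demo-only):

```shell
#!/usr/bin/env bash
# Run 10 short jobs, at most 3 at a time, using the modulo counter.
# Each job appends one line to a temp file so completions can be counted.
num_processes=3
pcount=0
results=$(mktemp)
for i in {1..10}; do
  pcount=$((pcount % num_processes))
  if ((pcount++ == 0)); then
    wait                           # a full batch is running: let it finish
  fi
  { sleep 0.1; echo "job $i done" >>"$results"; } &
done
wait                               # also wait for the final, partial batch
completed=$(wc -l <"$results")
echo "completed $completed jobs"   # completed 10 jobs
rm -f "$results"
```

Every job lands in a batch of at most `num_processes`, and each `wait` stalls until the slowest member of the batch exits.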
@@ -113,7 +113,8 @@ continuously pick up jobs from the job pool without waiting for any other process to
Here is the implementation that uses job pools. Brace yourselves, because it is
kind of complicated.
-```bash
+
+{{< code language="bash" id="4" expand="Show" collapse="Hide" isCollapsed="false" >}}
job_pool_end_of_jobs="NO_JOB_LEFT"
job_pool_job_queue=/tmp/job_pool_job_queue_$$
job_pool_progress=/tmp/job_pool_progress_$$
@@ -203,7 +204,7 @@ function job_pool_wait()
job_pool_stop_workers
job_pool_start_workers ${job_pool_job_queue}
}
-```
+{{< /code >}}
Ok... But what the actual fuck is going on here???
@@ -219,7 +220,6 @@ their purposes, shall we?
fifo's man page tells us that:
```
-
NAME
fifo - first-in first-out special file, named pipe
@@ -233,7 +233,7 @@ DESCRIPTION
that processes can access the pipe using a name in the filesystem.
```
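You can poke at this behavior directly; here is a quick sketch (the fd number and file name are arbitrary):

```shell
#!/usr/bin/env bash
# A fifo behaves like a consumable queue: each read removes one line.
q=$(mktemp -u)               # path for the named pipe
mkfifo "$q"
exec 3<>"$q"                 # open it read/write so nothing blocks
printf '%s\n' job1 job2 job3 >&3
read -r first  <&3
read -r second <&3
echo "$first then $second"   # job1 then job2 -- job1 is gone from the pipe
exec 3>&-
rm -f "$q"
```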
-So put in **very** simple terms, a fifo is a named pipe that can allows
+So put in **very** simple terms, a fifo is a named pipe that allows
communication between processes. Using a fifo allows us to loop through the jobs
in the pool without having to delete them manually, because once we read them
with `read cmd args < ${job_queue}`, the job is out of the pipe and the next
@@ -285,7 +285,8 @@ inside an existing bash script. Whatever tickles your fancy. I have also
provided an example that replicates our first implementation. Just paste the
below code under our "chad" job pool script.
-```bash
+
+{{< code language="bash" id="5" expand="Show" collapse="Hide" isCollapsed="false" >}}
function tester(){
# A function that takes an int as a parameter and sleeps
echo "$1"
@@ -302,7 +303,7 @@ done
job_pool_wait
job_pool_shutdown
-```
+{{< /code >}}
Hopefully this article was (or will be) helpful to you. From now on, you don't
ever have to write single-threaded bash scripts like normies :)
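As a bonus, the fifo-plus-flock trick at the heart of the job pool can be exercised on its own. The sketch below is illustrative (the `worker` function, the `DONE` sentinel and the file names are invented for the demo), not part of the pool itself:

```shell
#!/usr/bin/env bash
# Two workers share one named pipe; an exclusive flock on the pipe's
# file descriptor guarantees each job line is consumed exactly once.
queue=$(mktemp -u); mkfifo "$queue"
log=$(mktemp)

worker() {
    exec 7<>"$queue"
    local line
    while true; do
        flock --exclusive 7          # only one worker may read at a time
        read -r line <&7
        flock --unlock 7
        if [[ "$line" == "DONE" ]]; then
            echo DONE >&7            # pass the sentinel on to the next worker
            break
        fi
        echo "worker $1 ran: $line" >>"$log"
    done
    exec 7>&-
}

worker 1 &
worker 2 &

exec 8<>"$queue"                     # keep a writer open so echoes never block
for job in build test lint docs; do
    echo "$job" >&8
done
echo DONE >&8
wait                                 # both workers exit after seeing DONE
consumed=$(wc -l <"$log")
echo "jobs consumed: $consumed"      # jobs consumed: 4
rm -f "$queue" "$log"
```

Each worker blocks on the lock, pulls exactly one line, releases the lock and runs the job, which is precisely how `job_pool_worker` avoids two workers grabbing the same command.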
diff --git a/public/about/index.html b/public/about/index.html
index 0a06d29..caeda71 100644
--- a/public/about/index.html
+++ b/public/about/index.html
@@ -176,7 +176,26 @@ hit me up through social media, I am open to chat :)
diff --git a/public/awards/index.html b/public/awards/index.html
index 03363a5..84bb172 100644
--- a/public/awards/index.html
+++ b/public/awards/index.html
@@ -165,7 +165,26 @@
diff --git a/public/index.html b/public/index.html
index 8d3c7e6..741f9a7 100644
--- a/public/index.html
+++ b/public/index.html
@@ -1,7 +1,7 @@
-
+
Fr1nge's Personal Blog
@@ -146,6 +146,47 @@
+
+
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+
+
diff --git a/public/index.xml b/public/index.xml
index 3ce6c9b..a04d06b 100644
--- a/public/index.xml
+++ b/public/index.xml
@@ -6,7 +6,339 @@
Recent content on Fr1nge's Personal BlogHugo -- gohugo.ioen-us
- Yigit Colakoglu
+ Yigit Colakoglu
+ Wed, 05 May 2021 17:08:12 +0300
+
+ Supercharge Your Bash Scripts with Multiprocessing
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Wed, 05 May 2021 17:08:12 +0300
+
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+What is multiprocessing? In the simplest terms, multiprocessing is the principle of splitting the computations or jobs that a script has to do and running them on different processes. In even simpler terms however, multiprocessing is the computer science equivalent of hiring more than one worker when you are constructing a building.
+ <p>Bash is a great tool for automating tasks and improving your workflow. However,
+it is <em><strong>SLOW</strong></em>. Adding multiprocessing to the scripts you write can improve
+the performance greatly.</p>
+<h2 id="what-is-multiprocessing">What is multiprocessing?</h2>
+<p>In the simplest terms, multiprocessing is the principle of splitting the
+computations or jobs that a script has to do and running them on different
+processes. In even simpler terms however, multiprocessing is the computer
+science equivalent of hiring more than one
+worker when you are constructing a building.</p>
+<h3 id="introducing-">Introducing “&”</h3>
+<p>While implementing multiprocessing, the sign <code>&amp;</code> is going to be our greatest
+friend. It is an essential sign if you are writing bash scripts and a very
+useful tool in general when you are in the terminal. Appending <code>&amp;</code> to a command
+makes it run in the background and allows
+the rest of the script to continue running as the command runs in the
+background. One thing to keep in mind is that since it creates a fork of the
+process you ran the command on, if you change a variable that the command in the
+background uses while it runs, it will not be affected. Here is a simple
+example:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+foo="yeet"
+
+function run_in_background(){
+ sleep 0.5
+ echo "The value of foo in the function run_in_background is $foo"
+}
+
+run_in_background & # Spawn the function run_in_background in the background
+foo="YEET"
+echo "The value of foo changed to $foo."
+wait # wait for the background process to finish
+</code></pre>
+ </div>
+
+
+<p>This should output:</p>
+<pre><code>The value of foo changed to YEET.
+The value of foo in the function run_in_background is yeet
+</code></pre><p>As you can see, the value of <code>foo</code> did not change in the background process even though
+we changed it in the main function.</p>
+<h2 id="baby-steps">Baby steps…</h2>
+<p>Just like anything related to computer science, there is more than one way of
+achieving our goal. We are going to take the easier, less intimidating but less
+efficient route first before moving on to the big boy implementation. Let’s open up vim and get to scripting!
+First of all, let’s write a very simple function that allows us to easily test
+our implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+</code></pre>
+ </div>
+
+
+<p>Now that we have something to run in our processes, we need to spawn several
+of them in a controlled manner. Controlled being the keyword here. That’s because
+each system has a maximum number of processes that can be spawned (You can find
+that out with the command <code>ulimit -u</code>). In our case, we want to limit the
+processes being run to the variable <code>num_processes</code>. Here is the implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+num_processes=$1
+pcount=0
+for i in {1..10}; do
+ ((pcount=pcount%num_processes));
+ ((pcount++==0)) && wait
+ tester $i &
+done
+</code></pre>
+ </div>
+
+
+<p>This loop takes the number of processes you would like to
+spawn as an argument and runs <code>tester</code> in that many processes at a time. Go ahead and test it out!
+You might notice, however, that the processes are run in batches. And the size of
+batches is the <code>num_processes</code> variable. The reason this happens is because
+every time we spawn <code>num_processes</code> processes, we <code>wait</code> for all the processes
+to end. This implementation is not a problem in itself, there are many cases
+where you can use this implementation and it works perfectly fine. However, if
+you don’t want this to happen, we have to dump this naive approach altogether
+and improve our tool belt.</p>
+<h2 id="real-chads-use-job-pools">Real Chads use Job Pools</h2>
+<p>The solution to the bottleneck that was introduced in our previous approach lies
+in using job pools. Job pools are where jobs created by a main process get sent
+and wait to get executed. This approach solves our problems because instead of
+spawning a new process for every job and waiting for all the processes to
+finish, we instead only create a set number of processes (workers) which
+continuously pick up jobs from the job pool without waiting for any other process to finish.
+Here is the implementation that uses job pools. Brace yourselves, because it is
+kind of complicated.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+job_pool_end_of_jobs="NO_JOB_LEFT"
+job_pool_job_queue=/tmp/job_pool_job_queue_$$
+job_pool_progress=/tmp/job_pool_progress_$$
+job_pool_pool_size=-1
+job_pool_nerrors=0
+
+function job_pool_cleanup()
+{
+ rm -f ${job_pool_job_queue}
+ rm -f ${job_pool_progress}
+}
+
+function job_pool_exit_handler()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_worker()
+{
+ local id=$1
+ local job_queue=$2
+ local cmd=
+ local args=
+
+ exec 7<> ${job_queue}
+ while [[ "${cmd}" != "${job_pool_end_of_jobs}" && -e "${job_queue}" ]]; do
+ flock --exclusive 7
+ IFS=$'\v'
+ read cmd args <${job_queue}
+ set -- ${args}
+ unset IFS
+ flock --unlock 7
+ if [[ "${cmd}" == "${job_pool_end_of_jobs}" ]]; then
+ echo "${cmd}" >&7
+ else
+ { ${cmd} "$@" ; }
+ fi
+
+ done
+ exec 7>&-
+}
+
+function job_pool_stop_workers()
+{
+ echo ${job_pool_end_of_jobs} >> ${job_pool_job_queue}
+ wait
+}
+
+function job_pool_start_workers()
+{
+ local job_queue=$1
+ for ((i=0; i<${job_pool_pool_size}; i++)); do
+ job_pool_worker ${i} ${job_queue} &
+ done
+}
+
+function job_pool_init()
+{
+ local pool_size=$1
+ job_pool_pool_size=${pool_size:=1}
+ rm -rf ${job_pool_job_queue}
+ rm -rf ${job_pool_progress}
+ touch ${job_pool_progress}
+ mkfifo ${job_pool_job_queue}
+ echo 0 >${job_pool_progress} &
+ job_pool_start_workers ${job_pool_job_queue}
+}
+
+function job_pool_shutdown()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_run()
+{
+ if [[ "${job_pool_pool_size}" == "-1" ]]; then
+ job_pool_init
+ fi
+ printf "%s\v" "$@" >> ${job_pool_job_queue}
+ echo >> ${job_pool_job_queue}
+}
+
+function job_pool_wait()
+{
+ job_pool_stop_workers
+ job_pool_start_workers ${job_pool_job_queue}
+}
+</code></pre>
+ </div>
+
+
+<p>Ok… But what the actual fuck is going on here???</p>
+<h3 id="fifo-and-flock">fifo and flock</h3>
+<p>In order to understand what this code is doing, you first need to understand two
+key commands that we are using, <code>fifo</code> and <code>flock</code>. Despite their complicated
+names, they are actually quite simple. Let’s check their man pages to figure out
+their purposes, shall we?</p>
+<h4 id="man-fifo">man fifo</h4>
+<p>fifo’s man page tells us that:</p>
+<pre><code>NAME
+ fifo - first-in first-out special file, named pipe
+
+DESCRIPTION
+ A FIFO special file (a named pipe) is similar to a pipe, except that
+ it is accessed as part of the filesystem. It can be opened by multiple
+ processes for reading or writing. When processes are exchanging data
+ via the FIFO, the kernel passes all data internally without writing it
+ to the filesystem. Thus, the FIFO special file has no contents on the
+ filesystem; the filesystem entry merely serves as a reference point so
+ that processes can access the pipe using a name in the filesystem.
+</code></pre><p>So put in <strong>very</strong> simple terms, a fifo is a named pipe that allows
+communication between processes. Using a fifo allows us to loop through the jobs
+in the pool without having to delete them manually, because once we read them
+with <code>read cmd args < ${job_queue}</code>, the job is out of the pipe and the next
+read outputs the next job in the pool. However, the fact that we have multiple
+processes introduces one caveat: what if two processes access the pipe at the
+same time? They would run the same command and we don’t want that. So we resort
+to using <code>flock</code>.</p>
+<h4 id="man-flock">man flock</h4>
+<p>flock’s man page defines it as:</p>
+<pre><code> SYNOPSIS
+ flock [options] file|directory command [arguments]
+ flock [options] file|directory -c command
+ flock [options] number
+
+ DESCRIPTION
+ This utility manages flock(2) locks from within shell scripts or from
+ the command line.
+
+ The first and second of the above forms wrap the lock around the
+ execution of a command, in a manner similar to su(1) or newgrp(1).
+ They lock a specified file or directory, which is created (assuming
+ appropriate permissions) if it does not already exist. By default, if
+ the lock cannot be immediately acquired, flock waits until the lock is
+ available.
+
+ The third form uses an open file by its file descriptor number. See
+ the examples below for how that can be used.
+</code></pre><p>Cool, translated to modern English that we regular folks use, <code>flock</code> is a thin
+wrapper around the C standard function <code>flock</code> (see <code>man 2 flock</code> if you are
+interested). It is used to manage locks and has several forms. The one we are
+interested in is the third one. According to the man page, it uses an open file
+by its <strong>file descriptor number</strong>. Aha! So that was the purpose of the <code>exec 7<> ${job_queue}</code> calls in the <code>job_pool_worker</code> function. It would essentially
+assign the file descriptor 7 to the fifo <code>job_queue</code> and afterwards lock it with
+<code>flock --exclusive 7</code>. Cool. This way only one process at a time can read from
+the fifo <code>job_queue</code>.</p>
+<h2 id="great-but-how-do-i-use-this">Great! But how do I use this?</h2>
+<p>It depends on your preference: you can either save this in a file (e.g.
+job_pool.sh) and source it in your bash script. Or you can simply paste it
+inside an existing bash script. Whatever tickles your fancy. I have also
+provided an example that replicates our first implementation. Just paste the
+below code under our “chad” job pool script.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+num_workers=$1
+job_pool_init $num_workers
+pcount=0
+for i in {1..10}; do
+ job_pool_run tester "$i"
+done
+
+job_pool_wait
+job_pool_shutdown
+</code></pre>
+ </div>
+
+
+<p>Hopefully this article was (or will be) helpful to you. From now on, you don’t
+ever have to write single-threaded bash scripts like normies :)</p>
+
+
+
$ ls awards/ certificates/
http://fr1nge.xyz/awards/
diff --git a/public/posts/index.html b/public/posts/index.html
index 126811e..1956dce 100644
--- a/public/posts/index.html
+++ b/public/posts/index.html
@@ -138,21 +138,25 @@
-
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
Read more →
+ href="/posts/supercharge-your-bash-scripts-with-multiprocessing/">Read more →
diff --git a/public/posts/index.xml b/public/posts/index.xml
index c138a20..a409b48 100644
--- a/public/posts/index.xml
+++ b/public/posts/index.xml
@@ -7,15 +7,336 @@
Hugo -- gohugo.ioen-usYigit Colakoglu
- Tue, 13 Apr 2021 23:26:07 +0300
+ Wed, 05 May 2021 17:08:12 +0300
- Test
- http://fr1nge.xyz/posts/test/
- Tue, 13 Apr 2021 23:26:07 +0300
+ Supercharge Your Bash Scripts with Multiprocessing
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Wed, 05 May 2021 17:08:12 +0300
- http://fr1nge.xyz/posts/test/
-
-
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+What is multiprocessing? In the simplest terms, multiprocessing is the principle of splitting the computations or jobs that a script has to do and running them on different processes. In even simpler terms however, multiprocessing is the computer science equivalent of hiring more than one worker when you are constructing a building.
+ <p>Bash is a great tool for automating tasks and improving your workflow. However,
+it is <em><strong>SLOW</strong></em>. Adding multiprocessing to the scripts you write can improve
+the performance greatly.</p>
+<h2 id="what-is-multiprocessing">What is multiprocessing?</h2>
+<p>In the simplest terms, multiprocessing is the principle of splitting the
+computations or jobs that a script has to do and running them on different
+processes. In even simpler terms however, multiprocessing is the computer
+science equivalent of hiring more than one
+worker when you are constructing a building.</p>
+<h3 id="introducing-">Introducing “&”</h3>
+<p>While implementing multiprocessing, the sign <code>&amp;</code> is going to be our greatest
+friend. It is an essential sign if you are writing bash scripts and a very
+useful tool in general when you are in the terminal. Appending <code>&amp;</code> to a command
+makes it run in the background and allows
+the rest of the script to continue running as the command runs in the
+background. One thing to keep in mind is that since it creates a fork of the
+process you ran the command on, if you change a variable that the command in the
+background uses while it runs, it will not be affected. Here is a simple
+example:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+foo="yeet"
+
+function run_in_background(){
+ sleep 0.5
+ echo "The value of foo in the function run_in_background is $foo"
+}
+
+run_in_background & # Spawn the function run_in_background in the background
+foo="YEET"
+echo "The value of foo changed to $foo."
+wait # wait for the background process to finish
+</code></pre>
+ </div>
+
+
+<p>This should output:</p>
+<pre><code>The value of foo changed to YEET.
+The value of foo in the function run_in_background is yeet
+</code></pre><p>As you can see, the value of <code>foo</code> did not change in the background process even though
+we changed it in the main function.</p>
+<h2 id="baby-steps">Baby steps…</h2>
+<p>Just like anything related to computer science, there is more than one way of
+achieving our goal. We are going to take the easier, less intimidating but less
+efficient route first before moving on to the big boy implementation. Let’s open up vim and get to scripting!
+First of all, let’s write a very simple function that allows us to easily test
+our implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+</code></pre>
+ </div>
+
+
+<p>Now that we have something to run in our processes, we need to spawn several
+of them in a controlled manner. Controlled being the keyword here. That’s because
+each system has a maximum number of processes that can be spawned (You can find
+that out with the command <code>ulimit -u</code>). In our case, we want to limit the
+processes being run to the variable <code>num_processes</code>. Here is the implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+num_processes=$1
+pcount=0
+for i in {1..10}; do
+ ((pcount=pcount%num_processes));
+ ((pcount++==0)) && wait
+ tester $i &
+done
+</code></pre>
+ </div>
+
+
+<p>This loop takes the number of processes you would like to
+spawn as an argument and runs <code>tester</code> in that many processes at a time. Go ahead and test it out!
+You might notice, however, that the processes are run in batches. And the size of
+batches is the <code>num_processes</code> variable. The reason this happens is because
+every time we spawn <code>num_processes</code> processes, we <code>wait</code> for all the processes
+to end. This implementation is not a problem in itself, there are many cases
+where you can use this implementation and it works perfectly fine. However, if
+you don’t want this to happen, we have to dump this naive approach altogether
+and improve our tool belt.</p>
+<h2 id="real-chads-use-job-pools">Real Chads use Job Pools</h2>
+<p>The solution to the bottleneck that was introduced in our previous approach lies
+in using job pools. Job pools are where jobs created by a main process get sent
+and wait to get executed. This approach solves our problems because instead of
+spawning a new process for every job and waiting for all the processes to
+finish, we instead only create a set number of processes (workers) which
+continuously pick up jobs from the job pool without waiting for any other process to finish.
+Here is the implementation that uses job pools. Brace yourselves, because it is
+kind of complicated.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+job_pool_end_of_jobs="NO_JOB_LEFT"
+job_pool_job_queue=/tmp/job_pool_job_queue_$$
+job_pool_progress=/tmp/job_pool_progress_$$
+job_pool_pool_size=-1
+job_pool_nerrors=0
+
+function job_pool_cleanup()
+{
+ rm -f ${job_pool_job_queue}
+ rm -f ${job_pool_progress}
+}
+
+function job_pool_exit_handler()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_worker()
+{
+ local id=$1
+ local job_queue=$2
+ local cmd=
+ local args=
+
+ exec 7<> ${job_queue}
+ while [[ "${cmd}" != "${job_pool_end_of_jobs}" && -e "${job_queue}" ]]; do
+ flock --exclusive 7
+ IFS=$'\v'
+ read cmd args <${job_queue}
+ set -- ${args}
+ unset IFS
+ flock --unlock 7
+ if [[ "${cmd}" == "${job_pool_end_of_jobs}" ]]; then
+ echo "${cmd}" >&7
+ else
+ { ${cmd} "$@" ; }
+ fi
+
+ done
+ exec 7>&-
+}
+
+function job_pool_stop_workers()
+{
+ echo ${job_pool_end_of_jobs} >> ${job_pool_job_queue}
+ wait
+}
+
+function job_pool_start_workers()
+{
+ local job_queue=$1
+ for ((i=0; i<${job_pool_pool_size}; i++)); do
+ job_pool_worker ${i} ${job_queue} &
+ done
+}
+
+function job_pool_init()
+{
+ local pool_size=$1
+ job_pool_pool_size=${pool_size:=1}
+ rm -rf ${job_pool_job_queue}
+ rm -rf ${job_pool_progress}
+ touch ${job_pool_progress}
+ mkfifo ${job_pool_job_queue}
+ echo 0 >${job_pool_progress} &
+ job_pool_start_workers ${job_pool_job_queue}
+}
+
+function job_pool_shutdown()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_run()
+{
+ if [[ "${job_pool_pool_size}" == "-1" ]]; then
+ job_pool_init
+ fi
+ printf "%s\v" "$@" >> ${job_pool_job_queue}
+ echo >> ${job_pool_job_queue}
+}
+
+function job_pool_wait()
+{
+ job_pool_stop_workers
+ job_pool_start_workers ${job_pool_job_queue}
+}
+</code></pre>
+ </div>
+
+
+<p>Ok… But what the actual fuck is going on here???</p>
+<h3 id="fifo-and-flock">fifo and flock</h3>
+<p>In order to understand what this code is doing, you first need to understand two
+key commands that we are using, <code>fifo</code> and <code>flock</code>. Despite their complicated
+names, they are actually quite simple. Let’s check their man pages to figure out
+their purposes, shall we?</p>
+<h4 id="man-fifo">man fifo</h4>
+<p>fifo’s man page tells us that:</p>
+<pre><code>NAME
+ fifo - first-in first-out special file, named pipe
+
+DESCRIPTION
+ A FIFO special file (a named pipe) is similar to a pipe, except that
+ it is accessed as part of the filesystem. It can be opened by multiple
+ processes for reading or writing. When processes are exchanging data
+ via the FIFO, the kernel passes all data internally without writing it
+ to the filesystem. Thus, the FIFO special file has no contents on the
+ filesystem; the filesystem entry merely serves as a reference point so
+ that processes can access the pipe using a name in the filesystem.
+</code></pre><p>So put in <strong>very</strong> simple terms, a fifo is a named pipe that allows
+communication between processes. Using a fifo allows us to loop through the jobs
+in the pool without having to delete them manually, because once we read them
+with <code>read cmd args < ${job_queue}</code>, the job is out of the pipe and the next
+read outputs the next job in the pool. However, the fact that we have multiple
+processes introduces one caveat: what if two processes access the pipe at the
+same time? They would run the same command and we don’t want that. So we resort
+to using <code>flock</code>.</p>
+<h4 id="man-flock">man flock</h4>
+<p>flock’s man page defines it as:</p>
+<pre><code> SYNOPSIS
+ flock [options] file|directory command [arguments]
+ flock [options] file|directory -c command
+ flock [options] number
+
+ DESCRIPTION
+ This utility manages flock(2) locks from within shell scripts or from
+ the command line.
+
+ The first and second of the above forms wrap the lock around the
+ execution of a command, in a manner similar to su(1) or newgrp(1).
+ They lock a specified file or directory, which is created (assuming
+ appropriate permissions) if it does not already exist. By default, if
+ the lock cannot be immediately acquired, flock waits until the lock is
+ available.
+
+ The third form uses an open file by its file descriptor number. See
+ the examples below for how that can be used.
+</code></pre><p>Cool, translated to modern English that we regular folks use, <code>flock</code> is a thin
+wrapper around the C standard function <code>flock</code> (see <code>man 2 flock</code> if you are
+interested). It is used to manage locks and has several forms. The one we are
+interested in is the third one. According to the man page, it uses an open file
+by its <strong>file descriptor number</strong>. Aha! So that was the purpose of the <code>exec 7<> ${job_queue}</code> calls in the <code>job_pool_worker</code> function. It would essentially
+assign the file descriptor 7 to the fifo <code>job_queue</code> and afterwards lock it with
+<code>flock --exclusive 7</code>. Cool. This way only one process at a time can read from
+the fifo <code>job_queue</code>.</p>
+<h2 id="great-but-how-do-i-use-this">Great! But how do I use this?</h2>
+<p>It depends on your preference: you can either save this in a file (e.g.
+job_pool.sh) and source it in your bash script. Or you can simply paste it
+inside an existing bash script. Whatever tickles your fancy. I have also
+provided an example that replicates our first implementation. Just paste the
+below code under our “chad” job pool script.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+num_workers=$1
+job_pool_init $num_workers
+pcount=0
+for i in {1..10}; do
+ job_pool_run tester "$i"
+done
+
+job_pool_wait
+job_pool_shutdown
+</code></pre>
+ </div>
+
+
+<p>Hopefully this article was (or will be) helpful to you. From now on, you don’t
+ever have to write single-threaded bash scripts like normies :)</p>
+
diff --git a/public/posts/supercharge-your-bash-scripts-with-multiprocessing/index.html b/public/posts/supercharge-your-bash-scripts-with-multiprocessing/index.html
new file mode 100644
index 0000000..c63c61d
--- /dev/null
+++ b/public/posts/supercharge-your-bash-scripts-with-multiprocessing/index.html
@@ -0,0 +1,542 @@
+ Supercharge Your Bash Scripts with Multiprocessing :: Fr1nge's Personal Blog
Bash is a great tool for automating tasks and improving your workflow. However,
+it is SLOW. Adding multiprocessing to the scripts you write can improve
+the performance greatly.
In the simplest terms, multiprocessing is the principle of splitting the
+computations or jobs that a script has to do and running them on different
+processes. In even simpler terms however, multiprocessing is the computer
+science equivalent of hiring more than one
+worker when you are constructing a building.
While implementing multiprocessing, the sign & is going to be our greatest
+friend. It is an essential sign if you are writing bash scripts and a very
+useful tool in general when you are in the terminal. Appending & to a command
+makes it run in the background and allows the rest of the script to continue
+running while the command runs. One thing to keep in mind is that since this
+creates a fork of the process you ran the command in, changing a variable that
+the background command uses while it runs will not affect it. Here is a simple
+example:
+
+
+
+
+
+
+
+foo="yeet"
+
+function run_in_background(){
+ sleep 0.5
+ echo "The value of foo in the function run_in_background is $foo"
+}
+
+run_in_background & # Spawn the function run_in_background in the background
+foo="YEET"
+echo "The value of foo changed to $foo."
+wait # wait for the background process to finish
+
+
+
+
+
This should output:
+
The value of foo changed to YEET.
+The value of foo in the function run_in_background is yeet
+
As you can see, the value of foo did not change in the background process even though
+we changed it in the main script.
Just like anything related to computer science, there is more than one way of
+achieving our goal. We are going to take the easier, less intimidating but less
+efficient route first before moving on to the big boy implementation. Let’s open up vim and get to scripting!
+First of all, let’s write a very simple function that allows us to easily test
+our implementation:
+
+
+
+
+
+
+
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+
+
+
+
Now that we have something to run in our processes, we need to spawn several
+of them in a controlled manner. Controlled being the keyword here. That’s because
+each system has a maximum number of processes that can be spawned (you can find
+yours with the command ulimit -u). In our case, we want to limit the number of
+processes being run to the variable num_processes. Here is the implementation:
+
+
+
+
+
+
+
+num_processes=$1
+pcount=0
+for i in {1..10}; do
+ ((pcount=pcount%num_processes));
+ ((pcount++==0)) && wait
+ tester "$i" &
+done
+
+
+
+
+
What this loop does is that it takes the number of processes you would like to
+spawn as an argument and runs tester in that many processes. Go ahead and test it out!
+You might notice however that the processes are run in batches, and the batch
+size is the num_processes variable. This happens because every time we spawn
+num_processes processes, we wait for all of them to end. This implementation is
+not a problem in itself; there are many cases where you can use it and it works
+perfectly fine. However, if you don’t want this to happen, we have to dump this
+naive approach altogether and improve our tool belt.
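As an aside (not used in the rest of this article), bash 4.3 and newer offer a middle ground between the batch loop and a full job pool: wait -n blocks until any single background job exits, so the pool can be topped up one job at a time. A minimal sketch, with a stand-in tester that just sleeps briefly and logs to a temp file:

```bash
#!/usr/bin/env bash
# Keep exactly num_processes jobs running at once, no batch-style waits.
# Requires bash 4.3+ for `wait -n`. tester is a stand-in for the article's.

out=$(mktemp)
tester() {
    sleep 0.1
    echo "ENDED $1" >> "$out"
}

num_processes=4
for i in {1..10}; do
    # once the cap is reached, wait for ANY one job to finish, then refill
    while (( $(jobs -rp | wc -l) >= num_processes )); do
        wait -n
    done
    tester "$i" &
done
wait    # let the remaining jobs drain
```

Unlike the batch loop above, a fresh job starts as soon as any slot frees up, so a single slow job never holds up the whole batch.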
The solution to the bottleneck that was introduced in our previous approach lies
+in using job pools. Job pools are where jobs created by a main process get sent
+and wait to get executed. This approach solves our problem because instead of
+spawning a new process for every job and waiting for all the processes to
+finish, we create only a set number of processes (workers) which
+continuously pick up jobs from the job pool without waiting for any other process to finish.
+Here is the implementation that uses job pools. Brace yourselves, because it is
+kind of complicated.
In order to understand what this code is doing, you first need to understand two
+key tools that we are using, fifo and flock. Despite their complicated
+names, they are actually quite simple. Let’s check their man pages to figure out
+their purposes, shall we?
NAME
+ fifo - first-in first-out special file, named pipe
+
+DESCRIPTION
+ A FIFO special file (a named pipe) is similar to a pipe, except that
+ it is accessed as part of the filesystem. It can be opened by multiple
+ processes for reading or writing. When processes are exchanging data
+ via the FIFO, the kernel passes all data internally without writing it
+ to the filesystem. Thus, the FIFO special file has no contents on the
+ filesystem; the filesystem entry merely serves as a reference point so
+ that processes can access the pipe using a name in the filesystem.
+
So put in very simple terms, a fifo is a named pipe that allows
+communication between processes. Using a fifo allows us to loop through the jobs
+in the pool without having to delete them manually, because once we read them
+with read cmd args < ${job_queue}, the job is out of the pipe and the next
+read outputs the next job in the pool. However, the fact that we have multiple
+processes introduces one caveat: what if two processes access the pipe at the
+same time? They would run the same command, and we don’t want that. So we resort
+to using flock.
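Before moving on to flock, the queue behaviour described above fits in a tiny standalone sketch. The fifo name is a throwaway made up for the demo; the pipe is held open on fd 3 so queued lines survive between reads:

```bash
#!/usr/bin/env bash
# Tiny demo of a fifo acting as a job queue: each read pops the next line.
# Standalone sketch; the fifo here is a throwaway, not the pool's real queue.

fifo=$(mktemp -u)
mkfifo "$fifo"

exec 3<> "$fifo"         # hold the pipe open so queued lines persist between reads
printf 'job A\n' >&3     # enqueue two jobs
printf 'job B\n' >&3
IFS= read -r first  <&3  # pops "job A" off the front of the queue
IFS= read -r second <&3  # the next read sees the next job
exec 3>&-
rm -f "$fifo"

echo "$first, then $second"
```

Note that nothing was deleted by hand: reading a line consumes it, which is exactly why the worker loop can simply keep calling read.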
SYNOPSIS
+ flock [options] file|directory command [arguments]
+ flock [options] file|directory -c command
+ flock [options] number
+
+ DESCRIPTION
+ This utility manages flock(2) locks from within shell scripts or from
+ the command line.
+
+ The first and second of the above forms wrap the lock around the
+ execution of a command, in a manner similar to su(1) or newgrp(1).
+ They lock a specified file or directory, which is created (assuming
+ appropriate permissions) if it does not already exist. By default, if
+ the lock cannot be immediately acquired, flock waits until the lock is
+ available.
+
+ The third form uses an open file by its file descriptor number. See
+ the examples below for how that can be used.
+
Cool, translated to modern English that us regular folks use, flock is a thin
+wrapper around the flock(2) system call (see man 2 flock if you are
+interested). It is used to manage locks and has several forms. The one we are
+interested in is the third one. According to the man page, it uses an open file
+by its file descriptor number. Aha! So that was the purpose of the exec 7<> ${job_queue} calls in the job_pool_worker function. It essentially
+assigns the file descriptor 7 to the fifo job_queue, which is afterwards locked with
+flock --exclusive 7. Cool. This way only one process at a time can read from
+the fifo job_queue.
It depends on your preference: you can either save this in a file (e.g.
+job_pool.sh) and source it in your bash script, or simply paste it
+into an existing bash script. Whatever tickles your fancy. I have also
+provided an example that replicates our first implementation. Just paste the
+code below under our “chad” job pool script.
+
+
+
+
+
+
+
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+num_workers=$1
+job_pool_init $num_workers
+pcount=0
+for i in {1..10}; do
+ job_pool_run tester "$i"
+done
+
+job_pool_wait
+job_pool_shutdown
+
+
+
+
+
Hopefully this article was (or will be) helpful to you. From now on, you don’t
+ever have to write single-threaded bash scripts like normies :)
+
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+
+
+
+
+
diff --git a/public/tags/bash/index.xml b/public/tags/bash/index.xml
new file mode 100644
index 0000000..37b022a
--- /dev/null
+++ b/public/tags/bash/index.xml
@@ -0,0 +1,343 @@
+
+
+
+ bash on Fr1nge's Personal Blog
+ http://fr1nge.xyz/tags/bash/
+ Recent content in bash on Fr1nge's Personal Blog
+ Hugo -- gohugo.io
+ en-us
+ Yigit Colakoglu
+ Wed, 05 May 2021 17:08:12 +0300
+
+ Supercharge Your Bash Scripts with Multiprocessing
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Wed, 05 May 2021 17:08:12 +0300
+
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+What is multiprocessing? In the simplest terms, multiprocessing is the principle of splitting the computations or jobs that a script has to do and running them on different processes. In even simpler terms however, multiprocessing is the computer science equivalent of hiring more than one worker when you are constructing a building.
+ <p>Bash is a great tool for automating tasks and improving your workflow. However,
+it is <em><strong>SLOW</strong></em>. Adding multiprocessing to the scripts you write can improve
+the performance greatly.</p>
+<h2 id="what-is-multiprocessing">What is multiprocessing?</h2>
+<p>In the simplest terms, multiprocessing is the principle of splitting the
+computations or jobs that a script has to do and running them on different
+processes. In even simpler terms however, multiprocessing is the computer
+science equivalent of hiring more than one
+worker when you are constructing a building.</p>
+<h3 id="introducing-">Introducing “&”</h3>
+<p>While implementing multiprocessing, the sign <code>&</code> is going to be our greatest
+friend. It is an essential sign if you are writing bash scripts and a very
+useful tool in general when you are in the terminal. Appending <code>&</code> to a command
+makes it run in the background and allows the rest of the script to continue
+running while the command runs. One thing to keep in mind is that since this
+creates a fork of the process you ran the command in, changing a variable that
+the background command uses while it runs will not affect it. Here is a simple
+example:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+foo="yeet"
+
+function run_in_background(){
+ sleep 0.5
+ echo "The value of foo in the function run_in_background is $foo"
+}
+
+run_in_background & # Spawn the function run_in_background in the background
+foo="YEET"
+echo "The value of foo changed to $foo."
+wait # wait for the background process to finish
+</code></pre>
+ </div>
+
+
+<p>This should output:</p>
+<pre><code>The value of foo changed to YEET.
+The value of foo in the function run_in_background is yeet
+</code></pre><p>As you can see, the value of <code>foo</code> did not change in the background process even though
+we changed it in the main script.</p>
+<h2 id="baby-steps">Baby steps…</h2>
+<p>Just like anything related to computer science, there is more than one way of
+achieving our goal. We are going to take the easier, less intimidating but less
+efficient route first before moving on to the big boy implementation. Let’s open up vim and get to scripting!
+First of all, let’s write a very simple function that allows us to easily test
+our implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+</code></pre>
+ </div>
+
+
+<p>Now that we have something to run in our processes, we need to spawn several
+of them in a controlled manner. Controlled being the keyword here. That’s because
+each system has a maximum number of processes that can be spawned (you can find
+yours with the command <code>ulimit -u</code>). In our case, we want to limit the number of
+processes being run to the variable <code>num_processes</code>. Here is the implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+num_processes=$1
+pcount=0
+for i in {1..10}; do
+ ((pcount=pcount%num_processes));
+ ((pcount++==0)) && wait
+ tester "$i" &
+done
+</code></pre>
+ </div>
+
+
+<p>What this loop does is that it takes the number of processes you would like to
+spawn as an argument and runs <code>tester</code> in that many processes. Go ahead and test it out!
+You might notice however that the processes are run in batches, and the batch
+size is the <code>num_processes</code> variable. This happens because every time we spawn
+<code>num_processes</code> processes, we <code>wait</code> for all of them to end. This implementation is
+not a problem in itself; there are many cases where you can use it and it works
+perfectly fine. However, if you don’t want this to happen, we have to dump this
+naive approach altogether and improve our tool belt.</p>
+<h2 id="real-chads-use-job-pools">Real Chads use Job Pools</h2>
+<p>The solution to the bottleneck that was introduced in our previous approach lies
+in using job pools. Job pools are where jobs created by a main process get sent
+and wait to get executed. This approach solves our problem because instead of
+spawning a new process for every job and waiting for all the processes to
+finish, we create only a set number of processes (workers) which
+continuously pick up jobs from the job pool without waiting for any other process to finish.
+Here is the implementation that uses job pools. Brace yourselves, because it is
+kind of complicated.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+job_pool_end_of_jobs="NO_JOB_LEFT"
+job_pool_job_queue=/tmp/job_pool_job_queue_$$
+job_pool_progress=/tmp/job_pool_progress_$$
+job_pool_pool_size=-1
+job_pool_nerrors=0
+
+function job_pool_cleanup()
+{
+ rm -f ${job_pool_job_queue}
+ rm -f ${job_pool_progress}
+}
+
+function job_pool_exit_handler()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_worker()
+{
+ local id=$1
+ local job_queue=$2
+ local cmd=
+ local args=
+
+ exec 7<> ${job_queue}
+ while [[ "${cmd}" != "${job_pool_end_of_jobs}" && -e "${job_queue}" ]]; do
+ flock --exclusive 7
+ IFS=$'\v'
+ read cmd args <${job_queue}
+ set -- ${args}
+ unset IFS
+ flock --unlock 7
+ if [[ "${cmd}" == "${job_pool_end_of_jobs}" ]]; then
+ echo "${cmd}" >&7
+ else
+ { ${cmd} "$@" ; }
+ fi
+
+ done
+ exec 7>&-
+}
+
+function job_pool_stop_workers()
+{
+ echo ${job_pool_end_of_jobs} >> ${job_pool_job_queue}
+ wait
+}
+
+function job_pool_start_workers()
+{
+ local job_queue=$1
+ for ((i=0; i<${job_pool_pool_size}; i++)); do
+ job_pool_worker ${i} ${job_queue} &
+ done
+}
+
+function job_pool_init()
+{
+ local pool_size=$1
+ job_pool_pool_size=${pool_size:=1}
+ rm -rf ${job_pool_job_queue}
+ rm -rf ${job_pool_progress}
+ touch ${job_pool_progress}
+ mkfifo ${job_pool_job_queue}
+ echo 0 >${job_pool_progress} &
+ job_pool_start_workers ${job_pool_job_queue}
+}
+
+function job_pool_shutdown()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_run()
+{
+ if [[ "${job_pool_pool_size}" == "-1" ]]; then
+ job_pool_init
+ fi
+ printf "%s\v" "$@" >> ${job_pool_job_queue}
+ echo >> ${job_pool_job_queue}
+}
+
+function job_pool_wait()
+{
+ job_pool_stop_workers
+ job_pool_start_workers ${job_pool_job_queue}
+}
+</code></pre>
+ </div>
+
+
+<p>Ok… But what the actual fuck is going on in here???</p>
+<h3 id="fifo-and-flock">fifo and flock</h3>
+<p>In order to understand what this code is doing, you first need to understand two
+key tools that we are using, <code>fifo</code> and <code>flock</code>. Despite their complicated
+names, they are actually quite simple. Let’s check their man pages to figure out
+their purposes, shall we?</p>
+<h4 id="man-fifo">man fifo</h4>
+<p>fifo’s man page tells us that:</p>
+<pre><code>NAME
+ fifo - first-in first-out special file, named pipe
+
+DESCRIPTION
+ A FIFO special file (a named pipe) is similar to a pipe, except that
+ it is accessed as part of the filesystem. It can be opened by multiple
+ processes for reading or writing. When processes are exchanging data
+ via the FIFO, the kernel passes all data internally without writing it
+ to the filesystem. Thus, the FIFO special file has no contents on the
+ filesystem; the filesystem entry merely serves as a reference point so
+ that processes can access the pipe using a name in the filesystem.
+</code></pre><p>So put in <strong>very</strong> simple terms, a fifo is a named pipe that allows
+communication between processes. Using a fifo allows us to loop through the jobs
+in the pool without having to delete them manually, because once we read them
+with <code>read cmd args < ${job_queue}</code>, the job is out of the pipe and the next
+read outputs the next job in the pool. However, the fact that we have multiple
+processes introduces one caveat: what if two processes access the pipe at the
+same time? They would run the same command, and we don’t want that. So we resort
+to using <code>flock</code>.</p>
+<h4 id="man-flock">man flock</h4>
+<p>flock’s man page defines it as:</p>
+<pre><code> SYNOPSIS
+ flock [options] file|directory command [arguments]
+ flock [options] file|directory -c command
+ flock [options] number
+
+ DESCRIPTION
+ This utility manages flock(2) locks from within shell scripts or from
+ the command line.
+
+ The first and second of the above forms wrap the lock around the
+ execution of a command, in a manner similar to su(1) or newgrp(1).
+ They lock a specified file or directory, which is created (assuming
+ appropriate permissions) if it does not already exist. By default, if
+ the lock cannot be immediately acquired, flock waits until the lock is
+ available.
+
+ The third form uses an open file by its file descriptor number. See
+ the examples below for how that can be used.
+</code></pre><p>Cool, translated to modern English that us regular folks use, <code>flock</code> is a thin
+wrapper around the <code>flock(2)</code> system call (see <code>man 2 flock</code> if you are
+interested). It is used to manage locks and has several forms. The one we are
+interested in is the third one. According to the man page, it uses an open file
+by its <strong>file descriptor number</strong>. Aha! So that was the purpose of the <code>exec 7<> ${job_queue}</code> calls in the <code>job_pool_worker</code> function. It essentially
+assigns the file descriptor 7 to the fifo <code>job_queue</code>, which is afterwards locked with
+<code>flock --exclusive 7</code>. Cool. This way only one process at a time can read from
+the fifo <code>job_queue</code>.</p>
+<h2 id="great-but-how-do-i-use-this">Great! But how do I use this?</h2>
+<p>It depends on your preference: you can either save this in a file (e.g.
+job_pool.sh) and source it in your bash script, or simply paste it
+into an existing bash script. Whatever tickles your fancy. I have also
+provided an example that replicates our first implementation. Just paste the
+code below under our “chad” job pool script.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+num_workers=$1
+job_pool_init $num_workers
+pcount=0
+for i in {1..10}; do
+ job_pool_run tester "$i"
+done
+
+job_pool_wait
+job_pool_shutdown
+</code></pre>
+ </div>
+
+
+<p>Hopefully this article was (or will be) helpful to you. From now on, you don’t
+ever have to write single-threaded bash scripts like normies :)</p>
+
+
+
+
+
diff --git a/public/tags/bash/page/1/index.html b/public/tags/bash/page/1/index.html
new file mode 100644
index 0000000..da96000
--- /dev/null
+++ b/public/tags/bash/page/1/index.html
@@ -0,0 +1 @@
+http://fr1nge.xyz/tags/bash/
\ No newline at end of file
diff --git a/public/tags/index.html b/public/tags/index.html
index 0d98a3a..70f1fae 100644
--- a/public/tags/index.html
+++ b/public/tags/index.html
@@ -138,6 +138,30 @@
+
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+
+
+
+
+
diff --git a/public/tags/programming/index.xml b/public/tags/programming/index.xml
new file mode 100644
index 0000000..265d536
--- /dev/null
+++ b/public/tags/programming/index.xml
@@ -0,0 +1,343 @@
+
+
+
+ programming on Fr1nge's Personal Blog
+ http://fr1nge.xyz/tags/programming/
+ Recent content in programming on Fr1nge's Personal Blog
+ Hugo -- gohugo.io
+ en-us
+ Yigit Colakoglu
+ Wed, 05 May 2021 17:08:12 +0300
+
+ Supercharge Your Bash Scripts with Multiprocessing
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Wed, 05 May 2021 17:08:12 +0300
+
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+What is multiprocessing? In the simplest terms, multiprocessing is the principle of splitting the computations or jobs that a script has to do and running them on different processes. In even simpler terms however, multiprocessing is the computer science equivalent of hiring more than one worker when you are constructing a building.
+ <p>Bash is a great tool for automating tasks and improving your workflow. However,
+it is <em><strong>SLOW</strong></em>. Adding multiprocessing to the scripts you write can improve
+the performance greatly.</p>
+<h2 id="what-is-multiprocessing">What is multiprocessing?</h2>
+<p>In the simplest terms, multiprocessing is the principle of splitting the
+computations or jobs that a script has to do and running them on different
+processes. In even simpler terms however, multiprocessing is the computer
+science equivalent of hiring more than one
+worker when you are constructing a building.</p>
+<h3 id="introducing-">Introducing “&”</h3>
+<p>While implementing multiprocessing, the sign <code>&</code> is going to be our greatest
+friend. It is an essential sign if you are writing bash scripts and a very
+useful tool in general when you are in the terminal. Appending <code>&</code> to a command
+makes it run in the background and allows the rest of the script to continue
+running while the command runs. One thing to keep in mind is that since this
+creates a fork of the process you ran the command in, changing a variable that
+the background command uses while it runs will not affect it. Here is a simple
+example:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+foo="yeet"
+
+function run_in_background(){
+ sleep 0.5
+ echo "The value of foo in the function run_in_background is $foo"
+}
+
+run_in_background & # Spawn the function run_in_background in the background
+foo="YEET"
+echo "The value of foo changed to $foo."
+wait # wait for the background process to finish
+</code></pre>
+ </div>
+
+
+<p>This should output:</p>
+<pre><code>The value of foo changed to YEET.
+The value of foo in the function run_in_background is yeet
+</code></pre><p>As you can see, the value of <code>foo</code> did not change in the background process even though
+we changed it in the main script.</p>
+<h2 id="baby-steps">Baby steps…</h2>
+<p>Just like anything related to computer science, there is more than one way of
+achieving our goal. We are going to take the easier, less intimidating but less
+efficient route first before moving on to the big boy implementation. Let’s open up vim and get to scripting!
+First of all, let’s write a very simple function that allows us to easily test
+our implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+</code></pre>
+ </div>
+
+
+<p>Now that we have something to run in our processes, we need to spawn several
+of them in a controlled manner. Controlled being the keyword here. That’s because
+each system has a maximum number of processes that can be spawned (you can find
+yours with the command <code>ulimit -u</code>). In our case, we want to limit the number of
+processes being run to the variable <code>num_processes</code>. Here is the implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+num_processes=$1
+pcount=0
+for i in {1..10}; do
+ ((pcount=pcount%num_processes));
+ ((pcount++==0)) && wait
+ tester "$i" &
+done
+</code></pre>
+ </div>
+
+
+<p>What this loop does is that it takes the number of processes you would like to
+spawn as an argument and runs <code>tester</code> in that many processes. Go ahead and test it out!
+You might notice however that the processes are run in batches, and the batch
+size is the <code>num_processes</code> variable. This happens because every time we spawn
+<code>num_processes</code> processes, we <code>wait</code> for all of them to end. This implementation is
+not a problem in itself; there are many cases where you can use it and it works
+perfectly fine. However, if you don’t want this to happen, we have to dump this
+naive approach altogether and improve our tool belt.</p>
+<h2 id="real-chads-use-job-pools">Real Chads use Job Pools</h2>
+<p>The solution to the bottleneck that was introduced in our previous approach lies
+in using job pools. Job pools are where jobs created by a main process get sent
+and wait to get executed. This approach solves our problem because instead of
+spawning a new process for every job and waiting for all the processes to
+finish, we create only a set number of processes (workers) which
+continuously pick up jobs from the job pool without waiting for any other process to finish.
+Here is the implementation that uses job pools. Brace yourselves, because it is
+kind of complicated.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+job_pool_end_of_jobs="NO_JOB_LEFT"
+job_pool_job_queue=/tmp/job_pool_job_queue_$$
+job_pool_progress=/tmp/job_pool_progress_$$
+job_pool_pool_size=-1
+job_pool_nerrors=0
+
+function job_pool_cleanup()
+{
+ rm -f ${job_pool_job_queue}
+ rm -f ${job_pool_progress}
+}
+
+function job_pool_exit_handler()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_worker()
+{
+ local id=$1
+ local job_queue=$2
+ local cmd=
+ local args=
+
+ exec 7<> ${job_queue}
+ while [[ "${cmd}" != "${job_pool_end_of_jobs}" && -e "${job_queue}" ]]; do
+ flock --exclusive 7
+ IFS=$'\v'
+ read cmd args <${job_queue}
+ set -- ${args}
+ unset IFS
+ flock --unlock 7
+ if [[ "${cmd}" == "${job_pool_end_of_jobs}" ]]; then
+ echo "${cmd}" >&7
+ else
+ { ${cmd} "$@" ; }
+ fi
+
+ done
+ exec 7>&-
+}
+
+function job_pool_stop_workers()
+{
+ echo ${job_pool_end_of_jobs} >> ${job_pool_job_queue}
+ wait
+}
+
+function job_pool_start_workers()
+{
+ local job_queue=$1
+ for ((i=0; i<${job_pool_pool_size}; i++)); do
+ job_pool_worker ${i} ${job_queue} &
+ done
+}
+
+function job_pool_init()
+{
+ local pool_size=$1
+ job_pool_pool_size=${pool_size:=1}
+ rm -rf ${job_pool_job_queue}
+ rm -rf ${job_pool_progress}
+ touch ${job_pool_progress}
+ mkfifo ${job_pool_job_queue}
+ echo 0 >${job_pool_progress} &
+ job_pool_start_workers ${job_pool_job_queue}
+}
+
+function job_pool_shutdown()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_run()
+{
+ if [[ "${job_pool_pool_size}" == "-1" ]]; then
+ job_pool_init
+ fi
+ printf "%s\v" "$@" >> ${job_pool_job_queue}
+ echo >> ${job_pool_job_queue}
+}
+
+function job_pool_wait()
+{
+ job_pool_stop_workers
+ job_pool_start_workers ${job_pool_job_queue}
+}
+</code></pre>
+ </div>
+
+
+<p>Ok… But what the actual fuck is going on in here???</p>
+<h3 id="fifo-and-flock">fifo and flock</h3>
+<p>In order to understand what this code is doing, you first need to understand two
+key tools that we are using, <code>fifo</code> and <code>flock</code>. Despite their complicated
+names, they are actually quite simple. Let’s check their man pages to figure out
+their purposes, shall we?</p>
+<h4 id="man-fifo">man fifo</h4>
+<p>fifo’s man page tells us that:</p>
+<pre><code>NAME
+ fifo - first-in first-out special file, named pipe
+
+DESCRIPTION
+ A FIFO special file (a named pipe) is similar to a pipe, except that
+ it is accessed as part of the filesystem. It can be opened by multiple
+ processes for reading or writing. When processes are exchanging data
+ via the FIFO, the kernel passes all data internally without writing it
+ to the filesystem. Thus, the FIFO special file has no contents on the
+ filesystem; the filesystem entry merely serves as a reference point so
+ that processes can access the pipe using a name in the filesystem.
+</code></pre><p>So put in <strong>very</strong> simple terms, a fifo is a named pipe that allows
+communication between processes. Using a fifo allows us to loop through the jobs
+in the pool without having to delete them manually, because once we read them
+with <code>read cmd args < ${job_queue}</code>, the job is out of the pipe and the next
+read outputs the next job in the pool. However, the fact that we have multiple
+processes introduces one caveat: what if two processes access the pipe at the
+same time? They would run the same command, and we don’t want that. So we resort
+to using <code>flock</code>.</p>
+<h4 id="man-flock">man flock</h4>
+<p>flock’s man page defines it as:</p>
+<pre><code> SYNOPSIS
+ flock [options] file|directory command [arguments]
+ flock [options] file|directory -c command
+ flock [options] number
+
+ DESCRIPTION
+ This utility manages flock(2) locks from within shell scripts or from
+ the command line.
+
+ The first and second of the above forms wrap the lock around the
+ execution of a command, in a manner similar to su(1) or newgrp(1).
+ They lock a specified file or directory, which is created (assuming
+ appropriate permissions) if it does not already exist. By default, if
+ the lock cannot be immediately acquired, flock waits until the lock is
+ available.
+
+ The third form uses an open file by its file descriptor number. See
+ the examples below for how that can be used.
+</code></pre><p>Cool. Translated into the modern English that we regular folks use, <code>flock</code> is a thin
+wrapper around the <code>flock(2)</code> system call (see <code>man 2 flock</code> if you are
+interested). It is used to manage locks and has several forms. The one we are
+interested in is the third one. According to the man page, it uses an open file
+by its <strong>file descriptor number</strong>. Aha! So that was the purpose of the <code>exec 7<> ${job_queue}</code> call in the <code>job_pool_worker</code> function: it
+assigns file descriptor 7 to the fifo <code>job_queue</code>, which is then locked with
+<code>flock --exclusive 7</code>. This way only one process at a time can read from
+the fifo <code>job_queue</code>.</p>
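Here is a small self-contained sketch of the same lock-around-a-fd pattern (my own demo, not from the pool script; fd 9 and the temp files are arbitrary choices):

```bash
#!/usr/bin/env bash
# Five background processes increment a shared counter. The flock on
# fd 9 serializes the read-modify-write, so no increment is lost.
lockfile=$(mktemp)
counter=$(mktemp)
echo 0 > "$counter"

increment() {
    exec 9<> "$lockfile"      # bind fd 9 to the lock file
    flock --exclusive 9       # only one process past this point at a time
    local n
    n=$(< "$counter")
    echo $((n + 1)) > "$counter"
    flock --unlock 9
    exec 9>&-                 # close the fd, fully releasing the lock
}

for _ in 1 2 3 4 5; do increment & done
wait
result=$(< "$counter")
echo "final count: $result"   # 5 with locking; often less without it
rm -f "$lockfile" "$counter"
```

Without the `flock` calls, two processes could read the same value of the counter and one increment would be lost; this is the same race the workers avoid on the fifo.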
+<h2 id="great-but-how-do-i-use-this">Great! But how do I use this?</h2>
+<p>It depends on your preference: you can either save this in a file (e.g.
+job_pool.sh) and source it in your bash script, or you can simply paste it
+inside an existing bash script. Whatever tickles your fancy. I have also
+provided an example that replicates our first implementation. Just paste the
+code below under our “chad” job pool script.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+num_workers=$1
+job_pool_init "$num_workers"
+for i in {1..10}; do
+    job_pool_run tester "$i"
+done
+
+job_pool_wait
+job_pool_shutdown
+</code></pre>
+ </div>
+
+
+<p>Hopefully this article was (or will be) helpful to you. From now on, you don’t
+ever have to write single-threaded bash scripts like normies :)</p>
+
+
+
+
+
diff --git a/public/tags/programming/page/1/index.html b/public/tags/programming/page/1/index.html
new file mode 100644
index 0000000..3d52525
--- /dev/null
+++ b/public/tags/programming/page/1/index.html
@@ -0,0 +1 @@
+http://fr1nge.xyz/tags/programming/
\ No newline at end of file
diff --git a/public/tags/scripting/index.html b/public/tags/scripting/index.html
new file mode 100644
index 0000000..39b7892
--- /dev/null
+++ b/public/tags/scripting/index.html
@@ -0,0 +1,216 @@
+
+
+
+
+ scripting :: Fr1nge's Personal Blog
+
+        Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+
+
+
+
+
diff --git a/public/tags/scripting/index.xml b/public/tags/scripting/index.xml
new file mode 100644
index 0000000..a018f23
--- /dev/null
+++ b/public/tags/scripting/index.xml
@@ -0,0 +1,343 @@
+
+
+
+ scripting on Fr1nge's Personal Blog
+ http://fr1nge.xyz/tags/scripting/
+ Recent content in scripting on Fr1nge's Personal Blog
+ Hugo -- gohugo.io
+ en-us
+ Yigit Colakoglu
+ Wed, 05 May 2021 17:08:12 +0300
+
+ Supercharge Your Bash Scripts with Multiprocessing
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+ Wed, 05 May 2021 17:08:12 +0300
+
+ http://fr1nge.xyz/posts/supercharge-your-bash-scripts-with-multiprocessing/
+      Bash is a great tool for automating tasks and improving your workflow. However, it is SLOW. Adding multiprocessing to the scripts you write can improve the performance greatly.
+What is multiprocessing? In the simplest terms, multiprocessing is the principle of splitting the computations or jobs that a script has to do and running them on different processes. In even simpler terms however, multiprocessing is the computer science equivalent of hiring more than one worker when you are constructing a building.
+      <p>Bash is a great tool for automating tasks and improving your workflow. However,
+it is <em><strong>SLOW</strong></em>. Adding multiprocessing to the scripts you write can improve
+the performance greatly.</p>
+<h2 id="what-is-multiprocessing">What is multiprocessing?</h2>
+<p>In the simplest terms, multiprocessing is the principle of splitting the
+computations or jobs that a script has to do and running them on different
+processes. In even simpler terms however, multiprocessing is the computer
+science equivalent of hiring more than one
+worker when you are constructing a building.</p>
+<h3 id="introducing-">Introducing “&”</h3>
+<p>While implementing multiprocessing, the <code>&</code> sign is going to be our greatest
+friend. It is an essential tool if you are writing bash scripts and very
+useful in general when you are in the terminal. Appending <code>&</code> to a command
+makes it run in the background and allows the rest of the script to continue
+running in the meantime. One thing to keep in mind: since <code>&</code> forks the
+current process, changing a variable afterwards will not affect the copy that
+the background command sees. Here is a simple
+example:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+foo="yeet"
+
+function run_in_background(){
+ sleep 0.5
+ echo "The value of foo in the function run_in_background is $foo"
+}
+
+run_in_background & # Spawn the function run_in_background in the background
+foo="YEET"
+echo "The value of foo changed to $foo."
+wait # wait for the background process to finish
+</code></pre>
+ </div>
+
+
+<p>This should output:</p>
+<pre><code>The value of foo changed to YEET.
+The value of foo in the function run_in_background is yeet
+</code></pre><p>As you can see, the value of <code>foo</code> did not change in the background process even though
+we changed it in the main process.</p>
+<h2 id="baby-steps">Baby steps…</h2>
+<p>Just like anything related to computer science, there is more than one way of
+achieving our goal. We are going to take the easier, less intimidating but less
+efficient route first before moving on to the big boy implementation. Let’s open up vim and get to scripting!
+First of all, let’s write a very simple function that allows us to easily test
+our implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+</code></pre>
+ </div>
+
+
+<p>Now that we have something to run in our processes, we need to spawn several
+of them in a controlled manner. Controlled being the keyword here. That’s because
+each system has a maximum number of processes that can be spawned (you can find
+that out with the command <code>ulimit -u</code>). In our case, we want to limit the
+number of processes running at once to the variable <code>num_processes</code>. Here is the implementation:</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+num_processes=$1
+pcount=0
+for i in {1..10}; do
+ ((pcount=pcount%num_processes));
+ ((pcount++==0)) && wait
+ tester $i &
+done
+</code></pre>
+ </div>
+
+
+<p>This loop takes the number of processes you would like to
+spawn as an argument and runs <code>tester</code> in that many processes. Go ahead and test it out!
+You might notice, however, that the processes run in batches, and the size of
+each batch is the <code>num_processes</code> variable. This happens because
+every time we spawn <code>num_processes</code> processes, we <code>wait</code> for all of them
+to end before spawning the next batch. This is not a problem in itself; there are many cases
+where this implementation works perfectly fine. However, if
+you don’t want this to happen, we have to dump this naive approach altogether
+and improve our tool belt.</p>
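As an aside that is not in the original script: if your bash is 4.3 or newer, `wait -n` offers a middle ground, returning as soon as any one background job exits so you can refill the pool without waiting for the whole batch:

```bash
#!/usr/bin/env bash
# Keep at most num_processes jobs in flight; refill as soon as ANY job
# exits instead of waiting for the whole batch. Requires bash >= 4.3.
num_processes=3
running=0
for i in {1..10}; do
    if (( running >= num_processes )); then
        wait -n                  # blocks until one job (any job) finishes
        (( running-- ))
    fi
    { sleep 0.1; } &             # stand-in for: tester "$i" &
    (( running++ ))
done
wait                             # drain whatever is still running
echo "done: $i jobs submitted"   # done: 10 jobs submitted
```

This pins the concurrency at `num_processes` without any job-pool machinery, at the cost of requiring a recent bash.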
+<h2 id="real-chads-use-job-pools">Real Chads use Job Pools</h2>
+<p>The solution to the bottleneck in our previous approach lies
+in using job pools. A job pool is where jobs created by a main process get sent
+and wait to be executed. This approach solves our problem because instead of
+spawning a new process for every job and waiting for all the processes to
+finish, we only create a set number of processes (workers) which
+continuously pick up jobs from the pool without waiting for any other process to finish.
+Here is the implementation that uses job pools. Brace yourselves, because it is
+kind of complicated.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+job_pool_end_of_jobs="NO_JOB_LEFT"
+job_pool_job_queue=/tmp/job_pool_job_queue_$$
+job_pool_progress=/tmp/job_pool_progress_$$
+job_pool_pool_size=-1
+job_pool_nerrors=0
+
+function job_pool_cleanup()
+{
+ rm -f ${job_pool_job_queue}
+ rm -f ${job_pool_progress}
+}
+
+function job_pool_exit_handler()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_worker()
+{
+ local id=$1
+ local job_queue=$2
+ local cmd=
+ local args=
+
+ exec 7<> ${job_queue}
+ while [[ "${cmd}" != "${job_pool_end_of_jobs}" && -e "${job_queue}" ]]; do
+ flock --exclusive 7
+ IFS=$'\v'
+ read cmd args <${job_queue}
+ set -- ${args}
+ unset IFS
+ flock --unlock 7
+ if [[ "${cmd}" == "${job_pool_end_of_jobs}" ]]; then
+ echo "${cmd}" >&7
+ else
+ { ${cmd} "$@" ; }
+ fi
+
+ done
+ exec 7>&-
+}
+
+function job_pool_stop_workers()
+{
+ echo ${job_pool_end_of_jobs} >> ${job_pool_job_queue}
+ wait
+}
+
+function job_pool_start_workers()
+{
+ local job_queue=$1
+ for ((i=0; i<${job_pool_pool_size}; i++)); do
+ job_pool_worker ${i} ${job_queue} &
+ done
+}
+
+function job_pool_init()
+{
+ local pool_size=$1
+ job_pool_pool_size=${pool_size:=1}
+ rm -rf ${job_pool_job_queue}
+ rm -rf ${job_pool_progress}
+ touch ${job_pool_progress}
+ mkfifo ${job_pool_job_queue}
+ echo 0 >${job_pool_progress} &
+ job_pool_start_workers ${job_pool_job_queue}
+}
+
+function job_pool_shutdown()
+{
+ job_pool_stop_workers
+ job_pool_cleanup
+}
+
+function job_pool_run()
+{
+ if [[ "${job_pool_pool_size}" == "-1" ]]; then
+ job_pool_init
+ fi
+ printf "%s\v" "$@" >> ${job_pool_job_queue}
+ echo >> ${job_pool_job_queue}
+}
+
+function job_pool_wait()
+{
+ job_pool_stop_workers
+ job_pool_start_workers ${job_pool_job_queue}
+}
+</code></pre>
+ </div>
+
+
+<p>Ok… But what the actual fuck is going on here???</p>
+<h3 id="fifo-and-flock">fifo and flock</h3>
+<p>In order to understand what this code is doing, you first need to understand two
+key tools that we are using: the <code>fifo</code> and <code>flock</code>. Despite their complicated
+names, they are actually quite simple. Let’s check their man pages to figure out
+their purposes, shall we?</p>
+<h4 id="man-fifo">man fifo</h4>
+<p>fifo’s man page tells us that:</p>
+<pre><code>NAME
+ fifo - first-in first-out special file, named pipe
+
+DESCRIPTION
+ A FIFO special file (a named pipe) is similar to a pipe, except that
+ it is accessed as part of the filesystem. It can be opened by multiple
+ processes for reading or writing. When processes are exchanging data
+ via the FIFO, the kernel passes all data internally without writing it
+ to the filesystem. Thus, the FIFO special file has no contents on the
+ filesystem; the filesystem entry merely serves as a reference point so
+ that processes can access the pipe using a name in the filesystem.
+</code></pre><p>So, put in <strong>very</strong> simple terms, a fifo is a named pipe that allows
+communication between processes. Using a fifo lets us loop through the jobs
+in the pool without having to delete them manually, because once we read a job
+with <code>read cmd args < ${job_queue}</code>, it is out of the pipe and the next
+read outputs the next job in the pool. However, the fact that we have multiple
+processes introduces one caveat: what if two processes access the pipe at the
+same time? They could end up running the same command, and we don’t want that. So we resort
+to using <code>flock</code>.</p>
+<h4 id="man-flock">man flock</h4>
+<p>flock’s man page defines it as:</p>
+<pre><code> SYNOPSIS
+ flock [options] file|directory command [arguments]
+ flock [options] file|directory -c command
+ flock [options] number
+
+ DESCRIPTION
+ This utility manages flock(2) locks from within shell scripts or from
+ the command line.
+
+ The first and second of the above forms wrap the lock around the
+ execution of a command, in a manner similar to su(1) or newgrp(1).
+ They lock a specified file or directory, which is created (assuming
+ appropriate permissions) if it does not already exist. By default, if
+ the lock cannot be immediately acquired, flock waits until the lock is
+ available.
+
+ The third form uses an open file by its file descriptor number. See
+ the examples below for how that can be used.
+</code></pre><p>Cool. Translated into the modern English that we regular folks use, <code>flock</code> is a thin
+wrapper around the <code>flock(2)</code> system call (see <code>man 2 flock</code> if you are
+interested). It is used to manage locks and has several forms. The one we are
+interested in is the third one. According to the man page, it uses an open file
+by its <strong>file descriptor number</strong>. Aha! So that was the purpose of the <code>exec 7<> ${job_queue}</code> call in the <code>job_pool_worker</code> function: it
+assigns file descriptor 7 to the fifo <code>job_queue</code>, which is then locked with
+<code>flock --exclusive 7</code>. This way only one process at a time can read from
+the fifo <code>job_queue</code>.</p>
+<h2 id="great-but-how-do-i-use-this">Great! But how do I use this?</h2>
+<p>It depends on your preference: you can either save this in a file (e.g.
+job_pool.sh) and source it in your bash script, or you can simply paste it
+inside an existing bash script. Whatever tickles your fancy. I have also
+provided an example that replicates our first implementation. Just paste the
+code below under our “chad” job pool script.</p>
+
+
+
+ <div class="collapsable-code">
+ <input id="1" type="checkbox" />
+ <label for="1">
+ <span class="collapsable-code__language">bash</span>
+
+ <span class="collapsable-code__toggle" data-label-expand="Show" data-label-collapse="Hide"></span>
+ </label>
+ <pre class="language-bash" ><code>
+function tester(){
+ # A function that takes an int as a parameter and sleeps
+ echo "$1"
+ sleep "$1"
+ echo "ENDED $1"
+}
+
+num_workers=$1
+job_pool_init "$num_workers"
+for i in {1..10}; do
+    job_pool_run tester "$i"
+done
+
+job_pool_wait
+job_pool_shutdown
+</code></pre>
+ </div>
+
+
+<p>Hopefully this article was (or will be) helpful to you. From now on, you don’t
+ever have to write single-threaded bash scripts like normies :)</p>
+
+
+
+
+
diff --git a/public/tags/scripting/page/1/index.html b/public/tags/scripting/page/1/index.html
new file mode 100644
index 0000000..77c620c
--- /dev/null
+++ b/public/tags/scripting/page/1/index.html
@@ -0,0 +1 @@
+http://fr1nge.xyz/tags/scripting/
\ No newline at end of file
diff --git a/static/images/glasses.png b/static/images/glasses.png
new file mode 100644
index 0000000..56eee60
Binary files /dev/null and b/static/images/glasses.png differ
diff --git a/static/images/supercharge-your-bash-scripts-with-multiprocessing.png b/static/images/supercharge-your-bash-scripts-with-multiprocessing.png
new file mode 100644
index 0000000..55cc338
Binary files /dev/null and b/static/images/supercharge-your-bash-scripts-with-multiprocessing.png differ