📅 June 3, 2025

Data is simply a syllable that pieces of hardware use to speak to one another. Why does this data need converting? Because the Tower of Babel is not real: no hardware speaks a single language (read: format). Conversion is translation. Compatibility. Survival. Data is not always native to the places it lands.
Data conversion, at its core, is about changing data from one format, type, or structure into another so that it can be understood, processed, or stored differently. Think of it like moving water between containers. The shape of the container changes, but it's still water, just restructured to fit where it needs to go. Sometimes data changes only its outer shell: the content stays the same while the packaging changes, so a different system can understand or display it. Other times, at the programming level, data undergoes a deeper metamorphosis: a number becomes a string, a string becomes a boolean, and so on.
Binary is the most basic language of computers. It only has two states: 1 & 0. These are simply denotations for possible electric states — high and low. Each digit (called a bit) represents a power of 2.
Binary means base-2, so computers use the counting system of base-2, just as we use the counting system of base-10 because we have 10 digits.
Base-10:
253 = 2×100 + 5×10 + 3×1 = 2×10² + 5×10¹ + 3×10⁰
Base-2:
1101 = 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 8 + 4 + 0 + 1 = 13
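If you want to check these expansions yourself, here is a minimal Python sketch (the helper name is my own) that sums the place values of 253 in base-10 and 1101 in base-2:

```python
# Evaluate a list of digits in a given base by summing digit * base^position.
def place_value_sum(digits, base):
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * base ** position  # each digit weighted by its place value
    return total

print(place_value_sum([2, 5, 3], 10))    # 253
print(place_value_sum([1, 1, 0, 1], 2))  # 13
print(int("1101", 2))                    # 13, Python's built-in base-2 parser agrees
```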
With just one bit, you can represent 2 possible values: 0 or 1. Add one more bit, and suddenly you have 4 values: 00, 01, 10, and 11. With three bits, the number jumps to 8 values, with four bits it doubles again to 16, and so on. In general, with n bits, you can represent 2ⁿ unique values.
Bits rarely live alone. They group together in chunks called bytes — usually 8 bits per byte. One byte can represent 2⁸ = 256 different values, enough to cover all standard ASCII characters, numbers, and more. This exponential growth forms the backbone of modern computing, from the smallest data pieces to the largest memory systems.
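A quick way to watch that doubling happen, sketched in Python:

```python
# Number of distinct values representable with n bits is 2 ** n.
for n in range(1, 9):
    print(f"{n} bit(s): {2 ** n} values")
# 8 bits (one byte) gives 2 ** 8 = 256 values.
```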
Unfortunately, our ancestors didn't have one digit on each hand, so they never cared to think in binary. We, decimal thinkers, must have ways to translate into machine-speak (binary), and vice versa. How does a computer understand our decimal language? It doesn't. We must provide the translation. Every number we write must be converted into its binary equivalent so the machine can store and operate on it.
Decimal 156 is 10011100 in binary. How?
We start from the highest power of 2 that fits into 156 and work our way down:
2⁷ = 128 → fits! Subtract 128 from 156, leaving 28.
2⁶ = 64 → too big for 28, skip it.
2⁵ = 32 → still too big for 28, skip.
2⁴ = 16 → fits! 28 - 16 = 12.
2³ = 8 → fits! 12 - 8 = 4.
2² = 4 → fits! 4 - 4 = 0.
2¹ = 2 → doesn't fit, skip.
2⁰ = 1 → doesn't fit, skip.
Putting it together:
156 = 128 + 16 + 8 + 4 = 10011100
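The same walk down the powers of 2 can be written as a small Python sketch (the function name is my own):

```python
# Convert a decimal number to binary by subtracting powers of 2, highest first.
def decimal_to_binary(n, width=8):
    bits = ""
    for power in range(width - 1, -1, -1):  # 2^7 down to 2^0
        value = 2 ** power
        if value <= n:                      # the power "fits": record a 1 and subtract
            bits += "1"
            n -= value
        else:                               # too big: record a 0 and move on
            bits += "0"
    return bits

print(decimal_to_binary(156))  # 10011100
print(bin(156))                # 0b10011100, the built-in agrees
```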
But what if we want a more human-readable form of binary? What if we don’t want to count eight 1s and 0s every time we look at something?
That’s where hexadecimal comes in.
Hexadecimal is the shorthand language for binary. It's base-16, meaning it uses 16 different symbols:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
where A = 10, B = 11, C = 12, ..., F = 15.
Why base-16? Because 1 hex digit = 4 binary bits. You can take a long binary string, split it into chunks of 4 bits, and replace each with a single hex digit.
For example, take this binary number: 10101100
Split it into two 4-bit chunks: 1010 and 1100.
Convert each chunk:
1010 = (1×2³) + (0×2²) + (1×2¹) + (0×2⁰) = 8 + 0 + 2 + 0 = 10 → A
1100 = (1×2³) + (1×2²) + (0×2¹) + (0×2⁰) = 8 + 4 + 0 + 0 = 12 → C
So, 10101100 is AC in hexadecimal.
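Here is a minimal sketch of that chunking, assuming the input length is a multiple of 4:

```python
# Convert a binary string to hexadecimal, one 4-bit chunk at a time.
def binary_to_hex(bits):
    hex_digits = "0123456789ABCDEF"
    result = ""
    for i in range(0, len(bits), 4):   # walk the string 4 bits at a time
        chunk = bits[i:i + 4]          # e.g. "1010"
        value = int(chunk, 2)          # 4 bits -> a number from 0 to 15
        result += hex_digits[value]    # that number -> one hex symbol
    return result

print(binary_to_hex("10101100"))  # AC
```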
Hexadecimal isn't merely a shorthand. Let's take a look at a common place where you can see these labels.
When a computer runs a program, it stores data in its memory. Each byte in memory has its own unique number, called an address, so the computer can find and use the data it needs.
Every piece of data in a computer’s memory has an address. These addresses are often huge numbers when written in decimal, but in hex, they become much cleaner and easier to read.
Let's suppose that a memory address is this: 0x1A3F (the 0x prefix signals that the number is in hexadecimal).
If we apply our same method of conversion:
0x1 = 0001
0xA = 1010
0x3 = 0011
0xF = 1111
So, the memory address looks like this in binary: 0001 1010 0011 1111
Each hex digit neatly represents 4 bits, making conversion and understanding much simpler.
Computers use these addresses to access data quickly. Hexadecimal makes it easier to spot patterns and boundaries in memory addresses — something critical when debugging or working close to the hardware.
In short, memory addresses are simply labels for locations in a computer’s memory, and those labels are commonly written in hexadecimal format because it’s a neat, compact way to represent the underlying binary addresses.
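To see a whole address unpacked, here is a short sketch (0x1A3F is just the example address from above):

```python
# Expand a hexadecimal memory address into its 4-bit groups.
address = 0x1A3F
binary = format(address, "016b")  # 16 bits, zero-padded
groups = " ".join(binary[i:i + 4] for i in range(0, 16, 4))

print(hex(address))  # 0x1a3f
print(groups)        # 0001 1010 0011 1111
```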
We’ve traced the metamorphosis of data — from raw binary to structured decimal, then into the refined shorthand of hexadecimal.
Now, it takes on its most poetic form: the string.
Legend has it there was once a protest among the metallic classes demanding their binary voices be heard. They refused to be bound by the cold, inhuman language of 1s and 0s.
The academic elites, however, did not so easily grant the public the privilege of speaking to machines in human words — Homo Sapiens.
The roads and buildings were taken and seized — party-line debates, philosophical crises, and whispered questions of whether this, perhaps, was a moral awakening.
A string is simply a sequence of characters - letters, numbers, punctuation marks, spaces, even symbols. It's how text is represented and stored in programming.
We simply understand it as sentences: be it a name, a paragraph, or maybe your favorite poem.
I hide myself within my flower,
That fading from your Vase,
You, unsuspecting, feel for me -
Almost a loneliness.
In a computer, each of those characters is encoded as a number, and those numbers are stored in memory.
The string Hello is made up of five characters: 'H', 'e', 'l', 'l', 'o'. Each of these characters is stored as a number, according to an encoding standard like ASCII or Unicode.
So, Hello in its binary form is 01001000 01100101 01101100 01101100 01101111, and in ASCII: [72, 101, 108, 108, 111].
So the string Hello is stored in memory as a sequence of bytes.
This is how computers interpret and store what we see as readable text.
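You can watch that translation happen with a few lines of Python:

```python
# The string "Hello" as ASCII code points and as binary bytes.
text = "Hello"
codes = [ord(ch) for ch in text]                   # [72, 101, 108, 108, 111]
bits = " ".join(format(code, "08b") for code in codes)

print(codes)                 # [72, 101, 108, 108, 111]
print(bits)                  # 01001000 01100101 01101100 01101100 01101111
print(text.encode("ascii"))  # b'Hello', the raw bytes that memory actually holds
```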
So next time you say 'Hello' to your computer, remember — to it, you’re speaking in byte-sized numbers.
To turn human-readable characters into computer-readable numbers (remember, numbers are the only thing they understand!), we need a system. A formalism between machines and humans: the metalization of our flesh.
Enter ASCII, the American Standard Code for Information Interchange. Created in the early 1960s, it became the original common tongue between humans and machines.
ASCII assigns a unique number to each character. Just as in Morse code where "dot-dash" patterns represent letters, ASCII translates characters into numerical codes — which computers then convert into binary.
Before ASCII existed, different manufacturers used different encodings, causing chaos when systems tried to talk to each other. Don't get it twisted. The Tower of Babel is still not real.
ASCII gave everyone a shared alphabet of 128 characters. This is enough to cover the English alphabet, digits, punctuation, and control characters (like newline or tab).
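Python exposes this table directly through ord() and chr(), which map characters to their ASCII codes and back:

```python
# A character's ASCII code, and the round trip back to the character.
print(ord("A"))    # 65
print(chr(65))     # 'A'
print(ord("\n"))   # 10, even control characters like newline have a code
print([chr(c) for c in range(65, 71)])  # ['A', 'B', 'C', 'D', 'E', 'F']
```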
So far, we have witnessed a metamorphosis from a spark to the curves of human language -- from binary to hex, to string, to ASCII -- the first bridge between two worlds, man and machine; flesh and silicon.
Just as we understand another person's language only if we develop a thought-form in their domain -- their language -- we understand computational processes only if we abstract and think like machines, and vice versa. Programming is, in its essence, data conversion.
And maybe that's all communication ever was -- conversion.