A value of 0 or 1 forms a "bit"; for example, when the value 0 is applied, that '0' is called a bit. A bit is the smallest unit of data in a computer.
Computers generally process instructions in multiples of bits known as "bytes". A byte is a group of bits; technically, a byte is equal to eight bits.
A bit represents a single value (either 0 or 1), and eight such bits together represent a byte. Now, a question arises in my mind: what does a byte represent? A byte can represent a number (an integer) or a character (a letter, a sign, etc.).
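As a small illustration (a Python sketch, not tied to any particular machine), the very same 8-bit pattern can be read either as an integer or as a character:

```python
# A single byte: eight bits, written here as a binary literal.
pattern = 0b01000001           # the bit pattern 01000001

print(pattern)                 # read as an integer  -> 65
print(chr(pattern))            # read as a character -> 'A'
print(format(pattern, '08b'))  # show all eight bits -> '01000001'
```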
The reason for explaining all of this is that computers work on binary codes. Generally, all computers, mobiles, supercomputers, and electronic gadgets work on binary codes.
These units can be scaled up and written as KB, MB, GB, TB, and so on.
It is important to understand that in this notation a small 'b' stands for bit and a capital 'B' stands for byte. This matters because mixing the two throws the size off by a factor of eight (8x), since 1 byte equals 8 bits; whether you write k or K, on the other hand, makes no difference.
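For example (a minimal Python sketch of that factor of eight), a connection advertised as 100 Mb/s (megabits per second) delivers only 12.5 MB/s (megabytes per second):

```python
BITS_PER_BYTE = 8

# A link speed advertised in megabits per second (small 'b')...
speed_megabits = 100

# ...is eight times smaller when expressed in megabytes per second (capital 'B').
speed_megabytes = speed_megabits / BITS_PER_BYTE

print(speed_megabytes)  # 12.5
```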
1 KB = 1000 bytes = 8000 bits
1 MB = 1000 KB
1 GB = 1000 MB
1 TB = 1000 GB (TB means terabytes)
This is the basic SI convention used here. Earlier, 1 KB was commonly taken to mean 1024 bytes, 1 MB to mean 1024 KB, and so on, but those 1024-based units now have their own binary prefixes (KiB, MiB, and so on), so using KB and MB that way is considered outdated.
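A quick Python sketch of both conventions (the numbers below are just the definitions themselves: steps of 1000 for the SI prefixes and steps of 1024 for the binary IEC prefixes):

```python
# SI (decimal) prefixes: each step is a factor of 1000.
KB = 1000          # kilobyte
MB = 1000 * KB     # megabyte
GB = 1000 * MB     # gigabyte
TB = 1000 * GB     # terabyte

# IEC (binary) prefixes: each step is a factor of 1024.
KiB = 1024         # kibibyte
MiB = 1024 * KiB   # mebibyte

print(KB * 8)      # 1 KB = 8000 bits
print(GB)          # 1 GB = 1000000000 bytes
print(MiB - MB)    # gap between 1 MiB and 1 MB: 48576 bytes
```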