[ITP: Understanding Networks] Definitions

Evil bit - a fictional IPv4 packet header field proposed in an April Fools’ Request for Comments (RFC) published in 2003. The RFC recommended that the last unused bit in the IPv4 packet header, the “Reserved bit,” be used to indicate whether a packet had been sent with malicious intent, thus simplifying internet security.

The RFC states that benign packets will have this bit set to 0; those that are used for an attack will have the bit set to 1. Firewalls must drop all inbound packets that have the evil bit set, and packets with the evil bit off must not be dropped. The RFC also suggests how to set the evil bit in different scenarios:

  • Attack applications may use a suitable API to request that the bit be set.

  • Packet fragments that are dangerous in themselves must have the evil bit set.

    • If a packet with the evil bit set is fragmented by a router and the fragments themselves are not dangerous, the evil bit can be cleared in the fragments but must be turned back on in the reassembled packet.

  • Applications that hand-craft their own packets that are part of an attack must set the evil bit.

  • Hosts inside the firewall must not set the evil bit on any packets. (RFC 3514)
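
As an illustration of the firewall rule above, here is a minimal Python sketch (not part of the RFC; the function names are made up for this example) that reads the Reserved bit, the most significant bit of the 16-bit Flags/Fragment Offset word at byte offset 6 of a raw IPv4 header, and drops any packet that has it set:

```python
import struct

EVIL_BIT_MASK = 0x8000  # most significant bit of the Flags / Fragment Offset word

def is_evil(ipv4_header: bytes) -> bool:
    """Return True if the RFC 3514 evil bit (the Reserved bit) is set."""
    # Bytes 6-7 of the IPv4 header hold the 3-bit Flags field followed
    # by the 13-bit Fragment Offset; the Reserved bit is the top bit.
    (flags_and_offset,) = struct.unpack_from("!H", ipv4_header, 6)
    return bool(flags_and_offset & EVIL_BIT_MASK)

def firewall_filter(inbound_packets):
    """Toy firewall rule: drop every inbound packet whose evil bit is set."""
    return [pkt for pkt in inbound_packets if not is_evil(pkt)]
```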

Packet header - Data sent over computer networks, such as the internet, is divided into packets. A packet header is a “label” which provides information about the packet’s contents, origin, and destination. Network packets include a header so that the device that receives them knows where the packets come from, what they are for, and how to process them.

Packets actually have more than one header, and each header is used in a different part of the networking process. Packet headers are attached by certain types of networking protocols. At a minimum, most packets that traverse the internet will include a Transmission Control Protocol (TCP) header and an Internet Protocol (IP) header.

For example, the IPv4 packet header consists of 20 bytes of data (not counting optional fields) that are divided into the following fields (a parsing sketch follows the list):

  • Version (4 bits) and Internet Header Length (4 bits)

  • Type of Service (8 bits)

  • Total Length (16 bits)

  • Identification (16 bits)

  • Flags (3 bits, including the Reserved bit) and Fragment Offset (13 bits)

  • Time to Live (8 bits)

  • Protocol (8 bits)

  • Header Checksum (16 bits)

  • Source IP Address (32 bits)

  • Destination IP Address (32 bits)
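
A rough Python sketch of how those 20 bytes map onto the fields, using the standard struct module; the function name and dictionary keys are chosen for this example and are not taken from the resources below:

```python
import struct

def parse_ipv4_header(header: bytes) -> dict:
    """Unpack the 20 fixed bytes of an IPv4 header into named fields."""
    (ver_ihl, tos, total_length, identification, flags_fragment,
     ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", header[:20])
    return {
        "version": ver_ihl >> 4,
        "internet_header_length": ver_ihl & 0x0F,   # measured in 32-bit words
        "type_of_service": tos,
        "total_length": total_length,
        "identification": identification,
        "flags": flags_fragment >> 13,              # Reserved, DF, MF bits
        "fragment_offset": flags_fragment & 0x1FFF,
        "time_to_live": ttl,
        "protocol": protocol,
        "header_checksum": checksum,
        "source_address": ".".join(str(b) for b in src),
        "destination_address": ".".join(str(b) for b in dst),
    }
```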

Checksum - a small block of data derived from another block of digital data for the purpose of detecting errors that may have been introduced during its transmission or storage. Checksums are often used to verify data integrity but are not relied upon to verify data authenticity. A good checksum algorithm usually outputs a significantly different value for even small changes made to the input. If the computed checksum for the current data input matches a stored value of a previously computed checksum, there is a very high probability that the data has not been accidentally altered or corrupted.
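
To illustrate how sensitive a good checksum is to small changes, here is a quick Python example (the payload bytes are arbitrary) that hashes two inputs differing by a single character:

```python
import hashlib

original = b"packet payload"
altered = b"packet payloaf"  # one character changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests share no obvious resemblance, which is why comparing a
# stored checksum against a freshly computed one reliably flags corruption.
```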

An inconsistent checksum number can be caused by an interruption in the network connection, storage or space issues, a corrupted disk or file, or a third party interfering with data transfer.

Programmers can use cryptographic hash functions like SHA-0, SHA-1, SHA-2, and MD5 to generate checksum values. Common network protocols that include a checksum field are TCP and UDP. As an example, the UDP checksum algorithm works like this (see the sketch after the list):

  1. Divide the data into 16-bit chunks

  2. Add the chunks together

  3. Add any carry that is generated back into the sum

  4. Take the 1’s complement of the sum

  5. Place that value in the checksum field of the UDP segment
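
Here is a minimal Python sketch of that one’s-complement summation. The function name is made up, and the real UDP checksum is also computed over a pseudo-header (source and destination IP addresses, protocol number, and UDP length) in addition to the UDP header and data, which this sketch leaves out:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum over 16-bit chunks, as used by the UDP checksum."""
    if len(data) % 2:                                # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # steps 1-2: add the 16-bit chunks
        total = (total & 0xFFFF) + (total >> 16)     # step 3: fold any carry back into the sum
    return ~total & 0xFFFF                           # step 4: 1's complement of the sum

# Step 5: the returned value is what goes in the checksum field of the UDP segment.
print(hex(internet_checksum(b"\x12\x34\x56\x78")))
```

A receiver runs the same calculation over the segment with the transmitted checksum included; if nothing changed in transit, the result comes out to zero.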

Resources

https://en.wikipedia.org/wiki/Evil_bit

https://www.cloudflare.com/learning/network-layer/what-is-a-packet/

https://erg.abdn.ac.uk/users/gorry/course/inet-pages/ip-packet.html

https://en.wikipedia.org/wiki/Checksum

https://www.techtarget.com/searchsecurity/definition/checksum

https://www.educative.io/answers/how-does-checksum-work