"Computers are like bicycles for our minds." - Steve Jobs
Hey, TechFlixers!
Ever wondered why traffic lights use 🟥🟨🟩? And what does that have to do with binary coding? Stick around, and we'll enlighten you by the end of this edition.
🎬 The Premiere
Fasten your seatbelts and sharpen your tech instincts! Today, we're diving headfirst into a riveting journey where we'll explore:
✅ The enchanting world of HDR technology
✅ The intricacies of designing an effective LRU Cache
✅ The revolutionary work culture of a manager-less company
✅ The art and science behind Github Co-pilot's Prompt Engineering
🔦 Spotlight
HDR Explained
Do you know why your friend's beach video looks so vibrant and lifelike? It's not just their new smartphone; it's the magic of HDR (High Dynamic Range), that option you've seen in your camera apps but probably never looked into. Let me explain.
First, examine the picture below.
You know how sometimes the sky looks too bright or your face and shirt look too dark? That's because your camera has trouble capturing both really bright and really dark things at the same time. This range from dark to bright is what we call the dynamic range.
Now, HDR, or High Dynamic Range, is like giving superpowers to your camera. It helps it capture or show more shades of colors, from very dark to very bright, all at the same time. With HDR, you can see all the details clearly and beautifully, like the clouds in the sky and your face. We can achieve this by taking multiple pictures at different exposure levels and combining them all into one.
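To make the multi-exposure idea concrete, here's a toy sketch in Python/NumPy of blending bracketed shots, weighting each pixel by how well exposed it is. The function name and the Gaussian weighting are our own illustration, not Meta's pipeline; real HDR pipelines also align frames, handle ghosting, and tone-map the result.

```python
import numpy as np

def merge_exposures(images, sigma=0.2):
    """Toy exposure fusion: blend bracketed shots, favoring well-exposed pixels.

    images: list of HxWx3 float arrays in [0, 1], the same scene at different exposures.
    This is a simplified illustration of the idea, not a production HDR pipeline.
    """
    stack = np.stack(images)                       # shape: (N, H, W, 3)
    # Weight each pixel by how close it is to mid-gray (0.5): well-exposed pixels win.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across the N exposures
    return (weights * stack).sum(axis=0)           # weighted blend, still HxWx3 in [0, 1]

# Example: fuse an underexposed, normal, and overexposed shot (random stand-ins here).
dark, normal, bright = (np.random.rand(480, 640, 3) for _ in range(3))
fused = merge_exposures([dark, normal, bright])
```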
HDR is great for quality, and Meta wanted to support it in Instagram Reels as more and more people uploaded HDR content. But it's not as straightforward as flipping a switch: different device models use different HDR specs, the result has to stay backward compatible with devices that don't support HDR, and consistency has to hold across devices, across client- and server-side processing, and in scenarios where text and other overlay objects sit on top of the content.
So, how did Meta get it working? Find out here: Bringing HDR Video To Reels
☕️ Tea Time
Prompt Engineering Tips
Prompt Engineering is the science (and art) of communicating with a Large Language Model (LLM). Think of LLMs as helpers trying to finish a story or conversation you started. Building this intuition of them being a “document completing tool” rather than a “chatbot” is important.
Imagine you're drafting a script for a play between a customer and a support agent. You set the stage like this:
[Company Name] IT Support Transcript:
Here's a conversation between a customer support agent and a user of XYZ product.
Agent: Hi, How may I help you?
Customer:
Now, your LLM has a clear view of the story it's supposed to tell. It’s technically just completing the script but will appear to the end user as an efficient “customer support bot.”
But the story will be better if the AI knows more about the subject. Think about connecting it to a database of information and giving additional context to the prompt.
Let’s take an example of how prompt engineering is implemented into GitHub Co-pilot. GitHub Co-pilot’s task is to complete the code. To do it effectively, it does the following:
It gathers as much context as required: all the metadata the code editor has, like the programming language, configs, and the other open tabs, which likely relate to the code the dev is trying to complete in the active tab.
It then filters the context down to the most relevant data, a process called Snippeting. LLMs work within a context window; think of it as the maximum memory the model can retain at once, and it includes your prompt. Hence, the prompt must be as concise and efficient as possible, so the LLM has more room to flesh out its response. (This may matter less once we get models with huge context windows, but it can still affect the quality of the response.)
Once we have the filtered context, we dress it up in a format that is efficient and easy for the model to consume. This is like rewriting all the gathered data points into a consistent, usable format.
The last step in making the prompt is deciding what's most important. We need to put all the necessary info in the prompt, in order of priority. After the prompt is set, the AI will finish the document.
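To tie the four steps together, here's a rough sketch of what such a prompt-assembly pipeline could look like in Python. The function names, the word-overlap relevance score, and the token budget are illustrative assumptions on our part, not GitHub Co-pilot's actual code.

```python
def score_relevance(snippet: str, active_code: str) -> float:
    """Toy relevance score: fraction of the snippet's words that also appear
    in the code the developer is currently editing."""
    snippet_words = set(snippet.split())
    active_words = set(active_code.split())
    return len(snippet_words & active_words) / max(len(snippet_words), 1)

def build_prompt(active_code: str, open_tabs: list[str], language: str,
                 token_budget: int = 2000) -> str:
    # 1. Gather context: editor metadata plus code from the other open tabs.
    candidates = [f"# Language: {language}"] + open_tabs

    # 2. Snippeting: keep only the most relevant pieces.
    ranked = sorted(candidates, key=lambda s: score_relevance(s, active_code),
                    reverse=True)

    # 3 & 4. Format the snippets and assemble them by priority, staying under the
    # budget (tokens crudely approximated here as whitespace-separated words).
    parts, used = [], 0
    for snippet in ranked:
        cost = len(snippet.split())
        if used + cost > token_budget:
            break
        parts.append(f"# Context:\n{snippet}")
        used += cost

    # The code being completed always goes last, closest to where the LLM writes.
    return "\n\n".join(parts + [active_code])

# Example: assemble a prompt for a half-written function, with one other tab open.
prompt = build_prompt(active_code="def total_price(items):",
                      open_tabs=["class Item:\n    price: float"],
                      language="python")
```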
Most systems that use LLMs at the backend may follow a version of this approach to construct the most efficient prompt possible.
🎥 Behind the Scenes
Vertica, The Boss-less Company
Imagine a playground where every kid is in charge of their own games. Welcome to Vertica, a company redefining the traditional workplace by having no appointed managers.
Here’s how they navigate it.
Everyone's a Captain: Every employee is their own boss at Vertica.
Trust is the Glue: Trust plays an even bigger role in a managerless workplace. Everyone trusts each other to do their best and make good decisions.
Decisions, Decisions: Vertica believes in collective decision-making. Big or small, the team makes every decision together.
In a world where traditional workplaces and hierarchies are the norms, Vertica stands out with its unique managerless structure.
To explore further, check out this Talk by Helle Markmann, a project manager at Vertica.
🚀 Power Up
A real-world interview question, straight from the hallways of Rubrik.
Design an LRU (Least Recently Used) Cache. Implement a data structure that can efficiently store a limited number of items and quickly identify and evict the least recently used item when the cache reaches its capacity.
Let’s build an intuition on the design.
What's an LRU Cache?
Imagine a magic box (that's your cache) where you can keep your favorite toys (that's your data). But the box has limited space, so when it gets full and you need to fit a new toy, you remove the toy you haven't played with for the longest time. That's the idea of an LRU Cache!
How can we design it?
You need two key tools: a HashMap and a Doubly-Linked List. The HashMap helps you find any item fast, and the Doubly-Linked List lets you easily add and remove items (data).
How to implement it?
Every time you play with a toy (use an item), move it to the top of your Doubly-Linked List. This way, the least recently used toy always ends up at the bottom, and when the cache is full, it's the one to get kicked out.
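Here's a minimal Python sketch of that design: a dict plays the role of the HashMap, and a hand-rolled doubly-linked list with dummy head and tail nodes keeps items in recency order. (In an interview you could also mention Python's OrderedDict or functools.lru_cache as ready-made shortcuts.)

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}                    # key -> Node, the HashMap
        self.head = Node(None, None)     # dummy head: most recently used side
        self.tail = Node(None, None)     # dummy tail: least recently used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_to_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add_to_front(node)         # "playing with the toy" moves it to the top
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_to_front(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev         # the toy untouched for the longest
            self._remove(lru)
            del self.map[lru.key]

# Quick check
cache = LRUCache(2)
cache.put(1, "a"); cache.put(2, "b")
cache.get(1)                             # key 1 is now most recently used
cache.put(3, "c")                        # evicts key 2
assert cache.get(2) == -1 and cache.get(1) == "a" and cache.get(3) == "c"
```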
Possible follow-up questions
How would you handle concurrency in your LRU Cache design?
You could use read/write locks: a read lock lets many toys be looked at (data read) at the same time, while a write lock ensures only one toy is played with (data modified) at a time.
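As a rough illustration, here's the simplest thread-safe wrapper around the cache sketched above, using a single mutex. A reader-writer lock, as suggested, can raise read throughput further, but note that even a get reorders the list in an LRU, so "reads" still mutate shared state and the design has to account for that.

```python
import threading

class ThreadSafeLRUCache(LRUCache):
    """Wraps the LRUCache above with one mutex. A reader-writer lock could allow
    concurrent lookups, but remember that an LRU get also updates recency."""
    def __init__(self, capacity: int):
        super().__init__(capacity)
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return super().get(key)

    def put(self, key, value):
        with self._lock:
            super().put(key, value)
```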
Can you explain the time complexity of your solution?
Accessing and updating the cache is O(1), meaning they're fast. This is because we can quickly find items in the HashMap and swiftly add/remove items from the Doubly-Linked List.
And there you have it, folks! It's not as scary as it sounds, is it?
📨 Post Credits
Before we pull down the curtain, let's reveal the answer to our opening riddle. Traffic lights use Red, Yellow, and Green because these colors have long wavelengths that are easy to distinguish from far away. And how does this tie into binary? Like traffic lights, which rely on a small set of clearly distinguishable signals, binary code uses two distinct states, 0s and 1s, to transmit information reliably and efficiently.
Fin.