Valkyrie rears its wings

Chris Harris test drives the Adrian Newey-designed Aston Martin Valkyrie around the Bahrain track. Weird driving position, and that sound is… loud. Newey started working on this car back when Aston Martin also sponsored the Red Bull F1 Racing Team.

What does it look like for the web to lose? – Chris Coyier

Say, somehow, the web is dealt some massive blow, and native apps have all the momentum. What does it look like for the web to lose? As in, for native apps to somehow become the default choice for organizations building digital products.

  • Us designers and developers would either have to re-specialize on one particular platform or spread ourselves thin, getting mediocre at many.
  • There would be many good apps in just one of the walled gardens, leaving users feeling cheated that they have to choose and miss out no matter what they choose.
  • We would all live under the rule of these closed, privately-held systems. If they don’t like you or your app, you’re gone. That’s how they work now, but with nowhere else to go, gone is gone.
  • URLs are of the web, not native apps. URLs are what makes search engines a thing. Farewell to global, helpful search.
  • Did you (do you) learn and debug from being able to inspect the source code of the very thing you are looking at? Not anymore.
  • Isn’t it nice how really old websites are still perfectly available? The web does a wonderful job of backward compatibility. In a world of all native apps, one platform update can prevent any non-conforming app from running at all.
  • Not that websites are notoriously amazing at performance but have you weighed your average app download lately? 50MB is on the small end. So much for the web helping bring digital connectivity to developing nations.
  • If you sell the product, prepare to give up a sizeable chunk of every sale. 
  • Does it bring you comfort that the decision-making processes that guide the web are good-slow, because it prevents dangerous under-thought ideas? That’s how the open, collaborative web works, not private industry.
  • Do you like the idea of being able to exert control over websites with things like user stylesheets and web extensions? That’s not a thing on native apps.
  • Plus like, how do you even look up how to build native apps without websites amiright

This is a good list of items to keep an eye on, and it is true that without the web, the native community would lose out on a lot. Now, this doesn’t mean everything can and should be a web app. It also does not mean that native apps are web apps just because they use URIs to “fetch data” / run a service.

The web has two roles to play, imho: 1) the fastest way to prototype a service and test it with the public, and 2) the fastest way to scale a service. Where it falters (at least on iOS, and quickly on Android) is in having a set of platform-approved APIs that allows one to build a polished experience.

In product management, there is often discussion around the MVP – Minimum Viable Product – and the mDP (my personal preference) – Minimum Delightful Product. The web is a wonderful platform to test an MVP, for sure. I worry that the web is currently unsuited for the mDP, depending on your bar.

However, where the web currently falls well short, especially on mobile (and we can debate the why in a future discussion), is the MDP – the Maximum Delightful Product. This is where discovery (app stores), fluidity of the UX, haptics, speed, offline capabilities, and notifications all come together and deliver a, well, maximally delightful experience. The web fails here on mobile for a few reasons.

I hope the web can get better there! On desktop, esp on Apple’s M-series of devices, the web (esp on Chrome) is a delight. It’s so darn powerful and some amazing experiences are created there – Figma being a wonderful example. I want web on mobile to get there and I am rooting for it.

Formula 1 – 2023 Season – Day 1 Testing – Bahrain

Here are the relevant articles to read to catch up on the Day 1 Testing.

Red Bull Racing finally unveiled their car, and Gary Anderson gives it a once-over. They are the favorites this year as well.

Mercedes is starting off with no bouncing issues. Seems like a low bar, but I just need to point you to

    They are being coy about their actual speeds. We should know more by the end of testing and Bahrain race.

    Alonso’s on INTENSO mode from the start. He put the Aston Martin Racing car in P2 at the end of Day 1. Talking about the AMR, commentators seem to think it’s taken a step-change improvement in stability and pace. This is something to keep a keen eye on.

    Improved home networking

    Who should read this?

    You shouldn’t think about this if you don’t want to. Most people don’t. However, if you are someone who appreciates and can notice fast, performant, low latency internet on your devices, the rest of this article is for you.

    Why should I think about my Home Network?

    As more and more people work (part or full time) from their homes, the internet connection that used to be an afterthought for most people has now become a lifeline. US ISPs have not stepped up to the task of providing for this new environment – cough upload speeds cough. However, there are still steps one can take to improve the quality of their network.

    Most home network setups require one heavy investment to think and plan them once. Once done, the process (typically) pays compounding dividends over time. Those dividends come from modular upgrades, enabled by the separation of network components and, for most people, some early investment in an efficient wired backhaul.

    For anyone interested in improving the network performance (a combination of latency, throughput and bandwidth) in their home, this document details the following areas. I’ve done my best to keep the sections self-contained, so you can chart your own path.

    Table of Contents

    • [[#Who should read this]]
    • [[#Why should I think about my Home Network]]
    • [[#My Setup]]
      • [[#The Home]]
      • [[#The Internet Connection]]
      • [[#My goals]]
      • [[#My Network Components]]
    • [[#The Options]]
      • [[#Set it and forget it]]
      • [[#The others most likely why you’re reading this article]]
        • [[#The networking components]]
          • [[#PPPoE]]
      • [[#Recommendations]]
        • [[#My Network Components]]
        • [[#Your Options]]
          • [[#Links to Options]]
    • [[#Network Performance]]
      • [[#Internet bandwidth measurement]]
      • [[#Home wired network performance measurement]]
      • [[#Loaded network improvements]]
    • [[#Some general tips]]
      • [[#Wired is better than Wireless]]
      • [[#The future is multi gig]]
      • [[#Multi-WAN is still mostly an unsolved problem for the home]]
      • [[#Smart queue can still benefit multi gigabit connections]]
      • [[#There is a shortage of networking components right now]]
      • [[#WiFi 5 6 or 6E]]

    My Setup

    The Home

    I live in a really old home. The core of the home was constructed in 1924, with some upgrades over the years. This means a lot of plaster-and-lath walls, older (thicker) oak floors, older insulation material, a lot more walls (compared to newer open plans), and a hard-to-access (and crumbling) attic. Most newer homes, with their sheetrock (drywall) construction, will not be as hostile to wireless networking as mine is.

    The Internet Connection

    Living in Seattle, the only two options for us are Comcast (offering up to 1200/30 Mbps) or Centurylink (up to 940/940 Mbps). I have been a Centurylink customer since 2016. Their internet speeds, infrastructure and prices are better than Comcast’s, as long as you keep a close eye on the billing. They constantly try to overcharge you if you don’t pay attention. Modulo this irritation, you pay $65 for fiber to the home (FTTH) at 940/940 Mbps.

    Comcast recently introduced a 2000/2000 Mbps symmetrical fiber option that’s almost 5 times the price at $299/month, which I have not tried.

    My goals

    Understanding what you want to do with your network is critical before setting it up. In my case, these were the critical journeys:

    • Must have: Uninterrupted video conferencing all day for at least 2 people
      • No judder, no lost connections, no delays
      • Remote work is hard and I don’t want to juggle connection issues
    • Must have: Steady throughput for 7 cloud-connected cameras (I know, not ideal) for home security
    • Must have: Decent download speeds for multi-gig downloads almost every day (for OS images); for people with iOS and macOS devices, this should now be a requirement given their multi-gig OS downloads
    • Must have: Strong web performance, esp low latency DNS lookups
    • Must have: Local parental control for “free” (no subscriptions)
    • Must have: Individual / group control over which devices can be chatty and when
    • Must have: xPlatform access (iOS, Android, Web)
    • Must have: Strong security history and engineering culture by component companies
    • Nice to have: Strong privacy and encryption practices by the companies
    • Nice to have: low latency for gaming
    • Nice to have: future proof networking setup so I don’t have to reconsider this for about 5 years

    My Network Components


    Zyxel C3000Z + Firewalla Gold + Unifi switch behind (cable management not done yet)

    The Options

    One can certainly go down the rabbit hole of networking gear and, trust me, there are enough companies willing to sell you whatever you want (modulo supply chain issues). However, after perusing various reddit forums, reading multiple websites and conducting research, the fundamental questions one needs to answer are: how much flexibility do you need, how much time are you willing to dedicate to the ongoing “upkeep” of the network, and do you consider that time a sink or time well spent.

    Set it and forget it

    The simplest option to consider for an improved networking system in your home is to employ a mesh networking system like Google WiFi / Eero. For any home > 1500 sqft, there will be an improvement in both coverage and hence throughput. Both Google WiFi and Eero also give some basic network security to your home by a) always being up to date and b) providing basic firewall practices. Most people will benefit by going with this option.

    The others (most likely why you’re reading this article)

    Any other option from now on will eschew simplicity for flexibility and / or performance. This will come at the cost of more upkeep for the home network. Let’s take a brief pause and learn some basic networking to understand why these options exist and how/why they improve the home network.

    The majority of homes use the ISP-provided appliance to connect to the internet. In most cases this is a modem/router/WiFi access point (AP) combo appliance. Most decent ISPs will do their best to keep this to a limited set of options that are well tested for their network and (hopefully) keep them updated to limit network intrusions / open vulnerabilities.

    Aside: Unless there are extraneous circumstances that prevent you from replacing the ISP appliance (PPPoE support, a locked-down MAC address, some other specific implementation, etc.), the ideal situation would be to replace the whole thing with a system that you control.

    To achieve flexibility and (in some cases) improved performance, the next steps would be to split the single appliance into individual networking components.

    The networking components

    • modem: converts the home network signals into something the ISP can understand at the physical layer. In the case of cable internet, it physically converts them into electromagnetic waves of a specific frequency and channel so that the ISP knows how to interpret them. In the case of fiber, this device is typically called an ONT – an Optical Network Terminal – which takes in an optical fiber (usually GPON) and converts that into an ethernet signal (RJ45 output).
    • gateway: this is the interface between the ISP and your home network at the network and transport layers.
    • firewall: contains the necessary logic to ensure that your home network is not intruded upon by unintended parties
    • router: this is the hub of your home network and allows the devices in your home network to connect to the internet. Your devices communicate over the internet using packets of pre-determined sizes. The router manages all the packets and ensures that the right packet is sent to the right device.
    • access point: your device can be connected to the router either using a wired connection (ethernet, direct copper or optical fiber) or using wireless connections via the access point. Hence you can have a wired / wireless access point.

    At its extreme, each of these components can be an individual device. However, for most homes, you don’t need that extreme modularity.

    You ideally want your modem separate from the router. This allows you to change your internet service provider without changing anything else in your network. You also want your wireless access points separate from your router. This allows you to add / upgrade wireless access points in the future to more efficient / more performant technologies (WiFi 5 -> WiFi 6 -> WiFi 6E -> WiFi 7, etc.) without changing other aspects of the home network.

    For most use cases, the router is usually combined with the firewall and gateway. There are specific cases where you may want to separate them though:

    PPPoE


    PPPoE is an older protocol that was popular during the DSL days. It handles authentication, encryption and compression of the home network packets for the ISP to process. As it deals with authentication, encryption and compression, you can imagine that it’s a compute-intensive operation. At the gigabit and multi-gig speeds available today, this often consumes an entire 1-2 GHz core of the device. Over time, most internet service providers will do away with this protocol. Sadly, in my case, Centurylink not only uses it but relies on it for legacy reasons. This means that if I combine my router with the device that also performs the PPPoE connection, I can expect that device to have at least one of its cores fully saturated managing PPPoE (unless it has specific ASICs to accelerate the process).

    To achieve the separation in my case, I leave the PPPoE (and VLAN tagging) required for my internet connection to the device provided by the ISP and connect that to a router. A purist would shout about double NAT, which is absolutely true. However, I’ve not found an affordable, usable PPPoE implementation in a gateway that doesn’t fundamentally reduce the throughput below what the Centurylink-provided device can do.

    [May 2022] I recently heard that eero 6 devices have a strong offloaded PPPoE implementation. I’ve not had a chance to test this yet.


    My Network Components

    In my case, I landed at the following:

    • Zyxel C3000Z (provided by Centurylink) as the gateway / PPPoE device
    • Firewalla gold as the router (capable of 3gigabit throughput with deep packet inspection and IPS and IDS)
    • Eero Pro 6 as APs
    • Unifi Switch 24 (connected via a 2Gbps link to firewalla gold)

    The eero Pro 6 as APs is a total waste of their capabilities. However, I use them because they formed my main network before the current structure. I also like that they have two 5GHz radios that can be used for device connections as well. I know I don’t get DFS with them in bridge mode. However, it’s a trade-off I am willing to make.

    Your Options

    Here’s a flowchart I often recommend to people when they ask me the question:

    switch (situation)
    case “set it and forget it”: buy a mesh network solution with some decent security – Google WiFi / eero;
    case “set it and forget it + best performance”: buy a mesh network, but connect the satellites using a wired backhaul;
    case “security + set it and forget it”: Firewalla Gold (as router) + mesh network;
    case “security + set it and forget it + performance”: Firewalla Gold (as router) + mesh network with wired backhaul;
    case “flexibility + performance with minimal effort”: Firewalla Gold (as router) + Unifi APs;
    case “flexibility + performance”: Unifi Dream Machine Pro / SE + Unifi switch + APs;
    case “ultimate flexibility”: pfSense on your device of choice + Unifi APs;

    Note: only the Firewalla Gold and Purple have routing capabilities. The others – Red, Blue Plus – only add network security.

    Network Performance

    Most people want to measure the performance of an internet application on their device to get a representative sense of whether it is good enough or not. If that’s all you care about, then just point your device to something like Fast or Speedtest and you will get a representative idea.

    However, if you want to understand network performance there are some terms you need to familiarize yourself with:

    • Bandwidth: this is usually a measure of the pipe that connects the internet to your home. This is often the number that the ISP will share with you. There is usually a download bandwidth and an upload bandwidth. For example Centurylink says their speeds will be up to 940Mbps for downlink and 940Mbps for uplink.
    • Throughput: Think of this as the “actual” speeds. It’s defined as the amount of bits per second that a measuring tool can actually count coming one way.
    • Latency (sometimes referred to as ping): how much time passes before you get a response to a query from the internet
      • Loaded latency: the latency when your network is under a more representative load, i.e. when multiple devices are talking to the internet at the same time
      • Bufferbloat: the total amount of time a packet spends waiting in a queue to be processed on the path from the server to your device. In most situations, it’s loaded latency minus latency
      • Jitter: the variance in latency. What’s the spread when you ping 30 times? Is it always 3ms? Or does it vary?

    For most home networks, you want latency and loaded latency to be as low as possible, but more critically, consistent (low jitter). Bandwidth should be ideally a little over what your peak load throughput needs to be. Popular mesh networks today have some really advanced technology to keep all of this humming to their best ability.
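    These definitions are easy to compute once you have ping samples. A minimal sketch (the millisecond samples below are made up for illustration):

```python
import statistics

# Hypothetical ping samples in milliseconds: idle network vs. under load.
idle_pings = [3.1, 2.9, 3.0, 3.2, 2.8]
loaded_pings = [9.5, 11.2, 10.1, 9.8, 10.4]

latency = statistics.mean(idle_pings)
loaded_latency = statistics.mean(loaded_pings)
bufferbloat = loaded_latency - latency     # extra time packets spend queued
jitter = statistics.stdev(idle_pings)      # spread of the latency samples

print(f"latency={latency:.1f}ms loaded={loaded_latency:.1f}ms "
      f"bufferbloat={bufferbloat:.1f}ms jitter={jitter:.2f}ms")
```

    In real measurements you would collect the samples with ping or a bufferbloat test; the math stays the same.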

    If you don’t fall into that category, then network performance measurement falls into three steps:

    • measure the speed of the internet to home
    • measure the throughput of the home network
    • ensure that the internet packets are routed as efficiently as possible

    Wireless network performance measurement is a whole other matter that I will not get into, as it depends on the device, the distance to the router, the router software, the interference at that location at that time, etc.

    For the rest of this measurement, we will focus on wired network performance.

    Internet bandwidth measurement

    To measure this, you ideally need to run the measurement tool on the gateway / router (if gateway and router are combined). Most mesh networking systems have a friendly UX for this. If you have a different router, most dedicated routers allow you to run this too, in the worst case as a command-line tool.
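    If your router doesn’t offer one, a commonly used option on a wired machine is the third-party speedtest-cli tool (a community Python package, not something from your ISP; assuming Python and pip are available):

```shell
# Install the community speedtest.net CLI (a third-party Python package)
pip install speedtest-cli

# Run a download/upload/ping measurement and print a terse summary
speedtest-cli --simple
```

    Note that measuring from a wireless client will measure your WiFi as much as your internet connection, so run it from a wired machine if you can.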

    Home wired network performance measurement

    The gold standard for network performance measurement is considered to be iPerf3. With iperf3, one device acts as the server and another as the client, and it measures the packet flow between those two endpoints. If you’re into it, you can run an iperf3 server across different parts of the network topology. However, for most purposes, running it as a peer to your client device will give you a good enough result.
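    A minimal two-machine run looks like this (assuming iperf3 is installed on both devices; 192.168.1.10 is a placeholder for your server’s LAN address):

```shell
# On the "server" device (e.g. a desktop wired to the switch):
iperf3 -s

# On the "client" device, point at the server's LAN IP:
iperf3 -c 192.168.1.10 -t 30       # 30-second TCP throughput test
iperf3 -c 192.168.1.10 -t 30 -R    # reverse the direction (server sends)
```

    Running both directions matters: a duplex link can have asymmetric results if a cable or port is negotiating badly.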

    Loaded network improvements

    As the number of devices in your network increases (US households have 10+ network-connected devices), it’s important to consider what happens when your network gets loaded. This is when all the network-connected devices become chatty at the same time. Assuming your network bandwidth can handle it (else you need to up that at the ISP level), your router should be capable of processing all those packets in near realtime. Otherwise, your router becomes a bottleneck and you start to notice everything slowing down.

    This is exacerbated with gigabit and multi gig internet connections. I found this comment on the OpenWRT forums quite illuminating.

    Let’s take a look at the math: At 1Gbps using 1500 byte packets, you need to send/receive 83333 packets per second. The packets need to be received by an interrupt, go through the firewall, be inspected, maybe have NAT applied, sent into a queue, the queue calculates rates to avoid over-sending on the link and causing buffers, and then hardware interrupts are serviced to actually send the packet along…

    At 1 GHz processing rate, each packet gets 12000 clock cycles of calculation if the CPU is maxed out doing nothing but processing packets.

    Evidently in an ideal world, we should have maybe 1.2GHz processors or better, and maybe have two cores at least one can handle interrupts on the receive interface, and one can handle interrupts on the send interface, and they can share the firewall and queueing duties. Let’s not forget that there’s RAM latency and bandwidth issues if the packets need to go from kernel to userland (like for OpenVPN) and encryption/decryption also for VPNs.

    While the post is focused on why gigabit routing is (was?) expensive, the point remains: there is a lot of compute happening past 500 Mbps that requires beefier general-purpose compute unless there are dedicated accelerators (which are increasingly common). As a result, the router quickly becomes the bottleneck of your home network as the internet speed increases.
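    The arithmetic in that quote checks out; a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope math from the quoted OpenWRT comment.
link_bps = 1_000_000_000      # a 1 Gbps link
packet_bits = 1500 * 8        # an MTU-sized (1500 byte) packet
cpu_hz = 1_000_000_000        # a 1 GHz core

packets_per_second = link_bps // packet_bits      # packets the link can carry
cycles_per_packet = cpu_hz // packets_per_second  # CPU budget per packet

print(packets_per_second, cycles_per_packet)
```

    And 12,000 cycles is the budget for *everything*: interrupts, firewall rules, NAT, queuing. Smaller packets make it far worse.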

    Some general tips

    Wired is better than Wireless

    Yes, we are entering the world of WiFi 7 and its promised 40+ Gbps low-latency throughput. However, until WiFi 7 becomes a widespread reality, wired trumps wireless. Its benefits are:

    • consistent low latency
    • duplex communication over wires
    • freeing up the airwaves for the devices that cannot be wired

    Even in the world of mesh networking there are benefits to having a wired backhaul in your home. You can then save by buying dual-band equivalents, while retaining the flexibility to replace them with plain access points as your networking situation changes.

    So, if at all possible, hire a local low-voltage electrician (or, in some places, a security or AV company) and get some Cat 6 / Cat 6A Ethernet drops.

    The future is multi gig

    Internet service providers are slowly, but surely, moving to multi-gig internet plans. However, most consumer networking equipment is only just starting to catch up to > 1000 Mbps throughput. Granted, most home use cases do not need multi-gig internet speeds, yet. However, if you’re setting up your home network now, at least modularize it to make it future proof.

    This means router separate from switch separate from access points, ideally.

    Multi-WAN is still mostly an unsolved problem for the home

    Multi-WAN is when you have more than one internet connection. The benefits of Multi-WAN are redundancy and load balancing. Even most prosumer devices still only support failover modes. And even when load balancing, the technology required to treat multiple connections as a single logical connection is inaccessible to 90% of homes. It either requires buying (really expensive) enterprise-level networking gear or buying suspect, often not fully supported networking equipment.

    Smart queue can still benefit (multi) gigabit connections

    You will often hear that gigabit and greater-than-gigabit connections won’t benefit much from smart queuing. There is some truth to it: if you have a really good router capable of handling that packet traffic without causing bottlenecks, then with such a large pipe you won’t see much benefit from smart queues. More importantly, most smart queues are designed to prioritize important traffic in highly constrained bandwidth situations (because of the emaciated upload pipes of most US internet connections).

    Yet, the truth is that most consumer routers are not actually capable of handling even gigabit internet connections.
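    For what it’s worth, on a Linux-based router a smart queue can be enabled with the CAKE qdisc via tc. A sketch, assuming eth0 is your WAN interface and your link reliably delivers a bit over 900 Mbps (set the bandwidth slightly below what the link actually achieves so the queue forms on your router, not the ISP’s):

```shell
# Shape egress on the WAN interface with CAKE, slightly under link capacity
tc qdisc replace dev eth0 root cake bandwidth 900mbit

# Verify the qdisc and inspect its statistics
tc -s qdisc show dev eth0
```

    Commercial firmware (OpenWRT’s SQM, Firewalla’s Smart Queue, etc.) wraps this same idea in a UI.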

    There is a shortage of networking components right now

    Owing to the supply chain disruptions, the chip shortage and inflation, most networking components are currently priced higher than before. While it is not as bad as trying to score a PS5, most devices linked here could be out of stock. Some manufacturers, like firewalla, provide their own inventory availability. In the case of Unifi, I’ve found /r/UbiquitiInStock to be the best source.

    WiFi 5, 6 or 6E?

    Don’t go below WiFi 5. There is a step change in wireless network performance with WiFi 5 (or 802.11ac). WiFi 6 has minor throughput improvements, but focuses on improved efficiency inside a network (by bringing in cellular technologies – OFDMA). WiFi 6E introduces 6GHz for faster throughput (but not by much for most real world scenarios) as 6GHz also attenuates faster than 5GHz.

    So, don’t buy into the marketing of the wireless AP companies. A general rule of thumb can be:

    • Get WiFi 5 if that’s what you can afford, especially if you have less than 10 devices per access point
    • If you have a home with 100+ wireless devices, many of them WiFi 6 capable, you might benefit from going to WiFi 6 / 6E to achieve a more consistent per-device throughput.

    Read here for WiFi 6 versus WiFi 5

    Microsoft to acquire Activision-Blizzard

    Microsoft purchasing Activision after the purchase of Zenimax / Bethesda is a clear indication that it believes its future is in its streaming bundle. 

    Let’s consider the Fallout (heh)

    This is an acknowledgement that Sony is the better player in the console gaming market. With its robust IP leading console sales and a keen eye on what matters for this market, Sony is clearly leading the charge.

    IMO, Microsoft’s done playing that game (and losing). It’s flipping the rules of the game and trying to position itself for the next round – IP and streaming and subscription revenue.

    The benefits for Microsoft are vertical integration, more optimized subscription revenue, and less churn in IP and deals, while at the same time taking future IP _away_ from Sony, Nintendo and Steam.

    This is an inspired play and, while no longer novel in the Satya era of Microsoft, it definitely positions the company for the future. However, the killer move isn’t here – yet. Here’s a hypothesis on the next parts of the play.

    What does Sony do?

    Kickstart a buying spree of IP to protect its moat. The end result is that most publishers / studios will eventually end up as part of Sony / Microsoft.

    1. Microsoft has MORE money

    2. Microsoft has potential for future revenue with Game Pass

    Adv: Microsoft

    To note: this will also likely fragment the hardware market, which has increasingly consolidated towards x86 heterogeneous computing led by AMD.


    Intel / Qualcomm have a shot at improving in-house heterogeneous computing

    But this is a potential opportunity for Intel / Qualcomm. This is a valuable market (both console and server hardware) in which to truly upgrade their knowledge and delivery of heterogeneous computing systems.

    Microsoft will increasingly push for cloud streaming because that’s its advantage: it drives subscriptions and opens up the potential killer play – also becoming a cloud streaming services provider to further dis-intermediate Sony.

    Gaming studios are not going to have much of a say in this. They are not the “profitable” part of the value chain and will face increasing costs (modulo Epic / Unity) in this increasingly bifurcated future, unless they have strong IP.

    Google / NVIDIA / Amazon

    Now, if they have strong IP, they could potentially play along by being the Disney+ / HBO Max to Microsoft’s Netflix – integrating upwards and also becoming a streaming service / subscription player. There is some inherent bearishness to this plan. Not every studio can transition into a successful subscription revenue player (see Peacock / Paramount+, etc.).

    I believe Take Two and possibly Ubisoft are the two players I see potentially popping up here. Specifically for streaming service providers, there are some additional cloud players to consider – NVIDIA’s GeForce Now and Google’s Stadia tech, along with Amazon’s Luna. At least two of them are strong cloud players and can potentially sway the strong studio IPs.


    This brings us to the game engine developers – Epic and Unity. They currently power the cross-platform genre. Yes, some companies develop their own engines, but they do that for their own _strong_ IP and, I assume, will either be bought out by Sony / Microsoft in this battle or choose their own destiny with integrated streaming. In a world where streaming tech + Sony + Nintendo become the future, Epic and Unity take on a diminished role. Their best bet is to be purchased by one of the platforms – Microsoft, Sony, Nintendo – or to develop a relationship with cloud providers and become an acquisition target for them.

    The next 2 years in gaming is going to be … epic.

    Impossible Foods’ new pork is 0% pig. That’s a big deal. – Vox

    Pork is the most consumed meat on the planet, accounting for 36 percent of global meat intake. If Impossible Foods can get us to eat and enjoy a meatless version of it instead, it could help save millions of pigs from suffering on factory farms and curb the impact of pig farming on the environment. It could also improve human health, not least because it’ll help us combat risks like antibiotic resistance.

    You can soon buy meatless Impossible Pork and Impossible Sausage – Vox

    TIL: Pork accounts for 36% of all meat consumed in the world. Whoa.

    Faux pork will probably help Impossible Foods make inroads in Asia, a huge market where pork is extremely popular. It provides a way to guarantee continued access to the beloved meat even when, say, an epidemic hits. Since August 2018, the African swine fever epidemic has killed a quarter of all pigs around the world. China’s herd has shrunk by at least half. On the plus side, surveys have shown that Chinese consumers are very open to meat alternatives, more so than Americans.

    Thoughts on The Skywalker Saga

    Star Wars has always been dear to me. It’s the fantasy side of Star Wars that appealed to me, growing up. The epitome of the hero’s journey of the original trilogy. The much hated, but still interesting story (when combined with The Clone Wars) of the prequel trilogy. And the resuscitation of the series with the final trilogy.

    I was very excited with the reboot. I looked forward to the setup of new stories in the fantasy world that I spent countless hours across multiple mediums in.

    However, the latest trilogy has left me more disappointed than anything else. I was personally disappointed because I was expecting the stories to be designed for me – to be more complex than a replay of the same story from the original trilogy.

    In fact, my own rating of the latest trilogy keeps the final episode, The Rise of Skywalker, as the nadir of the trilogy, often competing with Attack of the Clones (Episode 2). However, over the holidays, with time to ruminate, I think Star Wars hasn’t changed. I think I’ve changed.

    Star Wars has always been designed to get the young generation excited about space fantasy. And in that mission, a look around at the younger audience attending the movies is a very clear reminder that it has succeeded. They don’t worry about the plot holes (yet) and don’t worry that Rey as a Palpatine is a missed opportunity from The Last Jedi. They just enjoy the fun camaraderie between Rey, Poe and Finn. They are fascinated by the Jedi powers of force healing and force-time.

    So, from that perspective, I am genuinely happy for them. For me, though, The Clone Wars and The Mandalorian have presented far more interesting plots than the latest trilogy of movies did. The books have always been a good source of complex plots and experiments on storytelling in this fantastic world.


    Battery impact of dark mode in OLED screens

    With both iOS and Android supporting dark mode, some interesting analyses of the energy consumption savings of dark mode are now available.

    Personally, the most interesting takeaway is that dark grey (with significant accessibility improvements) is about as battery efficient as ‘full black’ (which still has some accessibility concerns).

    XDA has more to read

    The fundamental cause seems to be the energy curve of OLED screens (energy consumption vs. luminance), measured at the recommended 0.3% of the luminance of white on a OnePlus Pro screen.

    Now, barring a unique display profile / configuration on the OnePlus Pro screen, this is a good scientific observation (also backed up by two real-world measurement experiments).
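    Why dark grey ends up nearly as cheap as pure black can be sketched with a toy model: assume each OLED pixel’s power draw is roughly proportional to the luminance it emits, and that 8-bit sRGB grey levels map to luminance through an approximate gamma-2.2 transfer curve. The numbers below are illustrative assumptions, not panel measurements.

    ```python
    # Toy model: OLED pixel power assumed roughly proportional to emitted
    # luminance; sRGB grey levels map to luminance via an approximate
    # gamma-2.2 curve. Illustrative only -- real panels differ.

    def luminance_fraction(srgb_grey: int) -> float:
        """Relative luminance (0..1) for an 8-bit sRGB grey level."""
        return (srgb_grey / 255) ** 2.2

    def relative_power(srgb_grey: int) -> float:
        """Pixel power relative to full white, under the toy model."""
        return luminance_fraction(srgb_grey)

    for label, grey in [("full black", 0x00), ("dark grey #222222", 0x22),
                        ("mid grey #808080", 0x80), ("white", 0xFF)]:
        print(f"{label:18s} -> {relative_power(grey):.4f} of white's power")
    ```

    Under this toy model a #222222 dark grey draws only on the order of 1% of a white pixel’s power, which is consistent with the observation that dark grey is nearly as battery efficient as full black while being easier on the eyes.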

    So, I wonder why Apple is marketing a pure-black dark mode in the iOS 13 screenshots.

    WhatsApp too gets hacked

    The end in “end-to-end” sort of hides the fact that there are several layers before the data is fully encrypted, in a way that makes it invisible to the transport layer. First of all, you have to type the message into your phone, which exposes what you type to people (or cameras, mind you) around you. Even if your screen and keyboard are covered, you are still leaking data from your keyboard, both visually and acoustically.

    But then there’s also the operating system your app is running on; you simply rely on the fact that your keyboard isn’t logging things as you type them, that your camera isn’t recording when it shouldn’t, and so on and so forth. There are a lot of “loose” ends before the end-to-end shrouds your messages in mathematical secrecy. And then, there’s the recipient. In most cases, you have no idea what situation the recipient is in or who they might be. For all you know, they might be broadcasting your texts to the building across from them.
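    The layering argument above can be sketched in a few lines: the transport only ever sees ciphertext, but anything hooked in before encryption (a compromised keyboard, the OS) sees plaintext. The XOR “cipher” here is a deliberately toy stand-in, not real cryptography.

    ```python
    # Toy sketch of the "loose ends" before end-to-end encryption: the
    # wire sees only ciphertext, but a layer sitting before encryption
    # (here, a keylogger in the input path) captures the plaintext.
    import itertools

    KEY = b"not-a-real-key"       # toy key; real E2E uses negotiated session keys
    keylogger_buffer = []          # a compromised input layer, pre-encryption

    def type_message(text: str) -> bytes:
        keylogger_buffer.append(text)   # leaks *before* E2E ever kicks in
        return text.encode()

    def encrypt(plaintext: bytes) -> bytes:
        # Toy XOR cipher -- insecure, stands in for the real protocol.
        return bytes(b ^ k for b, k in zip(plaintext, itertools.cycle(KEY)))

    decrypt = encrypt  # XOR is its own inverse

    msg = type_message("meet at noon")
    wire = encrypt(msg)                        # what the transport layer sees
    assert wire != msg                         # only ciphertext on the wire
    assert decrypt(wire) == msg                # recipient recovers the message
    assert keylogger_buffer == ["meet at noon"]  # ...but so did the keylogger
    ```

    The point of the sketch: the encryption itself can be flawless, and the message still leaks at an endpoint that the “end-to-end” guarantee never covered.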

    Encryption is just part of the puzzle; it is definitely not a panacea.

    On one side, I do not want people over at Menlo Park to peer through my chats on Facebook’s WhatsApp, nor do I want people in Switzerland to go through my ProtonMail email. I am not sure whether they cannot right now, but I know that without E2E, they can. I’ll take that side of the deal, and you should too. Similarly, basic encryption protects you from a customs officer at the border having a bad day, or an ex-boyfriend who just wants some dirt. The same argument goes for mitigating dragnet surveillance. Not everyone, yet, can afford NSO Group’s software.

    Yet, how do you explain to tens of Indians or Myanmar residents that you simply cannot control people’s behavior, when you are the one mostly benefiting from the encryption? Apple put on a brave face when it resisted the FBI’s attempts, but will it be able to do the same if there were a bigger threat to national security? Will Microsoft? Would we even know that these companies cooperated with the government? If Google dropped a key logger on your phone tomorrow, I am not sure anyone would be any the wiser.