Current 2024 - 5k Fun Run (or Walk)
At Current 24 a few of us will be going for an early run (or walk) on Tuesday morning. Everyone is very welcome!
I do my best to keep, if not abreast of, then at least aware of, what’s going on in the world of data. That includes RDBMSs, event streaming, stream processing, open source data projects, data engineering, object storage, and more. If you’re interested in the same, then you might find this blog useful, because I’m sharing my sources :)
Let’s not bury the lede: it was DNS. However, unlike the meme ("It’s not DNS, it’s never DNS. It was DNS"), I didn’t even have an inkling that DNS might be the problem.
I’m writing a new blog about streaming Apache Kafka data to Apache Iceberg and wanted to provision a local Kafka cluster to pull data from remotely. I got this working nicely just last year using ngrok to expose the broker to the interwebz, so I figured I’d use the same approach again. Simple, right?
Nope.
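For reference, the technique from that earlier post looks roughly like this; the ngrok forwarding address shown is made up:

```bash
# Open a public TCP tunnel to the local Kafka broker
ngrok tcp 9092

# ngrok prints a forwarding address, e.g. tcp://0.tcp.ngrok.io:12345 (made up).
# The broker has to advertise that address so that remote clients connect
# back through the tunnel instead of to localhost. In server.properties:
#
#   listeners=PLAINTEXT://0.0.0.0:9092
#   advertised.listeners=PLAINTEXT://0.tcp.ngrok.io:12345
```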
After a break from using AWS I had reason to reacquaint myself with it today, and did so via the CLI. The AWS CLI is pretty intuitive and has a good helptext system, but one thing that kept frustrating me was that after closing the help text, the screen cleared, so I couldn’t copy the syntax out to use in my command!
The same thing happened when I ran a command that returned output: the screen cleared.
Here’s how to fix either, or both, of these issues.
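In short, it comes down to the pager that the CLI invokes. A minimal sketch of the kind of fix involved, assuming less is your pager (the -X flag is the one that stops the screen being cleared on exit):

```bash
# Keep command output on screen after the pager exits:
#   -F quit if output fits on one screen, -R render colours, -X don't clear on exit
export AWS_PAGER="less -FRX"

# ...or turn off paging of command output altogether:
export AWS_PAGER=""

# The help text may go through the system pager rather than AWS_PAGER,
# so it's worth setting that too:
export PAGER="less -FRX"
```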
At this year’s Kafka Summit I’m planning to continue the tradition of going for a run (or walk) with anyone who’d like to join in. This started back at Kafka Summit San Francisco in 2019 over the Golden Gate Bridge and has continued since then. Whilst London’s Docklands might not offer quite the same experience, it’ll be fun nonetheless.
This year Kafka Summit London includes a dedicated track for talks about Apache Flink. This reflects the continued rise of interest and use of Apache Flink in the streaming community, as well as the focus that Confluent (the hosts of Kafka Summit) has on it.
I’m looking forward to being back at Kafka Summit. I will be speaking on Tuesday afternoon, room hosting on Wednesday morning, and hanging out at the Decodable booth in between too.
Here’s a list of all the Flink talks, including the title, time, and speaker. You can find more details, and the full Kafka Summit agenda, here.
At Decodable we migrated our docs platform onto Antora. I wrote previously about my escapades in getting cross-repository authentication working using Personal Access Tokens (PATs). These are fine for just a single user, but they’re tied to that user, which isn’t a good practice for deployment in this case.
In this article I’ll show how to use GitHub Apps and Installation Access Tokens (IAT) instead, and go into some detail on how we’ve deployed Antora. Our GitHub repositories are private which makes it extra-gnarly.
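To give a taste of the mechanism: a GitHub App signs a short-lived JWT with its private key and exchanges it for an installation token via the REST API. A minimal sketch, assuming you already have the signed JWT and the installation ID to hand (the org and repo names are placeholders):

```bash
# Exchange the GitHub App's signed JWT for an Installation Access Token
IAT=$(curl -s -X POST \
  -H "Authorization: Bearer ${APP_JWT}" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/app/installations/${INSTALLATION_ID}/access_tokens" \
  | jq -r '.token')

# Use the token to authenticate git against the private repos,
# e.g. when Antora fetches its content sources:
git clone "https://x-access-token:${IAT}@github.com/my-org/my-docs.git"
```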
A friend messaged me late last night with the scary news that Google had emailed him about a ton of spammy subdomains on his own domain.
“Any idea how this could have happened?” he asked.
Why should the Java folk have all the fun?!
My friend and colleague Gunnar Morling launched a fun challenge this week: how fast can you aggregate and summarise a billion rows of data? Cunningly named The One Billion Row Challenge (1BRC for short), it’s aimed at Java coders, encouraging them to explore new features in the language and optimisation techniques.
Not being a Java coder myself, and seeing how the challenge has already unofficially spread to other communities including Rust and Python, I thought I’d join in the fun using what I know best: SQL.
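To give a flavour of the approach, here’s a sketch only, assuming the standard 1BRC measurements.txt format of station;temperature, and using DuckDB as the SQL engine:

```bash
duckdb <<'SQL'
-- Aggregate min/mean/max temperature per station from the 1BRC input file
SELECT station,
       MIN(temp)           AS min_temp,
       ROUND(AVG(temp), 1) AS mean_temp,
       MAX(temp)           AS max_temp
FROM   read_csv('measurements.txt',
                header = false,
                delim  = ';',
                columns = {'station': 'VARCHAR', 'temp': 'DOUBLE'})
GROUP BY station
ORDER BY station;
SQL
```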
Antora is a modern documentation site generator with many nice features including sourcing documentation content from one or more separate git repositories. This means that your docs can be kept under source control (yay 🎉) and in sync with the code of the product that they are documenting (double yay 🎉🎉).
As you would expect for a documentation tool, the Antora documentation is thorough, but there was one sharp edge involving GitHub that caught me out, which I’ll detail here.
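For what it’s worth, one standard mechanism for the private-repo case is Antora’s GIT_CREDENTIALS environment variable; a minimal sketch, with a placeholder token:

```bash
# Let Antora authenticate its git content sources against private repos
# (the token value is a placeholder)
export GIT_CREDENTIALS="https://<your-token>@github.com"
npx antora antora-playbook.yml
```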