Learning Golang (some rough notes) - S01E02 - Slices
Slices made sense, until I got to Slice length and capacity. Two bits puzzled me in this code:
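The code in question isn't shown in this excerpt, so as a rough stand-in here's a minimal sketch of my own (not necessarily the original snippet) that shows the two behaviours that puzzled me: length changes every time you re-slice, whereas capacity only shrinks when you move the start of the slice forward.

package main

import "fmt"

func main() {
	// A slice backed by an underlying array of six ints.
	s := []int{2, 3, 5, 7, 11, 13}
	fmt.Println(len(s), cap(s)) // 6 6

	// Slicing away the tail drops the length but not the capacity:
	// the underlying array is still all there.
	s = s[:0]
	fmt.Println(len(s), cap(s)) // 0 6

	// The slice can be extended again, so long as it stays within capacity.
	s = s[:4]
	fmt.Println(len(s), cap(s)) // 4 6

	// Dropping elements from the front reduces the capacity too,
	// because the slice now starts further into the array.
	s = s[2:]
	fmt.Println(len(s), cap(s)) // 2 4
}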
I’ve never used pointers before. Found plenty of good resources about what they are, e.g.
But why? It’s like explaining patiently to someone that 2+2 = 4, without really explaining why we would want to add two numbers together in the first place.
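For what it’s worth, the "why" is easier to see with a tiny example than with a definition: passing a value to a function copies it, so if you want the function to change the caller’s variable you pass a pointer instead. Here’s a minimal sketch of my own (not taken from any of those resources):

package main

import "fmt"

// addOneByValue receives a copy of n, so the caller never sees the change.
func addOneByValue(n int) {
	n = n + 1
}

// addOneByPointer receives the address of the caller's variable,
// so it can modify the value that lives there.
func addOneByPointer(n *int) {
	*n = *n + 1
}

func main() {
	x := 2
	addOneByValue(x)
	fmt.Println(x) // still 2

	addOneByPointer(&x)
	fmt.Println(x) // now 3
}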
My background is not a traditional CompSci one. I studied Music at university, and managed to wangle my way into IT through various means, ending up doing what I do now with no formal training in coding, and a grab-bag of hacky programming attempts on my CV. My weapons of choice have been BBC Basic, VBA, ASP, and more recently some very unpythonic Python. It’s got me by, but I figured recently I’d like to learn something new, and several people pointed to Go as a good option.
Kafka Connect (which is part of Apache Kafka) supports pluggable connectors, enabling you to stream data between Kafka and numerous types of system, including, to name just a few:
Databases
Message Queues
Flat files
Object stores
The appropriate plugin for the technology with which you want to integrate can be found on Confluent Hub.
For whatever reason, CSV still exists as a ubiquitous data interchange format. It doesn’t get much simpler: chuck some plaintext with fields separated by commas into a file and stick .csv
on the end. If you’re feeling helpful, you can include a header row with the field names in it.
order_id,customer_id,order_total_usd,make,model,delivery_city,delivery_company,delivery_address
1,535,190899.73,Dodge,Ram Wagon B350,Sheffield,DuBuque LLC,2810 Northland Avenue
2,671,33245.53,Volkswagen,Cabriolet,Edinburgh,Bechtelar-VonRueden,1 Macpherson Crossing
In this article we’ll see how to load this CSV data into Kafka, without even needing to write any code.
In v5.5 of Confluent Platform the REST Proxy added new Admin API capabilities, including functionality to list, and create, topics on your cluster.
Check out the docs here and download Confluent Platform here. The REST proxy is Confluent Community Licenced.
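To give a flavour of what the Admin API looks like in practice, here’s a rough Go sketch that creates a topic over REST. The endpoint shape (/v3/clusters/{cluster_id}/topics) and the JSON field names follow the v3 Admin API docs, but the base URL, cluster ID, and topic settings below are placeholders to swap for your own values; treat it as a sketch rather than a copy-paste recipe.

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Placeholders: point these at your own REST Proxy and Kafka cluster.
	base := "http://localhost:8082"
	clusterID := "my-cluster-id" // GET {base}/v3/clusters returns the real cluster ID

	// Request body for creating a topic via the v3 Admin API.
	body := []byte(`{"topic_name":"test_topic_01","partitions_count":3,"replication_factor":1}`)

	resp, err := http.Post(
		fmt.Sprintf("%s/v3/clusters/%s/topics", base, clusterID),
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}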
Question from the Confluent Community Slack group:
How can I access the data in an object in an array like the one below, using a ksqlDB stream?
"Total": [ { "TotalType": "Standard", "TotalAmount": 15.99 }, { "TotalType": "Old Standard", "TotalAmount": 16, " STID":56 } ]
Alfred is one of my favourite productivity apps for the Mac. It’s a file indexer, a clipboard manager, a snippet expander - and that’s just scratching the surface really. I recently got a new machine without it installed and realised just how much I rely on Alfred, particularly its clipboard manager.
I’ve been poking around recently with capturing Wi-Fi packet data and streaming it into Apache Kafka, from where I’m processing and analysing it. Kafka itself is rock-solid - because I’m using ☁️Confluent Cloud and someone else worries about provisioning it, scaling it, and keeping it running for me. But whilst Kafka works just great, my side of the setup—tshark
running on a Raspberry Pi—is less than stable. For whatever reason it sometimes stalls and I have to restart the Raspberry Pi and restart the capture process.
I wanted to get some data from a Kafka topic:
ksql> PRINT PERSON_STATS FROM BEGINNING;
Key format: KAFKA (STRING)
Value format: AVRO
rowtime: 2/25/20 1:12:51 PM UTC, key: robin, value: {"PERSON": "robin", "LOCATION_CHANGES":1, "UNIQUE_LOCATIONS": 1}
into Postgres, so did the easy thing and used Kafka Connect with the JDBC Sink connector.
My name’s Robin, and I’m a Developer Advocate. What that means in part is that I build a ton of demos, and Docker Compose is my jam. I love using Docker Compose for the same reasons that many people do:
Spin up and tear down fully-functioning multi-component environments with ease. No bespoke builds, no cloning of VMs to preserve "that magic state where everything works"
Repeatability. It’s the same each time.
Portability. I can point someone at a docker-compose.yml
that I’ve written and they can run it on their machine, with the same results almost guaranteed.
ksqlDB 0.7 will add support for message keys as primitive data types beyond just STRING
(which is all we’ve had to date). That means that Kafka messages are going to be much easier to work with, and require less wrangling to get into the form in which you need them. Take an example of a database table that you’ve ingested into a Kafka topic, and want to join to a stream of events. Previously you’d have had to take the Kafka topic into which the table had been ingested and run a ksqlDB processor to re-key the messages such that ksqlDB could join on them. Friends, I am here to tell you that this is no longer needed!
Very simple to fix: go to https://calendar.google.com/calendar/syncselect and select the calendars that you want. Click save.