Oh Man, So Much to Update on ….

Terraform

Obviously, after listing my next areas of study in my previous post, I would go and pick up something else entirely: Terraform. Back in James Turnbull land with this marvellous book. I had been looking at Ansible but was put off by all the Vagrant material, which seemed too anti-container to me.

I have had some crunchy issues to work through. Frustrating at times, but I have learnt a hell of a lot working through them – and, as ever, picked up plenty of newbie lessons on top.

One such lesson was what happens if you check your AWS access keys into a public GitHub repo. It turns out you get the attention of AWS, and to a lesser extent GitHub, pretty damn quickly. A very impressive response, particularly as I initially had no idea what I had done.

As well as another lesson in just how easily an idiot can introduce a vulnerability, I had to figure out how to do the following to get back onto an even keel:

  • rotate my IAM access keys
  • remove the offending commits from the public repo (admittedly not strictly necessary once the keys had been rotated) – both sketched below
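
The sketch in question, for the curious. This is an outline only rather than the exact commands I ran – the IAM user name, key ID, repo URL and file name are all made up, and I am assuming the BFG Repo-Cleaner is installed (git filter-branch would do the same job):

# 1. Rotate the leaked IAM access key (user name and key ID are hypothetical)
aws iam create-access-key --user-name terraform-user
aws iam update-access-key --user-name terraform-user --access-key-id AKIAOLDKEY --status Inactive
aws iam delete-access-key --user-name terraform-user --access-key-id AKIAOLDKEY

# 2. Scrub the offending file from the repo history and push the rewritten history
git clone --mirror https://github.com/example/my-terraform-repo.git
bfg --delete-files terraform.tfvars my-terraform-repo.git
cd my-terraform-repo.git
git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push

# 3. Make it harder to repeat the mistake - keep state and secrets out of Git
cat >> .gitignore <<'EOF'
*.tfstate
*.tfstate.backup
*.tfvars
.terraform/
EOF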

Big hugs to AWS for their response to this issue.

As the Terraform tutorial makes extensive use of Git, this has also been a great way to reinforce my Git skills.

I have realised I am in a space where I have learnt enough to be bold but not enough to avoid doing dangerous things. As with my Docker API faux pas, I am grateful I am doing this on my personal AWS account. Like Luke Skywalker in Empire Strikes Back taking on Vader before finishing his training.  Handy (haha) but ultimately doomed.

Weekly Webinar

I have not been great at sticking to this goal, but a colleague referred me to this – it is particularly relevant to some recent work challenges.

Back on SRE

Reading-wise, the same colleague recommended the SRE book as a capacity modelling resource. He didn't know I already had a copy. A perfect opportunity to jump forward a few chapters and read about Intent-Based Planning.

Black Friday

When trying to find Ansible learning materials that didn't use Vagrant, I came across Udemy, which had some Black Friday deals on. I have purchased Kubernetes, Ansible and Python courses.

My Picks

Blade Runner 2049. Just because. I probably love it for all the reasons others don't.

Curb Your Enthusiasm. Makes Mondays worthwhile. Will miss it when it is done.

Noel Gallagher – Who Built the Moon?. I adore this. And I am not a slavish follower of all things Oasis either. I took my copy of Standing on the Shoulders of Giants back to the shop on the grounds it was pants.

What next?

Update from last post.  I have just finished this …

… and this …

The former was wonderful; I should have dug into it much more years ago. I am going to try out the approach of checking absolutely everything in from this point.

The latter was not so good, but it did demystify Mongo for me. It made me realise that demystifying a subject is a great way to get started quickly. Mental note to purchase and read this when it becomes available.

Reading-wise, I have finished the Bowie book (yeah, I know that doesn't really count, but it has cleared my reading backlog) and am 25% through Continuous Delivery.

So what next? In my previous post, I said Kubernetes. I may still do that, but I have enticing Puppet, Terraform and Logstash books to look at. And there is still the small matter of The Art of Monitoring to return to. The key thing is to do something with a hands-on element to keep the learning momentum going. That, alongside something like Continuous Delivery, is a good mix.

chmod 600 and helping Bitcoin miners

I have just completed my second pass through The Docker Book.  Running through it again was a good decision.  I flew through it this time and surprised myself at how much I have learnt.

I even learnt what happens when you run the command ‘chmod 600’ on its own (a cut-and-paste error). Even that turned out to be a positive learning experience, as it pushed me down the route (or root, haha) of using AWS snapshots and volumes for real to resolve the issue.
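
For the record, the recovery went roughly along these lines. This is a sketch only – the instance, volume and device IDs are made up, the device names assume an Ubuntu HVM instance, and the chmod at the end stands in for whatever the stray command actually clobbered:

# take a safety-net snapshot, then move the damaged volume to a rescue instance
aws ec2 create-snapshot --volume-id vol-0damaged --description "before permissions repair"
aws ec2 stop-instances --instance-ids i-0damaged
aws ec2 detach-volume --volume-id vol-0damaged
aws ec2 attach-volume --volume-id vol-0damaged --instance-id i-0rescue --device /dev/sdf

# on the rescue instance: mount it, put the permissions back, unmount
sudo mount /dev/xvdf1 /mnt
sudo chmod 755 /mnt/usr/bin        # illustrative - fix whatever was actually broken
sudo umount /mnt

# put the volume back as the root device and restart the original instance
aws ec2 detach-volume --volume-id vol-0damaged
aws ec2 attach-volume --volume-id vol-0damaged --instance-id i-0damaged --device /dev/sda1
aws ec2 start-instances --instance-ids i-0damaged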

This time around I noticed the following text which escaped my attention first time:

The significance of this was not lost on me. First time around I completed the Docker API chapter and, being short of time, left the “authenticate your API” chapter until a few days later. When I came back to it, imagine my surprise at seeing Bitcoin-mining containers happily whirring away on my host.

Both of these examples brought home that learning from your mistakes is a powerful tool, particularly as I am using my own AWS hosts. Clearly either mistake would have been disastrous in a real-life production context. This time around I was very thorough about only opening up as much access as was needed, rather than having everything open to the world.
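
To illustrate what "only as much access as was needed" means in practice, the fix is roughly the one the book's TLS chapter describes: only accept API connections that present a certificate signed by your CA, and only open the API port to addresses you trust. A sketch, assuming the certificates have already been generated, with made-up paths, hostnames and security group IDs:

# daemon side: require TLS client certificates on the API port
dockerd --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

# client side: present a certificate signed by the same CA
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://docker.example.com:2376 info

# belt and braces: only open the port to a known address in the security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2376 --cidr 203.0.113.10/32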

So what next?  I had been planning to Dockerise Riemann for the Art of Monitoring book.  I will get to that, but feel the need to branch out a bit even if Docker still plays a part.  Current thinking is to study and play with:

  • MongoDB
  • Git (I am embarrassed by how little I use it)
  • Kubernetes (really excited about this)

I am also going to try and organise my reading a little so am planning to tackle the following:

  • The DevOps Handbook
  • Sam Newman’s Microservices
  • Google SRE
  • The Lean Enterprise

Some choices there are influenced by a possible change of role – which will be a good step towards something more DevOpsy (sorry) – for which I am going to have to know my stuff.  I hope re-reading the first two will be as rewarding as a second pass through The Docker Book.

To further assist my mission, I have set myself the objective of watching a webinar each week. I am amazed at how much great material is out there.

My Picks

Finally, I liked Food Fight's “Picks” so much that I am going to borrow the idea. My first pick, in homage to Food Fight, is one of their very own podcasts, on Netflix OSS. It is a few years old (2013) but I enjoyed it immensely. Find it here.

A forest in a bottle in a spaceship in a maze

The title of this post is one of my favourite quotes from one of my favourite episodes of my favourite shows.  See the foot of this post for more info.

I was reminded of this quote whilst tackling chapter five of The Docker Book. I cannot recall finishing this section last time around, as I was then running Docker on my Mac rather than on an EC2 instance running Ubuntu as I am now, and I ran into limitations of the Mac Docker implementation.

When I had completed the section I had (deep breath), Jenkins running in a Docker container on an EC2 instance, creating Docker containers to run Ruby apps.  I connected to the EC2 instance from iTerm running on my Mac.
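
For flavour, the heart of that setup boils down to something like the command below. This is a simplified sketch rather than the book's exact recipe – the book builds its own Jenkins image and uses Docker-in-Docker, whereas this lazier variant just hands the official Jenkins image the host's Docker socket so build jobs can launch sibling containers:

# Jenkins in a container, able to drive the host's Docker daemon
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts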

I am particularly proud of completing this one, as I ran into a number of issues which I logically debugged and fixed. I learnt a lot as I did so, but even better, I was using tools, techniques, knowledge and logic that would have been beyond me first time around.

What I am not particularly proud of is that I managed to lose the Dockerfile that had the issue, so I could not compare it to the Dockerfile that ultimately worked – an opportunity lost to learn something else. However, I know the area where the issue occurred, so all is not lost.

If you want to know more about forests, bottles, rockets and mazes, visit here.


Docker vs Kubernetes vs Mojo

I have been making some good progress on my second run through of The Docker Book. I am definitely learning different things this time around. I have just completed the chapter on using Docker to make the testing of a web page easier – spinning up containers with different web technologies to test a simple page. I am now running through the chapter on doing something similar with a web application.
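
The gist of that web page chapter, roughly: mount the page into an off-the-shelf web server container, check it serves what you expect, then throw the container away. Something along these lines (the image, paths and grep string are illustrative rather than the book's exact example):

# serve a local test page from an Nginx container, poke it, then bin it
docker run -d --name testsite -p 8080:80 \
  -v $PWD/website:/usr/share/nginx/html:ro nginx
curl -s http://localhost:8080/ | grep -i "hello"
docker rm -f testsite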

Confession time: I lost a little motivation over the past couple of weeks. I am not sure why – possibly a bit of tiredness, work being tough and everyone going back to the school routine. It feels like my mojo has come back a bit in recent days though.

Despite my fun with Docker, more than one person I know has been hinting that Kubernetes is “winning the war”. In general, I think it would be a good idea to get stuck into a few specific technology areas next. For a while I have been planning to run through MongoDB tutorials, and my Git knowledge is not great so that needs some TLC too. I will add Kubernetes to the list, although there do not seem to be as many decent books and tutorials as there are for Docker.

Finally, I have produced the following Dockerfile whilst running through basic Docker commands.

# comment
# practising Docker commands
FROM ubuntu:16.04
MAINTAINER richardx14 "richard@blueharvest.co.uk"
#RUN apt-get update; apt-get install -y nginx
#RUN echo 'container!' > /var/www/html/index.html
#EXPOSE 80
#
# CMD when container has been launched. If you use a command after docker run, it overrides this
CMD ["/bin/bash"]
#
# ENTRYPOINT - any params used after docker run get passed as params to the ENTRYPOINT. Similar to CMD
# WORKDIR - cd
WORKDIR /var/log
#
# ENV env variables. Persisted into containers that are launched from image
ENV RICHARD richard
#
# USER account, UID, group that containers run from the image will be run by
#
# VOLUME - adds volumes to containers created from the image. Allows access to the container's volumes
VOLUME ["/var/log"]
#
# ADD adds from build env into image
ADD testaddfile /var/log/testaddfile
#
# COPY adds files without decompression (unlike ADD). Will create missing directories
COPY testaddfile /var/log/copyoftestaddfile
COPY testaddfile /test/add/file/testaddfile
#
# LABEL - add metadata
LABEL name="Richard's test label"
#
# ARG - pass build time variables, can set default. use --build-arg build=buildnumber
ARG build
ARG builduser=user
#
# SHELL can override default shell
#
# HEALTHCHECK
HEALTHCHECK --interval=60s --timeout=1m --retries=5 CMD curl http://www.google.co.uk || exit 1
#
# ONBUILD triggered when image is used as the basis for another image
ONBUILD LABEL name="Richard's test label ONBUILD"
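
And for completeness, this is roughly how I exercise a Dockerfile like that. The image name and build number are just placeholders, and the build assumes testaddfile exists in the build context:

# build (passing the ARG values), run, then inspect the labels baked into the image
docker build --build-arg build=42 --build-arg builduser=richard -t richardx14/practice .
docker run -it richardx14/practice
docker inspect --format '{{ .Config.Labels }}' richardx14/practice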


Change of tack and new reading materials

My mission has hit a bump in the road. I have clearly made a pig's ear of my Riemann and Graphite exercises – I have been totally spammed by emails from one of my EC2 instances. Thousands of emails, so many that I think Google has disabled my alerting email account. That hasn't helped, as I now get spammed by the bounce-backs.

This has led me to the conclusion that I have gone too deep into the weeds. A little too much too soon.
As a result, I have decided to tear down my six instances (three for Riemann, three for Graphite – this will make more sense to those who have gone through The Art of Monitoring) and instead restart The Docker Book. It is a motivating prospect, as I will not be starting from scratch.

As per my previous blogs, I intend to get to the point where I have Docker images for Riemann and Graphite, test them, then roll out across multiple instances.

It’s not been a lot of fun copying and pasting config from one iTerm window to another.  Of course, there have been many lessons learnt from building these by hand.

With that feeling of burnout adding up, I have also decided to change my approach to reading material. I had got into the habit of reading mission-related material on every commute apart from Monday mornings. I commute Monday to Thursday, so that is seven sessions a week.

So this week I gave up and read for fun. Next week I am going to start The Goal, read it alongside David Bowie: A Life, and try to set a more sustainable pace.


Two new Art of Monitoring Riemann issues

I have been adding Riemann alerts for high CPU, memory and disk usage, and have encountered two issues that need fixing. That's the downside. The upside is that my Clojure is improving.

Anyway, error 1:

ERROR [2017-09-16 17:09:59,617] main - riemann.bin - Couldn't start
clojure.lang.Compiler$CompilerException: java.lang.RuntimeException: Invalid token: /percent_bytes-used, compiling:(/etc/riemann/riemann.config:58:52)

No idea what is going on there.  Can be hard to google this stuff.  Might be a typo somewhere.

And error 2:

WARN [2017-09-16 17:56:17,775] defaultEventExecutorGroup-2-1 - riemann.config - riemann.email$mailer$make_stream__9273$stream__9275@552dcd79 threw
com.sun.mail.util.MailConnectException: Couldn't connect to host, port: smtp.gmail.com:587, 25; timeout -1

Wonder what is going on there? The same stuff works on the other two instances. Another argument for Dockerising the whole thing. I did wonder whether this was due to one of my nodes spamming Gmail. Tomorrow's problem. Made some progress, but now it's time for a beer.

Book Blog 1 “Release It!”

I have decided to blog about some of the books I have been reading.   Reading is a vital part of my mission.

I am not going to “review” books, I am going to comment on them and how they have related to my education.

I am about to finish Release It! – the author's exclamation mark, not mine. When the same book is referenced in a few other books you have been reading, it is probably worth a look. I think that maybe I have tackled this one a bit too early. It definitely taught me a few things, but it went into some technical depths that I am not yet equipped to deal with. It is also one of the few tech books that has made me laugh. More than once. It has helped me make the case for failing fast, timeouts, bulkheads and circuit breakers on a current assignment too.

Next up, I may continue with Site Reliability Engineering. Another option is re-reading The DevOps Handbook; I finished it in April, but given it was such an easy read, I am going to try it again to see if it offers any insights I missed last time around. I am also considering re-reading The Docker Book for similar reasons.

Continuing Riemann logging madness and Scaffolding

My full-disk and Riemann logging issues continued over the past few days, but they appear to have calmed down. Sadly, I am not sure why. I have a couple of theories though.

Firstly, after running through section 6.2 of The Art of Monitoring (checking processes are running), I pasted in new riemann.config files. I am not 100% sure, but I wonder if that corrected a previous error – all the more reason for automation/Puppet/Docker, eh?

Which brings me on to theory two. I wonder if I stopped midway between sections and needed to do further work to stop this happening. This has happened to me before with The Docker Book, when I exposed the Docker API publicly. That's a subject for another blog.

Of course, I may not have fixed this issue at all. If I have, I would like to know what the fix was. I believe the coming chapters graph disk usage.

On another note, it has occurred to me that there is probably huge value in revisiting the books I have gone through recently, now that I know more. That kind of makes my heart sink given my mission's target date. I have a continuing sense that I am learning different things from what the books intend. Still, learning in this space is surely going to be useful.

Finally, this pulled up outside my house this week.  Someone is trying to tell me something.  Insert your own Unikernel joke here.

The Medusa Touch

Over the past two weeks I have become a bit like Richard Burton in The Medusa Touch. In that film he plays a character who has visions of disasters before they happen.

It seems that I only have to read about how unwise it is to share a database between customer and reporting traffic in Sam Newman's Microservices before a slow-running reporting query creates issues for customers.

On another occasion, I read about circuit breakers and fail-fast timeouts in Release It! and almost immediately afterwards hit an issue that would have been avoidable had circuit breakers and fail-fast timeouts been in place.

And then, shortly after listening to three principles of CI on this podcast (whilst dog-walking, naturally), I ran into issues with devs checking in code whilst the build pipeline was down.

For the time being, my team have asked me to stop reading about things that can go wrong, or at least to warn them in advance.