<p>Neuronen Schmiede is primarily concerned with topics in software development. One focus is robust web applications. <a href="http://strauss.io/">strauss.io</a> · David Strauß</p>
<h1>Statechart Power: Modeling Business Processes with Statecharts</h1>
<p><em><a href="http://strauss.io/blog/2021-statechart-power-geschaftsprozesse-mit-statecharts-abbilden.html">2021-07-01</a> · David Strauß</em></p>
<p>I want to recommend statecharts for implementing business processes. Statecharts are a timeless concept that can make your work easier. Let me show you how they helped me build a web application.</p>
<p>At the end of 2020, we at edgy circle built the Now or Never Gallery. <a href="https://www.nowornever.gallery">The Now or Never Gallery is an online gallery for artists from all over the world</a>. Compared to a classic shop, there are three notable differences.</p>
<ul>
<li>The artworks are usually one of a kind. Shipping and the associated costs vary with every sale.</li>
<li>Buying an artwork actually only submits a purchase offer. The sale has to be confirmed before it is processed any further.</li>
<li>The gallery is only an intermediary and sells the artworks on behalf of the artists. After deducting the commission, the remaining sale price goes to the artists.</li>
</ul>
<p>Together with the client, we modeled the sales process for an artwork. The process is captured in the following statechart, which takes all business, legal, and technical requirements into account.</p>
<p><img alt="" src="/geschaeftsprozesse-statechart-small-2e333574.png" />
<em><a href="/geschaeftsprozesse-statechart-large-935ae20b.png">Full-size version of the diagram</a>. The states, events, and actions in this diagram are anonymized so as not to provide a 1:1 blueprint for an online gallery.</em></p>
<p>This statechart made development immensely easier. There was rarely a day on which we did not consult it. Here are the biggest benefits.</p>
<h2 id="schnellere-kommunikation-mit-weniger-missverstndnissen">Faster communication with fewer misunderstandings</h2>
<p>The diagram is an excellent basis for discussion. No more long-winded descriptions are needed to make sure everyone is talking about the same thing. In our client's words, the statechart is a map that shows us the way. He found it so valuable that he asked us to render the statechart dynamically inside the web application, which would let him see at a glance where each sale stands in the process.</p>
<h2 id="sichtbarmachen-von-fehlerfllen">Making failure cases visible</h2>
<p>A robust web application keeps working under adverse conditions. That takes more than bug-free source code; handling incidents outside the application correctly is just as important. With the statechart in hand, this becomes easier: every state is an invitation to think about what can go wrong there and how to react.</p>
<p>A few examples from this project:</p>
<ul>
<li>No timely response to the purchase offer from the artist.</li>
<li>The shipping label for the DHL pickup is not downloaded in time.</li>
<li>The artwork arrives at the buyer damaged.</li>
<li>The payout via Stripe fails.</li>
</ul>
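As a sketch of this idea (with invented state and event names, not the gallery's actual code), each waiting state can pair its happy-path event with an explicit failure event, so the question "what can go wrong here?" is answered in the model itself:

```typescript
// Hypothetical sketch: every waiting state lists a failure event alongside
// its happy-path event, so missing error handling is visible in the model.
type State =
  | "awaiting_artist_response"
  | "awaiting_label"
  | "preparing_shipment"
  | "offer_expired"
  | "label_overdue";

type Event =
  | "ARTIST_ACCEPTED"
  | "RESPONSE_TIMEOUT" // artist never answered the purchase offer
  | "LABEL_DOWNLOADED"
  | "LABEL_TIMEOUT"; // DHL label not downloaded in time

const transitions: Record<State, Partial<Record<Event, State>>> = {
  awaiting_artist_response: {
    ARTIST_ACCEPTED: "awaiting_label",
    RESPONSE_TIMEOUT: "offer_expired",
  },
  awaiting_label: {
    LABEL_DOWNLOADED: "preparing_shipment",
    LABEL_TIMEOUT: "label_overdue",
  },
  preparing_shipment: {},
  offer_expired: {},
  label_overdue: {},
};

// Events that are not listed for the current state are simply ignored.
function transition(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}
```

Because the failure transitions live in the table, forgetting one shows up as a gap in the diagram rather than as scattered, untested error handling.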
<h2 id="schrittweise-umsetzung">Incremental implementation</h2>
<p>Various circumstances made it necessary to go live as quickly as possible. The statechart was the blueprint we followed. The client could mark on it which pieces of functionality made sense to implement in which order. At the same time, the current progress was always visible: we noted directly on the statechart what had already been implemented.</p>
<h2 id="einbinden-manueller-arbeitsschritte">Integrating manual work steps</h2>
<p>Most business processes cannot be fully automated. In this project, packing an artwork is one example. The statechart makes it easier to integrate and visualize these analog work steps in the software. The Now or Never Gallery, for instance, has a state "Waiting for package dimensions". Only once these have been submitted does the automation take over again.</p>
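A hedged sketch of how such a manual step can look in code (the names and the guard are invented for illustration): the manual task becomes an ordinary state that only the human-triggered event can leave.

```typescript
// Hypothetical sketch: a manual step is just a state waiting for one event.
type SaleState = "waiting_for_package_dimensions" | "booking_shipment";

interface Dimensions {
  lengthCm: number;
  widthCm: number;
  heightCm: number;
}

function onDimensionsProvided(state: SaleState, dims: Dimensions): SaleState {
  // The event only counts in the state that is actually waiting for it.
  if (state !== "waiting_for_package_dimensions") return state;
  // Guard: the manual work must really be done before automation resumes.
  if (dims.lengthCm <= 0 || dims.widthCm <= 0 || dims.heightCm <= 0) {
    return state;
  }
  return "booking_shipment"; // automation takes over again
}
```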
<h2 id="gemeinsames-verstndnis">Shared understanding</h2>
<p>Everyone involved can see at a glance how the process works. Nobody needs to look at the source code to understand the timeline, which e-mails exist, when they are sent, and what happens when a Stripe payment fails. The statechart communicates all of that and much more.</p>
<h2 id="weniger-quellcode">Less source code</h2>
<p>Before an instruction can be executed, numerous checks are required. These are scattered across the most diverse places and have to be tested accordingly. Many of these checks are based on the current state. In a statechart, that knowledge is recorded explicitly, which spares us countless checks in the application code and the accompanying tests.</p>
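To illustrate (a simplified sketch, not the project's code): without an explicit state, every command re-derives validity from a combination of flags that each call site must repeat and test; with a state field, a single comparison replaces that combination.

```typescript
// Flag-based: each call site repeats (and must test) this combination.
interface SaleFlags {
  paid: boolean;
  refunded: boolean;
  shipped: boolean;
}

function canRefundFlags(s: SaleFlags): boolean {
  return s.paid && !s.refunded && !s.shipped;
}

// State-based: the statechart already guarantees the valid combinations,
// so the check collapses to one comparison.
type SaleState = "awaiting_payment" | "paid" | "shipped" | "refunded";

function canRefund(state: SaleState): boolean {
  return state === "paid";
}
```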
<p>For us, using statecharts was a complete success. It improved our work on the technical, planning, and communication level. I can only recommend considering statecharts for business processes.</p>
<p>If you want to find out more about statecharts, I recommend <a href="https://statecharts.dev">https://statecharts.dev</a>. Frontend UI components are the best place to start. Unlike business processes in the backend, you can skip persisting the current state there, and UI components are isolated, which makes them an ideal place to gain experience with this approach. Take a look at <a href="https://xstate.js.org/">XState</a> if the topic interests you.</p>
<h1>The Best-Before Date of Software</h1>
<p><em><a href="http://strauss.io/blog/2021-mindesthaltbarkeitsdatum-von-software.html">2021-05-18</a> · David Strauß</em></p>
<p>What shelf life does software have, and how is it signaled? In the food industry, products are labeled with a best-before date. That creates a shared expectation and gives a certain guarantee: handled properly, the product is fine until that point in time.</p>
<p>Software products also have a limited lifespan. As time goes on, the risk grows that they stop working. That happens without any external interference, even though the source code itself does not decay. The cause of the limited lifespan lies in the software's dependencies. They have a life of their own and change on a different timeline, and every change has a chance of being fatal for the dependent software.</p>
<p>The first step is to identify these explicit and implicit dependencies. Rather than listing every potential dependency individually, they are grouped below.</p>
<h2 id="risikogruppen-von-abhngigkeiten">Risk groups of dependencies</h2>
<h3 id="infrastruktur"><strong>Infrastructure</strong></h3>
<p>A classic desktop application is compiled for a specific CPU architecture. When the Apple M1 devices were launched, existing desktop applications suddenly stopped working because of the new CPU architecture. The same applies to web applications, with the added twist that they usually run on someone else's machines. That machine can disappear at any time: the hosting provider shuts down, changes its focus, adjusts its product lineup, or loses its entire data center in a fire. The fact is that even the foundation applications stand on is unstable, at least when viewed over a long time span.</p>
<h3 id="programmiersprache"><strong>Programming language</strong></h3>
<p>The programming language itself can also be the reason something stops working overnight. An interpreted language needs a working runtime environment to execute anything, and your program often works with only a small range of versions. As soon as none of those is available anymore, everything grinds to a halt. PHP is a vivid example: countless web applications had to be adapted when hosting providers removed outdated PHP versions from their offerings.</p>
<h3 id="fremdcode"><strong>Third-party code</strong></h3>
<p>Most applications use libraries, frameworks, and third-party software components. These dependencies evolve continuously. An unlucky combination can mean that your own application can no longer be installed without the dependencies being updated and your code adapted. This is especially true of large third-party codebases that have dependencies of their own.</p>
<h3 id="externe-dienste"><strong>External services</strong></h3>
<p>An application often relies on external services to do its job, communicating with other applications via their interfaces. A web application might send e-mails via Postmark and process payments via Stripe. These interfaces change as well. At some point they are no longer backwards compatible, and certain functions of the application stop working correctly.</p>
<h3 id="anwendungsfeld"><strong>Field of application</strong></h3>
<p>Last but not least, the software's field of application can itself change. As soon as a company changes its processes, the web application that automates them no longer fits. From the company's perspective, the software has lost value and may no longer be usable.</p>
<h2 id="risikoreduzierung">Risk reduction</h2>
<p>As we have seen, software exists in a harsh environment. The forces acting on it are beyond our control. Deliberate decisions during development can, however, considerably extend its lifespan.</p>
<ul>
<li>
<p>Choose standard hardware for servers and computers. Avoid specialized platforms that are available from only one vendor. There are too many reasons why a switch will be necessary one day, and without vendor lock-in it is much easier to survive.</p>
</li>
<li>
<p>Use a boring, established programming language. Ideally it produces a single executable binary.</p>
</li>
<li>
<p>Be conservative in your choice of frameworks, libraries, and other components. Using no third-party code at all is unrealistic, but the optimal number is closer to 0 than to 100.</p>
</li>
<li>
<p>The same principle applies to external services: use as few as possible. If the integration goes through your own adapter, fewer changes are needed when switching providers.</p>
</li>
</ul>
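As an illustration of the adapter idea from the last point (a minimal sketch with invented names): the application only talks to its own interface, so switching the e-mail provider touches a single implementation instead of every call site.

```typescript
// Hypothetical adapter boundary: the application depends on MailAdapter,
// never on a concrete provider such as Postmark.
interface MailAdapter {
  send(to: string, subject: string, body: string): void;
}

// Test double; a real implementation would wrap the provider's API client.
class InMemoryMailAdapter implements MailAdapter {
  readonly sent: Array<{ to: string; subject: string }> = [];
  send(to: string, subject: string, _body: string): void {
    this.sent.push({ to, subject });
  }
}

// Application code is written once against the interface.
function sendPaymentReminder(mail: MailAdapter, to: string): void {
  mail.send(to, "Payment reminder", "Your invoice is due.");
}
```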
<p>The question of the best-before date of software has no easy answer. It requires reflecting on the entire development process. Without that reflection, the result is usually a fragile application that is vulnerable to changes in its environment.</p>
<p>Thinking about how long an application is guaranteed to keep running is the first step toward changing that.</p>
<h1>Statechart Power: Fixing a Distributed Event Sourced System</h1>
<p><em><a href="http://strauss.io/blog/2021-statechart-power-fixing-a-distributed-event-sourced-system.html">2021-05-08</a> · David Strauß</em></p>
<p>Don't waste your time learning a new technology or framework just because it is shiny and new. It will be replaced by the next thing that comes along, and you will start from scratch. Instead, invest in learning fundamentals and concepts; they have already stood the test of time. If something has been used for 10 years, it is likely to be useful for another 10. Statecharts are one such thing. They helped me fix a nasty bug in a distributed event sourced system. If you are familiar with state machines, you can think of statecharts as state machines on steroids. In essence, they visualize your software in a certain way. This allows you to reason about, debug, and think through your system without looking at source code. You can even integrate them directly into your project, but that is only the icing on the cake.</p>
<p>Let me give you context before we look at the bug. The system in question is an <a href="https://www.dartboard.io">online darts tracker</a>. It allows people to play darts against each other from their home instead of meeting up in a pub. Each game of darts is stored as a sequence of events. An event is something that happened. <code>EnteredDart</code> and <code>TurnHandedOverToNextPlayer</code> are two examples. The exact state of the game is reconstructed by looping over all events. This concept is known as Event Sourcing.</p>
<p>To make the playing experience as smooth as possible, the logic runs directly client-side. The distributed clients synchronize with each other via a central server. When a player does something in the game, the client generates new events. At this point the local event history differs from the remote history on the server. To synchronize, the client makes an HTTP request to append the new events to the server's event history.</p>
<p>This part of the system is called Appender. It looks and works like this.</p>
<p><img alt="" src="/appender-1-fab95e01.png" /></p>
<p>Don't worry if the image confuses you. The bubbles are the states the Appender can be in. The arrows tell you how it transitions from one state to another. Let's focus on the state <code>idling</code>. When the Appender is in this state and receives <code>GAME.NEW_HISTORY</code> it transitions to <code>appending</code>. There it makes the HTTP request to append the new local history. As soon as the HTTP request succeeds with <code>HTTP.OK</code> it transitions back to <code>idling</code> where it waits for a new history.</p>
<p>The nasty bug I mentioned at the start is connected with appending new events to the remote history on the server. A player got an error message that the client was unable to append its local history because the server's history was not what it expected. Each HTTP request includes an expected version in the payload alongside the actual events. This prevents the server from overwriting and losing someone else's history in case there is a bug in the game logic.</p>
<p>But that was not the case. The problem was the Appender itself. Looking at the statechart, you can see there is a transition from <code>appending</code> to <code>waiting_for_retry</code> due to <code>HTTP.TIMEOUT</code> after 10 seconds. I added this to protect against slow HTTP requests. My reasoning was that after 10 seconds the Appender should simply retry a few times. I did not realize that an HTTP request could successfully reach the server but then take a long time to return the response.</p>
<p>That is exactly what happened. For some reason one of the many HTTP requests took 24 seconds to receive the response instead of mere milliseconds. After 10 seconds the Appender triggered <code>HTTP.TIMEOUT</code> and tried to append again. But the server already received the new history and returned an error indicating a version conflict.</p>
<p>The bug is easily fixed by removing the <code>HTTP.TIMEOUT</code> after 10 seconds.</p>
<p><img alt="" src="/appender-2-e3764b09.png" /></p>
<p>Slow responses no longer crash the client because no <code>HTTP.TIMEOUT</code> is happening. But slow responses are still occurring. It would be nice to show the player a notice that synchronizing is slow. Adding this feature demonstrates perfectly the value of statecharts. By turning <code>appending</code> into a parallel state we can do two independent things. The substate <code>http.appending</code> appends the events with the known HTTP request. The other substate contains a 10 second timeout. But compared to the initial version this timeout does not cancel the HTTP request. Instead, it transitions from <code>monitor.waiting_on_success</code> to <code>monitor.waiting_on_success_with_notice</code>. Whenever the statechart is in <code>monitor.waiting_on_success_with_notice</code> it makes the UI show a notice regarding the slow synchronization. As soon as the HTTP request succeeds the <code>http.appending</code> state is left and the UI will no longer show the notice.</p>
<p><img alt="" src="/appender-3-77d2998b.png" /></p>
<p>This image looks very different from the previous ones. You might think this would mean a near rewrite of the logic. But that is not the case and demonstrates the power of statecharts. They allow you to make significant changes to your system while limiting the amount of source code that needs to be touched.</p>
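A rough sketch of the parallel state in plain TypeScript (the real project uses XState; the substate names mirror the diagram): the monitor region only toggles a notice and never touches the HTTP request in the other region.

```typescript
// Sketch of the parallel `appending` state: two independent regions.
interface Appending {
  http: "appending"; // the HTTP request region, untouched by the monitor
  monitor: "waiting_on_success" | "waiting_on_success_with_notice";
}

// The 10 second timeout only moves the monitor region; it cancels nothing.
function afterTenSeconds(s: Appending): Appending {
  return { ...s, monitor: "waiting_on_success_with_notice" };
}

// The UI derives the slow-sync notice purely from the monitor substate.
function showSlowSyncNotice(s: Appending): boolean {
  return s.monitor === "waiting_on_success_with_notice";
}
```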
<p>If you want to learn more about statecharts I highly recommend <a href="https://statecharts.dev">https://statecharts.dev</a>. For TypeScript and JavaScript projects <a href="https://xstate.js.org">XState</a> is the best library to integrate statecharts directly into the source code.</p>
<h1>Statechart Power: Modeling Robustness with State Machines</h1>
<p><em><a href="http://strauss.io/blog/2021-statechart-power-robustheit-mit-state-machines-modellieren.html">2021-03-21</a> · David Strauß</em></p>
<p>State machines are a good way to illustrate how robust a chosen implementation is. As an example, let's look at a small part of the <a href="https://www.nowornever.gallery">Now or Never Gallery</a>. After an artist has received a purchase request for one of her works, she can decide whether she wants to sell or not. We will now look at how differently the rejection can be modeled.</p>
<p>When a purchase request is declined, three things have to happen.</p>
<ul>
<li>Modify the data in the database.</li>
<li>Refund the purchase amount via Stripe.</li>
<li>Inform the customer by e-mail.</li>
</ul>
<h2 id="variante-1">Variant 1</h2>
<p><img alt="" src="/robustheit-mit-statemachines-1-260f18b2.png" /></p>
<p>The most naive variant implicitly assumes that everything always works. On the <code>decline</code> event, the database is written and two asynchronous tasks are scheduled. As the statechart shows, the state immediately changes to the final state. From our application's perspective, the purchase request is declined.</p>
<p>Even if the e-mail is never sent and the API call never reaches Stripe, our system believes everything is fine. The application's users see the same picture of an intact world.</p>
<p>But that is an illusion. In a failure case, manual intervention is required: the refund has to be issued by hand in the Stripe dashboard, and the customer has to be informed personally.</p>
<h2 id="variante-2">Variant 2</h2>
<p><img alt="" src="/robustheit-mit-statemachines-2-3485c6ac.png" /></p>
<p>This variant accepts that the refund can fail. As in the first variant, the <code>decline</code> event sets the world in motion. The difference is the transition into the <code>refunding</code> state, which signals to the system, and ideally to the users, that the process is not yet complete.</p>
<p>Only once the refund has been carried out successfully does the <code>refund_done</code> event signal that the process is complete. At least in the modeled form: a failure while sending the e-mail still has no effect on the system and remains invisible to the people involved.</p>
<p>At this point, someone might argue that this is not a problem since it is "only" e-mails. I cannot follow that reasoning. If it does not matter whether a transactional e-mail gets delivered, why are you sending it at all?</p>
<h2 id="variante-3">Variant 3</h2>
<p><img alt="" src="/robustheit-mit-statemachines-3-240aefc5.png" /></p>
<p>This state machine shows the full form. Instead of scheduling the asynchronous e-mail task right at the start, that happens only after the refund has succeeded. Furthermore, the <code>refund_done</code> event only transitions into the <code>informing_customer</code> state. As with the refund, this signals to everyone involved that the e-mail has not yet been delivered successfully. Only once that has happened as well, modeled as the <code>email_delivered</code> event, do we reach the final state and are done.</p>
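Variant 3 can be sketched as a plain transition function (simplified; the real implementation would also schedule the refund and e-mail tasks as side effects on entering each state):

```typescript
// Sketch of variant 3: each effect is only scheduled once the previous one
// has verifiably succeeded, and the state names say what is still pending.
type DeclineState =
  | "purchase_requested"
  | "refunding"
  | "informing_customer"
  | "declined";

type DeclineEvent = "decline" | "refund_done" | "email_delivered";

function next(state: DeclineState, event: DeclineEvent): DeclineState {
  if (state === "purchase_requested" && event === "decline") return "refunding";
  if (state === "refunding" && event === "refund_done") return "informing_customer";
  if (state === "informing_customer" && event === "email_delivered") return "declined";
  return state; // out-of-order or unknown events change nothing
}
```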
<h1>Reproducible Development Environments with nix-shell</h1>
<p><em><a href="http://strauss.io/blog/2020-reproduzierbare-entwicklungsumgebung-mit-nix-shell.html">2020-11-30</a> · David Strauß</em></p>
<p>If you have ever spent time and nerves setting up the development environment for a project on a new computer, <strong>nix-shell</strong> might be the right tool for you. I have been using it since early 2020 and it has already saved me a lot of time and nerves. It lets me start a per-project sandbox in which the right version of Node, Ruby, PostgreSQL, PHP, or MariaDB is installed.</p>
<p>Concretely, nix-shell has rescued me from the following situations, which I could not handle with my previous approach.</p>
<ul>
<li>Getting a project with an old Ruby version running again after an automatic OpenSSL update by Homebrew had broken it.</li>
<li>Installing old Node versions to keep developing dusty Ember.js projects.</li>
<li>Repairing gnuplot graphs that contained artifacts after a macOS update.</li>
<li>Setting up a LAMP stack on macOS without having to install XAMPP or the like.</li>
<li>Using an Ansible version that could no longer be installed via Homebrew.</li>
<li>Having an identical development environment on macOS and Ubuntu.</li>
</ul>
<h2 id="tipps">Tips</h2>
<ul>
<li>Nix is a package manager, an operating system, and a programming language. I only use the package manager and, out of necessity, the programming language to define my development environments.</li>
<li>The definition of the development environment goes into the file <code>shell.nix</code> in the project directory.</li>
<li><a href="https://nixos.org/download.html">Installation instructions</a>.</li>
<li>To free up disk space, you can run garbage collection with <code>nix-collect-garbage</code>.</li>
<li>You should pin the version of <code>nixpkgs</code> you use as precisely as possible. Otherwise you will get different packages installed a year from now than you do today. Most tutorials make this mistake, and at the end of the day the result is once again not reproducible.</li>
</ul>
<h2 id="workflow">Workflow</h2>
<ol>
<li>Open a terminal</li>
<li>Change into the project directory</li>
<li>Start the sandbox with <code>nix-shell</code></li>
<li>Develop</li>
</ol>
<h2 id="beispiele">Examples</h2>
<p>A small project with the latest Ruby and cURL versions.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-20.03";
}) {}
}:
pkgs.mkShell {
buildInputs = with pkgs; [
curl
ruby
];
shellHook = ''
mkdir -p .nix-gems
export GEM_HOME=$PWD/.nix-gems
export GEM_PATH=$GEM_HOME
export PGHOST="$PWD/.local-data/postgresql/sockets"
unset SSL_CERT_FILE
unset NIX_SSL_CERT_FILE
'';
}
</code></pre>
</figure>
<p>Managing my finances with hledger and gnuplot.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-20.03";
}) {}
}:
pkgs.mkShell {
buildInputs = with pkgs; [
hledger
gnuplot
gnumeric
];
shellHook = ''
export FONTCONFIG_FILE=${pkgs.fontconfig.out}/etc/fonts/fonts.conf
'';
}
</code></pre>
</figure>
<p>This project needs an old Ruby version that is otherwise no longer installable.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-20.03";
}) {},
oldpkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-19.03";
}) {},
ruby ? oldpkgs.ruby_2_3,
bundler ? oldpkgs.bundler.override { inherit ruby; }
}:
pkgs.mkShell {
buildInputs = with pkgs; [
ruby
bundler
which
git
postgresql_9_6
parallel
];
shellHook = ''
mkdir -p .local-data/gems
export GEM_HOME=$PWD/.local-data/gems
export GEM_PATH=$GEM_HOME
export PGHOST="$PWD/.local-data/postgresql/sockets"
'';
}
</code></pre>
</figure>
<p>Infrastructure managed with Ansible.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-20.03";
}) {}
}:
pkgs.mkShell {
buildInputs = with pkgs; [
ansible_2_9
git-crypt
sshpass
];
}
</code></pre>
</figure>
<p>Another project with a specific Ruby version.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-20.03";
}) {},
oldpkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-19.03";
}) {}
}:
pkgs.mkShell {
buildInputs = with pkgs; [
oldpkgs.ruby_2_5
oldpkgs.bundler
tychus
git
postgresql_9_6
parallel
];
shellHook = ''
mkdir -p .nix-gems
export GEM_HOME=$PWD/.nix-gems
export GEM_PATH=$GEM_HOME
export PGHOST="$PWD/.local-data/postgresql/sockets"
unset SSL_CERT_FILE
unset NIX_SSL_CERT_FILE
'';
}
</code></pre>
</figure>
<p>Ember.js project with Ember CLI.</p>
<figure>
<pre class="highlight plaintext"><code>{ pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-20.03";
}) {} }:
pkgs.mkShell {
buildInputs = with pkgs; [
yarn
nodejs-12_x
git
];
shellHook = ''
'';
}
</code></pre>
</figure>
<p>Project with a very old Ember CLI version and other dependencies.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs-channels;
ref = "nixos-19.03";
}) {}
}:
pkgs.mkShell {
buildInputs = with pkgs; [
nodejs-8_x
git
rsync
openssh
python
];
shellHook = ''
'';
}
</code></pre>
</figure>
<p>PHP 7.3, MariaDB, and Caddy as the web server, plus an increased maximum upload size for PHP.</p>
<figure>
<pre class="highlight plaintext"><code>{
pkgs ? import (fetchGit {
url = https://github.com/NixOS/nixpkgs;
ref = "nixos-20.09";
}) {},
php73 ? pkgs.php73.buildEnv {
extraConfig = "upload_max_filesize = 20M";
}
}:
pkgs.mkShell {
buildInputs = with pkgs; [
php73
php73Packages.composer
php73Packages.php-cs-fixer
mariadb
parallel
caddy
];
shellHook = ''
'';
}
</code></pre>
</figure>
<h1>Resilient Background Jobs in Web Applications with PostgreSQL</h1>
<p><em><a href="http://strauss.io/blog/2020-widerstandsfahige-background-jobs-in-webanwendungen-mit-postgresql.html">2020-07-03</a> · David Strauß</em></p>
<p>Sidekiq, a popular Ruby solution for background processing, is susceptible to data loss when integrated the way its manual describes. A large share of Ruby web applications use Sidekiq for background processing. <a href="https://www.edgycircle.com/">So do the applications my company edgy circle from Salzburg develops</a>. Until recently I never questioned this and integrated Sidekiq as recommended.</p>
<p>Brandur Leach made me aware <a href="https://brandur.org/job-drain">that my use of Sidekiq was anything but robust</a>. Offloading tasks into the background is an essential part of web applications, so Sidekiq is a fundamental building block of my work.</p>
<p>It was immediately clear to me that something had to change. Such a shaky building block is incompatible with my standards for robustness and stability. What does a better solution look like?</p>
<h2 id="kontext-und-anforderungen-an-hintergrundverarbeitung">Context and requirements for background processing</h2>
<p>Without context, the following requirements carry no weight and leave too much room for interpretation. So a quick look at my context: as already mentioned, we build web applications and digital services. Our clients are companies from the DACH region. No Silicon Valley scaling at all costs. In our environment, the point is to model business processes and online services, and correctness and stability come first.</p>
<p>In a software system, there are tasks that need to be done without any direct user action, such as the daily automatic sending of payment reminders.</p>
<p>In addition, there are situations where compute-intensive instructions triggered by a user interaction are moved into the background. This makes it possible to present the user with a fast response without being slowed down by the heavy task. A classic example is the notoriously slow and unreliable sending of e-mails. What are the requirements for such asynchronous instructions in a system?</p>
<ul>
<li>As the name suggests, the asynchronous instructions must be processed in the background.</li>
<li>If processing fails, the instruction must not be lost. Instead, processing must be retried in a controlled way until it succeeds or an operator has to be informed.</li>
<li>For tasks that lie in the future, it must already be possible in the present to schedule processing for later.</li>
<li>It must be guaranteed that an instruction is executed at least once.</li>
<li>Instructions must be processed within a maximum time. Whether the latency is satisfactory naturally depends on the total number of pending instructions and the available resources. Under conditions typical for us, the solution should process 170 instructions per second; that way a newsletter with 100,000 recipients can be sent within 10 minutes.</li>
</ul>
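The 170-instructions-per-second figure follows directly from the newsletter example:

```typescript
// 100,000 recipients within a 10 minute window.
const recipients = 100_000;
const windowSeconds = 10 * 60;

// Roughly 166.7 instructions per second; 170/s gives a little headroom.
const requiredPerSecond = recipients / windowSeconds;
```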
<h2 id="probleme-von-sidekiq">Problems with Sidekiq</h2>
<p>Sidekiq works well and meets almost all the requirements. But it lacks one essential property: no data loss.</p>
<p>Sidekiq stores the asynchronous instructions in Redis. With the recommended usage, this can lead to data loss, because a PostgreSQL transaction has no effect on the write to Redis. The following situations can therefore occur.</p>
<figure>
<pre class="highlight plaintext"><code>with_postgresql_transaction {
postgresql_write_1()
sidekiq_redis_write()
postgresql_write_2()
}
</code></pre>
</figure>
<figure>
<pre class="highlight plaintext"><code>with_postgresql_transaction {
postgresql_write_1()
postgresql_write_2()
}
sidekiq_redis_write()
</code></pre>
</figure>
<p>In the first pseudocode example, the Sidekiq call sits inside a transaction. Here, <code>postgresql_write_2()</code> can cause the transaction to abort. In that case the data would wrongly remain stored in Redis: <code>sidekiq_redis_write()</code> is independent of the PostgreSQL transaction and is not rolled back with it.</p>
<p>The second pseudocode example shows a similar problem. Here the <code>sidekiq_redis_write()</code> call sits outside the transaction, so the opposite of the previous scenario can happen. After a successful PostgreSQL transaction, the <code>sidekiq_redis_write()</code> call can throw an error, or the entire server process can crash. In that case the data would be stored in the PostgreSQL database, but the asynchronous instruction in Redis would be lost.</p>
<p>Diese Eigenschaft von Sidekiq macht es uns unmöglich robuste Webanwendungen zu bauen.</p>
<h2 id="implementierung-ohne-datenverlust">Implementierung ohne Datenverlust</h2>
<p>Um die beschriebenen Probleme zu umgehen, dürfen die asynchronen Anweisungen nicht in Redis gespeichert werden. Stattdessen müssen sie ebenfalls in PostgreSQL abgelegt werden. Auf diese Weise gelten die transaktionellen Garantien auch für unsere asynchronen Anweisungen. In Pseudocode sieht das wie folgt aus.</p>
<figure>
<pre class="highlight plaintext"><code>with_postgresql_transaction {
postgresql_write_1()
schedule_asynchronous_task_in_postgresql()
postgresql_write_2()
}
</code></pre>
</figure>
<p>The transaction guarantees that application data and asynchronous tasks are stored or rolled back together. No data is lost under adverse conditions.</p>
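<p>In Ruby, the enqueue step could look like the following sketch. The <code>async_tasks</code> table, its columns, and the <code>exec_params</code> connection API (as in the <code>pg</code> gem) are assumptions for illustration; the post's real implementation is not shown.</p>

```ruby
require "json"

# Sketch of enqueueing a task through the same database connection that
# carries the surrounding transaction. Table name, columns and the
# `exec_params` API are assumptions.
def enqueue_task(conn, name:, payload:)
  conn.exec_params(
    "INSERT INTO async_tasks (name, payload, run_at) VALUES ($1, $2, now())",
    [name, JSON.generate(payload)]
  )
end

# Inside one transaction, application data and the task are committed
# or rolled back together:
#
#   conn.transaction do
#     conn.exec_params("INSERT INTO invoices (total) VALUES ($1)", [100])
#     enqueue_task(conn, name: "invoice_email", payload: { invoice_id: 1 })
#   end
```

<p>Because the task row lives in the same database, neither of the two failure scenarios from the pseudocode examples can leave the system in an inconsistent state.</p>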
<p>For an implementation without data loss you can fall back on a Sidekiq alternative (<a href="https://github.com/collectiveidea/delayed_job">Delayed::Job</a>, <a href="https://github.com/que-rb/que">Que</a> and <a href="https://github.com/QueueClassic/queue_classic">queue_classic</a>) that uses PostgreSQL instead of Redis as the data store.</p>
<p>With an <a href="https://brandur.org/job-drain">additional process Sidekiq can still be used</a>. The asynchronous tasks are first stored in PostgreSQL before another background process copies them to Redis.</p>
<p><strong>None of these options convinces me. The alternatives use <a href="https://brandur.org/postgres-queues">techniques that are not ideal for PostgreSQL</a>, and the Sidekiq solution brings yet another dependency on board.</strong></p>
<p>That is why we decided to implement our own solution. This puts the responsibility unmistakably into our hands and has the following advantages.</p>
<ul>
<li>
<p>Our infrastructure can do without Redis. Besides the obvious simplification, we no longer have to learn how to operate Redis correctly under production load.</p>
</li>
<li>
<p>The PostgreSQL based solution guarantees that no more asynchronous tasks get lost.</p>
</li>
<li>
<p>Without Sidekiq our applications have one big dependency less. That means faster startup times and less source code, since our own solution needs fewer features.</p>
</li>
<li>
<p>For backups it is sufficient to deal with the database. A PostgreSQL dump automatically includes all pending tasks.</p>
</li>
<li>
<p>We can use <a href="https://www.2ndquadrant.com/en/blog/what-is-select-skip-locked-for-in-postgresql-9-5/">suitable PostgreSQL techniques</a> that give us the best performance.</p>
</li>
</ul>
<p>A self-developed solution naturally does not only have advantages. The two biggest drawbacks compared to Sidekiq are the lower performance figures and the missing hands-on experience from production use. As described at the beginning, the lower performance is not a problem in our web applications. And the missing operational experience can only be made up by using the solution and learning from it.</p>
<p>The implementation consists of seven components that can each be looked at in isolation.</p>
<h3 id="datenbanktabelle"><strong>Database table</strong></h3>
<p>All asynchronous tasks that still have to be processed are stored in a database table. This table must live in the same database as the rest of the application data. Otherwise the database cannot provide transactional guarantees.</p>
<h3 id="anweisung"><strong>Task</strong></h3>
<p>So that it is clear in the future what has to be done, the intention, i.e. the task itself, must contain all necessary information. Usually that is a unique name identifying the task plus any additional data that is needed.</p>
<h3 id="geschftslogik"><strong>Business logic</strong></h3>
<p>Viewed from the outside there is a single function that represents the business logic. The task is passed as an argument to provide the business logic with the data it needs.</p>
<h3 id="ausfhrbares-programm"><strong>Executable program</strong></h3>
<p>A thin shell around the process that executes the tasks. It is started locally during development and on the server in production. It reacts to operating system signals and coordinates the process components.</p>
<h3 id="prozess-um-kontinuierlich-anweisungen-abzuarbeiten"><strong>Process that continuously works off tasks</strong></h3>
<p>Fetches tasks from the database in an endless loop and hands them to the business logic. Stops on command and makes sure the database is not overloaded with fetches.</p>
<h3 id="transaktionaler-abruf-einer-anweisung"><strong>Transactional fetch of a task</strong></h3>
<p>Loads the next task from the database with a safe mechanism and returns it. After successful processing the task is finally deleted from the database. In case of an error the task is scheduled to be retried in the future.</p>
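<p>A sketch of such a safe fetch, built on PostgreSQL's <code>SELECT … FOR UPDATE SKIP LOCKED</code>. Table and column names, and the <code>pg</code>-style connection API, are assumptions; the post's real implementation is not shown.</p>

```ruby
# Locked rows are skipped, so concurrent workers never grab the same
# task; the row is deleted only after the business logic succeeded.
FETCH_SQL = <<~SQL.freeze
  SELECT id, name, payload
  FROM async_tasks
  WHERE run_at <= now()
  ORDER BY run_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
SQL

# Returns true when a task was processed, false when the queue was empty.
def work_one_task(conn)
  processed = false
  conn.transaction do
    row = conn.exec(FETCH_SQL).first
    if row
      yield row  # hand the task to the business logic
      conn.exec_params("DELETE FROM async_tasks WHERE id = $1", [row["id"]])
      processed = true
    end
  end
  processed
end
```

<p>If the business logic raises, the transaction rolls back, the row lock is released, and the task stays queued; rescheduling it with a delay would happen in a separate step.</p>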
<h3 id="zentrale-zuordnung-von-anweisungen-zu-geschftslogik"><strong>Central mapping of tasks to business logic</strong></h3>
<p>A service that knows which business logic functions are called with which tasks. As a central point it is also well suited for cross-cutting functionality.</p>
<h2 id="abschlieende-gedanken">Closing thoughts</h2>
<p>This text is not meant as criticism of Sidekiq. I am the one who did not do his homework and used a technology without fully understanding the consequences. Since I am probably not the only one, I have tried to summarize my current state of knowledge here.</p>
<p>I deliberately do not show the concrete Ruby implementation. At the moment the source code is still embedded directly in the application and accordingly contains many domain specific concepts. Once it has been extracted into a separate library, there will be an update here.</p>
<p>Thanks to Hannah Langhagel and Christoph Edthofer for feedback and corrections.</p>
Characteristics of Quality Software Projectshttp://strauss.io/blog/2018-characteristics-of-quality-software-projects.html2018-08-07T17:21:00Z2021-03-20T17:04:17+01:00David Strauß<p>The following characteristics signal that a software project is in good shape on a technical level. The different traits improve the developer experience and help ship quality software. They are based on my experience developing and operating web-based applications.</p>
<h2 id="list-of-dependencies">01. List of dependencies</h2>
<p>Every <a href="https://www.strauss.io/blog/2018-dont-forget-that-dependencies-have-a-cost.html">dependency has a cost</a> and people working on a system should be aware of this. Therefore it's best to have a list of every dependency and its license. Naturally this includes dependencies of dependencies and everything else needed to operate the system, such as external services.</p>
<h2 id="unified-way-to-run-development-tasks">02. Unified way to run development tasks</h2>
<p>A single interface towards common development tasks like running tests, starting the development server, and creating and executing database migrations.</p>
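<p>Such an interface can be as small as a single script. A minimal sketch in Ruby; the task names and the commands behind them are illustrative assumptions, not prescribed by the post.</p>

```ruby
# bin/dev sketch: one entry point that maps task names to the commands
# behind them (command strings are illustrative assumptions).
COMMANDS = {
  "test"    => "bundle exec rspec",
  "server"  => "bundle exec puma",
  "migrate" => "bundle exec rake db:migrate"
}.freeze

def run_task(name)
  command = COMMANDS.fetch(name) do
    abort "unknown task #{name.inspect}, known tasks: #{COMMANDS.keys.join(", ")}"
  end
  system(command) || abort("task #{name} failed")
end

# Usage from a shell: ./bin/dev test
# run_task(ARGV.first)
```

<p>The point is not the tool; a Rakefile or Makefile works just as well, as long as every developer types the same short command.</p>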
<h2 id="single-command-to-start-system-in-development">03. Single command to start system in development</h2>
<p>A single command that starts the application server, background workers and any other additional processes needed to run the system in development.</p>
<h2 id="command-to-run-tests">04. Command to run tests</h2>
<p>A single command that runs and verifies all test cases of the project.</p>
<h2 id="command-to-run-only-unit-tests">05. Command to run only unit tests</h2>
<p>A single command that only runs fast unit tests that ensure a short feedback loop and reduce friction when practicing TDD. By definition this command does not run slower acceptance tests that ensure users can actually use the system.</p>
<h2 id="console">06. Console</h2>
<p>A command that starts an interactive console that allows developers to directly interact with the system's components.</p>
<h2 id="deployment">07. Deployment</h2>
<p>A single command to deploy a new version of the system and run necessary database migrations.</p>
<h2 id="code-reload-for-development">08. Code reload for development</h2>
<p>Source code changes automatically restart the development system. Instead of fancy reload strategies and long-running processes, the entire system is stopped before it’s started again.</p>
<h2 id="single-executable-for-every-separate-system-component">09. Single executable for every separate system component</h2>
<p>The application server and every other separate component of the system is a single executable so there is no difference between running it in development or production.</p>
<h2 id="customisability-of-development-setup">10. Customisability of development setup</h2>
<p>The command to start the development setup (see point 03) uses a sensible default configuration for the system. If a developer has reason to deviate from the default configuration it is done in a standardised way.</p>
<h2 id="provide-seed-data">11. Provide seed data</h2>
<p>A configuration option that starts the system with a set of curated seed data. The seed data should cover all major scenarios and use cases covered by the system. It speeds up development, makes edge cases visible and can be used for acceptance tests.</p>
<h2 id="explicit-configuration">12. Explicit Configuration</h2>
<p>Every component of the system defines its configuration explicitly as a class. Changing the configuration can no longer happen by just adding a value somewhere, it must be done deliberately. This also allows unit testing the configuration.</p>
<h2 id="configuration-validation-at-startup">13. Configuration validation at startup</h2>
<p>Runtime errors due to a missing or corrupt configuration are prevented by validating the provided configuration at startup.</p>
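<p>Points 12 and 13 combine naturally: the explicit configuration class is also the place that validates at startup. A sketch in Ruby; the attribute names are illustrative assumptions.</p>

```ruby
# Sketch of an explicit configuration class validated at startup.
# Attribute names are illustrative assumptions.
class AppConfig
  ATTRS = %i[database_url smtp_host port].freeze
  attr_reader(*ATTRS)

  def initialize(database_url:, smtp_host:, port:)
    @database_url = database_url
    @smtp_host = smtp_host
    @port = Integer(port) # fails fast on corrupt values like "eighty"
    validate!
  end

  private

  # A missing value aborts startup instead of surfacing as a runtime
  # error hours later.
  def validate!
    ATTRS.each do |name|
      value = public_send(name)
      raise ArgumentError, "missing config: #{name}" if value.nil? || value.to_s.empty?
    end
  end
end

# At boot:
# CONFIG = AppConfig.new(
#   database_url: ENV.fetch("DATABASE_URL"),
#   smtp_host: ENV.fetch("SMTP_HOST"),
#   port: ENV.fetch("PORT")
# )
```

<p>Because the configuration is a plain class, it can also be unit tested like any other component.</p>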
<h2 id="no-environment-logic-inside-the-application">14. No environment logic inside the application</h2>
<p>The application contains no logic based on the environment it is operated in. Required implementation differences like email delivery services are handled via the configuration of the system.</p>
<h2 id="sql-database-migrations">15. SQL database migrations</h2>
<p>The database migrations are written in plain SQL instead of a custom framework or programming language syntax.</p>
<h2 id="database-schema-in-sql">16. Database schema in SQL</h2>
<p>For quick insights the database schema is checked into source control as plain SQL file.</p>
<h2 id="support-for-background-work">17. Support for background work</h2>
<p>A standardised mechanism to move work outside the request response cycle.</p>
<h2 id="logging">18. Logging</h2>
<p>The system provides a logger that writes its output to <code>STDOUT</code>.</p>
<h2 id="error-tracking">19. Error tracking</h2>
<p>No error is dropped; all components report occurring errors to a central error tracking service.</p>
<h2 id="email-delivery">20. Email delivery</h2>
<p>Reliably delivering email is hard, so the system delegates this burden to an external service. In non-production situations the emails are sent to a local catch-all server to allow introspection.</p>
<h2 id="templates-without-logic">21. Templates without logic</h2>
<p>It is not possible to transform or access data by accident in templates. The template only takes values and renders the output.</p>
<h2 id="previews-of-visible-system-parts">22. Previews of visible system parts</h2>
<p>During development and quality assurance all viewable parts are available for preview outside the specific use cases they are part of. This includes web pages, emails, PDFs and other artefacts.</p>
Disable Browser History to Remove Procrastination Opportunitieshttp://strauss.io/blog/2018-disable-browser-history-to-remove-procrastination-opportunities.html2018-05-25T19:39:00Z2021-03-20T17:04:17+01:00David Strauß<p>By disabling the browser history, wasted time is sharply reduced. For example, when a new tab is opened and the letter <code>t</code> is typed out of habit, the browser won’t show <code>twitter.com</code> as the first suggestion.</p>
<p>Which in turn reduces the number of times Twitter is visited. To visit Twitter one must now deliberately type the entire domain.</p>
<p>Don’t confuse that setting with private / incognito browsing. Only the history is disabled, sign-in cookies work as usual.</p>
Filesystem Hierarchy Standardhttp://strauss.io/blog/2018-filesystem-hierarchy-standard.html2018-05-24T18:57:00Z2021-03-20T17:04:17+01:00David Strauß<p>The <a href="http://www.pathname.com/fhs/pub/fhs-2.3.html">Filesystem Hierarchy Standard</a> is an amazing resource to learn more about Linux servers and their directories.</p>
<p>It already answered a few questions and showed that the current server structure can be improved upon.</p>
Better Ways to Consume Twitterhttp://strauss.io/blog/2018-better-ways-to-consume-twitter.html2018-05-23T19:01:00Z2021-03-20T17:04:17+01:00David Strauß<p>Imagine something like RSS feeds plus a reader for Twitter. There is a screen to view the entire chronological feed. In addition there is a feed for every followed user and one only consisting of retweets and quotes.</p>
<p>Like in an RSS reader, tweets can be read; naturally this works across the different feeds without additional work on the user side. With continued usage and consumption the computer could make suggestions whom to unfollow to reduce the noise level.</p>
<p>Maybe the software also supports regular RSS feeds alongside the Twitter feeds. A single place to consume information. Something even better would be a feed consisting of linked content based on the tweets.</p>
<p>To me this sounds like a massive improvement to the current situation. I wish somebody would build it.</p>
GDPR Compliance with Ruby on Rails - IP-Address Logginghttp://strauss.io/blog/2018-gdpr-compliance-with-ruby-on-rails-ip-address-logging.html2018-05-22T18:38:00Z2021-03-20T17:04:17+01:00David Strauß<p>The General Data Protection Regulation (GDPR) requires Personally Identifiable Information (PII) to be protected or not be processed or stored at all. An IP address counts as PII and therefore requires special treatment.</p>
<p>By default a Ruby on Rails application logs the IP address to a log file. One of the cleanest ways to protect visitors is to log an anonymized IP address instead of the actual one.</p>
<p>A custom <code>Rails::Rack::Logger</code> class inherits from <code>ActiveSupport::LogSubscriber</code> and implements a custom method to produce logs without full IP addresses.</p>
<figure>
<figcaption>
<p>config/initializers/rack_logger.rb
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="k">module</span> <span class="nn">Rails</span>
<span class="k">module</span> <span class="nn">Rack</span>
<span class="k">class</span> <span class="nc">Logger</span> <span class="o"><</span> <span class="no">ActiveSupport</span><span class="o">::</span><span class="no">LogSubscriber</span>
<span class="k">def</span> <span class="nf">started_request_message</span><span class="p">(</span><span class="n">request</span><span class="p">)</span>
<span class="s1">'Started %s "%s" for %s at %s'</span> <span class="o">%</span> <span class="p">[</span>
<span class="n">request</span><span class="p">.</span><span class="nf">request_method</span><span class="p">,</span>
<span class="n">request</span><span class="p">.</span><span class="nf">filtered_path</span><span class="p">,</span>
<span class="n">anonymized_ip</span><span class="p">(</span><span class="n">request</span><span class="p">),</span>
<span class="no">Time</span><span class="p">.</span><span class="nf">now</span><span class="p">.</span><span class="nf">to_default_s</span> <span class="p">]</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">anonymized_ip</span><span class="p">(</span><span class="n">request</span><span class="p">)</span>
<span class="n">ip</span> <span class="o">=</span> <span class="no">IPAddr</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">request</span><span class="p">.</span><span class="nf">ip</span><span class="p">)</span>
<span class="k">if</span> <span class="n">ip</span><span class="p">.</span><span class="nf">ipv4?</span>
<span class="n">ip</span><span class="p">.</span><span class="nf">mask</span><span class="p">(</span><span class="mi">24</span><span class="p">).</span><span class="nf">to_s</span>
<span class="k">else</span>
<span class="n">ip</span><span class="p">.</span><span class="nf">mask</span><span class="p">(</span><span class="mi">48</span><span class="p">).</span><span class="nf">to_s</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
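<p>The masking above relies only on Ruby's stdlib <code>IPAddr</code>; isolating it makes it easy to verify what actually ends up in the log (IPv4 addresses keep their /24 network, IPv6 addresses keep their /48 prefix).</p>

```ruby
require "ipaddr"

# The anonymization logic from the log subscriber above, isolated:
# mask(24) zeroes the last IPv4 octet, mask(48) keeps the IPv6 prefix.
def anonymize(address)
  ip = IPAddr.new(address)
  ip.ipv4? ? ip.mask(24).to_s : ip.mask(48).to_s
end

puts anonymize("203.0.113.42")          # => 203.0.113.0
puts anonymize("2001:db8:abcd:1234::1") # => 2001:db8:abcd::
```
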
Marillen Knödelhttp://strauss.io/blog/2018-marillen-knodel.html2018-05-21T18:50:00Z2021-03-20T17:04:17+01:00David Strauß<p><img alt="Marillen Knödel" src="/marillen-knoedel-a16a3fda.jpg" /></p>
<p><strong>Dumpling ingredients</strong><br />
250 g quark (Topfen)<br />
70 g butter<br />
90 g coarse flour<br />
90 g semolina<br />
1 egg<br />
1 pinch of salt<br />
6-9 apricots</p>
<p><strong>Crumb coating ingredients</strong><br />
Butter<br />
Breadcrumbs<br />
Sugar</p>
<p><strong>Preparation</strong><br />
1. Melt the butter.<br />
2. Combine quark, flour, semolina, egg, salt and the melted butter in a bowl.<br />
3. Knead with a mixer and dough hooks.<br />
4. Let the dough rest in the fridge for 30 minutes.<br />
5. Heat water in a large pot.<br />
6. Depending on the size of the apricots, divide the dough into 6-9 equal pieces.<br />
7. Wrap each apricot in dough and form dumplings.<br />
8. Put the dumplings into gently simmering water for 20 minutes.<br />
9. In the meantime heat a pan over medium heat.<br />
10. Gradually add a piece of butter, 3 parts breadcrumbs and 1 part sugar to the pan, stirring continuously.<br />
11. Take the dumplings out of the water, place them in the crumbs and roll them around.<br />
12. Serve the dumplings with some of the crumbs.<br />
13. Depending on the ripeness of the apricots, sweeten with sugar.</p>
Rhabarber Kuchen Rezepthttp://strauss.io/blog/2018-rhabarber-kuchen-rezept.html2018-05-20T19:39:00Z2021-03-20T17:04:17+01:00David Strauß<p><img alt="Rhabarber Kuchen" src="/rhabarber-kuchen-0dec6836.jpg" /></p>
<p><strong>Ingredients</strong><br />
500 g rhubarb<br />
330 g butter<br />
400 g flour<br />
270 g sugar<br />
1 packet of vanilla sugar<br />
1 teaspoon of baking powder<br />
1 pinch of salt<br />
1 egg</p>
<p><strong>Preparation</strong><br />
1. Melt the butter.<br />
2. Set the oven to 175 degrees and preheat it.<br />
3. Wash the rhubarb and cut it into roughly 5 mm slices.<br />
4. Grease a springform pan.<br />
5. Combine flour, sugar, vanilla sugar, baking powder and salt in a bowl.<br />
6. Add the egg.<br />
7. Add the butter.<br />
8. Mix everything with a mixer; this produces crumbs.<br />
9. Put ⅔ of the dough into the springform pan.<br />
10. Press the dough against the sides and the bottom.<br />
11. Fill the rhubarb into the springform pan.<br />
12. Sprinkle the remaining ⅓ of the dough over the rhubarb.<br />
13. Bake the springform pan in the middle of the oven for 50 minutes. Check the color from minute 45 on and take the cake out of the oven early if necessary.</p>
Treat Transactional Emails like Lettershttp://strauss.io/blog/2018-treat-transactional-emails-like-letters.html2018-05-19T11:23:00Z2021-03-20T17:04:17+01:00David Strauß<p>Transactional emails sent by a software system serve a specific purpose, unlike a newsletter. Password resets, order confirmations, invoices and reminders are just a few examples. Such emails are an integral part of a system and not something that is done on the side just for fun.</p>
<p>But best practice moves sending emails literally outside the HTTP request cycle into background jobs. While this is a good thing for various reasons, it also makes sending emails even more obscure to developers and operators.</p>
<p>Instead of shoving emails into a generic background job framework they should be treated with the respect any other value generating part of the system receives.</p>
<p>Privileged operators should have insight into what emails are getting sent and their delivery status. Services like <a href="https://postmarkapp.com/">Postmark</a> have rich APIs to make email delivery insightful.</p>
<p>Similar to a letter, an undeliverable email should come back to the system so a human can take action to resolve the problem. Instead, an undeliverable email usually ends up in a dead job queue.</p>
<p>To provide even better insights the system can show upcoming email deliveries that will be triggered by the passing of time. This is not trivial to implement but helps immensely in understanding a complex software system.</p>
Bikepacking Hof bei Salzburg to Chieminghttp://strauss.io/blog/2018-bikepacking-hof-bei-salzburg-to-chieming.html2018-05-18T18:52:00Z2021-03-20T17:04:17+01:00David Strauß<p>Hannah and I started May 11th to bike from our home in Hof bei Salzburg to Chieming. We pitched our tent at Camping Seehäusl ten meters from the lake and continued our ride back home the next day.</p>
<p><img alt="Bikepacking pack list with Hilleberg Anjan 3" src="/bikepacking-pack-list-e39a2104.jpg" />
<img alt="Camping Seehäusel Chiemsee" src="/camping-seehaeusl-chiemsee-e73a4cdc.jpg" />
<img alt="Hannah at the Chiemsee with the Hilleberg Anjan 3" src="/hannah-camping-seehaeusl-chiemsee-0d26348c.jpg" />
<img alt="Camping Seehäusel Chiemsee in the evening" src="/bikepacking-evening-chiemsee-04335815.jpg" /></p>
<h2 id="route">Route</h2>
<p>Hof bei Salzburg - Salzburg - Freilassing - Teisendorf - Traunstein - Chieming - Nußdorf - Kammer - Waging am See - Petting - Laufen - Salzburg - Hof bei Salzburg</p>
<p>Total distance was 152 kilometers with an elevation gain of 1243 meters and a ride time of 9 hours and 40 minutes.</p>
<h2 id="packlist">Packlist</h2>
<p>2x Bike<br />
1x Handlebar Bag<br />
2x Pannier Rack Bag<br />
2x Backpack<br />
2x White Light<br />
2x Red Light<br />
2x Gloves<br />
2x Helmet<br />
2x Bike Shoes<br />
2x Bike Socks<br />
2x Socks<br />
2x Bike Short<br />
2x Bike Shirt<br />
1x Bike BH<br />
2x Rain Jacket<br />
2x Rain Trouser<br />
2x Long Trouser<br />
2x Sleeping Bag<br />
2x Sleeping Pad<br />
1x Tent<br />
1x Footprint<br />
1x Stove<br />
1x Gas<br />
1x Pot<br />
2x Cup<br />
2x Spork<br />
2x Lighter<br />
2x Headlamp<br />
6x Headlamp Battery<br />
2x iPhone<br />
1x iPhone Charger<br />
1x Battery Bank<br />
1x Garmin<br />
1x Garmin Charger<br />
2x Map<br />
2x Kindle<br />
3x Dry Bag<br />
4x Underwear<br />
2x Leggings<br />
4x Base Shirt<br />
2x Base Shirt Long<br />
2x Fleece<br />
2x Belt<br />
2x Buff<br />
2x Woollen Hat<br />
8x Food<br />
1x Toiletpaper<br />
1x Poop Digger<br />
1x Soap<br />
2x Towels<br />
1x Sponge<br />
1x Cloth<br />
1x Hand Sanitiser<br />
3x Handkerchiefs<br />
1x Deodorant<br />
2x Toothbrush<br />
1x Toothpaste<br />
10x Contactlenses<br />
1x Hairbrush<br />
1x Sunscreen<br />
1x Medkit<br />
3x Water Bottle<br />
2x Sandals<br />
2x Passport<br />
2x Wallet<br />
2x Keys<br />
1x Lock<br />
2x Compression Bag<br />
1x Tweezer<br />
2x Notebook + Pens<br />
1x Tent Repair </p>
Honest Project Management Softwarehttp://strauss.io/blog/2018-honest-project-management-software.html2018-05-17T18:35:00Z2021-03-20T17:04:17+01:00David Strauß<p>At the start of a typical project tasks are written down and estimated. Then put into a nifty tool where they are arranged before a reassuring (Gantt) diagram is produced.</p>
<p>Such a plan is a snapshot in time and represents an ideal world where things go according to plan. As the project progresses the actual status of the project steadily moves away from the status shown on the plan.</p>
<p>This mismatch is real and only natural. The problem is the plan: it’s still in people's heads and starts to warp their perception of the project.</p>
<p>Which makes it easier to cling onto wishful thinking that there is still a chance that it turns out as initially planned even though the facts say something different.</p>
<p>There are cases where it’s clear that things are going off the rails and still nobody tries to adapt to reality. To some extent the initial plan is to blame for that. It makes people believe that things are still going according to plan to some degree and that there is still hope.</p>
<p>Instead an honest project management software should require an update at each interval where target and actual status are compared. This makes people see the discrepancy and increases the chances that the plan is adapted to reality.</p>
The Fine Line Between CRUD and Business Logichttp://strauss.io/blog/2018-the-fine-line-between-crud-and-business-logic.html2018-05-16T19:30:00Z2021-03-20T17:04:17+01:00David Strauß<p>In the beginning it’s often easy to start a feature or entire system based on a CRUD approach. As time progresses the CRUD approach becomes less ideal or an outright entangled mess.</p>
<p>For example take an imaginary thing called an event. It has a title and description people can read plus a number of seats. A CRUD implementation works perfectly fine for this.</p>
<p>A new feature is added that allows people to book seats at an event. At this moment something important happens that often goes unnoticed. The number of seats at an event is no longer something that should be implemented with CRUD.</p>
<p>Instead making seats available for an event needs to be modelled explicitly in the business logic same as booking a seat. This explicitness makes it possible to build and establish business processes that are visible.</p>
<p>It’s often hard to spot when the fine line between CRUD and business logic is crossed. The later it is spotted, the more painful moving away from CRUD becomes.</p>
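<p>The event example from above can be sketched as explicit business operations instead of CRUD updates on a seat count. All names here are illustrative, not taken from a real system.</p>

```ruby
# Seats modelled as explicit business operations: releasing seats and
# booking a seat are visible processes, not anonymous column updates.
class Event
  attr_reader :title, :available_seats, :booked_seats

  def initialize(title:)
    @title = title
    @available_seats = 0
    @booked_seats = 0
  end

  # Making seats available is an explicit, named business process.
  def release_seats(count)
    raise ArgumentError, "count must be positive" unless count.positive?
    @available_seats += count
  end

  # Booking enforces the business rule instead of trusting the caller.
  def book_seat
    raise "sold out" if booked_seats >= available_seats
    @booked_seats += 1
  end
end

event = Event.new(title: "Vernissage")
event.release_seats(2)
event.book_seat
event.book_seat
# a third book_seat call now raises "sold out" instead of silently
# over-booking the event
```

<p>With CRUD, "seats" would be a column anyone can write; with explicit operations, the invariant lives in exactly one place.</p>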
Fertigstellungskriterien nutzen um Wissen zu archivierenhttp://strauss.io/blog/2018-fertigstellungskriterien-nutzen-um-wissen-zu-archivieren.html2018-05-15T19:36:00Z2021-03-20T17:04:17+01:00David Strauß<p>To continuously improve the documentation of a software project and to establish documenting as a fixed part of the software development process, you can use the definition of done.</p>
<p>Just as automated tests, code reviews, sign-off on staging and other points are already part of the definition of done, it can be extended with a knowledge item.</p>
<p>Questions or checkpoints like "What new things were learned about the project?", "Which assumptions and facts have changed?" and "Which information was looked for in the documentation but not found?" create an awareness that the documentation is alive too and wants to be maintained.</p>
<p>Of course the insights gained must also be written down in order to fulfill the definition of done.</p>
Single Directory Ruby Deploymentshttp://strauss.io/blog/2018-single-directory-ruby-deployments.html2018-05-14T18:51:00Z2021-03-20T17:04:17+01:00David Strauß<p>Based on search results the default deployment method in the Ruby world is something along the lines of Capistrano. The application server checks out the source code, installs gems via Bundler, precompiles assets and runs migrations. </p>
<p>This setup requires Node.js, Ruby and Bundler installed on every application server. With Go you can build a single binary and only push that to the application server. Fewer moving parts make a system more stable and robust, so that’s a big plus.</p>
<p>It could be possible to achieve something similar with Ruby. Deploy a single directory that contains a Ruby binary plus all required gems. The first challenge is probably correctly building gems with native extensions for the application server.</p>
Realistic Seed Datahttp://strauss.io/blog/2018-realistic-seed-data.html2018-05-13T19:13:00Z2021-03-20T17:04:17+01:00David Strauß<p>Setting up and maintaining a script with realistic seed data is unrewarding and tedious. But as a system grows it starts paying dividends.</p>
<ul>
<li>
<p>Testing and showing different use cases is a matter of seconds instead of filling in unrealistic dummy data that doesn’t cover edge cases.</p>
</li>
<li>
<p>Acceptance tests can use it as starting point and no additional fixture or factory library is needed.</p>
</li>
<li>
<p>A good seed script can be run multiple times to create increasing amounts of data. <a href="http://www.strauss.io/blog/2018-planned-capacity-of-software-systems.html">Capacity planning a system</a> just got a whole lot easier.</p>
</li>
</ul>
<p>Invest in a good seed script with realistic data that covers all different use cases of the system, it will pay off in the near future.</p>
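<p>The "run multiple times" property boils down to generating unique identifiers per run. A toy, in-memory Ruby sketch of that pattern; the record shape is an illustrative assumption, a real seed script would write to the database instead.</p>

```ruby
# Toy sketch of a re-runnable seed script: every run adds a fresh batch
# with unique identifiers, so repeated runs grow the data set, e.g. for
# load tests. The record shape is an illustrative assumption.
def seed_batch!(store)
  batch = store[:batches] += 1
  10.times do |i|
    store[:users] << {
      email: "user-#{batch}-#{i}@example.com", # unique across runs
      admin: i.zero?                           # cover the admin edge case
    }
  end
  store
end

store = { batches: 0, users: [] }
seed_batch!(store)
seed_batch!(store)
# store[:users].size is now 20, with no duplicate emails
```
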
Planned Capacity of Software Systemshttp://strauss.io/blog/2018-planned-capacity-of-software-systems.html2018-05-12T19:13:00Z2021-03-20T17:04:17+01:00David Strauß<p>Similar to the Hoover Dam a software system has limited capacity. A limitation can be requests per second, database rows, storage size, amount of business processes, number of tenants and so on.</p>
<p>Unlike the Hoover Dam, at the majority of software projects the bottlenecks and their capacity are not known. Hitting a limit may not be catastrophic for a software system, but usually it’s a problem nonetheless.</p>
<p>It’s possible to be aware of these problems beforehand just by designing a system to a certain capacity. By making the capacity explicit it can be load tested and measured. Monitoring these measurements provides early warnings when the system approaches its capacity.</p>
<p>Stating explicit capacities also makes it harder to fall into the trap of gradually decreasing system performance that is not noticed by regular users but turns away possible new users.</p>
Distraction Guardrailshttp://strauss.io/blog/2018-distraction-guardrails.html2018-05-11T19:12:00Z2021-03-20T17:04:17+01:00David Strauß<p>I have trouble being disciplined. When there is no clear next action and my energy runs low I tend to procrastinate. Even more so if there are distractions around. As someone who spends a significant amount of time at the computer, this is a major problem.</p>
<p>A computer with internet connection is a bottomless hole of distractions. News websites, Hacker News, Facebook, Reddit, Youtube and Twitter are just a few that can suck you into mindless consumption with the blink of an eye.</p>
<p>My current approach for spending my time more consciously is setting up guardrails against distractions. <a href="https://freedom.to">Freedom</a> is configured to block distracting websites between certain hours. <a href="https://www.rescuetime.com/dashboard">RescueTime</a> tracks where I’m spending my time so I can continuously refine my block lists based on hard data.</p>
<p>Firefox is also set up to not save my browsing history. It means more typing but also prevents me from seeing distracting autocomplete suggestions.</p>
<p>One of the most effective ways to focus is a timer that pings me periodically, prompting me to write down what I’m doing at that exact moment.</p>
My Seth Godin Challengehttp://strauss.io/blog/2018-my-seth-godin-challenge.html2018-05-10T19:12:00Z2021-03-20T17:04:17+01:00David Strauß<p>The blog of Seth Godin is a source of short and well written posts that usually make you think. The other extraordinary thing is that they have appeared day after day for the last 14 years.</p>
<p>I believe that writing and communicating ideas is one of the most valuable skills anyone can learn. To become better I take Seth Godin as an inspiring example and will publish 21 posts on a day by day basis.</p>
<p>To reduce possible excuses around my expectations I will limit the posts to 300 words apiece and aim for 150 words.</p>
<p>If you are unfamiliar with Seth Godin <a href="http://sethgodin.typepad.com/">take a look</a> and browse around for a little while.</p>
Structuring Notes With Symbolshttp://strauss.io/blog/2018-structuring-notes-with-symbols.html2018-05-09T19:38:00Z2021-03-20T17:04:17+01:00David Strauß<p>I like to take notes both on paper and digitally. I’m experimenting with symbols to make them easier to scan and to communicate additional meaning. The symbols need to be easy to draw, and the analog and digital versions must be comparable. My current list looks like this:</p>
<p>⭐ Goal<br />
Something I want to achieve that typically requires multiple actions to complete.</p>
<p>⚡ Action<br />
Something that can be done. I try to use the format action verb + activity + purpose + due date.</p>
<p>💭 Idea<br />
An idea or thought I had that I will probably revisit at a later time.</p>
<p>👁 Observation<br />
Something that I observed that could be relevant for myself in the future.</p>
<p>⚠️ Warning<br />
I should pay attention to that since chances are high that it will bite me later.</p>
<p>💲 Reference<br />
An idea, a quote or otherwise useful material that I discovered somewhere.</p>
<p>🌀 Other<br />
Diary-style notes and other stuff where the other symbols don’t fit.</p>
Deliverable Services in Software Developmenthttp://strauss.io/blog/2018-lieferbare-leistungen-in-der-software-entwicklung.html2018-05-08T20:09:00Z2021-03-20T17:04:17+01:00David Strauß<p>For every service, the deliverable must be named and defined.</p>
<p>A deliverable is …</p>
<p>… something that can be seen: a screen, an email, a PDF, or the result of an API query.</p>
<p>… something that can be done: creating a booking, revising a course, or moving a job posting to the top.</p>
<p>… something that can be triggered: deactivating a job posting after its term ends, sending a confirmation email, or creating and sending out surveys.</p>
<p>Direct or indirect effects that result from delivering a service are not part of the service's scope.</p>
<p>Unknown requirements that are not listed in the service-specific completion criteria or the general completion criteria are likewise to be treated as new services.</p>
Things Are Made by Fellow Humanshttp://strauss.io/blog/2018-things-are-made-by-fellow-humans.html2018-05-07T17:42:00Z2021-03-20T17:04:17+01:00David Strauß<p>In summer I will backpack in Norway with my partner Hannah. She just finished building a custom footprint for our Hilleberg Anjan 3 tent.</p>
<p>Starting from scratch without instructions, she ordered the raw materials and built the whole thing in a day.</p>
<p>It looks fabulous and I’m amazed by the result. As someone who builds and creates in the digital world I have to remind myself that almost anything I can see and touch was made by a fellow human.</p>
<p>Therefore I can do the same and make anything I can imagine in the digital and analog world.</p>
Quarantining ActiveRecord Keeps Applications Healthyhttp://strauss.io/blog/2018-quarantining-activerecord-keeps-applications-healthy.html2018-05-06T15:14:00Z2021-03-20T17:04:17+01:00David Strauß<p>ActiveRecord models and their instances have an enormous API surface. Any location that gets an ActiveRecord instance passed in as an argument can use the full API.</p>
<p>This can lead to <a href="http://www.strauss.io/blog/2018-locking-the-database-away-from-unit-tests.html">unwanted consequences like querying the database in unit tests</a>. Something that in most cases is not desired.</p>
<p>Allowing method calls to ActiveRecord models and instances only in controllers and other models could solve this problem.</p>
Use SQL for Database Migrationshttp://strauss.io/blog/2018-use-sql-for-database-migrations.html2018-05-05T21:18:00Z2021-03-20T17:04:17+01:00David Strauß<p>Ruby on Rails, Hanami and Sequel offer tooling to manage database schemas. The syntax between the respective migration files is similar but not identical.</p>
<p>These differences cause mental overhead as soon as work happens in multiple projects that use different technologies.</p>
<p>Instead of a custom Ruby syntax, the migration files should be written in plain SQL. The code to run them consists of simple bash scripts, wrapped by the respective command-line interfaces.</p>
<p>The advantages of this approach are vast. Writing migrations is the same for every technology. Only Ruby tooling is mentioned in the first paragraph, but SQL migrations work in every programming language and environment.</p>
<p>Autonomy from specific technologies makes it also easier to use features specific to a database without having to rely on support from another tool.</p>
<p>Are you using plain SQL migrations? If not, what is holding you back?</p>
Locking the Database Away from Unit Testshttp://strauss.io/blog/2018-locking-the-database-away-from-unit-tests.html2018-05-04T19:19:00Z2021-03-20T17:04:17+01:00David Strauß<p>Unit tests are supposed to be fast. Connecting to and querying a database is slow.</p>
<p>In a typical Ruby on Rails project it is easy to write unit tests that talk to the database. A model has a direct line to the database. One wrong method call and the test case is already talking to the database.</p>
<p>Making the database inaccessible in all unit tests would make the tangled mess visible and enforce a separation between infrastructure and domain. It also raises awareness that <a href="http://www.strauss.io/blog/2018-dont-forget-that-dependencies-have-a-cost.html">the database is a dependency and has a cost</a>.</p>
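<p>As a starting point, here is one possible implementation sketched without any Rails specifics (all names are hypothetical): wrap the database handle in a guard object that raises while unit tests run.</p>

```ruby
# Hypothetical sketch: a proxy around the real database handle.
# While locked (e.g. for the whole unit test suite) every method
# call raises instead of reaching the database.
class DatabaseGuard
  class AccessError < StandardError; end

  def initialize(database)
    @database = database
    @locked = false
  end

  def lock!
    @locked = true
  end

  def unlock!
    @locked = false
  end

  # Delegate everything to the real database unless locked.
  def method_missing(name, *args, &block)
    raise AccessError, "database locked in unit tests: ##{name}" if @locked
    @database.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @database.respond_to?(name, include_private) || super
  end
end
```

<p>A test suite could install the guard once and call <code>lock!</code> in the setup hook of every unit test.</p>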
<p>What do you think of this idea? How would you implement it? I'm curious, send me an email or tweet at me.</p>
Don't Forget That Dependencies Have a Costhttp://strauss.io/blog/2018-dont-forget-that-dependencies-have-a-cost.html2018-05-03T17:03:00Z2021-03-20T17:04:17+01:00David Strauß<p>Google is changing their services around Google Maps. In a month they will insist on filled-out billing information and massively decrease the free quotas of their platform. Initial reactions show that this is a fatal problem for some projects, forcing them to shut down.</p>
<p>A situation like this reveals how fragile one's software development process is. Is there a checklist in place for adding dependencies to a project?</p>
<p>The checklist creates awareness for these situations long before they occur. It makes sure there is an alternative plan in place and at least shows the boundaries of operation.</p>
<p>What does your checklist look like?</p>
Sanity Checking an Event Sourced Systemhttp://strauss.io/blog/2017-sanity-checking-an-event-sourced-system.html2017-03-19T18:46:00Z2021-03-20T17:04:17+01:00David Strauß<p>Around one and a half years ago I stumbled upon Domain Driven Design, Event Sourcing and CQRS. I'm still amazed by the world behind that door and its sheer size. Without countless blog posts and talks I wouldn't have started walking this path, so here is my first attempt at giving back.</p>
<p>My side project <a href="https://www.dartboard.io">dartboard.io</a> helps you play darts by offering an easy way to keep track of the scores and calculating interesting statistics. It is also my playing field for experimenting with Event Sourcing and learning to understand it. The application started one year before I learned about Domain Driven Design so it should be no surprise that the first iteration was built upon CRUD concepts. After one year in production the CRUD version transformed into the second iteration based on Event Sourcing principles. Every dart match that is played still results in a conventional CRUD model, but there is an <code>events</code> attribute that is used for the Event Sourcing part. For lack of a better word I call this implementation <em>In-Place Event Sourcing</em>.</p>
<h2 id="why-do-a-sanity-check-of-the-events">Why do a sanity check of the events?</h2>
<p>While not yet implemented, a future iteration of the application will allow players to play a remote dart match. In order to keep multiple players up to date, the events present themselves as an ideal model for synchronisation. When you are at a point where single events get pushed or pulled within your system, it seems easier to implement a classic event store and say farewell to the In-Place Event Sourcing implementation.</p>
<p>Before making such a transition I wanted to verify how robust my implementation is. After all, it's my first event sourced system, and I knew from the past that there were one or two bugs that resulted in impossible event sequences. I just did not know how many of them were still there.</p>
<h2 id="how-did-the-sanity-check-work">How did the sanity check work?</h2>
<p>First I manually built a map of events and their possible successor events. These instructions basically laid out the rules for the computer. "If you have an event <code>X</code> the next one can be <code>Y</code> or <code>Z</code>". You can take <a href="https://github.com/stravid/datsu-api/blob/master/tools/sanity_check_events#L22-L52">a look at the real map</a> or be happy with my abstract example:</p>
<figure>
<figcaption>
<p>Instructions for the sanity check
</p>
</figcaption>
<pre class="highlight plaintext"><code>PlayerAdded -> PlayerAdded, MatchStarted
MatchStarted -> LegAdded
PlayerScored -> PlayerWonLeg, TurnChanged
</code></pre>
</figure>
<p>Based on this map the computer would iterate through the events of a match and check the sequence at every step. If a violation was detected it would output at which event it happened, what the successor event was and which successors it expected, then move on to the next match. In addition it would group the violations by their pattern and output a summary to make analysis easier.</p>
<figure>
<figcaption>
<p>Grouped output of a specific violation
</p>
</figcaption>
<pre class="highlight plaintext"><code>Pattern: MatchStarted -> PlayerAdded
Occurrences: 3
Match IDs: 1, 20, 45
</code></pre>
</figure>
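<p>The core of the check described above fits in a few lines. This is an illustrative reconstruction using the abstract event names from the example, not the real dartboard.io code:</p>

```ruby
# Illustrative sketch of the sanity check: walk the event sequence
# pairwise and record every transition the map does not allow.
SUCCESSORS = {
  'PlayerAdded'  => ['PlayerAdded', 'MatchStarted'],
  'MatchStarted' => ['LegAdded'],
  'PlayerScored' => ['PlayerWonLeg', 'TurnChanged'],
}.freeze

# Returns one violation hash per illegal transition.
def violations(events)
  events.each_cons(2).filter_map do |current, successor|
    allowed = SUCCESSORS.fetch(current, [])
    next if allowed.include?(successor)
    { pattern: "#{current} -> #{successor}", expected: allowed }
  end
end
```

<p>Producing the grouped summary is then a matter of something like <code>violations(events).group_by { |v| v[:pattern] }</code>.</p>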
<h2 id="what-was-the-outcome">What was the outcome?</h2>
<p>An interesting and unexpected look at the system that I built. After checking 16,391 matches with a total of 3,463,360 events, there are several things to point out.</p>
<p>1.) I once more fully appreciated the idea behind Event Sourcing. It was a really nice feeling realising how easy it is to go back in time and see how a dart match evolved. For me personally this is probably the biggest benefit of Event Sourcing.</p>
<p>2.) My initial event map used for detecting violations was incomplete. There were three situations that could happen in the lifetime of a match that were not obvious to me. It took me a while to be sure that the system worked correctly and that the real problem was me not being aware of certain scenarios. As a result the sanity check itself was adapted, and I also wrote new unit tests covering these scenarios specifically.</p>
<p>3.) The sanity check showed that past bug fixes were indeed effective. There were edge cases in the past that would produce event sequences that made no sense. By correlating the timestamp of a match where a certain violation occurred with the timestamp of a commit, I could tell that the bug fix was working.</p>
<p>4.) It also turned up new bugs that were not visible in the UI but would slightly skew statistical projections. On a technical level it showed that having thorough invariant checks in the match aggregate is vital. Raised errors by these checks would have surfaced the problems earlier.</p>
<p>5.) I realised I should really add timestamps to relevant events. Some issues would be easier to investigate if you could tell how much time passed between certain events.</p>
<p>6.) Old matches that were ported from the first iteration (CRUD) were missing events that were added later. In my case additional events would be emitted to make it possible to generate more advanced statistics.</p>
<p>Overall I'm very happy with the outcome of the sanity check and my decision to dive into Event Sourcing one and a half years ago. I'm not yet sure how to approach the sixth point with the missing events. Maybe I will know what to do after reading <a href="https://leanpub.com/esversioning">Versioning in an Event Sourced System</a>. I hope it will also help me with introducing timestamps for certain existing events.</p>
<p>Thanks for reading and since I'm very inexperienced with Event Sourcing any feedback is welcome.</p>
How to use JSON API & Ember in Non-CRUD Applicationshttp://strauss.io/blog/2016-how-to-use-json-api-ember-in-non-crud-applications.html2016-06-14T04:58:00Z2021-03-20T17:04:17+01:00David Strauß<p>I am a big fan of both JSON API and Ember but I'm becoming increasingly unhappy with them when it comes to building complex applications. That has to do with my growing interest in Domain-Driven Design (DDD). As I learn more about DDD I realise how good it feels to not constantly think about your application in CRUD terms.</p>
<p>You can find a lot of material on how to apply DDD to server-side or desktop applications, but I'm still in the dark about how it works with an API and client-side JavaScript applications. Before reading any further you have to know I'm not a DDD expert, I wouldn't even call myself a beginner. I'm just very interested in the concept and reading about it triggered a flood of thoughts which you are currently reading.</p>
<p><strong>I am very interested how you work with Ember in complex domains and applications. Please share your experience.</strong></p>
<h2 id="complex-domains">Complex Domains</h2>
<p>The opposite of complex domains are domains where CRUD applications are sufficient to cover all use cases. CRUD is an acronym and stands for Create, Read, Update, and Delete. Nurtured by frameworks like Ruby on Rails and guided by JSON API and Ember Data, we build most applications on the same CRUD foundation. This works remarkably well if you build something like a little note-taking application. Someone using it can create new notes, update existing ones, read a note and probably also delete it.</p>
<p>There are many applications where CRUD is not a good fit. Let's ignore the read and delete parts and focus on create and update since they are the parts where we usually first hit a mental roadblock when working in a complex domain.</p>
<p>Take for example an accounting application that deals with invoices. An invoice will never be "created" in the sense of how CRUD / JSON API / Ember Data make us think about it. It's far more likely and natural to create an invoice draft which you can modify until you decide to issue an invoice. In our business domain issuing an invoice involves multiple steps.</p>
<ol>
<li>Copy the invoice draft's attributes to the invoice.</li>
<li>Associate the draft's items with the invoice.</li>
<li>Set the due date of the invoice.</li>
<li>Find and set the next sequential invoice number.</li>
<li>Persist the invoice.</li>
<li>Delete the draft.</li>
<li>Create a due payment for the invoice.</li>
<li>Generate and store the invoice PDF document.</li>
</ol>
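<p>To make the discussion concrete, the steps could be bundled into a single server-side use case behind such an endpoint. The sketch below is purely illustrative: repositories are in-memory hashes, all names are hypothetical, and steps 1, 2 and 8 are reduced to a merge and a comment:</p>

```ruby
require 'date'

# Illustrative sketch: issuing an invoice as one use case object.
# Repositories are plain in-memory hashes here; all names are
# hypothetical, not a real API.
class IssueInvoice
  def initialize(drafts:, invoices:, payments:, next_number:)
    @drafts = drafts
    @invoices = invoices
    @payments = payments
    @next_number = next_number
  end

  def call(draft_id)
    draft = @drafts.fetch(draft_id)
    invoice = draft.merge(                     # 1. + 2. copy attributes and items
      due_date: Date.today + 14,               # 3. set the due date
      number:   @next_number.call              # 4. next sequential number
    )
    @invoices[invoice[:number]] = invoice      # 5. persist the invoice
    @drafts.delete(draft_id)                   # 6. delete the draft
    @payments << { invoice: invoice[:number],  # 7. create a due payment
                   amount:  invoice[:total] }
    invoice                                    # 8. (PDF generation omitted)
  end
end
```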
<p>How do we do all this in an Ember application? My naive approach would be something like <code>POST /api/invoice-drafts/123/issue</code>. Sadly doing this feels like going against the grain. The JSON API specification does not provide any guidance on how to deal with non-CRUD endpoints. There is also no support from Ember Data; you have to use an addon or implement your own solution. And on the server side, libraries like JSONAPI::Resources do not provide a clear way of supporting such actions.</p>
<p>Don't take this as a rant about the different parts of the ecosystem and their shortcomings. I'm just puzzled by the problem and would like to know how others are dealing with it, because none of the possible approaches I can think of seem any good. <strong>How do you model something like issuing an invoice in your application?</strong></p>
<h2 id="approaches-i-thought-about">Approaches I thought about</h2>
<ul>
<li>
<p>Using a custom and separate action/endpoint like <code>POST /api/invoice-drafts/123/issue</code>. Personally this approach feels "right" to me. On the other hand there is close to zero official support and everything has to be hand-rolled. Which in turn means there are no conventions and no shared building blocks to move forward together.</p>
</li>
<li>
<p>Moving the business logic into the client: instead of having the API do all the work, the client must do it itself. I see two issues with this approach, the first one being code duplication. Even in a scenario where the client does all the work there will be business rules and validations on the server side. Future clients will also have to re-implement the business logic on their end.</p>
<p>The second issue is the split up business logic in some cases. Take the invoice example, the client has no way of deciding what the next invoice number will be. So even though we are putting most business logic in the client the API will still have to do some things. We now have two very separate pieces of code that operate in the same problem space.</p>
</li>
<li>
<p>Introducing new resources to represent these custom actions. Instead of using custom Non-CRUD endpoints we could introduce a new resource type for our use case. A <code>POST /api/issue-invoices/</code> endpoint would work nicely with Ember Data, JSON API and JSONAPI::Resources for example.</p>
<p>I don't like this approach; it feels like making an already defined domain even more complicated. My domain model already has an <code>Invoice#issue</code> interface, so why would I want to make it more complicated to the outside world? Naming things is hard, and having to invent countless resource types for all of the domain's possible use cases sounds scary to me.</p>
</li>
</ul>
<h2 id="your-feedback-and-ideas">Your feedback and ideas</h2>
<p>As already said in the beginning, I have no idea what I am doing. So please send your tweets to <a href="https://twitter.com/stravid">@stravid</a>, email me at <a href="mailto:david@strauss.io">david@strauss.io</a> or drop by the <a href="http://www.edgycircle.com">edgy circle</a> office in Salzburg.</p>
<p>I'm looking forward to your input. I strongly believe that together we can shed some light on this topic and make building Ember applications even better.</p>
Use Sidekiq With Hanamihttp://strauss.io/blog/2016-use-sidekiq-with-hanami.html2016-01-20T20:23:00Z2021-03-20T17:04:17+01:00David Strauß<p><a href="http://hanamirb.org/">Hanami</a>, formerly known as Lotus, is a Ruby web framework like Ruby on Rails. I'm really fond of it due to its small footprint and an architecture that guides you to well-designed applications. Another piece of software I have a high opinion of is <a href="http://sidekiq.org/">Sidekiq</a>, a simple and efficient job processing solution for Ruby. Now let me show you how to use them together.</p>
<h2 id="setting-up-sidekiq-within-hanami">Setting up Sidekiq within Hanami</h2>
<p>The first step is adding Sidekiq to your <code>Gemfile</code> and running <code>bundle install</code>. I also tend to generate binstubs for the gems I need to use outside the application code. To do this for Sidekiq run <code>bundle binstubs sidekiq</code>.</p>
<figure>
<figcaption>
<p>Gemfile
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="n">gem</span> <span class="s1">'sidekiq'</span><span class="p">,</span> <span class="s1">'~> 4.0.2'</span>
</code></pre>
</figure>
<p>Once Sidekiq is installed it's time to configure it. Mainly I want to specify which Redis URL it uses. For example, when you are running both Production and Staging versions of your application on the same server and both access the same Redis instance, you want them not to interfere with each other.</p>
<p>Instead of using the namespace option I will use different Redis databases. (A default Redis instance comes with 16 of these.) The database is specified in the Redis URL which in turn is an environment variable. You can read more about this topic in an <a href="http://www.mikeperham.com/2015/09/24/storing-data-with-redis/">article by Mike Perham, the creator of Sidekiq</a>.</p>
<p>To configure Sidekiq we create <code>config/sidekiq.rb</code> and setup both server and client. In addition we require our newly created file in <code>config/environment.rb</code> so it is actually available within Hanami.</p>
<figure>
<figcaption>
<p>config/sidekiq.rb
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="no">Sidekiq</span><span class="p">.</span><span class="nf">configure_server</span> <span class="k">do</span> <span class="o">|</span><span class="n">config</span><span class="o">|</span>
<span class="n">config</span><span class="p">.</span><span class="nf">redis</span> <span class="o">=</span> <span class="p">{</span> <span class="ss">url: </span><span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"REDIS_URL"</span><span class="p">)</span> <span class="p">}</span>
<span class="k">end</span>
<span class="no">Sidekiq</span><span class="p">.</span><span class="nf">configure_client</span> <span class="k">do</span> <span class="o">|</span><span class="n">config</span><span class="o">|</span>
<span class="n">config</span><span class="p">.</span><span class="nf">redis</span> <span class="o">=</span> <span class="p">{</span> <span class="ss">url: </span><span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"REDIS_URL"</span><span class="p">)</span> <span class="p">}</span>
<span class="k">end</span>
</code></pre>
</figure>
<figure>
<figcaption>
<p>config/environment.rb
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="c1"># ...</span>
<span class="nb">require_relative</span> <span class="s1">'../lib/your-application-name'</span>
<span class="nb">require_relative</span> <span class="s1">'../apps/web/application'</span>
<span class="nb">require_relative</span> <span class="s1">'./sidekiq'</span>
<span class="c1"># ...</span>
</code></pre>
</figure>
<p>Lastly, add the <code>REDIS_URL</code> environment variable to your <code>dotenv</code> environment files and you are good to go. Remember to use different Redis databases for your environments. In this example I use database <code>0</code>, as you can see at the end of the URL.</p>
<figure>
<figcaption>
<p>.env.&lt;environment&gt;</p>
</figcaption>
<pre class="highlight shell"><code><span class="nv">REDIS_URL</span><span class="o">=</span>redis://localhost:6379/0
</code></pre>
</figure>
<h2 id="adding-sidekiq-workers-to-a-hanami-application">Adding Sidekiq workers to a Hanami application</h2>
<p>If you are using Hanami you are probably already aware of the two important directories <code>lib/</code> and <code>apps/</code>. The first one houses your application core and the second contains your various delivery mechanisms like your API or web interface. Sidekiq workers are like entities at the core of your application. They are independent of the delivery mechanism and can be used in various places.</p>
<p>Therefore I suggest we store our workers within the <code>lib/&lt;your-application-name&gt;/workers</code> directory. Let's add an example worker that does nothing but sleep.</p>
<figure>
<figcaption>
<p>lib/&lt;your-application-name&gt;/workers/sleep_worker.rb
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="k">class</span> <span class="nc">SleepWorker</span>
<span class="kp">include</span> <span class="no">Sidekiq</span><span class="o">::</span><span class="no">Worker</span>
<span class="k">def</span> <span class="nf">perform</span><span class="p">(</span><span class="n">workload</span><span class="p">)</span>
<span class="nb">sleep</span> <span class="n">workload</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>Within our application we can now schedule background jobs as shown in the Sidekiq manual. Just call <code>SleepWorker.perform_async(10)</code> and you are done. At this point we are able to schedule jobs, but that's it. Sidekiq stores them happily in Redis but never looks at them again. To change this we have to run Sidekiq along our web server.</p>
<h2 id="running-sidekiq-and-hanami">Running Sidekiq and Hanami</h2>
<p>In development I use <a href="http://ddollar.github.io/foreman/">Foreman</a> to manage all the different pieces of an application. In our case that would be a web server and Sidekiq. The <code>Procfile</code> contains an entry for each piece so everything can be started and stopped with a single command. You can see we are using the <code>-r</code> flag to tell Sidekiq to require our application's Hanami environment file. This makes sure Sidekiq is correctly configured and the application core is loaded.</p>
<p>You can start everything with <code>foreman start</code> and watch how your workers get busy.</p>
<figure>
<figcaption>
<p>Procfile
</p>
</figcaption>
<pre class="highlight shell"><code>app: bin/hanami server --host 127.0.0.1
sidekiq: bin/sidekiq -e development -r ./config/environment.rb
</code></pre>
</figure>
<h2 id="mounting-the-sidekiq-web-dashboard-in-hanami">Mounting the Sidekiq web dashboard in Hanami</h2>
<p>Sidekiq comes with <code>Sidekiq::Web</code>, a nice web interface that allows a certain amount of introspection. In order to run it, two things are necessary. First, add Sinatra to your <code>Gemfile</code> without requiring it; Sidekiq itself will only require the parts it needs.</p>
<figure>
<figcaption>
<p>Gemfile
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="n">gem</span> <span class="s1">'sinatra'</span><span class="p">,</span> <span class="s1">'~> 1.4.6'</span><span class="p">,</span> <span class="ss">require: </span><span class="kp">false</span>
</code></pre>
</figure>
<p>In the second step you mount <code>Sidekiq::Web</code> to make it available. At the moment <code>config.ru</code> seems to be the only place where it works<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.</p>
<figure>
<figcaption>
<p>config.ru
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="nb">require</span> <span class="s1">'./config/environment'</span>
<span class="nb">require</span> <span class="s1">'sidekiq/web'</span>
<span class="n">map</span> <span class="s1">'/admin/sidekiq'</span> <span class="k">do</span>
<span class="n">use</span> <span class="no">Sidekiq</span><span class="o">::</span><span class="no">Web</span>
<span class="k">end</span>
<span class="n">run</span> <span class="no">Hanami</span><span class="o">::</span><span class="no">Container</span><span class="p">.</span><span class="nf">new</span>
</code></pre>
</figure>
<p>Once you have updated your <code>config.ru</code> you can view the Sidekiq web dashboard at <code>localhost:2300/admin/sidekiq/</code>. Be aware that the Sidekiq web dashboard is not protected by any kind of authentication. Anybody can visit the URL and mess with your jobs!</p>
<p><strong>So please make sure to use some kind of authentication like <code>Rack::Auth::Basic</code> in production.</strong></p>
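<p>One way to do that, sketched for the <code>config.ru</code> from above (the environment variable names are my own):</p>

```ruby
# config.ru (sketch): protect the dashboard with HTTP basic auth.
# SIDEKIQ_USER / SIDEKIQ_PASSWORD are illustrative variable names.
map '/admin/sidekiq' do
  use Rack::Auth::Basic, 'Sidekiq' do |username, password|
    Rack::Utils.secure_compare(username, ENV.fetch('SIDEKIQ_USER')) &
      Rack::Utils.secure_compare(password, ENV.fetch('SIDEKIQ_PASSWORD'))
  end
  use Sidekiq::Web
end
```

<p><code>Rack::Utils.secure_compare</code> is used here to make the comparison resistant to timing attacks.</p>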
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Using Hanami's <code>mount</code> method results in a mounted Sidekiq web interface where all URL paths miss a critical segment, which leads to wrong links and missing assets like stylesheets and scripts. This happens because the <code>SCRIPT_NAME</code> environment variable is not set and <code>Sidekiq::Web</code> uses it to construct paths. I'm still investigating the best way to solve this. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
On the Fly Image Processing in Rubyhttp://strauss.io/blog/2015-on-the-fly-image-processing-in-ruby.html2015-08-04T10:09:00Z2021-03-20T17:04:17+01:00David Strauß<p>Imagine you are riding your bicycle. The road ahead is clear and the sky bright and blue. You start thinking about the view you will have from the top of the mountain you are closing in on. You start climbing and your breath quickens. It's time to adjust gears. You come to a halt and jump off the bike. Kneeling at its side, you grab the chain and move it to a better suited cog, and off you go again.</p>
<p>Nobody would want to ride a bike without a shifting system in the mountains. So why use Ruby file upload gems that make you do this exact same thing when it comes to different versions of images?</p>
<h2 id="what-is-wrong-with-most-ruby-file-upload-solutions">What Is Wrong with Most Ruby File Upload Solutions?</h2>
<p>It is common to display uploaded images in various sizes throughout a Ruby web application. Let's call these various sizes versions for the time being.</p>
<p>Popular file upload gems like <code>CarrierWave</code> and <code>Paperclip</code> allow you to define these different versions in your application code. The definitions allow the gem to process and generate the versions upfront whenever a new image gets uploaded.</p>
<p>This upfront image processing is the problem.</p>
<h2 id="why-is-upfront-image-processing-wrong">Why Is Upfront Image Processing Wrong?</h2>
<p>It complicates and slows down the development process. Giving every version a specific name is the classic naming-things problem we all know. Should we call it <code>thumbnail</code>, <code>avatar</code>, <code>thumb</code> or <code>profile_picture</code>? And what happens if the image size requirements change for a specific part of the application? Do we invent another version with a new name? Or does this requirement change in one part of the application affect other parts? If you care about optimal image sizes and have multiple use cases with different sizes you will end up in a version naming mess.</p>
<p>Adjusting the versions themselves also means additional work. Since all versions are generated upfront at the time of the upload, you have to re-generate them whenever you change a version that is already in use. This is also the case whenever you introduce a new version into your system. Unfortunately this is a characteristic of upfront image processing. It discourages you and your team from adjusting and testing different image sizes.</p>
<p>Doing upfront image processing also adds a performance penalty to your page loads. For every uploaded image, all versions need to be created before the response can be sent to the client. Doing such heavy work in the request lifecycle is bad practice, so you are almost forced to offload the image processing into background jobs. For some inexplicable reason none of the popular gems provides a background adapter out of the box.</p>
<h2 id="how-does-on-the-fly-image-processing-work">How Does on the Fly Image Processing Work?</h2>
<p>Instead of generating specific versions upfront, the correct image is generated whenever a client requests it. In practice the image URL usually contains parameters that specify the exact size of the image. Whenever such an image is requested the application extracts these parameters and generates a new image based on the original.</p>
<p>To improve performance the newly generated image can be stored or cached. The next time someone requests the image with the same size the webserver can respond without having to forward the request to the application.</p>
<p>Embedding the image size information in the URL allows you to experiment with it. Changing image sizes and experimenting with them doesn't mean additional work. The system generates new sizes as there is need.</p>
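<p>As a toy illustration of the mechanism (not the API of any particular gem), the requested size can be parsed from the URL path and each generated version memoized:</p>

```ruby
# Toy sketch: extract the requested dimensions from the URL path,
# generate that version on first request, and serve the cached copy
# afterwards. The actual resizing is injected; a real app would call
# ImageMagick or similar.
class OnTheFlyResizer
  def initialize(originals:, resize:)
    @originals = originals  # id => original image data
    @resize = resize        # callable: (data, width, height) -> resized data
    @cache = {}
  end

  # Expects paths like "/images/42/300x200.jpg".
  def call(path)
    id, width, height = path.match(%r{/images/(\w+)/(\d+)x(\d+)})&.captures
    raise ArgumentError, "unrecognized path: #{path}" unless id
    @cache[[id, width, height]] ||=
      @resize.call(@originals.fetch(id), width.to_i, height.to_i)
  end
end
```

<p>In a real setup the cache would live on disk or behind a CDN so the web server can answer repeat requests without touching the application at all.</p>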
<h2 id="which-ruby-file-upload-gems-offer-on-the-fly-image-processing">Which Ruby File Upload Gems Offer on the Fly Image Processing?</h2>
<p>The most established gem is probably <a href="https://github.com/markevans/dragonfly">Dragonfly</a> by Mark Evans. It allows you to handle uploads in any Rack based Ruby application and can process files on the fly. Dragonfly is also able to process non-image files and supports user defined processes.</p>
<p><a href="https://github.com/refile/refile">Refile</a> is the successor of CarrierWave and Jonas Nicklas' third attempt at getting file uploads right. While it's still a pretty new gem, it has nice features like AJAX file uploads out of the box.</p>
<p>Speaking of new, there is also a third contender, the <a href="https://github.com/stravid/mountable_file_server">MountableFileServer</a> gem. Inspired by Dragonfly and Refile, this is my attempt at building a simple file upload solution that supports on the fly image processing. Although there is no official release yet, it is already used in production and lets us build sophisticated interfaces that deal with file uploads.</p>
Speed Up Ember.js List Rendering By Examplehttp://strauss.io/blog/2015-speed-up-ember-js-list-rendering-by-example.html2015-08-03T15:07:00Z2021-03-20T17:04:17+01:00David Strauß<p>While Ember.js helps you a lot when it comes to building robust web applications it's not an out of the box solution for building mobile apps. Building a snappy app for already underpowered mobile devices requires work and deliberate thinking about performance.</p>
<p>My recent article about <a href="http://strauss.io/blog/2015-lessons-learned-building-a-fast-ember-js-mobile-app.html">lessons learned while building an Ember.js mobile app for triathlon results</a> showed that rendering lists is a point where Ember.js doesn't shine by default.</p>
<h2 id="what-are-possible-techniques-to-speed-up-list-rendering-in-emberjs">What Are Possible Techniques to Speed up List Rendering in Ember.js?</h2>
<ul>
<li>
<p>Use pagination to limit rendering to a specific number of entries. The fewer elements are displayed on a page, the faster the list will be rendered. Users have to use some kind of navigation to move between pages. Infinite scroll is very similar and can also work. Make sure that rendering a new batch of entries doesn't require a re-render of already visible entries.</p>
</li>
<li>
<p>Only render entries that are currently visible to the user. When the user starts scrolling you have to add and remove entries accordingly. This technique keeps the number of rendered entries to the absolute minimum. Scroll event handling in mobile browsers and handling entries with varying heights can make this tricky. <code>Ember.ListView</code> and <code>ember-cloaking</code> are probably worth looking into.</p>
</li>
<li>
<p>Reduce the number of DOM elements per entry. A single entry constructed from ten independent DOM elements seems innocent enough. But when you have four hundred of these your list suddenly consists of four thousand DOM elements. The fewer DOM elements it has to deal with the happier the mobile browser is.</p>
</li>
<li>
<p>Use custom functions and helpers to construct the HTML manually. By sidestepping Ember.js and doing it manually you can save a lot of CPU cycles that would be otherwise spent within the Ember.js source code. Needless to say sidestepping Ember.js has its downsides.</p>
</li>
</ul>
<h2 id="which-techniques-did-you-pick">Which Techniques Did You Pick?</h2>
<p>The first step was replacing the built-in <code>each</code> helper with a custom helper that constructed the HTML manually.</p>
<figure>
<figcaption>
<p>Initial version with the default each helper
</p>
</figcaption>
<pre class="highlight handlebars"><code>
<span class="nt"><ol></span>
<span class="k">{{#</span><span class="nn">each</span> <span class="nv">result</span> <span class="nv">in</span> <span class="nv">filteredResults</span><span class="k">}}</span>
<span class="nt"><li</span> <span class="na">class=</span><span class="s">"result-item"</span><span class="nt">></span>
<span class="k">{{#</span><span class="nn">link-to</span> <span class="s1">'results.team'</span> <span class="nv">result</span><span class="p">.</span><span class="nv">id</span><span class="k">}}</span>
// ...
<span class="k">{{/</span><span class="nn">link-to</span><span class="k">}}</span>
<span class="nt"></li></span>
<span class="k">{{/</span><span class="nn">each</span><span class="k">}}</span>
<span class="nt"></ol></span>
</code></pre>
</figure>
<figure>
<figcaption>
<p>Updated version with the custom helper
</p>
</figcaption>
<pre class="highlight handlebars"><code>
<span class="nt"><ol></span>
<span class="k">{{</span><span class="nv">each-result</span> <span class="nv">filteredResults</span><span class="k">}}</span>
<span class="nt"></ol></span>
</code></pre>
</figure>
<p>The <code>each-result</code> helper is not complicated; the concept is straightforward. It takes an array of entries and builds the entire list via string concatenation. The resulting string is returned and inserted into the DOM via Ember.js. Since most attributes need formatting, like the zero padded rank, the helper relies on other custom helpers.</p>
<figure>
<figcaption>
<p>app/helpers/each-result.js
</p>
</figcaption>
<pre class="highlight javascript"><code>
<span class="kr">import</span> <span class="nx">Ember</span> <span class="nx">from</span> <span class="s1">'ember'</span><span class="p">;</span>
<span class="kr">import</span> <span class="p">{</span> <span class="nx">formatListRank</span> <span class="p">}</span> <span class="nx">from</span> <span class="s1">'./format-list-rank'</span><span class="p">;</span>
<span class="kr">import</span> <span class="p">{</span> <span class="nx">formatTime</span> <span class="p">}</span> <span class="nx">from</span> <span class="s1">'./format-time'</span><span class="p">;</span>
<span class="kr">import</span> <span class="p">{</span> <span class="nx">joinArray</span> <span class="p">}</span> <span class="nx">from</span> <span class="s1">'./join-array'</span><span class="p">;</span>
<span class="kr">export</span> <span class="kd">function</span> <span class="nx">eachResult</span><span class="p">(</span><span class="nx">parameters</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="k">new</span> <span class="nx">Ember</span><span class="p">.</span><span class="nx">Handlebars</span><span class="p">.</span><span class="nx">SafeString</span><span class="p">(</span>
<span class="nx">parameters</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="nx">map</span><span class="p">(</span><span class="kd">function</span><span class="p">(</span><span class="nx">result</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="s1">'<li>'</span>
<span class="o">+</span> <span class="s1">'<a href="/ergebnisse/details/'</span> <span class="o">+</span> <span class="nx">result</span><span class="p">.</span><span class="nx">id</span> <span class="o">+</span> <span class="s1">'">'</span>
<span class="o">+</span> <span class="s1">'<span>'</span> <span class="o">+</span> <span class="nx">formatListRank</span><span class="p">([</span><span class="nx">result</span><span class="p">])</span> <span class="o">+</span> <span class="s1">'</span>'</span>
<span class="o">+</span> <span class="s1">'<span>'</span>
<span class="o">+</span> <span class="s1">'<span>'</span> <span class="o">+</span> <span class="nx">result</span><span class="p">.</span><span class="nx">number</span> <span class="o">+</span> <span class="s1">'</span>'</span>
<span class="o">+</span> <span class="s1">'<span>'</span> <span class="o">+</span> <span class="nx">result</span><span class="p">.</span><span class="nx">teamName</span> <span class="o">+</span> <span class="s1">'</span>'</span>
<span class="o">+</span> <span class="s1">'<span>'</span> <span class="o">+</span> <span class="nx">formatTime</span><span class="p">([</span><span class="nx">result</span><span class="p">.</span><span class="nx">totalTime</span><span class="p">])</span> <span class="o">+</span> <span class="s1">'</span>'</span>
<span class="o">+</span> <span class="s1">'<span>'</span> <span class="o">+</span> <span class="nx">joinArray</span><span class="p">([</span><span class="nx">result</span><span class="p">.</span><span class="nx">teamMembers</span><span class="p">])</span> <span class="o">+</span> <span class="s1">'</span>'</span>
<span class="o">+</span> <span class="s1">'<ul>'</span>
<span class="o">+</span> <span class="s1">'<li>'</span> <span class="o">+</span> <span class="nx">formatTime</span><span class="p">([</span><span class="nx">result</span><span class="p">.</span><span class="nx">timeOfSwimmer</span><span class="p">])</span> <span class="o">+</span> <span class="s1">'</li>'</span>
<span class="o">+</span> <span class="s1">'<li>'</span> <span class="o">+</span> <span class="nx">formatTime</span><span class="p">([</span><span class="nx">result</span><span class="p">.</span><span class="nx">timeOfBiker</span><span class="p">])</span> <span class="o">+</span> <span class="s1">'</li>'</span>
<span class="o">+</span> <span class="s1">'<li>'</span> <span class="o">+</span> <span class="nx">formatTime</span><span class="p">([</span><span class="nx">result</span><span class="p">.</span><span class="nx">timeOfRunner</span><span class="p">])</span> <span class="o">+</span> <span class="s1">'</li>'</span>
<span class="o">+</span> <span class="s1">'</ul>'</span>
<span class="o">+</span> <span class="s1">'</span>'</span>
<span class="o">+</span> <span class="s1">'</a>'</span>
<span class="o">+</span> <span class="s1">'</li>'</span><span class="p">;</span>
<span class="p">}).</span><span class="nx">join</span><span class="p">(</span><span class="s1">''</span><span class="p">)</span>
<span class="p">);</span>
<span class="p">}</span>
<span class="kr">export</span> <span class="k">default</span> <span class="nx">Ember</span><span class="p">.</span><span class="nx">HTMLBars</span><span class="p">.</span><span class="nx">makeBoundHelper</span><span class="p">(</span><span class="nx">eachResult</span><span class="p">);</span>
</code></pre>
</figure>
<p>Initially the mobile app used inline SVG icons because of their convenience. A single list entry had four different icons: one for each of the three disciplines swimming, biking and running, and a fourth one for the link to the details page. These inline icons dramatically increased the number of DOM elements needed. The fact that every icon itself contained a lot of characters due to the <code><path></code> element only made matters worse.</p>
<p>Besides using the custom <code>each-result</code> helper, a second step reduced the render time further. Switching to a PNG based sprite-sheet and removing the SVG icons dramatically reduced the number of DOM elements needed, which in turn led to a decrease in render time.</p>
<h2 id="how-much-did-the-render-speed-improve">How much did the render speed improve?</h2>
<p>Without any of the improvements the initial render time of the list hovered around 1200 ms. Navigating to a different screen and then coming back to the list resulted in 850 ms spent re-rendering the list.</p>
<p>Switching to the custom <code>each-result</code> helper reduced the initial render time to 130 ms. The subsequent re-render time of the list when coming from a different screen dropped to 80 ms.</p>
<p>The second step, reducing the number of DOM elements used per list entry, improved the render time even further. The initial render time went down to 50 ms and the re-render time clocked in at 25 ms.</p>
<h2 id="what-should-i-be-aware-of">What should I be aware of?</h2>
<p>Switching from inline SVG icons to a PNG based sprite-sheet complicates the development workflow. At least if you intend to experiment and switch icons. In terms of developer experience sprite-sheets are inferior to inline SVG icons. But compared to the downsides of sidestepping Ember.js with the custom helper it's negligible.</p>
<p>Using a custom helper as seen above has downsides; depending on the app it can even be impossible to use this technique. The most obvious limitation is the missing bindings. Whenever something changes, even if it's only a single attribute of one list entry, the entire list has to be re-rendered. Because Ember.js is sidestepped, it has no knowledge of the template and can't update specific parts.</p>
<p>Another limitation is the missing support for actions and links. As seen in the original code the <code>link-to</code> helper is used to link a list entry to a details page. In the custom helper the <code>link-to</code> is replaced by a common <code><a></code> element. While this works it's not perfect: since Ember.js doesn't know about the link, clicking it will result in a full page load.</p>
<p>To get the normal behaviour back we have to add a little workaround to the route. When the route is activated we use jQuery to bind the <code>click</code> event of every relevant link within the list. The callback grabs the <code>href</code> attribute of the clicked link and instructs the router to transition to it. This restores the old behaviour where you can navigate the mobile app without having to do full pageloads.</p>
<figure>
<pre class="highlight javascript"><code><span class="kr">import</span> <span class="nx">Ember</span> <span class="nx">from</span> <span class="s1">'ember'</span><span class="p">;</span>
<span class="kr">export</span> <span class="k">default</span> <span class="nx">Ember</span><span class="p">.</span><span class="nx">Route</span><span class="p">.</span><span class="nx">extend</span><span class="p">({</span>
<span class="na">activate</span><span class="p">:</span> <span class="kd">function</span><span class="p">()</span> <span class="p">{</span>
<span class="kd">var</span> <span class="nx">self</span> <span class="o">=</span> <span class="k">this</span><span class="p">;</span>
<span class="nx">Ember</span><span class="p">.</span><span class="nx">$</span><span class="p">(</span><span class="s1">'body'</span><span class="p">).</span><span class="nx">on</span><span class="p">(</span><span class="s1">'click'</span><span class="p">,</span> <span class="s1">'.result-item a'</span><span class="p">,</span> <span class="kd">function</span><span class="p">(</span><span class="nx">event</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">event</span><span class="p">.</span><span class="nx">preventDefault</span><span class="p">();</span>
<span class="nx">self</span><span class="p">.</span><span class="nx">transitionTo</span><span class="p">(</span><span class="nx">$</span><span class="p">(</span><span class="k">this</span><span class="p">).</span><span class="nx">attr</span><span class="p">(</span><span class="s1">'href'</span><span class="p">));</span>
<span class="p">});</span>
<span class="p">},</span>
<span class="na">deactivate</span><span class="p">:</span> <span class="kd">function</span><span class="p">()</span> <span class="p">{</span>
<span class="nx">Ember</span><span class="p">.</span><span class="nx">$</span><span class="p">(</span><span class="s1">'body'</span><span class="p">).</span><span class="nx">off</span><span class="p">(</span><span class="s1">'click'</span><span class="p">,</span> <span class="s1">'.result-item a'</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">});</span>
</code></pre>
</figure>
<h2 id="summary">Summary</h2>
<p>While rendering long lists is certainly not a strength of Ember.js, it is absolutely possible to improve the render speed. By using a custom helper that builds the list via string concatenation and reducing the number of DOM elements by dropping inline SVG icons, it was possible to cut the initial render time from <strong>1200 ms</strong> down to only <strong>50 ms</strong>.</p>
<p>The techniques used to accomplish such a massive performance boost also have their downsides. The absence of bindings, actions and link helpers limits the number of apps where a technique such as the custom helper is feasible.</p>
<p><strong>Thanks for reading, if you have any additional questions on this topic hit me up on Twitter or via email.</strong></p>
Lessons Learned: Building a Fast Ember.js Mobile Apphttp://strauss.io/blog/2015-lessons-learned-building-a-fast-ember-js-mobile-app.html2015-07-21T13:20:00Z2021-03-20T17:04:17+01:00David Strauß<p>The traffic light turns green and you steer your Tesla around the bend, leaving the city and its jammed streets behind you. Lush green meadows reaching to the horizon surround the open road ahead of you.</p>
<p>You floor the gas pedal and observe how your body gets pushed into the seat. The meadows start to disappear. By the time they are reduced to a green blur you are already smiling in delight.</p>
<p><strong>Your users deserve the same delight. Put a smile on their face with a fast Ember.js mobile app.</strong></p>
<p>We built an <a href="http://ergebnis.g-sport.at/trumer-triathlon-2015/willkommen">Ember.js mobile app for the Trumer Triathlon</a> in Austria. The mobile app allowed athletes and spectators to view race results on their mobile phones. It included search, different rankings and detailed results of every athlete. Here is what we learned.</p>
<h2 id="set-a-clear-performance-goal">Set a clear performance goal</h2>
<p>If you want to build a fast mobile app you have to define what “fast” means. With such a goal you can constantly check where you are in terms of performance and if you are going in the right direction.</p>
<p>Don’t define some arbitrary numbers which you then check in the render speed tab of Ember Inspector. Pick a device from the lower end of the spectrum instead. Make sure you have this device on hand. Now use it to visit a few of your favourite websites. Embrace the pain while doing that.</p>
<p>At this point you are ready to settle on a number. Think about this marvellous device, think about the person holding it. How long should it take to load and display your mobile app? </p>
<p>We decided the mobile app should load and display the main competition in under 30 seconds on a Samsung Galaxy Gio GT-S5660 running Android 2.2.1. Once the app was already loaded rendering the biggest list of athletes should happen in under ten seconds. </p>
<h2 id="develop-with-realistic-data">Develop with realistic data</h2>
<p>Scratch the word “develop”, this is important from the start. At every step of the process you should use realistic data. In our case we just took the data from last year's event and duplicated a few entries to match the number of expected athletes.</p>
<p>If such data is not available, fake it and make sure it's diverse. For example, if you are dealing with people's names like we did, your data during development should contain multiple examples of these cases:</p>
<ul>
<li>A short name like “Xi”.</li>
<li>A long name like “Marvin Kreisecker”.</li>
<li>An even longer name like “Johann Gottfried Graf von Tattenbach zu Eberschwang”.</li>
</ul>
<p>Don’t make the mistake of building a mobile app that looks great with uniform data during development but breaks once it is exposed to real production data.</p>
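<p>Generating such diverse fake data takes only a few lines. A minimal sketch, assuming an athlete needs nothing more than a start number and a name (the real data set obviously has more fields):</p>

```ruby
# Cycle through a small pool of edge-case names until the fake data set
# matches the expected number of athletes. The pool reuses the example
# names from above; the record shape is illustrative.
NAME_POOL = [
  'Xi',
  'Marvin Kreisecker',
  'Johann Gottfried Graf von Tattenbach zu Eberschwang'
].freeze

def fake_athletes(count)
  (0...count).map do |i|
    { number: i + 1, name: NAME_POOL[i % NAME_POOL.size] }
  end
end
```

<p>Rendering 400 of these entries during development immediately shows whether the layout survives both the shortest and the longest name.</p>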
<h2 id="do-as-little-work-as-possible-on-the-device">Do as little work as possible on the device</h2>
<p>Our mobile app displays certain information that is not directly available from the data export we get from the timekeeper. Instead of loading, aggregating and calculating this additional information on every device we let the server do the hard work.</p>
<p>The server periodically loads the raw data export from the timekeeper and makes a few hundred additional requests to augment the athletes' data. Doing this allows the mobile app to load at most three JSON files where everything is already pre-calculated. Its only job is displaying the data.</p>
<p>Since the timekeeper already exports the data as JSON we could have used that API directly from the mobile app. There would have been no need for us to develop a custom backend. But by building one nevertheless, we were able to remove a big burden from the mobile app, which led to better performance.</p>
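<p>The pre-calculation step can be sketched like this. The field names, the rank calculation and the time formatting are invented for illustration; the real export and the augmentation requests look different.</p>

```ruby
require 'json'

# Sketch of the "do the work on the server" idea: a periodic job takes
# the raw timekeeper export, pre-calculates everything the app needs
# (here just ranks and formatted totals), and serializes a document the
# mobile app can render directly.
def precalculate(raw_athletes)
  ranked = raw_athletes.sort_by { |athlete| athlete[:total_seconds] }

  ranked.each_with_index.map do |athlete, index|
    {
      rank: index + 1,
      name: athlete[:name],
      # Pre-format the total time so the device only has to display it.
      total: format('%d:%02d:%02d',
                    athlete[:total_seconds] / 3600,
                    (athlete[:total_seconds] % 3600) / 60,
                    athlete[:total_seconds] % 60)
    }
  end
end

raw = [
  { name: 'Xi',                total_seconds: 7_512 },
  { name: 'Marvin Kreisecker', total_seconds: 7_201 }
]

# The periodic job would write this to one of the JSON files the
# mobile app loads.
results_json = JSON.generate(precalculate(raw))
```

<p>Every device then downloads the same finished document instead of repeating the aggregation itself.</p>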
<h2 id="improve-rendering-of-lists">Improve rendering of lists</h2>
<p>Before even starting out we knew we would get into trouble with performance when it comes to rendering lists. The triathlon featured about 400 athletes in the main category so we had to display a list of rankings with around 400 entries. This, combined with the fact that Ember.js is not particularly famous for rendering big lists, especially on mobile devices, made us think about solutions early on.</p>
<p>The simplest thing you can do to improve list rendering is rendering fewer items. There are two general ways to accomplish this:</p>
<ol>
<li>
<p>Use pagination to display only <code>N</code> entries at a time. In our opinion this is not a great user experience in the first place. Secondly, how do we figure out what <code>N</code> is? The differences between our poor Samsung Galaxy Gio and a modern iPhone will be huge. Either we punish one type of device or we would have to figure out a dynamic <code>N</code> based on device performance.</p>
</li>
<li>
<p>Use something like <code>Ember.ListView</code> or <code>ember-cloaking</code> to only render entries that are visible. Sounds like a perfect solution; unfortunately such libraries involve scroll events, which we are wary of. In combination with the new kid on the block called Glimmer and old mobile devices, this would have meant a ton of manual testing to make sure these libraries really work and don't have some weird glitches.</p>
</li>
</ol>
<p>In the end we abandoned both ideas due to time constraints and complexity and went with a custom helper that took an array of results and concatenated them into a huge string.</p>
<p>Ditching bindings and Ember.js template helpers was only possible because there was only a link to the details page and it was unnecessary to update and re-render a single list entry.</p>
<p>While this was definitely not a beautiful solution it worked flawlessly and allowed us to render the list of 400 entries in a short time. You can read <a href="http://strauss.io/blog/2015-speed-up-ember-js-list-rendering-by-example.html">Speed Up Ember.js List Rendering By Example</a> to learn more about this specific technique.</p>
<h2 id="remove-inline-svgs">Remove inline SVGs</h2>
<p>Due to convenience we used inline SVGs for all icons throughout the mobile app. A single list entry had four of them. One for each of the three disciplines swimming, biking and running. And a fourth one for the link to the details page.</p>
<p>With a list consisting of 400 entries this meant the DOM contained at least 1,600 additional SVG nodes. Switching to a PNG based sprite-sheet and removing the SVG icons dramatically reduced the render time further.</p>
<p><strong>I’m happy to answer questions and get into more detail if you are interested. Just hit me up via Twitter or email. Thank you for your time.</strong></p>
Alternative Interactor Implementation in Lotushttp://strauss.io/blog/2015-alternative-interactor-implementation-in-lotus.html2015-05-12T05:46:00Z2021-03-20T17:04:17+01:00David Strauß<p>Lotus comes with a built-in interactor module that can be used to implement interactor classes. An interactor is basically an object that represents a use case of an application. It uses entities and repositories to accomplish the task at hand. The interactor usually returns the outcome of the use case to the caller. In a situation where something does not go according to plan, an error should be communicated to the caller.</p>
<p>The interactor module of Lotus uses a result object for both of these things; in my opinion this is not optimal. This implementation requires the caller to check whether the use case performed correctly or whether there was a failure. The following code sample illustrates how a controller action would make use of such an interactor. Calling the interactor returns a result object which is then used to check if everything turned out as expected. If that is not the case, the errors are returned in the else branch.</p>
<figure>
<pre class="highlight ruby"><code><span class="k">module</span> <span class="nn">Datsu::Controllers::Identities</span>
<span class="k">class</span> <span class="nc">Create</span>
<span class="kp">include</span> <span class="no">Datsu</span><span class="o">::</span><span class="no">Action</span>
<span class="n">expose</span> <span class="ss">:identity</span>
<span class="n">params</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:identity</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:email</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="n">param</span> <span class="ss">:password</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">initialize</span><span class="p">(</span><span class="n">interactor</span> <span class="o">=</span> <span class="no">Interactors</span><span class="o">::</span><span class="no">Identity</span><span class="o">::</span><span class="no">Create</span><span class="p">)</span>
<span class="vi">@interactor</span> <span class="o">=</span> <span class="n">interactor</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">call</span><span class="p">(</span><span class="n">params</span><span class="p">)</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">interactor</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">params</span><span class="p">).</span><span class="nf">call</span>
<span class="k">if</span> <span class="n">result</span><span class="p">.</span><span class="nf">success?</span>
<span class="vi">@identity</span> <span class="o">=</span> <span class="n">result</span><span class="p">.</span><span class="nf">identity</span>
<span class="k">else</span>
<span class="n">halt</span> <span class="mi">422</span><span class="p">,</span> <span class="no">ErrorSerializers</span><span class="o">::</span><span class="no">Hash</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">result</span><span class="p">.</span><span class="nf">errors</span><span class="p">).</span><span class="nf">to_json</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>I find this branching logic using the interactor's result distracting. In most situations the caller of the interactor assumes that the use case was performed without failures. Therefore the <code>success?</code> method is not needed and only introduces branching logic into the application code. In addition there is the possibility that failures go unnoticed when the application code does not check the result. Since use cases are at the core of an application, there should be no way for one to fail silently.</p>
<p>A different approach to interactors that I'm using in my applications takes this into account. While an interactor still returns a result object, it is different from the one described above. The returned object only contains the resulting values from the use case that are of interest to the caller. There is no <code>success?</code> method to check the status of a use case. It is explicitly assumed that calling an interactor performs the use case successfully.</p>
<p>In case something does go wrong the interactor raises a specific error. This ensures that there are no silent failures when working with an interactor. The caller can decide if it wants to handle the error cases. If there are multiple scenarios where something can go wrong, the different errors are intention revealing and make the code easier to understand. The first time I read about this type of implementation of use cases was in <a href="http://hawkins.io/2014/01/writing_use_cases/">Adam Hawkins work</a>.</p>
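<p>An interactor built this way could look roughly like the following sketch. The repository collaborator and its methods are assumptions for illustration; only the <code>EmailAlreadyInUse</code> error matches the controller example in this article.</p>

```ruby
# Sketch of an error-raising interactor: the result object only carries
# values of interest, and failures surface as intention revealing
# errors instead of a success? flag.
module Interactors
  module Identity
    class Create
      EmailAlreadyInUse = Class.new(StandardError)

      Result = Struct.new(:identity)

      # The repository is an assumed collaborator; injecting it keeps
      # the interactor testable.
      def initialize(repository)
        @repository = repository
      end

      def call(params)
        if @repository.email_taken?(params[:email])
          # Fail loudly instead of returning a result the caller has
          # to remember to inspect.
          raise EmailAlreadyInUse
        end

        Result.new(@repository.create(params))
      end
    end
  end
end
```

<p>A caller that forgets to rescue <code>EmailAlreadyInUse</code> crashes instead of silently continuing, which is exactly the point.</p>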
<p>Using an interactor implemented as described would turn the first code example into the following.</p>
<figure>
<pre class="highlight ruby"><code><span class="k">module</span> <span class="nn">Datsu::Controllers::Identities</span>
<span class="k">class</span> <span class="nc">Create</span>
<span class="kp">include</span> <span class="no">Datsu</span><span class="o">::</span><span class="no">Action</span>
<span class="n">expose</span> <span class="ss">:identity</span>
<span class="n">params</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:identity</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:email</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="n">param</span> <span class="ss">:password</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">initialize</span><span class="p">(</span><span class="n">interactor</span> <span class="o">=</span> <span class="no">Interactors</span><span class="o">::</span><span class="no">Identity</span><span class="o">::</span><span class="no">Create</span><span class="p">)</span>
<span class="vi">@interactor</span> <span class="o">=</span> <span class="n">interactor</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">call</span><span class="p">(</span><span class="n">params</span><span class="p">)</span>
<span class="vi">@identity</span> <span class="o">=</span> <span class="vi">@interactor</span><span class="p">.</span><span class="nf">call</span><span class="p">(</span><span class="n">params</span><span class="p">[</span><span class="ss">:identity</span><span class="p">]).</span><span class="nf">identity</span>
<span class="k">rescue</span> <span class="no">Interactors</span><span class="o">::</span><span class="no">Identity</span><span class="o">::</span><span class="no">Create</span><span class="o">::</span><span class="no">EmailAlreadyInUse</span>
<span class="n">halt</span> <span class="mi">422</span><span class="p">,</span> <span class="no">ErrorSerializers</span><span class="o">::</span><span class="no">Hash</span><span class="p">.</span><span class="nf">new</span><span class="p">({</span> <span class="ss">email: </span><span class="p">[</span><span class="ss">:in_use</span><span class="p">]</span> <span class="p">}).</span><span class="nf">to_json</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>Compared to the first implementation there is no additional branching and it is clear what kind of error the code is dealing with. The interactor will also fail loudly if an error occurs and the application code does not handle the failure scenario.</p>
Loosely Coupled Push Updateshttp://strauss.io/blog/2015-loosely-coupled-push-updates.html2015-04-25T11:24:00Z2021-03-20T17:04:17+01:00David Strauß<p>I was not at RailsConf Atlanta and have not watched the DHH keynote. But I did read more than a few Tweets and blog posts about it, or should I say about Action Cable? Action Cable will be part of version five of Ruby on Rails and will integrate push updates via WebSockets. It's not yet available, but people are already quite sceptical and I'm one of them. Not sceptical of the idea of having push updates via WebSockets, though. I'm a big fan of push updates and think they are a key component of a good web application. Especially JavaScript applications can benefit a lot from push updates.</p>
<p>Push updates are great, but I'm not sure if it's a good idea to integrate them as tightly with Ruby on Rails as it currently seems. People want to build applications with push updates, like they already build applications with background job processing and sending emails. In my opinion Action Cable should be very similar to Active Job and Action Mailer. There should be a nice interface that allows the programmer to send push updates from within Ruby on Rails.</p>
<p>The actual service that is responsible for delivering the push updates should be outside of Ruby on Rails. This would be very similar to Active Job and Action Mailer. Your application code should not care about these things; it's irrelevant if you are sending emails directly from your server or use a third party service. Push updates should be loosely coupled in the same way. It makes building robust applications easier since it has a few benefits compared to a tightly integrated solution.</p>
<p><strong>Deployment of Ruby on Rails applications does not get more complicated.</strong> In most cases it should not be required to touch the push update service when you deploy your application. For example, there is already a lot of ill-informed advice when it comes to background processing and deployment. A lot of tutorials and guides encourage you to use something like <code>capistrano/sidekiq</code>. That's not a good idea: managing a service like Sidekiq is the job of the operating system, not of the deployment process of your application. Otherwise your background jobs are not processed after a scheduled server reboot. (Yes, I speak from experience. Deploying a non-trivial Ruby on Rails application in a robust way is a topic with surprisingly little information on the internet, so I had to learn by making each and every mistake myself.)</p>
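<p>For example, with systemd the operating system can keep a worker like Sidekiq running across reboots. The following is a minimal, illustrative unit file; the paths, user, and file location are assumptions, not taken from a real setup:</p>

```ini
# /etc/systemd/system/sidekiq.service (illustrative path)
[Unit]
Description=Sidekiq background worker
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/app/current
ExecStart=/usr/local/bin/bundle exec sidekiq -e production
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

<p>With such a unit in place the deployment process only needs to signal a restart; supervision and boot-time startup are the operating system's job.</p>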
<p><strong>Your application will work without push updates.</strong> In a scenario where the push update service gets overwhelmed with connections or something entirely different goes wrong your application will keep running and serving HTTP requests. Granted the user experience will degrade but that's a lot better than having your application server go down due to a problem with push updates.</p>
<p>How could this work? Let's take an Ember.js application with a Ruby on Rails backend as API. The Ember.js frontend communicates with Ruby on Rails via a JSON API as seen in countless examples. There is nothing novel to this approach. As long as the backend is up and running the application is working for our users. Now we can add the push service into the mix, this is a separate program running on the server. Whenever a <code>create</code>, <code>update</code> or <code>destroy</code> action is triggered the backend schedules a push update. Basically it puts the JSON payload into a queue before responding with it to the HTTP request. The push update service takes the payload from the queue and delivers it to all connected WebSocket clients so they receive an immediate update.</p>
<p>Even if the push update service is not working the application itself is not broken. The client is still getting its plain old HTTP response with the payload. The only thing not working is the immediate push update to all other clients.</p>
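<p>The queue hand-off described above can be sketched without any framework. The class names are illustrative, not from an actual project; the point is that the backend only knows how to enqueue, while delivery lives elsewhere:</p>

```ruby
require 'json'

# Illustrative sketch: the backend schedules push updates by enqueuing
# payloads; a separate service is responsible for delivering them.
class PushScheduler
  def initialize(queue)
    @queue = queue
  end

  # Called by the backend after a create/update/destroy action,
  # right before the JSON payload is returned in the HTTP response.
  def schedule(payload)
    @queue.push(JSON.generate(payload))
  end
end

# Stand-in for the separate push service: it drains the queue and hands
# each payload to every connected client (plain Ruby lambdas here
# instead of WebSocket connections).
class PushService
  def initialize(queue, clients)
    @queue = queue
    @clients = clients
  end

  def drain
    until @queue.empty?
      payload = @queue.pop
      @clients.each { |client| client.call(payload) }
    end
  end
end
```

<p>Even if <code>PushService</code> never runs, <code>PushScheduler#schedule</code> only enqueues, so the HTTP request/response cycle is unaffected.</p>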
Adding Migrations to a Lotus Projecthttp://strauss.io/blog/2015-adding-migrations-to-a-lotus-project.html2015-04-16T06:23:00Z2021-03-20T17:04:17+01:00David Strauß<p>Until Lotus supports migrations out of the box you have to roll your own solution.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> Thankfully this is not so difficult; let me show you how to do it. The following setup will actually work in any Ruby project that already leverages Sequel. In the end you will have the following things:</p>
<ul>
<li>A place to store your migration files.</li>
<li>A Rake task to run the migrations.</li>
<li>A Rake task to rollback the last migration.</li>
</ul>
<p>The project's <code>Rakefile</code> is the place to start. First you tell Rake where to find your custom Rake tasks and then you add an <code>environment</code> task upon which other tasks can depend. This task is responsible for loading the application's environment. Your migration tasks need the environment in order to know which database should be used for the migrations.</p>
<figure>
<figcaption>
<p>Rakefile - Tell Rake where to find the custom tasks
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="no">Rake</span><span class="p">.</span><span class="nf">add_rakelib</span> <span class="s1">'lib/tasks'</span>
</code></pre>
</figure>
<figure>
<figcaption>
<p>Rakefile - Add the environment task
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="n">task</span> <span class="ss">:environment</span> <span class="k">do</span>
<span class="nb">require_relative</span> <span class="s1">'./config/environment'</span>
<span class="no">Lotus</span><span class="o">::</span><span class="no">Application</span><span class="p">.</span><span class="nf">preload!</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>The environment task uses <code>Lotus::Application.preload!</code> to load the Lotus environment. This does not load any of your application code. If you want to load the application code use <code>Lotus::Application.preload_applications!</code> but be aware of the <a href="http://blog.testdouble.com/posts/2014-11-04-healthy-migration-habits.html">healthy migration habits</a> and don't rely on application specific code in your migrations.</p>
<p>Both the <code>migrate</code> and <code>rollback</code> tasks are heavily inspired by the <a href="http://sequel.jeremyevans.net/rdoc/files/doc/migration_rdoc.html">official Sequel migrations guide</a>. The tasks depend on the <code>environment</code> task you wrote earlier and require the Sequel gem with the migration extension. In order to run all migrations you need a connection to a database and a directory where the migrations are stored.</p>
<p>The environment variable <code>DATABASE_URL</code> is used to connect to the database; make sure to change this so it matches your own project environment. Your migrations are supposed to be in the <code>db/migrations</code> directory.</p>
<p>Rolling back a migration is a little more complicated. You need to tell Sequel the target version you want to roll back to. Since you want to roll back a single version per <code>rollback</code> invocation, you can automate the process of finding the correct version. Be aware that Sequel supports an <code>IntegerMigrator</code> and a <code>TimestampMigrator</code>. There are a few differences but for the task at hand only two are relevant: the naming of the migration files and where Sequel stores information about the schema.</p>
<p>This article focuses on the <code>TimestampMigrator</code>, therefore the schema information is stored in the <code>schema_migrations</code> table, which can be queried to find the second to last migration. Then you have to extract the version, which is the timestamp at the beginning of the filename. Once you have determined the version you can use the same command as in the migrate task and provide an additional <code>target</code> argument.</p>
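<p>The version extraction can be tried in isolation. The filename below is made up for illustration; the regular expression captures the 14-digit timestamp prefix that <code>TimestampMigrator</code> filenames start with:</p>

```ruby
# TimestampMigrator filenames begin with a 14-digit timestamp;
# that prefix is the version Sequel can roll back to.
filename = '20150404175000_create_identities_table.rb'
version = /^(\d{14})\D*/.match(filename).captures[0].to_i
# version is now 20150404175000
```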
<figure>
<figcaption>
<p>lib/tasks/db.rake
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="n">namespace</span> <span class="ss">:db</span> <span class="k">do</span>
<span class="n">desc</span> <span class="s1">'Run migrations'</span>
<span class="n">task</span> <span class="ss">:migrate</span> <span class="o">=></span> <span class="ss">:environment</span> <span class="k">do</span>
<span class="nb">require</span> <span class="s1">'sequel'</span>
<span class="no">Sequel</span><span class="p">.</span><span class="nf">extension</span> <span class="ss">:migration</span>
<span class="n">db</span> <span class="o">=</span> <span class="no">Sequel</span><span class="p">.</span><span class="nf">connect</span> <span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s1">'DATABASE_URL'</span><span class="p">)</span>
<span class="no">Sequel</span><span class="o">::</span><span class="no">Migrator</span><span class="p">.</span><span class="nf">run</span> <span class="n">db</span><span class="p">,</span> <span class="s1">'db/migrations'</span>
<span class="k">end</span>
<span class="n">desc</span> <span class="s1">'Rollback last migration'</span>
<span class="n">task</span> <span class="ss">:rollback</span> <span class="o">=></span> <span class="ss">:environment</span> <span class="k">do</span>
<span class="nb">require</span> <span class="s1">'sequel'</span>
<span class="no">Sequel</span><span class="p">.</span><span class="nf">extension</span> <span class="ss">:migration</span>
<span class="n">db</span> <span class="o">=</span> <span class="no">Sequel</span><span class="p">.</span><span class="nf">connect</span> <span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s1">'DATABASE_URL'</span><span class="p">)</span>
<span class="n">table_name</span> <span class="o">=</span> <span class="ss">:schema_migrations</span>
<span class="k">if</span> <span class="n">db</span><span class="p">.</span><span class="nf">tables</span><span class="p">.</span><span class="nf">include?</span><span class="p">(</span><span class="n">table_name</span><span class="p">)</span> <span class="o">&&</span> <span class="n">db</span><span class="p">[</span><span class="n">table_name</span><span class="p">].</span><span class="nf">count</span> <span class="o">></span> <span class="mi">1</span>
<span class="n">last_two_migrations</span> <span class="o">=</span> <span class="n">db</span><span class="p">[</span><span class="n">table_name</span><span class="p">].</span><span class="nf">order</span><span class="p">(</span><span class="ss">:filename</span><span class="p">).</span><span class="nf">last</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span>
<span class="n">filename</span> <span class="o">=</span> <span class="n">last_two_migrations</span><span class="p">[</span><span class="mi">1</span><span class="p">][</span><span class="ss">:filename</span><span class="p">]</span>
<span class="n">version</span> <span class="o">=</span> <span class="sr">/^(\d{14})\D*/</span><span class="p">.</span><span class="nf">match</span><span class="p">(</span><span class="n">filename</span><span class="p">).</span><span class="nf">captures</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="nf">to_i</span>
<span class="k">else</span>
<span class="n">version</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">end</span>
<span class="no">Sequel</span><span class="o">::</span><span class="no">Migrator</span><span class="p">.</span><span class="nf">run</span> <span class="n">db</span><span class="p">,</span> <span class="s1">'db/migrations'</span><span class="p">,</span> <span class="ss">target: </span><span class="n">version</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>Having all this in place, the only thing left to do is writing migrations. Sequel has excellent documentation on <a href="http://sequel.jeremyevans.net/rdoc/files/doc/schema_modification_rdoc.html">what schema modifications are possible</a> and the <a href="http://sequel.jeremyevans.net/rdoc/files/doc/migration_rdoc.html">migrations guide</a> will help you understand what can be done in a migration.</p>
<p>You can run your migrations with <code>bundle exec rake db:migrate</code> and roll back the last migration with <code>bundle exec rake db:rollback</code>.</p>
<p>An example migration with this setup might look like this:</p>
<figure>
<figcaption>
<p>db/migrations/20150404175000_create_identities_table.rb
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="no">Sequel</span><span class="p">.</span><span class="nf">migration</span> <span class="k">do</span>
<span class="n">change</span> <span class="k">do</span>
<span class="n">create_table</span><span class="p">(</span><span class="ss">:identities</span><span class="p">)</span> <span class="k">do</span>
<span class="n">primary_key</span> <span class="ss">:id</span>
<span class="no">String</span> <span class="ss">:email</span><span class="p">,</span> <span class="ss">null: </span><span class="kp">false</span>
<span class="no">String</span> <span class="ss">:password_digest</span><span class="p">,</span> <span class="ss">null: </span><span class="kp">false</span>
<span class="n">index</span> <span class="ss">:email</span><span class="p">,</span> <span class="ss">unique: </span><span class="kp">true</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>You can track the progress for the <a href="https://github.com/lotus/lotus/issues/136">generator</a>, <a href="https://github.com/lotus/lotus/issues/137">migrate</a> and <a href="https://github.com/lotus/lotus/issues/138">rollback</a> support on GitHub. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Validations in a Lotus JSON API apphttp://strauss.io/blog/2015-validations-in-a-lotus-json-api-app.html2015-04-07T14:21:00Z2021-03-20T17:04:17+01:00David Strauß<p>One thing that really stuck with me while reading/watching/learning about Uncle Bob's Clean Architecture was the idea that invalid data should not cross the boundary between the delivery mechanism and the application code.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> As a consequence the application code is free of validations and only works with proper data.</p>
<p>To illustrate why this is a desirable result we take a look at Ruby on Rails. Let's assume we have a web form and users can sign up. They have to provide an email address and a password. In a vanilla Ruby on Rails project you would have an ActiveRecord model <code>Identity</code> and add two presence validations to it. While this works for this case it starts breaking when you add the ability for admins to create new users without providing a password. Now you have to start mutilating your application code by skipping validations in certain places.</p>
<p><strong>Having to skip validations is a sign that validating data occurs in the wrong place.</strong></p>
<p>The correct place to validate that an email and a password are provided is at the boundary to the application. The user action has a different entry point than the admin action. Therefore it is easy to specify the proper validation rules for each entry point and prevent invalid data from entering the system. Lotus has this mechanism built into its controller actions. These actions are the entry points to your application when HTTP is used as a delivery mechanism.</p>
<figure>
<figcaption>
<p>Action that requires both email and password parameter
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="nb">require</span> <span class="s1">'lotus/controller'</span>
<span class="k">class</span> <span class="nc">Create</span>
<span class="kp">include</span> <span class="no">Lotus</span><span class="o">::</span><span class="no">Action</span>
<span class="n">params</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:identity</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:email</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="n">param</span> <span class="ss">:password</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">call</span><span class="p">(</span><span class="n">params</span><span class="p">)</span>
<span class="n">halt</span> <span class="mi">422</span> <span class="k">unless</span> <span class="n">params</span><span class="p">.</span><span class="nf">valid?</span>
<span class="c1"># Work with proper parameters</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>Even if you don't know Lotus, the code above should be understandable. We specify that we expect the parameters <code>identity[email]</code> and <code>identity[password]</code>. First of all this acts as a whitelist: parameters not specified get stripped before they enter the application. <code>type: String</code> tells Lotus to coerce the parameter, so your application code does not have to be bothered with type checks because the data will always have the correct type. The final part is the presence validation making sure that the parameters are actually present.</p>
<p>Should the validations fail we halt the request and return a <code>422 Unprocessable Entity</code> response. This is exactly what we want in terms of incoming data validations. Adding an admin action where the password is not required is easy because the validations happen at the boundary and not inside the application.</p>
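<p>The difference between the two entry points can be illustrated with a framework-free sketch. The rule sets and the <code>missing_params</code> helper are made up for illustration and are not the Lotus API:</p>

```ruby
# Each boundary declares its own required parameters; the rules never
# leak into the application code behind the boundary.
USER_SIGNUP_RULES  = [:email, :password] # users must provide both
ADMIN_CREATE_RULES = [:email]            # admins may omit the password

# Returns the required keys that are missing or blank.
def missing_params(rules, params)
  rules.reject { |key| params[key] && !params[key].empty? }
end
```

<p>The same parameters can pass one boundary and fail the other, which is exactly why skipping validations inside the application is never necessary.</p>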
<p>Since this article has “JSON” in the title we are not finished yet. In order to show the user what went wrong, most JavaScript frameworks expect a response with the errors.<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup> If you are using Ember.js with Ember Data like me the expected response looks like this:</p>
<figure>
<pre class="highlight json"><code><span class="p">{</span><span class="w">
</span><span class="s2">"errors"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"email"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"presence"</span><span class="p">],</span><span class="w">
</span><span class="s2">"password"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"presence"</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</figure>
<p>How can we produce such a response in Lotus? Simple: we pass <code>halt</code> a message as the second argument, which will be the body of our response. In order to achieve this in a clean way we write a small module and a class. The module holds our unified logic for halting the request if the parameters are not valid. The class serializes the Lotus validation errors object into a consumable format. (If you know a better way to accomplish this I would love to hear from you.)</p>
<figure>
<pre class="highlight ruby"><code><span class="nb">require</span> <span class="s1">'json'</span>
<span class="k">module</span> <span class="nn">ErrorSerializers</span>
<span class="k">class</span> <span class="nc">LotusValidationsErrors</span>
<span class="k">def</span> <span class="nf">initialize</span><span class="p">(</span><span class="n">errors</span><span class="p">)</span>
<span class="vi">@errors</span> <span class="o">=</span> <span class="n">errors</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">to_json</span>
<span class="no">JSON</span><span class="p">.</span><span class="nf">generate</span><span class="p">({</span> <span class="ss">errors: </span><span class="n">errors_hash</span> <span class="p">})</span>
<span class="k">end</span>
<span class="kp">private</span>
<span class="k">def</span> <span class="nf">errors_hash</span>
<span class="vi">@errors_hash</span> <span class="o">||=</span> <span class="vi">@errors</span><span class="p">.</span><span class="nf">to_h</span><span class="p">.</span><span class="nf">inject</span><span class="p">({})</span> <span class="p">{</span> <span class="o">|</span><span class="nb">hash</span><span class="p">,</span> <span class="p">(</span><span class="n">key</span><span class="p">,</span> <span class="n">value</span><span class="p">)</span><span class="o">|</span> <span class="nb">hash</span><span class="p">.</span><span class="nf">merge</span><span class="p">({</span> <span class="n">value</span><span class="p">.</span><span class="nf">first</span><span class="p">.</span><span class="nf">attribute_name</span><span class="p">.</span><span class="nf">to_sym</span> <span class="o">=></span> <span class="n">value</span><span class="p">.</span><span class="nf">map</span><span class="p">(</span><span class="o">&</span><span class="ss">:validation</span><span class="p">)</span> <span class="p">})</span> <span class="p">}.</span><span class="nf">to_h</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<figure>
<pre class="highlight ruby"><code><span class="nb">require_relative</span> <span class="s1">'../error_serializers/lotus_validations_errors'</span>
<span class="k">module</span> <span class="nn">ParameterValidation</span>
<span class="kp">private</span>
<span class="k">def</span> <span class="nf">validate!</span>
<span class="n">halt_with_errors</span> <span class="k">unless</span> <span class="n">params</span><span class="p">.</span><span class="nf">valid?</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">halt_with_errors</span>
<span class="n">halt</span> <span class="mi">422</span><span class="p">,</span> <span class="no">ErrorSerializers</span><span class="o">::</span><span class="no">LotusValidationsErrors</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">errors</span><span class="p">).</span><span class="nf">to_json</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
<p>Having these two pieces of code in place, we can include the module in our action and add a before callback to invoke the <code>validate!</code> method. Now we can clean up the <code>call</code> method and remove the guard condition. At this point the action works again as we want and it is easy to add the same behaviour to other actions. Using <code>controller.prepare</code> you can easily add it by default to all actions.</p>
<figure>
<figcaption>
<p>Refactored action using the ParameterValidation module and a before callback
</p>
</figcaption>
<pre class="highlight ruby"><code><span class="nb">require</span> <span class="s1">'lotus/controller'</span>
<span class="nb">require_relative</span> <span class="s1">'../parameter_validation'</span>
<span class="k">class</span> <span class="nc">Create</span>
<span class="kp">include</span> <span class="no">Lotus</span><span class="o">::</span><span class="no">Action</span>
<span class="kp">include</span> <span class="no">ParameterValidation</span>
<span class="n">before</span> <span class="ss">:validate!</span>
<span class="n">params</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:identity</span> <span class="k">do</span>
<span class="n">param</span> <span class="ss">:email</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="n">param</span> <span class="ss">:password</span><span class="p">,</span> <span class="ss">type: </span><span class="no">String</span><span class="p">,</span> <span class="ss">presence: </span><span class="kp">true</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">call</span><span class="p">(</span><span class="n">params</span><span class="p">)</span>
<span class="c1"># Work with proper parameters</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</figure>
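<p>A <code>controller.prepare</code> configuration could look like the following sketch. It assumes a full Lotus application; the file path is illustrative and the exact API should be checked against your Lotus version:</p>

```ruby
# config/application.rb (illustrative), inside the application's
# configure block: mix the validation behaviour into every action.
controller.prepare do
  include ParameterValidation
  before :validate!
end
```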
<div class="footnotes">
<ol>
<li id="fn:1">
<p>A quick search did not turn up a resource where Uncle Bob himself talks about this. In Adam Hawkins' <a href="http://hawkins.io/2014/01/form_objects_with_virtus/">article about Form Objects with Virtus</a> Avdi Grimm is mentioned in the context of this topic. But I do believe Adam Hawkins' article series about software architecture aligns well with Clean Architecture. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>If you are building a JavaScript application and you have client side validations, that's great. You still want your backend to decide what is valid data and what is not. Therefore you should communicate the errors to the client and show some meaningful help to your user. Otherwise it can happen that the client side validations pass but the backend rejects the data due to different requirements, and the user is left without any clue about what is actually happening. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Datsu - An Open Source Ember.js Darts Calculatorhttp://strauss.io/blog/2015-datsu-an-open-source-ember-js-darts-calculator.html2015-02-26T08:30:00Z2021-03-20T17:04:17+01:00David Strauß<p><em>For the impatient: <a href="http://datsu-demo.stravid.com/">Datsu demo</a>, <a href="https://github.com/stravid/datsu-frontend">frontend repository</a> and <a href="https://github.com/stravid/datsu-backend">backend repository</a>.</em></p>
<p>A few weeks before Christmas 2014 <a href="http://onehundredpercent.at/">Hannah Langhagel</a> gifted the edgy circle office a custom made dart board in the shape and style of the company logo including nine darts.</p>
<p>The gift was well received and we discovered that darts is an excellent and fun game. We started using <a href="http://dartrechner.de/">dartrechner.de</a> to keep track of our games. Unfortunately the custom made dart board started to disintegrate after around two thousand thrown darts and we had to order a real one.</p>
<p>In addition we were very unhappy with the darts calculator we were using. Its design and usability were subpar and we had to play “double out”, meaning you have to hit one of the smaller segments on the outside to finish a game. And to be honest we were, and still are, not good enough to play it that way.</p>
<p>At that point I decided to build a darts calculator for our own needs. The result is called <strong>Datsu</strong> and you can use it to <a href="http://datsu-demo.stravid.com/">keep track of your own darts games</a> or just play around. There are also open source repositories for the <a href="https://github.com/stravid/datsu-frontend">Ember.js frontend</a> and <a href="https://github.com/stravid/datsu-backend">Laravel 4.2 backend</a> on GitHub.</p>
<p>I’m certainly not a designer but I try my best to create nice looking things even when I’m on my own in a project. After looking at existing darts calculators, settling on a vocabulary, finding a color scheme and making some sketches in my notebook I opened up Sketch 3. With the help of a grid consisting of columns and baselines I came up with a design I would call reasonably good looking for my skills.</p>
<p>As mentioned in the headline Datsu is built with Ember.js. Ember CLI makes it really easy to start new Ember.js projects. Since I didn’t want to create a custom backend I convinced myself to try Firebase.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
<p>Creating the actual dart board in SVG was a fun exercise and reminded me how awesome SVG is. Wiring it up in the end with an Ember.js component was an easy task. Getting to a state where I could use Datsu was in general pretty straightforward up until the point where I decided it was time to write tests.</p>
<p>At this point I removed Firebase from the project and started writing my own backend. Writing tests with a Firebase backend is impossible unless you are willing to do one of three things:</p>
<ul>
<li>
<p>Let your tests run against Firebase directly which makes them slow and requires constant internet access.</p>
</li>
<li>
<p>Create a fake Firebase server from scratch that you can use in your tests.</p>
</li>
<li>
<p>Use an entirely different adapter for your models like RESTAdapter or ActiveModelAdapter while running the tests so you can use established solutions like Pretender.</p>
</li>
</ul>
<p>Since I was not willing to do any of those things I started writing my own backend in PHP with Laravel 4.2. The reason why I picked PHP instead of Ruby, which I normally use for all of my backend projects, is rather simple. I can only run simple Ruby scripts on my webspace from Host Europe. So instead of setting up a custom server for Datsu I bit the bullet and saw Laravel as another experiment to broaden my horizon.</p>
<p>Turns out Laravel is pretty great apart from PHP and all its issues. I even discovered things where it is ahead of Ruby on Rails in terms of out of the box experience. Once I had the backend up and running my attention turned back to the Ember.js application.</p>
<p>I spent a lot of time adding animations and handling slow connections. Explaining all that goes beyond this post's scope, which originally was “tell the world about Datsu”. If you have specific questions or want me to write a follow-up on certain topics, tell me on Twitter or by email.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>I would never do that in a serious project, handing over all your application data to a third party where you have no control over it is a no-go. Even more so since Firebase is a Google thing. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Ember CLI with Bourbon, Neat and Normalize.csshttp://strauss.io/blog/2015-ember-cli-mit-bourbon-neat-und-normalize-css.html2015-02-04T14:22:00Z2021-03-20T17:04:17+01:00David Strauß<p>Pretty much every one of my projects uses Bourbon, Neat and Normalize.css, and that is also true for Ember CLI projects.</p>
<figure>
<figcaption>
<p>Create a new Ember CLI project
</p>
</figcaption>
<pre class="highlight shell"><code>ember new frontend-demo
</code></pre>
</figure>
<p>For all of this to work we need Sass.</p>
<figure>
<figcaption>
<p>Install Sass support
</p>
</figcaption>
<pre class="highlight shell"><code>ember install:npm broccoli-sass
</code></pre>
</figure>
<p>Now we can install Normalize.css, Bourbon and Neat. When installing Neat it can happen that you have to choose between two Bourbon versions. If that happens, pick <code>bourbon 3.2.1</code><sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.</p>
<figure>
<figcaption>
<p>Install Normalize.css, Bourbon and Neat
</p>
</figcaption>
<pre class="highlight shell"><code>ember install:bower normalize.css
ember install:addon ember-cli-bourbon
ember install:bower neat
</code></pre>
</figure>
<p>After the installation the <code>app.scss</code> stylesheet is modified and the corresponding files are imported. In the <code>Brocfile.js</code> we import Normalize.css so that Ember CLI adds it to the vendor stylesheet. An additionally created <code>_variables.scss</code> file is used to configure Neat.</p>
<figure>
<figcaption>
<p>Add to the Brocfile.js file
</p>
</figcaption>
<pre class="highlight javascript"><code><span class="nx">app</span><span class="p">.</span><span class="kr">import</span><span class="p">(</span><span class="s1">'bower_components/normalize.css/normalize.css'</span><span class="p">);</span>
</code></pre>
</figure>
<figure>
<figcaption>
<p>app/styles/app.scss
</p>
</figcaption>
<pre class="highlight scss"><code><span class="k">@import</span> <span class="s2">"variables"</span><span class="p">;</span>
<span class="k">@import</span> <span class="s2">"bourbon"</span><span class="p">;</span>
<span class="k">@import</span> <span class="s2">"bower_components/neat/app/assets/stylesheets/neat"</span><span class="p">;</span>
<span class="nt">div</span><span class="nc">.container</span> <span class="p">{</span>
<span class="k">@include</span> <span class="nd">outer-container</span><span class="p">;</span>
<span class="p">}</span>
</code></pre>
</figure>
<figure>
<figcaption>
<p>app/styles/_variables.scss
</p>
</figcaption>
<pre class="highlight scss"><code><span class="nv">$column</span><span class="p">:</span> <span class="m">75px</span><span class="p">;</span>
<span class="nv">$gutter</span><span class="p">:</span> <span class="m">30px</span><span class="p">;</span>
<span class="nv">$grid-columns</span><span class="p">:</span> <span class="m">12</span><span class="p">;</span>
<span class="nv">$max-width</span><span class="p">:</span> <span class="m">1240px</span><span class="p">;</span>
<span class="nv">$visual-grid</span><span class="p">:</span> <span class="bp">true</span> <span class="o">!</span><span class="nb">default</span><span class="p">;</span>
<span class="nv">$border-box-sizing</span><span class="p">:</span> <span class="bp">true</span> <span class="o">!</span><span class="nb">default</span><span class="p">;</span>
</code></pre>
</figure>
<p>A modified <code>application.hbs</code> shows that everything works.</p>
<figure>
<figcaption>
<p>app/templates/application.hbs
</p>
</figcaption>
<pre class="highlight html"><code><span class="nt"><div</span> <span class="na">class=</span><span class="s">"container"</span><span class="nt">></span>
<span class="nt"><h1></span>Hello there!<span class="nt"></h1></span>
<span class="nt"></div></span>
</code></pre>
</figure>
<div class="footnotes">
<ol>
<li id="fn:1">
<p><a href="https://github.com/yapplabs/ember-cli-bourbon">Bourbon 4 is incompatible</a> with <code>libsass</code> because it uses newer Sass features. <code>libsass</code> is used by <code>node-sass</code>, which in turn is used by <code>broccoli-sass</code>. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Deploy Laravel on Shared Hostinghttp://strauss.io/blog/2015-deploy-laravel-on-shared-hosting.html2015-01-21T08:44:00Z2021-03-20T17:04:17+01:00David Strauß<p>Sometimes you have to deploy a Laravel application to a shared hosting webspace. Most of the time that implies some crucial limitations. If they do not apply to your shared hosting plan, consider yourself lucky.</p>
<ul>
<li>
<p>It is not possible to create or modify virtual hosts. Your domain is pointing to a specific folder on the webspace and the URL path maps one-to-one to the folder structure. If you can create subdomains the same limitations apply.</p>
</li>
<li>
<p>You have to use a specific version of PHP and cannot install additional extensions.</p>
</li>
<li>
<p>The only way to access the webspace is via the hoster's web interface or FTP. There is no SSH or rsync available.</p>
</li>
</ul>
<p>In order to operate a Laravel application on such a shared hosting plan you have to cover three areas. The first is getting the application to run on the webspace. Then you should automate the deploy process so you can deploy new versions with a single command. Finally, the only thing left to do is running migrations.</p>
<h2 id="setting-up-laravel-on-shared-hosting">Setting up Laravel on shared hosting</h2>
<p>First of all, make sure PHP 5.4 or greater is available and the MCrypt extension is enabled. If that is not the case, you can stop right now; sorry, you are out of luck. Create a database for your application and gather the credentials for the database and the FTP login.</p>
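<p>These requirements can be captured in a tiny helper so the check is explicit instead of guesswork. This is only a sketch; <code>meetsRequirements</code> is a made-up name and not part of Laravel.</p>

```php
<?php
// Sketch: does the hosting environment meet the minimum requirements
// for Laravel 4.2? Pass in PHP_VERSION and get_loaded_extensions().
function meetsRequirements($phpVersion, array $loadedExtensions)
{
    return version_compare($phpVersion, '5.4.0', '>=')
        && in_array('mcrypt', $loadedExtensions, true);
}
```

<p>Uploading a one-liner such as <code>var_dump(meetsRequirements(PHP_VERSION, get_loaded_extensions()));</code> to the webspace tells you right away whether the plan is usable.</p>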
<p>This article only covers how to run Laravel directly on a domain or subdomain. If you want to run it in a subfolder like <code>http://mydomain.com/my-app/</code>, you are on your own. Let's say you have a folder named <code>my-app/</code> on your webspace and you would like to put the Laravel application inside that folder. To make that work, the domain or subdomain you are using has to point to <code>my-app/public/</code>. (It is okay if this public folder does not exist yet; we will create it in a second.) <strong>If the domain or subdomain is not pointing to the public folder, Laravel will not run!</strong></p>
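<p>Assuming the hypothetical folder names from above, the target layout on the webspace looks roughly like this:</p>

```text
my-app/
├── app/
├── bootstrap/
├── public/      <- the domain or subdomain must point here
└── vendor/
```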
<p>Now it is time to add the database credentials to the environment file. During development I'm using Postgres<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>, while in production it is MySQL. To allow this, I put not only the credentials into the environment file but also which database driver Laravel should use. In the production environment Laravel reads its environment variables from the <code>.env.php</code> file; now it is your turn to fill it out.</p>
<figure>
<figcaption>
<p>.env.php
</p>
</figcaption>
<pre class="highlight php"><code><span class="cp"><?php</span>
<span class="k">return</span> <span class="k">array</span><span class="p">(</span>
<span class="s1">'ENCRYPTION_KEY'</span> <span class="o">=></span> <span class="s1">'a-very-long-random-string'</span><span class="p">,</span>
<span class="s1">'DATABASE'</span> <span class="o">=></span> <span class="s1">'mysql'</span><span class="p">,</span>
<span class="s1">'DATABASE_USER'</span> <span class="o">=></span> <span class="s1">'user'</span><span class="p">,</span>
<span class="s1">'DATABASE_PASSWORD'</span> <span class="o">=></span> <span class="s1">'secret'</span><span class="p">,</span>
<span class="s1">'DATABASE_NAME'</span> <span class="o">=></span> <span class="s1">'my-app-production'</span><span class="p">,</span>
<span class="s1">'MIGRATION_TOKEN'</span> <span class="o">=></span> <span class="s1">'another-very-long-random-string'</span>
<span class="p">);</span>
</code></pre>
</figure>
<p>For the time being, ignore the <code>MIGRATION_TOKEN</code> variable; we will use it later to run the migrations. The next code listing shows how Laravel uses these environment variables to set up the database connection.</p>
<figure>
<figcaption>
<p>app/config/database.php
</p>
</figcaption>
<pre class="highlight php"><code><span class="cp"><?php</span>
<span class="k">return</span> <span class="k">array</span><span class="p">(</span>
<span class="c1">// Lots of comments and other stuff
</span>
<span class="s1">'default'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE'</span><span class="p">],</span>
<span class="s1">'connections'</span> <span class="o">=></span> <span class="k">array</span><span class="p">(</span>
<span class="s1">'mysql'</span> <span class="o">=></span> <span class="k">array</span><span class="p">(</span>
<span class="s1">'driver'</span> <span class="o">=></span> <span class="s1">'mysql'</span><span class="p">,</span>
<span class="s1">'host'</span> <span class="o">=></span> <span class="s1">'localhost'</span><span class="p">,</span>
<span class="s1">'database'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE_NAME'</span><span class="p">],</span>
<span class="s1">'username'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE_USER'</span><span class="p">],</span>
<span class="s1">'password'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE_PASSWORD'</span><span class="p">],</span>
<span class="s1">'charset'</span> <span class="o">=></span> <span class="s1">'utf8'</span><span class="p">,</span>
<span class="s1">'collation'</span> <span class="o">=></span> <span class="s1">'utf8_unicode_ci'</span><span class="p">,</span>
<span class="s1">'prefix'</span> <span class="o">=></span> <span class="s1">''</span><span class="p">,</span>
<span class="p">),</span>
<span class="s1">'pgsql'</span> <span class="o">=></span> <span class="k">array</span><span class="p">(</span>
<span class="s1">'driver'</span> <span class="o">=></span> <span class="s1">'pgsql'</span><span class="p">,</span>
<span class="s1">'host'</span> <span class="o">=></span> <span class="s1">'localhost'</span><span class="p">,</span>
<span class="s1">'database'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE_NAME'</span><span class="p">],</span>
<span class="s1">'username'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE_USER'</span><span class="p">],</span>
<span class="s1">'password'</span> <span class="o">=></span> <span class="nv">$_ENV</span><span class="p">[</span><span class="s1">'DATABASE_PASSWORD'</span><span class="p">],</span>
<span class="s1">'charset'</span> <span class="o">=></span> <span class="s1">'utf8'</span><span class="p">,</span>
<span class="s1">'prefix'</span> <span class="o">=></span> <span class="s1">''</span><span class="p">,</span>
<span class="s1">'schema'</span> <span class="o">=></span> <span class="s1">'public'</span><span class="p">,</span>
<span class="p">)</span>
<span class="p">),</span>
<span class="c1">// More comments and stuff
</span></code></pre>
</figure>
<p>After modifying these two files we can upload the Laravel application to the webspace. Open the FTP program of your choice, enter the credentials and upload the entire application folder to <code>my-app</code>. As you can see, the public folder is now there as well.</p>
<h2 id="deploy-laravel-with-a-single-command">Deploy Laravel with a single command</h2>
<p>Deploying web applications by hand via an FTP program is bad practice; you should not do it. Opening an FTP program, selecting the application files and uploading them to the correct location is a slow and error-prone process. We can do better by using a deploy script.</p>
<p>Such a deploy script usually relies on SSH and/or rsync to do the work, but in our case these technologies are not available on shared hosting. Thankfully there is a small FTP client called LFTP that we can control from the command line. <a href="http://lftp.yar.ru/">Install LFTP</a> and come back when you are finished.</p>
<p>Our deploy script is a small Bash script that uses LFTP to upload the files to the webspace. The only thing you have to do is run <code>./deploy</code> on the command line to deploy the Laravel application. Let us start by creating the deploy script and making it executable.</p>
<figure>
<figcaption>
<p>Create the deploy script and make it executable
</p>
</figcaption>
<pre class="highlight shell"><code>touch deploy
chmod +x deploy
</code></pre>
</figure>
<p>This file does several things. At the beginning it makes sure a <code>.environment</code> file is present and that it defines the necessary environment variables. If that is not the case, the script aborts and prints an error message.</p>
<p>If all required variables are set, the script proceeds to run the LFTP commands. First LFTP connects to the webspace and uploads the relevant files and folders needed by Laravel. It then makes sure the <code>app/storage</code> folder is writable by Laravel. Before finishing the deployment the script triggers the migrations; the next section explains how running the migrations works.</p>
<figure>
<figcaption>
<p>deploy
</p>
</figcaption>
<pre class="highlight shell"><code><span class="c">#!/bin/bash</span>
<span class="nb">command</span> -v lftp >/dev/null 2>&1 <span class="o">||</span> <span class="o">{</span> <span class="nb">echo</span> >&2 <span class="s2">"LFTP is required."</span>; <span class="nb">exit </span>1; <span class="o">}</span>
<span class="nb">test</span> -f <span class="s2">".environment"</span> <span class="o">||</span> <span class="o">{</span> <span class="nb">echo</span> <span class="s2">".environment is required."</span>; <span class="nb">exit</span>; <span class="o">}</span>
<span class="nb">source</span> <span class="s1">'.environment'</span>;
<span class="nb">test</span> ! -z <span class="s2">"</span><span class="nv">$FTP_USER</span><span class="s2">"</span> <span class="o">||</span> <span class="o">{</span> <span class="nb">echo</span> <span class="s2">"FTP_USER variable is required."</span>; <span class="nb">exit</span>; <span class="o">}</span>
<span class="nb">test</span> ! -z <span class="s2">"</span><span class="nv">$FTP_PASSWORD</span><span class="s2">"</span> <span class="o">||</span> <span class="o">{</span> <span class="nb">echo</span> <span class="s2">"FTP_PASSWORD variable is required."</span>; <span class="nb">exit</span>; <span class="o">}</span>
<span class="nb">test</span> ! -z <span class="s2">"</span><span class="nv">$MIGRATION_TOKEN</span><span class="s2">"</span> <span class="o">||</span> <span class="o">{</span> <span class="nb">echo</span> <span class="s2">"MIGRATION_TOKEN variable is required."</span>; <span class="nb">exit</span>; <span class="o">}</span>
<span class="nb">echo</span> <span class="s2">"Deployment started"</span>;
lftp <span class="sh"><< EOF
set ssl:verify-certificate no;
open -u $FTP_USER,$FTP_PASSWORD my-domain
put -O /my-app/ .env.php
put -O /my-app/ artisan
mirror -v -R --delete -x .DS_Store -x .gitkeep public/ /my-app/public/
mirror -v -R --delete -X .* -X .*/ -x storage/ app/ /my-app/app/
mirror -v -R --delete -X .* -X .*/ bootstrap/ /my-app/bootstrap/
mirror -v -R --delete -X .* -X .*/ vendor/ /my-app/vendor/
mkdir -pf /my-app/app/storage/cache/
mkdir -pf /my-app/app/storage/logs/
mkdir -pf /my-app/app/storage/meta/
mkdir -pf /my-app/app/storage/sessions/
mkdir -pf /my-app/app/storage/views/
chmod -Rf 0777 /my-app/app/storage
EOF
</span>curl -X POST http://my-domain.com/migrate/<span class="nv">$MIGRATION_TOKEN</span>
<span class="nb">echo</span> <span class="s2">"Deployment finished"</span>;
</code></pre>
</figure>
<p>In case you are wondering what the <code>.environment</code> file looks like, here you go. It is very simple and just contains the few variables needed by the deploy script. Make sure to modify the deploy script so that it reflects your own webspace structure and the domain or subdomain you are using!</p>
<figure>
<figcaption>
<p>.environment
</p>
</figcaption>
<pre class="highlight shell"><code><span class="nv">FTP_USER</span><span class="o">=</span><span class="s2">"user"</span>;
<span class="nv">FTP_PASSWORD</span><span class="o">=</span><span class="s2">"secret"</span>;
<span class="nv">MIGRATION_TOKEN</span><span class="o">=</span><span class="s2">"very-long-random-string"</span>;
<span class="nb">export </span>FTP_USER;
<span class="nb">export </span>FTP_PASSWORD;
<span class="nb">export </span>MIGRATION_TOKEN;
</code></pre>
</figure>
<h2 id="running-laravel-migrations-on-shared-hosting">Running Laravel migrations on shared hosting</h2>
<p>Normally, running migrations works the same way as it does in development: you just execute <code>php artisan migrate</code> on the command line. But since the shared hosting plan does not provide SSH access, we cannot do this. The best solution I found is using an HTTP request to run the migrations. This is certainly not perfect<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>, but at least better than using Adminer or phpMyAdmin to maintain the schema yourself.</p>
<p>To enable this, we create a new route in the routes file. To run the migrations, the request has to be a <code>POST</code> request and include the migration token defined in the environment variables. Be sure to use a long random string as the migration token, because anybody can make that request. If the token from the request matches the environment variable, we tell Artisan to run the migrate command.</p>
<figure>
<figcaption>
<p>Add this route to app/routes.php
</p>
</figcaption>
<pre class="highlight php"><code>Route::post('/migrate/{token?}', function ($token = null)
{
    if ($token === $_ENV['MIGRATION_TOKEN']) {
        Artisan::call('migrate', array('--force' => true));
    } else {
        App::abort(403);
    }
});
</code></pre>
</figure>
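<p>One detail worth hardening: comparing the token with a plain string comparison leaks timing information. If your host runs PHP 5.6 or later, <code>hash_equals</code> compares in constant time. The helper name <code>tokenMatches</code> is made up for this sketch and not part of Laravel.</p>

```php
<?php
// Sketch: constant-time comparison of the migration token (PHP >= 5.6).
function tokenMatches($token, $expected)
{
    // Reject non-strings up front; hash_equals() expects two strings.
    if (!is_string($token) || !is_string($expected)) {
        return false;
    }
    return hash_equals($expected, $token);
}
```

<p>Inside the route you would then write <code>if (tokenMatches($token, $_ENV['MIGRATION_TOKEN']))</code> instead of comparing the strings directly.</p>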
<p>That's it, with this setup I am successfully running a Laravel 4.2 application on a Host Europe shared hosting plan.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>I wrote about my <a href="http://strauss.io/blog/2015-laravel-4-2-setup-on-os-x-yosemite.html">Laravel 4.2 setup on OS X Yosemite</a> with Postgres and the built-in PHP version. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>For starters, anybody with internet access can run the migrations if the token is known, so make sure it is not. In addition, the migrations are subject to the PHP execution time limit like every other HTTP request, so if your migrations take a long time this method will not work. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Laravel 4.2 Setup on OS X Yosemitehttp://strauss.io/blog/2015-laravel-4-2-setup-on-os-x-yosemite.html2015-01-19T07:42:00Z2021-03-20T17:04:17+01:00David Strauß<p>To run Laravel 4.2 on an OS X Yosemite setup with the default PHP binary and Postgres as a database<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> you have to follow a few steps. Make sure you have Homebrew installed before you start.</p>
<p>Laravel needs the MCrypt PHP extension and since we want to use Postgres we also have to install the PDO Postgres driver.</p>
<figure>
<figcaption>
<p>Check PHP version
</p>
</figcaption>
<pre class="highlight shell"><code>php -v
</code></pre>
</figure>
<p>Replace <code>55</code> in the following commands with whatever version of PHP you are running.</p>
<figure>
<figcaption>
<p>Install MCrypt and the Postgres PDO driver
</p>
</figcaption>
<pre class="highlight shell"><code>brew tap josegonzalez/php
brew install php55-mcrypt --without-homebrew-php
brew install php55-pdo-pgsql --without-homebrew-php
</code></pre>
</figure>
<p>Now we have to tell PHP about the new extensions. The paths depend on your installation, so make sure to look up the correct paths on your machine.</p>
<figure>
<figcaption>
<p>Tell PHP about the newly installed extensions
</p>
</figcaption>
<pre class="highlight shell"><code>sudo cp /etc/php.ini.default /etc/php.ini
<span class="nb">echo</span> <span class="s1">'extension="/usr/local/Cellar/php55-mcrypt/5.5.20/mcrypt.so"'</span> | sudo tee -a /etc/php.ini
<span class="nb">echo</span> <span class="s1">'extension="/usr/local/Cellar/php55-pdo-pgsql/5.5.20/pdo_pgsql.so"'</span> | sudo tee -a /etc/php.ini
</code></pre>
</figure>
<p>The next step is installing Composer so we can install Laravel and its dependencies. When adding the Composer binary directory to your <code>PATH</code>, make sure to adapt the line to the shell configuration file you are using.</p>
<figure>
<figcaption>
<p>Installing Composer
</p>
</figcaption>
<pre class="highlight shell"><code>curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
<span class="nb">echo</span> <span class="s1">'export PATH="$HOME/.composer/vendor/bin:$PATH"'</span> >> ~/.zshrc
</code></pre>
</figure>
<p>Finally we can install Laravel itself and create a new project.</p>
<figure>
<figcaption>
<p>Installing Laravel and setting up a new project
</p>
</figcaption>
<pre class="highlight shell"><code>composer global require <span class="s2">"laravel/installer=~1.1"</span>
laravel new my-new-project
</code></pre>
</figure>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>I usually have nothing to do with the PHP world so I don't want to install any versions of PHP, MySQL or other pieces of tooling that I don't use for my usual development tasks. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>