<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Adam.Fanello<Building Apps in the Cloud>]]></title><description><![CDATA[I'm a Consulting Software Engineer, Software Architect, and AWS Solutions Architect with over 34 years of experience. My specializations include AWS serverless ]]></description><link>https://adam.fanello.net</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 20:58:08 GMT</lastBuildDate><atom:link href="https://adam.fanello.net/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Nanoservice State of Mind]]></title><description><![CDATA[Lessons from Amazon Prime Video
Recently there's been a bit of controversy that microservices may be a mistake, and monoliths are (once again) the way to go. This began due to a blog post from the Amazon Prime Video team indicating the great cost red...]]></description><link>https://adam.fanello.net/nanoservice-state-of-mind</link><guid isPermaLink="true">https://adam.fanello.net/nanoservice-state-of-mind</guid><category><![CDATA[software architecture]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[monolith]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Wed, 14 Jun 2023 23:40:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686785944827/9d88aee6-145c-49d7-8645-aa3fb1d435a1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-lessons-from-amazon-prime-video">Lessons from Amazon Prime Video</h1>
<p>Recently there's been a bit of controversy that microservices may be a mistake, and monoliths are (once again) the way to go. This began due to a <a target="_blank" href="https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90">blog post from the Amazon Prime Video</a> team indicating the great cost reduction they realized by switching from various AWS serverless services into a containerized “monolith”. Fireworks lit up the AWS community! Is the Amazon Prime Video team <em>really</em> telling us to dump serverless and microservices?</p>
<p>A key line from the blog post comes at the beginning of the second paragraph:</p>
<blockquote>
<p>Our Video Quality Analysis (VQA) team at Prime Video…</p>
</blockquote>
<p>They did not turn all of Prime Video into a monolith and throw it all into a single ECS container! Rather, this is <em>one</em> team responsible for a relatively small functionality of the overall application. Isn’t that the exact target of a microservice?</p>
<blockquote>
<p>Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. <em>These services are owned by small, self-contained teams.</em></p>
<p>— https://aws.amazon.com/microservices/</p>
</blockquote>
<p>What we have here is a classic growing pain of greenfield application development. The team broke their functionality down <em>too far</em> into deployed <a target="_blank" href="https://techbeacon.com/app-dev-testing/nanoservices-where-they-fit-where-they-dont">nanoservices</a>. When they ran into trouble, they reorganized the way they deploy and brought their functionality back together into a single Video Quality Analysis <em>microservice</em>.</p>
<p>Finding the right balance between deploying a monolith, microservices, and nanoservices is almost impossible to predict upfront when developing new applications. If you are writing a new application, how do you mitigate this unknown?</p>
<h1 id="heading-how-to-find-balance">How to find balance</h1>
<p>The first key is to separate your <em>software</em> architecture from your <em>solution</em> architecture.</p>
<p><em>Software</em> architecture is how developers organize code.</p>
<p><em>Solution</em> architecture is how developers (or DevOps) <em>deploy</em> code.</p>
<p>Far too commonly, a stringent solution architecture is defined first. The solution is broken down into microservices from the beginning. Walls are put up between teams, and each team goes to its separate corner and writes its software. This is a proven technique, born of the need to manage large applications and groups of developers. Go ahead and do this for the clearly separate parts of your application.</p>
<p>But when in doubt, <em>don’t</em>.</p>
<p>Putting up walls too early is like premature optimization. Just as you don’t know early on where the most bang-for-your-buck can be found in optimizations, you don’t know yet the best way to deploy your application.</p>
<p>The way to find balance is to <strong>separate the concerns of organizing code</strong> and communication, <strong>from deploying code</strong> and communication channels. This is where the <em>software</em> architecture comes in:</p>
<blockquote>
<p>Write everything as nanoservices, but deploy them as a monolith</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686785204783/ced55667-b962-4786-b6e1-cd80a3c9c0c7.png" alt class="image--center mx-auto" /></p>
<p>Now that I have your attention, here’s the more nuanced version:</p>
<blockquote>
<p>Write everything <em>you can</em> as nanoservices, but deploy them <em>initially</em> as a monolith <em>unless you have a clear reason not to. As needed, refactor what code is deployed where.</em></p>
</blockquote>
<h1 id="heading-how-to-write-software-independent-of-deployment">How to write software independent of deployment</h1>
<p>The <strong>software</strong> architecture field is highly mature and based on principles that have evolved over roughly fifty years. In the euphoria of cloud solution architectures, containers, and serverless, the architecture of the software within these solutions has sometimes been overlooked. The approach outlined below tries to encapsulate a smorgasbord of software architecture principles:</p>
<ul>
<li><a target="_blank" href="https://en.wikipedia.org/wiki/Information_hiding">Information Hiding</a>, <a target="_blank" href="https://en.wikipedia.org/wiki/Law_of_Demeter">least knowledge</a>, <a target="_blank" href="https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html">single responsibility</a>, <a target="_blank" href="https://medium.com/clarityhub/low-coupling-high-cohesion-3610e35ac4a6">high-cohesion and low-coupling</a>, <a target="_blank" href="https://deviq.com/principles/dependency-inversion-principle">dependency inversion</a>, <a target="_blank" href="https://deviq.com/principles/interface-segregation">interface segregation</a>, <a target="_blank" href="https://deviq.com/principles/liskov-substitution-principle">Liskov substitution</a>, <a target="_blank" href="https://deviq.com/principles/open-closed-principle">open-closed</a>, <a target="_blank" href="https://en.wikipedia.org/wiki/Separation_of_concerns">separation of concerns</a>.</li>
</ul>
<p>This list may be long, and some may recognize all five <a target="_blank" href="https://en.wikipedia.org/wiki/SOLID">SOLID</a> principles within it, but they all interrelate, such that every decision plays a part in satisfying multiple principles.</p>
<h2 id="heading-everything-quietly-does-one-thing-well">Everything Quietly Does One Thing Well</h2>
<h3 id="heading-hiding-implementation-detail">Hiding Implementation Detail</h3>
<p>Right down to the class level (source file or module), write that class as though it were a nanoservice itself. The public API on that class is always written from the <strong><em>client’s</em></strong> point of view. (The client being any other class that uses it.)</p>
<p>For example, take the classic layered architecture <a target="_blank" href="https://deviq.com/design-patterns/repository-pattern/">repository classes</a>. A repository manages a collection of data. The actual source of the data could be anything: a relational database, a NoSQL database, another microservice, or even hardcoded mock data. In true microservice thinking though, the details are hidden away from the clients of this class. As part of this information hiding, the class is named “<strong>Widget</strong>Repository”, not “MySqlWidgetRepository” or “OtherServiceProxyService”. From the perspective of anything using this repository, it’s just the repository for a collection of data and the class and function names reflect that perspective.</p>
<h3 id="heading-each-feature-is-independent">Each Feature Is Independent</h3>
<p>Every feature is a use case and has its own independent code module. It's triggered by an event or request, executes one cohesive piece of business logic, and returns a response. It can use repositories to manipulate data without a care as to where that data lives. If it does something noteworthy enough that something else might need to happen as a result, it publishes an event about it. Done. You might have all of these extend a base class, which will help with tying it all together later:</p>
<pre><code class="lang-typescript">abstract class UseCase&lt;Message, Response&gt; {
  abstract process(message: Message): Response;
}
</code></pre>
<p>This means “AddWidget” is a separate use case class module from “DeleteWidget”, etc. A classic layered architecture would put these together into a single all-knowing “WidgetService”. That approach, though, usually results in a huge “do everything” class that violates multiple design principles.</p>
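<p>As a sketch of that structure (the class names and payload shapes here are illustrative assumptions, not code from the original post), each feature becomes its own small class extending the base:</p>

```typescript
// Each use case is its own module: triggered by one message,
// executing one cohesive piece of business logic.
abstract class UseCase<Message, Response> {
  abstract process(message: Message): Response;
}

// Illustrative feature: adding a widget.
class AddWidget extends UseCase<{ name: string }, { id: string }> {
  process(message: { name: string }): { id: string } {
    // ...business logic for creating a widget...
    return { id: `widget-${message.name}` };
  }
}

// Illustrative feature: deleting a widget. It shares nothing with
// AddWidget except the abstractions around them.
class DeleteWidget extends UseCase<{ id: string }, boolean> {
  process(message: { id: string }): boolean {
    // ...business logic for deleting a widget...
    return message.id.length > 0;
  }
}
```

<p>Because nothing in these classes knows how they are deployed, the same modules can be wired into a monolith today and split out later.</p>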
<p>Once every feature is its own little module, these little use cases can be deployed in any way without them knowing or caring. Group them in folders by domain (e.g., src/use-cases/widgets/), but initially KISS and deploy as a monolith. When you have a reason, deploy some separately, but don't move the code!</p>
<p>Use <strong>dependency injection</strong> to provide the external interfaces, such as “WidgetRepository”. (This also sets up easy unit testing on the all-important business logic in these use cases.) Depending on the capabilities of the programming language, it is best if the domain use cases define these dependencies as interfaces or abstract classes. Binding the interface to the actual implementation is a <em>deployment</em> consideration and key to making your use case nanoservice portable. You may initially inject a “WidgetRepository” implementation that accesses a database directly. Later, after deciding to break this use case out to another microservice, that deployment can bind a proxy implementation of “WidgetRepository” while other use cases that remain in the monolith continue to use the database implementation.</p>
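<p>Here is a minimal sketch of that binding idea (all names are illustrative assumptions, not from the original post): the use cases depend only on a “WidgetRepository” interface, and each deployment binds its own implementation.</p>

```typescript
// The abstraction the use cases depend on; they never see what's behind it.
interface Widget { id: string; name: string; }

interface WidgetRepository {
  save(widget: Widget): void;
  findById(id: string): Widget | undefined;
}

// Monolith deployment: bind an implementation that talks to the data
// store directly (an in-memory stand-in here, for illustration).
class InMemoryWidgetRepository implements WidgetRepository {
  private store = new Map<string, Widget>();
  save(w: Widget): void { this.store.set(w.id, w); }
  findById(id: string): Widget | undefined { return this.store.get(id); }
}

// Split-out deployment: bind a proxy that forwards each call to the
// widget microservice instead. The use case code never changes.
class ProxyWidgetRepository implements WidgetRepository {
  constructor(private readonly call: (op: string, data: unknown) => unknown) {}
  save(w: Widget): void { this.call("save", w); }
  findById(id: string): Widget | undefined {
    return this.call("findById", id) as Widget | undefined;
  }
}
```

<p>Swapping which class is bound is purely a deployment decision; the use cases that stay in the monolith keep the direct binding while the split-out ones get the proxy.</p>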
<p>How you organize your domain code is independent of how you deploy the code! It's easiest to group related <em>source</em> code near each other, but that is not always the best way to <em>run</em> code. These are two different concerns, so don't let one force unnatural structure on the other.</p>
<h3 id="heading-tying-it-together">Tying it Together</h3>
<h3 id="heading-message-brokers-and-buses">Message Brokers and Buses</h3>
<p>Use cases register themselves, or are registered, as handling a single specific message (event or request). In a monolith, this might be with an in-process bus. Events are published to the bus, and the bus routes them to the use case. For a more distributed (microservice or serverless) system, the bus is replaced with a message broker. (Examples: Apache ActiveMQ, RabbitMQ, Kafka, Amazon EventBridge, Amazon SNS, …)</p>
<p>What if you have a distributed system, but an event is published in the same process as the subscriber? Should you optimize that? Probably not. If ultra-low latency is critical, then yes, let the in-process bus recognize and optimize this case. In most cases though, go ahead and send the event to the broker and let it come back. This lets the system work as intended, with queuing, failure and retry, archiving, process lifecycle, etc. all provided by your solution architecture. It also allows external subscribers to <em>also</em> receive the event, and maintains the option to split out that in-process subscriber without any other changes.</p>
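<p>A minimal in-process bus can be sketched like this (an illustration with assumed names, not code from the original post); it exposes the same publish/subscribe shape a broker would, so handlers never know which transport they're wired to:</p>

```typescript
type Handler = (payload: unknown) => void;

class InProcessBus {
  private handlers = new Map<string, Handler[]>();

  // A use case registers itself as handling one specific message type.
  subscribe(messageType: string, handler: Handler): void {
    const list = this.handlers.get(messageType) ?? [];
    list.push(handler);
    this.handlers.set(messageType, list);
  }

  // Publishers know nothing about who subscribes, if anyone.
  publish(messageType: string, payload: unknown): void {
    for (const handler of this.handlers.get(messageType) ?? []) {
      handler(payload);
    }
  }
}
```

<p>Replacing this class with a broker client keeps the same subscribe/publish contract, which is exactly what makes the deployment swap invisible to the use cases.</p>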
<p>That brings us back to where we started:</p>
<blockquote>
<p>Each use case can be its own nanoservice. Always code as if it is.</p>
</blockquote>
<h3 id="heading-what-is-a-microservice">What is a Microservice?</h3>
<p>Under this approach, a microservice is a deployment of a group of nanoservices (use cases). Along those lines, a monolith is a deployment of <strong><em>all</em></strong> of your nanoservices. By coding in nanoservices, you maintain full agility in how you deploy.</p>
<h2 id="heading-it-works-for-us-and-can-for-you-too">It Works For Us, And Can for You Too</h2>
<p>I’ve used this approach on a couple of projects now. In one unusual case, the application was originally written to deploy as a serverless application on AWS (Lambdas, DynamoDB, SNS, API Gateway). Upon business request, it was <em>also</em> deployed as a pair of monoliths in a cloud virtual machine and a laptop. This was done without changing any of the use case code! That’s the power of separating software architecture from solution architecture.</p>
<p>This approach is built into an accelerator platform created and used by the <a target="_blank" href="https://www.rackspace.com/applications/cloud-native">Rackspace Professional Services Cloud Native Development</a> team. Let us accelerate your next application.</p>
<hr />
<p><a target="_blank" href="https://docs.rackspace.com/blog/nanoservice-state-of-mind-lessons-from-prime-video/"><em>Originally published on the Rackspace Technical Blog</em></a></p>
]]></content:encoded></item><item><title><![CDATA[See you at AWS re:Invent 2022!]]></title><description><![CDATA[It’s almost time for the annual migration of cloud techies to Las Vegas! I’ve been working within my employer, Onica by Rackspace Technology, on multiple treats for attendees who come visit us.
First Timer?
Going this year, but have never attended in...]]></description><link>https://adam.fanello.net/see-you-at-aws-reinvent-2022</link><guid isPermaLink="true">https://adam.fanello.net/see-you-at-aws-reinvent-2022</guid><category><![CDATA[Rackspace]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AI]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Mon, 21 Nov 2022 23:07:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1668812836811/E8E_PRKiJ.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It’s almost time for the annual migration of cloud techies to Las Vegas! I’ve been working within my employer, Onica by Rackspace Technology, on multiple treats for attendees who come visit us.</p>
<h2 id="heading-first-timer">First Timer?</h2>
<p>Going this year, but have never attended in person before? Rackspace’s Chief Technology Evangelist Jeff DeVerter, Director of Cloud Native Development Matt Puccio, and I chatted about <em>How to Survive AWS re:Invent</em>. Watch for our collective tips:</p>
<iframe src="https://www.linkedin.com/video/embed/live/urn:li:ugcPost:6994397992338276352" height="399" width="710"></iframe>

<h2 id="heading-game-day">Game Day!</h2>
<p>Monday morning at re:Invent is Game Day! There are a few options, but you might want to check out this one - it looks really cool 🦄:</p>
<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:7000504614643003392?compact=1" height="399" width="710"></iframe>

<h2 id="heading-knights-of-ai">Knights of AI!</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668811759384/i91oBYMxF.jpg" alt="Rackspace-Email-Image-Knights-of-AI-AWS-TSK-7771-spare-no-text.jpg" /></p>
<p>I'm excited to tell you that I've been part of a small team of engineers, architects, UI designers, and marketing folks to build for you a fun interactive game you can play at re:Invent. I'm really looking forward to seeing the response to this - it's going to be hot! 🔥 Prizes are awarded each day, but the real fun is how this shows off what we can do with Machine Learning and the type of software design I like to go on and on about: cloud native, serverless, event-driven, clean code, and automated testing!</p>
<p>Pictures say more than words, so what a wealth of knowledge these 5,400 pictures streamed into a video must be! (3 minutes @ 30fps)</p>
<p>Watch this behind the scenes video about Knights of AI:</p>
<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:6998642982090412033?compact=1" height="399" width="710"></iframe>

<p>Marketing put out a little clip of my personal invitation to you too:</p>
<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:6999395948388892672?compact=1" height="399" width="710"></iframe>

<h2 id="heading-come-visit">Come Visit</h2>
<p>I’ll be hanging out in the Venetian Expo for much of re:Invent - primarily at booth 244. I’d love to chat cloud native software architecture and engineering with you. I’ll also give an official presentation in the booth about how we built Knights of AI. (Follow me on LinkedIn and I’ll post the day and time when I know it.)</p>
]]></content:encoded></item><item><title><![CDATA[My Career's Course Correction]]></title><description><![CDATA[Starting in Summer 2021 and into early 2022, I started contemplating the next step in my career. I had been with the same employer for over seven years, matching a record for me, and the job market was hot. 🔥
In March, I made the choice to make the ...]]></description><link>https://adam.fanello.net/my-careers-course-correction</link><guid isPermaLink="true">https://adam.fanello.net/my-careers-course-correction</guid><category><![CDATA[Career]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Thu, 27 Oct 2022 16:00:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/pczTw272ejo/upload/v1666805407481/2uOZLFR_V.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Starting in Summer 2021 and into early 2022, I started contemplating the next step in my career. I had been with the same employer for over seven years, matching a record for me, and the job market was hot. 🔥</p>
<p>In March, I made the choice to make the attempt at becoming an independent consultant. I knew it wasn’t the safest or easiest option and that I might not succeed, but had a drive to try and knew that I could easily pick up another employer if it didn’t work out.</p>
<p>Nearly six months later, I had generated content, put matters in place, chased the dream… and nothing. I went public with that dream and had a great deal of encouragement, but still no clients.</p>
<p>Since last March, the economy and job market have changed. A recession is on the horizon, and some tech companies have hiring freezes and even layoffs. Uh oh. 😬</p>
<p>I considered a full gamut of options…</p>
<h2 id="heading-independent">Independent</h2>
<p>My first dream was going independent. I’d been working for a consultancy for years, and the idea of cutting out the middleman is highly appealing.</p>
<p>Throughout the year though, I discovered that the timing was bad:</p>
<ul>
<li>The economy is diving into recession, and we’re seeing layoffs and freezes in the tech field.</li>
<li>Little contract demand right now; those hiring want full-time positions, often in old tech rather than the leading edge that I’m accustomed to.</li>
<li>What contract jobs I found are commoditized to lower rates, often entry-level.</li>
</ul>
<p>I thought recruiters would be helpful in finding contracts. One of my driving forces was a report by one of these recruiting companies showing high contract rates. That very company has proven useless, and I suspect the numbers in the report are “up to” numbers, not typical rates. I never saw a single solid contract lead from a recruiter.</p>
<p>My own leads have not led to contracts, including the client that I had hoped to launch with.</p>
<h3 id="heading-aside-on-recruiters">Aside on Recruiters</h3>
<p>Nearly all recruiters are clueless. I don’t wish to be mean; it’s just what I have experienced, and it should be expected. Almost none of these kind folks actually understand the jobs they are recruiting for. My experience is clearly in application development - I just happen to use the AWS ecosystem to create these applications. Recruiters see my AWS Solutions Architect certification though and say “Ah ha! DevOps!” or “Ah ha! Security expert.” They see my desire for “consultant”, and offer me full-time jobs. They see “24 years of professional experience”, and suggest mid-level software engineer positions. (At least <em>they</em> clue in to the software portion.)</p>
<p>I want to love recruiters. I want them to be my friends and allies. But they aren’t. Recruiters are exhausting. 😮‍💨</p>
<h2 id="heading-other-consultancies">Other Consultancies</h2>
<p>Some of my frustrations with my current employer, Rackspace, are typical frustrations when working with large companies. So I checked out smaller consultancies.</p>
<p>The smaller consultancies appeared to be more like Onica, the company I worked for that Rackspace swallowed up. Each of these, though, was also a DevOps shop just trying to get into the (cloud native) custom application business. This carries the risk that they may not have a sales pipeline, leading to my being idle and the possibility of them giving up on the new offering. In one case, it looked like what they wanted was help with the “dev” part of DevOps. That isn’t very interesting to me.</p>
<p>Finally, I would need to rebuild my reputation within the new company.</p>
<p>All that risk, for the <em>same</em> compensation I have now.</p>
<h2 id="heading-product-companies">Product Companies</h2>
<p>Many of my former colleagues have left consultancies to work for product companies. I came from product companies.</p>
<p>Since working for a consultancy, though, I have honed specific specialized skills, and companies hire consultants precisely for such specialties. When looking at full-time employment at product companies, though, few wanted my specialization. The work is more adjacent - moving into tech stacks that I don’t care so much for. I could do them, but it would mean more generalization and giving up my subject matter expert status.</p>
<p>Then there’s compensation. Here’s how the jobs and compensation with product companies compare to what I’m earning now at Rackspace:</p>
<ol>
<li>Slightly lower compensation for same work.</li>
<li>Same compensation to move up in management. (e.g., V.P. of Engineering at a small company.)</li>
<li>Higher compensation if I compromise my ethics. (e.g., a planet-harming industry.)</li>
<li>Higher compensation if I compromise my health and family. (e.g., hello 60-hour work week.)</li>
</ol>
<h2 id="heading-the-grass-isnt-greener">The grass isn’t greener</h2>
<p>There is a common phrase: <em>The grass is greener on the other side</em>. This phrase is actually a warning, that things <em>appear</em> better elsewhere. We are tempted by “greener pastures”. We are envious of anecdotes of others who have made the leap and found success. Often these success stories hide the downsides (see bullets 3 and 4 above).</p>
<p>I have taken a closer look at the neighbors’ lawns (employment options), and found that they too have flaws. Looking back on my own place, I find that maybe I can achieve my goals right here…</p>
<h2 id="heading-stay-at-rackspace-with-change">Stay at Rackspace… with change</h2>
<p>I had written <a target="_blank" href="https://adam.fanello.net/going-independent">in my announcement</a> about going independent:</p>
<blockquote>
<p>I looked at moving out of management over to a Principal Software Architect role, but […] that new poorly defined role I suspected would be <em>manager-lite.</em></p>
</blockquote>
<p>and</p>
<blockquote>
<p>I will keep learning and growing and, I hope, leave a wake of <strong><a target="_blank" href="https://adam.fanello.net/old-developers-were-still-here">younger software engineers</a></strong>
 who have learned and grown as well.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Charting my Own Path</p>
</blockquote>
<p>Solution: As a last act as manager, <em>define the role</em> as exactly what I’m looking for!</p>
<p>It would be hilarious to create the job description, open it, apply for it, and hire myself. The boss didn’t quite go for that, in part because the existing broad definition of the role was actually already a good fit for what I wanted, and I guess we aren’t allowed to hire ourselves. 🤷‍♂️ 😉</p>
<h3 id="heading-new-role">New role</h3>
<p>So what will I do as a Principal Software Architect of Cloud Native Development? The overarching concept is “be a force multiplier.” Rather than spend all my time delivering for one customer, a principal architect has a team-wide impact. This includes:</p>
<ul>
<li>Evaluate tools and services, and spread knowledge both within and outside the team.</li>
<li>Create and evolve reusable solutions.</li>
<li>Be a mentor for the entire team.</li>
<li>Be an expert troubleshooter on any project, and ensure projects stay on track.</li>
<li>Work with pre-sales, helping set up projects for success before they even begin.</li>
</ul>
<h2 id="heading-did-i-backtrack">Did I backtrack?</h2>
<blockquote>
<p>You said you were doing something, and now you’re not! Flip flopper!</p>
</blockquote>
<p>Fortunately I’m not in politics, and so I’m allowed to change my mind. 😜 </p>
<p>I continually evaluate. <em>Circumstances</em> change. Knowledge grows. Course corrections are made. I may have “backtracked” on going independent, but not on charting a new course in my career.</p>
<h3 id="heading-what-is-changing-for-me">What is changing <em>for me</em>?</h3>
<p>I had been feeling <em>idle and unfocused</em>. This new role has <em>reinvigorated</em> me.</p>
<p>At the time I went into management, it was the only way to advance. Last April, Rackspace opened up a new non-management career path. Even though I have grown competent in the manager role, it never fit into my self-identity. It turns out that <em>titles do matter;</em> I never liked calling myself a Practice Manager, but am very happy to start calling myself a Principal Software Architect. Titles matter because self-identity matters.</p>
<p>When not part of a delivery team, working as a manager instead, <em>I felt more isolated</em>. My hope is that this new role will offer more <em>opportunity for human interaction</em>. I had blamed the isolation on lost company culture, but this is just a reality of the pandemic and post-pandemic work-from-home era. I love working from home and never liked the distracting nature of the office, but there <em>is</em> a psychological impact. There’s a newly forming Rackspace Culture Team, and I have volunteered to join it so that I can be part of the solution. 🤞</p>
<p>The great resignation has shaken us all and resulted in brain drain everywhere. I think this chaotic shuffling of jobs is easing now (I was almost part of it), so now it is time for recovery. In my new role, I hope to again be part of the solution and make Rackspace a great place to work by being a resource for everyone. I hope to reverse the talent loss by spreading my reach and helping my fellow Rackers grow.</p>
<p>Finally, Rackspace itself is ever evolving. There’s a new CEO who sounds more cognizant of what needs to be improved, and a global reorganization about to land. There’s also the global and local culture teams. I’m hopeful and energized again.</p>
]]></content:encoded></item><item><title><![CDATA[Be a Code Artist]]></title><description><![CDATA[Inspiration
I ran across this blog post some time ago:
Four reasons why everyone except for Computer Scientists writes sloppy code by Ari Joury
It’s a really good write up, except that it starts right off in the title insulting a whole lot of develop...]]></description><link>https://adam.fanello.net/be-a-code-artist</link><guid isPermaLink="true">https://adam.fanello.net/be-a-code-artist</guid><category><![CDATA[coding]]></category><category><![CDATA[Computer Science]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Mon, 18 Jul 2022 00:31:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/Jo-ypJVt8gQ/upload/v1658104027709/XBtIXmI9e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-inspiration">Inspiration</h2>
<p>I ran across this blog post some time ago:</p>
<p><a target="_blank" href="https://towardsdatascience.com/four-reasons-why-everyone-except-for-computer-scientists-writes-sloppy-code-b8505254e251">Four reasons why everyone except for Computer Scientists writes sloppy code</a> by Ari Joury</p>
<p>It’s a really good write up, except that it starts right off in the title insulting a whole lot of developers. Joury doesn’t define who a Computer Scientist is. Is it only someone like me with a degree that says “Bachelor of Computer Science”, or is it anyone who thinks like a computer scientist? I believe it is the latter: how someone thinks. In that case, however, the very label makes no sense. This is exemplified in Joury’s Reason #1, which hits the nail on the head:</p>
<blockquote>
<p>“Reason 1: For Computer Scientists, coding is an art. For everyone else, it’s a tool.”</p>
<ul>
<li>Ari Joury</li>
</ul>
</blockquote>
<p>I read this line and my head popped. 🤯 I’ve seen this sentiment before, but never so succinctly. But there’s a problem with it. If you study physics, chemistry, biology… you would be studying science. You would be pursuing knowledge of what <em>is</em>. As software developers though, we might study the problem space of what we are trying to solve with our solution, but then we go and <em>create something new</em>.</p>
<p>So I’ll modify this reason:</p>
<blockquote>
<p>“Reason 1: For <strong>passionate software developers</strong>, coding is an art. For everyone else, it’s a tool”</p>
<ul>
<li>me</li>
</ul>
</blockquote>
<h2 id="heading-passion-art">Passion = Art!</h2>
<p>How does a passionate software developer differ from everyone else?</p>
<p>Everyone else will look at the requirements, perhaps with a user story, and dive in writing code. Tweak it. Stack Overflow it. In the end, the output is something that “works.” That’s the end for these “users of tools” people.</p>
<blockquote>
<p>“It works for that user story, therefore it is done.” </p>
<ul>
<li>those people</li>
</ul>
</blockquote>
<p>This code is usually easy to spot, because it will be a jumbled mess of spaghetti. That’s evolution. Does it work? Yes, at least for that one use case. Can it be maintained? Probably not.</p>
<p>A passionate software developer would be embarrassed by that. How can you be passionate about your work and deliver a mess? A code artist might go through the same evolution but then it will be refactored into a testable succinct piece of <em>art</em>.</p>
<p>More often though, a code artist won’t have just blindly started typing code. Writing is a form of art. Like any novelist, the artist will outline the overall plot first. The characters and interactions are identified and the story obtains some structure before a single sentence, a line of code, is written.</p>
<p>If you have a code artist doing your code reviews for a while, you will experience this passion for the art of programming. I hope you will come out of the experience as a matured artist yourself. We will challenge you. Either we will convince you that our artistic style is best, or you will rise to the challenge of standing up for your own artistic approaches. Or you will remain an “it works, therefore it is good” user of tools and hate all artists.</p>
<blockquote>
<p>“You will get better every day. It is an art, not a science. It is a mixture of your creativity, your personality, your heart, your mind, your ethics and your values. It is you.”</p>
<ul>
<li>James Stanier</li>
</ul>
</blockquote>
<p>Stanier was talking about being a manager, but this is true of anything you care about.</p>
<h2 id="heading-audience">Audience</h2>
<blockquote>
<p>Reason 2: Developers don’t always write with the reader in mind</p>
<ul>
<li>Ari Joury</li>
</ul>
</blockquote>
<p>This calls back to the point that a code artist is a writer, and we’re writing to two completely different readers of the same manuscript: the computer and our fellow developers. The computer doesn’t care one bit about English and is perfectly happy with this program:</p>
<pre><code class="lang-c"><span class="hljs-keyword">static</span> <span class="hljs-keyword">int</span> e,n,j,o,y;<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span>{<span class="hljs-keyword">for</span>(++o;(n=-~getchar());e+=<span class="hljs-number">11</span>==n,y++)o=n&gt;<span class="hljs-number">0xe</span>^<span class="hljs-number">012</span>&gt;n&amp;&amp;<span class="hljs-string">'`'</span>^n^<span class="hljs-number">65</span>?!n:!o?++j:o;<span class="hljs-built_in">printf</span>(<span class="hljs-string">"%8d%8d%8d\n"</span>,e^n,j+=!o&amp;&amp;y,y);}
</code></pre>
<p><a target="_blank" href="https://www.ioccc.org/2019/burton/prog.clean.c">Source: International Obfuscated C Code Contest</a></p>
<p>Our fellow developers care only a little about the programming language syntax; really it’s English that we read. What we name things is most important here, as well as how we organize things. This is what makes code readable, and not just functional.</p>
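<p>A contrived TypeScript sketch (both functions are invented for illustration) shows the difference naming alone makes:</p>

```typescript
// Compiles and runs fine, but the names tell the reader nothing.
function calc(d: number[], t: number): number[] {
    return d.filter((x) => x > t);
}

// The same logic, written in English for the next developer.
function selectScoresAboveThreshold(scores: number[], passingThreshold: number): number[] {
    return scores.filter((score) => score > passingThreshold);
}
```

<p>Both behave identically; only the second one reads as English.</p>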
<h2 id="heading-conclusion">Conclusion</h2>
<p>So what’s my point? Have passion for your work! Go forth and create art!</p>
]]></content:encoded></item><item><title><![CDATA[Embed an Expert to Accelerate into the Cloud]]></title><description><![CDATA[Let’s say you have existing applications on-premise, co-location, or private cloud, and an existing engineering team maintaining it. You recognize the benefits of moving to AWS’s public cloud, but how do you get there?

(Watch out for stormy clouds a...]]></description><link>https://adam.fanello.net/embed-an-expert</link><guid isPermaLink="true">https://adam.fanello.net/embed-an-expert</guid><category><![CDATA[app development]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[ IT Consulting]]></category><category><![CDATA[Experience ]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Sat, 25 Jun 2022 03:01:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/JZMdGltAHMo/upload/v1656262661058/mbphj1ksG.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s say you have existing applications on-premise, co-location, or private cloud, and an existing engineering team maintaining it. You recognize the benefits of moving to AWS’s public cloud, but how do you get there?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656125555707/cNOMJbUM5.jpeg" alt="Watch out for stormy clouds ahead - photo by SpaceX" />
<em>(Watch out for stormy clouds ahead - photo by SpaceX)</em></p>
<h2 id="heading-the-outside-team">The Outside Team</h2>
<p>When asked that question, nearly all consultants jump into how to move your <em>application</em>. They’ll talk about lift &amp; shift vs cloud native, microservices, serverless, domain-driven and event-driven designs. They are right to do so; these are important decisions to make when modernizing your applications.</p>
<p>An outside team will build or transform your product on their own and drop it in the lap of your in-house engineers, who are then lost. What is this cloud magic? How does this work? </p>
<p>Here’s the downside: the outsiders now know your product better than your in-house team does. Without having developed that deep understanding, your engineers won’t know how to maintain and enhance the work those experts built. Worse perhaps, they will <em>break</em> what was built by reverting to old ways, and thus you lose the advantages that the outside team brought.</p>
<p>Big consultancies usually give little thought to your engineers; they are out to sell services, and they are motivated to make you <em>dependent on their services</em>. They would have you keep your existing engineers only to maintain your old products as those engineers drift away through attrition. Then you have to keep coming back to the consultancy for more.</p>
<p>Having led that outside expert team for several consultancy customers, I have seen time and again how ill-prepared my client is at the end of the engagement. A few hours of knowledge-transfer meetings can’t impart the skills that come from experience.</p>
<h2 id="heading-use-your-existing-team">Use Your Existing Team</h2>
<p>Your engineers need to learn how to architect and program the new cloud-native way; that’s the only long-term play.</p>
<p>Training courses will teach developers the new concepts, but often leave them unsure how to actually proceed with a real enterprise-level project. It then becomes trial and error, and stumbles, as they find their way to something that “works”. Along the way they will discover approaches that would have been better, but by then it’s too late to course correct.</p>
<p>Your developers aren’t bad, but they do <em>learn by doing</em>. As with anything we humans learn, we develop better skills and techniques as we practice. While learning, their lack of experience can lead to the same old problems of costly bills, difficulty scaling, and security holes. These problems can hobble your product for a long time to come, leading to another big redesign effort.</p>
<h2 id="heading-embed-an-expert">Embed an Expert</h2>
<p>A third approach is to embed an individual cloud application development expert into your team. This expert can guide your in-house team and accelerate the project. An expert has gone through this trial and error many times <em>already</em>, so they skip the false starts and go right to a solution for you based on their vast experience.</p>
<p>Embed that expert as a member of your team, and build your new product <em>together</em>. Your engineers learn by doing, you have a product designed and guided by a cloud expert, and in the end <em>your</em> engineers understand the resulting work because they were involved in it every step of the way.</p>
<p>I have had only a couple of “build with” clients in my time at AWS consultancies, and I always feel better about the handoff of those projects. I expect these applications to live on and continue to grow for years to come.</p>
<hr />
<p>Inspired by:
<a target="_blank" href="https://www.informationweek.com/cloud/laying-out-a-road-map-to-close-the-cloud-skills-gap/a/d-id/1341571">Laying Out a Road Map to Close the Cloud Skills Gap - InformationWeek</a></p>
]]></content:encoded></item><item><title><![CDATA[Clean Code]]></title><description><![CDATA[Introduction
Throughout my career as a software engineer, certain nuggets of wisdom have come my way. I may pick some up from co-workers, from reading blogs, or often enough the old fashioned way - learning from my own errors.
I have tried to pass al...]]></description><link>https://adam.fanello.net/clean-code</link><guid isPermaLink="true">https://adam.fanello.net/clean-code</guid><category><![CDATA[clean code]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Thu, 16 Jun 2022 00:29:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1655339231553/WYNXt5Hxc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Throughout my career as a software engineer, certain nuggets of wisdom have come my way. I may pick some up from co-workers or from reading blogs, or often enough the old-fashioned way - learning from my own errors.</p>
<p>I have tried to pass along what I have learned: to train newer software developers on the good things to do when writing applications, what not to do, and why. The key phrase I have latched onto is “maintainable code”. I’ve written my own guides on the subject: internal to companies, in public blog posts, and most often as comments in pull requests!</p>
<p>Over the decades, my expertise in my craft has grown and I’ve tried to give more back.</p>
<p>Then… something happened. I met Uncle Bob.</p>
<p>Not in person, unfortunately, but somehow I found my way to his <a target="_blank" href="http://cleancoder.com/">web site</a>, <a target="_blank" href="http://cleancoder.com/products#:~:text=More%20Info...-,Talks,-Expecting%20Professionalism">videos</a> of his speeches, and finally to his book: <em><a target="_blank" href="https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882?_encoding=UTF8&amp;qid=1655338535&amp;sr=1-1&amp;linkCode=ll1&amp;tag=fanello-20&amp;linkId=0b721fc87b3b43fedf66fd2f56e1055c&amp;language=en_US&amp;ref_=as_li_ss_tl">Clean Code: A Handbook of Agile Software Craftsmanship</a></em> by Robert C. Martin.</p>
<p>I have been wasting my time. 😉</p>
<p>This is the holy book of programming. All that I have discovered is right here. In fact, this is the source of much of the advice I received over my many years.</p>
<p>I had heard of this book before, certainly, but actually read a book? On <em>paper?</em> For so long I have been learning online that this seemed quaint. Well, I’m glad I finally gave in. I’m here to say: <em>read this book! On Paper!</em> </p>
<h2 id="heading-but-why">But why?</h2>
<p>First off, after twenty-five years or so, there’s little in Clean Code that I haven’t heard before. So why read it now?</p>
<ol>
<li>Uncle Bob doesn’t just say what to do, but convinces you <em>why</em> you should do it.</li>
<li>It’s all the advice on how to <em>craft code</em> in one referenceable manual.</li>
</ol>
<p>That first point is where the biggest value has been for me, because even though I knew most of this advice, I wasn’t following all of it.</p>
<p>The second point makes my goal of training up developers easier. Much of what I have written, and have been planning to write, is already in this book! When I see someone in need of learning, instead of writing a dissertation, I can just ask for their mailing address and send them a copy of Clean Code!</p>
<p>Here’s the crux of it: this is a shortcut to performing as a top-tier senior engineer!</p>
<ul>
<li>Buy this book (don’t borrow - you want to have it handy).</li>
<li>Buy it on paper. (Flipping pages to reference code examples isn’t so easy in a PDF, and e-books mutilate formatting.)</li>
<li>Read it.</li>
<li>Put what you learned into <em>practice</em>.</li>
</ul>
<h2 id="heading-could-it-be-better">Could it be better?</h2>
<p>Clean Code has one big flaw prevalent throughout: Java.</p>
<p>The author’s mindset is in Java. The examples are all in Java. Not just the code examples, but even the text assumes you know common enterprise Java concepts and tooling.</p>
<p>Back when Clean Code was written, in the first decade of the 2000s, Java was <em>the</em> language of enterprise application development, so it makes sense. Ten to twenty years later though, the book frequently refers to Java ecosystem concepts that won’t make any sense to someone who didn’t go through that era. It’d be great if new editions were made for other modern programming languages.</p>
<p>For the most part though, these Java-centric concepts can be glossed over by the reader. You could be writing in anything from C onwards and easily apply the lessons here, because most of them are timeless and most languages follow the same concepts. True functional languages may be the most distant from Java, but still I think even these programmers will find valuable lessons in <a target="_blank" href="https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882?_encoding=UTF8&amp;qid=1655338535&amp;sr=1-1&amp;linkCode=ll1&amp;tag=fanello-20&amp;linkId=0b721fc87b3b43fedf66fd2f56e1055c&amp;language=en_US&amp;ref_=as_li_ss_tl">Clean Code</a>.</p>
<h2 id="heading-more-must-reads">More Must Reads?</h2>
<p>Are there other must-read books for software engineers and architects? Put them in a comment.</p>
]]></content:encoded></item><item><title><![CDATA[A study of Test Driven Development and Functional Programming in TypeScript]]></title><description><![CDATA[Introduction
I come from an object-oriented programming (OOP) background - it was the hot stuff in my formative
years of learning to program - driven by C++ and Java. Embracing unit testing meant embracing
Inversion of Control (IoC) and Dependency In...]]></description><link>https://adam.fanello.net/tdd-and-fp-study</link><guid isPermaLink="true">https://adam.fanello.net/tdd-and-fp-study</guid><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[Functional Programming]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Mon, 06 Jun 2022 23:07:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/O33IVNPb0RI/upload/v1654303044145/YWpzuHVmI.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>I come from an object-oriented programming (OOP) background - it was the hot stuff in my formative
years of learning to program - driven by C++ and Java. Embracing unit testing meant embracing
Inversion of Control (IoC) and Dependency Injection (DI). This was the way.</p>
<p>As my language of choice switched to TypeScript, and data came to be passed as plain objects defined by interfaces
through HTTP APIs, real OOP classes fell away, but I continued to use classes for DI. Business
logic lived in service classes, which were really just collections of functions with shared DI in the
constructor. Unit tests mocked out all those dependencies, and with pride my coverage often hit
100%. I tested my code against unit tests <em>first</em>, so that everything usually worked upon first
deployment. I largely forgot how to use the IDE's debugger.</p>
<p>All was good.</p>
<p>I had heard the fans of Test Driven Development (TDD) exclaim the wonders of it, but figured my
<em>test before deploy</em> approach basically did that. (Turns out, not really.)</p>
<p>I had heard the fans of functional programming (FP) exclaim their superiority over OOP. But hey, I'm
just using classes to organize my functions for DI - so I was basically functional. (Yes, but no.)</p>
<p>I did not see the connection between these. I didn't fully understand either one, and I knew that.
Occasionally I'd read a blog article about TDD or FP to try to understand the excitement. I wanted
to find these techniques exciting! But everything I read was so basic that I couldn't see how to
apply it to real problems, and they didn't show how to connect the concepts. Or worse perhaps, TDD
was <a target="_blank" href="https://www.jamesshore.com/v2/blog/2005/microsoft-gets-tdd-completely-wrong">completely misrepresented</a>
and looked horrible because doing it wrong <em>is</em> horrible.</p>
<p>I'd think: functional is great for <em>pure</em> functions, but how do you <em>test</em> the orchestration that
isn't pure?</p>
<h2 id="heading-what-really-is-test-driven-development">What really is Test Driven Development?</h2>
<p>Robert C. Martin (Uncle Bob) covers a lot of ground about software development practices in his talk
about <a target="_blank" href="https://www.youtube.com/watch?v=BSaAMQVq01E">Expecting Professionalism</a>
and specifically TDD <a target="_blank" href="https://vimeo.com/97516288">here</a>. (Watch these.)</p>
<p>The analogy to the accounting practice of double-entry bookkeeping especially resonated for me.
Double-entry bookkeeping is the discipline of keeping a transaction ledger for each account. When
you subtract from one account, it <em>must</em> be added to another - immediately. Often this other
"account" isn't one in the traditional sense; it is just a means of tracking and <em>checking your
work</em>.
It's a checksum. Subtract here; add there; compare the totals. If you make a mistake, the bookkeeper
immediately knows.</p>
<p>We often write all the code, and <em>then</em> write all our tests to check it.
Bookkeepers <em>don't</em>. They subtract an entry here, and add it there. It's an atomic operation. Why?
Because if they did all of one and then all of the other and made a mistake, the checksum at the end
would simply say "it doesn't match". Where is the mistake? It's <em>somewhere</em> in one of the <em>many</em>
transactions they added. That's painful, and entering each transaction atomically is less so.</p>
<p>TDD is like double-entry bookkeeping. The rules are put in two places, atomically. Write test code,
then write some production code. Checksum. Write some test code, write some production code.
Checksum. It may sound painful, but if you introduce a bug you will know it <em>immediately</em>. It's the
ultimate linting tool. How painful is this compared to a bug in production? Once you get used to it,
not at all. This is the #1 TDD promise: Bug free code! (Caveat: You have to understand the desired
behavior, of course. Otherwise you'll write the wrong behavior twice.)</p>
<p>Here's the #2 TDD promise: Your code will be cleaner, easier to maintain, and self documenting.
Watch the videos and find the arguments for this promise.</p>
<p>If Uncle Bob and I finally convinced you that Test Driven Development (TDD) is <em>amazing</em>, follow it
up with Ian Cooper's dedicated talk on the topic:
<a target="_blank" href="https://www.youtube.com/watch?v=EZ05e7EMOLM">TDD, Where Did It All Go Wrong</a> (Yes, this is actually
pro TDD! He addresses where people get it wrong.)</p>
<p>Next in my research, I discovered
<a target="_blank" href="https://medium.com/spotlight-on-javascript/real-world-node-js-tdd-example-4f877a46e1f1">this</a>
guide. No longer a simplistic example. TDD and FP rolled up in an example that is big enough to hit
real challenges, and explained progressively rather than just an end result. But.... well... what's
with all the factory functions? Everything is a factory now. 😬</p>
<p>That last article linked to
this <a target="_blank" href="https://www.jamesshore.com/v2/blog/2018/testing-without-mocks">original guide to testing without mocks</a>
. Here's another concept that ties in neatly with TDD, but this one is using classes instead of
functions. The classes though, still involve factories. Also, the examples are showing React code -
reminding me that React components themselves use the factory pattern, whether functional or
class-based. I don't know why he wants to go <em>entirely</em> without mocks (to the point of running a
fake server in-process - just mock it already!), but the takeaway is that mocking (and faking) can
be a
final option rather than the first go-to choice.</p>
<h2 id="heading-what-is-functional-programming">What is Functional Programming?</h2>
<p>Go full functional, or use classes? Either works in JavaScript, to an extent,
but one thing stands out in these examples: they are both JavaScript, not TypeScript.
To some extent, I can see that the powerful
argument for strong typing is weakened when doing proper TDD: failures surface in tests by default. That makes
me think maybe I can allow implicit <code>any</code> (turn off that eslint rule) and not define types for
<em>everything</em>, but there are still benefits to strong types. Are types more difficult with FP? There
does appear to be a correlation between FP and going full dynamic typing. 🧐</p>
<p>When to use class vs function? <a target="_blank" href="https://labs42io.github.io/clean-code-typescript/">This article</a>
about applying <a target="_blank" href="http://cleancoder.com/">CleanCode</a> to TypeScript helped me realize it is not an
either-or choice. Looking at the patterns, I realize I can <em>lean</em> functional, but enjoy class and
real OOP goodness where it makes sense. In short: classes are for object-oriented logic, modules are
for grouping related functions.</p>
<p><em>But wait</em>, an FP practitioner shouts! <em>That isn't what functional programming is about!</em>
Yes, <a target="_blank" href="https://vimeo.com/97514630">Uncle Bob</a> has set me straight in his talk about Functional
Programming. Actual FP is about being stateless. A Pure Function is one with no side effects - given
the same input it will always have the same output. Pure. Testable. Stateless. Often recursive.
In fact, FP purists tell us that functional code has no state, not even local variables, and no loops;
recursion takes the place of both, and for this not to be a poor developer experience you need a language
designed for it. (Lisp, Haskell, Clojure, ...)
Simply using just functions instead of classes doesn't automatically mean FP. Really,
applications can't be entirely FP; only functions can be pure. An application with no state is just
a big function and has pretty limited use. Also, we're talking about JavaScript here,
which is <a target="_blank" href="https://en.wikipedia.org/wiki/Functional_programming">not a real FP language</a>
but rather a hybrid: functions don't have to be pure, and we even have classes (functions plus state).</p>
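<p>A minimal TypeScript sketch of the distinction (both functions are invented for illustration):</p>

```typescript
// Pure: output depends only on input; trivially testable.
function applyDiscount(price: number, percent: number): number {
    return price - (price * percent) / 100;
}

// Impure: hidden module state means the same call yields different results.
let total = 0;
function addToRunningTotal(price: number): number {
    total += price;
    return total;
}
```

<p><code>applyDiscount(200, 10)</code> always returns 180, while calling <code>addToRunningTotal(5)</code> twice returns 5 and then 10 - the second function can only be tested by controlling its hidden state.</p>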
<p>Can we do FP in JavaScript? Yes, the <code>Array</code> functions like <code>map</code> and <code>filter</code> are functional.
The popular <a target="_blank" href="https://rxjs.dev/">RxJS</a> library is also functional, and if you have worked with it
you have an idea of how complicated functional can get. The big dog in functional TypeScript is
<a target="_blank" href="https://gcanti.github.io/fp-ts/">fp-ts</a>. Even more than RxJS, it requires massive buy-in
from the entire application team to learn and use this approach.</p>
<p>So am I looking at real FP, or just not using the <code>class</code> keyword?</p>
<h2 id="heading-so-tdd">So... TDD?</h2>
<p>I'm buying what TDD is selling. It sounds doable, and individuals can do it even on existing
projects and without forcing every other developer of an application, now and in the future, to
understand and use it. (Indeed, I have started doing this!)</p>
<p>Not sure how to actually get going? </p>
<p>Here's the Three Rules of TDD:</p>
<ol>
<li>Write production code only to pass a failing unit test.</li>
<li>Write no more of a unit test than sufficient to fail (compilation failures are failures).</li>
<li>Write no more production code than necessary to pass the one failing unit test.</li>
</ol>
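<p>As a tiny sketch of one turn of that loop (the <code>leftPad</code> example is invented; a real project would put the test in its own file under a runner like Jest):</p>

```typescript
// 1. Write just enough test to fail (at first, leftPad didn't exist, so even
//    compilation failed - that counts as the failing test).
function testLeftPadPadsShortStrings(): void {
    const actual = leftPad("42", 5);
    if (actual !== "   42") {
        throw new Error(`expected "   42" but got "${actual}"`);
    }
}

// 2. Write just enough production code to make that one test pass.
function leftPad(text: string, width: number): string {
    return text.padStart(width, " ");
}

// 3. Run the checksum, then loop back for the next behavior.
testLeftPadPadsShortStrings();
```

<p>Each turn adds one failing assertion and then the minimum production code to satisfy it; the growing suite is your running checksum.</p>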
<p>Here's some 
<a target="_blank" href="https://www.jamesshore.com/v2/projects/lunch-and-learn">Live Stream Examples doing TDD in JavaScript</a>.</p>
<h2 id="heading-so-fp">So... FP?</h2>
<p>For me, using TypeScript, no.</p>
<p><a target="_blank" href="https://kentcdodds.com/blog/classes-complexity-and-functional-programming">Kent C. Dodds' arguments for a functional approach</a>
are worth a read. I don't buy into the "<code>this</code> is too complicated" argument - I think he's just
allergic to the <code>class</code> keyword - but stateless programming with pure functions instead of OOP has
some merit. Not only does it make testing easy, but it plays really nicely with using objects defined as
interfaces instead of classes. One might still implement the builder pattern with a class, though, for
some logical transformations.</p>
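<p>For example, a builder class can stay friendly to the FP mindset by returning a fresh instance from every step instead of mutating itself (a hypothetical sketch, not taken from Dodds' post):</p>

```typescript
interface User {
    readonly name: string;
    readonly email?: string;
}

// Every step returns a new builder; no instance is ever mutated.
class UserBuilder {
    private constructor(private readonly user: User) {}

    static named(name: string): UserBuilder {
        return new UserBuilder({ name });
    }

    withEmail(email: string): UserBuilder {
        return new UserBuilder({ ...this.user, email });
    }

    build(): User {
        return this.user;
    }
}

const user = UserBuilder.named("Ada").withEmail("ada@example.com").build();
```

<p>The class gives you a fluent, discoverable API, yet every intermediate value is as immutable as any pure-function result.</p>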
<p>Should we scope functions at the module (file) level instead of class?
Here's how we might do Inversion of Control (IoC) without any fancy Dependency Injection (DI) magic:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// OtherModule (the sibling module's return type) and otherModuleFactory are</span>
<span class="hljs-comment">// assumed to be imported from that sibling module.</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">a</span>(<span class="hljs-params">dep: OtherModule</span>) </span>{
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fakeA</span>(<span class="hljs-params"></span>) </span>{
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">b</span>(<span class="hljs-params">dep: OtherModule</span>) </span>{
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fakeB</span>(<span class="hljs-params"></span>) </span>{
}

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">moduleFactory</span>(<span class="hljs-params">
    otherModuleDependency = otherModuleFactory(),
</span>) </span>{
    <span class="hljs-comment">// Bind the injected dependency so callers get ready-to-use functions</span>
    <span class="hljs-keyword">return</span> {
        a: <span class="hljs-function">() =&gt;</span> a(otherModuleDependency),
        b: <span class="hljs-function">() =&gt;</span> b(otherModuleDependency),
    };
}

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fakeModuleFactory</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">return</span> {a: fakeA, b: fakeB};
}
</code></pre>
<p>Much FP code is actually allergic not only to the <code>class</code> keyword, but even to <code>function</code>!
So we might export our factory function as the module default like this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> UserRepository <span class="hljs-keyword">from</span> <span class="hljs-string">"user.repository"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> (repo = UserRepository()) =&gt; {
    <span class="hljs-keyword">return</span> {
        a: <span class="hljs-function">() =&gt;</span> {
        },
        b: <span class="hljs-function">() =&gt;</span> {
        },
    };
}
</code></pre>
<p>To me, this is getting less readable than a nice <code>export class UserLogic</code> with constructor DI.</p>
<p><a target="_blank" href="https://www.reddit.com/r/typescript/comments/ufucle/comment/i72c5di/?utm_source=share&amp;utm_medium=web2x&amp;context=3">dvlsg on Reddit</a>
perhaps sums up the functions (closure) vs class debate best:</p>
<blockquote>
<p>Some people just prefer using closures to using state on classes, and don't like using 'this'.
That's really it. It's a valid opinion, but it is just an opinion.</p>
</blockquote>
<p>Then there's <a target="_blank" href="https://twitter.com/thdxr/status/1510814420691083267?s=20&amp;t=I1fJzUyxQdfZMjA6vG5Dsw">Dax Raad on Twitter</a>:</p>
<blockquote>
<p>a fundamental tradeoff that no one talks about with functional programming is the more you use it,
the more annoying you become</p>
</blockquote>
<p>FP is a big shift. Going all the way with it will make your code unmaintainable by most of the
programming community, and so requires total buy-in. Choosing to do FP in a language it isn't
native to feels even worse than trying to convince folks to switch to a proper FP language.
Arguments can be made for doing this, but it is a huge leap.</p>
<h2 id="heading-so-oop">So... OOP?</h2>
<p>Functional and object-oriented aren't black-and-white polar opposite options.
Let's explore a little.</p>
<p>We can certainly borrow much from FP, just as JavaScript ES2015 did when it introduced
the new <code>Array</code> functions like <code>map</code> and <code>reduce</code>.
Tucking business logic into pure functions and immutable classes adopts some FP concepts
without going too deep.</p>
<p>The function-only and "no mock" (which turns out to just be hand-crafted fakes) approach for DI and
testing just doesn't appear to buy anything. Class constructors are replaced with
factories. <a target="_blank" href="https://dev.to/mindplay/a-successful-ioc-pattern-with-functions-in-typescript-2nac">Here is a good example.</a>
Whether highly related functions are grouped together as returned by one factory, or grouped
together as one class, doesn't make any difference. I have certainly had trouble with classes
growing large, with all the logic and orchestration for a certain <em>thing</em> crammed together, but that
can happen with functions and modules too.</p>
<p>The solution is not to group functions by the thing they operate on, but by dependencies they share
(<a target="_blank" href="https://en.wikipedia.org/wiki/Cohesion_(computer_science)">high cohesion</a>) and what they do
(<a target="_blank" href="https://en.wikipedia.org/wiki/Single-responsibility_principle">single responsibility</a>).
Instead of having a <code>UserService</code> class that grows huge, have a class per use-case that operates on
a user. That keeps the classes small.</p>
<p>Below I'm using interfaces and a (pure) guard function from <code>"domain/models"</code>,
pure functions from <code>"domain/logic"</code> and <code>"utils/assertions"</code>,
and coordinating it all as a <em>use case</em> in a clean cohesive class.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { isUserSignUpRequest, UserSignUpRequest, UserSignUpResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">"domain/models"</span>;
<span class="hljs-keyword">import</span> { newUserFactory } <span class="hljs-keyword">from</span> <span class="hljs-string">"domain/logic/user"</span>;
<span class="hljs-keyword">import</span> { assertValidInput } <span class="hljs-keyword">from</span> <span class="hljs-string">"utils/assertions"</span>;

<span class="hljs-comment">// Optionally use DI magic to gain performance of singletons:</span>
<span class="hljs-comment">// @Injectable()</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> UserSignUpUseCase {
    <span class="hljs-comment">// Dependencies as defaulted parameter properties - tests can provide mocks or fakes</span>
    <span class="hljs-keyword">constructor</span>(<span class="hljs-params">
        <span class="hljs-keyword">private</span> auth = <span class="hljs-keyword">new</span> AuthService(),
        <span class="hljs-keyword">private</span> userRepo = <span class="hljs-keyword">new</span> UserRepository(),
    </span>) {
    </span>) {
    }

    <span class="hljs-keyword">async</span> process(request: UserSignUpRequest): <span class="hljs-built_in">Promise</span>&lt;UserSignUpResponse&gt; {
        <span class="hljs-built_in">this</span>.validateRequest(request);
        <span class="hljs-keyword">const</span> user = newUserFactory(request);
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.userRepo.put(user);
        <span class="hljs-keyword">return</span> {user};
    }

    <span class="hljs-keyword">private</span> validateRequest(request: UserSignUpRequest): <span class="hljs-built_in">void</span> {
        <span class="hljs-built_in">this</span>.auth.assertIsNotAuthenticated();
        assertValidInput(request, isUserSignUpRequest);
    }
}
</code></pre>
<p>These use-case classes can also be called "Controllers" in the traditional sense.
That name is often used now for RESTful path handling though, so I find <code>UseCase</code> is clearer.</p>
<p>Could this be done with module scope exporting a factory function? Absolutely.
Would it be as readable? I say no.</p>
<h2 id="heading-rules-to-follow">Rules to Follow?</h2>
<p>This is my personal take and how I intend to proceed.
Your research and background may lead you to different conclusions.</p>
<ol>
<li>Use TDD, with circumstantial flexibility. (Don't be obsessive.)</li>
<li>Use types and interfaces for APIs.</li>
<li>Favor pure functions for business transformation logic - anything that takes a parameter or two,
and returns a result or two without mutation or outside state.</li>
<li>Use OOP where it has value - but the object must be self-contained so that state changes <em>only</em>
impact that object instance.</li>
<li>Use Dependency Injection (DI) with <em>small</em> classes that have a
<a target="_blank" href="https://en.wikipedia.org/wiki/Single-responsibility_principle">single responsibility</a> and
<a target="_blank" href="https://en.wikipedia.org/wiki/Cohesion_(computer_science)">high cohesion</a>.<ul>
<li>These classes aren't OOP, just collections of functions and dependencies.</li>
<li>This can be done with modules and factory functions, but that takes more effort for less
readable syntax than a clean class; the <code>class</code> syntactic sugar was added for good reason.
Also, separating the function-grouping name (the class) from the file name feels more flexible.</li>
</ul>
</li>
<li>If a test fake is needed, export it alongside the real code so that it can
be reused by any test that depends on it.</li>
</ol>
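<p>As a sketch of rule 6, a hand-rolled fake can live right next to the real interface it fakes. (The names here are illustrative, not from a real project.)</p>

```typescript
// user.repository.ts (illustrative): the real interface, plus a reusable fake
// exported alongside it so every test depends on the same implementation.
export interface User { id: string; name: string; }

export interface UserRepository {
    put(user: User): Promise<void>;
    get(id: string): Promise<User | undefined>;
}

// In-memory fake - exported for reuse by any test that needs a UserRepository.
export class FakeUserRepository implements UserRepository {
    private readonly users = new Map<string, User>();
    async put(user: User): Promise<void> { this.users.set(user.id, user); }
    async get(id: string): Promise<User | undefined> { return this.users.get(id); }
}
```

<p>Any test that needs a <code>UserRepository</code> imports the shared fake, rather than each test re-mocking the same behavior.</p>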
<h2 id="heading-file-structure">File structure</h2>
<p>Wrapping all this up, how might I organize the source code?</p>
<pre><code class="lang-plain">.
└── src
    ├── domain
    │   ├── logic  # domain logic functions &amp; object-oriented classes
    │   │   └── user.logic.ts
    │   └── models
    │       ├── index.ts
    │       └── user.ts
    ├── handlers
    │   └── user-sign-up.lambda.ts
    ├── external
    │   ├── dynamodb.service.ts
    │   └── repositories
    │       └── user.repository.ts
    └── use-cases
        └── user
            ├── user-sign-up.test.ts
            └── user-sign-up.ts
</code></pre>
<p>The entire <code>domain</code>, or just <code>domain/models</code>, might go into a separate package to be shared with
client code - these models are your API contracts.</p>
<p>The <code>handlers</code> are the entry points into your code. You might have separate handlers for different
environments such as one for AWS Lambda and another for a containerized cloud. This puts you on
the road to <a target="_blank" href="https://adam.fanello.net/hexagonal-architecture-by-example-meetup">Hexagonal Architecture</a>.
Here, <code>user-sign-up.lambda.ts</code> just deals with unpacking the request from API Gateway and Lambda,
calling the use-case, and formatting the proper response back to Lambda and API Gateway.</p>
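<p>A minimal sketch of what that handler could look like. The event and result shapes below are simplified stand-ins for the real API Gateway proxy types, and the use-case wiring is illustrative:</p>

```typescript
// user-sign-up.lambda.ts (sketch): unpack the request, call the use-case,
// format the response. Types are simplified stand-ins for the AWS ones.
interface ApiGatewayEvent { body: string | null; }
interface ApiGatewayResult { statusCode: number; body: string; }
interface UserSignUpUseCase { process(request: unknown): Promise<unknown>; }

export const makeHandler = (useCase: UserSignUpUseCase) =>
    async (event: ApiGatewayEvent): Promise<ApiGatewayResult> => {
        try {
            const request = JSON.parse(event.body ?? "{}");   // unpack
            const response = await useCase.process(request);  // delegate
            return { statusCode: 200, body: JSON.stringify(response) };
        } catch (err) {
            return { statusCode: 400, body: JSON.stringify({ error: String(err) }) };
        }
    };
```

<p>All the interesting logic stays in the use-case; the handler remains a thin adapter that could be swapped for a container entry point.</p>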
<p>The <code>external</code> directory is the right-side of your Hexagonal Architecture - the interfaces out
to external sources and targets of state.</p>
<p>Finally, <code>use-cases</code> coordinate processing of individual requests and commands.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>These explorations of techniques are fun, but sure do make for long blog posts!
The relation between TDD and FP isn't as strong as I started off thinking, but figuring out how
to make unit testing work with FP was highly relevant.</p>
<p>Let's sum this up:</p>
<ol>
<li>TDD gives you superpowers; embrace it!</li>
<li><em>Real</em> Functional Programming is only practical in a language made for it, 
but we can take lessons from FP and apply them in other languages.</li>
<li>The <code>class</code> keyword is not poison, and is useful syntactic sugar beyond OOP.</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[I spoke at the Serverless Stack 1.0 Conference today!]]></title><description><![CDATA[SST v1!
Serverless Stack (SST)
is the latest in tooling to make it simpler to develop serverless applications in AWS.
Serverless Framework is the biggest player in this field,
because it was the first and did a pretty good job. 
I've used it for a fe...]]></description><link>https://adam.fanello.net/serverless-stack-1-conference</link><guid isPermaLink="true">https://adam.fanello.net/serverless-stack-1-conference</guid><category><![CDATA[serverless]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[AWS]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Tue, 17 May 2022 21:04:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652821464503/dW-g8fXeq.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-sst-v1">SST v1!</h2>
<p><a target="_blank" href="https://serverless-stack.com/">Serverless Stack (SST)</a>
is the latest in tooling to make it simpler to develop serverless applications in AWS.
<a target="_blank" href="https://www.serverless.com/">Serverless Framework</a> is the biggest player in this field,
because it was the first and did a pretty good job. 
I've used it for a few years. However, when working with anything
more than a few Lambdas, an API, and a sprinkling of other serverless services like SQS, pain points
appear and Infrastructure as Code (IaC) becomes just as much workaround as code.</p>
<p>At today's SST 1.0 Conference, I gave a brief presentation about these pain points and workarounds,
and how SST simplified my deployment tooling.
You can watch <a target="_blank" href="https://youtu.be/6FzLjpMYcu8?t=6340">my talk at SST 1.0 Conf here</a>.
I strongly encourage you to watch the <a target="_blank" href="https://v1conf.sst.dev/">entire conference</a>.
There is amazing content in this short conference, from employees at SST and
volunteer presenters such as myself. </p>
<h2 id="heading-compare-to-aws-amplify">Compare to AWS Amplify</h2>
<p>Frequently folks pop onto the <a target="_blank" href="https://serverless-stack.com/slack">Serverless Stack Slack</a>
(where there is amazing support!) and ask how SST compares with 
<a target="_blank" href="https://aws.amazon.com/amplify/">AWS Amplify</a>.
You might know that I did an
<a target="_blank" href="https://adam.fanello.net/series/in-deep-with-aws-amplify">entire series about AWS Amplify</a>
and even though I started it with high hopes and completed the effort, the series is a painful
journey to behold. It is filled with roadblocks, workarounds, and bugs.</p>
<p>Both AWS Amplify and Serverless Stack attempt to make it easier to author serverless applications.
Both are still evolving and improving.
Unfortunately, my conclusion is that the AWS Amplify approach is fatally flawed;
it missed the mark and often makes it <em>harder</em> to work with AWS rather than easier.
Serverless Stack ❤️ has succeeded where Amplify 😔 failed.</p>
]]></content:encoded></item><item><title><![CDATA[Software Testing Strategies]]></title><description><![CDATA[In 2021, I was asked to participate in a podcast explaining software testing strategies
to a team of people who were technical, but not themselves software developers.
While I cannot share the actual podcast, I can share the prepared Q&A that was use...]]></description><link>https://adam.fanello.net/software-testing-strategies</link><guid isPermaLink="true">https://adam.fanello.net/software-testing-strategies</guid><category><![CDATA[software development]]></category><category><![CDATA[Testing]]></category><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[unit testing]]></category><category><![CDATA[Software Testing]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Sat, 07 May 2022 20:31:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1651955376741/GSVuaXMh9.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>In 2021, I was asked to participate in a podcast explaining software testing strategies
to a team of people who were technical, but not themselves software developers.
While I cannot share the actual podcast, I can share the prepared Q&amp;A that was used as the
basis of the live discussion.</em></p>
<h3 id="heading-can-you-identify-some-of-the-most-common-testing-strategies-in-this-industry-and-define-what-those-are">Can you identify some of the most common testing strategies in this industry and define what those are?</h3>
<p>That's a huge question!
Testing strategies can be broken down into a matrix of categories.
First, there is manual vs automated. That's easy to understand - automated tests are written as code
and can run automatically. Manual tests are often viewed as a procedure, in a human sense, in that a
person performs each step of the procedure.</p>
<p>From those two broad categories, we then have to look at scope. What is included in the test, and
what is not. Software is built on layers. Some of these are easy to see: the app running on your
iPhone, vs the custom business logic running in the cloud, vs third party services like what AWS
provides. With IoT, there's the actual hardware devices as well - that's another layer. When you
decide how many of these layers you want to test together, you are defining the scope of the test.</p>
<p>When we only test one layer in isolation, that's a unit test - one unit of code.
If two or more layers are tested together, that's an integration test.
If all layers are tested together, that's called end-to-end testing.</p>
<p>We also have regression testing. Regression testing refers to automated tests that are designed to
catch when something changes unexpectedly. They are often written after fixing a bug that existing
tests failed to catch. Once written though, they’re just another automated test.</p>
<p>There are more special adjectives for different scopes of integration tests. UI tests are scoped
around the user interface. API tests are scoped to start with the application program interfaces -
server endpoints - down through all layers of the server.</p>
<h3 id="heading-can-you-dive-deeper-into-unit-tests-and-explain-how-those-are-typically-setup">Can you dive deeper into unit tests and explain how those are typically setup?</h3>
<p>Unit testing allows us to focus on the smallest scope of code, and ensure that it is working
correctly. This requires isolating it by replacing the layers around it with fake or "mock" implementations.
Those surrounding layers are the input into the unit being tested. 
We code up a variety of inputs to feed into that unit of code,
and verify that the resulting output is what we want. If it is, the test passed.</p>
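<p>To illustrate, a unit test can replace a surrounding layer with a hand-rolled fake. (Everything in this sketch is hypothetical, purely for illustration.)</p>

```typescript
// A small unit of business logic that depends on a surrounding layer.
interface RateSource { taxRate(region: string): number; }

function totalWithTax(subtotal: number, region: string, rates: RateSource): number {
    return subtotal * (1 + rates.taxRate(region));
}

// In the unit test, the real rate-lookup layer is replaced with a fake,
// isolating the logic under test from databases, networks, etc.
const fakeRates: RateSource = { taxRate: () => 0.25 };
const result = totalWithTax(100, "CA", fakeRates);
console.assert(result === 125, "expected 125 with the fake 25% rate");
```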
<p>This sort of isolation is only really possible for coded tests, which means automated. Because these
are entirely automated and isolated, they are fast and easy to run in the various automated
continuous integration pipelines like Bitbucket Pipeline, AWS CodePipeline, Jenkins, etc.</p>
<h3 id="heading-what-about-ui-tests-can-you-speak-to-when-these-might-be-the-most-useful-and-what-type-of-tools-would-best-accomplish-this">What about UI tests, can you speak to when these might be the most useful and what type of tools would best accomplish this?</h3>
<p>UI tests are typically automated integration tests that focus on the user interface. The automated
tests, which is code, interacts with the user interface like a user would and checks that the
results on the screen are as expected. These work well to test entire user stories, although they
aren’t limited to that.</p>
<p>The most popular tool for UI testing has long been Selenium, along with other tools that
build on top of it. They’re rather difficult to work with, though.
For webapp testing, a relative newcomer called Cypress is far more pleasant to work with.</p>
<h3 id="heading-what-about-testing-with-hardware-is-there-a-lot-more-to-define-there">What about testing with hardware? Is there a lot more to define there?</h3>
<p>Firmware and device testing has all the same kinds of considerations as software.
The manufacturer of hardware will be concerned with physical testing - that's hardware testing -
but that is outside my area of expertise. These devices are, however, programmed. The programming on hardware
devices is often called firmware, but that's more of a historical artifact than a real distinction
these days. Devices now have an operating system and run application software. No surprise then that
the same concepts of testing still apply. The distinction though is how we isolate all the physical
inputs to perform automated testing. That sometimes involves another piece of hardware to
electronically push buttons and flip switches, or it may connect via a communication port and
simulate inputs that way.</p>
<h3 id="heading-what-experience-do-you-have-with-load-testing">What experience do you have with load testing?</h3>
<p>The most common approach: throw it into production, monitor as use increases, and react. When you're
dealing with a new product bringing customers in very gradually, that isn't as terrible as it
sounds. With a hyperscaler like AWS, a well architected solution will be fine.</p>
<p>Better of course is a proper load test, and I led this effort for an IoT customer
that asked for it.
In this case, they were planning to very quickly migrate tens of thousands of devices and thousands of
users from a legacy platform to a new one that we had built, so we had to be ready.</p>
<p>Load testing is just a form of integration test, usually scoped to just your cloud-side layers. A
pretty basic load test will just hit one application endpoint hard and see if or when it breaks.
More sophisticated is to script an automated integration test that acts like real users. Then you
can use something like AWS Fargate to scale out your simulated users to produce a heavy load.
For an application with input coming from somewhere other than end users, such as IoT devices,
the load test is modeled very similar to a unit test - mocking the inputs from any direction into
the cloud. With a single automated test simulating both thousands of IoT devices and the thousands
of users interacting with them in realistic ways, you have a real cloud end-to-end test under load.</p>
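<p>The simulated-user idea can be sketched as a small driver that runs N users concurrently and records client-side latency. (This is a stand-in sketch; in practice <code>action</code> would be a real API or device interaction.)</p>

```typescript
// Sketch: drive N simulated users against an action concurrently and collect
// per-call latency from the client side.
async function runSimulatedUsers(
    users: number,
    action: () => Promise<void>
): Promise<number[]> {
    const latencies: number[] = [];
    await Promise.all(Array.from({ length: users }, async () => {
        const start = Date.now();
        await action();                      // the simulated user's work
        latencies.push(Date.now() - start);  // track latency and failures here
    }));
    return latencies;
}
```

<p>Scaling this out on something like Fargate then just means running many copies of this driver in parallel.</p>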
<p>Cloud monitoring tools can monitor the system health during a load
test. The testing scripts themselves can also track latency and failures from their end and report
results.</p>
<h3 id="heading-can-you-talk-to-us-a-bit-about-automated-testing-which-i-think-might-tie-into-test-driven-development-and-if-not-can-you-explain-the-differences">Can you talk to us a bit about automated testing, which I think might tie into test driven development, and if not, can you explain the differences?</h3>
<p>Automated tests are written so that they can be run again and again without human involvement.
That is in contrast to manual tests.</p>
<p>Test Driven Development is a discipline of testing whereby the programmer writes the automated unit
tests at the same time as implementing the logic. 
We start by writing a test the way we <em>want</em> to use the logic, and this initially fails because
the logic code doesn't yet exist.
We then write just enough logic until the test passes, perhaps by writing a function that does nothing.
Then we write some more test code until it fails again, and write more code until it passes again.
This continues back and forth until the solution is complete. It sounds absolutely crazy, but
can expose nice clean solutions that we might not have thought of without writing a fake consumer
of the logic right from the start. You end up with just as much solution code and as many unit
tests as you need, and no more. Integration tests can be created this way as well.</p>
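<p>A tiny hypothetical example of that rhythm: each assertion below was written first and failed, and then just enough of <code>slugify</code> was written to make it pass.</p>

```typescript
// After the first test, a stub returning "" failed; after the red/green
// cycles below, this is just enough implementation to pass every test.
function slugify(title: string): string {
    return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// The accumulated tests from each cycle:
console.assert(slugify("Hello") === "hello");            // test 1: lowercase
console.assert(slugify("  TDD Rocks ") === "tdd-rocks"); // test 2: trim + hyphenate
```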
<p>While this approach takes a little extra upfront effort, when done right bugs become rare.</p>
<h3 id="heading-what-can-you-tell-me-about-working-with-external-qa-teams-in-your-experience-have-you-seen-an-efficient-workflow-from-external-qa-teams">What can you tell me about working with external QA teams? In your experience, have you seen an efficient workflow from external QA teams?</h3>
<p>Oh yes. Quality assurance team members are there to find problems before the end users do.
That's a valuable service! 
Developers can get tunnel vision; we know what we expect the user to do and code for that.
You need a separate person to do the unexpected to find bugs, and try the same thing in two versions
each of three different browsers, for instance.</p>
<h3 id="heading-on-the-other-side-of-that-what-have-you-seen-that-simply-just-doesnt-work">On the other side of that, what have you seen that simply just doesn’t work?</h3>
<p>The key to successful QA is a respectful relationship. Not only do the developers and testers need
to recognize that we are there to support each other, but the customer must not poison that. When a
customer is willing to pay for QA, and then is shocked and assigns blame when QA actually finds
bugs, that destroys the relationship. It can't be competition.</p>
<h3 id="heading-what-are-the-best-testing-tools-you-would-recommend">What are the best testing tools you would recommend?</h3>
<p>I already mentioned Cypress for UI testing of web applications. Other than that, the tools vary by
programming language. Unit tests must be written in the same language as the code it tests. That
isn’t required for integration tests, but often using the same tool for integration testing as the
unit tests reduces the cognitive load and allows for reuse of some code.</p>
<h3 id="heading-can-you-describe-what-the-ideal-test-coverage-looks-like">Can you describe what the ideal test coverage looks like?</h3>
<p>Testing should be targeted to where the smallest amount of it can have the biggest impact. If you
unit test, and API integration test, and end-to-end test the same code - you’re testing the same
logic three times. There are always some added benefits to more test coverage, but the returns
diminish.</p>
<p>That said, my default for unit tests is 100% function and line coverage. That is often reached, but
I’m not going to spend much time trying to hit one hard-to-reach branch that is not even expected to
ever happen. For APIs, with the code behind it already unit tested, the target is every API but with
just a representative sample of possible inputs. Maybe one success and one failure.</p>
]]></content:encoded></item><item><title><![CDATA[Old developers - we're still here!]]></title><description><![CDATA[Robert C. Martin, legend of computer science, points out that
half of developers have less than five years of experience. 
It has been this way since the very first programmers in the 1960's;
the number of programmers in the world has been doubling r...]]></description><link>https://adam.fanello.net/old-developers-were-still-here</link><guid isPermaLink="true">https://adam.fanello.net/old-developers-were-still-here</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[trends]]></category><category><![CDATA[mentorship]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Tue, 03 May 2022 20:26:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/6rkJD0Uxois/upload/v1651609398811/2oTdFlWej.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="http://cleancoder.com/">Robert C. Martin</a>, <em>legend</em> of computer science, points out that
<a target="_blank" href="https://youtu.be/ecIWPzGEbFc?t=3059">half of developers have less than five years of experience</a>. 
It has been this way since the very first programmers in the 1960's;
the number of programmers in the world has been doubling roughly every five years.</p>
<p>Therefore, half of the population is inexperienced; junior to middle-level. 
With five years, you're the old guy! (We frequently call them <em>senior</em> software engineers.)</p>
<p>Extend this out:</p>
<ul>
<li>75% of software developers have fewer than 10 years of experience</li>
<li>87% of software developers have fewer than 15 years of experience</li>
<li>94% of software developers have fewer than 20 years of experience</li>
</ul>
<p>I've been writing software since 1987, earning my
Computer Science degree and starting my first professional job in 1998.</p>
<p>Young developers often ask where all the old programmers went. 
They wishfully think we all got rich and retired early. A few did, but that's pretty rare.
Many moved to management. Some moved to project management, UI/UX design, and other related fields.
The rest of us are still here, but with exponential growth in jobs we are vastly outnumbered!</p>
<p>Those of us with a passion for the work stick around. We continue to perfect our craft. 
We learn from mentors like <a target="_blank" href="http://cleancoder.com/">"Uncle Bob" Martin</a>. 
We mentor others, trying to level-up our younger colleagues, so that they too can become "seniors". 😉</p>
]]></content:encoded></item><item><title><![CDATA[AWS re:Invent 2021 from an application developer's perspective]]></title><description><![CDATA[I spoke with Jeff DeVerter, Rackspace's Chief Technology Evangelist, today about AWS re:Invent 2021 from a software developer's perspective. 
Here's a link to the recording on LinkedIn.
(Unfortunately my video quality was terrible. Thanks Cox Communi...]]></description><link>https://adam.fanello.net/aws-reinvent-2021-from-an-application-developers-perspective</link><guid isPermaLink="true">https://adam.fanello.net/aws-reinvent-2021-from-an-application-developers-perspective</guid><category><![CDATA[AWS]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Thu, 09 Dec 2021 23:05:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650492787109/pG2xvP-gQ.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I spoke with <a target="_blank" href="https://www.linkedin.com/in/jdeverter/">Jeff DeVerter</a>, Rackspace's Chief Technology Evangelist, today about AWS re:Invent 2021 from a software developer's perspective. </p>
<p>Here's a <a target="_blank" href="https://www.linkedin.com/posts/adam-fanello_reinvent2021-activity-6874743599066685440-TjlM">link to the recording on LinkedIn</a>.</p>
<p>(Unfortunately my video quality was terrible. Thanks Cox Communications!)</p>
]]></content:encoded></item><item><title><![CDATA[Science Says this One Weird Trick indicates Emotional Intelligence]]></title><description><![CDATA[This is the most clickbait title I could think of. Why did you click it?!
Note to authors and editors: If you use any of these phrases, I will not click!
Why did you?
Science says…
“Science” doesn’t talk. A scientific study may indicate some interest...]]></description><link>https://adam.fanello.net/science-says-this-one-weird-trick-indicates-emotional-intelligence-d80ec0a48a0a</link><guid isPermaLink="true">https://adam.fanello.net/science-says-this-one-weird-trick-indicates-emotional-intelligence-d80ec0a48a0a</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Sun, 25 Apr 2021 03:49:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409273495/BCgh0IXe_.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the most clickbait title I could think of. <em>Why did you click it?!</em></p>
<p>Note to authors and editors: <em>If you use any of these phrases, I will not click!</em></p>
<p>Why did you?</p>
<h3 id="heading-science-says">Science says…</h3>
<p>“Science” doesn’t talk. A <em>scientific study</em> may indicate some interesting discovery. Real science employs the <em>scientific method</em> of hypothesis, experimentation, and peer review. If an article claims that “science” says something, and it is missing these components or a source with them… it isn’t science. It’s either marketing or political manipulation.</p>
<h3 id="heading-one-weird-trick">One Weird Trick</h3>
<p>This is just pure clickbait by uncreative marketing people. Anybody still using this isn’t even trying anymore.</p>
<h3 id="heading-emotional-intelligence">Emotional Intelligence</h3>
<p>This is the latest catch phrase for being in control of your emotions, and having empathy for others. Long ago we’d call it “being in touch with your feminine side”, because somehow we didn’t realize how ridiculously sexist that was. (Or just didn’t care, and thus weren’t? 🤔) Anyway, this is a new headline fad. I don’t even know what sort of articles are behind these headlines, because I refuse to click on them.</p>
]]></content:encoded></item><item><title><![CDATA[Reduce Existing Javascript Lambda Package by 49%? Yes please!]]></title><description><![CDATA[Photo by Markus Spiske on Unsplash
I was reading about the new aws-sdk for Javascript version 3 (aws-sdk-js-v3), and part of the advancement is the modularization of the SDK to reduce the resulting deployment package size — and by extension cold-star...]]></description><link>https://adam.fanello.net/reduce-existing-javascript-lambda-package-by-49-yes-please-fe08c9aa60d</link><guid isPermaLink="true">https://adam.fanello.net/reduce-existing-javascript-lambda-package-by-49-yes-please-fe08c9aa60d</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Tue, 29 Dec 2020 17:56:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409281489/LTGrZcRBY.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Photo by <a target="_blank" href="https://unsplash.com/@markusspiske?utm_source=medium&amp;utm_medium=referral">Markus Spiske</a> on <a target="_blank" href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></p>
<p>I was reading about the new aws-sdk for Javascript version 3 (aws-sdk-js-v3), and part of the advancement is the modularization of the SDK to reduce the resulting deployment package size — and by extension cold-start time.</p>
<blockquote>
<p>You can read about the new v3 SDK, including the modular architecture, <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2020/12/aws-sdk-javascript-version-3-generally-available/">here</a>.</p>
</blockquote>
<p>What if you are working on a project though that is stabilizing for release, or you just aren’t ready to make the jump to v3 yet? That’s where I was, but out of curiosity I read the <a target="_blank" href="https://github.com/aws-samples/aws-sdk-js-v3-workshop/blob/master/packages/backend/README.md">migration guide</a> and learned that the first step is to convert the existing aws-sdk-js v2 imports away from things like:</p>
<pre><code class="lang-typescript">import AWS from "aws-sdk";
const dynamoDB = new AWS.DynamoDB();
</code></pre>
<p>into:</p>
<pre><code class="lang-typescript">import DynamoDB from "aws-sdk/clients/dynamodb";
const dynamoDB = new DynamoDB();
</code></pre>
<blockquote>
<p><strong>I’m already using Webpack and Tree Shaking!</strong></p>
</blockquote>
<p>This would supposedly reduce the amount of code imported, even with v2. So maybe I could make this simple change, preparing the code base for v3 later <em>and</em> safely improve performance now? I wasn’t sure how much such a change would help, since the project was already tree-shaking with <a target="_blank" href="https://github.com/serverless-heaven/serverless-webpack">webpack</a>. I was aware though that webpack doesn’t work well on aws-sdk-js… so maybe?</p>
<p>To try it out, I first added a new rule to <strong>tslint.json</strong> (I’m using Typescript):</p>
<pre><code class="lang-json">"import-blacklist": [true, "aws-sdk"]
</code></pre>
<p>With that, tslint highlights all the places that need to be changed and bans anyone from unwittingly doing a top-level import later and undoing any benefits. Cool. 😎</p>
<p>Because the code base uses <a target="_blank" href="https://github.com/onicagroup/hexagonal-example">hexagonal architecture</a>, tslint only found twelve files that needed to be updated. 👍 I made a few changes such as:</p>
<pre><code class="lang-diff">- import {ApiGatewayManagementApi} from "aws-sdk";
+ import ApiGatewayManagementApi from "aws-sdk/clients/apigatewaymanagementapi";
</code></pre>
<p>and</p>
<pre><code class="lang-diff">- import * as AWS from 'aws-sdk';
+ import Iot from 'aws-sdk/clients/iot';
+ import IotData from 'aws-sdk/clients/iotdata';
+ import {AWSError} from 'aws-sdk/lib/error';
</code></pre>
<p>A quick deploy and into the AWS Console I go to look in the stack deployment buckets in AWS S3 and compare the new and previous packages. <strong>What I found sealed the deal!</strong> Here are a few of the results:</p>
<pre><code class="lang-plain">╔════════════╦═══════════════╦══════════╦═══════════╗
║ Stack Name ║ Previous Size ║ New Size ║ Reduction ║
╠════════════╬═══════════════╬══════════╬═══════════╣
║ ingest     ║     9.0 MB    ║  7.4 MB  ║    18%    ║
║ provision  ║     4.2 MB    ║  1.8 MB  ║    57%    ║
║ device-api ║    14.3 MB    ║  6.5 MB  ║    55%    ║
║ general-api║     1.4 MB    ║  0.6 MB  ║    57%    ║
║ account-api║     5.6 MB    ║  2.4 MB  ║    57%    ║
╚════════════╩═══════════════╩══════════╩═══════════╝
</code></pre>
<blockquote>
<p>Overall average reduction in size: 49%</p>
</blockquote>
<p>It was an easy change that took mere <em>minutes</em> to forever speed up cold starts.</p>
<h3 id="heading-yes-bundle-your-aws-sdk">Yes, bundle your AWS SDK!</h3>
<p>One last note, as I’m sure someone will point out that you can reduce sizes even more by using the SDK already within the Lambda execution environment instead of bundling it with your code.</p>
<blockquote>
<p>Don’t fall for that trap!</p>
</blockquote>
<p>That goes against <a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html">Best practices for working with AWS Lambda functions</a>, which includes controlling your dependencies. The SDK in the execution environment is there for quick little functions with no other dependencies — often made from within the AWS Console. Relying on this ever-changing version of the SDK in your production environment means that you never know when your system will change from under you and mysteriously <em>break</em>. It isn’t worth it! Bundle your dependencies, but <em>only</em> what you need.</p>
]]></content:encoded></item><item><title><![CDATA[Hexagonal Architecture by Example (Meetup)]]></title><description><![CDATA[I was the featured presenter for JavaScriptLA Meetup tonight. Watch the recording on
YouTube!
The code and text of this presentation are in GitHub.
Here's the details from the Meetup event:

Onica's Lead Software Architect, Adam Fanello, will introdu...]]></description><link>https://adam.fanello.net/hexagonal-architecture-by-example-meetup</link><guid isPermaLink="true">https://adam.fanello.net/hexagonal-architecture-by-example-meetup</guid><category><![CDATA[AWS]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Meetup]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Thu, 30 Jul 2020 07:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650493438058/4VaVdXVzz.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was the featured presenter for JavaScriptLA Meetup tonight. Watch the recording on
<a target="_blank" href="https://www.youtube.com/watch?v=qZEMSK6S0QM">YouTube</a>!</p>
<p>The code and text of this presentation are in <a target="_blank" href="https://github.com/onicagroup/hexagonal-example">GitHub</a>.</p>
<p>Here's the details from the <a target="_blank" href="https://www.meetup.com/javascriptla/events/271779905/">Meetup event</a>:</p>
<hr />
<p>Onica's Lead Software Architect, Adam Fanello, will introduce how to use hexagonal architecture to organize your code so that it is focused on what makes your code unique while being scalable, testable, and flexible. Theory will be put into practice by showing how an unorganized feature can be transformed.</p>
<p>About the Presenter:</p>
<p>Adam Fanello is a Lead Software Architect at Onica, a Rackspace Technology Company, focused on cloud native development in AWS as well as the web app clients that use the cloud. He's been creating full stack web applications since the days of JSP.</p>
]]></content:encoded></item><item><title><![CDATA[Migrating a Legacy App to Cloud Native — Part 8]]></title><description><![CDATA[Ship it!
This is part 8 in a series documenting my journey migrating my progressive web app, called SqAC, to AWS cloud native. If you haven’t been following it before now, here are the previous posts:

Part 1: Background
Part 2: Requirements & Archit...]]></description><link>https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-8-bd908b9bb199</link><guid isPermaLink="true">https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-8-bd908b9bb199</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Wed, 25 Mar 2020 17:50:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409315552/ZMQBZx__a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ship it!</p>
<p>This is part 8 in a series documenting my journey migrating my progressive web app, called SqAC, to AWS cloud native. If you haven’t been following it before now, here are the previous posts:</p>
<ul>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-1-68a1adbb95d5">Part 1: Background</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">Part 2: Requirements &amp; Architecture</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">Part 3: Authentication</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953">Part 4: Add Cloud Storage</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-5-34696c6f0f43">Part 5: Use Cloud Storage</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-6-280cd65a0937">Part 6: AppSync API and S3 Trigger</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-7-ec359519e04a">Part 7: Federation &amp; Guest Access</a></li>
</ul>
<p>Now in part 8, I’ll add the final touches to the new AWS-hosted application and take it live! Also, my final conclusions.</p>
<h3 id="heading-custom-domain-name">Custom Domain Name</h3>
<p>Hosting an application on S3 is easy — we did it back in <a target="_blank" href="https://adamfanello.medium.com/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">part 3</a> with the <code>amplify hosting add</code> command. Easy, but not production worthy with a URL like <a target="_blank" href="http://sqac-amplify-20190817123020-hostingbucket-dev.s3-website-us-west-2.amazonaws.com">http://sqac-amplify-20190817123020-hostingbucket-dev.s3-website-us-west-2.amazonaws.com</a></p>
<p>In <a target="_blank" href="https://adamfanello.medium.com/migrating-a-legacy-app-to-cloud-native-part-7-ec359519e04a">part 7</a>, CloudFront was added in order to have HTTPS, and we got a more succinct URL at <a target="_blank" href="https://d3l0j9nusq7n6r.cloudfront.net"><em>https://d3l0j9nusq7n6r.cloudfront.net</em></a>. That was better, but still not something I can ask someone to type into their browser. No, a custom domain is needed. In fact, my old application is running at <em>sqac.fanello.net</em>, and it was time for this new cloud native version to take its place.</p>
<p>I first tried <a target="_blank" href="https://docs.aws.amazon.com/amplify/latest/userguide/custom-domains.html#custom-domain-third-party">setting up a custom domain</a> on my application via the AWS Amplify Console. The console, though, didn’t render much new content upon clicking the <em>Add Domain</em> button, and I found <em>errors</em> in the browser dev console. 😲 I suspect this is because the feature centers around using AWS Amplify Console as a build pipeline, which I don’t do, and without that, domain management fails. Unfriendly UI, but it’s okay. This isn’t really an Amplify problem to solve; my app is just hosted in an S3 bucket fronted by CloudFront, so I need to find out how to attach a custom domain to that. If this were just HTTP (no S), it would be a simple matter of adding a <a target="_blank" href="https://en.wikipedia.org/wiki/CNAME_record">DNS CNAME record</a> to forward my sub-domain to the AWS hosting domain. The security certificate provided by CloudFront complicates that, because I can’t have a subdomain under <em>fanello.net</em> serve up a <em>cloudfront.net</em> certificate and expect a browser to accept it.</p>
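<p>The browser’s rejection here comes down to hostname matching: the default CloudFront certificate only names <em>cloudfront.net</em> hosts. A simplified sketch of that check (illustrative only; real browser validation follows RFC 6125 and is more involved):</p>

```typescript
// Simplified sketch of certificate hostname matching: the requested host
// must match one of the certificate's names, either exactly or via a
// single-label wildcard. Real browser validation is more involved.
function certCoversHost(host: string, certNames: string[]): boolean {
  return certNames.some(name => {
    if (name.startsWith('*.')) {
      const suffix = name.slice(1); // e.g. '.cloudfront.net'
      const firstDot = host.indexOf('.');
      return firstDot > 0 ? host.slice(firstDot) === suffix : false;
    }
    return name === host;
  });
}

// A fanello.net subdomain is not covered by the default CloudFront cert:
console.log(certCoversHost('sqac.fanello.net', ['*.cloudfront.net']));            // false
console.log(certCoversHost('d3l0j9nusq7n6r.cloudfront.net', ['*.cloudfront.net'])); // true
```

<p>Hence the need for a certificate issued for <em>sqac.fanello.net</em> itself, which is where AWS Certificate Manager comes in below.</p>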
<p>I tried <a target="_blank" href="https://docs.aws.amazon.com/amplify/latest/userguide/howto-third-party-domains.html">Connecting to Third-Party Custom Domains</a>, but that isn’t quite what I was trying to accomplish and so was unhelpful in itself. However, it led me to <a target="_blank" href="https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html">AWS Certificate Manager</a>.</p>
<p>In AWS Console, I went to the AWS Certificate Manager and chose to <a target="_blank" href="https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html">create a new public certificate</a>. (I did first <a target="_blank" href="https://aws.amazon.com/certificate-manager/pricing/">check</a> the pricing on such a certificate: it’s free!) I set up DNS Validation and then waited for it to happen… and nothing did.</p>
<p>After a couple of tries, with a few days of waiting each time, I eventually found that the CNAME record name provided had a trailing dot that I had to remove when entering it in my DNS name server. 🤷‍♂️</p>
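<p>For anyone scripting this rather than clicking through a console, the fix amounts to stripping the trailing dot from the fully-qualified record name that ACM hands back before giving it to a DNS provider that expects a relative name. A tiny illustrative helper (my own, not part of any AWS SDK; the record name below is hypothetical):</p>

```typescript
// ACM returns DNS validation record names in fully-qualified form, with a
// trailing dot. Some DNS providers expect the name without it.
function stripTrailingDot(recordName: string): string {
  return recordName.endsWith('.') ? recordName.slice(0, -1) : recordName;
}

// Hypothetical validation record name, for illustration:
console.log(stripTrailingDot('_abc123.sqac.fanello.net.')); // '_abc123.sqac.fanello.net'
```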
<p>When I tried to then attach the certificate to my CloudFront distribution, it wasn’t found. A quick Internet search <a target="_blank" href="https://aws.amazon.com/premiumsupport/knowledge-center/custom-ssl-certificate-cloudfront/">turned up</a> that CloudFront can only use certificates in us-east-1 (N. Virginia), something no tutorial bothered mentioning. 😞 It <em>is</em> noted on the CloudFront distribution settings page, so I would have known had I read the small light grey text under the form field. Thus, after a week spent getting my cert set up in us-west-2 (Oregon), I had to start over again. Fortunately, I had learned my lessons the first time. I removed the trailing dot, set the DNS TTL (Time To Live) values low, and had the new certificate set up in about five minutes.</p>
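<p>A deploy script can guard against this region trap up front: an ACM certificate ARN embeds its region, so it can be checked before ever touching CloudFront. A hedged sketch (my own helper, not an AWS API; the account ID is a placeholder):</p>

```typescript
// CloudFront only accepts ACM certificates issued in us-east-1.
// An ACM ARN has the form arn:aws:acm:REGION:ACCOUNT:certificate/ID,
// so the region can be read out of the ARN before attaching it.
function certUsableByCloudFront(certificateArn: string): boolean {
  const region = certificateArn.split(':')[3];
  return region === 'us-east-1';
}

console.log(certUsableByCloudFront('arn:aws:acm:us-east-1:111122223333:certificate/abc')); // true
console.log(certUsableByCloudFront('arn:aws:acm:us-west-2:111122223333:certificate/abc')); // false
```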
<p>Finally having the needed certificate, I could configure the CloudFront distribution by <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html">editing the settings</a>. I added the certificate, set the “Alternative Domain Names” to my subdomain, turned on HTTP/2 (why is it not on by default?), and lowered the price class to U.S., Canada, and Europe. Since the progressive web app is strongly cached by the browser, the slower access elsewhere in the world will rarely matter.</p>
<p>Refresh at <a target="_blank" href="https://sqac.fanello.net">https://sqac.fanello.net</a> aaaand… failure. The certificate still showed my old Let’s Encrypt issued certificate. After about a week of playing around with the certificate, I had forgotten the initial goal! 🤦‍♂️ After setting the <em>sqac.fanello.net</em> subdomain as a CNAME to point to my CloudFront distribution…</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409314201/wc3vxqhN-.png" alt /></p>
<p>I had a valid certificate from Amazon and the app loaded, but access to the S3 bucket managed by Amplify Storage was denied. Progress!</p>
<p>If you look back at the <a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-7-ec359519e04a">last blog post</a>, after setting up CloudFront I then had to reconfigure authentication with the CloudFront URL. Now I have to change that to my new domain:</p>
<pre><code class="lang-shell">$ amplify update auth
</code></pre>
<p>Please note that certain attributes may not be overwritten if you choose to use defaults settings.</p>
<p>You have configured resources that might depend on this Cognito resource.  Updating this Cognito resource could have unintended side effects.</p>
<pre><code class="lang-plain"> Using service: Cognito, provided by: awscloudformation  
 What do you want to do? Add/Edit signin and signout redirect URIs
 Which redirect signin URIs do you want to edit? https://d3l0j9nusq7n6r.cloudfront.net
 ? Update https://d3l0j9nusq7n6r.cloudfront.net/ https://sqac.fanello.net/  
 Do you want to add redirect signin URIs? No  
 Which redirect signout URIs do you want to edit? https://d3l0j9nusq7n6r.cloudfront.net/ 
? Update https://d3l0j9nusq7n6r.cloudfront.net/ https://sqac.fanello.net/ 
 Do you want to add redirect signout URIs? No  
TypeError: Cannot use 'in' operator to search for 'dev' in undefined  
    at keys.forEach.key (/Users/adamfanello/.nvm/versions/node/v10.16.3/lib/node_modules/@aws-amplify/cli/lib/extensions/amplify-helpers/envResourceParams.js:33:19)  
    at Array.forEach (&lt;anonymous&gt;)  
    at getOrCreateSubObject (/Users/adamfanello/.nvm/versions/node/v10.16.3/lib/node_modules/@aws-amplify/cli/lib/extensions/amplify-helpers/envResourceParams.js:32:10)  
    at AmplifyToolkit.saveEnvResourceParameters [as _saveEnvResourceParameters] (/Users/adamfanello/.nvm/versions/node/v10.16.3/lib/node_modules/@aws-amplify/cli/lib/extensions/amplify-helpers/envResourceParams.js:66:23)  
    at Object.saveResourceParameters (/Users/adamfanello/.nvm/versions/node/v10.16.3/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/resourceParams.js:26:19)  
    at saveResourceParameters (/Users/adamfanello/.nvm/versions/node/v10.16.3/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-auth/provider-utils/awscloudformation/index.js:110:12)  
    at serviceQuestions.then (/Users/adamfanello/.nvm/versions/node/v10.16.3/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-auth/provider-utils/awscloudformation/index.js:330:7)  
    at process._tickCallback (internal/process/next_tick.js:68:7)  
There was an error adding the auth resource
</code></pre>
<p>Sigh. 😞 I issued Amplify CLI <a target="_blank" href="https://github.com/aws-amplify/amplify-cli/issues/3273">bug #3273</a>. It turns out that <em>something</em> deleted my <code>team-provider-info.json</code> file, which is no longer in source control because it contained the Google Sign-In secret. I went to re-pull the environment from AWS, and thus recreate the file, but that too failed and I issued Amplify CLI <a target="_blank" href="https://github.com/aws-amplify/amplify-cli/issues/3274">bug #3274</a>. 🙄 As a last straw, I found the old version of the file from source control, and then issued an <code>amplify env pull</code>. This time it worked, and prompted me for the Google authentication ID and secret, which I recovered from the Google Cloud Platform console. While in the GCP console, I realized I also needed to add the new subdomain in the list of <code>Authorized redirect URIs</code>, and so did so.</p>
<h3 id="heading-then-disaster">Then, Disaster</h3>
<p>Everything was in place: custom domain name, SSL certificate, DNS redirection, missing file restored, and authentication reconfigured. Now was the time for the ultimate triumph! I issued an <code>amplify push</code>, and it froze. 😱 An hour later, the CloudFormation stack timed out and began a rollback to recover. After <em>another</em> hour, the rollback had failed too. The end explanation:</p>
<pre><code>The <span class="hljs-keyword">following</span> resource(s) failed <span class="hljs-keyword">to</span> <span class="hljs-keyword">update</span>: [OAuthCustomResourceInputs]
</code></pre><p>Huh? The top Google search hit for this error message was <em>my own blog post</em> from <a target="_blank" href="https://adamfanello.medium.com/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">part 3</a> of this series! Unfortunately that problem was different. That time, there was a log in CloudWatch explaining what it didn’t like. This time there’s:</p>
<pre><code>ERROR Uncaught <span class="hljs-keyword">Exception</span>   
{  
    "errorType": "Runtime.ImportModuleError",  
    "errorMessage": "Error: Cannot find module 'cfn-response'",  
    "stack": [  
        "Runtime.ImportModuleError: Error: Cannot find module 'cfn-response'",  
        "    at _loadUserApp (/var/runtime/UserFunction.js:100:13)",  
        "    at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",  
        "    at Object.&lt;anonymous&gt; (/var/runtime/index.js:45:30)",  
        "    at Module._compile (internal/modules/cjs/loader.js:778:30)",  
        "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)",  
        "    at Module.load (internal/modules/cjs/loader.js:653:32)",  
        "    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)",  
        "    at Function.Module._load (internal/modules/cjs/loader.js:585:3)",  
        "    at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)",  
        "    at startup (internal/bootstrap/node.js:283:19)"  
    ]  
}
</code></pre><p>Searching for that turned up <a target="_blank" href="https://github.com/aws/aws-sdk-js/issues/2955">this issue</a> with upgrading CloudFormation custom lambdas to Node.js 10 — something recently added by Amplify. I edited my authentication CloudFormation template to fix the relative path import as noted in the issue. The <a target="_blank" href="https://aws-amplify.github.io/docs/cli/lambda-node-version-update">Amplify Node.js update instructions</a> mention this change in behavior, but only in terms of our own custom functions, not the ones created by Amplify and hidden within CloudFormation templates. 😡</p>
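<p>The edit itself was tiny. As a hedged illustration only (the precise path depends on how the function is packaged in the template, and this helper is mine, not Amplify’s), the change boils down to rewriting the module path in the inline function source:</p>

```typescript
// On the nodejs10.x Lambda runtime, the vendored 'cfn-response' helper must
// be required by a relative path. This sketch rewrites an inline function
// body accordingly; it is illustrative, not the actual Amplify template code.
function patchCfnResponseRequire(inlineSource: string): string {
  return inlineSource
    .split("require('cfn-response')")
    .join("require('./cfn-response')");
}

const before = "const response = require('cfn-response');";
console.log(patchCfnResponseRequire(before));
// "const response = require('./cfn-response');"
```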
<p>With that fixed, I still didn’t know how to recover. My “auth” CloudFormation stack was in the <code>UPDATE_ROLLBACK_FAILED</code> state. Multiple attempts to continue the rollback failed as well (after an hour each). I suspect that it was still trying to run the bad custom lambdas.</p>
<p>Failures like this can happen with any technology. This is why backups are important, and when dealing with cloud services, we must always regard infrastructure as ephemeral. <em>Data</em>, however, is not ephemeral. This stack contains the Cognito users. Also, as a <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html">nested stack</a> within the entire application, its being in a failed state means the entire application stack is in a failed state.</p>
<p>I contacted someone on the AWS Amplify team asking for help. That contact, whom I met on LinkedIn and at re:Invent 2019, was quick to respond and pass me to a colleague on the Amplify CLI team. A couple email exchanges led simply to: “works for me”. 😢</p>
<h3 id="heading-starting-over">Starting Over</h3>
<p>Fortunately this was a development environment, and so the only data was my own. I had intended to simply roll it into production, but with the broken stack I was forced to start over with a brand new production environment:</p>
<blockquote>
<p>$ <strong>amplify init</strong><br />Scanning for plugins...<br />Plugin scan successful<br />Note: It is recommended to run this command from the root of your app directory<br />? Do you want to use an existing environment? <strong>No</strong><br />? Enter a name for the environment <strong>prod</strong><br />Using default provider  awscloudformation</p>
<p>For more information on AWS Profiles, see:<br />https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html</p>
<p>? Do you want to use an AWS profile? <strong>Yes</strong><br />? Please choose the profile you want to use <strong>sqac-amplify-cli</strong><br />Adding backend environment prod to AWS Amplify Console app: d2g2lwfk7ok1zm<br />⠹ Initializing project in the cloud...</p>
<p>CREATE_IN_PROGRESS amplify-sqac-amplify-prod-161346 AWS::CloudFormation::Stack Sat Feb 01 2020 16:13:48 GMT-0800 (Pacific Standard Time) User Initiated<br />CREATE_IN_PROGRESS DeploymentBucket                 AWS::S3::Bucket            Sat Feb 01 2020 16:13:51 GMT-0800 (Pacific Standard Time)<br />CREATE_IN_PROGRESS UnauthRole                       AWS::IAM::Role             Sat Feb 01 2020 16:13:51 GMT-0800 (Pacific Standard Time)<br />CREATE_IN_PROGRESS AuthRole                         AWS::IAM::Role             Sat Feb 01 2020 16:13:52 GMT-0800 (Pacific Standard Time)<br />CREATE_IN_PROGRESS UnauthRole                       AWS::IAM::Role             Sat Feb 01 2020 16:13:52 GMT-0800 (Pacific Standard Time) Resource creation Initiated<br />CREATE_IN_PROGRESS DeploymentBucket                 AWS::S3::Bucket            Sat Feb 01 2020 16:13:52 GMT-0800 (Pacific Standard Time) Resource creation Initiated<br />CREATE_IN_PROGRESS AuthRole                         AWS::IAM::Role             Sat Feb 01 2020 16:13:52 GMT-0800 (Pacific Standard Time) Resource creation Initiated<br />⠹ Initializing project in the cloud...</p>
<p>CREATE_COMPLETE UnauthRole AWS::IAM::Role Sat Feb 01 2020 16:14:06 GMT-0800 (Pacific Standard Time)<br />CREATE_COMPLETE AuthRole   AWS::IAM::Role Sat Feb 01 2020 16:14:07 GMT-0800 (Pacific Standard Time)<br />⠇ Initializing project in the cloud...</p>
<p>CREATE_COMPLETE DeploymentBucket AWS::S3::Bucket Sat Feb 01 2020 16:14:13 GMT-0800 (Pacific Standard Time)<br />⠸ Initializing project in the cloud...</p>
<p>CREATE_COMPLETE amplify-sqac-amplify-prod-161346 AWS::CloudFormation::Stack Sat Feb 01 2020 16:14:15 GMT-0800 (Pacific Standard Time)<br />✔ Successfully created initial AWS cloud resources for deployments.<br />✔ Initialized provider successfully.  </p>
<p>You've opted to allow users to authenticate via Google.  If you haven't already, you'll need to go to <a target="_blank" href="https://developers.google.com/identity">https://developers.google.com/identity</a> and create<br />an App ID.   </p>
<p>Enter your Google Web Client ID for your OAuth flow:  <strong></strong><br /> Enter your Google Web Client Secret for your OAuth flow:  <strong></strong><br />? Do you want to configure Lambda Triggers for Cognito? <strong>No</strong><br />Initialized your environment successfully.</p>
<p>Your project has been successfully initialized and connected to the cloud!</p>
<p>Some next steps:<br />"amplify status" will show you what you've added already and if it's locally configured or deployed<br />"amplify add &lt;category&gt;" will allow you to add features like user login or a backend API<br />"amplify push" will build all your local backend resources and provision it in the cloud<br />"amplify console" to open the Amplify Console and view your project status<br />"amplify publish" will build all your local backend and frontend resources (if you have hosting category added) and provision it in the cloud</p>
<p>Pro tip:<br />Try "amplify add api" to create a backend API and then "amplify publish" to deploy everything</p>
</blockquote>
<p>Then push up the new environment:</p>
<blockquote>
<p>$ <strong>amplify push</strong><br />✔ Successfully pulled backend environment prod from the cloud.</p>
<p>Current Environment: prod</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Category</td><td>Resource name</td><td>Operation</td><td>Provider plugin</td></tr>
</thead>
<tbody>
<tr>
<td>Hosting</td><td>S3AndCloudFront</td><td>Create</td><td>awscloudformation</td></tr>
<tr>
<td>Auth</td><td>sqacauth</td><td>Create</td><td>awscloudformation</td></tr>
<tr>
<td>Storage</td><td>storage</td><td>Create</td><td>awscloudformation</td></tr>
<tr>
<td>Function</td><td>S3Trigger08755fbf</td><td>Create</td><td>awscloudformation</td></tr>
<tr>
<td>Api</td><td>sqacamplify</td><td>Create</td><td>awscloudformation</td></tr>
</tbody>
</table>
</div><p>? Are you sure you want to continue? Yes</p>
<p>GraphQL schema compiled successfully.</p>
<p>Edit your schema at /Users/adamfanello/dev/sqac/sqac-amplify/amplify/backend/api/sqacamplify/schema.graphql or place .graphql files in a directory at /Users/adamfanello/dev/sqac/sqac-amplify/amplify/backend/api/sqacamplify/schema<br />? Do you want to update code for your updated GraphQL API No<br />⠹ Updating resources in the cloud. This may take a few minutes...</p>
<p>... Lots of CloudFormation output as everything is built ...</p>
</blockquote>
<p>See pull request <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/16">part8/go-live</a>.</p>
<p>With this complete, I installed the seed data into S3 (with the S3 trigger automatically indexing it into DynamoDB), and the new SqAC is now live! One long (weekend-warrior) journey migrating a legacy app to cloud-native AWS is complete! 🎉</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>As you saw, doing a clean install to a new environment got past the failed stack. My celebration felt tainted though, with the dead stack still staring at me in bright red on the AWS Console. While creating a fresh production environment was probably the right thing to do anyway, it is not the solution to every problem. Developing and maintaining a software system over months and years involves updates. Updates that can’t lose users and user data. Being unable to find the root cause of the broken stack makes me very nervous about trusting Amplify with user data. There’s much good and much promise in Amplify — the new <a target="_blank" href="https://aws-amplify.github.io/docs/js/datastore">Amplify DataStore</a> feature looks <em>a-maz-ing</em> — but my final experience leaves me hesitant to use or recommend this toolset. I waited several weeks to write this final post. Part of that delay was to give the Amplify team time to discover an explanation. Perhaps the bigger reason is that, after seven months on this exploration, I really did not want it to end this way. 😕</p>
<p>In the end, my final conclusion about using AWS Amplify is simple:</p>
<p><em>There are no shortcuts</em></p>
<p>Amplify comes across as a magic toolset to let frontend developers create applications without being backend developers or cloud engineers. This simply isn’t the case, and the members of the Amplify team that I have met tell me that this was never the intent. It is a toolset to <em>help</em> with the minutiae of managing and using cloud infrastructure. As my journey and (excruciatingly detailed) blog posts have shown, you still need to understand the AWS services behind everything Amplify is helping you with. A tool is only useful in the hands of someone who knows how to use it, and there is truth in the phrase: “I know enough to be dangerous.” My advice for app developers looking to leverage the cloud: learn the full stack or team up with a solutions architect. Find an AWS Meetup group in your area and do some networking. Find a <a target="_blank" href="https://www.linkedin.com/feed/hashtag/thinkcloudnative/">consultancy that can #thinkcloudnative</a>. AWS is still the most comprehensive platform on which to develop, so go forth and create!</p>
<p><em>(This story was originally posted</em> <a target="_blank" href="http://fanello.net/home/2020/03/25/migrating-a-legacy-app-to-cloud-native-part-8/"><em>here</em></a> <em>in March 2020.)</em></p>
]]></content:encoded></item><item><title><![CDATA[Migrating a Legacy App to Cloud Native — Part 7]]></title><description><![CDATA[Photo by Zan on Unsplash
This is part 7 in a series documenting my journey migrating my progressive web app, called SqAC, to AWS cloud native. If you haven’t been following it before now, here are the previous posts:

Part 1: Background
Part 2: Requi...]]></description><link>https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-7-ec359519e04a</link><guid isPermaLink="true">https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-7-ec359519e04a</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Sun, 19 Jan 2020 18:36:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409321176/qr3PgStQF.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Photo by <a target="_blank" href="https://unsplash.com/@zanilic?utm_source=medium&amp;utm_medium=referral">Zan</a> on <a target="_blank" href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></p>
<p>This is part 7 in a series documenting my journey migrating my progressive web app, called SqAC, to AWS cloud native. If you haven’t been following it before now, here are the previous posts:</p>
<ul>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-1-68a1adbb95d5">Part 1: Background</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">Part 2: Requirements &amp; Architecture</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">Part 3: Authentication</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953">Part 4: Add Cloud Storage</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-5-34696c6f0f43">Part 5: Use Cloud Storage</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-6-280cd65a0937">Part 6: AppSync API and S3 Trigger</a></li>
</ul>
<p>Now in part 7, I’ll enhance user accounts first with Google Sign-in (federation), and then with local (guest) accounts for users to try out the app first.</p>
<p>But first…</p>
<h3 id="heading-updated-dependencies">Updated Dependencies</h3>
<p>This series isn’t about Angular, so I won’t go into details. However, I found that to update to the latest Amplify libraries, I had to upgrade Angular from 6.0 to 8.0. At the same time, I updated various other dependencies.</p>
<p>Just about a week before AWS deprecated Node.js v8 in Lambda, Amplify finally added support for Node.js v10, so that update was performed as well.</p>
<p>Pull Requests of code changes:</p>
<ul>
<li><a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/8">Part 7 — Update Dependencies</a></li>
<li><a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/15">Part 7 — Amplify Update for Node.js 10</a></li>
</ul>
<h3 id="heading-add-google-sign-in">Add Google Sign-In</h3>
<p>The new cloud native version of SqAC gained user authentication via Cognito back in <a target="_blank" href="https://adamfanello.medium.com/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">part 3</a>. At the time, I didn’t add any federated sign-ins. The legacy app supported Google and Facebook sign-in, but I found that Facebook sign-in wasn’t popular. With Cognito accounts available in this new app, there’s no true <em>need</em> for any federated accounts, but personally I like using my Google account for authentication because it’s easier than tracking another account. Besides, this is another feature of Amplify to explore!</p>
<p>Amplify’s documentation for setting up Google is <a target="_blank" href="https://aws-amplify.github.io/docs/js/cognito-hosted-ui-federated-identity#google-sign-in-instructions-1">here</a>. Yes, I actually used a bit of Google Cloud Platform for my AWS native app. Breathe. It’s okay.</p>
<blockquote>
<p>$ <strong>amplify auth update</strong><br />Scanning for plugins...<br />Plugin scan successful<br />Please note that certain attributes may not be overwritten if you choose to use defaults settings.</p>
<p>You have configured resources that might depend on this Cognito resource.  Updating this Cognito resource could have unintended side effects.</p>
<p>Using service: Cognito, provided by: <strong>awscloudformation</strong><br />What do you want to do? <strong>Update OAuth social providers</strong><br />Select the identity providers you want to configure for your user pool: <strong>Google</strong>  </p>
<p>You've opted to allow users to authenticate via Google.  If you haven't already, you'll need to go to <a target="_blank" href="https://developers.google.com/identity">https://developers.google.com/identity</a> and create<br />an App ID.   </p>
<p>Enter your Google Web Client ID for your OAuth flow:  <strong></strong><br />Enter your Google Web Client Secret for your OAuth flow:  <strong></strong><br />Successfully updated resource sqacauth locally</p>
<p>Some next steps:<br />"amplify push" will build all your local backend resources and provision it in the cloud<br />"amplify publish" will build all your local backend and frontend resources (if you have hosting category added) and provision it in the cloud</p>
<p>$ <strong>amplify push</strong><br />✔ Successfully pulled backend environment dev from the cloud.</p>
</blockquote>
<p>An <code>amplify push</code> later, and Cognito is setup to use Google Sign-In. Once again, Amplify makes authentication easy! 🥳</p>
<p>In my Angular application’s account page, I injected <code>AmplifyService</code> as <code>amplifySvc</code> and added the function:</p>
<pre><code class="lang-typescript">signInWithGoogle() {  
   <span class="hljs-built_in">this</span>.amplifySvc.auth().federatedSignIn({provider: <span class="hljs-string">'Google'</span>});  
}
</code></pre>
<p>A button on the page calls this function. This takes me to the familiar Google account selection page, and then navigates back to my application. There were a few problems though:</p>
<ol>
<li>It didn’t recognize that I was signed in until I navigated to the account page. Simply rendering the <code>&lt;amplify-authenticator&gt;</code> component triggered the sign-in to be recognized, but I didn’t want that on the app landing page.</li>
<li>My name is “Unknown”.</li>
<li>No photograph of myself was received.</li>
</ol>
<p>We’ll get to problem #1 in a moment. The latter two problems caught me by surprise because the legacy SqAC application also supported Google sign-in, through <a target="_blank" href="http://www.passportjs.org/">Passport</a>. In the Cognito AWS Console, I was able to select attribute mapping for name and picture (email and sub were already checked). This resolved receiving my name and photograph. It was easy to do, but only because I already knew where to look, having experience with OAuth and Cognito. Someone looking to use Amplify without AWS experience would be lost.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409319847/CH3JHfDIo.png" alt /></p>
<p>Problem #1 remains: sign-in isn’t recognized right away. Upon sign-in with Google, the app reloads and I see the Cognito user information in browser local storage. Yet, <code>Auth.authStateChange$</code> reports a state of <code>“signedOut”</code>. Explicitly requesting the current auth user also returns “No current user”. If I reload the page, then the state becomes <code>“signedIn”</code> and all is well. This seems like a bug. 🐞 I added information to the existing <a target="_blank" href="https://github.com/aws-amplify/amplify-js/issues/4621">Amplify-js Issue #4621</a>. After some messages exchanged and some experimentation, here’s a copy of my comment after I resolved this issue:</p>
<blockquote>
<p>Okay, found two fixes based on your comments <a target="_blank" href="https://github.com/Amplifiyer">@Amplifiyer</a>, both involving using the <code>Hub</code> to listen for signIn. I first used this as a direct analogue to monitoring <code>Auth.authStateChange$</code>. The <code>Hub</code> told me when a federated user signed in/out, and <code>authStateChange$</code> for a user pool user. I still wanted to land on my accounts page though. So I changed it to navigate to the account page on <code>Hub signIn</code> event. Doing that caused <code>Auth.authStateChange$</code> to trigger when the auth component rendered, so I just rely on that. Here's the code change: <a target="_blank" href="https://github.com/kernwig/sqac-amplify/commit/8de758e5dcc8c0f6ef7ce729aa872acf0d96abc2">kernwig/sqac-amplify@8de758e</a></p>
<p>Note: The <code>Hub</code> doesn't play nice with Angular, thus the need to use <code>NgZone</code> directly.</p>
</blockquote>
<p><a target="_blank" href="https://aws-amplify.github.io/docs/js/hub">Amplify Hub</a> is an in-browser publish-subscribe capability that is used by Amplify and available for us to use too. In Angular, services and RxJS provide for similar behavior and so I hadn’t used Hub before this.</p>
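<p>To make the pattern concrete outside of Angular, here’s a minimal pub-sub sketch in the spirit of Hub — a simplified stand-in of my own, <em>not</em> the real <code>aws-amplify</code> API — showing how a listener on an <code>auth</code> channel can react to a <code>signIn</code> event by navigating:</p>

```typescript
// Minimal Hub-style pub-sub: channels map to listener lists, and
// dispatch fans an event out to every listener on that channel.
// This is a simplified analogue of Amplify Hub, for illustration only.
type HubEvent = { channel: string; payload: { event: string } };
type HubListener = (event: HubEvent) => void;

class MiniHub {
  private listeners: { [channel: string]: HubListener[] } = {};

  listen(channel: string, fn: HubListener): void {
    if (!this.listeners[channel]) {
      this.listeners[channel] = [];
    }
    this.listeners[channel].push(fn);
  }

  dispatch(channel: string, payload: { event: string }): void {
    (this.listeners[channel] || []).forEach(fn => fn({ channel, payload }));
  }
}

// React to a federated sign-in by "navigating" to the account page:
const hub = new MiniHub();
let route = '/';
hub.listen('auth', event => {
  if (event.payload.event === 'signIn') {
    route = '/account'; // in Angular this assignment would run inside NgZone
  }
});
hub.dispatch('auth', { event: 'signIn' });
console.log(route); // '/account'
```

<p>The design point is the same one my fix relied on: the sign-in notification arrives as an event on a channel, independent of which component happens to be rendered.</p>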
<h3 id="heading-auth-secrets">Auth Secrets</h3>
<p>Upon going to commit these changes to github, I noticed that my Google Web Client Secret was stored in <code>amplify/team-provider-info.json</code>. Whether or not to commit this file is discussed in <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/quickstart#environments-and-teams">this new section</a> of the Amplify documentation. I ended up removing the file from source control.</p>
<h3 id="heading-secure-hosting-andamp-authentication">Secure Hosting &amp; Authentication</h3>
<p>To take federated login live, I needed HTTPS hosting. (OAuth doesn’t allow http callback URLs.) Thus I had to update my hosting configuration to include <a target="_blank" href="https://aws.amazon.com/cloudfront/">CloudFront</a>, the AWS Content Delivery Network:</p>
<blockquote>
<p>$ <strong>amplify update hosting</strong><br />? Specify the section to configure <strong>CloudFront</strong><br />CloudFront is NOT in the current hosting<br />? Add CloudFront to hosting <strong>Yes</strong><br />? default object to return from origin <strong>index.html</strong><br />? Default TTL for the default cache behavior <strong>86400</strong><br />? Max TTL for the default cache behavior <strong>31536000</strong><br />? Min TTL for the default cache behavior <strong>60</strong><br />? Configure Custom Error Responses <strong>No</strong><br />? Specify the section to configure <strong>Publish</strong><br />You can configure the publish command to ignore certain directories or files.<br />Use glob patterns as in the .gitignore file.<br />? Please select the configuration action on the publish ignore. <strong>exit</strong><br />? Specify the section to configure <strong>exit</strong><br />$ <strong>amplify publish</strong></p>
</blockquote>
<p>This deployed SqAC to <a target="_blank" href="https://d3l0j9nusq7n6r.cloudfront.net">https://d3l0j9nusq7n6r.cloudfront.net</a>. I had to add this to the app’s <a target="_blank" href="https://console.cloud.google.com/apis/credentials">credentials at Google</a>, and then to my app:</p>
<blockquote>
<p>$ <strong>amplify update auth</strong><br />Please note that certain attributes may not be overwritten if you choose to use defaults settings.</p>
<p>You have configured resources that might depend on this Cognito resource.  Updating this Cognito resource could have unintended side effects.</p>
<p>Using service: Cognito, provided by: <strong>awscloudformation</strong><br /> What do you want to do? Add/Edit signin and signout redirect URIs<br /> Which redirect signin URIs do you want to edit? (Press &lt;space&gt; to select, &lt;a&gt; to toggle all, &lt;i&gt; to invert selection)<br /> Do you want to add redirect signin URIs? <strong>Yes</strong><br /> Enter your new redirect signin URI: <a target="_blank" href="https://d3l0j9nusq7n6r.cloudfront.net/"><strong>https://d3l0j9nusq7n6r.cloudfront.net/</strong></a><br />? Do you want to add another redirect signin URI <strong>No</strong><br /> Which redirect signout URIs do you want to edit? (Press &lt;space&gt; to select, &lt;a&gt; to toggle all, &lt;i&gt; to invert selection)<br /> Do you want to add redirect signout URIs? <strong>Yes</strong><br /> Enter your new redirect signout URI: <a target="_blank" href="https://d3l0j9nusq7n6r.cloudfront.net/"><strong>https://d3l0j9nusq7n6r.cloudfront.net/</strong></a><br />? Do you want to add another redirect signout URI <strong>No</strong><br />Successfully updated resource sqacauth locally</p>
<p>$ <strong>amplify push</strong></p>
</blockquote>
<p>But wait, this doesn’t work! 😱The Javascript library isn’t smart enough to choose between this CloudFront URL or the localhost one I already had. The <code>aws-exports.js</code> file contains both URLs comma-separated, but at runtime just the first one (localhost, since it was there first) is used. This utterly fails when running the app from CloudFront. See the <a target="_blank" href="https://github.com/aws-amplify/amplify-cli/issues/2792">issue report here</a>. How this gap continues to exist, I do not understand. Surely nearly all front-end developers work first locally (localhost) before deploying to the cloud (CloudFront)? I added this <em>hack</em> to the Angular <code>main.ts</code> to handle the problem:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> awsconfig <span class="hljs-keyword">from</span> <span class="hljs-string">'./aws-exports'</span>;
<span class="hljs-keyword">import</span> { environment } <span class="hljs-keyword">from</span> <span class="hljs-string">'./environments/environment'</span>;

<span class="hljs-comment">// Choose an OAuth config based on environment</span>
<span class="hljs-keyword">const</span> redirectSignInOptions = awsconfig.oauth.redirectSignIn.split(<span class="hljs-string">','</span>);
<span class="hljs-keyword">const</span> redirect = environment.production
   ? redirectSignInOptions.find(<span class="hljs-function"><span class="hljs-params">s</span> =&gt;</span> s.startsWith(<span class="hljs-string">'https'</span>))  
   : redirectSignInOptions.find(<span class="hljs-function"><span class="hljs-params">s</span> =&gt;</span> s.includes(<span class="hljs-string">'localhost'</span>));  
awsconfig.oauth.redirectSignIn = redirect;  
awsconfig.oauth.redirectSignOut = redirect;
</code></pre>
<p>Here’s another surprise: by default, <code>amplify publish</code> does not invalidate the CloudFront cache. 🙄 That means after we publish and reload in the browser, we will not see our changes! (Unless we wait a day - not even clearing the browser cache will help.) To actually publish and try out the changes, we must explicitly pass an option: <code>amplify publish --invalidateCloudFront</code>. Yes, this is documented <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/quickstart#workflow-1">here</a>, and of course I won’t remember this next time I publish, so I added a script to my <code>package.json</code> for it.</p>
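<p>The <code>package.json</code> script is a one-liner (the script name here is my own choice; the post doesn’t show it):</p>

```json
{
  "scripts": {
    "publish": "amplify publish --invalidateCloudFront"
  }
}
```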
<p>Pull Request: <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/13">Part 7 — Add Google Auth Provider</a></p>
<h3 id="heading-guest-access">Guest Access</h3>
<p>One of my requirements when migrating to AWS was to add guest access. I’ve had some complaints from folks who wanted to try out my app, but were turned off by having to authenticate with Google or Facebook in order to do anything. One way of appeasing this concern was to use a Cognito User Pool, so that users don’t need to link a social account. More impactful, though, is to allow basic functionality without creating an account at all. In SqAC, I called this a “local account” and provided some warnings about data loss without cloud backup. Still, this provides a means for someone to play with the app and decide if it is something they want to go further with, before providing an email address.</p>
<p>Most of my work consisted of changes to my client application itself. I was able to load public content from storage (AWS S3) without difficulty thanks to the work I did in <a target="_blank" href="https://adamfanello.medium.com/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953">part 4</a>. User data is stored in the browser’s <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB">IndexedDB</a>. When there’s a cloud account, this data is copied up to S3 as well; for local accounts this step is skipped. Everything was going great until I hit the last feature to test: search via AppSync. This failed with an exception thrown saying “no current user”. 🤔 This surprised me — I figured not including an <code>@auth</code> on my GraphQL type meant “no auth” and thus no need for a user. 🤷‍♂️ Nope. A search brought me to some background and ultimately I stumbled right to the mass of <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#public-authorization">Amplify documentation</a> that tells us how to do public authentication. It <em>says</em>...</p>
<blockquote>
<p>👉<strong><em>Note: Don’t do this! Read on!</em></strong> 👈<br />First, use API KEY authentication type:</p>
<p>$ <strong>amplify update api</strong><br />? Please select from one of the below mentioned services: <strong>GraphQL</strong><br />? Choose the default authorization type for the API <strong>API key</strong><br />? Enter a description for the API key: <strong>Public access</strong><br />? After how many days from now the API key should expire (1-365): <strong>7</strong><br />? Do you want to configure advanced settings for the GraphQL API <strong>No, I am done</strong>.</p>
<p>The following types do not have '@auth' enabled. Consider using @auth with @model  </p>
<ul>
<li>Collection<br />Learn more about @auth here: <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#auth">https://aws-amplify.github.io/docs/cli-toolchain/graphql#auth</a> </li>
</ul>
<p>GraphQL schema compiled successfully.</p>
</blockquote>
<p>Second, use auth mode API KEY and <code>@auth(rules: [{allow: public}])</code> on the GraphQL type.</p>
<p>I was nervous about the 7-day (default) expiration. Upon an <code>amplify push</code>, the CLI showed me a value for the “GraphQL API KEY” and it is set in my <code>aws-exports.js</code>. My app was able to successfully perform the GraphQL query without creating a Cognito user, but it was clear that this would stop working in a week. I could increase the expiration to 365 days, but that only delays the problem. Amplify CLI <a target="_blank" href="https://github.com/aws-amplify/amplify-cli/issues/1450">issue 1450</a> captures a conversation of people struggling with this.</p>
<p>Back in that <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#public-authorization">Amplify documentation</a>, it first says that API KEY must be used for public access, then gives a second example with IAM authentication for public access. Why not try that? 🤷‍♂️</p>
<p>I updated my GraphQL with <code>@auth(rules: [{allow: public, provider: iam}])</code> and then on the CLI:</p>
<blockquote>
<p>$ <strong>amplify update api</strong><br />? Please select from one of the below mentioned services: <strong>GraphQL</strong><br />? Choose the default authorization type for the API <strong>IAM</strong><br />? Do you want to configure advanced settings for the GraphQL API <strong>No, I am done.</strong></p>
<p>GraphQL schema compiled successfully.</p>
<p>$ <strong>amplify api gql-compile</strong><br />$ <strong>amplify push</strong></p>
</blockquote>
<p>A bit of testing with a cloud account and a local (guest) account — and it’s working! 🎉</p>
<p><em>Call out to the Amplify team:</em> <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#public-authorization">This bit</a> of documentation could be made clearer: there are two approaches, and readers need to know how to choose between them.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>As with everything Amplify (and honestly, doing new things in general), adding Google Sign-In and guest access was harder than it looked. It took some web searching and experimentation, but ultimately it was easier than it would have been without Amplify.</p>
<h3 id="heading-coming-next-time">Coming next time…</h3>
<p>Cut over! The legacy app is still at <a target="_blank" href="https://sqac.fanello.net/">https://sqac.fanello.net/</a>, and this new one is at <a target="_blank" href="https://d3l0j9nusq7n6r.cloudfront.net">https://d3l0j9nusq7n6r.cloudfront.net</a>. One has a slightly friendlier name than the other. 😉</p>
<p>I just paid for another month of DigitalOcean hosting for the legacy app, and hope for it to be the last.</p>
<p>Until next time. 😎</p>
<p><em>(This story was originally published</em> <a target="_blank" href="http://fanello.net/home/2020/01/19/migrating-a-legacy-app-to-cloud-native-part-7/"><em>here</em></a> <em>in January 2020.)</em></p>
]]></content:encoded></item><item><title><![CDATA[Key Serverless Announcements at re:Invent 2019]]></title><description><![CDATA[I wrote this blog article for my employer, Onica.
Every year, re:Invent is a Las Vegas all-you-can-eat buffet of new AWS capabilities being announced. Earlier Onica blog posts have covered the 2019 re:Invent keynote announcements from Andy Jassy and ...]]></description><link>https://adam.fanello.net/key-serverless-announcements-at-reinvent-2019</link><guid isPermaLink="true">https://adam.fanello.net/key-serverless-announcements-at-reinvent-2019</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Tue, 10 Dec 2019 08:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650922763267/ft75dOiLW.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I wrote this blog article for my employer, Onica.</em></p>
<p>Every year, re:Invent is a Las Vegas all-you-can-eat buffet of new AWS capabilities being announced. Earlier Onica blog posts have covered the 2019 re:Invent keynote announcements from <a target="_blank" href="https://onica.com/blog/aws-announcements/aws-reinvent-2019-recap-andy-jassy-keynote-event/">Andy Jassy</a> and <a target="_blank" href="https://onica.com/blog/ai-machine-learning/aws-reinvent-2019-recap-dr-werner-vogels-keynote-event/">Dr. Werner Vogels</a> in near real time. There is so much going on though, that many other announcements fly under the radar. This humble serverless and web application developer attended his first re:Invent this year and noticed a few new capabilities that were not mentioned in the keynotes, but caught my attention. One may be the answer to a popular question of our times: what comes after serverless?</p>
<p><em>Read the full post at</em>
<a target="_blank" href="https://onica.com/blog/aws-announcements/key-serverless-announcements-at-reinvent-2019/">https://onica.com/blog/aws-announcements/key-serverless-announcements-at-reinvent-2019/</a></p>
]]></content:encoded></item><item><title><![CDATA[Migrating a Legacy App to Cloud Native — Part 6]]></title><description><![CDATA[This is part 6 in a series documenting my journey. If you haven’t been following it before now, here are the previous posts:

Part 1: Background
Part 2: Requirements & Architecture
Part 3: Authentication
Part 4: Add Cloud Storage
Part 5: Use Cloud St...]]></description><link>https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-6-280cd65a0937</link><guid isPermaLink="true">https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-6-280cd65a0937</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Wed, 27 Nov 2019 18:17:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409287369/mvwevTpfl.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is part 6 in a series documenting my journey. If you haven’t been following it before now, here are the previous posts:</p>
<ul>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-1-68a1adbb95d5">Part 1: Background</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">Part 2: Requirements &amp; Architecture</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">Part 3: Authentication</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953">Part 4: Add Cloud Storage</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-5-34696c6f0f43">Part 5: Use Cloud Storage</a></li>
</ul>
<h3 id="heading-review-our-goal">Review Our Goal</h3>
<p>User settings and collections can be saved to and retrieved from the cloud. Now I need to finish off publishing “public” collections and add in the ability for other users to find these. Let’s refer back to the architecture diagram:</p>
<p>In this series installment, I will implement the row down the middle: <a target="_blank" href="https://aws.amazon.com/appsync/">AppSync</a>, <a target="_blank" href="https://aws.amazon.com/dynamodb/">DynamoDB</a>, and the <em>Validate &amp; Index</em> <a target="_blank" href="https://aws.amazon.com/lambda/">lambda</a>.</p>
<p>The <a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">last time I showed this diagram</a>, the two lambdas were outside of the “Managed by AWS Amplify” box. I have since found how to include those within Amplify…</p>
<h3 id="heading-amplify-api-is-appsync">Amplify API is AppSync</h3>
<p>Amplify has two options for implementing <a target="_blank" href="https://aws-amplify.github.io/docs/js/api">APIs</a>. One is a traditional API Gateway to Lambda approach, that Amplify refers to as REST. (Nothing about it actually enforces <a target="_blank" href="https://restfulapi.net/">RESTfulness</a>, it simply <em>can</em> be RESTful.) Beyond the Amplify documentation, I have seen no chatter about this option. When someone talks about Amplify API, they’re talking <a target="_blank" href="https://graphql.org/">GraphQL</a>. On the AWS cloud end, this is done using the <a target="_blank" href="https://aws.amazon.com/appsync/">AppSync</a> service.<br />I’ll add an API to <a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-1-68a1adbb95d5">SqAC</a> via the <code>amplify add api</code> command:</p>
<blockquote>
<p>$ <strong>amplify add api</strong><br />? Please select from one of the below mentioned services GraphQL<br />? Provide API name: <strong>sqacamplify</strong><br />? Choose an authorization type for the API <strong>Amazon Cognito User Pool</strong><br />Use a Cognito user pool configured as a part of this project<br />? Do you have an annotated GraphQL schema? No<br />? Do you want a guided schema creation? Yes<br />? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)<br />? Do you want to edit the schema now? Yes<br />Please edit the file in your editor: /Users/adamfanello/dev/sqac/sqac-amplify/amplify/backend/api/sqacamplify/schema.graphql<br />? Press enter to continue </p>
<p>GraphQL schema compiled successfully.</p>
</blockquote>
<p>For authorization I chose <em>Amazon Cognito User Pool</em>, which is what you want for user-facing clients. I’m actually building a public read-only API and thus won’t enforce any authentication, but there’s no “none” option.</p>
<p>My GraphQL API schema is extremely simple. Really, I could easily do this by other means with a single DynamoDB table and the REST API option — or even with the client accessing DynamoDB directly! The point of this journey is to try new things though, thus GraphQL and AppSync with Amplify. Here’s the GraphQL schema:</p>
<pre><code class="lang-graphql">type Collection  
 # Store in DynamoDB, disable mutations and subscriptions  
 @model( mutations: null, subscriptions: null)  
{  
 id: ID!  
 created: String!  
 modified: String!  
 revision: Int!

 name: String!  
 author: String!  
 authorUserId: String!  
 description: String!  
 difficulty: Int!  
 level: String!  
 formations: Int  
 families: Int  
 calls: Int  
 modules: Int!  
 license: String!  
}
</code></pre>
<p>This is a public read-only API, thus in the model I’m disabling all mutations and there’s no <code>@auth</code> <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#auth">directive</a>. This simply lets users of the app search for collections that the individual authors have released to the public. The <code>@model</code> <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#model">directive</a> tells Amplify to expand this type with a <code>GetCollection</code> query and a <code>ListCollections</code> query, and all the GraphQL input and output types needed to drive those. (It would have done more had I not disabled mutations and subscriptions.) Best of all, since Amplify knows I’m using the Angular framework, it also generated Typescript types and an Angular service for me to use. The result: I don’t actually <em>have</em> to touch GraphQL in my client; it’s simply a function call to do what I need. 😎</p>
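<p>As a sketch of what consuming that generated service can look like (the names follow Amplify’s Angular codegen conventions, but treat the details as illustrative rather than the actual SqAC source):</p>

```typescript
import { Component } from '@angular/core';
// APIService is generated by `amplify codegen` (typically into
// src/app/API.service.ts); its exact shape varies by Amplify version.
import { APIService } from './API.service';

@Component({ selector: 'app-search', template: '' })
export class SearchComponent {
  constructor(private api: APIService) {}

  async loadCollections() {
    // The generated method wraps the GraphQL ListCollections query,
    // so no hand-written GraphQL is needed in the client.
    const result = await this.api.ListCollections(undefined, 20);
    return result.items;
  }
}
```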
<p>The DynamoDB table created is by default <code>PAY_PER_REQUEST</code>, which makes it truly serverless. However, this <a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/">on-demand option</a> does not fall under the AWS free tier. So I switched <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/graphql#dynamodbbillingmode">DynamoDBBillingMode</a> to <code>PROVISIONED</code> with the default of 5 RCU and 5 WCU - which fits within free tier and is plenty for my app’s current needs.</p>
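<p>Per the linked documentation, that switch lives in the API category’s <code>parameters.json</code> (the IOPS values shown below are the defaults):</p>

```json
{
  "DynamoDBBillingMode": "PROVISIONED",
  "DynamoDBModelTableReadIOPS": 5,
  "DynamoDBModelTableWriteIOPS": 5
}
```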
<h3 id="heading-s3-trigger-lambda">S3 Trigger Lambda</h3>
<p>Since the API is read-only, you may wonder where the data comes from. That’s the job of the <em>Validate &amp; Index</em> lambda — an S3 trigger. Whenever a collection is uploaded, it’ll trigger this Lambda that will manage public collections, including updating the DynamoDB table that this API is reading from. Adding the trigger:</p>
<blockquote>
<p>$ <strong>amplify update storage</strong><br />? Please select from one of the below mentioned services Content (Images, audio, video, etc.)<br />? Who should have access: Auth and guest users<br />? What kind of access do you want for Authenticated users? (Press &lt;space&gt; to select, &lt;a&gt; to toggle all, &lt;i&gt; to invert selection) create/update, read, delete<br />? What kind of access do you want for Guest users? (Press &lt;space&gt; to select, &lt;a&gt; to toggle all, &lt;i&gt; to invert selection) read<br />? Do you want to add a Lambda Trigger for your S3 Bucket? <strong>Yes</strong><br />? Select from the following options Create a new function<br />Successfully added resource S3Trigger08755fbf locally<br />? Do you want to edit the local S3Trigger08755fbf lambda function now? Yes<br />Please edit the file in your editor: /Users/adamfanello/dev/sqac/sqac-amplify/amplify/backend/function/S3Trigger08755fbf/src/index.js<br />? Press enter to continue<br />Successfully updated resource</p>
</blockquote>
<p>This wiped out all my customization in <code>storage/parameters.json</code> as described in <a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-5-34696c6f0f43">part 5</a>. 😑 Fortunately, I have the good sense to review all the changes from a CLI tool before committing them, and so was able to revert the damage leaving only the “<code>triggerFunction</code>” parameter changed.</p>
<p>I also edited the CloudFormation template for this function to use Node.js 10 and limit to a single concurrent execution. (Amplify is still setting Lambdas to the Node.js 8 runtime, even though its deprecation is only five weeks from when I write this.)</p>
<p>Now I need this trigger to be able to write to the Collection table in DynamoDB. For this, two things are needed:</p>
<ol>
<li>The lambda needs the name of the table.</li>
<li>The lambda needs <em>permission</em> to write to the table.</li>
</ol>
<h3 id="heading-finding-the-table">Finding the table</h3>
<p>This required some experimentation and mild hackery. Amplify doesn’t provide any information other than the environment name (“dev”) to the trigger Lambda. Looking in AWS Console, I see that the table name is a concatenation of my type name “Collection”, the generated AppSync API identifier, and the environment name. “Collection” is a static name and so safe to hard code. The environment is provided to the Lambda via environment variable (overloading the word “environment” in different contexts), and the AppSync ID I found in the <code>amplify-meta.json</code> file. 🤔 I tried doing a Javascript <code>require(‘../../../amplify-meta’)</code> to fetch this content, but the file wasn’t delivered as part of the Lambda source and thus failed at runtime. (I expected as much, but it was worth a try.) With some searching, I found that Amplify has a pre-deployment hook capability, as described in the <a target="_blank" href="https://aws-amplify.github.io/docs/cli-toolchain/usage#build-options">documentation</a>. 🎉 I used this <code>package.json</code> script to copy <code>amplify-meta.json</code> to the function <code>src</code> directory, and added Typescript compilation while at it!</p>
<p>In the Lambda’s <code>package.json</code> at <code>amplify/backend/function/S3Trigger08755fbf/src/</code></p>
<pre><code><span class="hljs-string">"scripts"</span>: {  
  <span class="hljs-string">"build"</span>: <span class="hljs-string">"cp ../../../amplify-meta.json . &amp;&amp; ../../../../../node_modules/.bin/tsc"</span>  
}
</code></pre><p>In the Lambda <code>src</code> directory, I also ran <code>tsc --init</code> to generate a <code>tsconfig.json</code>, making it ready for TypeScript.</p>
<p>Then in my top-level <code>package.json</code>:</p>
<pre><code><span class="hljs-string">"scripts"</span>: {  
  <span class="hljs-string">"amplify:S3Trigger08755fbf"</span>: <span class="hljs-string">"cd amplify/backend/function/S3Trigger08755fbf/src/ &amp;&amp; npm run build"</span>  
}
</code></pre><p>It could have all been put in the top level, but this way I can do trial builds from the lambda <code>src</code> directory. Now whenever doing an <code>amplify push</code>, the latest metadata will be copied and the TypeScript compiled. 🎉<br />(Of course I added the copied <code>amplify-meta.json</code> and compiled <code>.js</code> output to my <code>.gitignore</code> file.)</p>
<p>Put all together, my Lambda source code now includes:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> amplifyMeta = <span class="hljs-built_in">require</span>(<span class="hljs-string">'./amplify-meta'</span>);  
<span class="hljs-keyword">const</span> ddbTableName = <span class="hljs-string">'Collection-'</span>
  + amplifyMeta.api.sqacamplify.output.GraphQLAPIIdOutput   
  + <span class="hljs-string">'-'</span> + process.env.ENV;
</code></pre>
<p>I did attempt another approach as mentioned <a target="_blank" href="https://github.com/aws-amplify/amplify-cli/issues/1002#issuecomment-482717942">here</a>, where @mikeparisstuff states that if I just define an input parameter for a CloudFormation stack (say, the S3 trigger lambda), then Amplify will provide the value. I tried this with <code>AppSyncApiId</code>, but it failed as unresolvable. Perhaps this works only within the API category.</p>
<h3 id="heading-adding-permissions">Adding permissions</h3>
<p>There may be elegant solutions for adding permission for the S3 trigger lambda to access the DynamoDB table, but I couldn’t find any. I simply added the new statement to the policy given to the lambda’s execution role, right in the storage category’s CloudFormation. I even gave it access to all resources, which I feel is fine because I’m following the practice of one environment to one AWS account. As such the wildcard resource only gives access to the one DynamoDB table in the account. Specifically, I modified the <code>S3TriggerBucketPolicy</code> in <code>amplify/backend/storage/storage/s3-cloudformation-template.json</code> to add this to the Statement array:</p>
<pre><code>{  
   <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,  
   <span class="hljs-attr">"Action"</span>: [  
       <span class="hljs-string">"dynamodb:PutItem"</span>,  
       <span class="hljs-string">"dynamodb:DeleteItem"</span>,  
       <span class="hljs-string">"dynamodb:GetItem"</span>,  
       <span class="hljs-string">"dynamodb:Query"</span>,  
       <span class="hljs-string">"dynamodb:UpdateItem"</span>  
   ],  
   <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>  
}
</code></pre><h3 id="heading-the-s3-trigger-logic">The S3 Trigger Logic</h3>
<p>Amplify created a bare bones example Lambda, and I have now given it permission to access the DynamoDB table created by the Amplify API category. Now I have to write up the exact logic. 🙌 This is the easy part for me. Infrastructure is meh 🤷‍♂. Fighting with tools is argh! 😱 Getting back to just writing some code with familiar tools, language, and services is like comfort food; I can just relax and enjoy. 🤗</p>
<p>After some iterative experimentation and coding, the Lambda was written. You can find it in <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/7/files#diff-4da5aa6aa8c351a993fe612bf34bfd56">PR #7</a>. (Notice the actual handler is just eight lines of loop and error handling — everything else is in small helper functions. Access to S3 and DynamoDB are abstracted into the <code>storage.ts</code> and <code>database.ts</code> files.)</p>
<p>The resulting logical flow is described below. Note that there is only one S3 bucket and one Lambda (with multiple executions). They are shown repeatedly here to break it down by steps.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409286012/ZzSiX7lZ4.png" alt /></p>
<p>The “Triggered but no action” lambda executions are unfortunate side effects because everything is in one bucket and without <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-filtering">pre-execution filtering</a>. (Hand-rolled S3 allows both of these approaches, but not Amplify.) Since the Lambda will always be warm at those two points (I set concurrency to one), it’s just a quick sub-100 millisecond execution to detect state and exit.</p>
<h3 id="heading-implement-search-in-client">Implement Search in Client</h3>
<p>To start using the Amplify API category, the first step is to add API to the list of Amplify modules registered in <code>app.module.ts</code>, as described <a target="_blank" href="https://aws-amplify.github.io/docs/js/angular#option-2-configuring-the-amplify-provider-with-specified-amplify-js-modules">here</a>. The exact code to add for this is not specified; the linked documentation just has examples for three other categories. The key is <code>import API from "@aws-amplify/api";</code> and then add <code>API</code> to the object passed to <code>AmplifyModules</code>.</p>
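<p>Put together, the registration in <code>app.module.ts</code> looks roughly like this — mirroring the linked documentation’s pattern for the other categories, so verify against your Amplify version:</p>

```typescript
import { NgModule } from '@angular/core';
import { AmplifyAngularModule, AmplifyService, AmplifyModules } from 'aws-amplify-angular';
import API from '@aws-amplify/api';
import Auth from '@aws-amplify/auth';
import Storage from '@aws-amplify/storage';

@NgModule({
  imports: [AmplifyAngularModule],
  providers: [
    {
      // Provide AmplifyService built from only the modules the app uses.
      provide: AmplifyService,
      useFactory: () => AmplifyModules({ Auth, Storage, API }),
    },
  ],
})
export class AppModule {}
```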
<p>My search capability allows the user to enter a bit of text, and then searches for that text in the collection’s name, description, and author’s name. The challenge was figuring out just how to do that with the generated <code>ListCollections</code> function. The <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/7/files#diff-054ad11f5781091a8c630b686ff95013">input type</a> allows criteria to be given for each property, but does not specify whether multiple criteria combine as an <code>and</code> operation or an <code>or</code> operation. Experimentation found that it’s an <code>and</code>, but what I really wanted was a mix of <code>or</code>s (text in any of three properties) and <code>and</code>s for the other search criteria (the difficulty and level properties). I finally noticed that the generated <code>ModelCollectionFilterInput</code> has three extra properties at the end that are not part of my data model: <code>and</code>, <code>or</code>, and <code>not</code>. That could work. By the time I noticed these, I had also realized through my experimentation that the text searches are case-<em>sensitive</em>. This is how DynamoDB behaves, and thus so does AppSync talking to DynamoDB. In the end I added a <code>searchText</code> property to the GraphQL schema, plus a bit to the Lambda to set this value when writing to DynamoDB. With this new property containing the three text fields concatenated and lower-cased, the input to <code>ListCollections</code> <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/7/files#diff-158659f833a70ba45f7a6e953037c8e6R55-R85">in the client</a> can now simply set the relevant criteria using the lower-cased search text and the default <code>and</code> behavior.</p>
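<p>The <code>searchText</code> idea is simple enough to sketch as two pure functions — illustrative code, not the exact Lambda or client source: the Lambda stores a lower-cased concatenation, and the client lower-cases the query before filtering.</p>

```typescript
// Build the denormalized search field the Lambda writes to DynamoDB:
// name, description, and author concatenated and lower-cased.
function buildSearchText(name: string, description: string, author: string): string {
  return [name, description, author].join(' ').toLowerCase();
}

// Build a filter input for the generated list query. `contains`, `le`,
// and `eq` are standard AppSync model-filter string/number operators;
// sibling criteria combine with the default AND behavior.
function buildFilter(query: string, difficulty?: number, level?: string) {
  const filter: Record<string, unknown> = {
    searchText: { contains: query.toLowerCase() },
  };
  if (difficulty != null) {
    filter.difficulty = { le: difficulty };
  }
  if (level) {
    filter.level = { eq: level };
  }
  return filter;
}
```

<p>Because the search text is stored pre-lowered, a case-insensitive match falls out of DynamoDB’s case-sensitive <code>contains</code> for free.</p>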
<p>I explain all that dry detail because <em>you won’t find it explained anywhere else</em>. Even digging into the resolver for <code>ListCollections</code> led to a dead end - a Velocity Template utility function that translates the input to a DynamoDB filter expression, but without the details of how. (Find <code>toDynamoDBFilterExpression</code> on <a target="_blank" href="https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html#dynamodb-helpers-in-util-dynamodb">this page</a> if you want to see it.) Are you tired of me complaining about poor documentation? Me too. Moving on.</p>
<p>Here’s the good news: after this, the Amplify → Angular → GraphQL → AppSync → DynamoDB integration just worked! Once some details are cleared up, it’s really easy. 🎉 I linked to all the pertinent little bits of code in the paragraphs above.</p>
<h3 id="heading-coming-next-time">Coming next time…</h3>
<p>Upon completing this section, I was excited to realize that this migrated version of SqAC has reached feature parity with the existing production version! 🎉 🥳</p>
<p>There is one more feature on my to-do list: guest accounts. Additionally, there are finishing touches such as the S3 policy, revision purging by age, CloudWatch monitoring, and email notifications.</p>
<p>Until next time. 🙇‍♂️</p>
<p><em>(Story originally published</em> <a target="_blank" href="http://fanello.net/home/2019/11/27/migrating-a-legacy-app-to-cloud-native-part-6/"><em>here</em></a> <em>in November 2019.)</em></p>
]]></content:encoded></item><item><title><![CDATA[Migrating a Legacy App to Cloud Native — Part 4]]></title><description><![CDATA[Photo by Steve Johnson on Unsplash
In part 4 of this series, I add Amplify Storage and explore its security model and how to customize it…

Part 1: Background
Part 2: Requirements & Architecture
Part 3: Authentication

Amplify has a storage module wh...]]></description><link>https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953</link><guid isPermaLink="true">https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Wed, 18 Sep 2019 22:12:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409297446/0wzsbCIHc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Photo by <a target="_blank" href="https://unsplash.com/@steve_j?utm_source=medium&amp;utm_medium=referral">Steve Johnson</a> on <a target="_blank" href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></p>
<p>In part 4 of this series, I add Amplify Storage and explore its security model and how to customize it…</p>
<ul>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-1-68a1adbb95d5">Part 1: Background</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">Part 2: Requirements &amp; Architecture</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">Part 3: Authentication</a></li>
</ul>
<p>Amplify has a storage module which may be backed in AWS by either S3 or DynamoDB. Back in part 2 when exploring application requirements, I noted that S3 would be used for storage of user settings and data collections. In short, DynamoDB could not be used because:</p>
<ol>
<li>Records are limited to 400 KB and I don’t want to limit a collection to that size.</li>
<li>Amplify’s storage API may use S3 <em>or</em> DynamoDB; we can’t use both for different data. Thus user settings, while small, will also go into S3.</li>
</ol>
<p><em>Note: I later discovered that while the CLI provides an option for DynamoDB storage, none of the Amplify libraries, in any language, support it.</em></p>
<p>We will use DynamoDB for search (see requirements), but that will come later. Right now, we just want to save and read back user settings and collections.</p>
<p>If it hasn’t been clear yet in this series, I’m writing this as I attempt to <em>use</em> Amplify. This is a series about a <em>journey</em>, not a straight-up how-to guide.</p>
<h3 id="heading-add-storage">Add Storage</h3>
<p>Where did we leave off?</p>
<blockquote>
<p>$ amplify status  </p>
<p>Current Environment: dev  </p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Category</td><td>Resource name</td><td>Operation</td><td>Provider plugin</td></tr>
</thead>
<tbody>
<tr>
<td>Hosting</td><td>S3AndCloudFront</td><td>No Change</td><td>awscloudformation</td></tr>
<tr>
<td>Auth</td><td>sqacauth</td><td>No Change</td><td>awscloudformation</td></tr>
</tbody>
</table>
</div><p>Hosting endpoint: http://sqac-amplify-20190817123020-hostingbucket-dev.s3-website-us-west-2.amazonaws.com<br />Hosted UI Endpoint: https://sqac-dev.auth.us-west-2.amazoncognito.com/</p>
</blockquote>
<p>I have only hosting and auth. So let’s add storage:</p>
<blockquote>
<p>$ amplify add storage<br />? Please select from one of the below mentioned services Content (Images, audio, video, etc.)<br />? Please provide a friendly name for your resource that will be used to label this category in the project: storage<br />? Please provide bucket name: sqac-amplify-user-data<br />? Who should have access: Auth and guest users<br />? What kind of access do you want for Authenticated users? create/update, read, delete<br />? What kind of access do you want for Guest users? read<br />? Do you want to add a Lambda Trigger for your S3 Bucket? No<br />Successfully added resource storage locally  </p>
<p>Some next steps:<br />"amplify push" builds all of your local backend resources and provisions them in the cloud<br />"amplify publish" builds all of your local backend and front-end resources (if you added hosting category) and provisions them in the cloud</p>
</blockquote>
<p>I chose content as “Images, audio, video, etc” rather than the NoSQL option; this is how you get S3 instead of DynamoDB. I then gave things friendly names, rather than the randomly generated defaults.</p>
<p>When exploring the requirements, I stated that I wanted first-time visitors to be able to play with my application without first creating an account, thus I selected to provide guest read-only access. Authenticated users have full read-write.</p>
<p>While we will eventually add a Lambda Trigger to the S3 bucket, this is a big task for later and so I answered No for now. Amplify allows us to use the <code>amplify storage update</code> command to change things later.</p>
<p>Moving on, let’s deploy these changes…</p>
<blockquote>
<p>$ amplify push  </p>
<p>Current Environment: dev  </p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Category</td><td>Resource name</td><td>Operation</td><td>Provider plugin</td></tr>
</thead>
<tbody>
<tr>
<td>Storage</td><td>storage</td><td>Create</td><td>awscloudformation</td></tr>
<tr>
<td>Hosting</td><td>S3AndCloudFront</td><td>No Change</td><td>awscloudformation</td></tr>
<tr>
<td>Auth</td><td>sqacauth</td><td>No Change</td><td>awscloudformation</td></tr>
</tbody>
</table>
</div><p>? Are you sure you want to continue? Yes<br />⠴ Updating resources in the cloud. This may take a few minutes...  </p>
<p>… A few dozen lines of CloudFormation output over a few minutes …   </p>
<p>✔ All resources are updated in the cloud</p>
</blockquote>
<p>Nothing to it. In the AWS Console (web site), I see a new <code>sqac-amplify-user-data-dev</code> empty bucket. (The <code>-dev</code> is my environment name; it gets appended to everything to support multiple environments. 🥳)</p>
<p>So we’re done, right? Well… that depends.</p>
<h3 id="heading-security-policies-and-parameters">Security Policies and Parameters</h3>
<p><em>Warning: I’m going to dive deep into AWS policies and CloudFormation here. You can follow along with the files in the</em> <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/3"><em>part 4 pull request</em></a><em>, or just let your eyes glaze over.</em> 😳</p>
<p>I poked my nose into the new CloudFormation stack that defines the storage, including policies, and noticed it isn’t exactly what I want. I want users to keep their private data at the private access level, and shared data in the protected access level, and nothing else. The policies, though, grant users write access to the public area too, which is not very useful: anyone could put anything there, and anyone else could modify or delete it. I really don’t want that. Uh oh? But then I realized I was looking at the stack’s “Parameters” section and there is a <code>parameters.json</code> file in the amplify folder. 💡 Here is <code>parameters.json</code>:</p>
<pre><code>{  
   <span class="hljs-attr">"bucketName"</span>: <span class="hljs-string">"sqac-amplify-user-data"</span>,  
   <span class="hljs-attr">"authPolicyName"</span>: <span class="hljs-string">"s3_amplify_7405df3b"</span>,  
   <span class="hljs-attr">"unauthPolicyName"</span>: <span class="hljs-string">"s3_amplify_7405df3b"</span>,  
   <span class="hljs-attr">"authRoleName"</span>: {  
       <span class="hljs-attr">"Ref"</span>: <span class="hljs-string">"AuthRoleName"</span>  
   },  
   <span class="hljs-attr">"unauthRoleName"</span>: {  
       <span class="hljs-attr">"Ref"</span>: <span class="hljs-string">"UnauthRoleName"</span>  
   },  
   <span class="hljs-attr">"selectedGuestPermissions"</span>: [  
       <span class="hljs-string">"s3:GetObject"</span>,  
       <span class="hljs-string">"s3:ListBucket"</span>  
   ],  
   <span class="hljs-attr">"selectedAuthenticatedPermissions"</span>: [  
       <span class="hljs-string">"s3:PutObject"</span>,  
       <span class="hljs-string">"s3:GetObject"</span>,  
       <span class="hljs-string">"s3:ListBucket"</span>,  
       <span class="hljs-string">"s3:DeleteObject"</span>  
   ],  
   <span class="hljs-attr">"s3PermissionsAuthenticatedPublic"</span>: <span class="hljs-string">"s3:PutObject,s3:GetObject,s3:DeleteObject"</span>,  
   <span class="hljs-attr">"s3PublicPolicy"</span>: <span class="hljs-string">"Public_policy_9efc80af"</span>,  
   <span class="hljs-attr">"s3PermissionsAuthenticatedUploads"</span>: <span class="hljs-string">"s3:PutObject"</span>,  
   <span class="hljs-attr">"s3UploadsPolicy"</span>: <span class="hljs-string">"Uploads_policy_9efc80af"</span>,  
   <span class="hljs-attr">"s3PermissionsAuthenticatedProtected"</span>: <span class="hljs-string">"s3:PutObject,s3:GetObject,s3:DeleteObject"</span>,  
   <span class="hljs-attr">"s3ProtectedPolicy"</span>: <span class="hljs-string">"Protected_policy_7b753c06"</span>,  
   <span class="hljs-attr">"s3PermissionsAuthenticatedPrivate"</span>: <span class="hljs-string">"s3:PutObject,s3:GetObject,s3:DeleteObject"</span>,  
   <span class="hljs-attr">"s3PrivatePolicy"</span>: <span class="hljs-string">"Private_policy_7b753c06"</span>,  
   <span class="hljs-attr">"AuthenticatedAllowList"</span>: <span class="hljs-string">"ALLOW"</span>,  
   <span class="hljs-attr">"s3ReadPolicy"</span>: <span class="hljs-string">"read_policy_9efc80af"</span>,  
   <span class="hljs-attr">"s3PermissionsGuestPublic"</span>: <span class="hljs-string">"s3:GetObject"</span>,  
   <span class="hljs-attr">"s3PermissionsGuestUploads"</span>: <span class="hljs-string">"DISALLOW"</span>,  
   <span class="hljs-attr">"GuestAllowList"</span>: <span class="hljs-string">"ALLOW"</span>,  
   <span class="hljs-attr">"triggerFunction"</span>: <span class="hljs-string">"NONE"</span>  
}
</code></pre><p>Cool. Let’s see what I want here…</p>
<pre><code><span class="hljs-string">"s3PermissionsAuthenticatedPublic"</span>: <span class="hljs-string">"s3:GetObject"</span>,
</code></pre><p>I <em>think</em> that’ll do it. All I did was remove permission to write to the public area. Anyone, including guests, can still read from it. Only data that I manually put in there can exist, so I could use this for the guest “demo mode” content. (I won’t though; read on.) Nothing in here makes it entirely clear how the <em>protected</em> functionality works, but these are just configuration options, not the actual policies. For that, I dig around in the <code>s3-cloudformation-template.json</code> file that Amplify created. The JSON format is a bit verbose though, so I pop open the CloudFormation Designer (GUI) tool in the AWS Console and use it to view selected policies in the more concise YAML rendering.</p>
<p>First interesting bit I found:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">S3GuestReadPolicy:</span>  
    <span class="hljs-attr">DependsOn:</span>  
      <span class="hljs-bullet">-</span> <span class="hljs-string">S3Bucket</span>  
    <span class="hljs-attr">Condition:</span> <span class="hljs-string">GuestReadAndList</span>  
    <span class="hljs-attr">Type:</span> <span class="hljs-string">'AWS::IAM::Policy'</span>  
    <span class="hljs-attr">Properties:</span>  
      <span class="hljs-attr">PolicyName:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">s3ReadPolicy</span>  
      <span class="hljs-attr">Roles:</span>  
        <span class="hljs-bullet">-</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">unauthRoleName</span>  
      <span class="hljs-attr">PolicyDocument:</span>  
        <span class="hljs-attr">Version:</span> <span class="hljs-number">2012-10-17</span>  
        <span class="hljs-attr">Statement:</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>  
            <span class="hljs-attr">Action:</span>  
              <span class="hljs-bullet">-</span> <span class="hljs-string">'s3:GetObject'</span>  
            <span class="hljs-attr">Resource:</span>  
              <span class="hljs-bullet">-</span> <span class="hljs-type">!Join</span>   
                <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>  
                <span class="hljs-bullet">-</span> <span class="hljs-bullet">-</span> <span class="hljs-string">'arn:aws:s3:::'</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">S3Bucket</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-string">/protected/*</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>  
            <span class="hljs-attr">Action:</span>  
              <span class="hljs-bullet">-</span> <span class="hljs-string">'s3:ListBucket'</span>  
            <span class="hljs-attr">Resource:</span>  
              <span class="hljs-bullet">-</span> <span class="hljs-type">!Join</span>   
                <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>  
                <span class="hljs-bullet">-</span> <span class="hljs-bullet">-</span> <span class="hljs-string">'arn:aws:s3:::'</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">S3Bucket</span>  
            <span class="hljs-attr">Condition:</span>  
              <span class="hljs-attr">StringLike:</span>  
                <span class="hljs-attr">'s3:prefix':</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-string">public/</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-string">public/*</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-string">protected/</span>  
                  <span class="hljs-bullet">-</span> <span class="hljs-string">protected/*</span>
</code></pre>
<p>If you haven’t noticed yet, Amplify manages access levels by prefixing the S3 key with <code>public</code>, <code>protected</code>, or <code>private</code>; for the latter two, the prefix is followed by the authenticated user’s ID. (You can think of these as file system folders, even though technically they are not.) Thus a hypothetical user with Cognito ID 123 will store private data in <code>private/123/</code> and protected data in <code>protected/123/</code>. Notice here that there is a <code>public/*</code> prefix matching condition. Does that mean that users can only write to their own public area? If so, what is the difference between public and protected? I must dig some more!</p>
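<p>As an illustration of that prefix scheme (this is not an Amplify API, just a sketch of how the access levels map to S3 object keys, per the description above):</p>

```typescript
type AccessLevel = 'public' | 'protected' | 'private';

// Map an Amplify-style access level plus object key to the full S3 key.
// public objects are not namespaced by user; protected and private are
// namespaced under the user's Cognito identity ID.
function toS3Key(level: AccessLevel, identityId: string, key: string): string {
  return level === 'public' ? `public/${key}` : `${level}/${identityId}/${key}`;
}
```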
<p>This policy does answer a question for me though: a guest user <em>can</em> read protected content, not just public. I may have no need for public at all then.</p>
<p>The <code>S3AuthReadPolicy</code> (for authenticated users, not guests) is similar, but has two more prefixes to allow reads of the user’s private data. Thus authenticated users can read anything except other users’ private sections:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Condition:</span>  
      <span class="hljs-attr">StringLike:</span>  
        <span class="hljs-attr">'s3:prefix':</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">public/</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">public/*</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">protected/</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">protected/*</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">'private/${cognito-identity.amazonaws.com:sub}/'</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">'private/${cognito-identity.amazonaws.com:sub}/*'</span>
</code></pre>
<p>There’s an <code>S3GuestUploadPolicy</code> which would allow guests to upload to an <code>uploads/</code> prefix, but it is nullified by the default parameter of <code>s3PermissionsGuestUploads</code> being set to <code>DISALLOW</code>. 👍This leads to the <code>S3AuthUploadPolicy</code> for authenticated users, which allows users to dump files into the <code>uploads/</code> prefix with no way of reading them back. I figure this must be a feature to allow for uploads that are then processed by a triggered Lambda. I have no use for such uploads, and so in <code>parameters.json</code> I set <code>s3PermissionsAuthenticatedUploads</code> to <code>DISALLOW</code>, just like the parameter for guests.</p>
<p>Moving on, I see an <code>S3AuthPublicPolicy</code> which ties to the <code>s3PermissionsAuthenticatedPublic</code> parameter that I already changed to <code>s3:GetObject</code>; thus authenticated users can only read, not write to, the public area. I see that this applies to anywhere in the public prefix, with no restriction around the user’s ID:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Resource:</span>  
      <span class="hljs-bullet">-</span> <span class="hljs-type">!Join</span>   
        <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>  
        <span class="hljs-bullet">-</span> <span class="hljs-bullet">-</span> <span class="hljs-string">'arn:aws:s3:::'</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">S3Bucket</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-string">/public/*</span>
</code></pre>
<p>Contrast that to <code>S3AuthProtectedPolicy</code>, which <em>does</em> restrict write activity to the user’s ID:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Resource:</span>  
       <span class="hljs-bullet">-</span> <span class="hljs-type">!Join</span>   
          <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-bullet">-</span> <span class="hljs-string">'arn:aws:s3:::'</span>  
            <span class="hljs-bullet">-</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">S3Bucket</span>  
            <span class="hljs-bullet">-</span> <span class="hljs-string">'/protected/${cognito-identity.amazonaws.com:sub}/*'</span>
</code></pre>
<p>The <code>S3AuthPrivatePolicy</code> looks nearly the same, protecting the private folder:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Resource:</span>  
       <span class="hljs-bullet">-</span> <span class="hljs-type">!Join</span>   
          <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>  
          <span class="hljs-bullet">-</span> <span class="hljs-bullet">-</span> <span class="hljs-string">'arn:aws:s3:::'</span>  
            <span class="hljs-bullet">-</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">S3Bucket</span>  
            <span class="hljs-bullet">-</span> <span class="hljs-string">'/private/${cognito-identity.amazonaws.com:sub}/*'</span>
</code></pre>
<p>So what’s the difference? Scroll back up to <code>S3GuestReadPolicy</code> and <code>S3AuthReadPolicy</code>. While <code>S3AuthProtectedPolicy</code> and <code>S3AuthPrivatePolicy</code> cover what a user can <em>write</em> to, the earlier policies allow anybody to <em>read</em> the <code>protected/*</code> prefix.</p>
<p>I have long been puzzled as to why Amplify’s documentation didn’t clearly define the access levels: public vs protected vs private. I couldn’t find any explanation in the past, but see that there are now some details <a target="_blank" href="https://aws-amplify.github.io/docs/js/storage#using-amazon-s3">here</a>. However, I still find it a bit ambiguous. Now I see some justification for that: it’s up to you! Editing the <code>parameters.json</code> file lets you dictate the behavior. That, too, is not documented. 😔</p>
<p>Based on this quick study, I’ve put together a summary of what I <em>think</em> are the rules. I may be mistaken on some of it though; no guarantees. Assuming you select guest access and read-write for authenticated users, then this is the default behavior and the parameter to change if you wish:</p>
<ul>
<li><code>uploads/</code>: Authenticated users may upload to this. (<code>s3PermissionsAuthenticatedUploads</code>)<br />Guests may not. (<code>s3PermissionsGuestUploads</code>)</li>
<li><code>public/*</code>: Authenticated users may read and write. (<code>s3PermissionsAuthenticatedPublic</code>)<br />Guests may read. (<code>s3PermissionsGuestPublic</code>)</li>
<li><code>protected/</code>: Anybody may read. (not configurable)</li>
<li><code>protected/{user-id}/*</code>: Anybody may read. (not configurable)<br />The matching authenticated user may write. (<code>s3PermissionsAuthenticatedProtected</code>)</li>
<li><code>private/</code>: No access.</li>
<li><code>private/{user-id}/*</code>: The matching authenticated user may read and write. (<code>s3PermissionsAuthenticatedPrivate</code>)</li>
<li>Authenticated users may list contents of any prefix they can read (<code>AuthenticatedAllowList</code>)</li>
<li>Guests may list contents of any prefix they can read (<code>GuestAllowList</code>)</li>
<li>If in the CLI you selected no guest access, then unauthenticated users have no access.</li>
</ul>
<p>Possible values of these properties are a comma-separated list of any of <code>s3:ListBucket</code>, <code>s3:GetObject</code>, <code>s3:PutObject</code>, <code>s3:DeleteObject</code> or simply <code>DISALLOW</code>.</p>
<p>The parameters file also includes <code>selectedGuestPermissions</code> and <code>selectedAuthenticatedPermissions</code>, yet I don’t see them used anywhere in the CloudFormation template. 🤔🤦‍♂️</p>
<p>For SqAC, I modified the parameters to disable the upload and public features entirely, while keeping the default behavior for protected and private.</p>
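<p>Concretely, that amounts to a <code>parameters.json</code> fragment along these lines (a sketch showing only the keys I changed, using the <code>DISALLOW</code> value described above; the rest of the file stays as generated):</p>

```json
{
  "s3PermissionsAuthenticatedPublic": "DISALLOW",
  "s3PermissionsGuestPublic": "DISALLOW",
  "s3PermissionsAuthenticatedUploads": "DISALLOW",
  "s3PermissionsGuestUploads": "DISALLOW"
}
```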
<h3 id="heading-cors">CORS?</h3>
<p>The <em>Storage</em> / <em>Using Amazon S3</em> documentation for Amplify includes <a target="_blank" href="https://aws-amplify.github.io/docs/js/storage#amazon-s3-bucket-cors-policy-setup">this bit</a> telling you to manually configure a CORS policy on your S3 bucket. 😲 This would break Infrastructure as Code (IaC) and easy use of multiple environments! Fortunately, the documentation is a <a target="_blank" href="https://en.m.wikipedia.org/wiki/Red_herring">red herring</a> — Amplify has already set the bucket policy as documented. (I have submitted a <a target="_blank" href="https://github.com/aws-amplify/docs/issues/918">bug report</a> to remove this.)</p>
<h3 id="heading-to-be-continued">To be continued…</h3>
<p>Find all of this in the <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/3">part 4 pull request</a>.</p>
<p>I had intended to include updating the client app to use storage as part of this post, but the security policy analysis turned this into a big post already, and the next one is shaping up to be a good bit of work as well.</p>
<p>Coming next time… <em>using</em> Amplify Storage!</p>
]]></content:encoded></item><item><title><![CDATA[Migrating a Legacy App to Cloud Native — Part 5]]></title><description><![CDATA[A storm is brewing — photo by Adam Fanello
This is part 5 in a series. If you haven’t been following it before now, here are the previous posts:

Part 1: Background
Part 2: Requirements & Architecture
Part 3: Authentication
Part 4: Add Cloud Storage
...]]></description><link>https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-5-34696c6f0f43</link><guid isPermaLink="true">https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-5-34696c6f0f43</guid><dc:creator><![CDATA[Adam Fanello]]></dc:creator><pubDate>Wed, 18 Sep 2019 17:00:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409292661/SFwXxesyv.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A storm in brewing — photo by Adam Fanello</p>
<p>This is part 5 in a series. If you haven’t been following it before now, here are the previous posts:</p>
<ul>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-1-68a1adbb95d5">Part 1: Background</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">Part 2: Requirements &amp; Architecture</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-3-4bb187fea485">Part 3: Authentication</a></li>
<li><a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953">Part 4: Add Cloud Storage</a></li>
</ul>
<p>Now that I have added cloud storage via Amplify CLI, and configured it, it’s time to use the new storage in my app.</p>
<h3 id="heading-preface">Preface</h3>
<p>Generally this series is written as I use the tool and presented in the order that I created the content. There’s some editing from draft to publishing, but this series is about the journey and so I present it as such.</p>
<p>This time though, I’m coming back and adding this note before you get into the content of the post. This was absolutely the most challenging part of the migration so far, and I suspect it will remain so throughout the series. Roughly half of my time was spent reverse engineering Amplify and the Javascript SDK due to poor documentation and my frustration shows below. I want to state here that I <em>really do</em> like AWS, and I came into this <em>really</em> looking to succeed with Amplify. The engineers at AWS have created amazing tools and services that have transformed what it is to be a software developer over the past decade, making it possible to build products at scales and paces never before possible. So while I am frustrated with imperfection, I also acknowledge that without the amazing work at Amazon and AWS I’d probably still be back in part 1 building an authentication system. 😬</p>
<p>Be sure to read the conclusion too before getting scared off. Here we go…</p>
<h3 id="heading-add-storage-support-to-amplify-app">Add Storage Support to Amplify App</h3>
<p>The first step is to add the Amplify storage module to my Angular app as shown <a target="_blank" href="https://aws-amplify.github.io/docs/js/angular#option-2-configuring-the-amplify-provider-with-specified-amplify-js-modules">here</a>. In anticipation of this need though, I already did that step when adding authentication. Moving on…</p>
<h3 id="heading-basic-use-not-so-easy">Basic Use — Not so Easy</h3>
<p>The web app had been using <a target="_blank" href="https://feathersjs.com/">Feathers</a>’s client library to communicate with the Feathers backend. That must now be replaced with Amplify Storage. Fortunately, I designed the application pretty well by isolating all of the actual communication into a <code>PersistenceService</code> class. 🎉</p>
<p>The first thing to do is write something to storage, which leads to <code>Storage.put(key, content, config)</code>. What are the <code>config</code> options? No single place tells you all of them. The Typescript declarations give nothing; only <a target="_blank" href="https://aws-amplify.github.io/docs/js/storage#put">this section</a> of the documentation tells you, through a series of examples, what may or may not be a complete list. There’s an API reference <a target="_blank" href="https://aws-amplify.github.io/amplify-js/api/classes/storageclass.html">here</a>, obviously generated from some Typescript and simply listing the config options as type <code>any</code>. 😕 The API for the <code>get</code> function is worse in a way, as it indicates that it returns “either a presigned url or the object”. Which one you get is a mystery, perhaps controlled via the equally mysterious <code>config?: any</code> second parameter. Regardless, the Amplify Storage <a target="_blank" href="https://aws-amplify.github.io/docs/js/storage#get">documentation</a> states that get “retrieves a publicly accessible URL”, so this would seem the most likely, if less useful, result. Perhaps so, but my earlier suspicion was confirmed when I tracked down the <a target="_blank" href="https://github.com/aws-amplify/amplify-js/blob/master/packages/storage/src/Storage.ts">source code</a> and found that the API documentation is <em>not</em> generated from the entirety of the code. The JsDoc proved more useful:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">/**  
 * Get a presigned URL of the file or the object data when download:true  
 *  
 * @param {String} key - key of the object  
 * @param {Object} [config] - { level : private|protected|public, download: true|false }  
 * @return - A promise resolves to either a presigned url or the object  
 */</span>  
<span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> get(key: <span class="hljs-built_in">string</span>, config?): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">String</span> | <span class="hljs-built_in">Object</span>&gt; {
</code></pre>
<p>Thank goodness for open source! The actual logic is in <a target="_blank" href="https://github.com/aws-amplify/amplify-js/blob/master/packages/storage/src/Providers/AWSS3Provider.ts">AWSS3Provider.ts</a>, the contents of which answered my next question regarding error handling. I see the <code>Promise</code> returns the raw response from the AWS-SDK’s <code>s3.getObject</code>. So tracking <a target="_blank" href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property">that API</a> down reveals only that the reject data type is <code>Error</code>, as generic as it gets. 😕 Also, only here do I learn that upon choosing to download, the resulting Object will have a property named <code>Body</code> of “Typed Array”… whatever that is. Through experimentation (i.e. <code>console.log</code>), I found that the body is <code>Uint8Array</code>, and a <code>toString('UTF-8')</code> on that gives back the string that I stored. (I'm generally of the opinion that any API with anything that amounts to a getter and setter ought to return a value in the same form it was set.)</p>
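<p>For example, a small helper along those lines, assuming the downloaded <code>Body</code> is a <code>Uint8Array</code> of UTF-8 bytes as found above. (I use the standard <code>TextDecoder</code> here rather than the <code>toString('UTF-8')</code> call, which relies on Node’s <code>Buffer</code>; both decode the same bytes.)</p>

```typescript
// Decode the Body returned by Storage.get(key, { download: true })
// from UTF-8 bytes back into the string that was originally stored.
function bodyToString(body: Uint8Array): string {
  return new TextDecoder('utf-8').decode(body);
}
```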
<p>The situation is even worse <em>again</em> for <code>Storage.list()</code>. The example <a target="_blank" href="https://aws-amplify.github.io/docs/js/storage#list-keys">here</a> simply indicates that it returns a result. That’s it. It returns <em>something</em>. Once again I traced it down to the <a target="_blank" href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property">SDK s3.listObjects documentation</a>, but this time the actual results did not even match what is there and thus showing that AWS’s documentation problem extends beyond Amplify. (FYI: The actual returned result is just the <code>Contents</code> array, and its fields start with lower-case, not the documented upper-case.)</p>
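<p>Based on that finding, here is a sketch of consuming the list result as it actually comes back: an array of entries with lower-cased field names. (The <code>ListEntry</code> shape and helper below are my own illustration, not an SDK type.)</p>

```typescript
// Observed shape of each entry in the Storage.list() result:
// lower-cased field names, unlike the documented S3 Contents fields.
interface ListEntry {
  key: string;
  size?: number;
  lastModified?: Date;
  eTag?: string;
}

// Collect the keys under a given prefix (e.g. 'protected/') from the result.
function keysUnderPrefix(entries: ListEntry[], prefix: string): string[] {
  return entries.filter(e => e.key.startsWith(prefix)).map(e => e.key);
}
```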
<p>I can probably dig further into the source, but this exercise has reached a level of absurdity. It is a failure of an API, and a sign of unmaintainable code, when anything past the documentation and function signature needs to be read by someone trying to use it. The point of Amplify is to make using AWS easy. Its storage module turned out to be just another layer of mystery to sleuth my way through. It’s particularly a miss here given that the Amplify source itself <em>has</em> a bit more documentation sitting there ready to provide help, but it is hidden away.</p>
<p>Here are highlight <em>snippets</em> of what actually worked with the Amplify Storage API. The full code can be found in <a target="_blank" href="https://github.com/kernwig/sqac-amplify/pull/4/files?w=1">PR #4</a>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> {AmplifyService} <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-amplify-angular'</span>;  
<span class="hljs-keyword">import</span> {StorageClass} <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-amplify'</span>;

<span class="hljs-meta">@Injectable</span>()  
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> PersistenceService {  
  <span class="hljs-comment">/** API to the cloud storage */</span>
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> cloud: StorageClass;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> amplifySvc: AmplifyService</span>) {  
    <span class="hljs-built_in">this</span>.cloud = <span class="hljs-built_in">this</span>.amplifySvc.storage();  
  }

  <span class="hljs-keyword">async</span> loadUser(): <span class="hljs-built_in">Promise</span>&lt;UserSettings&gt; {  
    <span class="hljs-keyword">const</span> downloadedObj = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.cloud.get(  
      settingsKey,   
      {level: <span class="hljs-string">'private'</span>, download: <span class="hljs-literal">true</span>}  
    );  
    <span class="hljs-keyword">const</span> downloadedStr = (downloadedObj <span class="hljs-keyword">as</span> <span class="hljs-built_in">any</span>).Body.toString(<span class="hljs-string">'utf-8'</span>);
    <span class="hljs-keyword">const</span> downloadedJson = <span class="hljs-built_in">JSON</span>.parse(downloadedStr) <span class="hljs-keyword">as</span> UserSettingsJSON;  
    <span class="hljs-comment">// etc  </span>
  }

  <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> saveModelToCloud&lt;T <span class="hljs-keyword">extends</span> AbstractStorableModel&gt;(model: T, id: <span class="hljs-built_in">string</span>, level: <span class="hljs-string">'private'</span>|<span class="hljs-string">'protected'</span>): <span class="hljs-built_in">Promise</span>&lt;T&gt; {  
    <span class="hljs-keyword">const</span> json = model.toJSON() <span class="hljs-keyword">as</span> AbstractStorableModelJSON;  
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.cloud.put(  
      id, <span class="hljs-built_in">JSON</span>.stringify(json),  
      {level, contentType: <span class="hljs-string">"application/json"</span>}  
    );
    <span class="hljs-keyword">return</span> model;
  }  
}
</code></pre>
<p>The good news is, I now have a file in my storage S3 bucket! 🎉</p>
<h3 id="heading-when-sub-is-not-sub">When sub is not sub</h3>
<p>In <a target="_blank" href="https://adam.fanello.net/migrating-a-legacy-app-to-cloud-native-part-4-2741585e4953">part 4</a> of this series I explored the storage policies and found that the policy secures the S3 storage path <code>"private/${cognito-identity.amazonaws.com:sub}/"</code> in the CloudFormation.</p>
<p>Once I stored a file though, I saw it actually create the S3 prefix <code>“private/us-west-2:ecee2298-97c5-4331-867c-908eef1660c8/”</code>, while my <code>CognitoUser</code> has sub <code>"508903f1-9203-4cf6-b0d8-353fc54c2916"</code>. 🤔 Why aren’t these matching?</p>
<p>After a good bit of digging, I finally noticed that the first part of the CloudFormation says “cognito-<em>identity</em>”. This is the Cognito <strong>Identity</strong> Pool. Meanwhile, the <code>CognitoUser</code> sub is from the Cognito <strong>User</strong> Pool. Two different things. Whoops. This is a gotcha and it is talked about at length in Amplify CLI <a target="_blank" href="https://github.com/aws-amplify/amplify-cli/issues/1847">issue #1847</a> and <a target="_blank" href="https://github.com/aws-amplify/amplify-js/issues/54">Amplify JS issue #54</a>, where some commenters have gone a bit berserk with complicated workarounds. It’s a surprise, but a solvable one with just a couple lines of code. Simply, I call <code>AuthService#currentUserInfo()</code> and use the ID from this (Identity Pool ID) instead of the value from <code>AuthService#authStateChange$</code>. The code changes can be seen in <a target="_blank" href="https://github.com/kernwig/sqac-amplify/commit/2ffb97af1c4821b4d2d8c8d093b48ffc91a5e97c?w=1">this commit</a>.</p>
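<p>A tiny sketch to make the distinction concrete (in the real app the Identity Pool id comes from <code>AuthService#currentUserInfo()</code>; this just shows the prefix construction):</p>

```typescript
// Sketch of the distinction (prefix format from the CloudFormation policy).
// The S3 "private" path is keyed by the Cognito *Identity* Pool id, which
// looks like "us-west-2:ecee2298-...", not the User Pool sub.
function privatePrefix(identityPoolId: string): string {
  return "private/" + identityPoolId + "/";
}

// User Pool sub: produces a prefix that will NOT match the bucket contents.
privatePrefix("508903f1-9203-4cf6-b0d8-353fc54c2916");
// Identity Pool id: matches what actually appears in S3.
privatePrefix("us-west-2:ecee2298-97c5-4331-867c-908eef1660c8");
// → "private/us-west-2:ecee2298-97c5-4331-867c-908eef1660c8/"
```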
<h3 id="heading-storage-nosql-option">Storage NoSQL Option?</h3>
<p>In digging through Amplify to discover how to use the Storage module, I learned that it is a thin wrapper over the AWS SDK and its behavior is tightly coupled to S3. What would happen if I had chosen the NoSQL (DynamoDB) option when doing the <code>amplify add storage</code> command? Would the same APIs provide entirely different results? I looked back in the <a target="_blank" href="https://github.com/aws-amplify/amplify-js/tree/master/packages/storage">amplify-js storage source code</a>, and found only the S3 provider. 😲</p>
<p>Then I took a peek at the iOS and Android documentation. Like the Javascript documentation, they say to choose the “Content” option. (I missed that remark in my earlier readings.) I suppose the team writing the Amplify CLI is simply ahead of the client library teams. I’m glad I chose the S3 option when contemplating this during the architecting phase of the project!</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>While the Amplify CLI makes setting up AWS infrastructure easier, and the Amplify Authentication module makes managing users <em>really</em> easy, the Storage module has been a bust. Given that I had to dig down to the underlying SDK to figure out how to use any of it, it would have been easier to just use that underlying SDK. It is far from a lost cause though. The capabilities are there; redemption only requires proper API documentation! Proper use of TypeScript would also go a long way. (Avoid type <code>any</code>.)</p>
<p>This is, unfortunately, an all-too-common problem with developer-driven products. We developers like to make features, not documentation. We make the feature to the “works for me” point, call it complete, and move on to the next shiny thing. Few of us like writing for <em>humans</em>. Even when there is documentation (and Amplify does have a <em>lot</em> of documentation written), much of it turns out to be fluff. “Look at this shiny new feature! Use it to do great things!” When you actually try to use it though, we get things like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650409291303/Ug25LIikH.png" alt /></p>
<p>The list function takes two parameters of something, and returns a something. 🤷‍♂️ This is not an AWS problem, it’s an industry problem that I’ve seen throughout my career. Perhaps part of the reason open source has become so dominant isn’t economic, but rather because given a choice between an undocumented proprietary library and an undocumented open source library, we use the only one that <em>can</em> be used.</p>
<h3 id="heading-coming-next-time">Coming next time…</h3>
<p>I need to spend a bit more time testing my app and making sure this storage is working, so there may be a bit of delay before I can move on. The next part shall involve using Amplify CLI to set up the very small GraphQL API so that it will generate the DynamoDB table. To get data <em>into</em> that table though, I’ll be monitoring the S3 data bucket for changes and storing the metadata into DynamoDB. As seen in my architectural diagram (end of <a target="_blank" href="https://adamfanello.medium.com/migrating-a-legacy-app-to-cloud-native-part-2-533dfebd38fb">part 2</a>), I may need to step outside of Amplify for this. More to come!</p>
<p><em>(Story originally published</em> <a target="_blank" href="http://fanello.net/home/2019/09/18/migrating-a-legacy-app-to-cloud-native-part-4/"><em>here</em></a> <em>in September 2019.)</em></p>
]]></content:encoded></item></channel></rss>