Deployed dccbf67 with MkDocs version: 1.1.2

Kalyanasundaram Somasundaram
2020-12-07 13:12:21 +05:30
parent 9f7fc3659b
commit 27b089555c
4 changed files with 3 additions and 3 deletions


@@ -1118,7 +1118,7 @@
<p>Initially we can start by deploying this app on one virtual machine on any cloud provider. But this is a <code>Single point of failure</code>, which is something we never allow as an SRE (or even as an engineer). So an improvement here would be deploying multiple instances of the application behind a load balancer. This certainly prevents the problem of one machine going down.</p>
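<p>As a rough illustration of what the load balancer does here, below is a minimal Python sketch of round-robin selection across instances; the backend addresses are made up for the example.</p>
<pre><code class="python"># A minimal sketch of round-robin load balancing, assuming three
# hypothetical application instances. Real load balancers (Nginx,
# HAProxy, cloud LBs) also do health checks and connection handling.
import itertools

backends = ["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"]
rotation = itertools.cycle(backends)

def pick_backend():
    """Return the next instance in round-robin order."""
    return next(rotation)

# Each incoming request would be forwarded to pick_backend().
for _ in range(5):
    print(pick_backend())
</code></pre>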
<p>Scaling here would mean adding more instances behind the load balancer. But this is scalable only up to a certain point. After that, other bottlenecks in the system will start appearing, i.e., the DB will become the bottleneck, or perhaps the load balancer itself. How do you know what the bottleneck is? You need to have observability into each aspect of the application architecture.</p>
<p>Only after you have metrics will you be able to know what is going wrong and where. <strong>What gets measured, gets fixed!</strong></p>
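<p>To make that concrete, here is a minimal sketch of instrumenting request count and latency, assuming the Python <code>prometheus_client</code> library; the metric names and port are illustrative, not the app's actual instrumentation.</p>
<pre><code class="python"># A minimal metrics sketch, assuming prometheus_client
# (pip install prometheus-client). Names and port are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests served")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

def handle_request():
    REQUESTS.inc()            # count every request
    with LATENCY.time():      # record how long handling took
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)   # serves /metrics for a scraper to collect
    while True:
        handle_request()
</code></pre>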
- <p>Get deeper insights into scaling from School of SRE's Scalability module and, after going through it, apply your learnings and takeaways to this app. Think about how we will make this app geographically distributed, highly available, and scalable.</p>
+ <p>Get deeper insights into scaling from School of SRE's <a href="../../systems_design/scalability/">Scalability module</a> and, after going through it, apply your learnings and takeaways to this app. Think about how we will make this app geographically distributed, highly available, and scalable.</p>
<h2 id="monitoring-strategy">Monitoring Strategy</h2>
<p>Once we have our application deployed, it will be working OK. But not forever. Reliability is in the title of our job, and we make systems reliable by designing them in a certain way. But things will still go down. Machines will fail. Disks will behave weirdly. Buggy code will get pushed to production. And all these possible scenarios will make the system less reliable. So what do we do? <strong>We monitor!</strong></p>
<p>We keep an eye on the system's health, and if anything is not going as expected, we want to get alerted.</p>
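<p>A minimal sketch of that idea, using only the Python standard library: probe a health endpoint and raise an alert when it fails. The endpoint URL, probe interval, and alert action are placeholders.</p>
<pre><code class="python"># A minimal health-check monitor sketch. The /health endpoint and the
# print-based "alert" are placeholders for a real probe and pager.
import time
import urllib.request

HEALTH_URL = "http://localhost:8000/health"  # hypothetical endpoint

def probe():
    """Return True if the app answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

if __name__ == "__main__":
    while True:
        if not probe():
            print("ALERT: health check failed")  # stand-in for paging
        time.sleep(30)  # probe interval
</code></pre>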


@@ -1156,7 +1156,7 @@
<li>What if a URL is temporarily down? (See the retry sketch after this list.)</li>
</ol>
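<p>For the temporarily-down case, here is a minimal sketch of retrying with exponential backoff, using only the Python standard library; the attempt count and delays are illustrative.</p>
<pre><code class="python"># A minimal retry-with-backoff sketch for fetching a URL that may be
# temporarily down. Attempt count and base delay are illustrative.
import time
import urllib.request

def fetch_with_retries(url, attempts=3, base_delay=1.0):
    """Try to fetch url, backing off between failed attempts."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries, surface the failure
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
</code></pre>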
<h3 id="4-storage">4. Storage</h3>
- <p>Finally, storage. Where will we store the data that we will generate over time? There are multiple database solutions available, and we will need to choose the one that suits this app the most. A relational database like MySQL would be a fair choice, but <strong>be sure to check out School of SRE's database section for deeper insights into making a more informed decision.</strong></p>
+ <p>Finally, storage. Where will we store the data that we will generate over time? There are multiple database solutions available, and we will need to choose the one that suits this app the most. A relational database like MySQL would be a fair choice, but <strong>be sure to check out School of SRE's <a href="../../databases_sql/intro/">SQL database section</a> and <a href="../../databases_nosql/intro/">NoSQL databases section</a> for deeper insights into making a more informed decision.</strong></p>
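<p>To make the storage layer concrete, here is a minimal sketch using <code>sqlite3</code> from the Python standard library as a stand-in for MySQL; the single-table schema for short-link mappings is an assumption for illustration, not the app's actual schema.</p>
<pre><code class="python"># A minimal storage sketch: one table mapping a short code to its
# original URL. sqlite3 stands in for MySQL; the schema is assumed.
import sqlite3

conn = sqlite3.connect("shortener.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS urls (
           short_code   TEXT PRIMARY KEY,
           original_url TEXT NOT NULL,
           created_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
       )"""
)

def save(short_code, original_url):
    conn.execute(
        "INSERT INTO urls (short_code, original_url) VALUES (?, ?)",
        (short_code, original_url),
    )
    conn.commit()

def lookup(short_code):
    row = conn.execute(
        "SELECT original_url FROM urls WHERE short_code = ?",
        (short_code,),
    ).fetchone()
    return row[0] if row else None
</code></pre>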
<h3 id="5-other">5. Other</h3>
<p>We are not accounting for users in our app, or other possible features like rate limiting, customized links, etc., but these will eventually come up with time. Depending on the requirements, they too might need to be incorporated.</p>
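<p>As one example of such a feature, here is a minimal token-bucket rate limiter sketch in Python; the capacity and refill rate are illustrative.</p>
<pre><code class="python"># A minimal token-bucket rate limiter sketch. Capacity and refill
# rate are illustrative; real deployments often rate limit per client.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, refill_per_sec=1)  # burst 10, 1 req/s sustained
print(bucket.allow())  # True until the bucket is drained
</code></pre>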
<p>The minimal working code is given below for reference, but I'd encourage you to come up with your own.</p>