AWS (Amazon Web Services) FAQ

This post lists AWS (Amazon Web Services) frequently asked questions and answers for freshers preparing for their next job interview or an AWS certification.

AWS provides certification exams across Cloud Practitioner, Developer, Architect, Administrator, and Specialty tracks. Each certification is valid for two years, after which IT professionals can recertify. There are hundreds of test centers around the globe where the exams can be taken.

Types of AWS (Amazon Web Services) Certifications:

  • Foundational Level Certifications:
    • AWS Certified Cloud Practitioner (Foundational)
  • Associate level Certifications:
    • AWS Certified Solutions Architect
    • AWS Certified Developer
    • AWS Certified SysOps Administrator
  • Specialty Level Certifications:
    • AWS Certified Advanced Networking
    • AWS Certified Data Analytics
    • AWS Certified Security
    • AWS Certified Machine Learning
    • AWS Certified Alexa Skill Builder
    • AWS Certified Database
  • Professional Level Certifications:
    • AWS Certified Solutions Architect
    • AWS Certified DevOps Engineer

For more details, read the full blog:

What are the AWS Certifications?

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers around the globe. AWS is a platform that offers reliable, flexible, scalable, user-friendly, and cost-effective cloud computing solutions. The AWS platform combines Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and packaged Software as a Service (SaaS).

History of AWS

AWS was launched back in 2002 as a free service that allowed companies to include Amazon.com features on their own websites. The early version was aimed at helping developers build applications and tools that incorporated some of Amazon.com’s unique features into their sites.

For more details, read the full blog:

What is AWS?

AWS Lambda (or Lambda for short) is an event-driven, serverless compute service offered by Amazon Web Services (AWS) that runs code in response to events and manages the required resources automatically, so that one does not need to look after the resources needed to execute the code.

It allows users to focus on computational logic and the desired outcome instead of worrying about resources such as operating systems, scaling, and so on. Once the code is provided, Lambda runs it on highly available infrastructure, manages all the resources, and performs all the essential activities, including server and operating system maintenance and security patch deployment. AWS Lambda can also be used to extend the functionality of other AWS services with custom logic.

Here are the key terms used for AWS Lambda:

Serverless:

When we say Lambda is a serverless compute service, it does not mean that no server is used or involved. Here, serverless means the developer does not need to maintain the server themselves. Lambda is a fully managed service that looks after resources and infrastructure by itself.

Event-driven:

Event-driven means that an event must take place to drive this service. The developer specifies the event, and when that event occurs, it triggers the Lambda function to execute the code.
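The event-driven model above boils down to a single handler function that Lambda invokes with the event payload. Below is a minimal sketch of such a handler in Python; the event shape and field names here are made up for illustration (a real trigger such as S3 or API Gateway supplies its own event structure), and the function can be exercised locally by calling it directly.

```python
import json

def lambda_handler(event, context):
    """Minimal handler: Lambda calls this with the triggering event.

    `event` carries the trigger's payload (here a made-up {"name": ...}
    dict); `context` carries runtime metadata and is unused here.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can simulate the event that would trigger the function:
if __name__ == "__main__":
    print(lambda_handler({"name": "Lambda"}, None))
```

Because the handler is just a function, it can be unit-tested locally with sample events before being uploaded to Lambda.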

For more details, read the full blog:

What is AWS Lambda?

AWS stands for Amazon Web Services, one of the most popular and ever-evolving cloud computing platforms, developed by Amazon. This platform not only provides computation but also offers many other services that make it much easier for a developer to build an application. It lets users focus primarily on the business logic, since it automatically manages most application components, such as the operating system, hardware, and memory, with a “pay as you go” pricing scheme.

The high availability of an application means how reliably the application stays up and responsive for the customer: availability means the application remains accessible even when individual components fail. So today in this article we will discuss various strategies to attain high availability for an application created on AWS.
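A quick back-of-the-envelope calculation shows why redundancy is the core high-availability strategy: if independent instances each stay up with some probability, the chance that at least one is up grows rapidly with the number of copies. This sketch assumes the instances fail independently, which real deployments only approximate.

```python
def combined_availability(single: float, copies: int) -> float:
    """Probability that at least one of `copies` independent
    instances is up, given each is up with probability `single`."""
    return 1 - (1 - single) ** copies

# One server that is up 99% of the time vs. two in parallel:
print(combined_availability(0.99, 1))  # ~0.99   ("two nines")
print(combined_availability(0.99, 2))  # ~0.9999 ("four nines")
```

This is why placing redundant instances in separate Availability Zones is the standard first step toward high availability on AWS.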

For more details, read the full blog:

What is high availability in AWS?

Amazon Web Services (AWS) Cloud is one of several cloud platforms on the market; other big names include Microsoft Azure, Google Cloud, IBM Cloud, and Adobe Creative Cloud. Cloud platforms are the easiest way to store and run your workloads in a virtual space instead of a physical setup, which gives you the freedom to operate them from anywhere in the world.

What makes AWS stand out is that it is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers around the globe. AWS is a platform that offers reliable, flexible, scalable, user-friendly, and cost-effective cloud computing solutions.

The AWS platform combines Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and packaged Software as a Service (SaaS).

The points below explain why AWS is the leading cloud platform:

  • Most Functionality
  • The largest community of customers and partners
  • Most Secure
  • Fastest pace of innovation
  • Most proven operational expertise
  • Global Network of AWS Regions

For more details, read the full blog:

Why AWS is a Leading Global Platform

Suppose you are new in business and have just launched your application on a single server. After some time your business starts growing exponentially, bringing more traffic to your website; but because you have a single server and the traffic exceeds its capacity, customers face high latency. To overcome the problem you need to distribute the traffic across multiple servers. Traditionally, customers would report slow responses, you would investigate the problem, and then install more servers at significant capital expense. This is time-consuming, hectic, and unsatisfactory. Wouldn’t it be better if a service could do it for you automatically? Enter Amazon Elastic Load Balancer, or ELB.

Types of AWS Load Balancing techniques:

AWS offers three types of Elastic Load Balancer:

  • Application Load Balancer.
  • Network Load Balancer.
  • Classic Load Balancer.
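To make the idea concrete, here is a deliberately tiny round-robin model of what a load balancer does: spread incoming requests across a pool of registered targets. This is a toy sketch with made-up server names; a real ELB adds health checks, TLS termination, and content-based or connection-based routing depending on the balancer type.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of load balancing: each request goes to the
    next target in rotation (the simplest distribution policy)."""

    def __init__(self, targets):
        self._targets = cycle(targets)  # endless rotation over targets

    def route(self, request: str) -> str:
        target = next(self._targets)
        return f"{request} -> {target}"

lb = RoundRobinBalancer(["server-a", "server-b", "server-c"])
for i in range(4):
    print(lb.route(f"req-{i}"))
# req-0 -> server-a, req-1 -> server-b, req-2 -> server-c, req-3 -> server-a
```

Note how the fourth request wraps back to the first server: no single server bears the full load, which is exactly the problem described above.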

For more details, read the full blog:

What is AWS Elastic Load Balancing (ELB)?

AWS Backup is a fully managed backup service created to centralize and automate backing up data across AWS services, both in the AWS cloud and on premises using AWS Storage Gateway. AWS Backup can be used to centrally configure backup policies and to monitor backup activity for AWS resources. Automating AWS Backup consolidates backup tasks that previously had to be performed service by service, and removes the need for custom scripts and manual processes. With just a few steps in the AWS Backup console, anyone can construct backup policies that automate backup schedules and retention management. AWS Backup provides a managed, policy-based backup solution that simplifies backup management and enables you to satisfy business and regulatory backup compliance requirements.

AWS Backup Features

  • Centralized Backup Management
  • Cross-Region Backup
  • Policy-Based Backup Solutions
  • Tag-Based Backup Policies
  • Backup Activity Monitoring
  • Lifecycle Management Policies
  • Backup Access Policies
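The policy-based features above revolve around a backup plan: a named set of rules, each with a schedule and a retention lifecycle. The sketch below models a plan as a plain Python dict mirroring the general JSON shape AWS Backup uses; the plan name, vault name, and 35-day retention are made-up values for illustration, not a recommended configuration.

```python
# Hypothetical backup plan (plan/vault names and retention are made up),
# roughly in the JSON shape AWS Backup plans take.
backup_plan = {
    "BackupPlanName": "daily-rds-and-ebs",
    "Rules": [
        {
            "RuleName": "daily-at-5am-utc",
            "TargetBackupVaultName": "my-backup-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",   # daily schedule
            "Lifecycle": {"DeleteAfterDays": 35},        # retention management
        }
    ],
}

def retention_days(plan: dict) -> list:
    """Pull the retention period out of every rule in a plan."""
    return [rule["Lifecycle"]["DeleteAfterDays"] for rule in plan["Rules"]]

print(retention_days(backup_plan))  # [35]
```

Defining the schedule and retention once per plan, instead of per service, is what "removes the need for custom scripts" in the description above.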

For more details, read the full blog:

What are the AWS Backup and Use cases?

Amazon Auto Scaling is a service offered by AWS that allows us to increase or decrease the capacity of an application on demand, depending on limits we set using CloudWatch. Auto Scaling not only deals with servers by dynamically scaling EC2 capacity up or down but also helps maintain availability. It automatically adds instances when traffic increases and removes instances when traffic drops, according to the conditions we define. Amazon EC2 Auto Scaling also performs fleet management and health checks on EC2 instances.

AWS EC2 Scaling policies

The EC2 Auto Scaling engine offers three types of policies:

  • Target tracking scaling.
  • Step Scaling.
  • Simple Scaling.

Types of AWS EC2 Scaling:

  • Scheduled Scaling.
  • Dynamic Scaling.
  • Predictive Scaling.
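The arithmetic behind target tracking, the first policy type above, can be sketched simply: it scales capacity roughly in proportion to how far the metric is from its target. The function below is a simplified model of that proportional rule (real target tracking also applies cooldowns, warm-up periods, and min/max bounds, which are omitted here).

```python
import math

def target_tracking_capacity(current_capacity: int,
                             metric_value: float,
                             target_value: float) -> int:
    """Capacity that brings an average-utilization metric back toward
    its target, following the proportional rule target tracking uses:
    new_capacity ~= ceil(current * metric / target)."""
    return max(1, math.ceil(current_capacity * metric_value / target_value))

# 4 instances averaging 80% CPU with a 50% target -> scale out to 7
print(target_tracking_capacity(4, 80.0, 50.0))
# 4 instances averaging 20% CPU with a 50% target -> scale in to 2
print(target_tracking_capacity(4, 20.0, 50.0))
```

Step and simple scaling differ in that the developer, not this formula, chooses the capacity change for each alarm threshold.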

For more details, read the full blog:

All about AWS EC2 Auto Scaling

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep-learning functionality of Automatic Speech Recognition (ASR) for converting speech to text, and Natural Language Understanding (NLU) for recognizing the intent of the text, enabling you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep-learning technologies that power Amazon Alexa are now accessible to any developer, letting you quickly and easily build sophisticated, natural-language conversational bots (“chatbots”).

Benefits of using AWS / Amazon Lex:

  • Ease of Access.
  • Seamless deployment and scaling.
  • Built-in integration with the AWS platform.
  • Cost-effectiveness.

Some of AWS Lex’s use cases:

  • Call Center Bots
  • Informational Bots
  • Application Bots
  • Enterprise Productivity Bots
  • Amazon Lex in Internet Of Things (IoT)
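The core NLU task in all of these bots is mapping a user utterance to an intent. Here is a deliberately naive keyword-overlap sketch of that idea, with made-up intent names; Amazon Lex itself uses deep-learning models and also fills slots (e.g. dates, cities), none of which this toy attempts.

```python
# Naive keyword matcher illustrating the *idea* of intent recognition.
# Intent names and keywords are hypothetical; Lex uses deep-learning NLU.
INTENTS = {
    "BookHotel": {"book", "hotel", "room"},
    "OrderPizza": {"order", "pizza"},
}

def recognize_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the intent sharing the most keywords with the utterance.
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else "FallbackIntent"

print(recognize_intent("I want to order a pizza"))  # OrderPizza
print(recognize_intent("book me a hotel room"))     # BookHotel
print(recognize_intent("good morning"))             # FallbackIntent
```

A real service must handle paraphrases ("get me a pie" should still mean OrderPizza), which is exactly what the deep-learning NLU layer adds over keyword matching.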

For more details, read the full blog:

All About AWS Lex

Amazon Athena is an interactive query service that makes it simple to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Athena is simple to use. You simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there is no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anybody with SQL skills to quickly analyze large-scale datasets.

Athena integrates out of the box with the AWS Glue Data Catalog, letting you create a unified metadata repository across services, crawl data sources to discover schemas, populate your catalog with new and modified table and partition definitions, and maintain schema versioning.

Benefits of using Amazon Athena:

  • Start querying instantly.
  • Pay per query.
  • Open, powerful, standard.
  • Fast, Really fast.
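Since Athena's interface is plain SQL, the kind of query you would run against S3 data can be illustrated locally. The sketch below uses Python's built-in sqlite3 as a stand-in engine; the `access_logs` table and its rows are invented sample data, and in Athena the same SELECT would run against an external table defined over files in S3.

```python
import sqlite3

# Local stand-in for an Athena table defined over files in S3.
# Table name and sample rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_logs (status INTEGER, bytes INTEGER)")
conn.executemany("INSERT INTO access_logs VALUES (?, ?)",
                 [(200, 512), (200, 1024), (404, 64)])

# Standard SQL aggregate, exactly the style of query Athena accepts:
rows = conn.execute(
    "SELECT status, COUNT(*) AS hits, SUM(bytes) AS total_bytes "
    "FROM access_logs GROUP BY status ORDER BY status"
).fetchall()
print(rows)  # [(200, 2, 1536), (404, 1, 64)]
```

The point of the "no ETL" claim above is that Athena applies such a schema at query time over the raw files, rather than requiring the data to be loaded into a database first.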

For more details, read the full blog:

All about AWS Athena

AWS is a cloud-based service offered by Amazon, named Amazon Web Services (AWS for short), with a pay-as-you-go pricing scheme and services that are easy to set up and use. As soon as you start developing an application or software, you need to launch an EC2 instance. EC2 stands for Elastic Compute Cloud. It is one of the main services, letting you rent virtual computers to perform computation and develop software or applications. Whenever someone intends to create an application on the cloud, an EC2 instance needs to be launched in a location.

To launch an EC2 instance, AWS offers various geographic locations worldwide. These locations are composed of Regions, Availability Zones, and Local Zones. Each Region is a separate geographic territory. Each Region contains multiple isolated areas known as Availability Zones. Local Zones give you the ability to place resources, such as compute and storage, in multiple locations closer to your end users. Resources aren’t replicated across Regions unless you explicitly choose to do so. So before discussing EC2, let’s discuss Regions and Availability Zones.

To know more about AWS Region: Click Here  

AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it straightforward and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores and data streams. AWS Glue comprises a central metadata repository known as the AWS Glue Data Catalog, an ETL engine that automatically generates Python or Scala code, and a flexible scheduler that handles dependency resolution, job monitoring, and retries. AWS Glue is serverless, so there’s no infrastructure to set up or manage.

AWS Glue is designed to work with semi-structured data. It introduces a component called a dynamic frame, which you can use in your ETL scripts. A dynamic frame is similar to an Apache Spark DataFrame, a data abstraction used to organize data into rows and columns, except that each record is self-describing, so no schema is required initially. With dynamic frames you get schema flexibility and a set of advanced transformations designed specifically for dynamic frames. You can convert between dynamic frames and Spark DataFrames, so you can take advantage of both AWS Glue and Spark transformations to perform the kinds of analyses you need.

Benefits of AWS Glue:

  • Less hassle
  • More Power
  • Cost-effective
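The "self-describing record" idea behind dynamic frames can be illustrated in plain Python: each record carries its own fields, so a schema can be discovered at read time instead of being declared up front. This is a conceptual sketch with invented sample records, not the actual awsglue DynamicFrame API, which runs inside Glue jobs on Spark.

```python
# Plain-Python illustration of self-describing records, the idea
# behind Glue's dynamic frame (this is NOT the awsglue API).
records = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob", "email": "bob@example.com"},  # extra field
]

def infer_schema(recs):
    """Union of fields observed across records, discovered at read
    time rather than declared up front."""
    fields = set()
    for rec in recs:
        fields.update(rec)
    return sorted(fields)

print(infer_schema(records))  # ['email', 'id', 'name']
```

A rigid schema declared before reading would have rejected or dropped the second record's `email` field; discovering the union of fields afterward is the "schema flexibility" described above.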

For more details, read the full blog:

All About AWS Glue

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, giving you a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.

You can create alarms that watch metrics and send notifications, or automatically make changes to the resources you are monitoring when a threshold is breached. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money.

With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
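The alarm logic described above, which fires when a metric breaches a threshold for several consecutive periods, can be modeled in a few lines. This is a simplified sketch: real CloudWatch alarms also distinguish missing-data handling and an INSUFFICIENT_DATA state, and the three-period default here is an illustrative choice, not CloudWatch's.

```python
def alarm_state(datapoints, threshold, periods_to_alarm=3):
    """Toy CloudWatch-style alarm: return ALARM when the last
    `periods_to_alarm` datapoints all breach the threshold."""
    recent = datapoints[-periods_to_alarm:]
    if len(recent) == periods_to_alarm and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [42.0, 55.0, 81.0, 86.0, 90.0]         # avg CPU % per period
print(alarm_state(cpu, threshold=80.0))       # ALARM -> e.g. scale out
print(alarm_state(cpu[:3], threshold=80.0))   # OK (only one breach so far)
```

Requiring several consecutive breaches, rather than reacting to a single datapoint, is what keeps a brief CPU spike from triggering an unnecessary scale-out.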

For more details, read the full blog:

All About AWS Cloudwatch

Amazon QuickSight is a business analytics service that you can use to create visualizations, perform ad hoc analysis, and get business insights from your data. It can automatically discover AWS data sources and also works with your own data sources. Amazon QuickSight enables organizations to scale to a huge number of users and delivers responsive performance by using a robust in-memory engine (SPICE).

Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it simple to deliver insights to everybody in your organization.

As a fully managed service, QuickSight lets you easily build and publish interactive dashboards that include ML Insights. Dashboards can then be accessed from any device and embedded into your applications, portals, and websites.

With its pay-per-session pricing, QuickSight lets you give everyone access to the data they need, while paying only for what you use.

Using Amazon QuickSight, you can do the following:

  • Get started quickly – Sign in, pick a data source, and create your first visualization in minutes.
  • Access data from multiple sources – Upload files, connect to AWS data sources, or use your own external data sources.
  • Take advantage of dynamic visualizations – Smart visualizations are built dynamically based on the fields that you select.
  • Get answers fast – Generate quick, interactive visualizations on enormous data sets.
  • Tell a story with your data – Create data dashboards and point-in-time visuals, share insights, and collaborate with others.

For more details, read the full blog:

All about AWS QuickSight or Amazon Quicksight

AWS Elastic Beanstalk is the quickest and simplest way to get web applications up and running on AWS. Developers simply upload their application code, and the service automatically handles all the details, such as resource allocation, load balancing, auto scaling, and monitoring. Elastic Beanstalk is ideal if you have a PHP, Java, Python, Ruby, Node.js, .NET, Go, or Docker web application. Elastic Beanstalk uses core AWS services, such as Amazon EC2, Amazon Elastic Container Service (Amazon ECS), Auto Scaling, and Elastic Load Balancing, to easily support applications that need to scale to serve a large number of clients.

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed in various languages on familiar servers such as Apache, Nginx, Passenger, and IIS. You simply upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

Characteristics of AWS Elastic Beanstalk:

  • Scaling Apps
  • Monitoring Apps
  • Application Health
  • Monitoring, Logging, and Tracing
  • Wide Selection of Application Platforms
  • Customization
  • Variety of Application Deployment Options
  • Management and Updates

For more details, read the full blog:

All About AWS Elastic Beanstalk

What is Edge computing?

Edge computing refers to a distributed environment in which both data storage and computation are brought closer to where they are needed, thereby improving response time. This helps fulfill computational requirements efficiently. Let us now explain the advantages of edge computing through real-life examples:

  • Broadcasting platforms – Streaming platforms like Netflix and Prime Video use edge storage to create a smooth experience, made possible by caching common data in facilities close to end users.
  • AI-enabled vehicles – Vehicles enabled with artificial intelligence use a lot of real-time information from the environment. In this case, edge computing delivers response times that would not be possible with cloud computing alone.
  • Smart homes – Data is processed closer to the source to minimize response time; alerting medical teams or fire services are examples where response time matters a lot.

What is Cloud Computing?

Cloud computing provides different resources via the internet, helping users reduce costs and focus more on the business rather than worrying about IT issues.

Cloud computing can have different models as follows: 

  • Community cloud – companies that have similar goals and requirements can share a cloud, usually implemented on-site or by a third party.
  • Private cloud – specific to a particular organization.
  • Public cloud – mainly operated by a provider company, offering facilities to users on a subscription basis.
  • Hybrid cloud – a combination of public and private clouds; it also lets data and apps migrate from one domain to the other.

Let us now study the differences below, point by point.

Edge Computing vs. Cloud Computing:

  • Development – Edge computing development involves different application programs with different runtimes; cloud computing development is typically done on one platform using one programming language.
  • Security – Edge computing needs a comprehensive security strategy involving authentication and attack-handling procedures; cloud computing does not require as strict a security program of its own.
  • Best fit – Edge computing is ideal for applications with bandwidth constraints; cloud computing is ideal for programs that need huge data processing.
  • Location – Edge computing occurs on the device itself; cloud computing uses centralized cloud storage.