Monday, 5 October 2020

Certification Learnings

AZ-204T00A-A --> Microsoft certification course code

Certifications focus areas

--------------------------------------

Azure Functions is a serverless architecture


Logic Apps -->


https://www.skillpipe.com/ -- course material site (log in with the office mail id)


App Service

------------------------------------

PaaS service -->


The users' location should guide the choice of region for the resource group



App service plan question

-------------------------------

This is essentially a kind of pricing/subscription plan used to host a website


ACU -- Azure Compute Units (used to compare two plans)


A-series (basic virtual machines in the cloud) is the compute equivalent baseline



Is there a limit on the number of web apps that can run under an App Service plan?

No restriction is imposed by Azure


The Basic plan incurs cost; only the Dev/Test tier has a free option


Automatic scaling is not possible in the Basic plan


Scale up / vertical scaling --> can be done manually only (change the pricing tier, e.g. from B1 to Premium)


Scaling out / horizontal scaling --->


When a single PaaS VM is not sufficient, i.e. we are getting a lot of users and usage/performance has gone up, Azure gives an auto scaling option. Azure creates a replica of the same VM; whatever website is deployed on the first PaaS VM is deployed on the new VM also.


Autoscaling is used when load increases. Load balancing is taken care of by Azure.

Horizontal scaling and auto scale are the same thing



When the load reduces again in the future (for business reasons) after auto scaling, how does Azure handle those cases?

We can define rules (e.g. when traffic is above 80% start scaling out, and when it falls below 60% scale back in)
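The rule logic described above can be sketched in plain Python. This is only a toy illustration; real autoscale rules are metric-based settings in Azure Monitor, and the 80%/60% thresholds are just the example values from the note.

```python
# Toy sketch of an autoscale decision: scale out above 80% CPU,
# scale in below 60%, within min/max instance bounds.
# Thresholds and bounds are illustrative, not Azure defaults.

def autoscale_decision(cpu_percent: float, instances: int,
                       min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the new instance count for the given CPU reading."""
    if cpu_percent > 80 and instances < max_instances:
        return instances + 1   # scale out
    if cpu_percent < 60 and instances > min_instances:
        return instances - 1   # scale in
    return instances           # within band: stay put

print(autoscale_decision(85, 2))  # 3
print(autoscale_decision(50, 3))  # 2
```

In Azure the equivalent is a pair of scale-out/scale-in rules on a metric, evaluated over a time window, rather than a single instantaneous reading.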


Encrypted credentials in publishing profiles

We can download a profile from the Azure portal (profiles like Dev, UAT, Production); then using this profile we can publish from the IDE



Does an App Service plan create a storage account by default?

No, it won't.


Can an Azure web app connect to an on-premises database?

Yes; this is called the hybrid approach

Options 

1) Networking tab that Azure provides

Configure a virtual network (the database will be in this network)


2) Hybrid connections

With the help of a gateway / Hybrid Connection Manager

One-to-one configuration

https://docs.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections


Do we have a dedicated PaaS VM for each App Service plan?

Depends on the configuration. If Basic, then 1 PaaS VM.


Any limitation on how many web apps can be deployed within a resource group?

No such limitation


Logging and tracing are enabled through App Insights

We have classes for logging


Azure Traffic Manager(Certification )https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-faqs

-----------------------------------------

It's a DNS-based load balancer, used when we want load balancing at a global level.

What is DNS based?

Refer to DNS based.jpg

DNS-level load balancing is done by Azure Traffic Manager

We can enable Traffic Manager in the Azure portal




This is recorded while doing task imgapinamratha.azurewebsites.net



Deployment slot

-----------------------------

Downtime reduces (swap from Dev to UAT)


What is happening under the hood in swapping

--------------------------------------------

Virtual IP swapping is what Azure does (changing the IP reference)



16/6/2020

------------------------------------------------------

Storage Accounts

---------------

Blob storage 

Unstructured data (videos, image files) is stored in blob storage


Is Blob storage in some way related to HDFS (Hadoop distributed file system) storage?


500 terabytes -- capacity limit of a single storage account


Classic vs. modern (w.r.t. the Azure portal)

Modern Azure portal -- ARM -- Azure Resource Manager API


Classic portal -- ASM -- Azure Service Management API



LRS ---> replication -- durability of the data

Access tier -- hot/cool: active data / passive data that we are storing


Blob storage 

Block blobs -- videos, PDFs (uploaded in ~100 MB chunks). A single file of up to 4.75 TB can be stored as a single block blob; a single block blob can contain up to 50,000 blocks.

Append blobs -- composed of blocks, optimized for append operations. Maximum size of a single append blob is about 195 GB. Editable file; text logs; append operations.

Page blobs -- for storing virtual hard disks; similar to hard-disk storage
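As a rough illustration of the block-blob idea, the sketch below just splits data into fixed-size blocks; a real upload would go through the Azure Storage SDK or AzCopy, and the tiny block size here is made up for demonstration.

```python
# Illustrative only: block blobs are uploaded as individual blocks
# (up to 50,000 per blob) that are then committed as one blob.
# This function shows just the splitting step.

def chunk(data: bytes, block_size: int):
    """Split data into block-sized pieces; the last block may be smaller."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

blocks = chunk(b"x" * 10, 4)
print([len(b) for b in blocks])  # [4, 4, 2]
```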

AzCopy is a tool to upload and download huge amounts of data


Durability options

--------------------

LRS (Locally Redundant Storage) -- cheapest option. Creates 3 replicas of the data. Protects against rack, disk, and node failures.

Refer to the durability snapshot. If this data center fails, the data is gone.


ZRS (Zone Redundant Storage) -- data replicated synchronously. 3 replicas, 3 zones, one region. Protects against rack, disk/node, and zone failures; if the entire region goes down, the data is lost.


GRS -- multiple regions -- six replicas, 2 regions (3 per region). Async copy to the secondary. Protects against major regional disasters.


RA-GRS


------- GRS + read access to the secondary. We can use both copies. In normal GRS, while the primary is available we cannot use the secondary.


GZRS -- Refer Durability_1 snapshot


RA-GZRS -- Refer Durability_1 snapshot


At the blob level we get an archive option


Shared Access Signature key

https://docs.microsoft.com/en-us/rest/api/storageservices/create-account-sas?redirectedfrom=MSDN

Even if the blob is private, we can access it with a SAS token

SAS tokens can be grouped together (via stored access policies)
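For a feel of how an SAS signature is produced, here is a hedged sketch: an HMAC-SHA256 over a string-to-sign using the base64-decoded account key. The exact string-to-sign format is defined in the linked docs; the key and fields below are fabricated for illustration.

```python
# Sketch of SAS signature computation: HMAC-SHA256 of a string-to-sign
# with the (base64-decoded) account key, base64-encoded.
# The string-to-sign fields here are placeholders, not the real format.
import base64
import hashlib
import hmac

def sign(string_to_sign: str, account_key_b64: str) -> str:
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

fake_key = base64.b64encode(b"not-a-real-account-key").decode()
print(sign("r\n2020-06-17T00:00Z", fake_key))
```

The resulting signature is what gets appended to the SAS query string as the `sig` parameter.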


The Blob Storage client library helps us work with the storage account and maintain our data


For exclusive access while modifying a blob,

take a lease (see the lease.png snapshot)


An Azure Queue storage message can remain in the queue for up to 7 days by default


CosMos DB

------------------------------

What is a NoSQL database?

If we need to display data the way Facebook does, with a relational database we would need to join many tables... not a feasible approach

The problem with a relational database is the predefined schema, e.g. employee records in an employee table, and so on

RDBMS limits

--------------------------

It cannot keep scaling, i.e. horizontal and vertical scaling hit limits


If we implement the sharding pattern, the database can scale horizontally


In NoSQL we do not have any schema

Then how do we store data?


Why NoSQL

Store practically infinite data without performance issues; we want hyperscale

Brainstorm on how we create partitions


Plenty of No SQL databases

-----------------------------

MongoDB, Cassandra... then why Cosmos? Cosmos is a kind of wrapper

It exposes a MongoDB-compatible API (among others)


Using Cosmos we can do global replication

Consistency is a challenge. How do we overcome this?


Social media platforms can use loose (eventual) consistency. The advantage is that the application/database is highly available.

On the other hand, there are applications which deal with transactions; there consistency is important, so availability goes down.


Certification will have consistency level questions



https://docs.microsoft.com/en-us/azure/cosmos-db/request-units





17/6/2020

------------------------------------------------------------

Serverless services/architecture

What is an Azure Function?

-------------------------------

A serverless component

web App and a Function App difference

------------------------------------------------

Web App --> if we change one function in the code, we need to redeploy the entire web app. It must all be in one language.

With microservices coming into the picture, there is a new architecture in which we try to break the application down to the most micro level

Microservice example: the Amazon website sending the user an email with the number of items purchased (1 microservice)

The advantage of microservices:

Testing of the entire application is not needed; only the changed functions need testing

Each function can be written in a different language



Each Azure Function can be hosted as a microservice

To start an Azure Function:

Call it from an HTTP endpoint


A Logic App can call a Function App


Azure Kubernetes Service helps to run microservices


Durable functions

---------------------------------

Gives specific design patterns

Used in the case of chaining: F1 --> F2 --> F3

Refer to the Durable snapshot

https://github.com/Azure/azure-functions-durable-extension/


It is an extension of azure function


Orchestrator function -> coordinates the chained functions and checks their health

Durable Functions scenario fan-in/fan-out --> multiple functions in parallel
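The chaining pattern (F1 --> F2 --> F3) can be sketched with plain functions standing in for activities. A real Durable Functions orchestrator would call each activity through the durable extension's context; this is only the shape of the pattern.

```python
# Function chaining: the output of each activity feeds the next.
# Plain functions stand in for Durable Functions activities here.

def f1(x):
    return x + 1

def f2(x):
    return x * 2

def f3(x):
    return x - 3

def orchestrator(value):
    """Run the chain F1 -> F2 -> F3 over an input value."""
    for activity in (f1, f2, f3):
        value = activity(value)
    return value

print(orchestrator(5))  # ((5 + 1) * 2) - 3 = 9
```

Fan-out/fan-in would instead launch several activities in parallel and aggregate their results before continuing.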


What is a Logic App

-------------------

It's a kind of business logic layer, something like a middle layer. It's a designer-focused tool that Azure provides to do integration (like integration of services).


Logic_App SnapShot

We can increase number of instances

We can create work flows inside logic app


Custom connectors can be written in logic apps

https://docs.microsoft.com/en-us/connectors/custom-connectors/create-logic-apps-connector


IaC (Infrastructure as Code)

----------------

ARM Template


In Azure we can create different virtual machines


Check what Hyper-V is; VM disks will have a .vhd extension


MAnaged / Unmanaged disks

-------------------------------------

Each storage account has a limit on input/output operations (IOPS)

If you're looking for higher IOPS, unmanaged disks are risky


With managed disks, Microsoft spreads the disks across various storage accounts so that we get higher IOPS


What does Azure provide for a highly available environment?

An availability set: if you put 2 machines in it, you get the 99.95% assured SLA


fault domain,update domain


Dockers , Containers

============================

Docker is for quick deployment

Difference between a container and a virtual machine:

Each virtual machine has its own OS

A container is an isolated component for running an application

A container can run on any operating system

Docker helps you create containers



https://docs.docker.com/engine/reference/builder/


Azure Container Registry

------------------------------------------

To maintain container images

It is built on the Docker Registry service


ACR Build

-------------------

Helps to streamline the whole container build process


Kubernetes is the famous orchestrator developed by google

-----------------------------------------------------------

An orchestrator is someone who runs the show

-------------------------------------------------------

Since we have containers, we need an orchestrator


AKS (Azure Kubernetes Service) (cross-vendor maintenance is the advantage)

If we do not want to use AKS, use Azure Container Instances (the fastest and simplest way to run a container in Azure)


18/6/2020

-----------------------------------------------------------

Implement User Authentication and User Authorization

--------------------------------------------------------------

Authentication --- like a CGI ID card to enter the office premises

Authorization -- what we can do once we are in

Microsoft identity platform, which provides the Microsoft Authentication Library


Microsoft Graph: a common API to work with all O365 resources and the whole Microsoft suite (like accessing calendars, etc.)


Azure Active Directory: runs the entire show in terms of authentication and authorization

Local AD? Why can't we use it?


SSO -- single sign-on: what is it?


Do we sync passwords when we sync local AD to Azure AD?

It is our choice whether to sync passwords


ADFS is used when passwords are not synced

-----------------------

Active Directory Federation Services

Authenticates the user against the remote (on-premises) directory

SAML tokens are generated as part of ADFS authentication

Active Directory Authentication Library(ADAL)

------------------------------------------



Microsoft Graph

--------------------------------

A REST API endpoint that helps us work with different services

The Graph Explorer tool gives the endpoint

https://developer.microsoft.com/en-us/graph/graph-explorer


MSAL -- Microsoft authentication library


https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure/blob/master/Instructions/Labs/AZ-204_06_lab_ak.md



All data (including metadata) written to Azure Storage is automatically encrypted using Storage Service Encryption (SSE).


Implement Secure Cloud Solutions

------------------------------------------

How to secure credentials -- Azure Key Vault (a dedicated PaaS service that can manage certificates, keys, and credentials)

HSM -- hardware security modules; Azure Key Vault can be backed by these


Each vault will have an owner assigned

Authentication

Managed Identity(https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)

Service Principal

An entity that has some set of permissions associated with it

It can be password based

It can be certificate based

API Management
-------------------------
A PaaS service that Microsoft provides for the management of APIs
We can use this with existing APIs; no code changes are needed
Working components
Certification question regarding adding policies in API Management

19/6/2020
------------------------------------
Messaging
Azure Event Grid, Event Hubs, Notification Hubs

Event Grid:
A service which listens to events
Webhook: applications use these for events; we have incoming and outgoing webhooks
Whenever a specific item in Cosmos DB is updated, do some action; this can be achieved using Event Grid

Subscriptions can be created with filters, and based on these we can filter events


Azure Event Hubs
-----------------------
Specialized for huge amounts of data
It is a streaming service
It can take millions of events per second
Takes a burst of data from different sources, then plugs into Event Grid and passes it to subscribers
Within an event hub we have partitions

Notification Hubs
---------------------------------
App Developers
PNS -- Platform notification system 


Messaging options
--------------------------------
Specialised for highly reliable messaging
----------------------------------------------
Service Bus and Queue storage:
These are queueing services
1 MB maximum message size in a queue
FIFO
3 communication mechanisms
Queues
Topics
Relays

Message time-to-live is not available in Event Grid but is in queues/topics

Queue Storage
-------------------------------
It comes with the storage account
Messages can be out of sequence
A single message can be delivered multiple times

An Event Grid domain helps you manage a large number of related Event Grid topics https://docs.microsoft.com/en-us/azure/event-grid/event-domains


Instrument solutions to support monitoring and logging
--------------------------------------------------------------------
Monitoring services
Azure Monitor --> collects data from a variety of sources
We can have alerts created
Troubleshooting steps can be done through Azure Monitor
Log Analytics
Metrics
We can create alert rules in Monitor

Which language is used for querying logs here?

Kusto

Application Insights
--------------------------------------
A feature of Azure Monitor (APM: application performance management service)

We can put in the instrumentation key and check logging in the Azure portal
Load testing can be done from Application Insights
It's a standalone service and can be used for non-cloud services also

Implement code that handles transient faults
-----------------------------------------------------------
A temporary issue
Code that we write for the cloud should be good enough to handle any transient issue
Retry


CDN

Content delivery network
Amazon and Netflix have CDN-centered architectures

Purging
---------------------------
Flush the cached content down across the servers
Azure cache for redis
------------------------------------
Distributed cache technology is provided by Redis
Distributed cache? (in-memory data structure store)
Multiple instances of web apps

Web front-end servers, each of which would otherwise have its own cache
refer distributed cache.png
VM1
Vm2
VM3


My Learnings from browsing
-------------------------------------------------------
 Only storage accounts of kind StorageV2 (general purpose v2) and BlobStorage support event integration. Storage (general purpose v1) does not support integration with Event Grid.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-overview


Learnings
-------------------------
Linux is the only OS that supports container instances being available from a virtual network
Millions of agreements per hour: use a data ingestion service like Azure Event Hubs. You can use Event Hubs Capture to store the agreements into Azure Blob storage for long-term storage
We can use a lease so that no other process can modify the blob while the blob is being accessed
On-premises data gateway to allow a logic app to access on-premises data resources
For MongoDB, if we need migration with minimum downtime, we need to implement online migration using Azure Database Migration Service
Accelerated networking to improve the performance of an Azure VM
Authorisation level: Function --> here an authorisation key is sent with the function invocation; this adds an extra level of security for the Azure Function

Deploying a web app to a VM is Infrastructure as a Service (maintenance is required)
Deploying a web app to an App Service plan is Platform as a Service (internally the web app deploys to a VM) ---> no maintenance required; underlying infrastructure management is not needed
Configure Backup to create restorable archive copies of your app's content, configuration, and database. But here the App Service plan should be Standard or higher

An App Service plan can be scaled up at any point in time
A Standard or Premium App Service plan is required for the deployment slot option
An Azure WebJob can be used to run a background task; it is a part of a web app
An Azure Function must have a storage account
.NET Core 3.1 is supported on both Windows and Linux platforms
ASP.NET v4 is supported on Windows platforms only
Blob storage can also store VM disks
If we use the AzCopy tool to upload a blob to a container, we have to give the user permissions using Access Control on the storage account
We can add metadata to a blob in the container and use it to fetch the container data
Using a shared access signature we can access a blob that has private access. A link is generated to access the blob, with a specific time interval within which we can access it; we can allow specific IPs only and specify an expiry date/time.
A shared access signature can be at blob level or at account level. Using a shared access signature we can give read, write, i.e. specific permissions.
A snapshot of a blob creates a copy of the blob item
Lease operation to acquire a lock on a blob so that no other client can access it
Hot spot: when data is inserted into only one partition
Function App host.json --> we can specify the timeout here
   --> Consumption plan: valid range is 1 second to 10 minutes
   ----> Premium plan: 1 second to 60 minutes
   -----> Dedicated App Service plan: there is no overall limit
We have retry support by default for Azure Blob Storage, Azure Queue Storage, and Azure Service Bus (queue/topic)

For Azure Queue storage and Service Bus, if retries fail, messages are sent to a poison queue
In the case of the Consumption plan for a Function App, the Always On setting is not available; it is available with an App Service plan
A YAML file or ARM template can be used to deploy a group of containers
Once the partition key is set on a container, it cannot be changed
Change feed --> listens to all changes in the container and gives the list of changes
Azure Cosmos DB
Stored procedures
Triggers (when adding data to a container we can append some data): pre-trigger
Function trigger (change feed)
Synthetic partition key
---------------------------------------
No single property is ideal as a partition key
As a workaround we can do the following --> a combination of some fields / one field + a random suffix
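Both workarounds can be sketched in a few lines. The field names (city, date) and the bucket count are made-up examples, not anything prescribed by Cosmos DB.

```python
# Synthetic partition keys: (1) combine fields into one key,
# (2) append a random suffix so one hot value spreads over N partitions.
import random

def combined_key(doc: dict) -> str:
    """Combine two document fields into one synthetic partition key."""
    return f"{doc['city']}-{doc['date']}"

def suffixed_key(field_value: str, buckets: int = 10) -> str:
    """Append a random suffix to spread a hot value over several partitions."""
    return f"{field_value}-{random.randint(0, buckets - 1)}"

doc = {"city": "Bangalore", "date": "2020-06-18"}
print(combined_key(doc))          # Bangalore-2020-06-18
print(suffixed_key("Bangalore"))  # e.g. Bangalore-7
```

The trade-off of the random-suffix approach: point reads for a single logical key now have to fan out across all the suffixed partitions.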
---------------------------------------------------------------
Time-to-live can be set on the items in the container or at the container level

To order items (e.g. ascending), the fields should be part of a composite index defined in the indexing policy

Access control can be added at the resource group level, at individual resources, or at the subscription level
Owner role
Manages everything including access to resources
Contributor role
Manages everything except access to resources
Reader role
Response type (mentioned by the client) as "code" means it is the authorization code workflow with the external identity provider the user is using
What happens in the authorization code workflow?
---------------------------------
1) Client ----> external identity provider (Google) (confidential clients)
2) Google ---> responds with an authorization code to the client
3) Client --> sends this authorization code back to Google so that it gets an access token
4) Google gives an access token; the client now uses this to access the resource server from which it needs data

The above steps of exchanging the authorization code for an access token are done over a back-end channel

JavaScript applications can use the implicit workflow, where there is no authorization code; it directly asks for access tokens

The external identity provider has two URLs: one to give the authorization code and the other to give access tokens


OAuth 2.0 implicit workflow (here the response type is "token")
----------------------------------------
Single-page applications (called public clients)
Not hosted on a server

Client credentials workflow
---------------------------------------
Used by clients to obtain access tokens outside the context of a user, i.e. we do not have a user here

OpenID Connect
------------------------
Used only for authentication, on top of OAuth. The response type is "code"; when we exchange the authorization code we get an ID token

The Microsoft Authentication Library is used by developers to get tokens

Multi-factor authentication in Azure AD
----------------------------------------
Conditional Access policy -- an Azure AD Premium P2 license is needed
Group claims
-------------------------
We need to modify the application manifest file

Azure Key Vault
------------------------------
To store secrets

Service principal
-------------------------------
Used by an external application to access the resources (secrets, certificates, keys) in Azure Key Vault

Create this service principal in Azure AD using PowerShell commands; from there you get the application id, tenant, and password
Set the environment variables
Execute one more command to set a policy so that the service principal gets permission over secrets

Role based access control given at azure key vault level

Data in a storage account can be encrypted using a managed key; this is done by Azure itself
A managed service identity can be assigned to a storage account so that it can authorize itself with the Azure Key Vault service, get the key, and use it for encryption

Auto Scaling
----------------------------
Conditions can be written; based on them, auto scaling can increase the number of VM instances
Scale up is used to increase the tier of the App Service plan
Scale out is used to enable auto scaling, i.e. increase the number of VM instances
Application Insights
---------------------------
Funnels (user going from the index page to the products page) --> which pages the user goes through
User Flows
Impact
Retention (users returning to your application)
The Availability feature is also part of Application Insights
We can do a ping test as part of this, from different locations

Application telemetry: to add Application Insights
Custom telemetry is also possible

Azure Cache for Redis
--------------------------------------
Fetching data is much faster as all of the data is stored in RAM
Massive amounts of transactional data should be stored in a SQL / NoSQL data store, not in Azure Redis
Frequently accessed data should be stored in Azure Redis
Different types of data can be stored in Azure Cache for Redis
Data that changes every minute should not be stored in Redis
The application should ensure that data is updated in the Redis cache
In Redis, data is stored in the form of key/value pairs
Tiers are available here also
Premium tier --> data stored in clusters; data can be persisted to disk
SetString to set data in the cache
StringSet method to set a key/value pair
Data Invalidation in cache
--------------------------------------
Sometimes data in the SQL DB is updated, so the cache should also be updated; in this case, update the cache or invalidate it
Invalidate -- remove the key altogether; this is a manual effort
We can add an expiry time/policy to the items in the cache (the DistributedCacheEntryOptions class has expiry options)
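The expiry idea can be illustrated with a toy in-process cache. This only mimics what an expiry policy such as DistributedCacheEntryOptions achieves; it is not how Redis implements TTL internally.

```python
# Toy TTL cache: each entry stores an absolute expiry time, and a read
# past that time behaves like a cache miss (lazy invalidation).
import time

class TtlCache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._store[key]   # expired: invalidate on read
            return None
        return value

cache = TtlCache()
cache.set("price", 42, ttl_seconds=0.05)
print(cache.get("price"))  # 42
time.sleep(0.1)
print(cache.get("price"))  # None (expired)
```

Explicit invalidation is the `del` path taken manually; expiry automates it so stale entries drop out without application effort.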

The StackExchange.Redis package is for console-based applications; here we have IDatabase

Content delivery network
------------------------------------
To reduce latency
Here we have points of presence (POPs) and edge servers
Edge servers are located globally
Users make a call to the Azure CDN service; it looks at the user's location and transfers the request to the edge server nearest to the user
Edge servers can also cache data. The edge server checks with the POP: if the POP has the response it is sent to the user, else the request is redirected to the origin and the response is cached by the POP for future use
Using this, users across the world have the same experience with the web app
This also has different pricing tiers

Caching in Azure CDN
------------------------------
Bypass --- do not cache, even if the application sets cache headers
Override
Set if missing
Caching in case of query strings
-------------------------------------------
Bypass cache
Cache every unique URL
Ignore query strings

Azure Front Door Service
-------------------------------------
Client requests are sent to the fastest-performing backend
The backend pool can be a web app
A weight and priority can be specified for each backend
We have health probes here which keep checking our backend service
For compression, the file size should be between 1 KB and 8 MB; it supports compressions like Brotli and gzip

Transient faults
------------------------------
Azure is a shared environment
We can do retries in these cases
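A minimal retry-with-backoff sketch of the idea above. The real Azure SDKs ship their own built-in retry policies; this just shows the pattern, with a simulated flaky operation.

```python
# Retry a callable with exponential backoff, re-raising on final failure.
import time

def retry(operation, attempts=3, base_delay=0.01):
    """Call operation(), doubling the delay after each transient failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise          # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated transient fault: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(retry(flaky))  # ok (after 2 transient failures)
```

Production retry policies also cap the total delay, add jitter, and only retry errors classified as transient.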

Azure service bus
---------------------------
Messaging service

Queue service
FIFO message delivery
Topic service
Multiple consumers(subscribers)
When we create an instance of Service Bus we are creating a namespace
In the Basic tier we do not have topics; from the Standard tier onward we do (the tier can be upgraded at any point in time)
We create the namespace, then either a queue or a topic can be created
Using Service Bus Explorer we can send a message to a queue
We have two operations on a queue
Peek (active message count remains the same)
Receive (active message count decreases)
Queues/topics have their own shared access policies where we give permission for sending, receiving, and listening for messages
A message sent to the queue should be in bytes; we use the SendAsync method
Receiving a message is a two-step operation
Receive and process the message
Delete the message
CompleteAsync will do the above two steps
We have a message handler which gets triggered when we receive a message from the queue
The difference between a Storage queue and this queue service: with a Storage queue we have to poll for the messages we have received and then process them, but here the message handler is triggered when a message is received

Properties
Time-to-live (the message is deleted from the main queue once the time expires)
The dead-letter queue contains messages that cannot be processed / were not delivered to any receiver
Lock duration --> default is 30 seconds (the message becomes invisible for 30 seconds)
Peek: the ability to see a message
Receive and delete
Message object
Broker properties -- content type of the message we are sending, message id (a unique number given from the application's perspective; this can be used with the duplicate-detection feature), sequence number (64-bit int assigned to the message), correlation id (request/reply scenario), session id, reply-to session id


Duplicate detection
-------------------------
Can be enabled in the queue section in Azure; the message id must be unique

Correlation id advantages
-----------------------------------
It correlates multiple messages together

Event Grid
----------------------------
Event sources (publishers): Azure Blob storage, custom events, resource groups
Event handlers -- Event Hubs, Functions, Logic Apps, etc.

Event Grid schema
---------------------------------
Events emitted from a particular resource follow a particular schema
Events are sent from the source to Event Grid as an array, and the size of the array can be up to 1 MB
Azure Service Bus can also be a receiver of the events emitted from Event Grid
Event Grid also has filters
A particular event has a subject property; we can add a filter on this, like "subject ends with" and "subject begins with"
We also have advanced filters, like: if data length is less than 12 bytes, then receive the events
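The subject filters can be mimicked locally like this; the event subjects below are made up in the blob-event style, purely for illustration.

```python
# Local sketch of Event Grid subject filtering:
# "subject begins with" and "subject ends with" on each event.

def matches(event: dict, begins_with: str = "", ends_with: str = "") -> bool:
    subject = event.get("subject", "")
    return subject.startswith(begins_with) and subject.endswith(ends_with)

events = [
    {"subject": "/blobServices/default/containers/images/blobs/cat.jpg"},
    {"subject": "/blobServices/default/containers/logs/blobs/app.log"},
]
jpgs = [e for e in events if matches(e, ends_with=".jpg")]
print(len(jpgs))  # 1
```

In a real subscription these prefix/suffix conditions are set on the event subscription itself, so non-matching events are never delivered to the handler.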

Features for subscriptions for events
------------------
Max event delivery attempts: 30 --> try 30 times to deliver the event to the endpoint
Dead-lettering
Event subscription expiration time

Custom web hook as event handler
-------------------------------------------
A handshake with Event Grid is needed first

In the handshake, Event Grid sends a validation URL and validation code to our custom webhook

The custom webhook must respond with the same code
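The handshake response can be sketched as a pure function; the SubscriptionValidationEvent type and the validationResponse field come from the Event Grid webhook validation contract, while the sample code value is fabricated.

```python
# Event Grid webhook validation: when the batch contains a
# SubscriptionValidationEvent, echo its validationCode back
# as validationResponse; otherwise process events normally.

def handle_events(events):
    for event in events:
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            return {"validationResponse": code}
    return None  # normal events: hand off to regular processing

sample = [{
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "data": {"validationCode": "512d38b6-c7b8"},
}]
print(handle_events(sample))  # {'validationResponse': '512d38b6-c7b8'}
```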


The ngrok tool exposes web applications running locally

Azure Functions as a webhook to receive events from Event Grid
--------------------------------------------------------------------------

Custom topics
-------------------------
Events which we send from a custom resource to a custom topic should follow the Event Grid schema

Azure Event Hubs
-------------------------
A big-data streaming platform and event ingestion service. It can receive millions of events per second
When receiving events from multiple devices, we can use this serverless service (no need to manage infrastructure)
Not a persistent data store
It can route the data to Azure Blob storage
First create a namespace
Then we can create multiple event hubs
We have multiple partitions here so that throughput is high
Once a message is read it is not deleted; a message retention period should be specified
Minimum is 2 partitions, max is 32
A message can also be sent to one particular partition
Readers can also read from a particular partition
Each message will have a unique sequence number

Consumer groups
------------------------
We have multiple consumer groups for event hubs
The recommendation is one active receiver on a partition per consumer group

Offset
------------------------
Used so that the same messages are not read again and again
The event processor (Azure.Messaging.EventHubs.Processor) keeps track of the messages that have been read from the event hub
The update-checkpoint method helps in tracking this

Azure search service(AZ-203)
-------------------------------------
Here we use an index
An index is like a table, and documents are like rows in the table
Only the "retrievable" attribute can be changed at any point in time. Once the index is in place, no attribute other than retrievable can be changed

Azure API Management service
--------------------------
Interface between users and the backend API
We have API policies -- XML operations

Check HTTP header
Limit call rate
Authentication policy
We can have conditions within a policy


If the application can afford out-of-order reads after writes, consistency can be eventual



OpenAPI Specification
---------------------------------
Swashbuckle generates the specification
Swagger API
