Aptitude - Number System

 Concepts

  • Multiples and Factors
  • Total no. of factors of a composite number
  • LCM & HCF
  • Co-prime Numbers
  • Unit Digit
  • Highest power of P in n!
  • Concepts of Remainder
  • Divisibility

Co-Prime Numbers: 

Co-prime numbers are two or more numbers that have no common factor other than 1. A set of co-prime numbers must contain at least two numbers. For example, {4, 7} and {5, 7, 9} have 1 as their only common factor. Co-prime numbers do not always have to be prime numbers.
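The definition above can be checked directly: a set of numbers is co-prime when the GCD of all of them is 1. A minimal sketch using Python's `math.gcd` (the function name `are_coprime` is an illustrative choice, not standard):

```python
from math import gcd
from functools import reduce

def are_coprime(*nums):
    """A set of numbers is co-prime if the GCD of the whole set is 1."""
    return reduce(gcd, nums) == 1

# The examples from the text:
# are_coprime(4, 7)    -> True
# are_coprime(5, 7, 9) -> True
# are_coprime(8, 20)   -> False (common factor 4)
```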

Highest Common Factor (H.C.F) or Greatest Common Divisor (G.C.D):
 The H.C.F of two or more numbers is the greatest number that divides each one of them exactly.
 Eg: H.C.F of 8 & 20 is 4

Lowest Common Multiple (L.C.M):
 The least number which is exactly divisible by each of the given numbers is called their lowest common multiple (L.C.M).
 Eg: L.C.M of 8 & 20 is 40

Relation between L.C.M and H.C.F. of two Numbers:
 LCM × HCF = Product of the two numbers
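The relation above gives a convenient way to compute the LCM from the HCF. A short sketch using Python's `math.gcd`:

```python
from math import gcd

def lcm(a, b):
    # From LCM x HCF = a * b, we get LCM = (a * b) / HCF
    return a * b // gcd(a, b)

# Using the numbers from the examples above:
# gcd(8, 20) -> 4, lcm(8, 20) -> 40, and 40 * 4 == 8 * 20
```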

Compiler Design Part-3

 Parsers

Generation of Parse Tree Using:

  • Top down Approach  - Which production to use
  • Bottom up Approach - When to reduce
Classification of Parsers:


Compiler Design Part-2

 Topics Covered

Grammar:

A phrase structure grammar is (N,T,P,S) where,

  • N - Non terminal
  • T - Terminal
  • N ∩ T = φ
  • S - start symbol
  • P- Production rules
Type 0: Unrestricted Grammar
Type 1: Length Increasing Grammar - Context Sensitive Grammar
Type 2: CFG
Type 3: Regular Grammar




Derivation of CFG:

  • Left Derivation
  • Right Derivation
  • Parse tree Derivation

Left Recursion-

  • A production of a grammar is said to have left recursion if the leftmost symbol of its RHS is the same as the variable on its LHS.
  • A grammar containing a production having left recursion is called as Left Recursive Grammar.

Example- 

S → Sa / 

(Left Recursive Grammar)

 

  • Left recursion is considered to be a problematic situation for Top down parsers.
  • Therefore, left recursion has to be eliminated from the grammar.
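The standard elimination rewrites A → Aα | β as A → βA', A' → αA' | ε. A minimal sketch for immediate left recursion, assuming single-character grammar symbols and string-encoded alternatives (the function name and representation are illustrative, not a standard API):

```python
def eliminate_left_recursion(nonterminal, alternatives):
    """Rewrite A -> A a1 | ... | b1 | ... as A -> b1 A' | ..., A' -> a1 A' | ... | ε.
    Assumes each symbol is a single character."""
    # Split alternatives into left-recursive (start with A) and the rest
    recursive = [alt[len(nonterminal):] for alt in alternatives
                 if alt.startswith(nonterminal)]
    non_recursive = [alt for alt in alternatives
                     if not alt.startswith(nonterminal)]
    new_nt = nonterminal + "'"
    a_prods = [beta + new_nt for beta in non_recursive]
    a_prime_prods = [alpha + new_nt for alpha in recursive] + ["ε"]
    return {nonterminal: a_prods, new_nt: a_prime_prods}

# The example grammar S -> Sa | b becomes S -> bS', S' -> aS' | ε
```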

Ambiguity in CFG:

Possible questions to ask:
  • Whether the grammar is suitable for predictive parsing or not
  • Whether the grammar is ambiguous or not

Problems due to ambiguity:
  • Associativity property violation
  • Precedence property violation

Determine Associativity and precedence of the operators

NOTE:
  • For unambiguous grammars, the leftmost derivation and the rightmost derivation represent the same parse tree.
  • For ambiguous grammars, the leftmost derivation and the rightmost derivation represent different parse trees.

Recursion in CFG

Left recursion
Right Recursion
Elimination of Left Recursion
Problems with Left Recursion
Conversion of left to right Recursion

Non-Deterministic CFG
Elimination of Non-determinism


Simple Linear Regression

Regression analysis is a statistical process for estimating the relationships between variables. It can be used to build a model to predict the value of the target variable from the predictor variables.

Mathematically, a regression model is represented as y= f(X), where y is the target or dependent variable and X is the set of predictors or independent variables (x1, x2, …, xn).

If a linear regression model involves only one predictor variable, it is called a Simple Linear Regression model.

f(X) = β0 + β1*x1 + ε

The β values are known as weights (β0 is also called the intercept, and the subsequent β1, β2, etc. are called coefficients). The error ε is assumed to be normally distributed with a constant variance.

Assumptions of Linear Regression

Assumption 1: The target (dependent) variable and the predictor (independent) variables should be continuous numerical values.

Assumption 2: There should be linear relationship between the predictor variable and the target variable. A scatterplot with the predictor and the target variables along the x-axis and the y-axis, can be used as a simple check to validate this assumption.

Assumption 3: There should not be any significant outliers in the data.

Assumption 4: The data is iid (Independent and identically distributed). In other words, one observation should not depend on another.

Assumption 5: The residuals (difference between the actual value and predicted value) of a regression should not exhibit any pattern. That is, they should be homoscedastic (exhibit equal variance across all instances). This assumption can be validated by plotting a scatter plot of the residuals. If the residuals exhibit a pattern, then they are not homoscedastic (in other words, they are heteroscedastic). If the residuals are randomly distributed, then it is homoscedastic in nature.

Assumption 6: The residuals of the regression line should be approximately normally distributed. The assumption can be checked by plotting a Normal Q-Q plot on the residuals.

Implementation:
SLR Implementation
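A minimal sketch of the closed-form least-squares fit for simple linear regression: β1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)², β0 = ȳ − β1·x̄. The function name `fit_slr` is an illustrative choice; production work would typically use a library such as scikit-learn or statsmodels instead:

```python
def fit_slr(x, y):
    """Fit y ≈ b0 + b1*x by ordinary least squares (closed form)."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Slope: covariance of x and y over variance of x
    b1 = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
          / sum((xi - x_bar) ** 2 for xi in x))
    # Intercept: the line passes through (x_bar, y_bar)
    b0 = y_bar - b1 * x_bar
    return b0, b1

# Perfectly linear data y = 2x + 1:
# fit_slr([1, 2, 3, 4], [3, 5, 7, 9]) -> (1.0, 2.0)
```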



Compiler Design Part-1

Compiler: It is a language translator which translates a program from one language to another.

Contents:

  •  Lexical analysis, parsing, syntax-directed translation
  • Runtime environments
  • Intermediate code generation
  • Local optimization
  • Data flow analysis: constant propagation, liveness analysis, common subexpression elimination.  


Language Processing System:
    

       Structure of a compiler/Phases of a compiler



    Usage of Symbol Table

Phase --> Usage
Lexical Analysis --> Creates new entries for new identifiers
Syntax Analysis --> Adds information regarding attributes like type, scope, dimension, line of reference & line of use
Semantic Analysis --> Uses the available information to check semantics; the table is updated
Intermediate Code Generation --> Adds temporary variables
Code Optimization --> Information in the symbol table is used in machine-dependent optimization by considering addresses & aliased-variable information
Target Code Generation --> Generates the code by using the address information of identifiers
       

      Symbol table entries

Each entry in the symbol table is associated with attributes that support the compiler in different phases.

Attributes are: Name, Size, Dimension, Type, Line of declaration, line of usage, Address.

          Lexical analysis:

  • We need to define the rules to construct the expressions based on the source code.
  • The lexer takes the regular expression as input and then converts it into the equivalent FA.
  • Every string scanned from the source code is given as input to the Finite Automata to check the validity of the string.
  • If the string is accepted by the FA, then the string becomes a token.

     Difference between lexeme and token:
A lexeme is the actual sequence of characters in the source program that matches a token's pattern; a token is a pair of token name and attribute value.

Design of Lexical Analyzer
We can either write a manual program for the lexical analyzer or use a tool (e.g., Lex).
Manual design: Token -> Pattern -> Reg Exp -> FA -> Transition table -> Transition function
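The manual-design pipeline above can be sketched compactly by letting Python's `re` module play the role of the hand-built FA: each token gets a name and a pattern, and the combined pattern drives tokenization. The token categories here (NUMBER, ID, OP) are illustrative assumptions, not a fixed standard:

```python
import re

# Illustrative token specification: (token name, pattern)
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),   # whitespace is eliminated, not emitted as a token
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (token name, lexeme) pairs; whitespace is skipped."""
    for m in MASTER_RE.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

# list(tokenize("x = 42 + y"))
# -> [("ID", "x"), ("OP", "="), ("NUMBER", "42"), ("OP", "+"), ("ID", "y")]
```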
     
Secondary Function of a Lexical analyzer

  • Elimination of white spaces
  • Removal of comment lines
  • Correlating the error message by tracing the line number

     The LA uses a DFA for tokenization. It is the only phase that reads the program character by character.

Lexical Error: will be updated
Lexical Error Recovery

    Left Factoring:
     To get rid of non-determinism and common prefixes.
     Eliminating non-determinism doesn't affect ambiguity.
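Left factoring rewrites A → αβ1 | αβ2 as A → αA', A' → β1 | β2, pulling out the common prefix α. A minimal sketch, assuming single-character symbols and a fixed helper nonterminal name "A'" for illustration:

```python
from os.path import commonprefix  # works on any strings, not just paths

def left_factor(alternatives):
    """Factor A -> αβ1 | αβ2 | ... into A -> αA', A' -> β1 | β2 | ...
    Returns (factored RHS for A, alternatives for A'), or None if
    the alternatives share no common prefix. ε marks an empty suffix."""
    alpha = commonprefix(alternatives)
    if not alpha:
        return None  # nothing to factor
    suffixes = [alt[len(alpha):] or "ε" for alt in alternatives]
    return alpha + "A'", suffixes

# The grammar A -> aAb | aBc becomes A -> aA', A' -> Ab | Bc
# left_factor(["aAb", "aBc"]) -> ("aA'", ["Ab", "Bc"])
```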

Featured post

Question 1: Reverse Words in a String III

  def reverseWords(s: str) -> str:
      words = s.split()
      return ' '.join(word[::-1] for word in words)