Graphical Solution of Linear Programming Problems
Linear programming is the simplest way of optimizing a problem. Through this method, we can formulate a real-world problem as a mathematical model. There are various methods for solving linear programming problems, and one of the easiest and most important is the graphical method, in which we use graphs to solve the LPP.

We can solve a wide variety of problems using linear programming in different sectors, but it is generally used for problems in which we have to maximize profit, minimize cost, or minimize the use of resources. In this article, we will learn about graphical solutions of linear programming problems, their types, and worked examples.
Table of Contents

- Graphical Solution of a Linear Programming Problem
- Corner Point Method
- Iso-Cost Method
- Solved Examples
Linear programming is a mathematical technique employed to determine the most favorable solution for a problem characterized by linear relationships. It is a valuable tool in fields such as operations research, economics, and engineering, where efficient resource allocation and optimization are critical.
Now, let's learn about the types of linear programming problems.
Types of Linear Programming Problems
There are mainly three types of problems based on linear programming:

Manufacturing Problem: In this type of problem, constraints such as manpower, output units per hour, and machine hours are given in the form of linear inequalities, and we have to find an optimal solution that yields maximum profit or minimum cost.

Diet Problem: These problems are generally easy to understand and have fewer variables. The main objective in this kind of problem is to minimize the cost of the diet while keeping a minimum amount of every constituent in the diet.

Transportation Problem: In these problems, we have to find the cheapest way of transporting goods by choosing the shortest or otherwise optimized route.
Some commonly used terms in linear programming problems are,
Objective Function: The linear function of the form Z = ax + by, where a and b are constants, that is to be minimized or maximized is called the objective function. For example, in Z = 10x + 7y, the variables x and y are called the decision variables.
Constraints: The restrictions on the decision variables, expressed as linear inequalities, are called constraints.

- Non-Negative Constraints: x ≥ 0, y ≥ 0, etc.
- General Constraints: x + y ≥ 40, 2x + 9y ≥ 40, etc.
Optimization Problem: A problem that seeks the maximization or minimization of the objective function subject to linear inequality constraints is called an optimization problem.

Feasible Region: The common region determined by all the given constraints, including the non-negativity constraints (x ≥ 0, y ≥ 0), is called the feasible region (or solution region) of the problem. The region outside the feasible region is known as the infeasible region.

Feasible Solutions: Points within or on the boundary of the feasible region represent feasible solutions of the problem. Any point outside the feasible region is called an infeasible solution.

Optimal (Most Feasible) Solution: Any point in the feasible region that gives the optimal value (maximum or minimum) of the objective function is called an optimal solution.
- Both the maximum and the minimum values of the objective function, when they exist, occur at corner points (vertices) of the feasible region.
- If the feasible region is unbounded, a maximum or minimum value may not exist.
- If there is no point common to all the linear inequalities, then there is no feasible solution.
We can solve linear programming problems graphically using two different methods:

- Corner Point Method
- Iso-Cost Method

Corner Point Method

To solve a problem using the corner point method, follow these steps:
Step 1: Formulate the problem mathematically, if a formulation is not already given.
Step 2: Now plot the graph using the given constraints and find the feasible region.
Step 3: Find the coordinates of the vertices of the feasible region obtained in step 2.

Step 4: Now evaluate the objective function at each corner point of the feasible region. Let N and n denote the largest and smallest of these values.

Step 5: If the feasible region is bounded, then N and n are the maximum and minimum values of the objective function. If the feasible region is unbounded, then:

- N is the maximum value of the objective function if the open half-plane determined by ax + by > N has no point in common with the feasible region; otherwise, the objective function has no maximum.
- n is the minimum value of the objective function if the open half-plane determined by ax + by < n has no point in common with the feasible region; otherwise, the objective function has no minimum.
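The steps above can be sketched in code: intersect the constraint boundary lines pairwise, keep only the intersection points that satisfy every constraint, and evaluate Z at each surviving vertex. The constraint tuple format and function names below are assumptions made for this sketch, not standard notation:

```python
# Corner-point method sketch: intersect boundary lines pairwise, keep the
# feasible intersection points, and evaluate the objective at each vertex.
# Constraint format (an assumption for this sketch): (a, b, c, sense),
# meaning a*x + b*y <= c or a*x + b*y >= c.

def intersect(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundary lines: no unique intersection
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(pt, constraints):
    x, y = pt
    return all(
        a * x + b * y <= c + 1e-9 if sense == "<=" else a * x + b * y >= c - 1e-9
        for a, b, c, sense in constraints
    )

def corner_point_max(objective, constraints):
    lines = [(a, b, c) for a, b, c, _ in constraints]
    vertices = {
        p
        for i, li in enumerate(lines)
        for lj in lines[i + 1:]
        if (p := intersect(li, lj)) is not None and feasible(p, constraints)
    }
    # Largest objective value, together with the vertex where it occurs
    return max((objective(x, y), (x, y)) for x, y in vertices)

# Example 1 of this article: maximize Z = 8x + y
# subject to x + y <= 40, 2x + y <= 60, x >= 0, y >= 0.
best = corner_point_max(
    lambda x, y: 8 * x + y,
    [(1, 1, 40, "<="), (2, 1, 60, "<="), (1, 0, 0, ">="), (0, 1, 0, ">=")],
)
# best == (240.0, (30.0, 0.0))
```

This only works for two variables; with more variables, the same corner-point idea is what the simplex method exploits systematically.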
Examples of LPP Using the Corner Point Method
Example 1: Solve the given linear programming problem graphically:

Maximize: Z = 8x + y

Step 1: The constraints are:

- x + y ≤ 40
- 2x + y ≤ 60
- x ≥ 0, y ≥ 0
Step 2: Draw the graph using these constraints.
Here both constraints are of the less-than-or-equal type, so they are satisfied in the region towards the origin. You can find the vertices of the feasible region from the graph, or you can calculate them using the given constraints:
x + y = 40 …(i)
2x + y = 60 …(ii)
Now multiply eq(i) by 2 and subtract eq(ii) from it:

2x + 2y − (2x + y) = 80 − 60, so y = 20

Now put this value of y into eq(i):

x = 20

So the third vertex of the feasible region is (20, 20).
Step 3: To find the maximum value of Z = 8x + y, evaluate Z at each vertex of the feasible region: Z(0, 0) = 0, Z(30, 0) = 240, Z(20, 20) = 180, Z(0, 40) = 40.

So the maximum value is Z = 240 at the point x = 30, y = 0.
Example 2: One kind of cake requires 200 g of flour and 25 g of fat, and another kind requires 100 g of flour and 50 g of fat. Find the maximum number of cakes that can be made from 5 kg of flour and 1 kg of fat, assuming there is no shortage of the other ingredients used in making the cakes.
Step 1: Tabulate the flour and fat requirements of each kind of cake for easy understanding (not necessary).
Step 2: Let x be the number of cakes of the first kind and y the number of the second kind. Create the linear inequalities:

- 200x + 100y ≤ 5000, or 2x + y ≤ 50
- 25x + 50y ≤ 1000, or x + 2y ≤ 40
- Also, x ≥ 0 and y ≥ 0

Step 3: Draw the graph of these inequalities (remember to take only the positive x- and y-axes).
Step 4: The number of cakes is Z = x + y. Evaluate Z at each vertex of the feasible region to find the maximum number of cakes that can be baked: Z(25, 0) = 25, Z(20, 10) = 30, Z(0, 20) = 20.

Clearly, Z is maximum at the point (20, 10). So the maximum number of cakes that can be baked is Z = 20 + 10 = 30.
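Because the cake counts are whole numbers and the search space is tiny, the graphical answer can be brute-force checked; this is only a sanity check on the result, not part of the graphical method itself:

```python
# Brute-force check of the cake problem: maximize Z = x + y subject to
# 2x + y <= 50 (flour) and x + 2y <= 40 (fat), with whole cakes only.
best = max(
    (x + y, x, y)
    for x in range(51)
    for y in range(51)
    if 2 * x + y <= 50 and x + 2 * y <= 40
)
# best == (30, 20, 10): 30 cakes in total, 20 of the first kind and 10 of the second
```

Here the LP optimum happens to be integral, so the integer search agrees with the graphical solution; in general, integer restrictions can move the optimum away from the LP vertex.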
Iso-Cost Method

The iso-cost (or iso-profit) method draws a line along which every combination of the decision variables produces the same cost or profit, and then slides lines parallel to it across the feasible region.

To solve a problem using the iso-cost method, follow these steps:

Step 1: Formulate the problem mathematically, if a formulation is not already given.

Step 2: Plot the graph using the given constraints and find the feasible region.

Step 3: Find the coordinates of the vertices of the feasible region obtained in step 2.
Step 4: Choose a convenient value of Z (the objective function) and draw the corresponding line.

Step 5: If the objective function is of maximization type, draw the line parallel to the objective-function line that is farthest from the origin while still having at least one point in common with the feasible region. If the objective function is of minimization type, draw the parallel line that is nearest to the origin and has at least one point in common with the feasible region.
Step 6: Find the coordinates of the common point obtained in step 5. This point gives the optimal solution and the corresponding value of the objective function.
Solved Examples of Graphical Solution of LPP

Example 1: Solve the given linear programming problem graphically:

Maximize: Z = 50x + 15y

- 5x + y ≤ 100
- x + y ≤ 50
- x ≥ 0, y ≥ 0
Step 1: Find points on each boundary line.

We can write the boundary lines as:

5x + y = 100 ….(i)
x + y = 50 ….(ii)

Take eq(i):
When x = 0, y = 100
When y = 0, x = 20
So, points are (0, 100) and (20, 0)
Similarly, in eq(ii)
When x = 0, y = 50
When y = 0, x = 50
So, points are (0, 50) and (50, 0)
Step 2: Now plot these points in the graph and find the feasible region.
Step 3: Now we find a convenient value of Z (the objective function).

To get integer intercepts, take a multiple of the LCM of the coefficients 50 and 15, i.e., of 150; here we take Z = 300. Hence,

50x + 15y = 300

Put x = 0, then y = 20
Put y = 0, then x = 6

Draw the line of this objective function through (0, 20) and (6, 0) on the graph:
Step 4: Since the objective function is of maximization type, we draw the line parallel to the objective-function line that is farthest from the origin and still has a point in common with the feasible region.
Step 5: This line meets the feasible region only at the point (12.5, 37.5), the intersection of 5x + y = 100 and x + y = 50. So the optimal value of the objective function is:

Z = 50x + 15y
Z = 50(12.5) + 15(37.5)
Z = 625 + 562.5

Thus, the maximum value of Z under the given constraints is 1187.5.
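In code, sliding the iso-profit line 50x + 15y = c outward corresponds to finding the largest c at which the line still touches the feasible region, and over a region described by its vertices that is simply the largest vertex value. The vertex list below is computed from the constraint lines of this example (5x + y = 100, x + y = 50, and the axes):

```python
# Vertices of the feasible region of this example.
vertices = [(0, 0), (20, 0), (12.5, 37.5), (0, 50)]
# c-value of the iso-profit line 50x + 15y = c passing through each vertex
c_values = [50 * x + 15 * y for x, y in vertices]
c_best = max(c_values)  # farthest iso-profit line still touching the region
best_vertex = vertices[c_values.index(c_best)]
# c_best == 1187.5 at best_vertex == (12.5, 37.5)
```

Every parallel line with a larger c misses the region entirely, which is exactly the geometric stopping criterion of step 5.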
Example 2: Solve the given linear programming problems graphically:
Minimize: Z = 20x + 10y
- x + 2y ≤ 40
- 3x + y ≥ 30
- 4x + 3y ≥ 60
l1: x + 2y = 40 ….(i)
l2: 3x + y = 30 ….(ii)
l3: 4x + 3y = 60 ….(iii)
Take eq(i):
When x = 0, y = 20
When y = 0, x = 40
So, points are (0, 20) and (40, 0)
Similarly, in eq(ii)

When x = 0, y = 30
When y = 0, x = 10
So, points are (0, 30) and (10, 0)
Similarly, in eq(iii)

When x = 0, y = 20
When y = 0, x = 15

So, points are (0, 20) and (15, 0)
Step 3: Now choose a convenient value of Z. Let us take Z = 0, which gives the iso-cost line

20x + 10y = 0
Step 4: Since the objective function is of minimization type, we draw the line parallel to the objective-function line that is nearest to the origin and has at least one point in common with the feasible region.

This parallel line touches the feasible region at point A. As you can see from the graph, lines l2 and l3 intersect at point A, so we find its coordinates by solving these equations:
l2: 3x + y = 30 ….(iv)
l3: 4x + 3y = 60 ….(v)

Now multiply eq(iv) by 4 and eq(v) by 3:

12x + 4y = 120
12x + 9y = 180

Subtracting the first from the second gives 5y = 60, so y = 12; substituting into eq(iv) gives x = 6. So the coordinates of A are (6, 12).
Step 5: The common point with the feasible region is (6, 12). So the optimal value of the objective function is:

Z = 20x + 10y
Z = 20(6) + 10(12)
Z = 120 + 120

Thus, the minimum value of Z under the given constraints is 240.
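The minimization result can be verified directly: the point (6, 12) satisfies both boundary equations, and evaluating Z at the feasible corner points confirms that 240 is the smallest value. The corner list below was computed by hand from the constraints of this example:

```python
# Verify that (6, 12) lies on both 3x + y = 30 and 4x + 3y = 60.
x, y = 6, 12
assert 3 * x + y == 30 and 4 * x + 3 * y == 60

# Feasible corner points: intersections of the boundary lines that satisfy
# x + 2y <= 40, 3x + y >= 30, 4x + 3y >= 60, x >= 0, y >= 0.
corners = [(4, 18), (6, 12), (15, 0), (40, 0)]
z_min = min(20 * px + 10 * py for px, py in corners)
# z_min == 240, attained at (6, 12)
```

The corresponding Z-values are 260, 240, 300, and 800, so the iso-cost line nearest the origin indeed passes through (6, 12).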
FAQs on Graphical Solution of LPP
1. What are Linear Programming Problems (LPP)?

Linear Programming Problems are mathematical problems in which a linear objective function is maximized or minimized subject to linear constraints.
2. What are the types of solutions to Linear Programming Problems (LPP)?

There are various types of solutions to Linear Programming Problems:

- Solution by the Simplex Method
- Solution by the R Method
- Solution by the Graphical Method
3. What are the types of graphical solutions of Linear Programming Problems (LPP)?

There are various types of graphical solutions of linear programming problems:

- Corner Point Method
- Iso-Cost Method