Chapter 1. The subject and tasks of operations research.

§ 1. What is operations research and what does it do.

§ 2. Basic concepts and principles of operations research.

§ 3. Mathematical models of operations.

Chapter 2. Varieties of tasks in operations research and approaches to their solution.

§ 4. Direct and inverse problems of operations research. Deterministic tasks.

§ 5. The problem of choosing a solution under conditions of uncertainty.

§ 6. Multicriteria problems of operations research. "Systems approach".

Chapter 3. Linear programming.

§ 7. Problems of linear programming.

§ 8. The main problem of linear programming.

§ 9. Existence of a solution of the main LP problem and ways of finding it.

§ 10. Transport problem of linear programming.

§ 11. Problems of integer programming. The concept of non-linear programming.

Chapter 4. Dynamic Programming.

§ 12. Method of dynamic programming.

§ 13. Examples of solving dynamic programming problems.

§ 14. The problem of dynamic programming in general form. The principle of optimality.

Chapter 5. Markov random processes.

§ 15. The concept of a Markov process.

§ 16. Flows of events.

§ 17. Kolmogorov's equations for state probabilities. Final probabilities of states.

Chapter 6. Queuing theory.

§ 18. Tasks of the theory of queuing. Classification of queuing systems.

§ 19. Scheme of death and reproduction. Little's formula.

§ 20. The simplest queuing systems and their characteristics.

§ 21. More complicated problems in the theory of queuing.

Chapter 7. Statistical modeling of random processes (Monte Carlo method).

§ 22. Idea, purpose and scope of the method.

§ 23. The single random trial and forms of its organization.

§ 24. Determining the characteristics of a stationary random process from one implementation.

Chapter 8. Elements of game theory.

§ 25. Subject and problems of game theory.

§ 26. Antagonistic matrix games.

§ 27. Methods for solving finite games.

§ 28. Problems of the theory of statistical solutions.

SUBJECT AND OBJECTIVES OF OPERATIONS RESEARCH

Basic concepts and principles of operations research

In this section, we will get acquainted with the terminology, basic concepts and principles of the science of "operations research".

An operation is any event (a system of actions) united by a single idea and directed toward the achievement of some goal (all the events discussed in examples 1-8 of the previous section are "operations").

An operation is always a controlled event: it is up to us to choose, in one way or another, the parameters that characterize its organization. "Organization" is understood here in the broad sense of the word, including the set of technical means employed in the operation.

Any definite choice of the parameters that depend on us is called a decision. Decisions can be successful and unsuccessful, reasonable and unreasonable. Decisions that are for one reason or another preferable to others are called optimal. The purpose of operations research is the preliminary quantitative justification of optimal decisions.

Sometimes (relatively rarely) the study allows one to indicate a single, strictly optimal decision; much more often it identifies a region of practically equivalent optimal (reasonable) decisions, within which the final choice can be made.

Note that decision-making itself goes beyond the scope of operations research and falls within the competence of a responsible person, more often a group of persons, who have been given the right of final choice and who bear the responsibility for that choice. In making the choice they may take into account, along with the recommendations arising from the mathematical calculation, a number of considerations (of quantitative and qualitative nature) not accounted for by that calculation.

The indispensable presence of a human being (as the final decision-making authority) is not eliminated even by a fully automated control system, which, it would seem, makes decisions without human participation. We must not forget that the very creation of a control algorithm, the choice of one of its possible variants, is also a decision, and a very responsible one. As control automata develop, human functions are not abolished but simply shift from one, elementary, level to another, higher one. In addition, a number of automated control systems provide for active human intervention in the course of the controlled process.

The parameters whose totality forms a decision are called the elements of the decision. Various numbers, vectors, functions, physical attributes, etc. may appear as elements of the decision. For example, if a plan is drawn up for the transportation of homogeneous goods from points of departure A1, A2, ..., Am to destinations B1, B2, ..., Bn, then the elements of the decision are the numbers xij showing how much cargo will be sent from the i-th point of departure Ai to the j-th destination Bj. The set of numbers x11, x12, ..., x1n, ..., xm1, xm2, ..., xmn forms a decision.
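As a small illustration (with made-up supplies, unit costs and plan values, not data from the text), such a decision can be stored as a matrix x[i][j] alongside the unit costs c[i][j], and the total transportation cost of the plan computed:

```python
# A minimal sketch: a transportation plan as a matrix of decision
# elements x[i][j], and its total cost. All numbers are illustrative.

cost = [            # c[i][j]: cost of shipping one unit from A_i to B_j
    [4, 6, 8],
    [5, 3, 7],
]
plan = [            # x[i][j]: units shipped from A_i to B_j (one decision)
    [10, 0, 5],
    [0, 12, 3],
]

total = sum(cost[i][j] * plan[i][j]
            for i in range(len(plan))
            for j in range(len(plan[0])))
print(total)  # 137
```

Comparing the totals of different feasible plans is exactly the comparison of decisions that the efficiency indicator formalizes.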

In the simplest problems of operations research the number of decision elements may be comparatively small. But in most problems of practical importance it is very large, as the reader can verify by trying to single out and "name by name" the decision elements in examples 1-8 of the previous section. For brevity we shall denote the entire set of decision elements by a single letter x and speak of "the decision x".

In addition to the decision elements, which we can dispose of to some extent, every operations-research problem also contains given, "disciplining" conditions, which are fixed from the outset and cannot be violated (for example, the load capacity of a machine; the size of the planned assignment; the weight characteristics of equipment, etc.). In particular, such conditions include the means (material, technical, human) that we are entitled to dispose of, and the other restrictions imposed on the decision. Taken together, they form the so-called "set of possible decisions".

Let us denote this set by the letter X; the fact that a decision x belongs to this set we shall write as the formula x ∈ X (read: the element x is contained in the set X).

The point is to single out, within the set of possible decisions X, those decisions x (sometimes one, more often a whole region of decisions) that are from one point of view or another more effective (more successful, more preferable) than the others. To compare different decisions by effectiveness, one needs some quantitative criterion, a so-called efficiency indicator (often called the "objective function"). This indicator is chosen so as to reflect the target orientation of the operation. The "best" decision will be the one that contributes to the achievement of the goal to the maximum degree. To select ("name by name") the efficiency indicator W, one must first of all ask oneself: what do we want, what are we striving for in undertaking the operation? In choosing a decision we shall naturally prefer the one that turns the efficiency indicator W into a maximum (or a minimum). For example, one would like to maximize the income from an operation; if the efficiency indicator is cost, it is desirable to minimize it. If the efficiency indicator is to be maximized, we shall write W => max, and if minimized, W => min.

Very often the operation is accompanied by the action of random factors ("whims" of the weather, fluctuations of supply and demand, failures of technical devices, etc.). In such cases, as a rule, one takes as the efficiency indicator not the quantity itself that we would like to maximize (minimize), but its mean value (mathematical expectation).

In some cases it happens that an operation accompanied by random factors pursues some quite definite goal A, which can only be achieved completely or not achieved at all (the "yes-no" scheme), and no intermediate results interest us. Then the probability of achieving this goal, P(A), is chosen as the efficiency indicator. For example, if some target is being fired at with the absolute requirement that it be destroyed, the efficiency indicator will be the probability of destroying the target.

A wrong choice of the efficiency indicator is very dangerous. Operations organized from the standpoint of an ill-chosen criterion can lead to unjustified costs and losses (recall, for example, the notorious "gross output" (val) as the main criterion for evaluating the economic activity of enterprises).

To illustrate the principles of choosing an efficiency indicator, let us return once more to examples 1-8 of § 1, choose a natural efficiency indicator for each of them, and indicate whether it is to be maximized or minimized. In examining the examples one should bear in mind that in not all of them is the choice of the efficiency indicator unambiguously dictated by the verbal description of the problem, so the reader and the author may differ on this point.

1. Supply plan for enterprises. The task of the operation is to ensure the supply of raw materials at minimal transportation costs. The efficiency indicator R is the total cost of transporting the raw materials per unit of time, for example a month (R => min).

2. Construction of a section of highway. It is required to plan the construction so as to complete it as soon as possible. The natural efficiency indicator would be the time to complete the construction, were it not associated with random factors (equipment failures, delays in individual jobs). Therefore, as the efficiency indicator one can choose the mean expected time T of completion of the construction (T => min).

3. Sale of seasonal goods. As an indicator of efficiency, we can take the average expected profit P from the sale of goods for the season (P => max).

4. Snow protection of roads. What is required is the most economically advantageous plan of snow protection; therefore, as the efficiency indicator one can take the average cost R per unit of time (for example, per year) of maintaining and operating the roads, including both the expenses connected with building protective installations and those on snow clearance and traffic delays (R => min).

5. Anti-submarine raid. Since the raid has a quite definite goal A, the destruction of the boat, the efficiency indicator should be the probability P(A) that the boat will be destroyed.

6. Sampling inspection of products. The natural efficiency indicator suggested by the problem statement is the average expected cost R of inspection per unit of time, under the condition that the inspection system ensures a given level of quality, for example an average percentage of defects no higher than specified (R => min).

7. Medical examination. As the efficiency indicator one can take the average percentage (share) Q of sick persons and carriers of infection who are detected (Q => max).

8. Library service. A certain vagueness was deliberately allowed in the formulation of this problem: it is not clear what is meant by "the best customer service" or "meeting their needs to the greatest possible extent". If the quality of service is judged by the time a subscriber who has requested a book waits to receive it, then the efficiency indicator can be the average waiting time T of a reader who has applied for a book (T => min). One can also approach the question from a slightly different angle, choosing as the efficiency indicator the average number M of books issued per unit of time (M => max).

The considered examples are specially chosen to be so simple that the choice of an efficiency indicator is relatively easy and is directly dictated by the verbal formulation of the task, its (almost always) unambiguous target orientation. However, in practice this is not always the case. The reader can be convinced of this by trying, for example, to choose an indicator of the efficiency of urban transport. What should be taken as such an indicator? The average speed of movement of passengers in the city? Or the average number of passengers carried? Or the average number of kilometers that a person will have to walk, whom transport cannot deliver to the right place? There is something to think about here!

Unfortunately, in most problems of practical importance the choice of an efficiency indicator is not simple and is resolved ambiguously. For any sufficiently complex problem the situation is typical in which the effectiveness of the operation cannot be exhaustively characterized by a single number; other numbers have to be brought in to help it. We shall become acquainted with such "multicriteria" problems in § 6.

Examples of solving dynamic programming problems

In this section we shall consider (and even solve to the end) several simple (deliberately simplified) examples of dynamic programming problems.

1. Laying the most advantageous route between two points. Let us recall problem 4 of the previous section and solve it to the end under extremely (and deliberately) simplified conditions. We need to build a path connecting two points A and B, of which the second lies to the northeast of the first. For simplicity, let us say that laying the path consists of a series of steps, and at each step we can move either due east or due north; thus any path from A to B is a stepped broken line whose segments are parallel to one of the coordinate axes (Fig. 13.1). The construction costs of each of these segments are known. It is required to lay a path from A to B for which the total cost is minimal.

How is this to be done? One of two ways: either go through all possible route variants and choose the one on which the costs are minimal (with a large number of segments this is very, very difficult!), or divide the process of moving from A to B into individual steps (one step, one segment) and optimize the control step by step. It turns out that the second way is incomparably more convenient! Here, as elsewhere in operations research, the purposeful, organized search for a solution shows its advantages over naive "blind" enumeration.

Let us demonstrate how this is done on a concrete example. Divide the distance from A to B in the eastern direction into, say, 7 parts, and in the northern direction into 5 parts (in principle the division can be arbitrarily fine). Then any path from A to B consists of m = 7 + 5 = 12 segments directed east or north (Fig. 13.2). Let us mark on each segment a number expressing (in some arbitrary units) the cost of laying the path along that segment. It is required to choose a path from A to B for which the sum of the numbers on its segments is minimal.

We shall regard the path being built as a controlled system S, moving under the influence of control from the initial state A to the final state B. The state of this system before each step is characterized by two coordinates: the eastern (x) and the northern (y), both integers (0 ≤ x ≤ 7, 0 ≤ y ≤ 5). For each state of the system (each node of the rectangular grid in Fig. 13.2) we must find the conditional optimal control: do we go from this point north (control "n") or east (control "e")? The control is chosen so that the cost of all the remaining steps (including the given one) is minimal. We shall continue to call this cost the "conditional optimal payoff" (although here it is not a "gain" but a "loss") for the given state of the system S before the next step.

We shall unfold the conditional optimization procedure in the reverse direction, from the end to the beginning. First of all we perform the conditional optimization of the last, 12th step. Consider separately the upper right corner of our rectangular grid (Fig. 13.3). Where can we be after the 11th step? Only where B can be reached in one (final) step, i.e., at one of the points B1 or B2. If we are at the point B1, we have no choice (forced control): we must go east, and this will cost us 10 units. Let us write this number 10 in a circle near the point B1, and show the optimal control by a short arrow issuing from B1 and directed east. For the point B2 the control is also forced (north), the cost to the end is 14; we write it in a circle at the point B2. Thus the conditional optimization of the last step is done, and the conditional optimal payoff for each of the points B1, B2 has been found and recorded in the corresponding circle.

Now let us optimize the penultimate (11th) step. After the 10th step we could end up at one of the points C1, C2, C3 (Fig. 13.4). Let us find for each of them the conditional optimal control and the conditional optimal payoff. For the point C1 the control is forced: go east; it will cost us 21 units to the end (11 at this step, plus the 10 written in the circle at B1). We write the number 21 in a circle at the point C1. For the point C2 the control is no longer forced: we can go either east or north. In the first case we spend 14 units at this step and, from B2 to the end, 14 more: 28 units in all. If we go north, we spend 13 + 10, 23 units in all. Hence the conditional optimal control at the point C2 is to go north (we mark this with an arrow and write the number 23 in a circle near C2). For the point C3 the control is again forced ("n"); it will cost 22 units to the end (we point the arrow north and write the number 22 in a circle at C3).

In the same way, "backing away" from the penultimate step, we find for each point with integer coordinates the conditional optimal control ("n" or "e"), which we indicate by an arrow, and the conditional optimal payoff (the cost from this point to the end of the path), which we write in a circle. It is computed as follows: the cost at the given step is added to the already optimized cost written in the circle to which the arrow leads. Thus at each step we optimize only that step, while the steps following it have already been optimized. The final result of the optimization procedure is shown in Fig. 13.5.

Thus the conditional optimization is complete: wherever we are, we already know where to go (the arrow) and what it will cost us to reach the end (the number in the circle). In the circle at the point A stands the optimal payoff for the entire construction of the path from A to B:

W* = 118.

Now it remains to construct the unconditional optimal control, the trajectory leading from A to B in the cheapest way. To do so we need only "obey the arrows", i.e., read off what they prescribe at each step. This optimal trajectory is marked in Fig. 13.5 by doubly circled numbers. The corresponding unconditional optimal control is:

x* = (n, n, n, n, e, e, n, e, e, e, e, e),

that is, we must take the first four steps north, the next two east, then one more north and the remaining five east. The problem is solved.
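The backward procedure just described can be sketched in code. Since the segment costs of Fig. 13.2 are not reproduced in the text, the grid size and costs below are made-up illustrative numbers (so the answer differs from W* = 118); "n" and "e" denote the north and east controls:

```python
# A sketch of backward conditional optimization on a small grid:
# E east steps and N north steps from A = (0, 0) to B = (E, N).
# All segment costs are illustrative, not those of Fig. 13.2.

E, N = 3, 2
cost_e = [[3, 2, 4],   # cost_e[x][y]: east segment leaving node (x, y)
          [4, 1, 2],
          [2, 5, 1]]
cost_n = [[2, 3],      # cost_n[x][y]: north segment leaving node (x, y)
          [5, 1],
          [1, 4],
          [3, 2]]

W = {(E, N): 0}        # W[node]: conditional optimal cost to reach B
move = {}              # move[node]: conditional optimal control, "n" or "e"
for x in range(E, -1, -1):       # from the end back to the beginning
    for y in range(N, -1, -1):
        if (x, y) == (E, N):
            continue
        best = None
        if x < E:                # try going east
            best, move[(x, y)] = cost_e[x][y] + W[(x + 1, y)], "e"
        if y < N:                # try going north
            c = cost_n[x][y] + W[(x, y + 1)]
            if best is None or c < best:
                best, move[(x, y)] = c, "n"
        W[(x, y)] = best

# unconditional optimal control: "obey the arrows" from A
route, node = [], (0, 0)
while node != (E, N):
    m = move[node]
    route.append(m)
    node = (node[0] + 1, node[1]) if m == "e" else (node[0], node[1] + 1)
print(W[(0, 0)], "".join(route))  # 8 nenee
```

On this toy grid the cheapest path costs 8 units; with the 7-by-5 grid and the costs of Fig. 13.2 the same procedure yields W* = 118.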

Note that in the course of conditional optimization we may run into the case where both controls at some point of the plane are optimal, i.e., lead to the same cost from that point to the end. For example, at the point with coordinates (5, 1) both controls "n" and "e" are optimal and give a cost to the end equal to 62. We arbitrarily choose either of them (in our case we chose "n"; we could just as well have chosen "e"). Such cases of ambiguous choice of the optimal control are met constantly in dynamic programming; in what follows we shall not note them specially, but simply pick one of the equivalent variants arbitrarily. The optimal control of the whole process may, of course, depend on this arbitrariness, but the optimal payoff does not. In general, in dynamic programming problems (as in linear programming problems) the solution is far from always unique.

And now let us go back to the beginning and try to solve the problem in the "naive" way, choosing at each step, beginning with the first, the direction most profitable for that step (if there are two such, choosing either). In this way we obtain the control

x = (n, n, e, e, e, e, n, e, e, e, n, n).

Let us calculate the costs along this trajectory. They equal W = 10 + 12 + 8 + 10 + 11 + 13 + 15 + 8 + 10 + 9 + 8 + 14 = 128, which is certainly more than W* = 118. In this case the difference is not very large, but in others it can be considerable.

In the problem solved above the conditions were deliberately simplified to the extreme. Of course, no one will lay a railway track "in steps", moving only due north or due east. We made this simplification in order to have only two controls to choose from at each point: "n" or "e". Instead of two possible directions one could introduce several, and take smaller steps besides; this is of no fundamental importance, but it does, of course, complicate and lengthen the calculations.

Note that problems similar to the one considered above occur very often in practice: for example, in choosing the fastest route between two points, or the most economical (in fuel consumption) regime for an aircraft to gain speed and altitude.

Let us make one remark in passing. The attentive reader has probably noticed that in our problem the points A and B (beginning and end) are in principle no different from each other: one could construct the conditional optimal controls not from the end to the beginning but from the beginning to the end, and the unconditional ones in the reverse direction. Indeed this is so: in any dynamic programming problem the "beginning" and the "end" can be interchanged. In terms of computation this is completely equivalent to the method described earlier, but it is somewhat less convenient in explaining the idea of the method verbally: it is easier to argue by referring to the conditions "already formed" before the given step than to those "still to come" after it. In essence the two approaches are fully equivalent.

2. The problem of resource allocation. The dynamic programming method makes it possible to solve many economic problems successfully (see, for example, ). Let us consider one of the simplest of them. We have at our disposal a certain stock of funds (resources) K, which must be distributed among m enterprises P1, P2, ..., Pm. When an amount x is invested in enterprise Pi, it yields an income depending on x, i.e., some function fi(x). All the functions fi(x) (i = 1, 2, ..., m) are given (naturally, these functions are nondecreasing). How should the funds K be distributed among the enterprises so that in total they yield the maximum income?

This problem is easily solved by the dynamic programming method. Although its formulation contains no mention of time, the operation of distributing the funds can nevertheless be mentally unfolded in some sequence, taking the investment in enterprise P1 as the first step, that in P2 as the second, and so on.

The managed system S in this case is the funds (resources) being distributed. The state of the system S before each step is characterized by a single number S, the stock of funds not yet invested. The "step controls" in this problem are the amounts x1, x2, ..., xm allocated to the enterprises. It is required to find the optimal control, i.e., the set of numbers x1, x2, ..., xm for which the total income is maximal:

W = f1(x1) + f2(x2) + ... + fm(xm) => max,   (13.1)

the investments being subject to x1 + x2 + ... + xm = K.

We shall solve this problem first in general, formula form, and then for specific numerical data. For each i-th step let us find the conditional optimal payoff (from this step to the end) if we approach this step with a stock of funds S. Denote the conditional optimal payoff by Wi(S), and the corresponding conditional optimal control, the amount invested in the i-th enterprise, by xi(S).

Let us begin the optimization with the last, m-th step. Suppose we approach it with a remaining stock of funds S. What should we do? Obviously, invest the entire amount S in the enterprise Pm. Therefore the conditional optimal control at the m-th step is: give the last enterprise all the available funds S, i.e.

xm(S) = S,

and the conditional optimal payoff is

Wm(S) = fm(S).

Assigning a whole range of values to S (spacing them closely enough), we shall know xm(S) and Wm(S) for each value of S. The last step is optimized.

Let us move on to the penultimate, (m - 1)-th step. Suppose we approach it with a stock of funds S. Denote by W_{m-1}(S) the conditional optimal payoff on the last two steps: the (m - 1)-th and the m-th (which is already optimized). If at the (m - 1)-th step we allocate the amount x to the (m - 1)-th enterprise, then S - x remains for the last step. Our payoff on the last two steps will equal

f_{m-1}(x) + Wm(S - x),

and we need to find the x at which this payoff is maximal:

W_{m-1}(S) = max over x ≤ S of { f_{m-1}(x) + Wm(S - x) }.   (13.2)

The maximum sign means that the maximum is taken, over all feasible x (we cannot invest more than S), of the expression in curly braces. This maximum is the conditional optimal payoff for the last two steps, and the value of x at which it is reached is the conditional optimal control at the (m - 1)-th step.

Similarly, for any i-th step,

Wi(S) = max over x ≤ S of { fi(x) + W_{i+1}(S - x) },   (13.3)

and the corresponding conditional optimal control xi(S) is the value of x at which this maximum is reached.

Continuing in this way, we finally reach the first enterprise P1. Here we no longer need to vary the values of S: we know exactly that the stock of funds before the first step equals K:

W* = W1(K) = max over x ≤ K of { f1(x) + W2(K - x) }.   (13.4)

So the maximum payoff (income) from all the enterprises has been found. It now remains only to "read the recommendations". The value x1* at which the maximum (13.4) is reached is the optimal control at the first step. After we invest these funds in the first enterprise, we are left with K - x1*. "Reading" the recommendation for this value of S, we allocate the optimal amount of funds to the second enterprise:

x2* = x2(K - x1*),

and so on, until the end.

Now let us solve a numerical example. The initial stock of funds is K = 10 (conventional units), and it is required to distribute it optimally among five enterprises (m = 5). For simplicity we shall assume that only whole numbers of units are invested. The income functions fi(x) are given in Table 13.1.

Table 13.1

  x | f1(x) | f2(x) | f3(x) | f4(x) | f5(x)
  1 |  0.5  |  0.1  |  0.6  |  0.3  |  1.0
  2 |  1.0  |  0.5  |  1.1  |  0.6  |  1.2
  3 |  1.4  |  1.2  |  1.2  |  1.3  |  1.3
  4 |  2.0  |  1.8  |  1.4  |  1.4  |  1.3
  5 |  2.5  |  2.5  |  1.6  |  1.5  |  1.3
  6 |  2.8  |  2.9  |  1.7  |  1.5  |  1.3
  7 |  3.0  |  3.5  |  1.8  |  1.5  |  1.3
  8 |  3.0  |  3.5  |  1.8  |  1.5  |  1.3

In each column, starting from a certain amount of investments, incomes cease to grow (in reality, this corresponds to the fact that each enterprise is able to "master" only a limited amount of funds).

Let us perform the conditional optimization as described above, beginning with the last, 5th step. Each time we arrive at the given step with some reserve of funds S, we try allocating one amount or another to this step, take the payoff at this step from Table 13.1, add it to the already optimized payoff at all the subsequent steps up to the end (bearing in mind that the funds remaining are smaller by exactly the amount allocated), and find the investment at which this sum reaches its maximum. That investment is the conditional optimal control at this step, and the maximum itself is the conditional optimal payoff.

Table 13.2 shows the results of the conditional optimization for all the steps. The table is structured as follows: the first column gives the values of the stock of funds S with which we approach the given step. The table is then divided into five pairs of columns corresponding to the step numbers; the first column of each pair contains the conditional optimal control xi(S), the second the conditional optimal payoff Wi(S). The table is filled in from left to right, top to bottom. The decision at the fifth, last, step is forced: all the remaining funds are allocated; at all the other steps the decision has to be optimized. (Where two investments give the same payoff, one of the equivalent variants is entered.)

Table 13.2

  S | x5(S) W5(S) | x4(S) W4(S) | x3(S) W3(S) | x2(S) W2(S) | x1(S) W1(S)
  1 |   1    1.0  |   0    1.0  |   0    1.0  |   0    1.0  |
  2 |   2    1.2  |   1    1.3  |   1    1.6  |   0    1.6  |
  3 |   3    1.3  |   2    1.6  |   2    2.1  |   0    2.1  |
  4 |   4    1.3  |   3    2.3  |   2    2.4  |   0    2.4  |
  5 |   5    1.3  |   3    2.5  |   1    2.9  |   0    2.9  |
  6 |   6    1.3  |   3    2.6  |   2    3.4  |   5    3.5  |
  7 |   7    1.3  |   4    2.7  |   2    3.6  |   5    4.1  |
  8 |   8    1.3  |   5    2.8  |   4    3.7  |   5    4.6  |
  9 |   9    1.3  |   5    2.8  |   4    3.9  |   7    5.1  |
 10 |  10    1.3  |   5    2.8  |   5    4.1  |   7    5.6  |   2    5.6

As a result of the successive optimization of the 5th, 4th, 3rd, 2nd and 1st steps we obtain the complete list of recommendations for the optimal control and the unconditional optimal payoff W* for the whole operation: in this case it equals 5.6. In the last pair of columns of Table 13.2 only one row is filled in, because the state of the system before the first step is known exactly: S0 = K = 10. Reading off the optimal controls step by step, we reach the final conclusion: of the ten units of funds we must allocate two to the first enterprise, five to the second, two to the third, none to the fourth, and one to the fifth. With this distribution the income will be maximal and equal to 5.6.
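The whole backward procedure for this numerical example can be sketched directly from Table 13.1 (since the incomes stop growing, fi(x) = fi(8) for x > 8). Because the maximum is attained by several allocations, the allocation printed by such a sketch may differ from (2, 5, 2, 0, 1), but its income is the same 5.6:

```python
# A sketch reproducing the numerical example: K = 10 units distributed
# among m = 5 enterprises with the income functions of Table 13.1.

K = 10
f_rows = [
    [0.5, 1.0, 1.4, 2.0, 2.5, 2.8, 3.0, 3.0],  # f1(1..8)
    [0.1, 0.5, 1.2, 1.8, 2.5, 2.9, 3.5, 3.5],  # f2
    [0.6, 1.1, 1.2, 1.4, 1.6, 1.7, 1.8, 1.8],  # f3
    [0.3, 0.6, 1.3, 1.4, 1.5, 1.5, 1.5, 1.5],  # f4
    [1.0, 1.2, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3],  # f5
]
m = len(f_rows)

def f(i, x):
    """Income of enterprise i (0-based) for an integer investment x."""
    return 0.0 if x == 0 else f_rows[i][min(x, 8) - 1]

# Backward conditional optimization: W[i][s] is the best payoff of
# steps i..m-1 with s units remaining; x_opt[i][s] is the control.
W = [[0.0] * (K + 1) for _ in range(m + 1)]
x_opt = [[0] * (K + 1) for _ in range(m)]
for i in range(m - 1, -1, -1):
    for s in range(K + 1):
        best = max(range(s + 1), key=lambda x: f(i, x) + W[i + 1][s - x])
        x_opt[i][s] = best
        W[i][s] = f(i, best) + W[i + 1][s - best]

# Read off an unconditional optimal control from the first step on.
alloc, s = [], K
for i in range(m):
    alloc.append(x_opt[i][s])
    s -= alloc[-1]
print(round(W[0][K], 1), alloc)  # maximal income 5.6
```

The unconditional optimal payoff W[0][K] agrees with W* = 5.6 from Table 13.2.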

To make it clear to the reader how Table 13.2 is filled in, let us demonstrate one sample calculation. Suppose, for example, we need to find the optimal control x3(7): what to do at the third step if we approach it with a stock of funds S = 7, and what is the maximum we can gain on all the remaining steps, including the third? Assume that all the steps after the third (the 4th and 5th) have already been optimized, i.e., the first two pairs of columns of Table 13.2 are filled in. Let us find x3(7) and W3(7). To do so we compile an auxiliary table (Table 13.3). Its first column lists all possible investments x at the third step, not exceeding S = 7. The second column shows what remains of the stock S = 7 after such an investment. The third column gives the payoff at the third step from investing x in the third enterprise (filled in from Table 13.1). The fourth column gives the optimal payoff on all the remaining steps (the fourth and fifth) under the condition that we approach the fourth step with the remaining funds (filled in from the column i = 4 of Table 13.2). The fifth column contains the sum of the two payoffs, the step payoff and the optimized remainder, for the given investment x at the third step.

Table 13.3

  x | 7 - x | f3(x) | W4(7 - x) | f3(x) + W4(7 - x)
  0 |   7   |  0    |   2.7     |   2.7
  1 |   6   |  0.6  |   2.6     |   3.2
  2 |   5   |  1.1  |   2.5     |   3.6
  3 |   4   |  1.2  |   2.3     |   3.5
  4 |   3   |  1.4  |   1.6     |   3.0
  5 |   2   |  1.6  |   1.3     |   2.9
  6 |   1   |  1.7  |   1.0     |   2.7
  7 |   0   |  1.8  |   0       |   1.8

From all the payoffs of the last column the maximum one is selected: in Table 13.3 it equals W3(7) = 3.6, and the corresponding control is x3(7) = 2.

A question arises: what if, in an auxiliary table like 13.3, the maximum is reached not for one x but for two or more? We answer: it does not matter which of them is chosen; the payoff does not depend on it. In general, in dynamic programming problems the solution need not be unique at all (we have already mentioned this).

3. The problem of loading a machine. The dynamic programming method can successfully solve a number of the optimization problems described in Chapter 3, in particular some integer programming problems. Note that the integrality of the solutions, which so complicates linear programming problems, in this case does not complicate but, on the contrary, simplifies the procedure (just as the integrality of the investments simplified it for us in the preceding problem).

As an example, consider the problem of loading a machine (we already mentioned it in the previous chapter): there is a certain set of items P1, P2, ..., Pn (each in a single copy); their weights q1, q2, ..., qn and costs c1, c2, ..., cn are known. The load capacity of the machine is Q. The question is: which of the items should be taken into the machine so that their total cost is maximal, while their total weight does not exceed Q?
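This is the classical 0/1 knapsack problem, and the step-by-step dynamic programming scheme applies to it directly, one step per item. The weights, costs and capacity below are made-up illustrative numbers, not data from the text:

```python
# A sketch of the machine-loading problem as a 0/1 knapsack solved by
# dynamic programming over the items. All numbers are illustrative.

weights = [5, 4, 6, 3]   # q_1 ... q_n
costs = [6, 5, 7, 4]     # c_1 ... c_n
Q = 10                   # load capacity of the machine

n = len(weights)
# dp[i][w]: max total cost using only the first i items, weight <= w
dp = [[0] * (Q + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
    for w in range(Q + 1):
        dp[i][w] = dp[i - 1][w]                  # item i-1 not taken
        if weights[i - 1] <= w:                  # item i-1 taken
            cand = dp[i - 1][w - weights[i - 1]] + costs[i - 1]
            if cand > dp[i][w]:
                dp[i][w] = cand

# read off which items were taken, moving backwards through the table
chosen, w = [], Q
for i in range(n, 0, -1):
    if dp[i][w] != dp[i - 1][w]:
        chosen.append(i - 1)
        w -= weights[i - 1]
chosen.reverse()
print(dp[n][Q], chosen)  # 12 [1, 2]
```

As in the resource-allocation problem, the integrality of the weights is what makes the table of states finite and the procedure simple.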

You should learn the basic concepts and definitions of operations research.

An operation is any controlled activity directed toward the achievement of a goal. The outcome of the operation depends on the manner of its execution and organization, in other words, on the choice of certain parameters. Any definite choice of parameters is called a decision. Optimal decisions are those that, for one reason or another, are preferable to others. Hence the main task of operations research is the preliminary quantitative justification of optimal decisions.

Remark 1

Note the formulation of the problem: decision-making itself lies beyond the scope of operations research and belongs to the competence of the responsible person or group of persons, who may take into account considerations other than the mathematically justified ones.

Remark 2

If in some operations research problems the optimal solution is the one at which the chosen efficiency criterion takes its maximum or minimum value, in other problems this is not necessary at all. Thus, in a service problem we may consider optimal, for example, a number of outlets and a staffing level at which the average customer service time does not exceed, say, 5 minutes, and the average queue length at any moment is no more than 3 people (1, pp. 10-11).

The efficiency of production and commercial activities is largely determined by the quality of decisions made daily by managers at different levels. In this regard, the task of improving decision-making processes, which operations research addresses, is of great importance. The term "operations research" was first used in 1939-1940 in the military field. By that time, military equipment and its management had become fundamentally more complex as a result of the scientific and technological revolution. Therefore, by the beginning of World War II there was an urgent need for scientific research on the effective use of new military equipment and on the quantitative assessment and optimization of decisions made by the command. In the post-war period, the successes of the new scientific discipline found demand in peaceful areas: in industry, in entrepreneurial and commercial activities, in government institutions, and in educational institutions.

Operations research is a methodology for applying mathematical quantitative methods to justify solutions to problems in all areas of purposeful human activity. Its methods and models make it possible to obtain solutions that best meet the goals of the organization.

Operations research is a science concerned with the development and practical application of methods for the most effective (or optimal) management of organizational systems.

The main postulate of operations research is this: the optimal solution (control) is the set of values of the variables at which the efficiency criterion (objective function) of the operation attains its optimal (maximum or minimum) value while the specified constraints are observed.

The subject of operations research is the problem of making optimal decisions in a controlled system based on an assessment of the effectiveness of its functioning. The characteristic concepts of operations research are: the model, the decision variables, the constraints, and the objective function.

The object of operations research in reality is organizational management systems (organizations), which consist of a large number of interacting subdivisions whose interests are not always consistent with one another and may even be opposed.

The purpose of operations research is to quantitatively substantiate the decisions made on the management of organizations.

The solution most beneficial for the entire organization is called the optimal solution; a solution most beneficial to one or several subdivisions is called suboptimal.

As an example of a typical task of organizational management, where the conflicting interests of departments collide, consider the problem of managing the inventory of an enterprise.

The production department strives to produce as much as possible at the lowest cost. It is therefore interested in the longest possible uninterrupted production runs, i.e., in producing items in large batches, since such production reduces equipment changeover costs and hence overall production costs. However, production in large batches requires large stocks of materials, components, etc.

The sales department is also interested in large stocks of finished products to satisfy any customer demand at any given time. Concluding each contract, the sales department, striving to sell as many products as possible, must offer the consumer the widest possible range of products. As a result, there is often a conflict between the production department and the sales department over the product range. At the same time, the sales department insists on the inclusion in the plan of many products produced in small quantities, even when they do not bring large profits, and the production department demands the exclusion of such products from the product range.

The financial department, seeking to minimize the amount of capital required for the operation of the enterprise, tries to reduce the amount of "tied-up" working capital. It is therefore interested in reducing stocks to a minimum. As you can see, the requirements for the size of stocks differ across the departments of the organization. The question arises as to which inventory strategy will be most beneficial for the entire organization. This is a typical task of organizational management. It is connected with the problem of optimizing the functioning of the system as a whole and affects the conflicting interests of its subdivisions.

Key Features of Operations Research:

1. A systematic approach to the analysis of the problem posed. The systems approach, or systems analysis, is the main methodological principle of operations research, and it consists in the following: any task, no matter how particular it may seem at first glance, is considered from the point of view of its influence on the criterion of functioning of the entire system. The systems approach was illustrated above by the inventory management problem.

2. It is typical for operations research that when solving each problem, more and more new tasks arise. Therefore, if narrow, limited goals are set at first, the application of operational methods is not effective. The greatest effect can be achieved only with continuous research, ensuring continuity in the transition from one task to another.

3. One of the essential features of operations research is the desire to find the optimal solution to the problem. However, such a solution often turns out to be unattainable because of the limitations imposed by the available resources (money, computer time) or by the level of modern science. For example, for many combinatorial problems, in particular scheduling problems with the number of machines n > 4, the optimal solution can, at the current state of mathematics, be found only by exhaustive enumeration of options. Then one has to limit oneself to searching for a "good enough", suboptimal solution. It is no accident that one of its creators, T. Saaty, defined operations research as "... the art of giving bad answers to those practical questions that are given even worse answers by other methods."

4. A feature of operations research is that it is carried out in a comprehensive way, across many areas. An operational group is created to conduct such a study; it consists of specialists from different fields of knowledge: engineers, mathematicians, economists, sociologists, psychologists. The purpose of creating such groups is a comprehensive study of the whole set of factors influencing the solution of the problem and the use of the ideas and methods of different sciences.

Each operational research goes through the following main stages in sequence:

1) description of the planning task,

2) building a mathematical model,

3) finding a solution,

4) checking and correcting the model,

5) implementation of the found solution in practice.

Description of the planning task:

    Tasks of network planning and management consider the relationship between the completion deadline of a large complex of operations (works) and the start times of all operations in the complex. These tasks consist in finding the minimum duration of the complex of operations and the optimal ratio of cost and timing of their implementation.

    Queuing tasks are devoted to the study and analysis of queuing systems with queues of applications or requirements and consist in determining the performance indicators of systems, their optimal characteristics, for example, in determining the number of service channels, service time, etc.

    Inventory management tasks are to find the optimal values of inventory levels (order points) and order sizes. The peculiarity of such tasks is that with an increase in the level of stocks, on the one hand, the costs of their storage increase, but, on the other hand, losses due to a possible shortage of the stocked product decrease.

    Resource allocation problems arise with a certain set of operations (works) that must be performed with limited available resources, and it is required to find the optimal distribution of resources between operations or the composition of operations.

    The tasks of repair and replacement of equipment are relevant due to the wear and tear of equipment and the need to replace it over time. The tasks are reduced to determining the optimal timing, the number of preventive repairs and checks, as well as the moments of equipment replacement with modernized ones.

    The tasks of scheduling (scheduling) are to determine the optimal sequence of operations (for example, processing parts) on various types of equipment.

    The tasks of planning and placement are to determine the number and location of new objects, taking into account their interaction with existing objects and among themselves.

    Route selection problems, or network problems, are most often encountered in the study of various problems in transport and in the communication system and consist in determining the most economical routes (1, p. 15).
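The cost trade-off described in the inventory tasks above can be sketched with the classic Wilson (EOQ) formula, which balances fixed ordering costs against holding costs. This is an illustrative sketch only; the demand and cost figures are hypothetical:

```python
# Sketch of the economic-order-quantity (Wilson) formula illustrating
# the inventory trade-off; demand and cost figures are hypothetical.
import math

D = 1200.0   # annual demand, units
K = 50.0     # fixed cost per order
h = 3.0      # holding cost per unit per year

Q_star = math.sqrt(2 * D * K / h)   # optimal order size: larger K pushes
orders = D / Q_star                 # toward big batches, larger h toward
print(Q_star, orders)               # small ones
```

Increasing the holding cost h shrinks the optimal batch, while increasing the fixed ordering cost K enlarges it, which is precisely the conflict between the production, sales, and financial departments described earlier.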

An operation is understood as any set of actions united by a single plan and directed toward the achievement of a specific goal.

An operation is always a managed event, i.e. it is up to us to choose the parameters that characterize the way it is organized.

Any definite choice of the parameters that depend on us will be called a decision.

Optimal solutions are those that, for one reason or another, are preferable to others.

The main goal of operations research is preliminary quantitative justification of optimal solutions. Operations research does not aim to fully automate decision making. The decision is always made by the person. The purpose of operations research is to produce quantitative data and recommendations that make it easier for a person to make decisions.

Along with the main task, the substantiation of optimal solutions, the scope of operations research includes other tasks:

Comparative evaluation of various options for organizing the operation,

Evaluation of the impact on the operation of various parameters,

Investigation of "bottlenecks", i.e. elements whose disruption has a particularly strong effect on the success of the operation, etc.

These auxiliary tasks are of particular importance when a given operation is considered not in isolation but as an integral element of a whole system of operations. A "systems" approach to the tasks of operations research requires taking into account the mutual dependence and conditionality of a whole range of activities, i.e., making the final decision with due regard to the role and place of the given operation in the system.

The efficiency of an operation is understood as the degree of its suitability for performing the task facing it.

In order to judge the effectiveness of an operation and to compare the effectiveness of differently organized operations, one must have some numerical evaluation criterion or performance indicator.

Sequence of actions in operations research.

1. The purpose of the study is formulated and the problem statement is developed.

2. To apply quantitative methods in any field, it is always necessary first to build a mathematical model of the phenomenon. This model is built on the basis of an analysis of the properties of the original.

3. After the model has been built, results are obtained from it.

4. These results are interpreted in terms of the original and transferred to the original.

5. The simulation results are compared with the results obtained from a direct study of the original.

If the results obtained using the model are close to the results obtained in the study of the original, then with respect to these properties, the model can be considered adequate to the original.

When designing and operating automated control systems, tasks often arise related to the analysis of both quantitative and qualitative patterns of their functioning, determining their optimal structure, etc.

Direct experimentation on objects to solve these problems has a number of significant disadvantages:

1. The established mode of operation of the object is violated.

2. In a full-scale experiment, it is impossible to analyze all alternative options for building a system, etc.

It is advisable to solve these problems on a model separated from the object and implemented on a computer.

When modeling information systems, mathematical models are widely used.

The method of mathematical modeling is a method of studying various objects by compiling an appropriate mathematical description and calculating, on its basis, the characteristics of the object under study.

It is necessary to build a mathematical model that formally reflects the functioning of the original and describes the main patterns of its behavior. All secondary, insignificant factors are excluded from consideration.

The objects of mathematical modeling are complex systems. A complex system is an organized, purposefully functioning set of a large number of informationally related and interacting elements operating under the influence of external factors.

Modeling systems on a computer involves the following main stages:

Building a conceptual model of the system and its formalization;

Algorithmization of the system model and development of a modeling program;

Obtaining and interpreting preliminary simulation results;

Checking the adequacy of the model and system; model adjustment

Final calculation of the quality indicators of the system's functioning based on the modeling results, and implementation of the model.

Lecture 3. Basic concepts of the method of expert assessments. Formation of expert groups. Polling procedures. Methods of ranking, paired comparisons, and evaluation on a relative scale.

1. Basic concepts of operations research (OR)

OR is a scientific discipline engaged in the development and practical application of methods for the most effective management of various organizational systems.

OR includes the following sections:

1) mathematical programming (the substantiation of plans and programs of economic activity); it includes the sections: linear programming, non-linear programming, and dynamic programming;

2) the theory of queuing based on the theory of random processes;

3) game theory, which makes it possible to substantiate decisions made under conditions of incomplete information.

When solving a specific control problem, the use of OR methods involves:

Construction of economic and mathematical models for decision-making problems in complex situations or in conditions of uncertainty;

Studying the interrelations that determine subsequent decision-making, and establishing performance criteria that allow evaluating the advantage of one or another course of action.

Basic concepts and definitions of OR.

An operation is any controlled activity aimed at achieving a goal. The result of the operation depends on the way it is carried out and organized, in other words, on the choice of certain parameters. An operation is always a controlled event; that is, it depends on us how to choose the parameters characterizing its organization. "Organization" here is understood in the broad sense of the word, including the set of technical means used in the operation.

Any definite choice of parameters is called a decision. Decisions can be successful or unsuccessful, reasonable or unreasonable. Optimal solutions are those that, for one reason or another, are preferable to others. The main task of operations research is the preliminary quantitative justification of optimal solutions.

An operation model is a sufficiently accurate description of the operation by means of mathematical apparatus (various kinds of functions, equations, systems of equations and inequalities, etc.). Drawing up a model of an operation requires an understanding of the essence of the described phenomenon and knowledge of the mathematical apparatus.

The efficiency of an operation, i.e., the degree of its suitability for fulfilling the task, is expressed quantitatively in the form of an efficiency criterion: the objective function. The choice of efficiency criterion determines the practical value of the study. (An incorrectly chosen criterion can be harmful, because operations organized under such a criterion sometimes lead to unjustified costs.)


2. The general problem of linear programming. Optimal solution

Economic and mathematical model

Linear programming (LP) is an area of mathematics that develops the theory and numerical methods for solving problems of finding the extremum (maximum or minimum) of a linear function of many variables in the presence of linear constraints, i.e., equalities or inequalities connecting these variables.

LP methods are applied to practical problems in which: 1) it is necessary to choose the best solution (optimal plan) from a set of possible ones; 2) the solution can be expressed as a set of values of certain variable quantities; 3) the restrictions imposed on feasible solutions by the specific conditions of the problem are formulated as linear equations or inequalities; 4) the goal is expressed as a linear function of the main variables. The values of the objective function, which make it possible to compare different solutions, serve as the criterion of solution quality.

For a practical solution of an economic problem by mathematical methods, first of all, it should be written using an economic-mathematical model. Economic-mathematical model - a mathematical description of the investigated economic process or object. This model expresses the patterns of the economic process in an abstract form using mathematical relationships.

General scheme of model formation:

1) the choice of a certain number of variables whose numerical values uniquely determine one of the possible states of the phenomenon under study;

2) the expression of the relationships inherent in the phenomenon under study, in the form of mathematical relationships (equations, inequalities). These relations form a system of problem constraints;

3) quantitative expression of the selected optimality criterion in the form of an objective function;

4) mathematical formulation of the problem as a problem of finding the extremum of the objective function, subject to the constraints imposed on the variables.

General problem of linear programming looks like:

Given a system of m linear equations and inequalities with n variables,

ai1x1 + ai2x2 + … + ainxn {<=, =, >=} bi, i = 1, 2, …, m,   (1)

and a linear function

F = c1x1 + c2x2 + … + cnxn.   (2)

It is necessary to find a solution X = (x1, x2, …, xj, …, xn) of the system at which the linear function F takes its optimal (i.e., maximum or minimum) value.

System (1) is called a constraint system, and the function F is called a linear function, a linear form, an objective function, or an objective function.

More briefly, the general linear programming problem can be represented as

F = c1x1 + c2x2 + … + cnxn → max (min)

with the restrictions

ai1x1 + ai2x2 + … + ainxn {<=, =, >=} bi, i = 1, 2, …, m,

xj >= 0, j = 1, 2, …, n.   (3)

An optimal solution (or optimal plan) of an LP problem is a solution X = (x1, x2, …, xj, …, xn) of the constraint system (1) satisfying condition (3), at which the linear function (2) takes its optimal (maximum or minimum) value.

If all the variables are non-negative and the system of constraints (1) consists only of inequalities, the linear programming problem is called standard (symmetric); if the system of constraints consists only of equations, the problem is called canonical.

A special case of the canonical problem is the problem in basic form, which differs in that all coefficients of the constraint vector b are non-negative, and in each equation there is a variable with a coefficient of 1, which is not included in any of the other equations. A variable with this property is called a base variable.

The standard and canonical problems are special cases of the general one. Each of them is used in its specific area. Moreover, all three formulations are equivalent to each other: any linear programming problem can be reduced to a canonical, standard, or general problem using simple mathematical transformations.
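One simple transformation of the kind mentioned above reduces a standard problem to canonical form by introducing a slack variable into each inequality. A minimal sketch with hypothetical constraint data:

```python
# Sketch of reducing a standard LP (constraints of the form Ax <= b)
# to canonical form (equations) by appending slack variables:
# Ax <= b becomes [A | I](x, s) = b with s >= 0.
# The sample constraint data are hypothetical.

def to_canonical(A, b):
    """Append one slack variable per <= constraint."""
    m = len(A)
    canon = []
    for i, row in enumerate(A):
        slack = [1 if j == i else 0 for j in range(m)]  # unit column
        canon.append(row + slack)
    return canon, b

A = [[1, 2],
     [3, 1]]
b = [4, 6]
A_canon, b_canon = to_canonical(A, b)
print(A_canon)   # each row gains a unit slack column
```

The resulting system is already in basic form: the slack variables have coefficient 1 in exactly one equation each, so they can serve as the initial basic variables.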

4. Elements of linear algebra

A system of m linear equations with n variables has the form

a11x1 + a12x2 + … + a1nxn = b1, …, am1x1 + am2x2 + … + amnxn = bm,   (2.1)

or, in shorthand,

ai1x1 + ai2x2 + … + ainxn = bi, i = 1, 2, …, m.

Any m variables of a system of m linear equations with n variables (m < n) are called basic if the determinant of the matrix of their coefficients is non-zero. Such a determinant is often called a basis minor of the matrix A. The remaining n – m variables are then called non-basic (or free).

To solve system (2.1) under the condition m < n, we formulate the following statement.

Statement 2.1. If, for a system of m linear equations with n variables (m < n), the rank of the matrix of coefficients of the variables equals m, i.e., there is at least one group of basic variables, then this system is indefinite, and each arbitrary set of values of the non-basic variables corresponds to exactly one solution of the system.

A solution X = (x1, x2, …, xn) of system (2.1) is called admissible (feasible) if it contains only non-negative components, i.e., xj >= 0 for every j = 1, …, n. Otherwise the solution is called inadmissible.

Among the infinite set of solutions of the system, the so-called basic solutions are distinguished.

A basic solution of a system of m linear equations with n variables is a solution in which all n – m non-basic variables are equal to zero.

In linear programming problems, admissible basic solutions, also called support plans, are of particular interest. A basic solution in which at least one of the basic variables equals zero is called degenerate.
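The definitions above suggest a direct way to find all basic solutions: choose every group of candidate basic variables, check that its determinant is non-zero, and solve the resulting subsystem. A minimal sketch for m = 2 equations with n = 4 variables (the coefficients are hypothetical), using Cramer's rule for the 2x2 subsystems:

```python
# Sketch: enumerate all basic solutions of a system of m = 2 linear
# equations with n = 4 variables by choosing every pair of basic
# variables and solving the 2x2 subsystem by Cramer's rule.
# The coefficient data are hypothetical.
from itertools import combinations

A = [[1, 1, 1, 0],    # x1 +  x2 + x3      = 4
     [1, 2, 0, 1]]    # x1 + 2x2      + x4 = 6
b = [4, 6]
n = 4

basic_solutions = []
for i, j in combinations(range(n), 2):
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        continue                       # this pair cannot be basic
    xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
    xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
    x = [0.0] * n                      # non-basic variables set to zero
    x[i], x[j] = xi, xj
    # admissible (feasible) basic solutions have no negative components
    basic_solutions.append((x, all(v >= 0 for v in x)))

for x, feasible in basic_solutions:
    print(x, feasible)
```

For this sample system all six pairs of variables turn out to be basic, and four of the six basic solutions are admissible, i.e., are support plans.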

Convex sets of points

The general defining property that distinguishes a convex polygon from a non-convex one is that if you take any two of its points and connect them with a segment, then the entire segment will belong to this polygon. This property can be taken as the definition of a convex set of points.

A set of points is called convex if, together with any two of its points, it contains the entire segment connecting them.

Convex sets have an important property: the intersection (common part) of any number of convex sets is a convex set.

Among the points of a convex set, one can single out interior, boundary, and corner points.

A point of a set is called interior if some neighborhood of it contains points of this set only.

A point of a set is called a boundary point if any neighborhood of it contains both points belonging to the given set and points not belonging to it.

Corner points are of particular interest in linear programming problems. A point of a set is called a corner (or extreme) point if it is not interior to any segment entirely belonging to the given set.

Fig. 2.4 shows examples of various points of a polygon: an interior point (M), a boundary point (N), and corner points (A, B, C, D, E). Point A is a corner point, since it is not interior to any segment belonging entirely to the polygon, for example the segment AP; point A is interior to the segment KL, but this segment does not belong entirely to the polygon.

For a convex set, the corner points always coincide with the vertices of the polygon (polyhedron), whereas for a non-convex set this is not necessarily so. A set of points is called closed if it includes all its boundary points. A set of points is called bounded if there is a ball (circle) of finite radius centered at some point of the set that contains the set entirely; otherwise the set is called unbounded.

A convex closed set of points in a plane that has a finite number of corner points is called a convex polygon if it is bounded, and a convex polygonal region if it is unbounded.

Geometric meaning of solutions of inequalities, equations and their systems

Let us consider solutions of inequalities.

Statement 1. The set of solutions of the inequality with two variables a11x1 + a12x2 <= b1 is one of the two half-planes into which the whole plane is divided by the line a11x1 + a12x2 = b1, including this line itself; the other half-plane, together with the same line, is the set of solutions of the inequality a11x1 + a12x2 >= b1.

To determine the desired half-plane (upper or lower), it is recommended to take an arbitrary control point not lying on its boundary, the constructed line. If the inequality is satisfied at the control point, then it is satisfied at all points of the half-plane containing that point and is not satisfied at any point of the other half-plane. Conversely, if the inequality is not satisfied at the control point, it is not satisfied anywhere in the half-plane containing that point and is satisfied at all points of the other half-plane. As a control point it is convenient to take the origin O(0; 0), provided it does not lie on the constructed line.
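The control-point test can be sketched in a few lines; the inequality coefficients below are hypothetical:

```python
# Sketch of the control-point test described above: to decide which
# half-plane of a11*x1 + a12*x2 <= b1 to shade, substitute a point
# not on the line (the origin, when possible). Coefficients are
# hypothetical.

def origin_side(a11, a12, b1):
    """True if the origin satisfies a11*x1 + a12*x2 <= b1."""
    assert b1 != 0, "origin lies on the line; pick another control point"
    return a11 * 0 + a12 * 0 <= b1

print(origin_side(2, 3, 6))    # origin satisfies 0 <= 6
print(origin_side(2, 3, -6))   # origin violates 0 <= -6
```

When the line passes through the origin (b1 = 0), any other point off the line, e.g. (1, 0), works as the control point.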

Consider the set of solutions to systems of inequalities.

Statement 2. The set of solutions of a compatible system of m linear inequalities in two variables is a convex polygon (or a convex polygonal region).

Each of the inequalities, according to Statement 1, defines one of the half-planes, which is a convex set of points. The set of solutions of a compatible system of linear inequalities consists of the points belonging to the solution half-planes of all the inequalities, i.e., to their intersection. By the statement on the intersection of convex sets, this set is convex and contains a finite number of corner points, i.e., it is a convex polygon (or a convex polygonal region).

The coordinates of the corner points - the vertices of the polygon are found as the coordinates of the intersection points of the corresponding lines.

When constructing areas of solutions for systems of inequalities, other cases may also occur: the set of solutions is a convex polygonal region (Fig. 2.9, a); one point (Fig. 2.9, b); an empty set when the system of inequalities is inconsistent (Fig. 2.9, c).

5. Geometric method for solving LP problems

Optimal solution of the LP problem

Theorem 1. If the LP problem has an optimal solution, then the linear function takes its maximum value at one of the corner points of the solution polyhedron. If a linear function takes on a maximum value at more than one corner point, then it takes it at any point that is a convex linear combination of those points.

The theorem indicates the principal way of solving LP problems. Indeed, according to this theorem, instead of studying an infinite set of feasible solutions, in order to find the desired optimal solution among them, it is necessary to study only a finite number of corner points of the solution polyhedron.

The following theorem is devoted to the analytical method for finding corner points.

Theorem 2. Each admissible basic solution of the LP problem corresponds to a corner point of the solution polyhedron, and vice versa, to each corner point of the solution polyhedron there corresponds an admissible basic solution.

An important corollary immediately follows from Theorems 1 and 2: if an LP problem has an optimal solution, then it coincides with at least one of its admissible basic solutions.

So, the optimum of the linear function of the LP problem should be sought among a finite number of its admissible basic solutions.

So, the set of admissible solutions (the solution polyhedron) of an LP problem is a convex polyhedron (or convex polyhedral region), and the optimal solution of the problem is attained at at least one of the corner points of the solution polyhedron.

Consider a problem in standard form with two variables (n = 2).

Let the geometric representation of the constraint system be the polygon ABCDE (Fig. 4.1). It is necessary to find among the points of this polygon the point at which the linear function F = c1x1 + c2x2 takes its maximum (or minimum) value.

Consider the so-called level line of the linear function F, i.e., the line along which this function takes one and the same fixed value a: F = a, or c1x1 + c2x2 = a.

On the solution polygon, find the point through which the level line of the function F with the largest (if the linear function is maximized) or smallest (if it is minimized) level passes.

The equation of a level line, c1x1 + c2x2 = a, is the equation of a straight line. At different levels a, the level lines are parallel, since their slopes are determined only by the ratio of the coefficients c1 and c2 and are therefore equal. Thus the level lines of F form a family of parallel lines, usually situated at an angle to the coordinate axes.

An important property of the level lines of a linear function is that under a parallel shift of the line in one direction the level only increases, while under a shift in the other direction it only decreases. The vector c = (c1, c2), drawn from the origin, indicates the direction of fastest increase of the function F. The level lines of the linear function are perpendicular to the vector c = (c1, c2).
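The perpendicularity claim is easy to verify numerically; the coefficients below are hypothetical:

```python
# Quick numeric check of the claim above: the direction vector of a
# level line c1*x1 + c2*x2 = a is perpendicular to c = (c1, c2).
# Coefficients are hypothetical.
c1, c2 = 3.0, 2.0
direction = (-c2, c1)          # moving along this vector keeps F constant
dot = c1 * direction[0] + c2 * direction[1]
print(dot)                     # zero dot product: perpendicular
```

Moving along (-c2, c1) changes F by c1*(-c2) + c2*c1 = 0, which is why the whole family of level lines is orthogonal to c.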

The order of the graphical solution of the LP problem:

1. Construct the solution polygon.

2. Construct the vector c = (c1, c2) and draw a level line of the linear function F for it, for example F = 0.

3. Shift the line F = 0 parallel to itself in the direction of the vector c (or −c) to find the point at which F reaches its maximum (or minimum).

4. Solving together the equations of the lines intersecting at the optimum point, find its coordinates.

5. Calculate Fmax (Fmin).

Comment. The minimum point is the "entry" point into the decision polygon, and the maximum point is the "exit" point from the polygon.
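The corner-point property gives a direct computational analogue of the graphical method: enumerate the pairwise intersections of the constraint boundary lines (including the coordinate axes), discard infeasible points, and keep the corner with the best value of F. A minimal sketch for two variables; the example problem at the end is hypothetical, not taken from the text:

```python
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Maximize F = c1*x1 + c2*x2 subject to A[i][0]*x1 + A[i][1]*x2 <= b[i],
    x1, x2 >= 0, by enumerating corner points (intersections of constraint
    boundaries, including the axes x1 = 0 and x2 = 0)."""
    lines = [(row[0], row[1], bi) for row, bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # the axes x1 = 0, x2 = 0

    eps, best = 1e-9, None
    for (p1, p2, pb), (q1, q2, qb) in combinations(lines, 2):
        det = p1 * q2 - p2 * q1
        if abs(det) < eps:                         # parallel lines, no corner
            continue
        x1 = (pb * q2 - p2 * qb) / det             # Cramer's rule
        x2 = (p1 * qb - pb * q1) / det
        if x1 < -eps or x2 < -eps:                 # violates x >= 0
            continue
        if any(a1 * x1 + a2 * x2 > bi + eps for (a1, a2), bi in zip(A, b)):
            continue                               # violates a constraint
        f = c[0] * x1 + c[1] * x2
        if best is None or f > best[0]:
            best = (f, x1, x2)
    return best  # (Fmax, x1, x2), or None if the problem is infeasible

# Hypothetical example: F = 2x1 + 3x2 -> max, x1 + x2 <= 4, x1 + 2x2 <= 6.
print(solve_2var_lp([2, 3], [[1, 1], [1, 2]], [4, 6]))   # -> (10.0, 2.0, 2.0)
```

The optimum lies at the corner (2, 2), the intersection of the two constraint lines, exactly as the "exit" point of the level line in the graphical picture.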

6. General idea of the simplex method. Geometric interpretation

The graphical method is applicable to a very narrow class of linear programming problems: it can effectively solve problems containing no more than two variables. The main theorems of linear programming have been considered; from them it follows that if a linear programming problem has an optimal solution, then it corresponds to at least one corner point of the solution polyhedron and coincides with at least one of the admissible basic solutions of the constraint system. This suggests a way to solve any linear programming problem: enumerate the finite number of feasible basic solutions of the system of constraints and choose among them the one at which the objective function attains its optimal value. Geometrically this corresponds to enumerating all corner points of the solution polyhedron. Such an enumeration will eventually lead to the optimal solution (if it exists), but its practical implementation faces enormous difficulties, since for real problems the number of feasible basic solutions, though finite, can be extremely large.

The number of admissible basic solutions to be enumerated can be reduced if the enumeration is carried out not randomly but with the changes in the linear function taken into account, i.e., ensuring that each next solution is "better" (or at least "not worse") than the previous one in terms of the value of the linear function (increasing it when seeking the maximum, decreasing it when seeking the minimum). Such an enumeration reduces the number of steps on the way to the optimum. Let us explain this with a graphical example.

Let the region of feasible solutions be represented by the polygon ABCDE. Suppose its corner point A corresponds to the initial admissible basic solution. A random enumeration would have to try five feasible basic solutions corresponding to the five corner points of the polygon. However, the drawing shows that starting from vertex A it is advantageous to pass to the adjacent vertex B, and then to the optimal point C. Instead of five, only three vertices were visited, with the linear function improving at each step.

The idea of successive improvement of the solution forms the basis of a universal method for solving linear programming problems: the simplex method, or the method of successive improvement of the plan.

The geometric meaning of the simplex method is a successive transition from one vertex of the constraint polyhedron (the initial vertex) to a neighboring one at which the linear function takes a better (at least not worse) value with respect to the goal of the problem, until the optimal solution is found: the vertex at which the optimal value of the objective function is attained (if the problem has a finite optimum).

The simplex method was first proposed by the American scientist G. Dantzig in 1949, but as early as 1939 the ideas of the method were developed by the Russian scientist L. V. Kantorovich.

The simplex method is universal: it allows one to solve any linear programming problem. Nowadays it is used for computer calculations, although simple examples can also be solved by hand.

To implement the simplex method - successive improvement of the solution - three main elements are needed:

a method for determining an initial feasible basic solution of the problem;

a rule for the transition to a better (more precisely, not worse) solution;

a criterion for checking the optimality of the solution found.

To use the simplex method, the linear programming problem must be reduced to canonical form, i.e., the system of constraints must be written as equations.

The literature describes in sufficient detail how to find the initial reference plan (the initial feasible basic solution), including by the artificial-basis method, how to find the optimal reference plan, and how to solve problems using simplex tables.

7. Algorithm of the simplex method.

Let us consider the solution of an LP problem by the simplex method, presented for the maximization problem.

1. According to the condition of the problem, its mathematical model is compiled.

2. The constructed model is converted to canonical form. In the process, a basis with an initial reference plan may emerge.

3. The canonical model of the problem is written in the form of a simplex table so that all free terms are non-negative. If the initial reference plan has been selected, go to step 5.

In the simplex table, the system of constraint equations and the objective function are entered as expressions resolved with respect to the initial basis. The row containing the coefficients of the objective function F is called the F-row, or the objective-function row.

4. The initial reference plan is found by performing simplex transformations with positive resolving elements corresponding to the minimum simplex ratios, ignoring the signs of the elements of the F-row. If in the course of the transformations a row appears in which all elements except the (nonzero) free term are zeros, then the system of constraint equations of the problem is inconsistent. If a row appears in which, apart from the free term, there are no other positive elements, then the system of constraint equations has no non-negative solutions.

The reduction of system (2.55), (2.56) to a new basis is called a simplex transformation. If a simplex transformation is viewed as a formal algebraic operation, one can see that it redistributes the roles between two variables in a system of linear functions: one variable passes from dependent to independent, and the other vice versa, from independent to dependent. This operation is known in algebra as a step of Jordan elimination.

5. The found initial basic plan is examined for optimality:

a) if there are no negative elements in the F-row (not counting the free term), the plan is optimal. If, in addition, the F-row contains no zeros, the optimal plan is unique; if it contains at least one zero, there are infinitely many optimal plans;

b) if the F-row contains at least one negative element whose column has no positive elements, then the linear function is unbounded on the feasible region and the problem has no optimal solution;

c) if the F-row contains at least one negative element, and its column contains at least one positive element, then one can pass to a new reference plan that is closer to the optimal one. To do this, take that column as the resolving column, find the resolving row by the minimum simplex ratio, and perform a simplex transformation. The resulting reference plan is again examined for optimality. The process is repeated until an optimal plan is obtained or until the problem is found to be unsolvable.

The column of coefficients of the variable being introduced into the basis is called the resolving column. Thus, by choosing the variable to enter the basis (i.e., the resolving column) by a negative element of the F-row, we ensure that the function F increases.

It is somewhat harder to determine the variable to be excluded from the basis. To do this, one forms the ratios of the free terms to the positive elements of the resolving column (such ratios are called simplex ratios) and finds the smallest among them; it determines the (resolving) row containing the variable to be excluded. The choice of the variable to leave the basis (i.e., of the resolving row) by the minimum simplex ratio guarantees, as already established, that the basic components of the new reference plan are non-negative.

In step 3 of the algorithm it is assumed that all elements of the column of free terms are non-negative. This requirement is not mandatory, but if it is met, all subsequent simplex transformations are performed only with positive resolving elements, which is convenient for calculations. If there are negative numbers in the column of free terms, then the resolving element is chosen as follows:

1) examine the row corresponding to some negative free term, say the t-row, and select any negative element in it; the corresponding column is taken as the resolving column (we assume the constraints of the problem are consistent);

2) form the ratios of the elements of the column of free terms to the corresponding elements of the resolving column that have the same signs (simplex ratios);

3) choose the smallest of the simplex ratios; it determines the resolving row. Let it be, for example, the r-row;

4) the resolving element is found at the intersection of the resolving column and the resolving row. If the resolving element turns out to lie in the t-row, then after the simplex transformation the free term of this row becomes positive. Otherwise, at the next step the t-row is addressed again. If the problem is solvable, then after a certain number of steps no negative elements remain in the column of free terms.
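Steps 3-5 of the algorithm can be sketched in code. This is a minimal tableau implementation, under the assumption that the problem is already in the form max c·x, A·x <= b, x >= 0 with b >= 0, so that the slack variables supply a ready initial reference plan; the example data are hypothetical:

```python
def simplex_max(c, A, b):
    """Maximize c*x subject to A*x <= b, x >= 0 (all b[i] >= 0).
    Slack variables supply the initial basic plan. Returns (Fmax, x)."""
    m, n = len(A), len(c)
    # Simplex table: constraint rows, slack columns, free-term column.
    table = [[float(v) for v in A[i]]
             + [1.0 if i == j else 0.0 for j in range(m)]
             + [float(b[i])] for i in range(m)]
    # F-row holds -c, so a negative entry marks a column that can improve F.
    f_row = [-float(cj) for cj in c] + [0.0] * (m + 1)
    basis = list(range(n, n + m))        # slack variables are basic at start

    while True:
        col = min(range(n + m), key=lambda j: f_row[j])
        if f_row[col] >= -1e-9:          # no negative elements: plan optimal
            break
        ratios = [(table[i][-1] / table[i][col], i)
                  for i in range(m) if table[i][col] > 1e-9]
        if not ratios:                   # case b): F is unbounded
            raise ValueError("objective function is unbounded")
        _, row = min(ratios)             # minimum simplex ratio
        piv = table[row][col]            # resolving element
        table[row] = [v / piv for v in table[row]]
        for i in range(m):               # Jordan elimination step
            if i != row:
                k = table[i][col]
                table[i] = [v - k * w for v, w in zip(table[i], table[row])]
        k = f_row[col]
        f_row = [v - k * w for v, w in zip(f_row, table[row])]
        basis[row] = col

    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = table[i][-1]
    return f_row[-1], x

# Hypothetical problem: F = 2x1 + 3x2 -> max, x1 + x2 <= 4, x1 + 2x2 <= 6.
print(simplex_max([2, 3], [[1, 1], [1, 2]], [4, 6]))   # -> (10.0, [2.0, 2.0])
```

Each pass of the loop is one transition to a neighboring vertex: the resolving column is chosen by the most negative F-row element, the resolving row by the minimum simplex ratio.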

8. Inverse matrix method

Consider an LP problem of the form:

A is the constraint matrix;

C = (c1, c2, …, cn) is the row vector of objective coefficients;

X = (x1, x2, …, xn) is the vector of variables;

b is the vector of the right-hand side.

We assume that all equations are linearly independent, i.e., rank(A) = m. In this case a basis is a nonsingular minor of order m of the matrix A; that is, there is at least one submatrix B of order m such that |B| ≠ 0. All unknowns corresponding to B are called basic; all the others are free.

Let B be some basis. Then, by permuting the columns of the matrix A, one can always bring A to the form A = (B | N),

where N is the submatrix consisting of the columns of A that do not belong to the basis. In the same way the vector x can be split into xB, the vector of basic variables, and xN, the vector of free variables.

Any solution of problem (1) satisfies the condition A x = b, and consequently the system takes the form B xB + N xN = b.

Since |B| ≠ 0, the inverse matrix B^(-1) exists. Multiplying on the left by B^(-1), we obtain

xB = B^(-1) b - B^(-1) N xN

- the general solution.

A basic solution (with respect to the basis B) is the particular solution of problem (2) obtained under the condition xN = 0; it is then uniquely determined: xB = B^(-1) b.

A basic solution is called feasible if xB = B^(-1) b >= 0.

The basis corresponding to a feasible basic solution is called a feasible basis. A basic solution is called degenerate if the vector xB has zero components.

The general solution contains all existing solutions. Let us return to the objective function. Denote by cB the vector of coefficients of the basic variables and by cN that of the free variables.

Thus we get F = cB xB + cN xN. Substituting xB from the general solution: F = cB B^(-1) b + (cN - cB B^(-1) N) xN.

Statement (optimality criterion for a basic solution).

Suppose d = cN - cB B^(-1) N >= 0. Then the basic solution is optimal (for the minimization problem). If d has a negative component, then the basic solution is not optimal.

Proof. Let d >= 0. Consider the basic solution xB = B^(-1) b, xN = 0.

Then F0 = cB B^(-1) b is the value of the objective function at the basic solution.

Let there be another feasible solution (xB, xN) with xN >= 0.

Then F = cB B^(-1) b + d xN >= F0, since d >= 0 and xN >= 0.

Thus the basic solution delivers the minimum. Suppose, on the contrary, that the condition does not hold, i.e., there exists a component dj < 0.

Then there is a solution for which the value of the objective function is less than its value at the basic solution.

Let dj < 0 correspond to the free variable xj; assign it a value t > 0 and introduce it into the basis, while some other variable is derived from the basis and becomes free.

How is t determined? All free variables except xj remain equal to 0.

Then, from the general solution, xB = B^(-1) b - t (B^(-1) N)j, where (B^(-1) N)j is the column of B^(-1) N corresponding to xj.

Requiring xB >= 0, we take t equal to the smallest ratio of the components of B^(-1) b to the positive components of (B^(-1) N)j - a necessary condition.

A basic solution is called regular if B^(-1) b > 0. The variable on which this smallest ratio is attained is derived from the basis. With the new solution the objective function decreases, because F = F0 + dj t < F0 for dj < 0 and t > 0.

Algorithm:

1. Write the LP problem in standard form.

2. Keep only linearly independent equations.

3. Find a matrix B such that |B| ≠ 0 and the corresponding basic solution.

We calculate d = cN - cB B^(-1) N:

if d >= 0, then the basic solution is optimal;

if some component dj < 0, then we increase the corresponding free variable xj, thus finding another solution at which one of the basic variables becomes 0. We remove this variable from the basis and introduce xj. We obtain a new basis B2, conjugate to the basis B1, and then calculate d again.

4. If an optimal solution exists, then after a finite number of steps we obtain it.

Geometrically the procedure is interpreted as a transition from a corner point to a conjugate corner point along the boundary of the set X of feasible solutions of the problem. Since there are finitely many corner points and the strict decrease of the function F(x) forbids passing through the same corner point twice, if an optimal solution exists, it is obtained after a finite number of steps.
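A minimal numerical sketch of the key computation of the method, xB = B^(-1) b, assuming for simplicity a 2x2 basis so that the inverse can be written explicitly; the matrix and right-hand side are hypothetical:

```python
def inv2(B):
    """Inverse of a 2x2 matrix B (assumes |B| != 0)."""
    (a, b), (c, d) = B
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def basic_solution(B, b):
    """Basic solution xB = B^(-1) * b for the chosen basis B
    (the free variables are set to zero)."""
    Binv = inv2(B)
    return [sum(Binv[i][k] * b[k] for k in range(2)) for i in range(2)]

# Hypothetical system: the basis columns form B, right-hand side b.
B = [[1.0, 1.0], [1.0, 2.0]]
b = [4.0, 6.0]
xB = basic_solution(B, b)
print(xB)                       # -> [2.0, 2.0]
# This basic solution is feasible, since all its components are >= 0.
assert all(v >= 0 for v in xB)
```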

9. Economic interpretation of the problem; the dual problem of the use of resources

Problem. For the manufacture of two types of products P1 and P2, four types of resources S1, S2, S3, S4 are used. The stocks of the resources and the number of units of each resource spent on manufacturing a unit of each product are given. The profit obtained from a unit of product P1 and P2 is known. It is required to draw up a production plan at which the profit from sales is maximal.

Problem I (original):

F = c1x1 + c2x2 + … + cnxn -> max with constraints:

and the non-negativity condition x1 >= 0, x2 >= 0, …, xn >= 0.

Draw up a production plan X = (x1, x2, …, xn) at which the profit (revenue) from the sale of the products is maximal, provided that the consumption of resources for each type of product does not exceed the available stocks.

Problem II (dual):

Z = b1y1 + b2y2 + … + bmym -> min

with constraints:

and the non-negativity condition

y1 >= 0, y2 >= 0, …, ym >= 0.

Find a set of prices (estimates) of resources Y = (y1, y2, …, ym) at which the total cost of the resources is minimal, provided that the cost of the resources used in the production of each type of product is no less than the profit (revenue) from the sale of that product.

In the above model, bi (i = 1, 2, …, m) denotes the stock of resource Si; aij is the number of units of resource Si consumed in the production of a unit of product Pj (j = 1, 2, …, n); cj is the profit (revenue) from the sale of a unit of product Pj (or the price of product Pj).

Suppose some organization has decided to purchase the enterprise's resources S1, S2, …, Sm, and it is necessary to set optimal prices y1, y2, …, ym for these resources. Obviously the purchasing organization is interested in making the cost Z of all the resources, in quantities b1, b2, …, bm at prices y1, y2, …, ym respectively, minimal, i.e., Z = b1y1 + b2y2 + … + bmym -> min.

On the other hand, the enterprise selling the resources is interested in the revenue being no less than the amount the enterprise could obtain by processing the resources into finished products.

To produce a unit of product P1, a11 units of resource S1, a21 units of resource S2, …, ai1 units of resource Si, …, am1 units of resource Sm are spent, at prices y1, y2, …, yi, …, ym respectively. Therefore, to satisfy the seller's requirements, the cost of the resources consumed in the manufacture of a unit of product P1 must be no less than its price c1, i.e., a11y1 + a21y2 + … + am1ym >= c1.

Similarly, constraints in the form of inequalities can be composed for each type of product P1, P2, …, Pn. The economic-mathematical model and meaningful interpretation of the dual problem II thus obtained are given in the right part of the table.

The resource prices y1, y2, …, ym have received various names in the economic literature: accounting, implicit, shadow. The meaning of these names is that they are conditional, "artificial" prices. In contrast to the "external" prices c1, c2, …, cn for products, which as a rule are known before production begins, the resource prices y1, y2, …, ym are internal: they are not set from outside but are determined directly as a result of solving the problem, and therefore they are often called resource estimates.

10. Mutually dual LP problems and their properties

Let us formally consider two problems I and II of linear programming presented in the table, abstracting from the meaningful interpretation of the parameters included in their economic and mathematical models.

Both problems have the following properties:

1. In one problem, they are looking for the maximum of a linear function, in the other - a minimum.

2. The coefficients of the variables in the linear function of one problem are the free terms of the system of constraints in the other.

3. Each of the problems is given in standard form: in the maximization problem all inequalities are of the form "<=", and in the minimization problem all inequalities are of the form ">=".

4. The matrices of coefficients of the variables in the constraint systems of the two problems are transposes of each other.

5. The number of inequalities in the system of constraints of one problem is the same as the number of variables in another problem.

6. The conditions of non-negativity of variables are preserved in both problems.

Comment. If the non-negativity condition is imposed on the j-th variable of the original problem, then the j-th constraint of the dual problem will be an inequality, but if the j-th variable can take both positive and negative values, then the j-th constraint of the dual problem will be an equation; the constraints of the original problem and the variables of the dual problem are similarly related.

Two linear programming problems I and II with the indicated properties are called symmetric mutually dual problems. In what follows, for simplicity, we will call them simply dual problems.

Each LP problem can be associated with its dual problem.

11. Algorithm for composing the dual problem:

1. Bring all the inequalities of the constraint system of the original problem to the same sense: if the maximum of the linear function is sought in the original problem, reduce all inequalities of the constraint system to the form "<="; if the minimum, to the form ">=". Inequalities for which this requirement is not met are multiplied by -1.

2. Compose the augmented matrix of the system, which includes the matrix of coefficients of the variables, the column of free terms of the constraint system, and the row of coefficients of the variables in the linear function.

3. Find the matrix transposed to the matrix A.

4. Formulate the dual problem based on the resulting matrix and the conditions of non-negativity of the variables: compose the objective function of the dual problem, taking as the coefficients of its variables the free terms of the constraint system of the original problem; compose the constraint system of the dual problem, taking as coefficients of the variables the elements of the transposed matrix and as free terms the coefficients of the variables in the objective function of the original problem, writing the inequalities in the opposite sense; write down the conditions of non-negativity of the variables of the dual problem.
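The steps above can be sketched for the standard maximization problem, where step 1 is already satisfied and the opposite sense of the inequalities follows from the max -> min switch. A minimal sketch with hypothetical data:

```python
def make_dual(c, A, b):
    """Given the original problem  F = c*x -> max,  A*x <= b,  x >= 0,
    return the data (b, A^T, c) of the dual problem
    Z = b*y -> min,  A^T * y >= c,  y >= 0  (steps 2-4 of the algorithm)."""
    # Step 3: transpose the matrix of coefficients.
    A_t = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    # Step 4: free terms b become objective coefficients, and the objective
    # coefficients c become the free terms of the dual constraints.
    return b, A_t, c

# Hypothetical original problem: F = 2x1 + 3x2 -> max,
#   x1 + 2x2 <= 10,  3x1 + x2 <= 15,  x1, x2 >= 0.
obj, A_t, rhs = make_dual([2, 3], [[1, 2], [3, 1]], [10, 15])
print(obj)   # -> [10, 15]          (Z = 10y1 + 15y2 -> min)
print(A_t)   # -> [[1, 3], [2, 1]]  (y1 + 3y2 >= 2,  2y1 + y2 >= 3)
print(rhs)   # -> [2, 3]
```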

12. First duality theorem

The connection between optimal solutions of dual problems is established with the help of duality theorems.

A sufficient criterion of optimality.

If X* = (x1*, x2*, …, xn*) and Y* = (y1*, y2*, …, ym*) are admissible solutions of the mutually dual problems for which the equality F(X*) = Z(Y*) holds,

then X* is the optimal solution of the original problem I, and Y* is the optimal solution of the dual problem II.

In addition to the sufficient criterion of optimality for mutually dual problems, there are other important relations between their solutions. First of all, two questions arise: does each pair of dual problems always have optimal solutions simultaneously? Can one of the dual problems have a solution while the other does not? These questions are answered by the following theorem.

The first (main) duality theorem. If one of the mutually dual problems has an optimal solution, then so does the other, and the optimal values of their linear functions are equal:

Fmax = Zmin, or F(X*) = Z(Y*).

If the linear function of one of the problems is unbounded, then the conditions of the other problem are contradictory (the problem has no solution).

Comment. The converse of the second part of the main duality theorem is not true in general: from the fact that the conditions of the original problem are contradictory it does not follow that the linear function of the dual problem is unbounded.

Economic meaning of the first duality theorem.

The production plan X* = (x1*, x2*, …, xn*) and the set of resource prices (estimates) Y* = (y1*, y2*, …, ym*) are optimal if and only if the profit (revenue) from the products, found at the "external" (known in advance) prices c1, c2, …, cn, equals the cost of the resources at the "internal" (determined only from the solution of the problem) prices y1, y2, …, ym. For all other plans X and Y of the two problems, the profit (revenue) from the products is always less than or equal to the cost of the resources.

The economic meaning of the first duality theorem can also be interpreted as follows: it makes no difference to the enterprise whether to produce the products according to the optimal plan X* = (x1*, x2*, …, xn*) and obtain the maximum profit (revenue) Fmax, or to sell the resources at the optimal prices Y* = (y1*, y2*, …, ym*) and recover from the sale the equal minimum cost of the resources Zmin.
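The theorem can be checked numerically on a small hypothetical pair of dual problems, solving each by corner-point enumeration (valid here because both problems have two variables and finite optima):

```python
from itertools import combinations

def corner_optimum(c, A, b, sense):
    """Optimum of c*x over {A*x <= b (sense='max') or A*x >= b (sense='min'),
    x >= 0} for two variables, by corner-point enumeration."""
    lines = [(r[0], r[1], v) for r, v in zip(A, b)] + [(1, 0, 0), (0, 1, 0)]
    eps, best = 1e-9, None
    for (p1, p2, pb), (q1, q2, qb) in combinations(lines, 2):
        det = p1 * q2 - p2 * q1
        if abs(det) < eps:
            continue                                  # parallel boundaries
        x = ((pb * q2 - p2 * qb) / det, (p1 * qb - pb * q1) / det)
        if min(x) < -eps:
            continue
        ok = all((r[0] * x[0] + r[1] * x[1] >= v - eps) if sense == 'min'
                 else (r[0] * x[0] + r[1] * x[1] <= v + eps)
                 for r, v in zip(A, b))
        if not ok:
            continue
        f = c[0] * x[0] + c[1] * x[1]
        if best is None:
            best = f
        elif sense == 'max':
            best = max(best, f)
        else:
            best = min(best, f)
    return best

# Hypothetical pair of mutually dual problems:
# primal: F = 2x1 + 3x2 -> max,  x1 + x2 <= 4,  x1 + 2x2 <= 6
# dual:   Z = 4y1 + 6y2 -> min,  y1 + y2 >= 2,  y1 + 2y2 >= 3
F_max = corner_optimum([2, 3], [[1, 1], [1, 2]], [4, 6], 'max')
Z_min = corner_optimum([4, 6], [[1, 1], [1, 2]], [2, 3], 'min')
print(F_max, Z_min)   # both equal 10.0, as the first duality theorem states
```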

13. Second duality theorem

Let two mutually dual problems be given. If each of them is to be solved by the simplex method, they must be reduced to canonical form; for this, m non-negative variables must be introduced into the constraint system of problem I (in short notation), and n non-negative variables into the constraint system of problem II, where i (j) is the number of the inequality into which the additional variable is introduced.

The systems of constraints for each of the mutually dual problems will take the form:

Let's establish a correspondence between the initial variables of one of the dual problems and the additional variables of the other problem (table).


Theorem. The positive (nonzero) components of the optimal solution of one of the mutually dual problems correspond to the zero components of the optimal solution of the other problem, i.e., for any i = 1, 2, …, m and j = 1, 2, …, n: if xj* > 0, then ym+j* = 0; if xn+i* > 0, then yi* = 0, and similarly, if yi* > 0, then xn+i* = 0; if ym+j* > 0, then xj* = 0.

if, then ; if, then.

An important conclusion follows from this theorem: the introduced correspondence between the variables of the mutually dual problems, upon reaching the optimum (i.e., at the last step of solving each problem by the simplex method), is a correspondence between the basic (as a rule, nonzero) variables of one of the dual problems and the non-basic (zero) variables of the other problem when they form admissible basic solutions.

The second duality theorem. The components of the optimal solution of the dual problem are equal to the absolute values of the coefficients of the corresponding variables of the linear function of the original problem, expressed in terms of the non-basic variables of its optimal solution.

Comment. If the uniqueness of the optimal solution of one of the mutually dual problems is violated, then the optimal solution of the other problem is degenerate. This is because when the uniqueness of the optimal solution of the original problem is violated, at least one of the basic variables is absent from the expression of the linear function of its optimal solution in terms of non-basic variables.

14. Objectively determined assessments and their meaning

The components of the optimal solution of the dual problem are called optimal (dual) estimates of the original problem. Academician L. V. Kantorovich called them objectively determined estimates (in the literature they are also called hidden income).

The additional variables of the original problem I, representing the difference between the stocks bi of resources S1, S2, S3, S4 and their consumption, express the leftover resources, while the additional variables of the dual problem II, representing the difference between the cost of the resources spent on producing a unit of output and the price cj of products P1, P2, express the excess of cost over price.

Thus, objectively determined estimates of resources measure the degree of scarcity of the resources: under the optimal production plan, scarce (i.e., fully used) resources receive nonzero estimates, and non-scarce ones receive zero estimates. The value yi* is the estimate of the i-th resource. The higher the estimate yi*, the scarcer the resource. For a non-scarce resource yi* = 0.

So, only cost-effective, non-loss-making types of products can enter the optimal production plan (although the criterion of profitability here is peculiar: the price of the product does not exceed the cost of the resources consumed in its manufacture, but is exactly equal to it).

Third duality theorem. The components of the optimal solution of the dual problem are equal to the values of the partial derivatives of the linear function Fmax(b1, b2, …, bm) with respect to the corresponding arguments, i.e., ∂Fmax/∂bi = yi*, i = 1, 2, …, m.

Objectively determined estimates of resources show by how many monetary units the maximum profit (revenue) from the sale of products changes when the stock of the corresponding resource changes by one unit.
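This statement can be illustrated numerically: re-solve a small hypothetical problem with the stock of one resource increased by one unit and compare the optimal values (assuming the change is small enough that the estimates remain stable):

```python
from itertools import combinations

def lp_max(c, A, b):
    """Fmax of c*x over {A*x <= b, x >= 0}, two variables,
    by enumerating corner points."""
    lines = [(r[0], r[1], v) for r, v in zip(A, b)] + [(1, 0, 0), (0, 1, 0)]
    best, eps = None, 1e-9
    for (p1, p2, pb), (q1, q2, qb) in combinations(lines, 2):
        det = p1 * q2 - p2 * q1
        if abs(det) < eps:
            continue
        x1 = (pb * q2 - p2 * qb) / det
        x2 = (p1 * qb - pb * q1) / det
        if x1 < -eps or x2 < -eps:
            continue
        if all(r[0] * x1 + r[1] * x2 <= v + eps for r, v in zip(A, b)):
            f = c[0] * x1 + c[1] * x2
            best = f if best is None else max(best, f)
    return best

# Hypothetical problem: F = 2x1 + 3x2 -> max, x1 + x2 <= 4, x1 + 2x2 <= 6.
c, A = [2, 3], [[1, 1], [1, 2]]
base = lp_max(c, A, [4, 6])      # Fmax with stocks b = (4, 6)
bumped = lp_max(c, A, [5, 6])    # the stock of the first resource +1 unit
print(bumped - base)             # -> 1.0, the estimate y1* of resource 1
```

The difference in Fmax equals the dual estimate of the changed resource, as the third duality theorem asserts.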

Dual estimates can serve as a tool for analysis and for making correct decisions under constantly changing production conditions. For example, with the help of objectively determined estimates of resources it is possible to compare the optimal conditional costs and production results.

Objectively determined estimates of resources allow one to judge the effect of only relatively small changes in the resources, not arbitrary ones. Under abrupt changes the estimates themselves may become different, making it impossible to use them for analyzing production efficiency. From the ratios of objectively determined estimates, calculated norms of resource substitution can be determined, under which replacements made within the range of stability of the dual estimates do not affect the efficiency of the optimal plan.

Conclusion. The dual estimates are:

1. An indicator of scarcity of resources and products.

2. An indicator of the influence of restrictions on the value of the objective function.

3. An indicator of the efficiency of production of certain types of products from the standpoint of the optimality criterion.

4. A tool for comparing total contingent costs and results.

15. Statement of the transport problem according to the criterion of cost.

The transport problem (TP) - the problem of finding the most economical plan for transporting a homogeneous or interchangeable product from points of production (departure stations) to points of consumption (destination stations) - is the most important particular LP problem, with extensive practical applications not only to transport.

The TP stands out within LP for the definiteness of its economic characteristics, the features of its mathematical model, and the existence of specific solution methods.

The simplest formulation of the TP by the cost criterion is as follows: at m points of departure A1, …, Am there are, respectively, a1, …, am units of a homogeneous cargo (supplies), which must be delivered to n consumers B1, …, Bn in quantities b1, …, bn units (demands). The transportation cost cij of moving a unit of cargo from the i-th point of departure to the j-th point of consumption is known.

It is required to draw up a transportation plan, i.e., to find how many units of cargo should be sent from the i-th point of departure to the j-th point of consumption, so that all demands are satisfied and the total transportation costs are minimal.

For clarity, the conditions of the TP are presented in a table called the distribution table.

Provider      Consumers                                        Cargo stock
              B1          B2          …     Bn
A1            c11 / x11   c12 / x12   …     c1n / x1n          a1
…             …           …           …     …                  …
Am            cm1 / xm1   cm2 / xm2   …     cmn / xmn          am
Need          b1          b2          …     bn

(each cell shows the tariff cij and the shipment xij)
Here the amount of cargo transported from the i-th point of departure to the j-th destination is xij, the stock of cargo at the i-th point of departure is ai >= 0, and the demand of the j-th destination is bj >= 0. It is assumed that all xij >= 0.

The matrix C = (cij) is called the tariff matrix (the matrix of costs, or transportation costs).

A plan of the transport problem is a matrix X = (xij), where each number xij denotes the number of cargo units to be delivered from the i-th point of departure to the j-th destination. The matrix (xij) is called the traffic matrix.

The total costs associated with implementing the transportation plan are represented by the objective function

F = Σ Σ cij xij -> min (1)

The variables xij must satisfy the restrictions on stocks, the restrictions on consumers, and the non-negativity conditions:

Σj xij = ai, i = 1, 2, …, m - restrictions on stocks (2);

Σi xij = bj, j = 1, 2, …, n - restrictions on consumers (2);

xij >= 0, i = 1, 2, …, m; j = 1, 2, …, n - the non-negativity conditions (3).

Thus, mathematically, the transport problem is formulated as follows: given the system of constraints (2) under condition (3) and the objective function (1), among the solutions of system (2) it is required to find a non-negative solution that minimizes function (1).

The system of constraints of problem (1)-(3) contains m + n equations in mn variables. It is assumed that the total stocks equal the total demands, i.e., Σ ai = Σ bj.
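The formulation above can be sketched as code that evaluates the objective (1) and checks constraints (2)-(3) for a candidate plan; the data are hypothetical:

```python
def plan_cost(C, X):
    """Total cost (1): the sum of cij * xij over all routes."""
    return sum(c * x
               for row_c, row_x in zip(C, X)
               for c, x in zip(row_c, row_x))

def is_feasible(X, a, b):
    """Check constraints (2)-(3): row sums equal the stocks ai,
    column sums equal the demands bj, and all xij >= 0."""
    rows_ok = all(sum(row) == ai for row, ai in zip(X, a))
    cols_ok = all(sum(X[i][j] for i in range(len(X))) == b[j]
                  for j in range(len(b)))
    nonneg = all(x >= 0 for row in X for x in row)
    return rows_ok and cols_ok and nonneg

# Hypothetical data: 2 suppliers, 3 consumers, balanced (sum a = sum b = 50).
a, b = [20, 30], [10, 25, 15]
C = [[4, 2, 5],
     [3, 1, 6]]
X = [[10, 0, 10],        # a candidate transportation plan
     [0, 25, 5]]
print(is_feasible(X, a, b))   # -> True
print(plan_cost(C, X))        # -> 145
```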

16. Sign of the solvability of the transport problem

In order for the transport problem to have admissible plans, it is necessary and sufficient that the equality Σ ai = Σ bj holds.

There are two types of transport problems: closed, in which the total volume of the suppliers' cargo equals the total demand of the consumers (Σ ai = Σ bj), and open, in which the total capacity of the suppliers exceeds the demand of the consumers (Σ ai > Σ bj), or the demand of the consumers is greater than the actual total capacity of the suppliers (Σ ai < Σ bj).

An open model can be converted to a closed one. If Σ ai > Σ bj, a fictitious (n + 1)-th destination is introduced into the mathematical model of the transport problem. For this, an additional column is provided in the problem table, for which the demand equals the difference between the total capacity of the suppliers and the actual demand of the consumers: bn+1 = Σ ai - Σ bj.

All tariffs for delivering cargo to this point are taken equal to zero. In this way the open model of the problem is transformed into a closed one. The objective function of the new problem remains the same, since the cost of the additional transportation is zero. In other words, the fictitious consumer does not violate the consistency of the constraint system.

If Σ ai < Σ bj, then a fictitious (m + 1)-th point of departure is introduced, to which a cargo stock am+1 = Σ bj - Σ ai is attributed.

Tariffs for the delivery of cargo from this fictitious supplier are again taken equal to zero. One row is added to the matrix; this does not affect the objective function, and the constraint system of the problem becomes consistent, i.e., it becomes possible to find an optimal plan.
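The transformation described above can be sketched as follows (hypothetical data; the dummy row or column gets zero tariffs):

```python
def close_model(a, b, C):
    """Convert an open transport model to a closed one by adding a fictitious
    consumer or supplier with zero tariffs, as described in the text."""
    total_a, total_b = sum(a), sum(b)
    a, b, C = a[:], b[:], [row[:] for row in C]    # do not mutate the inputs
    if total_a > total_b:             # fictitious (n+1)-th destination
        b.append(total_a - total_b)
        for row in C:
            row.append(0)             # zero tariff for the dummy column
    elif total_b > total_a:           # fictitious (m+1)-th point of departure
        a.append(total_b - total_a)
        C.append([0] * len(b))        # zero-tariff dummy row
    return a, b, C

# Hypothetical open model: total supply 60 exceeds total demand 50.
a2, b2, C2 = close_model([20, 40], [10, 25, 15], [[4, 2, 5], [3, 1, 6]])
print(b2)   # -> [10, 25, 15, 10]  (the dummy demand absorbs the surplus)
print(C2)   # -> [[4, 2, 5, 0], [3, 1, 6, 0]]
```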

The following theorem is of great importance for the transport problem.

Theorem. The rank of the matrix of the transport problem is one less than the number of equations, i.e., r(A) = m + n - 1.

It follows from the theorem that each reference plan must have (m - 1)(n - 1) free variables equal to zero and m + n - 1 basic variables.

We will look for the transportation plan of the transport problem directly in the distribution table. We agree that if a variable xij takes a positive value, we write this value in the corresponding cell (i, j), while if xij = 0 the cell (i, j) is left free. Taking into account the theorem on the rank of the matrix, in the distribution table the reference plan must contain m + n - 1 occupied cells, the rest being free.

These requirements on the reference plan are not the only ones. Reference plans must satisfy another requirement, related to cycles.

A set of cells of the traffic matrix in which two and only two neighboring cells are located in one row or one column, and the last cell of the set lies in the same row or column as the first, is called a closed cycle.

Graphically, the cycle is a closed broken line, the vertices of which are located in the occupied cells of the table, and the links are located only in rows or columns. Moreover, at each vertex of the cycle there are exactly two links, one of which is in a row, and the other in a column. If the polyline forming the cycle intersects itself, then the points of self-intersection are not vertices.

The following important properties of transportation task plans are associated with the set of cycle cells:

1) a feasible plan of the transportation problem is basic if and only if no cycle can be formed from the cells occupied by this plan;

2) if we have a basic plan, then for each free cell only one cycle can be formed that contains this cell and some of the occupied cells.

17. Constructing the initial basic plan

Northwest corner rule.

To draw up an initial transportation plan, it is convenient to use the northwest corner rule, which is as follows.

We fill in the table starting from the upper left cell, conventionally called the "north-west corner", moving further along the row to the right or down the column. In cell (1; 1) we put the smaller of the numbers a1 and b1, i.e. x11 = min(a1, b1). If a1 > b1, then the first column is "closed": the demand of the first consumer is completely satisfied. This means that for all other cells of the first column the amount of cargo is xi1 = 0 for i = 2, …, m, and we proceed to the adjacent cell (1; 2), into which we enter min(a1 - b1, b2).

If a1 < b1, then the first row is similarly "closed", i.e., x1j = 0 for j = 2, …, n. We proceed to filling in the adjacent cell (2; 1), into which we enter min(a2, b1 - a1).

Having filled in the second cell (1; 2) or (2; 1), we proceed to filling in the next third cell in the second row or in the second column. We will continue this process until, at some stage, the resources am and the needs bn are exhausted. The last filled cell will be in the last n-th column and in the last m-th row.
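The procedure above can be sketched in Python (an illustrative fragment; the function name and example data are invented):

```python
def northwest_corner(supply, demand):
    """Initial basic plan by the north-west corner rule.

    Returns an m x n table where None marks a free cell.
    Assumes a closed problem: sum(supply) == sum(demand).
    """
    a, b = list(supply), list(demand)
    m, n = len(a), len(b)
    plan = [[None] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        x = min(a[i], b[j])      # the smaller of remaining stock and need
        plan[i][j] = x
        a[i] -= x
        b[j] -= x
        if a[i] == 0 and i < m - 1:
            i += 1               # row "closed": move down
        else:
            j += 1               # column "closed": move right
    return plan

plan = northwest_corner([30, 40, 20], [20, 30, 30, 10])
occupied = sum(x is not None for row in plan for x in row)
print(occupied)  # 6, i.e. m + n - 1
```

The resulting plan has exactly m+n-1 occupied cells, as the rank theorem requires (degenerate cases would need zero supplies in some occupied cells).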

The "minimum element" rule.

The initial basic plan built by the "north-west corner" rule usually turns out to be far from optimal, since the costs cij are not taken into account in its construction. Therefore many iterations will be required in the further calculations to reach the optimal plan. The number of iterations can be reduced if the initial plan is built by the "minimum element" rule. Its essence is that at each step the maximum possible amount of cargo is placed in the cell with the minimum tariff cij. We start filling in the table from the cell corresponding to the smallest element cij of the tariff matrix: into the cell with the lowest tariff we place the smaller of the numbers ai and bj. Then the row corresponding to the supplier whose stock is completely used up, or the column corresponding to the consumer whose demand is completely satisfied, is excluded from consideration. It may happen that a row and a column must be excluded simultaneously, if the supplier's stock is completely used up and the consumer's demand is completely satisfied. Then from the remaining cells of the table the cell with the lowest tariff is again selected, and the process of distributing the stocks continues until all of them are distributed and all demand is satisfied.
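The rule can be sketched as follows (illustrative Python; names and example data are invented):

```python
def minimum_element(supply, demand, costs):
    """Initial basic plan by the minimum-element rule.

    None marks a free cell. Assumes a closed problem.
    """
    a, b = list(supply), list(demand)
    m, n = len(a), len(b)
    plan = [[None] * n for _ in range(m)]
    live_rows, live_cols = set(range(m)), set(range(n))
    while live_rows and live_cols:
        # pick the remaining cell with the smallest tariff
        i, j = min(((i, j) for i in sorted(live_rows)
                    for j in sorted(live_cols)),
                   key=lambda ij: costs[ij[0]][ij[1]])
        x = min(a[i], b[j])
        plan[i][j] = x
        a[i] -= x
        b[j] -= x
        if a[i] == 0:
            live_rows.discard(i)   # supplier's stock used up
        if b[j] == 0:
            live_cols.discard(j)   # consumer's demand satisfied
    return plan

plan = minimum_element([30, 40], [20, 25, 25], [[2, 3, 1], [5, 4, 8]])
print(plan)  # [[5, None, 25], [15, 25, None]]
```

For this example the resulting plan has total cost 210, whereas the north-west corner plan for the same data costs 330.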

18. Method of potentials

The general principle of determining the optimal plan of the transport task by the potential method is similar to the principle of solving the LP problem by the simplex method, namely: first, the basic plan of the transport task is found, and then it is successively improved until an optimal plan is obtained.

The essence of the potential method is as follows. After the initial basic transportation plan is found, each supplier Ai (each row) is assigned a certain number ui, called the potential of the supplier, and each consumer Bj (each column) is assigned a certain number vj, called the potential of the consumer.

The cost of a ton of cargo at the destination is equal to the cost of a ton of cargo before transportation plus the cost of its transportation: vj = ui + cij.

To solve the transport problem by the potential method, it is necessary:

1. Build a basic transportation plan according to one of the rules outlined. The number of filled cells should be m+n-1.

2. Calculate the potentials ui and vj of the suppliers and consumers, respectively, from the occupied cells: ui + vj = cij. The number of filled cells is m+n-1, while the number of unknown potentials is m+n. Since the number of equations is one less than the number of unknowns, one of the unknowns is free and can take any numerical value, for example u1 = 0. The remaining potentials for the given basic solution are then uniquely determined.

3. Check the plan for optimality: for each free cell calculate the estimate Δij = ui + vj - cij. If Δij ≤ 0 for all free cells, then no further transportation is expedient and plan X is optimal (the optimality criterion). If there is at least one Δij > 0, go to a new basic plan. In its economic sense, the value Δij characterizes the change in total transportation costs caused by a unit supply from the i-th supplier to the j-th consumer: if Δij > 0, a unit delivery leads to savings in transportation costs; if Δij < 0, to an increase in them. Therefore, if among the free supply directions there are none that save transportation costs, the resulting plan is optimal.

4. Among the positive estimates Δij the maximum is chosen; for the corresponding free cell a recalculation cycle is built. After the cycle has been built for the selected free cell, move on to a new basic plan. To do this, shift the loads within the cells of the cycle associated with this free cell.

a) Each of the cells connected by the cycle with the given free cell is assigned a sign: the free cell itself receives "+", and the other cells (vertices of the cycle) alternately receive the signs "–" and "+". We will call these cells minus and plus cells.

b) In the minus cells of the cycle we find the minimum supply, which we denote by θ: the smaller of the numbers xij in the minus cells. This number θ is transferred to the free cell; at the same time it is added to the corresponding numbers in the cells with the "+" sign and subtracted from the numbers in the minus cells. The cell that was previously free becomes occupied and enters the basic plan; the minus cell in which the minimum of the numbers xij stood becomes free and leaves the basic plan.

Thus a new basic plan is obtained. The transition from one basic plan to another described above is called a shift along the recalculation cycle. When shifting along the recalculation cycle the number of occupied cells remains unchanged, namely equal to m+n-1. Moreover, if there are two or more identical minimal numbers xij in the minus cells, then only one of these cells is vacated, and the rest are left occupied with zero supplies.

5. The resulting basic plan is checked for optimality, i.e., all the steps from point 2 are repeated.
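Steps 2 and 3 of the method, computing the potentials from the occupied cells and evaluating the free cells, can be sketched as follows (illustrative Python; a non-degenerate basic plan with m+n-1 occupied cells is assumed):

```python
def potentials_check(costs, plan):
    """Compute potentials ui, vj from the occupied cells (ui + vj = cij,
    with u1 = 0) and the estimates for the free cells.
    Assumes a non-degenerate basic plan, so all potentials get determined."""
    m, n = len(costs), len(costs[0])
    u, v = [None] * m, [None] * n
    u[0] = 0                        # one potential may be fixed freely
    changed = True
    while changed:                  # propagate ui + vj = cij over occupied cells
        changed = False
        for i in range(m):
            for j in range(n):
                if plan[i][j] is None:
                    continue
                if u[i] is not None and v[j] is None:
                    v[j] = costs[i][j] - u[i]
                    changed = True
                elif v[j] is not None and u[i] is None:
                    u[i] = costs[i][j] - v[j]
                    changed = True
    # estimate of a free cell; a positive value means the plan can be improved
    deltas = {(i, j): u[i] + v[j] - costs[i][j]
              for i in range(m) for j in range(n) if plan[i][j] is None}
    return u, v, deltas, all(d <= 0 for d in deltas.values())

costs = [[2, 3, 1], [5, 4, 8]]
plan = [[5, None, 25], [15, 25, None]]   # a basic plan; None marks free cells
u, v, deltas, optimal = potentials_check(costs, plan)
print(optimal)  # True
```

Here all free-cell estimates are negative, so this minimum-element plan is already optimal and no recalculation cycle is needed.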

19. The concept of dynamic programming.

Dynamic programming (DP) is a mathematical method for finding optimal solutions of multi-step (multi-stage) problems. Some of these problems break down naturally into separate steps (stages), but there are problems in which the partition has to be introduced artificially so that they can be solved by the DP method.

Usually DP methods optimize the operation of controlled systems whose effect is evaluated by an additive or multiplicative objective function. Additive is a function of several variables f(x1, x2, …, xn) whose value is computed as the sum of functions fj each depending on only one variable xj: f(x1, x2, …, xn) = f1(x1) + f2(x2) + … + fn(xn). The terms of the additive objective function correspond to the effects of the decisions made at the individual stages of the controlled process.

R. Bellman's principle of optimality.

The meaning of the approach implemented in dynamic programming lies in replacing the solution of the original multidimensional problem with a sequence of problems of lower dimension. Basic requirements for tasks:

1. the object of study must be a controlled system (object) with given admissible states and admissible controls;

2. the problem must admit interpretation as a multi-step process, each step of which consists in making a decision, i.e., choosing one of the admissible controls that leads to a change of state of the system;

3. the problem must not depend on the number of steps and must be defined at each of them;

4. the state of the system at each step must be described by the same (in composition) set of parameters;

5. the subsequent state in which the system finds itself after the choice of a decision at the k-th step depends only on that decision and on the state at the beginning of the k-th step. This property is the main one from the point of view of the ideology of dynamic programming and is called absence of aftereffect.

Consider the application of the dynamic programming model in a generalized form. Let there be a problem of controlling some abstract object that can be in various states. The current state of the object is identified with a certain set of parameters, denoted below by S and called the state vector. It is assumed that the set of all possible states is given. For the object, the set of admissible controls (control actions) x is also defined, which, without loss of generality, can be considered a numerical set. Control actions are carried out at discrete moments of time, and a control decision consists in choosing one of the admissible controls. The plan of the problem, or control strategy, is the vector x = (x1, x2, …, xn-1) whose components are the controls chosen at each step of the process. In view of the assumed absence of aftereffect, between every two successive states of the object Sk and Sk+1 there is a known functional dependence, which also includes the chosen control: Sk+1 = φk(Sk, xk). Thus, specifying the initial state of the object and choosing a plan x uniquely determines the trajectory of the object's behavior.

The efficiency of control at each step k depends on the current state Sk and the chosen control xk and is quantified by the functions fk(xk, Sk), which are the summands of the additive objective function characterizing the overall efficiency of controlling the object. (Note that the definition of the function fk(xk, Sk) includes the range of admissible values of xk, and this range, as a rule, depends on the current state Sk.) Optimal control, for a given initial state S1, reduces to choosing an optimal plan x* for which the sum of the values fk along the corresponding trajectory is maximal.

The main principle of dynamic programming is that at each step one should strive not for an isolated optimization of the function fk(xk, Sk), but should choose the optimal control x*k under the assumption that all subsequent steps are optimal. Formally, this principle is realized by finding at each step k the conditional optimal controls that provide the highest total efficiency starting from this step, on the assumption that the current state is S.

Denote by Zk(S) the maximum value of the sum of the functions fk over the steps from k to n (obtained under optimal control on this segment of the process), provided that at the beginning of step k the object is in state S. Then the functions Zk(S) must satisfy the recurrence relation: Zk(S) = max over xk of [ fk(xk, S) + Zk+1(φk(S, xk)) ]. (*)

This relation is called the basic recurrence relation (the basic functional equation) of dynamic programming. It implements the basic principle of dynamic programming, also known as Bellman's principle of optimality:

The optimal control strategy must satisfy the following condition: whatever the initial state Sk at the k-th step and the control xk chosen at this step, the subsequent controls (control decisions) must be optimal with respect to the state resulting from the decision made at step k.

The basic relation allows us to find the functions Zk(S) only in combination with an initial condition, which in our case is the equality Zn+1(S) = 0.

The principle of optimality formulated above is applicable only to control objects for which the choice of optimal control does not depend on the prehistory of the controlled process, i.e. on how the system came to the current state. It is this circumstance that makes it possible to decompose the problem and make its practical solution possible.

For each specific task, the functional equation has its own specific form, but it must certainly retain the recurrent nature inherent in the expression (*) and embodying the main idea of ​​the optimality principle.
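A minimal backward-sweep sketch of the basic functional equation (illustrative Python; the state set, controls, and step payoffs in the example are invented):

```python
def solve_dp(states, controls, f, phi, n):
    """Backward sweep of the basic functional equation
       Z_k(S) = max over x of [ f_k(x, S) + Z_{k+1}(phi(S, x)) ],
    with the initial condition Z_{n+1}(S) = 0.
    controls(k, S) yields the admissible controls at step k in state S;
    phi(S, x) is the state transition."""
    Z = {(n + 1, s): 0 for s in states}    # initial condition
    best = {}                              # conditional optimal controls
    for k in range(n, 0, -1):              # steps n, n-1, ..., 1
        for s in states:
            candidates = {x: f(k, x, s) + Z[(k + 1, phi(s, x))]
                          for x in controls(k, s)}
            x_star = max(candidates, key=candidates.get)
            Z[(k, s)] = candidates[x_star]
            best[(k, s)] = x_star
    return Z, best

# toy example: distribute 3 units of a resource over 3 steps;
# spending x units at one step yields gains[x] (diminishing returns)
gains = [0, 3, 5, 6]
Z, best = solve_dp(states=range(4),
                   controls=lambda k, s: range(s + 1),
                   f=lambda k, x, s: gains[x],
                   phi=lambda s, x: s - x,
                   n=3)
print(Z[(1, 3)], best[(1, 3)])  # 9 1
```

Spending one unit at each of the three steps (3 + 3 + 3 = 9) beats spending everything at once (6), which the backward sweep discovers automatically.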

20. The concept of game models.

The mathematical model of a conflict situation is called a game, the parties involved in the conflict are called players, and the outcome of the conflict is called the payoff (win).

For each formalized game we introduce rules, i.e., a system of conditions determining: 1) the options for the players' actions; 2) the amount of information each player has about the behavior of the partners; 3) the payoff to which each set of actions leads. Typically a win (or loss) can be quantified; for example, one can evaluate a loss as zero, a win as one, and a draw as 1/2. The quantitative evaluation of the results of a game is called the payment.

A game is called paired if two players are involved, and multiple if the number of players is more than two. We will consider only paired games. They are played by two players A and B, whose interests are opposite; by the game we mean a series of actions on the part of A and B.

A game is called a zero-sum game, or antagonistic, if the gain of one of the players is equal to the loss of the other, i.e., the sum of the payoffs of both parties is zero. To specify the game completely, it is enough to indicate the payoff of one of them. If we denote by a the payoff of one of the players and by b the payoff of the other, then for a zero-sum game b = -a, so it suffices to consider, for example, a.

The choice and implementation of one of the actions provided for by the rules is called a move of the player. Moves can be personal and random. A personal move is a conscious choice by the player of one of the possible actions (for example, a move in a game of chess). The set of possible options for each personal move is regulated by the rules of the game and depends on the totality of previous moves on both sides.

A random move is a randomly chosen action (for example, drawing a card from a shuffled deck). For the game to be mathematically defined, the rules must specify a probability distribution of the possible outcomes for each random move.

Some games may consist only of random moves (so-called pure games of chance) or only of personal moves (chess, checkers). Most card games are of mixed type, i.e., they contain both random and personal moves. In what follows we consider only the personal moves of the players.

Games are classified not only by the nature of the moves (personal, random), but also by the nature and amount of information available to each player about the actions of the other. A special class is formed by the so-called "games with complete information". A game with complete information is one in which each player, at each personal move, knows the results of all previous moves, both personal and random. Examples of games with complete information are chess, checkers, and the well-known tic-tac-toe. Most games of practical importance do not belong to this class, since uncertainty about the opponent's actions is usually an essential element of conflict situations.

One of the basic concepts of game theory is the concept of strategy.

A player's strategy is a set of rules that determine the choice of his action at each personal move depending on the situation. Usually the player makes the choice at each personal move depending on the specific situation. However, in principle all decisions can be made by the player in advance (in response to any given situation). This means that the player has chosen a certain strategy, which can be given in the form of a list of rules or a program (so the game can be played by a computer). A game is called finite if each player has a finite number of strategies, and infinite otherwise.

To solve a game, or find a solution of the game, each player must choose a strategy satisfying the optimality condition, i.e., one of the players must receive the maximum payoff when the second sticks to his strategy, while at the same time the second player must have the minimum loss if the first adheres to his. Such strategies are called optimal. Optimal strategies must also satisfy the stability condition, i.e., it must be disadvantageous for either player to abandon his strategy in this game.

If the game is repeated many times, then the players may be interested not in the win or loss in each particular game, but in the average win (loss) over all games.

The goal of game theory is to determine the optimal strategy for each player.

21. Payoff matrix. Lower and upper price of the game

A finite game in which player A has m strategies and player B has n strategies is called an m×n game.

Consider an m×n game of two players A and B ("we" and the "opponent").

Let player A have m personal strategies, which we denote by A1, A2, …, Am. Let player B have n personal strategies, which we denote by B1, B2, …, Bn.

Let each side choose a particular strategy: Ai for us, Bj for the opponent. The choice by the players of any pair of strategies Ai and Bj (i = 1, …, m; j = 1, …, n) uniquely determines the outcome of the game, i.e., the payoff aij of player A (positive or negative) and the loss (-aij) of player B.

Assume that the values aij are known for every pair of strategies (Ai, Bj). The matrix P = (aij), whose elements are the payoffs corresponding to the strategies Ai and Bj, is called the payoff matrix, or the matrix of the game. The rows of this matrix correspond to the strategies of player A, and the columns to the strategies of player B. These strategies are called pure.

The m×n game matrix has the form:

Consider an m×n game with matrix P and determine the best among the strategies A1, A2, …, Am. Choosing strategy Ai, player A must expect that player B will answer it with the strategy Bj for which the payoff of player A is minimal (player B seeks to "harm" player A).

Denote by αi the least payoff of player A when he chooses strategy Ai, over all possible strategies of player B (the smallest number in the i-th row of the payoff matrix), i.e. αi = min(ai1, ai2, …, ain).

Among all the numbers αi (i = 1, 2, …, m) choose the largest: α = max(α1, α2, …, αm).

We call α the lower price of the game, or the maximin payoff (maximin). This is the guaranteed payoff of player A under any strategy of player B. Hence, α = max (over i) min (over j) aij.

The strategy corresponding to the maximin is called the maximin strategy. Player B is interested in reducing the payoff of player A; choosing strategy Bj, he takes into account the maximum possible payoff for A. Denote βj = max(a1j, a2j, …, amj).

Among all the numbers βj choose the smallest: β = min(β1, β2, …, βn)

and call β the upper price of the game, or the minimax payoff (minimax). It is the guaranteed loss of player B. Therefore, β = min (over j) max (over i) aij.

The strategy corresponding to the minimax is called the minimax strategy.

The principle that dictates to players the choice of the most "cautious" minimax and maximin strategies is called minimax principle . This principle follows from the reasonable assumption that each player seeks to achieve the opposite goal of the opponent.

Theorem. The lower price of the game never exceeds the upper price of the game: α ≤ β.

If the upper and lower prices of the game coincide, then their common value is called the net price of the game, or simply the price of the game, v. The minimax strategies corresponding to the price of the game are optimal strategies, and their totality is the optimal solution, or solution, of the game. In this case player A receives the maximum guaranteed payoff v (independent of the behavior of player B), and player B achieves the minimum guaranteed loss v (independent of the behavior of player A). The solution of the game is said to possess stability, i.e., if one of the players sticks to his optimal strategy, then it cannot be advantageous for the other to deviate from his.

If one of the players (for example, A) sticks to his optimal strategy while the other player (B) deviates from his in any way, then this can never be beneficial for the player who made the deviation; such a deviation by player B can at best leave player A's payoff unchanged, and at worst increase it.

On the contrary, if B adheres to his optimal strategy and A deviates from his, then this can in no way be beneficial to A.

A pair of pure strategies gives an optimal solution of the game if and only if the corresponding element of the matrix is both the largest in its column and the smallest in its row. Such an element, if it exists, is called a saddle point. In geometry, a point on a surface having the property of a simultaneous minimum along one coordinate and maximum along the other is called a saddle point, and by analogy this term is used in game theory.

A game for which α = β is called a game with a saddle point. The element that has this property is the saddle point of the matrix.

So, for every game with a saddle point there is a solution determining a pair of optimal strategies for both sides, distinguished by the following properties.

1) If both sides stick to their optimal strategies, then the average payoff is equal to the net price of the game v, which is simultaneously its lower and upper price.

2) If one of the parties adheres to its optimal strategy, while the other deviates from its own, then the deviating party can only lose from this and in no case can increase its gain.

In game theory it is proved that, in particular, every game with complete information has a saddle point and, consequently, every such game has a solution, i.e., there is a pair of optimal strategies for one side and the other giving an average payoff equal to the price of the game. If a game with complete information consists only of personal moves, then, when each side applies its optimal strategy, it must always end in a quite definite outcome, namely, a payoff exactly equal to the price of the game.
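The lower price α = max (over i) min (over j) aij, the upper price β = min (over j) max (over i) aij, and the saddle-point test can be computed directly from the payoff matrix. A sketch (illustrative Python; the example matrices are invented):

```python
def game_prices(P):
    """Lower price alpha (maximin) and upper price beta (minimax)."""
    alpha = max(min(row) for row in P)                 # max of row minima
    beta = min(max(P[i][j] for i in range(len(P)))     # min of column maxima
               for j in range(len(P[0])))
    return alpha, beta

def has_saddle_point(P):
    """A matrix game has a saddle point exactly when alpha == beta."""
    alpha, beta = game_prices(P)
    return alpha == beta

P = [[2, 3],
     [1, 0]]
print(game_prices(P))  # (2, 2): saddle point, price of the game v = 2
```

For the matrix [[3, 1], [2, 4]] the same functions give α = 2 < β = 3, so that game has no saddle point and requires mixed strategies.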

22. Solution of the game in mixed strategies.

Among finite games of practical importance, games with a saddle point are comparatively rare; more typical is the case when the lower and upper prices of the game are different. Analyzing the matrices of such games, we conclude that if each player is given the choice of a single strategy, then, counting on a reasonably acting opponent, this choice must be determined by the minimax principle. Adhering to our maximin strategy, we certainly guarantee ourselves, for any behavior of the opponent, a payoff equal to the lower price of the game α. A natural question arises: can one guarantee a payoff greater than α by applying not one single strategy but alternating several strategies at random? Such combined strategies are called mixed strategies.

A mixed strategy Sa of player A is the application of the pure strategies A1, A2, …, Ai, …, Am with probabilities p1, p2, …, pi, …, pm, where the sum of the probabilities equals 1: p1 + p2 + … + pm = 1. The mixed strategies of player A are written as a matrix

or as a string Sa=(p1,p2,…,pi,…,pm).

Similarly, the mixed strategies of player B are denoted by:

or as a row Sb = (q1, q2, …, qj, …, qn),

where the sum of the probabilities of applying the strategies equals 1: q1 + q2 + … + qn = 1.

Obviously, each pure strategy is a special case of a mixed one, in which all strategies, except for one, are applied with zero frequencies (probabilities), and this one is applied with a frequency (probability) of 1.

It turns out that by applying not only pure but also mixed strategies one can obtain a solution for every finite game, i.e., a pair of (in general, mixed) strategies such that when both players apply them the payoff is equal to the price of the game, while under any one-sided deviation from the optimal strategy the payoff can change only in a direction unfavorable to the deviating player. Thus, on the basis of the minimax principle, the optimal solution (or solution) of the game is determined: a pair of optimal strategies, in the general case mixed, with the following property: if one of the players adheres to his optimal strategy, then it cannot be profitable for the other to deviate from his. The payoff corresponding to the optimal solution is called the price of the game v. The price of the game satisfies the inequality α ≤ v ≤ β,

where α and β are the lower and upper prices of the game.

This statement is the content of the so-called fundamental theorem of game theory. The theorem was first proved by John von Neumann in 1928. The known proofs of the theorem are relatively complex, so we give only its formulation.

Every finite game has at least one optimal solution, possibly among mixed strategies.

It follows from the main theorem that every finite game has a price.

Let S*a and S*b be a pair of optimal strategies. If a pure strategy enters an optimal mixed strategy with non-zero probability, it is called active (useful).

The active strategies theorem holds: if one of the players adheres to his optimal mixed strategy, then the payoff remains unchanged and equal to the price of the game v, provided the second player does not go beyond his active strategies.

The player can use any of his active strategies in their pure form, and can also mix them in any proportion.

This theorem is of great practical importance: it gives concrete methods for finding optimal strategies in the absence of a saddle point.

Consider a 2×2 game, the simplest case of a finite game. If such a game has a saddle point, then the optimal solution is the pair of pure strategies corresponding to this point.

For a game without a saddle point, according to the fundamental theorem of game theory an optimal solution exists and is determined by a pair of mixed strategies S*a = (p1, p2) and S*b = (q1, q2).

In order to find them, we use the active strategies theorem. If player A adheres to his optimal strategy S*a, then his average payoff will be equal to the price of the game v, no matter what active strategy player B uses. For a 2×2 game without a saddle point, any pure strategy of the opponent is active. The payoff of player A (the loss of player B) is a random variable whose mathematical expectation (average value) is the price of the game. Therefore, the average payoff of player A (using his optimal strategy) equals v for both the 1st and the 2nd strategy of the adversary.

Let the game be given by the payoff matrix.

The average payoff of player A, if he uses the optimal mixed strategy S*a and player B uses the pure strategy B1 (which corresponds to the 1st column of the payoff matrix P), is equal to the price of the game v: a11 p1 + a21 p2 = v.

Player A receives the same average payoff if the 2nd player uses strategy B2, i.e. a12 p1 + a22 p2 = v. Taking into account that p1 + p2 = 1, we obtain a system of equations for determining the optimal strategy S*a and the price of the game v: a11 p1 + a21 p2 = v, a12 p1 + a22 p2 = v, p1 + p2 = 1.

Solving this system, we obtain the optimal strategy p1 = (a22 - a21) / (a11 + a22 - a12 - a21), p2 = 1 - p1,

and the price of the game v = (a11 a22 - a12 a21) / (a11 + a22 - a12 - a21).

Applying the active strategies theorem to find the optimal strategy of player B, we obtain that for any pure strategy of player A (A1 or A2) the average loss of player B is equal to the price of the game v, i.e. a11 q1 + a12 q2 = v, a21 q1 + a22 q2 = v.

Then the optimal strategy S*b is determined by the formulas q1 = (a22 - a12) / (a11 + a22 - a12 - a21), q2 = 1 - q1.
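These closed-form expressions can be checked numerically. The following sketch (illustrative Python) assumes a 2×2 matrix without a saddle point, so the common denominator is non-zero:

```python
def solve_2x2(a11, a12, a21, a22):
    """Mixed-strategy solution of a 2x2 game without a saddle point,
    from the active-strategy equations
       a11 p1 + a21 p2 = v,  a12 p1 + a22 p2 = v,  p1 + p2 = 1."""
    D = a11 + a22 - a12 - a21   # non-zero when there is no saddle point
    p1 = (a22 - a21) / D
    q1 = (a22 - a12) / D
    v = (a11 * a22 - a12 * a21) / D
    return (p1, 1 - p1), (q1, 1 - q1), v

# matching-pennies-like matrix: alpha = -1 < beta = 1, no saddle point
p, q, v = solve_2x2(1, -1, -1, 1)
print(p, q, v)  # (0.5, 0.5) (0.5, 0.5) 0.0
```

As a sanity check, for the matrix [[3, 1], [2, 4]] the formulas give p = (0.5, 0.5), q = (0.75, 0.25) and v = 2.5, which indeed lies between the lower price 2 and the upper price 3.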

The problem of solving a game whose matrix contains no saddle point becomes more difficult the larger m and n are. Therefore, in the theory of matrix games, methods are considered by which the solution of some games is reduced to the solution of other, simpler ones, in particular by reducing the dimension of the matrix. The dimension of a matrix can be reduced by excluding duplicate and obviously disadvantageous strategies.

Strategies are called duplicate if they correspond to identical values of the elements in the payoff matrix, i.e., the matrix contains identical rows (columns).

If all elements of the i-th row of the matrix are less than the corresponding elements of the k-th row, then the i-th strategy is disadvantageous for player A (it wins less).

If all elements of the r-th column of the matrix are greater than the corresponding elements of the j-th column, then the r-th strategy is disadvantageous for player B (the loss is greater).

The procedure for eliminating duplicative and obviously unprofitable strategies should always precede the solution of the game.
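A single elimination pass can be sketched as follows (illustrative Python; in practice the pass is repeated until no further strategy can be removed):

```python
def reduce_matrix(P):
    """One pass of eliminating duplicate and dominated pure strategies.

    Row i is dropped for player A if some other row k has a_kj >= a_ij
    for every j; column j is dropped for player B if some other column r
    has a_ir <= a_ij for every i. Equal rows/columns keep the first copy.
    """
    m, n = len(P), len(P[0])

    def dominated_row(i):
        return any(k != i
                   and all(P[k][j] >= P[i][j] for j in range(n))
                   and (P[k] != P[i] or k < i)   # duplicates: keep first
                   for k in range(m))

    def column(j):
        return [P[i][j] for i in range(m)]

    def dominated_col(j):
        return any(r != j
                   and all(column(r)[i] <= column(j)[i] for i in range(m))
                   and (column(r) != column(j) or r < j)
                   for r in range(n))

    keep_r = [i for i in range(m) if not dominated_row(i)]
    keep_c = [j for j in range(n) if not dominated_col(j)]
    return [[P[i][j] for j in keep_c] for i in keep_r]

print(reduce_matrix([[3, 1], [2, 0]]))  # [[1]]
```

In the example, row 2 is dominated by row 1 (player A wins less) and column 1 is dominated by column 2 (player B loses more), so the game collapses to a single cell.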

23. Geometric interpretation of the 2×2 game

The solution of a 2×2 game admits a clear geometric interpretation.

Let the game be given by the payoff matrix P=(aij), i, j=1,2.

On the abscissa axis (see Fig.) lay off a unit segment A1A2: point A1 (x = 0) represents strategy A1, point A2 (x = 1) represents strategy A2, and all intermediate points of this segment represent mixed strategies Sa of the first player, the distance from Sa to the right end of the segment being the probability p1 of strategy A1 and the distance to the left end the probability p2 of strategy A2.

Through the points A1 and A2 draw two perpendiculars to the abscissa axis: axis I-I and axis II-II. On axis I-I we lay off the payoffs under strategy A1; on axis II-II, the payoffs under strategy A2.

If player A uses strategy A1, then his payoff under strategy B1 of player B is a11, and under strategy B2 it is a12. Numbers a11 and a12 on axis I correspond to points B1 and B2.

If player A uses strategy A2, then his payoff under strategy B1 of player B is a21, and under strategy B2 it is equal to a22. Numbers a21 and a22 correspond to points B1 and B2 on axis II.

We connect the points B1 (I) and B1 (II), B2 (I) and B2 (II), and obtain two straight lines. Line B1B1 corresponds to the case when player A applies a mixed strategy (any combination of strategies A1 and A2 with probabilities p1 and p2) and player B applies strategy B1; the payoff of player A then corresponds to some point on this line. The average payoff corresponding to the mixed strategy is determined by the formula a11 p1 + a21 p2 and is represented by the point M1 on line B1B1.

Similarly, we construct the segment B2B2 corresponding to the use of the strategy B2 by the second player. In this case, the average gain is determined by the formula a12p1+a22p2 and is represented by the point M2 on straight line B2B2.

We need to find the optimal strategy S*a, i.e., the one for which the minimum payoff (for any behavior of B) becomes maximal. To do this, we construct the lower bound of the payoff under the strategies B1, B2, i.e., the broken line B1NB2 marked in the figure by the bold line. This lower bound expresses the minimum payoff of player A for any of his mixed strategies; the point N, at which this minimum payoff reaches its maximum, determines the solution (the optimal strategy) and the price of the game. The ordinate of the point N is the price of the game v. The coordinates of N are found as the coordinates of the point of intersection of the lines B1B1 and B2B2. In our case the solution of the game was determined by the point of intersection of the strategy lines; however, this will not always be so.

The optimal strategy can be determined geometrically both for player A and for player B; in both cases the minimax principle is used, but in the second case it is not the lower but the upper bound of the payoff that is constructed, and on it not the maximum but the minimum is determined.

If the payoff matrix contains negative numbers, then for a graphical solution of the problem it is better to pass to a new matrix with non-negative elements; to do this, it suffices to add a suitable positive number to the elements of the original matrix. The solution of the game will not change, but the price of the game will increase by this number. The graphical method can also be used to solve games of size 2×n and m×2.

24. Reduction of a matrix game to a linear programming problem

The m×n game generally has no visual geometric interpretation. Its solution is rather laborious for large m and n, but presents no fundamental difficulties, since it can be reduced to the solution of a linear programming problem. Let us show this.

Let the m×n game be given by the payoff matrix P = (aij). Player A has the strategies A1, A2, …, Ai, …, Am; player B has the strategies B1, B2, …, Bj, …, Bn. It is necessary to determine the optimal strategies S*a = (p1, p2, …, pm) and S*b = (q1, q2, …, qn), where pi and qj are the probabilities of applying the corresponding pure strategies Ai, Bj.

The optimal strategy S*A satisfies the following requirement: it provides player A an average payoff not less than the price of the game v for any strategy of player B, and a payoff equal to the price of the game v for the optimal strategy of player B. Without loss of generality we set v > 0; this can be achieved by making all elements aij of the matrix positive. If player A applies the mixed strategy S*A against any pure strategy Bj of player B, then he receives the average payoff, or mathematical expectation of the payoff, aj = a1j p1 + a2j p2 + ... + amj pm (i.e. the elements of the j-th column of the payoff matrix are multiplied term by term by the corresponding probabilities of the strategies A1, A2, ..., Am, and the results are added).

For the optimal strategy S*A all the average payoffs are not less than the price of the game v, so we obtain the system of inequalities:

a11 p1 + a21 p2 + ... + am1 pm >= v
a12 p1 + a22 p2 + ... + am2 pm >= v
. . . . . . . . . . . . . . . . .
a1n p1 + a2n p2 + ... + amn pm >= v

Each of the inequalities can be divided by the number v > 0. Let us introduce the new variables x1 = p1/v, x2 = p2/v, ..., xm = pm/v. Then the system takes the form

a11 x1 + a21 x2 + ... + am1 xm >= 1
a12 x1 + a22 x2 + ... + am2 xm >= 1    (1*)
. . . . . . . . . . . . . . . . .
a1n x1 + a2n x2 + ... + amn xm >= 1

The goal of player A is to maximize his guaranteed payoff, i.e. the price of the game v.

Dividing the equality p1 + p2 + ... + pm = 1 by v, we find that the variables xi satisfy the condition x1 + x2 + ... + xm = 1/v. Maximizing the price of the game v is equivalent to minimizing the quantity 1/v, so the problem can be formulated as follows: determine the values of the variables xi >= 0 that satisfy the linear constraints (1*) while the linear function

L = x1 + x2 + ... + xm    (2*)

is minimized.

This is a linear programming problem. Solving problem (1*)-(2*), we obtain the optimal solution x1*, x2*, ..., xm* and the optimal strategy S*A with pi = v·xi*, where v = 1/(x1* + x2* + ... + xm*).
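A minimal Python sketch of this reduction follows. It only builds problem (1*)-(2*) from a payoff matrix and recovers the strategy from a solution; the function names and the small example matrix are assumptions for illustration, and any generic LP solver (for instance scipy.optimize.linprog) could be applied to the data it produces:

```python
# Build problem (1*)-(2*): minimize x1 + ... + xm subject to, for every
# column j of the payoff matrix,
#   a1j*x1 + ... + amj*xm >= 1,   xi >= 0,   where xi = pi / v.

def build_player_A_lp(A):
    m, n = len(A), len(A[0])
    # one ">= 1" constraint per column of the payoff matrix
    constraints = [[A[i][j] for i in range(m)] for j in range(n)]
    objective = [1.0] * m              # minimize sum of the x_i
    return objective, constraints

def strategy_from_x(x):
    v = 1.0 / sum(x)                   # price of the game: v = 1/(x1+...+xm)
    p = [xi * v for xi in x]           # pi = v * xi
    return p, v

A = [[2, 5], [6, 3]]                   # invented example, all elements > 0
obj, cons = build_player_A_lp(A)

# For this matrix the optimal LP solution is x = (0.125, 0.125); we verify
# it is feasible in (1*) and recover p = (0.5, 0.5), v = 4:
x = [0.125, 0.125]
assert all(sum(c[i] * x[i] for i in range(2)) >= 1 - 1e-9 for c in cons)
p, v = strategy_from_x(x)
print(p, v)   # [0.5, 0.5] 4.0
```

The recovery step pi = v·xi mirrors the change of variables xi = pi/v made above.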

To determine the optimal strategy S*B, it should be taken into account that player B seeks to minimize the guaranteed payoff of player A, i.e. to minimize the price of the game v, which is equivalent to maximizing 1/v. The variables q1, q2, ..., qn satisfy the inequalities

a11 q1 + a12 q2 + ... + a1n qn <= v
a21 q1 + a22 q2 + ... + a2n qn <= v    (3*)
. . . . . . . . . . . . . . . . .
am1 q1 + am2 q2 + ... + amn qn <= v

which follow from the fact that the average loss of player B does not exceed the price of the game, whatever pure strategy player A uses.

If we denote yj = qj/v, j = 1, 2, ..., n, (4*) then we obtain the system of inequalities:

a11 y1 + a12 y2 + ... + a1n yn <= 1
a21 y1 + a22 y2 + ... + a2n yn <= 1    (5*)
. . . . . . . . . . . . . . . . .
am1 y1 + am2 y2 + ... + amn yn <= 1

The variables yj satisfy the condition y1 + y2 + ... + yn = 1/v.

The game has thus been reduced to the following problem.

Determine the values of the variables y1, y2, ..., yn >= 0 that satisfy the system of inequalities (5*) and maximize the linear function

L' = y1 + y2 + ... + yn.    (6*)

The solution of the linear programming problem (5*)-(6*) determines the optimal strategy S*B with qj = v·yj*. At the same time the price of the game is v = 1/(y1* + y2* + ... + yn*). (7*)

Having written out the extended matrices of problems (1*)-(2*) and (5*)-(6*), we see that one matrix is obtained from the other by transposition.

Thus, linear programming problems (1*)-(2*) and (5*)-(6*) are mutually dual. Clearly, when determining the optimal strategies in specific problems, one should choose whichever of the mutually dual problems is less laborious to solve, and find the solution of the other by means of the duality theorems.
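This duality can be checked numerically. The sketch below uses a small invented 2×2 matrix whose optimal solutions for both problems are stated as given data; the check confirms that the two optimal objective values coincide and therefore yield the same price of the game:

```python
# For the invented matrix A = [[2, 5], [6, 3]] the optimal solutions are
#   player A: x = p / v = (0.125, 0.125)   (problem (1*)-(2*), minimum)
#   player B: y = q / v = (1/12, 1/6)      (problem (5*)-(6*), maximum)
# By the duality theorem  min sum(x) = max sum(y) = 1 / v.

A = [[2, 5], [6, 3]]
x = [0.125, 0.125]       # optimal for (1*)-(2*)
y = [1 / 12, 1 / 6]      # optimal for (5*)-(6*)

assert abs(sum(x) - sum(y)) < 1e-9        # equal optimal objectives
v = 1.0 / sum(y)                          # price of the game, formula (7*)
print(v)   # approximately 4

# Feasibility of y in (5*): each row of A gives a "<= 1" constraint.
for row in A:
    assert sum(a * yj for a, yj in zip(row, y)) <= 1 + 1e-9
```

In practice one would solve only the smaller of the two problems and read off the other solution from the dual variables.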

When solving an arbitrary finite game of size m×n, it is recommended to adhere to the following scheme:

1. Exclude from the payoff matrix strategies that are obviously unprofitable in comparison with other strategies. For player A such strategies are those whose matrix rows are dominated element-wise by another row; for player B, those whose columns dominate another column element-wise.
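Step 1, the removal of dominated strategies, can be sketched as follows. The helper names and the example matrix are invented for illustration; strict dominance (at least one strictly better element) is used so that identical rows are not both discarded:

```python
def dominated_rows(A):
    """Indices of rows unprofitable for A (who maximizes): every element of
    the row is <= the corresponding element of some other row, with at
    least one strictly smaller."""
    m, n = len(A), len(A[0])
    bad = set()
    for i in range(m):
        for k in range(m):
            if k != i and all(A[k][j] >= A[i][j] for j in range(n)) \
                      and any(A[k][j] > A[i][j] for j in range(n)):
                bad.add(i)
    return sorted(bad)

def dominated_cols(A):
    """Indices of columns unprofitable for B (who minimizes A's payoff):
    every element of the column is >= the corresponding element of some
    other column, with at least one strictly larger."""
    m, n = len(A), len(A[0])
    bad = set()
    for j in range(n):
        for k in range(n):
            if k != j and all(A[i][j] >= A[i][k] for i in range(m)) \
                      and any(A[i][j] > A[i][k] for i in range(m)):
                bad.add(j)
    return sorted(bad)

# Invented example: row 2 is dominated by row 0; column 2 dominates column 0.
A = [[4, 5, 7],
     [6, 3, 8],
     [3, 4, 6]]
print(dominated_rows(A))   # [2]
print(dominated_cols(A))   # [2]
```

Deleting the reported row and column leaves a 2×2 game, which can then be solved graphically or analytically.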

Operations research is a relatively new area, whose brief history goes back to the beginning of World War II. This exact mathematical science contains a well-defined set of general principles which provide researchers with a plan for carrying out the scientific study of operations. It includes the following stages: 1. Formulation of the problem. 2. Construction of a mathematical model representing the system under study. 3. Obtaining a solution from this model. 4. Checking the model and the solution obtained from it. 5. Establishing control over the solution. 6. Practical implementation of the solution.

Formulating the problem. Serious attention must be paid to defining the general nature of the problem and, more importantly, the objectives of the research. These objectives should be formulated in behavioral terms in order to minimize or eliminate ambiguity and uncertainty. Time must also be allowed for correctly prioritizing realistically achievable goals. Too long a list of goals can cause difficulties in their realization, especially if the goals are not clearly linked in a logical sequence.

Construction of a mathematical model. The second phase of research, from the standpoint of operations research, involves the description of a model. The purpose of the model is to represent the real world. In operations research such models are symbolic, expressed in mathematical terms; the classical equation E = mc² is a typical example of a mathematical model. The traditional forms of such models are algebraic equations, which are not only more economical than verbal formulations, but also entail the thoroughness and precision of definition necessary for a clear expression and understanding of individual elements and their relationships. The most important task in building such a model is the clear and precise development and definition of the objective function, which expresses the relationship between the independent and dependent variables.

Deriving a solution from the model. The third phase is to find a solution. As a rule it is desirable to find an optimal, or best, solution, but it should be kept in mind that such a solution has value only in the context of the model under consideration. Since the model is only a representation of a real-world problem, there are many situations in which the optimal solution may not correspond to the best choice. However, when the optimal solution is combined with less optimal or more realistic alternative solutions, with the possibility of subsequently testing them against the real problem, its use brings definite benefits. One such benefit is determining, at the end of the study, the relative distance between this ideal solution and the accepted alternative. A by-product of this methodology is the assumption that less optimal solutions can be regarded as stepping stones on the way to the goal; this method of successive approximations can lead the operations researcher to more fruitful results. There are many mathematical procedures for obtaining solutions in operations research, largely based on applications of probability theory.

Checking the model and the solution derived from it. Validation of the model and the solution involves two steps. The first consists in a thorough analysis of all elements of the model, including rechecking its algebraic factors for careless errors that might affect validity. Another, even more important, step involves re-examining the relationship of the model to the assumptions originally used in developing it. A more systematic review plan also includes the use of historical data that can easily be entered into the model so that a prototype solution can be obtained; these data must be carefully reviewed to assure the operations researcher of the validity of the check. It should be noted that although the model is developed on the basis of past data and needs, it may behave quite differently in the future. Another common mistake is introducing into the model factors that were not represented in the historical database.

Establishing control. The fifth stage, establishing control over the solution, arises in the course of repeated use of the model. Control over the model is established when the operations researcher allows for discrepancies in the historical data and recognizes that these discrepancies can affect the relationships between the model's elements and the resulting solutions. Another important step may be the development of restrictions on selected basic model parameters, in order to establish a range of acceptable values based on real data.

Implementing the model. The final step is the introduction of real data into the model. Practical implementation involves the obvious step of entering real data and obtaining a solution to a real problem. In addition, it is important to assess how close the real solution is to the solutions obtained earlier, as well as the consequences of this solution for improving the way the model is used. These steps provide an important link between the mathematical nature of operations research and its practical results. Ultimately these decisions and their managerial implications are used by an experienced operations research specialist to refine the model for possible future use.

See also Methodology of (scientific) research

R. S. Endrulis

