The product of events. Dependent and independent events. Probability theory: the probability of an event, random events. Independent and incompatible events in probability theory. Addition of the probabilities of incompatible events


The classical definition of probability.

The probability of an event is a quantitative measure introduced to compare events by the degree of possibility of their occurrence.

An event represented as a set (sum) of several elementary events is called a composite event.

An event that cannot be broken down into simpler ones is called elementary.

An event is called impossible if it never occurs under the conditions of a given experiment (test).

Certain and impossible events are not random.

Joint (compatible) events: several events are called joint if, in a given experiment, the occurrence of one of them does not exclude the occurrence of the others.

Incompatible events: several events are called incompatible in a given experiment if the occurrence of one of them excludes the occurrence of the others. Two events are called opposite if one of them occurs if and only if the other does not.

The probability P(A) of event A is the ratio of the number m of elementary events (outcomes) favorable to the occurrence of event A to the number n of all elementary events possible under the conditions of the given probabilistic experiment: P(A) = m/n. (1)

The following properties of probability follow from the definition:

1. The probability of a random event is a positive number between 0 and 1:

0 < P(A) < 1. (2)

2. The probability of a certain event is equal to 1:

P(A) = 1. (3)

3. If an event is impossible, then its probability is equal to 0:

P(A) = 0. (4)

4. If events A and B are incompatible, then the probability of their sum is equal to the sum of their probabilities:

P(A + B) = P(A) + P(B). (5)

5. If events A and B are joint, then the probability of their sum is equal to the sum of the probabilities of these events minus the probability of their joint occurrence:

P(A + B) = P(A) + P(B) - P(AB). (6)

6. If A and Ā are opposite events, then

P(A) + P(Ā) = 1. (7)

7. The sum of the probabilities of events A1, A2, ..., An forming a complete group is equal to 1:

P(A1) + P(A2) + ... + P(An) = 1. (8)
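If it helps to see these properties in action, here is a small illustrative sketch (my own example, not from the text) that checks properties 5 and 7 by direct enumeration on a standard 52-card deck, with Python used simply as a calculator:

```python
from fractions import Fraction

# A = "a spade is drawn", B = "an ace is drawn" from a well-shuffled 52-card deck.
deck = [(rank, suit) for rank in range(1, 14) for suit in ('S', 'H', 'D', 'C')]
A = {c for c in deck if c[1] == 'S'}          # 13 spades
B = {c for c in deck if c[0] == 1}            # 4 aces

def P(event):
    return Fraction(len(event), len(deck))    # classical definition: P = m/n

# Property 5: P(A + B) = P(A) + P(B) - P(AB) for joint (compatible) events
assert P(A | B) == P(A) + P(B) - P(A & B)     # 4/13 = 1/4 + 1/13 - 1/52

# Property 7: the four suits form a complete group of incompatible events
suits = [{c for c in deck if c[1] == s} for s in ('S', 'H', 'D', 'C')]
assert sum(P(ev) for ev in suits) == 1

print(P(A), P(B), P(A & B), P(A | B))         # 1/4 1/13 1/52 4/13
```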

In economic studies, the values m and n in formula (1) can be interpreted differently. In the statistical definition of probability, n is understood as the number of observations (trials) of the experiment and m as the number of trials in which the event occurred. In this case the ratio m/n is called the relative frequency (frequency) of the event.

Events A and B are called independent if the probability of each of them does not depend on whether or not the other event occurred. The probabilities of independent events are called unconditional.

Events A and B are called dependent if the probability of each of them depends on whether or not the other event occurred. The probability of event B, computed under the assumption that event A has already occurred, is called the conditional probability P(B|A).


If two events A and B are independent, then the following equalities hold:

P(B) = P(B|A), P(A) = P(A|B), or P(B|A) - P(B) = 0. (9)

The probability of the product of two dependent events A and B is equal to the product of the probability of one of them and the conditional probability of the other:

P(AB) = P(B)·P(A|B) or P(AB) = P(A)·P(B|A). (10)

The conditional probability of event B, given that event A has occurred:

P(B|A) = P(AB)/P(A). (11)

The probability of the product of two independent events A and B is equal to the product of their probabilities:

P(AB) = P(A)·P(B). (12)
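To illustrate formulas (10)-(12), here is a hedged sketch with an urn example of my own (not from the text): two balls are drawn without replacement, so the events are dependent, and the conditional probability of formula (11) is recovered by enumerating the equally likely ordered draws.

```python
from fractions import Fraction
from itertools import permutations

# An urn with 6 white and 4 black balls; two balls drawn without replacement.
# A = "the 1st ball is white", B = "the 2nd ball is white" (dependent events).
balls = ['w'] * 6 + ['b'] * 4
draws = list(permutations(range(10), 2))          # all 90 ordered pairs of distinct balls

def P(pred):
    return Fraction(sum(pred(i, j) for i, j in draws), len(draws))

P_A  = P(lambda i, j: balls[i] == 'w')
P_AB = P(lambda i, j: balls[i] == 'w' and balls[j] == 'w')
P_B_given_A = P_AB / P_A                          # formula (11): P(B|A) = P(AB)/P(A)

print(P_A, P_B_given_A, P_AB)                     # 3/5 5/9 1/3
assert P_AB == P_A * P_B_given_A                  # formula (10) for dependent events
assert P_AB != P_A * P_A                          # formula (12) fails: the draws are not independent
```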

If several events are pairwise independent, their independence in the aggregate does not follow from this.

Events A1, A2, ..., An (n > 2) are called independent in the aggregate if the probability of each of them does not depend on whether or not any of the other events, or any combination of them, occurred.

The probability of the joint occurrence of several events that are independent in the aggregate is equal to the product of the probabilities of these events:

P(A1·A2·A3·…·An) = P(A1)·P(A2)·P(A3)·…·P(An). (13)

The dependence of events is understood in the probabilistic sense, not the functional one. This means that the occurrence of one of two dependent events does not allow one to judge unambiguously whether the other occurred. Probabilistic dependence means that the occurrence of one of the dependent events only changes the probability of the other. If the probability does not change, the events are considered independent.

Definition. Let (Ω, F, P) be an arbitrary probability space and A, B be random events with P(B) > 0. Event A is said to be independent of event B if its conditional probability coincides with the unconditional probability:

P(A|B) = P(A).

If P(A|B) ≠ P(A), then event A is said to depend on event B.

The concept of independence is symmetric: if event A does not depend on event B, then event B does not depend on event A. Indeed, let P(A|B) = P(A). Then P(B|A) = P(AB)/P(A) = P(A|B)P(B)/P(A) = P(B). Therefore one simply says that events A and B are independent.

The following symmetric definition of the independence of events follows from the rule of multiplication of probabilities.

Definition. Events A and B defined on the same probability space are called independent if

P(AB) = P(A)·P(B).

If P(AB) ≠ P(A)·P(B), then events A and B are called dependent.

Note that this definition is also valid in the case when P(A) = 0 or P(B) = 0.

Properties of independent events.

1. If events A and B are independent, then the following pairs of events are also independent: Ā and B, A and B̄, Ā and B̄.

▲ Let us prove, for example, the independence of A and B̄. Represent event A as A = AB + AB̄. Since the events AB and AB̄ are incompatible, P(A) = P(AB) + P(AB̄), and by the independence of events A and B we get P(AB̄) = P(A) - P(AB) = P(A) - P(A)P(B) = P(A)(1 - P(B)) = P(A)P(B̄). Hence P(AB̄) = P(A)P(B̄), which means independence. ■

2. If event A is independent of the incompatible events B1 and B2 (B1B2 = ∅), then event A is also independent of their sum B1 + B2.

▲ Indeed, using the additivity axiom of probability and the independence of event A from events B1 and B2, we have:

P(A(B1 + B2)) = P(AB1 + AB2) = P(AB1) + P(AB2) = P(A)P(B1) + P(A)P(B2) = P(A)(P(B1) + P(B2)) = P(A)P(B1 + B2). ■

The relationship between the concepts of independence and incompatibility.

Let A and B be any events with nonzero probability: P(A) > 0 and P(B) > 0, so that P(A)P(B) > 0. If at the same time the events A and B are incompatible (AB = ∅), then P(AB) = 0, and therefore the equality P(AB) = P(A)P(B) can never hold. Thus, incompatible events (of nonzero probability) are dependent.

When more than two events are considered at the same time, their pairwise independence does not sufficiently characterize the connection between the events of the entire group. In this case, the concept of independence in the aggregate is introduced.

Definition. Events A1, A2, ..., An defined on the same probability space are called collectively independent (independent in the aggregate) if for any 2 ≤ m ≤ n and any combination of indices i1 < i2 < ... < im the following equality holds:

P(Ai1·Ai2·…·Aim) = P(Ai1)·P(Ai2)·…·P(Aim).

For m = 2, independence in the aggregate implies pairwise independence of the events. The converse is not true.


Example (S. N. Bernstein).

A random experiment consists in tossing a regular tetrahedron and observing which face it lands on. The faces of the tetrahedron are colored as follows: face 1 is white, face 2 is black, face 3 is red, and face 4 contains all three colors.

Consider the events:

A = {white appears}; B = {black appears};

C = {red appears}.

Then P(A) = P(B) = P(C) = 2/4 = 1/2, and P(AB) = P(AC) = P(BC) = 1/4 = P(A)·P(B) = P(A)·P(C) = P(B)·P(C).

Consequently, the events A, B and C are pairwise independent.

But P(ABC) = 1/4 ≠ 1/8 = P(A)·P(B)·P(C).

Therefore the events A, B and C are not collectively independent.
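The same conclusion can be reached mechanically; the sketch below simply enumerates the four equally likely faces described above (the set representation of the faces is my own choice of encoding):

```python
from fractions import Fraction

# Bernstein's tetrahedron: face 1 is white, face 2 black, face 3 red,
# face 4 carries all three colors; each face comes up with probability 1/4.
faces = [{'white'}, {'black'}, {'red'}, {'white', 'black', 'red'}]

def P(colors):
    """Probability that all the given colors are present on the face that comes up."""
    favorable = [f for f in faces if colors <= f]
    return Fraction(len(favorable), len(faces))

PA, PB, PC = P({'white'}), P({'black'}), P({'red'})
print(PA, PB, PC)                                  # 1/2 1/2 1/2
print(P({'white', 'black'}), PA * PB)              # 1/4 1/4  -> pairwise independent
print(P({'white', 'black', 'red'}), PA * PB * PC)  # 1/4 1/8  -> not collectively independent
```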

In practice, as a rule, the independence of events is not established by checking the definition; on the contrary, events are considered independent on the basis of external considerations or the circumstances of the random experiment, and this independence is then used to find the probabilities of products of events.

Theorem (multiplication of probabilities for independent events).

If the events A1, A2, ..., An defined on the same probability space are independent in the aggregate, then the probability of their product is equal to the product of their probabilities:

P(A1·A2·…·An) = P(A1)·P(A2)·…·P(An).

▲ The proof of the theorem follows from the definition of independence of events in the aggregate, or from the general multiplication theorem of probabilities together with the fact that for independent events the conditional probabilities coincide with the unconditional ones. ■

Example 1 (a typical example of finding conditional probabilities, the concept of independence, the theorem of addition of probabilities).

The electrical circuit consists of three independently operating elements. The probability of failure of each element is known.

1) Find the probability of circuit failure.

2) The circuit is known to have failed.

What is the probability that the failure was caused by:

a) the 1st element; b) the 3rd element?

Solution. Consider the events Ak = {the k-th element failed}, k = 1, 2, 3, and the event A = {the circuit failed}. Then the event A is represented in the form:

A = A1 + A2 + A3.

1) Since the events A1, A2 and A3 are not incompatible, the additivity axiom of probability (P3) is not applicable, and to find P(A) one should use the general addition theorem of probabilities, according to which

P(A1 + A2 + A3) = P(A1) + P(A2) + P(A3) - P(A1A2) - P(A1A3) - P(A2A3) + P(A1A2A3).

Independent events

In the practical application of probabilistic-statistical methods of decision-making, the concept of independence is used constantly. For example, when statistical methods of product quality control are applied, one speaks of independent measurements of the values of the controlled parameters for the units of production included in the sample, of the independence of the appearance of defects of one type from the appearance of defects of another type, and so on. The independence of random events is understood in probabilistic models in the following sense.

Definition 2. Events A and B are called independent if P(AB) = P(A)P(B). Several events A, B, C, ... are called independent if the probability of their joint occurrence is equal to the product of the probabilities of each of them occurring separately: P(ABC…) = P(A)P(B)P(C)…

This definition corresponds to the intuitive idea of independence: the occurrence or non-occurrence of one event should not affect the occurrence or non-occurrence of the other. Sometimes the relation P(AB) = P(A)P(B|A) = P(B)P(A|B), which is valid when P(A)P(B) > 0, is also called the probability multiplication theorem.

Statement 1. Let the events A and B be independent. Then the events Ā and B are independent, the events A and B̄ are independent, and the events Ā and B̄ are independent (here Ā is the event opposite to A, and B̄ is the event opposite to B).

Indeed, from property c) in (3) it follows that for events C and D whose product is empty, P(C + D) = P(C) + P(D). Since the intersection of AB and ĀB is empty and their union is B, we have P(AB) + P(ĀB) = P(B). Since A and B are independent, P(ĀB) = P(B) - P(AB) = P(B) - P(A)P(B) = P(B)(1 - P(A)). Note now that it follows from relations (1) and (2) that P(Ā) = 1 - P(A). Hence P(ĀB) = P(Ā)P(B).

The derivation of the equality P(AB̄) = P(A)P(B̄) differs from the previous one only in that A and B are interchanged everywhere.

To prove the independence of Ā and B̄ we use the fact that the events AB, ĀB, AB̄ and ĀB̄ have no pairwise common elements and together make up the entire space of elementary events. Consequently, P(AB) + P(ĀB) + P(AB̄) + P(ĀB̄) = 1. Using the relations proved above, we obtain P(ĀB̄) = 1 - P(AB) - P(B)(1 - P(A)) - P(A)(1 - P(B)) = (1 - P(A))(1 - P(B)) = P(Ā)P(B̄), as required.

Example 3. Consider an experiment consisting in throwing a die with the numbers 1, 2, 3, 4, 5, 6 written on its faces. We assume that all faces have the same chance of ending up on top. Let us construct the corresponding probability space and show that the events "the top face shows an even number" and "the top face shows a number divisible by 3" are independent.

Analysis of the example. The space of elementary outcomes consists of 6 elements: "the face with 1 is on top", "the face with 2 is on top", ..., "the face with 6 is on top". The event "the top face shows an even number" consists of three elementary events - those in which 2, 4 or 6 is on top. Since all faces have the same chance of being on top, all elementary events must have the same probability. Since there are 6 elementary events in total, each of them has probability 1/6. By Definition 1, the event "the top face shows an even number" has probability 1/2, and the event "the top face shows a number divisible by 3" has probability 1/3. The product of these events consists of the single elementary event "the face with 6 is on top" and therefore has probability 1/6. Since 1/6 = 1/2 × 1/3, the events under consideration are independent in accordance with the definition of independence.
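For readers who like to double-check by brute force, here is a minimal sketch of the same enumeration:

```python
from fractions import Fraction

# A fair die: A = "even number on top", B = "number divisible by 3 on top".
outcomes = range(1, 7)
A = {k for k in outcomes if k % 2 == 0}      # {2, 4, 6}, P(A) = 1/2
B = {k for k in outcomes if k % 3 == 0}      # {3, 6},    P(B) = 1/3

def P(event):
    return Fraction(len(event), len(outcomes))

print(P(A), P(B), P(A & B))                  # 1/2 1/3 1/6
assert P(A & B) == P(A) * P(B)               # 1/6 = 1/2 * 1/3, so A and B are independent
```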

When assessing the probability of occurrence of a random event, it is very important to understand in advance whether the probability of the event of interest to us depends on how other events develop. In the classical scheme, when all outcomes are equally likely, we can already estimate the probability of any particular event on its own. We can do this even if the event is a complex collection of several elementary outcomes. But what if several random events occur simultaneously or sequentially? How does this affect the probability of the event of interest?

If I roll a die several times and want a six to come up, but I keep being unlucky, does that mean I should increase my bet, because, according to probability theory, I am about to get lucky? Alas, probability theory states nothing of the kind. Neither dice, nor cards, nor coins can remember what they showed us last time. It does not matter to them at all whether it is the first or the tenth time today that I am testing my luck. Every time I repeat the throw, I know only one thing: this time the probability of getting a six is again one-sixth. Of course, this does not mean that the number I need will never come up. It only means that my result after the first throw and after any other throw are independent events.

Events A and B are called independent if the occurrence of one of them does not affect the probability of the other in any way. For example, the probability of hitting a target with the first of two guns does not depend on whether the target was hit by the other gun, so the events "the first gun hit the target" and "the second gun hit the target" are independent. If two events A and B are independent, and the probability of each of them is known, then the probability of the simultaneous occurrence of both event A and event B (denoted AB) can be calculated using the following theorem.

The multiplication theorem for the probabilities of independent events

The probability of the simultaneous occurrence of two independent events is equal to the product of the probabilities of these events: P(AB) = P(A)·P(B).

Example 1. The probabilities of hitting the target when firing the first and second guns are respectively p1 = 0.7 and p2 = 0.8. Find the probability that in one volley both guns hit the target simultaneously.

Solution: as we have already seen, the events A (hit by the first gun) and B (hit by the second gun) are independent, so P(AB) = P(A)·P(B) = p1·p2 = 0.7·0.8 = 0.56. What happens to our estimates if the events involved are not independent? Let's modify the previous example a bit.

Example 2. Two shooters compete by shooting at targets, and if one of them shoots accurately, the opponent becomes nervous and his results deteriorate. How do we turn this everyday situation into a mathematical problem and outline ways to solve it? It is intuitively clear that we must somehow separate the two scenarios of how events develop, essentially drawing up two different problems. In the first case, if the opponent misses, the scenario is favorable for the nervous athlete and his accuracy will be higher. In the second case, if the opponent has made decent use of his chance, the probability of hitting the target for the second athlete decreases. To separate the possible scenarios (often called hypotheses), we will often use a "probability tree" scheme. This scheme is similar in meaning to the decision tree that you have probably already dealt with. Each branch represents a separate scenario of how events develop, only now it carries its own value of the so-called conditional probability (q1, q2, 1 - q1, 1 - q2).

This scheme is very convenient for analyzing sequential random events. It remains to clarify one more important question: where do the initial values of the probabilities come from in real situations? After all, probability theory does not work only with coins and dice. Usually these estimates are taken from statistics, and when statistics are not available, we conduct our own research. And we often have to start not with collecting data, but with the question of what information we actually need.

Example 3. Suppose we need to estimate the market size in a city with a population of one hundred thousand for a new product that is not an essential item - say, a balm for the care of colored hair. Consider a "probability tree" scheme, in which the value of the probability on each "branch" has to be estimated approximately. So, our estimates of the market capacity:

1) 50% of all residents of the city are women,

2) of all women, only 30% dye their hair often,

3) only 10% of them use conditioners for colored hair,

4) of these, only 10% can get up the courage to try a new product,

5) 70% of them usually buy everything not from us, but from our competitors.


According to the law of multiplication of probabilities, we determine the probability of the event of interest A = {a resident of the city buys this new balm from us}: 0.5·0.3·0.1·0.1·0.3 = 0.00045. Multiplying this probability by the number of inhabitants of the city, we get only about 45 potential customers, and if we consider that one bottle of this balm lasts for several months, the trade is not very lively. Still, there is some benefit from our estimates. Firstly, we can compare the forecasts of different business ideas; they will have different "forks" in the diagrams, and, of course, the probability values will also differ. Secondly, as we have already said, a random variable is not called random because it does not depend on anything at all - it is just that its exact value is not known in advance. We know that the average number of buyers can be increased (for example, by advertising the new product). So it makes sense to focus our efforts on those "forks" where the probability distribution does not particularly suit us, on those factors that we are able to influence. Let us consider another quantitative example of shopping behavior research.
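A rough sketch of this funnel calculation (the shares are exactly the estimates listed above; the variable names are my own):

```python
# Multiplying along one branch of the "probability tree" for the balm example.
population = 100_000
funnel = [
    0.5,   # share of residents who are women
    0.3,   # of them, share who dye their hair often
    0.1,   # of them, share who use conditioner for colored hair
    0.1,   # of them, share willing to try a new product
    0.3,   # of them, share who would buy from us rather than from competitors (1 - 0.7)
]

p_buy = 1.0
for share in funnel:
    p_buy *= share                 # multiplication rule along the branch

print(p_buy)                       # ≈ 0.00045
print(round(population * p_buy))   # about 45 potential customers
```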

Example 4. On average, 10,000 people visit the food market per day. The probability that a market visitor enters the dairy pavilion is 1/2. It is known that, on average, 500 kg of various products are sold in this pavilion per day. Can we say that the average purchase in the pavilion weighs only 100 g?

Discussion.

Of course not. It is clear that not everyone who entered the pavilion ended up buying something there.


As shown in the diagram, in order to answer the question about the average purchase weight, we must find the answer to another question: what is the probability that a person who enters the pavilion buys something there? If we do not have such data at our disposal, but we need them, we will have to obtain them ourselves by observing the visitors of the pavilion for some time. Suppose our observations show that only a fifth of the pavilion's visitors buy something. As soon as we have this estimate, the task becomes simple. Of the 10,000 people who come to the market, 5,000 will enter the dairy pavilion, and there will be only 1,000 purchases. The average purchase weight is therefore 500 grams. It is interesting to note that, in order to build a complete picture of what is happening, the logic of conditional "branching" must be defined at each stage of our reasoning as clearly as if we were working with a "specific" situation rather than with probabilities.
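The same arithmetic as a short sketch (the 1/5 purchase share is the value assumed above from observation):

```python
# Dairy-pavilion example: expected number of purchases and average purchase weight.
visitors_per_day  = 10_000
p_enter_pavilion  = 0.5     # probability a market visitor enters the dairy pavilion
p_buy_given_enter = 0.2     # observed: only a fifth of those who enter buy something
daily_sales_kg    = 500

buyers = visitors_per_day * p_enter_pavilion * p_buy_given_enter
print(buyers)                                  # 1000 purchases per day
print(daily_sales_kg * 1000 / buyers, "g")     # 500.0 g average purchase weight
```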

Tasks for self-testing.

1. Let there be an electrical circuit consisting of n series-connected elements, each of which operates independently of the others. The probability p of failure of each element is known. Determine the probability of correct operation of the entire section of the circuit (event A).


2. The student knows 20 of the 25 exam questions. Find the probability that the student knows the three questions suggested by the examiner.

3. Production consists of four successive stages, at each of which equipment operates, for which the probabilities of failure within the next month are equal to p 1, p 2, p 3 and p 4, respectively. Find the probability that there will be no production interruption due to equipment failure in a month.
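If you want to check your answers afterwards, here is a minimal sketch of the formulas involved; the helper names are mine, and the parameters of tasks 1 and 3 are left symbolic so you can plug in your own values:

```python
from math import comb, prod

def task1_circuit_ok(p, n):
    """n series-connected elements, each failing independently with probability p:
    the whole section works only if every element works."""
    return (1 - p) ** n

def task2_knows_all_three():
    """20 known questions out of 25, three questions drawn without replacement."""
    return comb(20, 3) / comb(25, 3)            # equivalently 20/25 * 19/24 * 18/23

def task3_no_interruption(p1, p2, p3, p4):
    """Four independent production stages; no interruption means none of them fails."""
    return prod(1 - p for p in (p1, p2, p3, p4))

print(task2_knows_all_three())                  # ≈ 0.496
```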

Addition and multiplication theorems for probabilities.
Dependent and independent events

The headline looks scary, but in reality everything is very simple. In this lesson we will get acquainted with the theorems of addition and multiplication of the probabilities of events, and we will also analyze typical problems which, along with problems on the classical definition of probability, will definitely meet you on your way - or, more likely, have already met you. To study the materials of this article effectively, you need to know and understand the basic terms of probability theory and be able to perform the simplest arithmetic operations. As you can see, very little is required, and therefore a solid plus in your assets is almost guaranteed. On the other hand, I once again warn against a superficial attitude to the practical examples - there are plenty of subtleties here too. Good luck!

The addition theorem for the probabilities of incompatible events: the probability that one of two incompatible events A or B occurs (no matter which one) is equal to the sum of the probabilities of these events:

P(A + B) = P(A) + P(B)

A similar fact is true for a larger number of incompatible events, for example, for three incompatible events A, B and C:

P(A + B + C) = P(A) + P(B) + P(C)

A dream of a theorem =) However, such a dream still has to be proved; a proof can be found, for example, in the textbook by V.E. Gmurman.

Let's get acquainted with new, so far not met concepts:

Dependent and independent events

Let's start with independent events. Events are independent if the probability of occurrence of any one of them does not depend on the appearance or non-appearance of the other events of the set under consideration (in all possible combinations). But why grind out general phrases:

The multiplication theorem for the probabilities of independent events: the probability of the joint occurrence of independent events A and B is equal to the product of the probabilities of these events:

P(AB) = P(A)·P(B)

Let's return to the simplest example of the 1st lesson, in which two coins are tossed, and consider the events:

A1 - heads will come up on the 1st coin;
A2 - heads will come up on the 2nd coin.

Let's find the probability of the event A1A2 (heads will come up on the 1st coin and heads will come up on the 2nd coin - recall how the product of events is read!). The probability of getting heads on one coin does not depend in any way on the result of tossing the other coin, therefore the events are independent.

By the multiplication theorem for the probabilities of independent events: P(A1A2) = P(A1)·P(A2) = 0.5·0.5 = 0.25.

Similarly:
P(Ā1Ā2) = 0.5·0.5 = 0.25 - the probability that the 1st coin lands tails and the 2nd lands tails;
P(A1Ā2) = 0.5·0.5 = 0.25 - the probability that heads comes up on the 1st coin and tails on the 2nd;
P(Ā1A2) = 0.5·0.5 = 0.25 - the probability that tails comes up on the 1st coin and heads on the 2nd.

Note that these four events form a complete group and the sum of their probabilities is equal to one: 0.25 + 0.25 + 0.25 + 0.25 = 1.

The multiplication theorem obviously extends to a larger number of independent events; for example, if the events A1, A2 and A3 are independent, then the probability of their joint occurrence is P(A1A2A3) = P(A1)·P(A2)·P(A3). Let's practice on specific examples:

Problem 3

Each of three boxes contains 10 parts. The first box contains 8 standard parts, the second - 7, the third - 9. One part is taken at random from each box. Find the probability that all the parts taken will be standard.

Solution: The probability of taking a standard or non-standard part from any box does not depend on which parts are taken from the other boxes, so the problem deals with independent events. Consider the following independent events:

A1 - a standard part is taken from the 1st box;
A2 - a standard part is taken from the 2nd box;
A3 - a standard part is taken from the 3rd box.

By the classical definition:
P(A1) = 8/10 = 0.8, P(A2) = 7/10 = 0.7, P(A3) = 9/10 = 0.9 - the corresponding probabilities.

The event of interest to us (a standard part is taken from the 1st box and a standard part from the 2nd and a standard part from the 3rd) is expressed by the product A = A1·A2·A3.

By the multiplication theorem for the probabilities of independent events:

P(A) = P(A1)·P(A2)·P(A3) = 0.8·0.7·0.9 = 0.504 - the probability that a standard part will be taken from each of the three boxes.

Answer: 0.504
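A quick cross-check of this answer, both by the direct product and by brute-force enumeration of all ways to pick one part from each box (purely illustrative):

```python
from itertools import product
from fractions import Fraction

boxes = [8, 7, 9]                       # standard parts per box, out of 10 each
print(0.8 * 0.7 * 0.9)                  # ≈ 0.504 (direct product)

# Enumerate every way to pick one of the 10 parts from each box;
# parts with index < boxes[i] are the standard ones in box i.
favorable = sum(1 for picks in product(range(10), repeat=3)
                if all(picks[i] < boxes[i] for i in range(3)))
print(Fraction(favorable, 10 ** 3))     # 63/125 = 0.504
```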

After invigorating exercises with boxes, no less interesting urns await us:

Problem 4

Each of three urns contains 6 white and 4 black balls. One ball is taken at random from each urn. Find the probability that: a) all three balls will be white; b) all three balls will be the same color.

Based on the information received, figure out how to deal with point "b" ;-) A sample solution is written up in an academic style with a detailed list of all the events.
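If you want to verify your own solution of Problem 4, the following sketch computes the same quantities directly; treat it only as a self-check, since the detailed sample solution itself is referenced rather than reproduced here:

```python
# Each urn: 6 white and 4 black balls; one ball drawn independently from each urn.
p_white = 0.6
p_black = 0.4

p_all_white  = p_white ** 3                    # a) ≈ 0.216
p_same_color = p_white ** 3 + p_black ** 3     # b) all white or all black ≈ 0.28

print(p_all_white, p_same_color)
```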

Dependent events. An event is called dependent if its probability depends on one or more events that have already occurred. You don't have to go far for examples - just go to the nearest store:

- Tomorrow at 19.00 there will be fresh bread on sale.

The probability of this event depends on many other events: whether fresh bread will be delivered tomorrow, whether it will sell out before 7 pm, and so on. Depending on various circumstances, this event can be either certain or impossible. So the event is dependent.

Bread... and, as the Romans demanded, circuses:

- the student will receive a simple ticket for the exam.

If you do not go first, then the event will be dependent, since its probability will depend on which tickets have already been drawn by fellow students.

How to define event dependency / independence?

Sometimes this is stated directly in the problem statement, but more often you have to carry out the analysis yourself. There is no unambiguous guideline here; the fact of dependence or independence of events follows from natural logical reasoning.

In order not to lump everything together, problems on dependent events will be singled out in the following lesson, and for now we will consider the most common combination of theorems in practice:

Problems on the addition theorem for the probabilities of incompatible events
and the multiplication theorem for the probabilities of independent events

This tandem, by my subjective assessment, appears in about 80% of the problems on the topic under consideration. A hit of hits and a real classic of probability theory:

Problem 5

Two shooters fire one shot each at the target. The probability of a hit for the first shooter is 0.8, for the second - 0.6. Find the probability that:

a) only one shooter hits the target;
b) at least one of the shooters hits the target.

Solution: The probability of hitting / missing one shooter obviously does not depend on the performance of the other shooter.

Consider the events:
A1 - the 1st shooter hits the target;
A2 - the 2nd shooter hits the target.

By condition: P(A1) = 0.8, P(A2) = 0.6.

Let's find the probabilities of the opposite events - that the corresponding shooters miss:
P(Ā1) = 1 - 0.8 = 0.2, P(Ā2) = 1 - 0.6 = 0.4

a) Consider the event B - only one shooter hits the target. This event consists of two incompatible outcomes:

the 1st shooter hits and the 2nd misses
or
the 1st misses and the 2nd hits.

In the language of the algebra of events this fact is written by the following formula:

B = A1·Ā2 + Ā1·A2

First we use the addition theorem for the probabilities of incompatible events, then the multiplication theorem for the probabilities of independent events:

P(B) = P(A1·Ā2) + P(Ā1·A2) = P(A1)·P(Ā2) + P(Ā1)·P(A2) = 0.8·0.4 + 0.2·0.6 = 0.44

- the probability that there will be only one hit.

b) Consider the event C - at least one of the shooters hits the target.

First of all, LET'S THINK about what the condition "AT LEAST ONE" means. In this case it means that either the 1st shooter hits (and the 2nd misses), or the 2nd hits (and the 1st misses), or both shooters hit at once - a total of 3 incompatible outcomes.

Method one: taking into account the ready-made probability of the previous item, it is convenient to represent the event as the sum of the following incompatible events:

only one shooter hits (the event B, consisting, in turn, of 2 incompatible outcomes)
or
both shooters hit - let's denote this event by the letter D.

Thus: C = B + D

By the multiplication theorem for the probabilities of independent events:
P(D) = P(A1·A2) = P(A1)·P(A2) = 0.8·0.6 = 0.48 - the probability that the 1st shooter hits and the 2nd shooter hits.

By the addition theorem for the probabilities of incompatible events:
P(C) = P(B) + P(D) = 0.44 + 0.48 = 0.92 - the probability of at least one hit on the target.

Method two: consider the opposite event C̄ - both shooters miss.

By the multiplication theorem for the probabilities of independent events:
P(C̄) = P(Ā1·Ā2) = P(Ā1)·P(Ā2) = 0.2·0.4 = 0.08

As a result: P(C) = 1 - P(C̄) = 1 - 0.08 = 0.92

Pay special attention to the second method - in general, it is more rational.

In addition, there is an alternative, third way of solving, based on the addition theorem for joint events, which was not used above.

! If you are reading the material for the first time, then in order to avoid confusion, it is better to skip the next paragraph.

Method three: the events A1 and A2 are joint (compatible), which means that their sum expresses the event "at least one shooter hits the target" (see algebra of events). By the addition theorem for the probabilities of joint events and the multiplication theorem for the probabilities of independent events:

P(C) = P(A1 + A2) = P(A1) + P(A2) - P(A1·A2) = 0.8 + 0.6 - 0.8·0.6 = 0.92

Let's check: the events C̄, B and D (0, 1 and 2 hits respectively) form a complete group, so the sum of their probabilities must equal one:
0.08 + 0.44 + 0.48 = 1, which was required to be verified.

Answer: P(B) = 0.44, P(C) = 0.92
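The whole of Problem 5 can also be cross-checked by enumerating the four hit/miss combinations; a small sketch (illustrative only):

```python
from itertools import product

p1, p2 = 0.8, 0.6

def prob(outcome):                     # outcome = (shooter 1 hits?, shooter 2 hits?)
    a, b = outcome
    return (p1 if a else 1 - p1) * (p2 if b else 1 - p2)

only_one     = sum(prob(o) for o in product([True, False], repeat=2) if sum(o) == 1)
at_least_one = sum(prob(o) for o in product([True, False], repeat=2) if sum(o) >= 1)

print(round(only_one, 2), round(at_least_one, 2))   # 0.44 0.92
```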

With a thorough study of probability theory you will come across dozens of problems with militaristic content and, characteristically, after that you will not want to shoot anyone - the problems are almost a gift. Why not simplify the notation as well? Let's shorten it:

Solution: by condition, p1 = 0.8 and p2 = 0.6 are the hit probabilities of the corresponding shooters. Then the probabilities of their misses are: q1 = 1 - 0.8 = 0.2, q2 = 1 - 0.6 = 0.4

a) According to the theorems of addition of the probabilities of incompatible events and multiplication of the probabilities of independent events:
p1·q2 + q1·p2 = 0.8·0.4 + 0.2·0.6 = 0.44 - the probability that only one shooter hits the target.

b) By the multiplication theorem for the probabilities of independent events:
q1·q2 = 0.2·0.4 = 0.08 - the probability that both shooters miss.

Then: 1 - 0.08 = 0.92 - the probability that at least one of the shooters hits the target.

Answer: 0.44; 0.92

In practice, you can use either way of writing up the solution. Of course, the short way is taken much more often, but the 1st method should not be forgotten - although it is longer, it is more meaningful: it is clearer what, why and for what reason is added and multiplied. In some cases a hybrid style is appropriate, when it is convenient to denote only some of the events by capital letters.

Similar problems to solve on your own:

Problem 6

Two independently operating sensors are installed for a fire alarm. The probabilities that a sensor will be triggered in case of fire are 0.5 and 0.7 for the first and second sensors, respectively. Find the probability that in case of fire:

a) both sensors will fail;
b) both sensors will work.
c) Using the addition theorem for the probabilities of events forming the complete group, find the probability that only one sensor will be triggered in the event of a fire. Check the result by directly calculating this probability (using addition and multiplication theorems).

Here the independence of operation of the devices is directly spelled out in the condition, which, by the way, is an important clarification. The sample solution is written up in an academic style.

What if in a similar problem the same probabilities are given, for example, 0.9 and 0.9? You need to solve it in exactly the same way! (which, in fact, has already been demonstrated in the example with the two coins)
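For self-checking Problem 6 after you have solved it, here is a minimal sketch under the stated trigger probabilities (the worked sample solution itself is referenced, not reproduced here):

```python
# Sensor trigger probabilities in case of fire:
p1, p2 = 0.5, 0.7

both_fail = (1 - p1) * (1 - p2)            # a) ≈ 0.15
both_work = p1 * p2                        # b) ≈ 0.35
only_one  = p1 * (1 - p2) + (1 - p1) * p2  # direct calculation: ≈ 0.5

# c) via the complete group {0, 1, 2 sensors triggered}:
only_one_via_group = 1 - both_fail - both_work
print(both_fail, both_work, only_one, only_one_via_group)
```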

Problem 7

The probability of hitting the target by the first shooter with one shot is 0.8. The probability that the target is not hit after the first and second shooters have fired one shot is 0.08. What is the probability of hitting the target by the second shooter with one shot?

And this is a small puzzle framed in a short way. The condition can be reformulated more succinctly, but I will not redo the original - in practice you have to delve into more ornate wording.
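Once you have solved Problem 7, you can check your answer with this sketch, which assumes (as in the previous problems) that the two shooters act independently:

```python
# P(hit by the 1st shooter) = 0.8, P(neither shooter hits) = 0.08.
# With independent shots, P(neither hits) = (1 - p1) * (1 - p2).
p1 = 0.8
p_no_hit = 0.08

p2 = 1 - p_no_hit / (1 - p1)    # 1 - 0.08 / 0.2
print(round(p2, 2))             # 0.6
```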

Meet the man who has machined an untold number of parts for you =):

Problem 8

A worker operates three machines. The probability that during the shift the first machine will require adjustment is 0.3, the second is 0.75, and the third is 0.4. Find the probability that during the shift:

a) all machines will require adjustment;
b) only one machine will require adjustment;
c) at least one machine will require adjustment.

Solution: since the condition says nothing about a single technological process, the operation of each machine should be considered independent of the operation of the other machines.

By analogy with Problem 5, here you could introduce the events that the corresponding machines will require adjustment during the shift, write down their probabilities, find the probabilities of the opposite events, and so on. But with three objects I don't really feel like writing the problem up that way - it would turn out long and tedious. Therefore it is much more profitable to use the "fast" style here:

By condition: p1 = 0.3, p2 = 0.75, p3 = 0.4 - the likelihood that during the shift the corresponding machines will require a tincture. Then the probabilities that they will not require attention are: q1 = 0.7, q2 = 0.25, q3 = 0.6

One of the readers found a cool typo here, I won't even correct it =)

a) By the multiplication theorem for the probabilities of independent events:
p1·p2·p3 = 0.3·0.75·0.4 = 0.09 - the probability that during the shift all three machines will require adjustment.

b) The event "During the shift, only one machine will require adjustment" consists of three inconsistent outcomes:

1) the 1st machine will require attention and the 2nd machine will not and the 3rd machine will not
or:
2) the 1st machine will not require attention and the 2nd machine will and the 3rd machine will not
or:
3) the 1st machine will not require attention and the 2nd machine will not and the 3rd machine will.

According to the theorems of addition of the probabilities of incompatible events and multiplication of the probabilities of independent events:

p1·q2·q3 + q1·p2·q3 + q1·q2·p3 = 0.3·0.25·0.6 + 0.7·0.75·0.6 + 0.7·0.25·0.4 = 0.045 + 0.315 + 0.07 = 0.43

- the probability that during the shift only one machine will require adjustment.

I think it should now be clear to you where this expression came from.

c) Let's calculate the probability that none of the machines will require adjustment, and then the probability of the opposite event:

q1·q2·q3 = 0.7·0.25·0.6 = 0.105 and 1 - 0.105 = 0.895 - the probability that at least one machine will require adjustment.

Answer: a) 0.09, b) 0.43, c) 0.895

Item "ve" can also be solved through the amount, where is the probability that during the shift only two machines will require adjustment. This event, in turn, includes 3 incompatible outcomes, which are signed by analogy with the "be" clause. Try to find the probability yourself to test the whole problem using equality.

Problem 9

Three guns fired a volley at the target. The probability of a hit with one shot from the first gun is 0.7, from the second - 0.6, from the third - 0.8. Find the probability that: 1) at least one projectile hits the target; 2) only two projectiles hit the target; 3) the target is hit at least twice.

Solution and answer at the end of the lesson.

And again about coincidences: if by the condition two or even all of the initial probabilities coincide (for example, 0.7, 0.7 and 0.7), then exactly the same solution algorithm should be followed.
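A sketch for checking Problem 9 once you have solved it (the worked solution is given at the end of the lesson); it reuses the same enumeration idea as the sketch for Problem 8:

```python
from itertools import product
from math import prod

p = [0.7, 0.6, 0.8]                 # hit probabilities of the three guns

def prob(outcome):                  # outcome[i] = True if the i-th gun hits
    return prod(p[i] if hit else 1 - p[i] for i, hit in enumerate(outcome))

totals = {k: 0.0 for k in range(4)}
for outcome in product([True, False], repeat=3):
    totals[sum(outcome)] += prob(outcome)

at_least_one = 1 - totals[0]
exactly_two  = totals[2]
at_least_two = totals[2] + totals[3]
print(round(at_least_one, 3), round(exactly_two, 3), round(at_least_two, 3))
```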

At the end of the article, let's look at one more common puzzle:

Problem 10

A shooter hits the target with the same probability with each shot. What is this probability if the probability of at least one hit in three shots is 0.973?

Solution: let p denote the probability of hitting the target with each shot,
and q = 1 - p the probability of a miss with each shot.

Let's write down the events:
A - with 3 shots the shooter hits the target at least once;
Ā - the shooter misses all 3 times.

By condition P(A) = 0.973, so the probability of the opposite event is P(Ā) = 1 - 0.973 = 0.027

On the other hand, by the multiplication theorem for the probabilities of independent events:
P(Ā) = q·q·q = q³

Thus:
q³ = 0.027, whence q = 0.3 (the cube root of 0.027)

- the probability of a miss with each shot.

As a result:
p = 1 - q = 1 - 0.3 = 0.7
- the probability of a hit with each shot.

Answer: 0.7

Simple and elegant.
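A one-line numeric check of the same reasoning (illustrative only):

```python
# (1 - p)**3 = 1 - 0.973 = 0.027, so q is the cube root of 0.027.
p_at_least_one = 0.973

q = (1 - p_at_least_one) ** (1 / 3)    # probability of a miss with each shot
p = 1 - q                              # probability of a hit with each shot
print(round(q, 3), round(p, 3))        # 0.3 0.7
```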

In the considered problem, additional questions can be posed about the probability of only one hit, only two hits, and the probability of all three hits on the target. The solution scheme will be exactly the same as in the two previous examples (see the sketch below).

Note, however, the fundamental difference in content: here we are dealing with repeated independent trials, which are performed sequentially, independently of each other, and with the same probability of the outcomes in each trial.
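As a bridge to that topic, here is a sketch of the "additional questions" for Problem 10: with p = 0.7 per shot and three such independent shots, the probabilities of exactly 0, 1, 2 and 3 hits can be obtained by the same addition-and-multiplication reasoning, compactly written with binomial coefficients:

```python
from math import comb

p, n = 0.7, 3
for k in range(n + 1):
    print(k, round(comb(n, k) * p**k * (1 - p)**(n - k), 3))
# 0 hits: 0.027, 1 hit: 0.189, 2 hits: 0.441, 3 hits: 0.343 - and they sum to 1.
```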