
### Computing the determinant of a Vandermonde-like matrix

This is a problem from a student competition, but it turns out to be quite straightforward to solve.

The task is to compute the determinant of the following matrix.

$$A = \begin{bmatrix}1&2&3&\cdots&n\\1^2&2^2&3^2&\cdots&n^2\\\vdots&\vdots&\vdots&\ddots&\vdots\\1^n&2^n&3^n&\cdots&n^n\end{bmatrix}$$

What I noticed is that it looks similar to the Vandermonde matrix, even though it is not exactly the same.

Let's pull out from its determinant a common factor $k$ from the $k$-th column, for each $k=1,2,3,...,n$.

We get

$$det(A) = n! \cdot \begin{vmatrix}1^0&2^0&3^0&\cdots&n^0\\1^1&2^1&3^1&\cdots&n^1\\\vdots&\vdots&\vdots&\ddots&\vdots\\1^{n-1}&2^{n-1}&3^{n-1}&\cdots&n^{n-1}\end{vmatrix}$$

The matrix whose determinant we got in the RHS of the last equality is the transpose of the Vandermonde matrix (with $\alpha_s = s$, for $s = 1,2,...,n$). Hence its determinant equals the determinant of the Vandermonde matrix itself (since we know that $det(A) = det(A^T)$).

So (leaving aside the $n!$ multiplier) the determinant we are left with now is Vandermonde's determinant for the numbers $1,2,3,...,n$. We know its value is equal to

$$\prod\limits_{1 \le i \lt j \le n} (j-i)$$

So the result we get now is

$$det(A) = n! \cdot \prod\limits_{1 \le i \lt j \le n} (j-i)$$

This is the same as

$$n! \cdot (n-1)^1 \cdot (n-2)^2 \cdot (n-3)^3\ ...\ 3^{n-3} \cdot 2^{n-2} \cdot 1^{n-1}$$

Finally it is not difficult to realize that this expression is equal to

$$n!\ \cdot (n-1)!\ \cdot (n-2)!\ ... \ 3! \cdot\ 2! \cdot\ 1!$$

So this is our final answer here for the determinant of $A$.
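As a sanity check, here is a small SymPy sketch (my own addition; the helper names `det_A` and `superfactorial` are mine) which verifies the formula for the first few values of $n$:

```python
# Checking det(A) = n! * (n-1)! * ... * 1! for small n with SymPy
import sympy as sp

def det_A(n):
    # A[i][j] = (j+1)**(i+1): row i holds the (i+1)-th powers of 1, 2, ..., n
    return sp.Matrix(n, n, lambda i, j: (j + 1) ** (i + 1)).det()

def superfactorial(n):
    # n! * (n-1)! * ... * 1!
    result = sp.Integer(1)
    for k in range(1, n + 1):
        result *= sp.factorial(k)
    return result

checked = all(det_A(n) == superfactorial(n) for n in range(1, 7))
print(checked)  # True
```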

### Solving systems of linear equations with SymPy

SymPy is a great Python library for symbolic computation. It can be used e.g. for solving systems of linear equations. So if you are manually solving problems from a linear algebra book, and you want to verify your solutions, you can check them against the solutions that SymPy provides. You just need to know how to code the linear system for SymPy.

Here is an example which illustrates this very well.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

solution = sp.solve(
    [
        2 * x + 3 * y -  5 * z + 1 * t -  2,
        2 * x + 3 * y -  1 * z + 3 * t -  8,
        6 * x + 9 * y -  7 * z + 7 * t - 18,
        4 * x + 6 * y - 12 * z + 1 * t -  1,
    ],
    [x, y, z, t],
)

print(solution)
```

The code above solves the following system.

$$\begin{bmatrix}2 & 3 & -5 & 1 \\2 & 3 & -1 & 3 \\6 & 9 & -7 & 7 \\4 & 6 & -12 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ t \end{bmatrix} = \begin{bmatrix} 2 \\ 8 \\ 18 \\ 1 \end{bmatrix}$$

The solution which SymPy provides is this one

```
{z: 3/2 - t/2, x: -7*t/4 - 3*y/2 + 19/4}
```

The form of this solution implies that $y$ and $t$ are free variables, and can thus be given any values (i.e. they can be taken as free parameters e.g. $y=a, t=b$). The system is then solved (in terms of $a,b$) for the leading variables which happen to be $x$ and $z$.
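One can also ask SymPy to confirm the parametric solution by substituting it back into the system; in this sketch (my own addition) every residual should reduce to zero:

```python
# Substituting SymPy's parametric solution back into the system;
# every equation should reduce to 0
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
eqs = [
    2*x + 3*y - 5*z + 1*t - 2,
    2*x + 3*y - 1*z + 3*t - 8,
    6*x + 9*y - 7*z + 7*t - 18,
    4*x + 6*y - 12*z + 1*t - 1,
]
sol = sp.solve(eqs, [x, y, z, t])
residuals = [sp.simplify(e.subs(sol)) for e in eqs]
print(residuals)  # [0, 0, 0, 0]
```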

### Solving the integrals $\int \frac{ \alpha\sin{x} + \beta\cos{x} }{a\sin{x} + b\cos{x}} dx$

I found this problem in a book, solved it, and rather liked it.

So I am going to share the end result here.

The solution to the integral

$$\int \frac{ \alpha\sin{x} + \beta\cos{x} }{a\sin{x} + b\cos{x}} dx$$

is the antiderivative function

$$F(x) = Ax + B \ln{\left|a\sin{x} + b\cos{x}\right|}$$

where

$$A = \frac{\alpha a + \beta b}{a^2+b^2}$$

$$B = \frac{\beta a - \alpha b}{a^2+b^2}$$

One can verify this by differentiating $F(x)$.

The original problem, the one which I encountered was actually asking to find the constants $A$ and $B$.
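For the record, here is a small SymPy sketch (my own addition) which performs that differentiation; I drop the absolute value, which is valid on intervals where the denominator is positive:

```python
# Differentiating F(x) and comparing with the integrand (SymPy sketch; the
# absolute value is dropped, valid where a*sin(x) + b*cos(x) > 0)
import sympy as sp

x = sp.symbols('x', real=True)
a, b, alpha, beta = sp.symbols('a b alpha beta', positive=True)

A = (alpha*a + beta*b) / (a**2 + b**2)
B = (beta*a - alpha*b) / (a**2 + b**2)
F = A*x + B*sp.log(a*sp.sin(x) + b*sp.cos(x))
integrand = (alpha*sp.sin(x) + beta*sp.cos(x)) / (a*sp.sin(x) + b*sp.cos(x))

difference = sp.simplify(sp.diff(F, x) - integrand)
print(difference)  # 0
```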

### Indefinite integrals from a differential binomial

The expression

$$x^{m}(ax^n + b)^p \tag{1}$$

where $m,n,p \in \mathbb{Q}$ and $a,b$ are real non-zero constants, is called a differential binomial.

Integrals of a differential binomial have the form

$$\int x^{m}(ax^n + b)^p dx \tag{2}$$

They can be expressed in elementary functions in exactly three cases (detailed below); in all other cases, as Chebyshev proved, these integrals cannot be expressed in elementary functions. Here are the 3 cases in which the integral $(2)$ is solvable.

1) $p \in \mathbb{Z}$

In this case we take $k$ to be the least common multiple of the denominators of $m$ and $n$, and we apply the substitution $x=t^k$; this leads us to an integral of a rational function of $t$.

2) $\frac{m+1}{n} \in \mathbb{Z}$

In this case, to solve the integral, we apply the substitution $ax^n + b = t$. Another (rather obvious) second substitution $t = g(u)$ may then be needed to arrive at a rational function of $u$.

3) $\frac{m+1}{n} + p \in \mathbb{Z}$

In this case, to solve the integral, we apply the substitution $a + bx^{-n} = t$. Another (rather obvious) second substitution $t = g(u)$ may then be needed to arrive at a rational function of $u$.
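To make case 2 concrete, here is a sketch (my own choice of example: $m=3$, $n=2$, $p=\frac{1}{2}$, so $\frac{m+1}{n} = 2$) comparing the manual substitution route against SymPy's direct answer:

```python
# Case 2 worked out for m=3, n=2, p=1/2 (so (m+1)/n = 2); the example is my own choice
import sympy as sp

x, t = sp.symbols('x t', positive=True)
integrand = x**3 * sp.sqrt(x**2 + 1)

# direct SymPy answer, for reference
direct = sp.integrate(integrand, x)

# manual route: t = x**2 + 1, dt = 2x dx, so x**3 dx = (t - 1)/2 dt
manual = sp.integrate(sp.Rational(1, 2) * (t - 1) * sp.sqrt(t), t).subs(t, x**2 + 1)

# the two antiderivatives agree up to an additive constant
diffzero = sp.simplify(sp.diff(direct - manual, x))
print(diffzero)  # 0
```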

### IPython commands

IPython basic commands

1) ? after module / object / method name - show help info

2) ?? after module / object / method name - show detailed help info (usually the source code)

3) <TAB> after object or while typing object/method - auto-completion

4) <TAB> when importing - auto-completion

Examples:

from <module> import <TAB>

from itertools import co<TAB>

IPython keyboard shortcuts

1) Navigation shortcuts:

Ctrl-a - Move cursor to the beginning of the line

Ctrl-e - Move cursor to the end of the line

Ctrl-b (or the left arrow key) - Move cursor back one character

Ctrl-f (or the right arrow key) - Move cursor forward one character

2) Text manipulation shortcuts:

Backspace key - Delete previous character in line

Ctrl-d - Delete next character in line

Ctrl-k - Cut text from cursor to end of line

Ctrl-u - Cut text from beginning of line to cursor

Ctrl-y - Yank (i.e., paste) text that was previously cut

Ctrl-t - Transpose (i.e., switch) previous two characters

3) Command history shortcuts:

Ctrl-p (or the up arrow key) - Access previous command in history

Ctrl-n (or the down arrow key) - Access next command in history

Ctrl-r - Reverse-search through command history

4) Miscellaneous Shortcuts:

Ctrl-l - Clear terminal screen

Ctrl-c - Interrupt current Python command

Ctrl-d - Exit IPython session

IPython general information - see here

### An interesting identity involving radicals

This identity came out while solving the indefinite integral

$$I = \int \frac{dx}{(x+1)\sqrt{x^2+x+1}} \tag{1}$$

The answer I obtained was

$$I = F(x) = \ln { \frac{-1 + \sqrt{x^2+x+1}}{-1-2x+\sqrt{x^2+x+1}}} \tag{2}$$

but the answer given in the book was

$$I = G(x) = \ln {\frac{x + \sqrt{x^2+x+1}}{x + 2 + \sqrt{x^2+x+1}}} \tag{3}$$

Checking the two answers with WolframAlpha shows that both are correct.

So it is natural then to ask... what is the relation between these two expressions?

After some short struggle, I found that the relation is as follows:

$$\frac{-1 + \sqrt{x^2+x+1}}{-1-2x+\sqrt{x^2+x+1}}= (-1) \cdot \frac{x + \sqrt{x^2+x+1}}{x + 2 + \sqrt{x^2+x+1}} \tag{4}$$

One can easily prove this by letting $a = \sqrt{x^2+x+1}$ and then doing some simple algebraic manipulations.

Of course $(4)$ is true only for those real values of $x$ for which both sides are well-defined.

The curious thing is that even though $F(x)$ and $G(x)$ have identical derivatives (identical when viewed as expressions of $x$, I mean), they are never simultaneously well-defined. Why? Because when $$f(x) = \frac{-1 + \sqrt{x^2+x+1}}{-1-2x+\sqrt{x^2+x+1}}$$ and $$g(x) = \frac{x + \sqrt{x^2+x+1}}{x+2+\sqrt{x^2+x+1}}$$ are both defined and non-zero, they have opposite signs (as $(4)$ shows). So we can take the logarithm of one or the other, but not of both at the same time.
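A quick numeric spot-check of identity $(4)$ (my own addition; sample points chosen arbitrarily, avoiding $x=0$ where $f$ reduces to $0/0$):

```python
# f(x) and -g(x) agree at arbitrary sample points (x = 0 excluded: f is 0/0 there)
from math import sqrt

def f(x):
    r = sqrt(x * x + x + 1)
    return (-1 + r) / (-1 - 2 * x + r)

def g(x):
    r = sqrt(x * x + x + 1)
    return (x + r) / (x + 2 + r)

ok = all(abs(f(x) + g(x)) < 1e-9 for x in [0.5, 1.0, 2.0, -3.0, 10.0])
print(ok)  # True
```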

### Euler substitutions

Euler substitutions are used for solving integrals of the form

$$\int R(x, \sqrt{ax^2+bx+c}) dx \tag{1}$$ where $R$ is a rational two-argument function.

There is plenty of information about them on the web, so this post will be very short.

1) The first Euler substitution is defined by

$$\sqrt{ax^2+bx+c} = \sqrt{a} \cdot x + t \tag{2}$$

It is used when $a \gt 0$.

2) The second Euler substitution is defined by

$$\sqrt{ax^2+bx+c} = x \cdot t + \sqrt{c} \tag{3}$$

It is used when $c \gt 0$.

3) The third Euler substitution is used when the quadratic polynomial

$ax^2+bx+c$ has 2 distinct real roots $\alpha$ and $\beta$.

It is defined by the equality

$$\sqrt{a(x-\alpha)(x-\beta)} = t \cdot (x-\alpha) \tag{4}$$

The equality $(2), (3),$ or $(4)$ is then solved for $x$, and $x$ in $(1)$ is replaced with the resulting expression in $t$. This transforms the integral $(1)$ into an integral of a rational function of $t$.
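As an illustration, here is a SymPy sketch (my own addition) solving the first Euler substitution $(2)$ for $x$; squaring both sides makes the $x^2$ terms cancel, leaving a linear equation:

```python
# Solving (2) for x (SymPy sketch): square both sides, the x**2 terms cancel
import sympy as sp

x, t = sp.symbols('x t')
a, b, c = sp.symbols('a b c', positive=True)

eq = sp.Eq(a*x**2 + b*x + c, (sp.sqrt(a)*x + t)**2)
xs = sp.solve(eq, x)

expected = (t**2 - c) / (b - 2*sp.sqrt(a)*t)
match = len(xs) == 1 and sp.simplify(xs[0] - expected) == 0
print(xs[0], match)
```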

### Several indefinite integrals by non-trivial substitutions

This post comes to show several indefinite integrals which can be solved by non-trivial substitutions.

$$\int \frac{dx}{(a^2+x^2)^2} = \frac{1}{4a^3} \cdot \left(2 \cdot \arctan \frac{x}{a} + \sin \left(2 \cdot \arctan \frac{x}{a}\right)\right), \quad \text{where } a \gt 0 \tag{1}$$

$$\int \frac{x^3 dx}{(a^2+x^2)^3} = \frac{1}{4a^2} \cdot \sin^4\left(\arctan \frac{x}{a}\right), \quad \text{where } a \gt 0 \tag{2}$$

The above two integrals (1) and (2) can be solved by using the substitution $x = a \cdot \tan t$.

This solution uses the fact that the tangent function is a well-known monotonic bijection from $(-\frac{\pi}{2}, \frac{\pi}{2})$ to $\mathbb{R}$.

Below are two more integrals which are solved by a different substitution.

$$\int \sqrt{a^2+x^2} dx = \frac{a^2}{2} \cdot \left(\ln\left(\frac{x}{a} + \sqrt{\frac{x^2}{a^2} + 1}\right) + \frac{x}{a} \cdot \sqrt{\frac{x^2}{a^2} + 1}\right), \quad \text{where } a \gt 0 \tag{3}$$

$$\int \frac{dx} {(a^2+x^2)^\frac{3}{2}} = \frac{1}{a^2} \cdot \frac{x^2 + x \cdot \sqrt{x^2+a^2}}{x^2 + a^2 + x \cdot \sqrt{x^2+a^2}}, \quad \text{where } a \gt 0 \tag{4}$$

The integrals (3) and (4) can be solved by the substitution $x = a \cdot \sinh{t}$.
This solution uses the fact that the hyperbolic sine function is a well-known monotonic bijection from $\mathbb{R}$ to $\mathbb{R}$.
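Formula (4) looks unusual, but it collapses to the standard antiderivative $\frac{x}{a^2\sqrt{x^2+a^2}}$; here is a SymPy sketch (my own addition) confirming both that simplification and the derivative:

```python
# Formula (4) collapses to x / (a**2 * sqrt(x**2 + a**2)); checking that
# and checking that the derivative matches the integrand
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)
r = sp.sqrt(x**2 + a**2)

F = (x**2 + x*r) / (a**2 * (x**2 + a**2 + x*r))   # the RHS of (4)
compact = x / (a**2 * r)                           # the standard form

same_function = sp.simplify(F - compact) == 0
derivative_ok = sp.simplify(sp.diff(compact, x) - 1 / r**3) == 0
print(same_function, derivative_ok)  # True True
```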

### Solving the integrals $\int \sin^mx \cdot \cos^nx \cdot dx$

In this post we look at solving the integrals of the form

$$I(m,n) = \int \sin^mx \cdot \cos^nx \cdot dx \tag{1}$$

where $m,\ n$ are whole numbers (not necessarily positive).

This is usually done by applying some of the recurrence formulas listed below.

$$I(m,n) = \frac{1}{m+1} \sin^{m+1}x \cos^{n-1}x + \frac{n-1}{m+1} \cdot I(m+2, n-2), for\ \ m \ne -1 \tag{2}$$

$$I(m,n) = -\frac{1}{n+1} \sin^{m-1}x \cos^{n+1}x + \frac{m-1}{n+1} \cdot I(m-2, n+2), for\ \ n \ne -1 \tag{3}$$

$$I(m,n) = I(m-2,n) - I(m-2,n+2), for\ any \ m,n \tag{4}$$

$$I(m,n) = I(m,n-2) - I(m+2,n-2), for\ any \ m,n \tag{5}$$

Then from (2) and (5) we derive (2'), while from (3) and (4) we derive (3'). The formulas (2') and (3') are used to reduce the degree either of the sine or of the cosine. They turn out to be very useful when $m,n$ are positive.

$$I(m,n) = \frac{1}{m+n} \sin^{m+1}x\cos^{n-1}x + \frac{n-1}{m+n} \cdot I(m, n-2),\ for \ m \ne -1,\ m+n \ne 0 \tag{2'}$$

$$I(m,n) = -\frac{1}{m+n} \sin^{m-1}x\cos^{n+1}x + \frac{m-1}{m+n} \cdot I(m-2, n),\ for \ n \ne -1,\ m+n \ne 0 \tag{3'}$$

When both $m, n$ are non-negative, it is often useful to apply formula (6). Note though that (6) is valid for any values of $m,n$, not just for non-negative ones.

$$I(m,n) = I(m+2, n) + I(m, n+2) \tag{6}$$

The last two formulas (2'') and (3'') can be easily derived from (2') and (3'). They allow us to increase the degree of the sine or of the cosine. They are usually used when $m$ or $n$ is negative.

$$I(m,n) = -\frac{1}{n+1} \sin^{m+1}x\cos^{n+1}x + \frac{m+n+2}{n+1} \cdot I(m, n+2),\ for \ m \ne -1,\ n \ne -1 \tag{2''}$$

$$I(m,n) = \frac{1}{m+1} \sin^{m+1}x\cos^{n+1}x + \frac{m+n+2}{m+1} \cdot I(m+2, n),\ for \ m \ne -1,\ n \ne -1 \tag{3''}$$

It can be proved that, using these formulas, the calculation of the integral (1) always boils down to calculating one of the integrals given below. These are calculated directly (e.g. by integration by parts or by even simpler means).

$$\int \sin{x}\ dx = -\cos{x}$$

$$\int \cos{x}\ dx = \sin{x}$$

$$\int \frac{dx}{\sin{x}} = \ln \left|\tan{\frac{x}{2}}\right|$$

$$\int \frac{dx}{\cos{x}} = \ln |\tan {(\frac{x}{2}+\frac{\pi}{4})}|$$

$$\int \sin{x}\ \cos{x}\ dx = -\frac{1}{4}\cos{2x}$$

$$\int \frac{dx}{\sin{x}\cos{x}} = \ln |\tan {(x)}|$$

$$\int \frac{\sin{x}}{\cos{x}}\ dx = - \ln |\cos{x}|$$

$$\int \frac{\cos{x}}{\sin{x}}\ dx = \ln |\sin{x}|$$

$$\int \sin^2{x}\ dx = \frac{1}{2}x - \frac{1}{4}\sin{2x}$$

$$\int \cos^2{x}\ dx = \frac{1}{2}x + \frac{1}{4}\sin{2x}$$

$$\int \frac{dx}{\sin^2{x}} = -\cot{x}$$

$$\int \frac{dx}{\cos^2{x}} = \tan{x}$$

It should be noted that when $m$ is odd we can bring one of the $\sin{x}$ factors under the differential sign (using $\sin{x}\ dx = -d(\cos{x})$). Then by letting $t=\cos{x}$ and by using that $\sin^2{x} = 1 - \cos^2{x}$, we get an integral of a rational function of $t$.

Analogously, when $n$ is odd we can bring one of the $\cos{x}$ factors under the differential sign (using $\cos{x}\ dx = d(\sin{x})$). Then by letting $t=\sin{x}$ and by using that $\cos^2{x} = 1 - \sin^2{x}$, we get an integral of a rational function of $t$.
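Here is the odd-$m$ trick carried out for $m=3$, $n=2$ (a SymPy sketch; the example is my own choice):

```python
# m = 3 (odd), n = 2: substitute t = cos(x), dt = -sin(x) dx
import sympy as sp

x, t = sp.symbols('x t')

# the rational integral in t obtained after the substitution
manual = sp.integrate(-(1 - t**2) * t**2, t).subs(t, sp.cos(x))
direct = sp.integrate(sp.sin(x)**3 * sp.cos(x)**2, x)

# equal up to an additive constant
diffzero = sp.simplify(sp.diff(manual - direct, x))
print(diffzero)  # 0
```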

### Sequence defined through an arithmetic mean recurrence

There is this nice problem about sequences which I've encountered several times while solving problems in real analysis.

Two constant real numbers $a,b$ are given and then we have this sequence defined as:

$$a_0 = a$$

$$a_1 = b$$

$$a_{n+2} = \frac{a_{n+1} + a_n}{2},\ \ \ n \ge 0$$

Prove that the sequence converges and find the limit $L = \lim_{n \to \infty} a_n$.

I won't post the solution here but... it turns out the limit is this number

$$L = \frac{1}{3} \cdot a + \frac{2}{3} \cdot b$$

Here is a nice illustration of this fact generated by a Python program.

In this case ($a = 10$, $b = 100$, as depicted in the picture) the limit is $$\frac{1}{3} \cdot 10 + \frac{2}{3} \cdot 100 = \frac{1}{3} (10 + 200) = \frac{210}{3} = 70$$
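A minimal pure-Python sketch (my own reconstruction of such a program; parameter values as in the picture) which reproduces this numerically:

```python
# The averaging recurrence, run numerically (a = 10, b = 100 as in the picture)
def mean_sequence(a, b, steps):
    prev, cur = a, b
    for _ in range(steps):
        prev, cur = cur, (prev + cur) / 2
    return cur

a, b = 10, 100
limit = (a + 2 * b) / 3
print(mean_sequence(a, b, 60), limit)  # both are ~70
```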

### Real Analysis - Basic Integration Rules

Basic indefinite integrals:

$$\int x^a \ dx = \frac{x^{a+1}}{a+1} \ \ \ \ \ (a \ne -1) \tag{1}$$

$$\int \frac{1}{x}\ dx = \ln{\left|x\right|} \ \ \ \ \ \tag{2}$$

$$\int \sin{x}\ dx = -\cos{x} \ \ \ \ \ \tag{3}$$

$$\int \cos{x}\ dx = \sin{x} \ \ \ \ \ \tag{4}$$

$$\int \frac{1}{\cos^2{x}}\ dx = \tan{x} \tag{5}$$

$$\int \frac{1}{\sin^2{x}}\ dx = -\cot{x} \tag{6}$$

$$\int {\rm e}^x \ dx = {\rm e}^x \tag{7}$$

$$\int \frac{1}{1+x^2}\ dx = \arctan{x} \tag{8}$$

$$\int \frac{1}{\sqrt{1-x^2}}\ dx = \arcsin{x} \tag{9}$$

$$\int \frac{1}{\sqrt{x^2 + a}}\ dx = \ln {\left|x + \sqrt {x^2 + a}\right|} \ \ \ \ \ (a \ne 0) \tag{10}$$

Note that these formulas are valid for those values of $x$ for which the integrand function is defined. E.g. $(1)$, $(3)$, $(4)$ are valid for all $x$, $(2)$ is valid for non-zero values of $x$, $(5)$ is valid for values of $x$ such that $\cos{x} \ne 0$.

The following three integrals are quite important and frequently encountered. The formulas below can be derived via integration by parts.

$$\int \sqrt{a^2-x^2}\ dx = \frac{1}{2} ( x \sqrt {a^2-x^2} + a^2 \arcsin {\frac{x}{a}} ) \ \ \ \ \ \ (a \gt 0, \left|x\right| \lt a) \tag{1'}$$

$$\int \sqrt{x^2-a^2}\ dx = \frac{1}{2} ( x \sqrt {x^2-a^2} - a^2 \ln {\left|x + \sqrt {x^2-a^2}\right|} ) \ \ \ \ \ \ (a \gt 0, \left|x\right| \gt a) \tag{2'}$$

$$\int \sqrt{x^2+a^2}\ dx = \frac{1}{2} ( x \sqrt {x^2+a^2} + a^2 \ln {\left|x + \sqrt {x^2+a^2}\right|} ) \ \ \ \ \ \ (a \gt 0) \tag{3'}$$

Let us suppose we are given this integral which we want to calculate. This one is often met when trying to integrate rational functions.

$$I(a, n) = \int \frac{dx}{(a^2+x^2)^n}$$

For this integral, one can prove via integration by parts the following recurrence relation:

$$I(a, n) = \frac{1}{a^2}\cdot\frac{2n-3}{2n-2}\cdot I(a, n-1) + \frac{1}{(2n-2)a^2} \cdot \frac{x}{(a^2+x^2)^{n-1}}$$

Also, it is easy to see that:

$$I(a,1) = \frac{1}{a} \cdot \arctan{\frac{x}{a}}$$

The last two identities give us a procedure for calculating the integrals $I(a, n)$.
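The procedure can be coded directly from the two identities; here is a SymPy sketch (my own addition) which builds $I(a,3)$ recursively and checks it by differentiation:

```python
# Building I(a, n) from the recurrence and checking I(a, 3) by differentiation
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

def I(n):
    if n == 1:
        return sp.atan(x / a) / a
    return (sp.Rational(2*n - 3, 2*n - 2) / a**2 * I(n - 1)
            + x / ((2*n - 2) * a**2 * (a**2 + x**2)**(n - 1)))

ok = sp.simplify(sp.diff(I(3), x) - 1 / (a**2 + x**2)**3) == 0
print(ok)  # True
```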

### Mean value theorem proof illustrated

I was recently rereading the proof of the mean value theorem from real analysis.

This led me to the idea to generate some drawing which nicely illustrates the idea of the proof. Let's first restate the mean value theorem.

Theorem: If the function $f$ is defined and continuous in the closed interval $[a,b]$ and is differentiable in the open interval $(a,b)$, then there exists a point $\theta$ which is strictly between $a$ and $b$ i.e. $a \lt \theta \lt b$, such that

$$f'(\theta) = \frac{f(b)-f(a)}{b-a} \tag{1}$$

The proof constructs a function and I was wondering what the idea is behind that function. So I finally understood that and wanted to illustrate it here via some nice drawing. It took me some time to find a good looking function but OK... I finally picked this one:

$$f(x) = \sin(x) - \sin^2(x/2) + \cos^3(x/5) \tag{2}$$

Here is the drawing I generated.

The given function $f(x)$ is shown in red in the figure above.

The proof of the theorem is quite nice. The main role in it is played by the line which connects the two endpoints of the graph of $f(x)$, i.e. the points $(a,f(a))$ and $(b,f(b))$. This line is represented by the function

$$g(x) = f(a) + \frac{f(b)-f(a)}{b-a}(x-a) \tag{3}$$

The line is shown in green in the figure above.

The proof then goes on to construct the function $h(x) = f(x)-g(x)$ and for this one it can be easily seen that $h(a)=h(b)=0$. The proof then applies Rolle's theorem to the function $h$ to get the desired result.

So I was wondering what this function $h(x)$ looks like. It is shown in deep sky blue in the figure above. The interesting thing about this function $h(x)$ is that it measures (at each point $x$) the difference between $f(x)$ and $g(x)$.

In simple words this can be formulated in two different ways:
a) At any value of $x$, the point $(x,h(x))$ is as far from the X axis as $(x,f(x))$ is from $(x,g(x))$.
This follows from the fact that $h(x) = f(x) - g(x)$, which can be informally stated as blue = red - green.
b) At any value of $x$, the point $(x,f(x))$ is as far from $(x,h(x))$ as $(x,g(x))$ is from the X axis.
This follows from the fact that $f(x) - h(x) = g(x)$, which can be informally stated as red - blue = green.
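To make the proof idea tangible, here is a pure-Python sketch (my own addition; the interval $[0, 8]$ is an arbitrary choice, since the post does not fix one) which locates a $\theta$ numerically by finding a zero of $h'$:

```python
# Locating a theta from the proof numerically on the arbitrary interval [0, 8]
from math import sin, cos

def f(x):
    return sin(x) - sin(x / 2)**2 + cos(x / 5)**3

a, b = 0.0, 8.0
slope = (f(b) - f(a)) / (b - a)   # the secant slope from (1)

def h(x):
    # the auxiliary function from the proof: h(a) = h(b) = 0
    return f(x) - (f(a) + slope * (x - a))

def dh(x, eps=1e-6):
    # numeric derivative of h
    return (h(x + eps) - h(x - eps)) / (2 * eps)

# scan for a sign change of h' on a grid, then bisect it down
xs = [a + (b - a) * k / 1000 for k in range(1001)]
lo = next(u for u, v in zip(xs, xs[1:]) if dh(u) * dh(v) <= 0)
hi = lo + (b - a) / 1000
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if dh(lo) * dh(mid) <= 0 else (mid, hi)

theta = (lo + hi) / 2
fprime = (f(theta + 1e-6) - f(theta - 1e-6)) / 2e-6
print(theta, fprime, slope)   # f'(theta) matches the secant slope
```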

### The function $f(x) = x^x$

Let us look at this function

$$f(x) = x^x \tag{1}$$

and try to find its derivative for real values of $x$.

Before this we should mention that this function is well defined only when $x \gt 0$.

In other words... to avoid any complications we are looking at this function only for positive values of $x$.

I. How do we go about finding the derivative $f'(x) = (x^x)'$ ?

Well, we will use the following identity

$$a^x = e^{x \cdot \ln a} \tag{2}$$

which is well-known from high school math and which holds true for any $a \gt 0$ and any $x \in \mathbb{R}$.

Substituting $a = x$ in this identity we subsequently get

$$f(x) = x^x = e ^ {x \cdot \ln x}$$

$$f'(x) = e ^ {x \cdot \ln x} \cdot (x \cdot \ln x)'$$

$$f'(x) = e ^ {x \cdot \ln x} \cdot (1 \cdot \ln x + x \cdot \frac{1}{x})$$

$$f'(x) = e ^ {x \cdot \ln x} \cdot (\ln x + 1)$$

$$f'(x) = x ^ x \cdot (\ln x + 1) \tag{3}$$

The last equation $(3)$ gives us the derivative which we wanted to find.

In the above derivation we used several simple facts from math analysis.

$$(e^x)' = e^x$$

$$(f(g(x)))' = f'(g(x)) \cdot g'(x)$$

$$(u(x) \cdot v(x))' = u'(x) \cdot v(x) + u(x) \cdot v'(x)$$

Finally let us restate the above established formula.

$$\large { (x^x)' = x ^ x \cdot (\ln x + 1) } \tag{4}$$

II. How do we find $\displaystyle{ \lim_{x \to 0^{+}} f(x) } = \displaystyle{ \lim_{x \to 0^{+}} x^x }$ ?

I think the easiest way to find this limit is by letting $x=e^{-t}$, so that $x \to 0^{+}$ corresponds to $t \to +\infty$. Then we easily get the following.

$$\lim_{x \to 0^{+}} f(x) = \lim_{x \to 0^{+}} x^x = \lim_{x \to 0^{+}} e ^ {x \cdot \ln x} = \lim_{x \to 0^{+}} e ^ {(\ln x) \cdot x} =$$

$$= \lim_{t \to \infty} e^{(-t) \cdot e^{-t} } = \lim_{t \to \infty} e^{ \frac{(-t)}{e^{t}} } = e ^ {\lim_{t \to \infty} \frac{(-t)}{e^{t} } } = e^0 = 1$$

Here the main fact which we used was that

$$\lim_{t \to \infty} \frac{(-t)}{e^{t} } = 0$$

which is quite obvious given that the numerator is a polynomial of $t$ and the denominator is the exponential function $e^t$.

This way we have just calculated this quite remarkable limit

$$\large \lim_{x \to 0^{+}} x^x = 1 \tag{5}$$

Finally... here is a video which nicely demonstrates an informal numerical approach to finding the same limit: "What is 0 to the power of 0?"
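A tiny numeric illustration of the limit (my own addition):

```python
# x**x for x approaching 0 from the right: the values climb toward 1
samples = [0.1, 0.01, 0.001, 1e-6]
values = [x ** x for x in samples]
for x, y in zip(samples, values):
    print(x, y)
```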

### Euler's identity for $\frac{\sin(x)}{x}$

Here is this famous identity due to Euler.

$$\frac{\sin x}{x} = \prod_{k=1}^{\infty} \cos \left(\frac{x}{2^k} \right) \tag{1}$$

This holds true for every $x \ne 0$.

Let's prove it.

Denote

$$S(n) = \prod_{k=1}^{n} \cos \left(\frac{x}{2^k} \right) \tag{2}$$

Multiplying both sides by $\sin{\frac{x}{2^n}}$ and repeatedly applying the identity $\sin{u}\cos{u} = \frac{1}{2}\sin{2u}$, we sequentially get:

$$\sin{\frac{x}{2^n}} \cdot S(n) = \sin{\frac{x}{2^{n}}} \cdot \prod_{k=1}^{n} \cos \left(\frac{x}{2^k} \right)$$
$$\sin{\frac{x}{2^n}} \cdot S(n) = \frac{1}{2^1} \cdot \sin{\frac{x}{2^{n-1}}} \cdot \prod_{k=1}^{n-1} \cos \left(\frac{x}{2^k} \right)$$
$$\sin{\frac{x}{2^n}} \cdot S(n) = \frac{1}{2^2} \cdot \sin{\frac{x}{2^{n-2}}} \cdot \prod_{k=1}^{n-2} \cos \left(\frac{x}{2^k} \right)$$
$$...$$
$$\sin{\frac{x}{2^n}} \cdot S(n) = \frac{1}{2^{n-1}} \cdot \sin{\frac{x}{2^{1}}} \cdot \prod_{k=1}^{1} \cos \left(\frac{x}{2^k} \right)$$

The last one obviously gives us:

$$\sin{\frac{x}{2^n}} \cdot S(n) = \frac{1}{2^{n}} \cdot \sin(x) \tag{3}$$

which can be easily reworked to:

$$\large{ \frac{\sin{\frac{x}{2^n}}}{\frac{x}{2^n}} \cdot S(n) = \frac{\sin(x)}{x}} \tag{4}$$

Now taking the limit in $(4)$ as ${n\to\infty}$, while using that

$$\lim_{u \to 0} \frac{\sin(u)}{u} = 1$$

we get equality $(1)$, which is what we wanted to prove.
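A quick numeric illustration of the infinite product $(1)$ (my own addition; $x=2$ chosen arbitrarily):

```python
# Partial products of (1) at x = 2.0 converging to sin(x)/x
from math import sin, cos

x = 2.0
target = sin(x) / x
product = 1.0
for k in range(1, 31):
    product *= cos(x / 2 ** k)
print(product, target)  # the two agree to many decimals
```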

### Trigonometric identities

1) Even/Odd function identities

$$\sin(-\alpha) = -\sin\alpha \tag{1.1}$$
$$\cos(-\alpha) = \cos\alpha \tag{1.2}$$
$$\tan(-\alpha) = -\tan\alpha \tag{1.3}$$
$$\cot(-\alpha) = -\cot\alpha \tag{1.4}$$

2) Angle sum and difference identities

$$\sin(\alpha + \beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta \tag{2.1}$$
$$\sin(\alpha - \beta) = \sin\alpha\cos\beta - \cos\alpha\sin\beta \tag{2.2}$$
$$\cos(\alpha + \beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta \tag{2.3}$$
$$\cos(\alpha - \beta) = \cos\alpha\cos\beta + \sin\alpha\sin\beta \tag{2.4}$$

$$\tan(\alpha + \beta) = \frac{\tan\alpha+\tan\beta}{1-\tan\alpha\tan\beta}\tag{2.5}$$

$$\cot(\alpha + \beta) = \frac{\cot\alpha\cot\beta-1}{\cot\alpha+\cot\beta}\tag{2.6}$$

3) Sum to product identities

$$\sin\alpha + \sin\beta = 2 \sin\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2} \tag{3.1}$$
$$\sin\alpha - \sin\beta = 2 \sin\frac{\alpha-\beta}{2}\cos\frac{\alpha+\beta}{2} \tag{3.2}$$

$$\cos\alpha + \cos\beta = 2 \cos\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2} \tag{3.3}$$

$$\cos\alpha - \cos\beta = -2 \sin\frac{\alpha+\beta}{2}\sin\frac{\alpha-\beta}{2} \tag{3.4}$$

$$\tan\alpha \pm \tan\beta = \frac{\sin{(\alpha\pm\beta)}}{\cos\alpha\cos\beta} \tag{3.5}$$
$$\cot\alpha \pm \cot\beta = \frac{\sin{(\beta\pm\alpha)}}{\sin\alpha\sin\beta} \tag{3.6}$$

4) Product to sum identities

$$\sin\alpha\sin\beta = \frac{1}{2}[\cos(\alpha-\beta) - \cos(\alpha+\beta)] \tag{4.1}$$

$$\cos\alpha\cos\beta = \frac{1}{2}[\cos(\alpha-\beta) + \cos(\alpha+\beta)] \tag{4.2}$$

$$\sin\alpha\cos\beta = \frac{1}{2}[\sin(\alpha+\beta) + \sin(\alpha-\beta)] \tag{4.3}$$

5) Double-angle and triple-angle identities

$$\sin 2 \alpha = 2 \sin\alpha \cos\alpha \tag{5.1}$$

$$\cos 2 \alpha = \cos^2\alpha - \sin^2\alpha \tag{5.2}$$

$$\tan 2 \alpha = \frac{2\tan\alpha}{1-\tan^2\alpha} = \frac{2}{\cot\alpha - \tan\alpha} \tag{5.3}$$

$$\cot 2 \alpha = \frac{\cot^2\alpha-1}{2\cot\alpha} = \frac{\cot\alpha - \tan\alpha}{2} \tag{5.4}$$

$$\sin 3 \alpha = 3 \sin\alpha - 4\sin^3 \alpha \tag{5.5}$$

$$\cos 3 \alpha = 4\cos^3 \alpha - 3 \cos\alpha \tag{5.6}$$

$$\tan 3 \alpha = \frac{3\tan \alpha - \tan^3 \alpha}{1 - 3 \tan^2 \alpha} \tag{5.7}$$

$$\cot 3 \alpha = \frac{\cot^3 \alpha - 3 \cot \alpha}{3\cot^2 \alpha - 1} \tag{5.8}$$

6) Decreasing the power of sine and cosine

$$2 \sin^2 \alpha = 1 - \cos 2\alpha \tag{6.1}$$

$$4 \sin^3 \alpha = 3 \sin \alpha - \sin 3\alpha \tag{6.2}$$

$$8 \sin^4 \alpha = \cos 4\alpha - 4 \cos 2\alpha + 3 \tag{6.3}$$

$$2 \cos^2 \alpha = 1 + \cos 2\alpha \tag{6.4}$$

$$4 \cos^3 \alpha = 3 \cos \alpha + \cos 3\alpha \tag{6.5}$$

$$8 \cos^4 \alpha = \cos 4\alpha + 4 \cos 2\alpha + 3 \tag{6.6}$$
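A quick numeric spot-check of a few of these identities (my own addition; the angles are arbitrary):

```python
# Spot-checking identities 2.1, 2.5, 5.2, 5.5 and 6.3 at arbitrary angles
from math import sin, cos, tan, isclose

a, b = 0.7, 0.3
checks = [
    isclose(sin(a + b), sin(a)*cos(b) + cos(a)*sin(b)),
    isclose(tan(a + b), (tan(a) + tan(b)) / (1 - tan(a)*tan(b))),
    isclose(cos(2*a), cos(a)**2 - sin(a)**2),
    isclose(sin(3*a), 3*sin(a) - 4*sin(a)**3),
    isclose(8*sin(a)**4, cos(4*a) - 4*cos(2*a) + 3),
]
print(all(checks))  # True
```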

### Fun problem on Ceva's theorem

Problem:

Given is $\Delta ABC$. The points $D \in BC$, $E \in CA$, $F \in AB$ are such that the lines $AD,BE,CF$ are concurrent.

$A'$ - midpoint of $BC$
$B'$ - midpoint of $CA$
$C'$ - midpoint of $AB$

$D'$ - midpoint of $AD$
$E'$ - midpoint of $BE$
$F'$ - midpoint of $CF$

Prove that the lines

$A'D', B'E', C'F'$ are also concurrent (i.e. that they pass through a common point).

Solution:

It is crucial to come up with a nice realistic drawing here.

Also, it is important to realize that:
a) $A', F', B'$ are on one line
b) $B', D', C'$ are on one line
c) $C', E', A'$ are on one line

From Ceva's theorem for triangle $ABC$ we get:
$$\frac{AF}{FB}\frac{BD}{DC}\frac{CE}{EA} = 1 \tag{1}$$

Now the trick is to realize that:
$$\frac{B'F'}{F'A'} = \frac{AF}{FB} \tag{2}$$
$$\frac{C'D'}{D'B'} = \frac{BD}{DC} \tag{3}$$
$$\frac{A'E'}{E'C'} = \frac{CE}{EA} \tag{4}$$

Why is this so?

Because $B'C' \parallel BC$, $C'A' \parallel CA$ and $A'B' \parallel AB$. So these relations follow from the Intercept theorem.

Multiplying the last 3 equations and using $(1)$ we get:

$$\frac{B'F'}{F'A'}\frac{C'D'}{D'B'}\frac{A'E'}{E'C'} = \frac{AF}{FB}\frac{BD}{DC}\frac{CE}{EA} = 1$$

Thus:

$$\frac{B'F'}{F'A'}\frac{A'E'}{E'C'}\frac{C'D'}{D'B'} = 1 \tag{5}$$

Now using the converse of Ceva's theorem (for the triangle $A'B'C'$ and the points $D', E', F'$), we can conclude from $(5)$ that the three lines $A'D', B'E', C'F'$ pass through a common point. This is what we had to prove, so the problem is solved.
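A coordinate spot-check of the statement (my own addition; the triangle and the cevian point are arbitrary choices):

```python
# Build the cevians from an arbitrary interior point P,
# then check that A'D', B'E', C'F' meet at one point
def line_intersect(p1, p2, p3, p4):
    # intersection of line p1p2 with line p3p4
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - b * (x1 - x2)) / d,
            (a * (y3 - y4) - b * (y1 - y2)) / d)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
P = (1.5, 0.8)                       # the common point of the cevians

D = line_intersect(A, P, B, C)       # feet of the cevians
E = line_intersect(B, P, C, A)
F = line_intersect(C, P, A, B)

A1, B1, C1 = midpoint(B, C), midpoint(C, A), midpoint(A, B)
D1, E1, F1 = midpoint(A, D), midpoint(B, E), midpoint(C, F)

Q = line_intersect(A1, D1, B1, E1)   # meet of the first two lines
# Q must lie on line C'F' as well: the cross product below should vanish
cross = (F1[0] - C1[0]) * (Q[1] - C1[1]) - (F1[1] - C1[1]) * (Q[0] - C1[0])
print(abs(cross))  # ~0
```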

### Klein bottle mystery

In the book "Basic Topology" by M. A. Armstrong I found an explanation of how to construct a Klein bottle. I had to reread it 5 times and I was still not quite convinced. I am retelling it here.

Begin with a sphere, remove two discs from it, and sew a Möbius strip into each of the two holes. A Möbius strip has, after all, a single circle as its boundary, and all that we are asking is that the points of its boundary circle be identified with those of the boundary circle of a hole in the sphere. One must imagine this identification taking place in some space where there is plenty of room (euclidean four-dimensional space will do). This cannot be realized in three dimensions without having each Möbius strip intersect itself. The resulting closed surface is called the Klein bottle.

I was scratching my head around how this procedure actually produces a Klein bottle until I found this question in MathSE.

Klein-bottle-as-two-Möbius-strips

The picture in one of the answers is really nice: it shows what happens if we cut a Klein bottle in half - we genuinely get two Möbius strips as a result. The cut is done by a plane "parallel to the handle" which splits the bottle into two symmetric parts.

So... it's really for a reason that they say "a picture is worth a thousand words".

### How to solve a quadratic equation?

This post comes to demonstrate the support for $\LaTeX$ available in Blogger.

A quadratic equation is an equation of the form

$$ax^2 + bx + c = 0 \tag{1}$$

where $a,b,c \in \mathbb{R}$ and $a \ne 0$.

Let us assume that we are trying to solve this equation for real numbers only.

The value

$$D = b^2 - 4ac \tag{2}$$

is called the discriminant of the equation $(1)$.

Case #1: When $D \gt 0$, there are two distinct real solutions to $(1)$:

$$x_{1,2} = {-b \pm \sqrt{b^2-4ac} \over 2a} \tag{3}$$

Case #2: When $D = 0$, there is exactly one real solution to $(1)$:

$$x = {-b \over 2a} \tag{4}$$

Case #3: When $D \lt 0$, there are no real solutions to $(1)$.
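The three cases translate directly into code; here is a pure-Python sketch (the function name is mine):

```python
# A direct transcription of the three discriminant cases
from math import sqrt

def solve_quadratic(a, b, c):
    # returns the list of real roots of a*x**2 + b*x + c = 0, assuming a != 0
    D = b * b - 4 * a * c
    if D > 0:
        return [(-b + sqrt(D)) / (2 * a), (-b - sqrt(D)) / (2 * a)]
    if D == 0:
        return [-b / (2 * a)]
    return []

print(solve_quadratic(1, -3, 2))   # [2.0, 1.0]
print(solve_quadratic(1, 2, 1))    # [-1.0]
print(solve_quadratic(1, 0, 1))    # []
```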

### How to solve a linear equation?

This post comes to demonstrate the support for $\LaTeX$ available in Blogger.

A linear equation is an equation of the form

$$ax + b = 0 \tag{1}$$

where

$$a,b \in \mathbb{R}, a \ne 0$$

Let us assume that we are trying to solve this equation for real numbers only i.e. we are trying to find the real roots of the equation.

Solving this equation is simple, it always has a single real solution which is

$$x = - {b \over a} \tag{2}$$

### Calculate $\lim_{x \to \infty} \sqrt[n]{(x+a_1)(x+a_2) ... (x+a_n)} - x$

This is a problem which I found in a math textbook some time ago. I tried solving it but I did not manage to solve it as quickly as I wanted and so I lost patience. Therefore I posted it in MathSE. I liked quite a lot one of the solutions I got, so I wrote it on a sheet of paper and kept it.

Here is the full story...

Problem:

Find $\lim_{x \to \infty} \sqrt[n]{(x+a_1)(x+a_2) ... (x+a_n)} - x$,

where $x$ is a real variable and $n$ is a fixed natural number $n \ge 1$.

Solution:

Let
$f(x)=\sqrt[n]{(x+a_1)(x+a_2)\cdots (x+a_n)}$

We now calculate:

$$\lim_{x\to \infty} \frac{f(x)}{x}=\lim_{x\to \infty} \sqrt[n]{(1+\frac{a_1}{x})(1+\frac{a_2}{x}) \cdots (1+\frac{a_n}{x})}= \sqrt[n]{1} = 1 \tag{1}$$

We now form the below difference and we rework it a bit:

$$f(x)-x=\frac{f(x)^n-x^n}{\sum_{k=0}^{n-1} f(x)^{n-1-k} x^{k}}= \frac{f(x)^n-x^n}{f(x)^{n-1}+f(x)^{n-2}x+\dots+f(x)x^{n-2}+x^{n-1}}$$

We note here that the numerator $P(x) = f(x)^n-x^n$ is a polynomial of degree $n-1$ and it has a leading coefficient of $c = a_1+a_2+\dots+a_n$. The denominator is composed of the sum of $n$ terms of the form $f(x)^{n-1-k}x^k$ (for $k=0,1, \dots, n-1$).

So we divide the numerator and the denominator by $x^{n-1}$ and we get:

\begin{align*}
\lim_{x\to \infty}f(x)-x & = \lim_{x\to \infty} \frac {\frac{f(x)^n-x^n}{x^{n-1}}}{\frac{f(x)^{n-1}}{x^{n-1}}+\frac{f(x)^{n-2}}{x^{n-2}}+\dots+\frac{f(x)}{x}+1} \\[10pt]
&= \lim_{x\to \infty} \frac {\frac{P(x)}{x^{n-1}}}{\frac{f(x)^{n-1}}{x^{n-1}}+\frac{f(x)^{n-2}}{x^{n-2}}+\dots+\frac{f(x)}{x}+1} \\[10pt]
&= \frac{c}{\underbrace{1 + 1 + \dots + 1}_{n\text{ times}}} \\[10pt]
&= \frac{a_1+a_2+\dots+a_n}{n}\end{align*}

The cool thing here is that the limit turns out to be the arithmetic mean of the numbers $a_1, a_2, \dots, a_n$.
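A quick numeric check of this mean-value behaviour (my own addition; the numbers $a_i$ are arbitrary):

```python
# f(x) - x for growing x, against the mean of the a_i
a_list = [1.0, 4.0, 7.0, 10.0]
n = len(a_list)
mean = sum(a_list) / n   # 5.5

def diff_at(x):
    prod = 1.0
    for ai in a_list:
        prod *= x + ai
    return prod ** (1.0 / n) - x

for x in [1e3, 1e6, 1e9]:
    print(x, diff_at(x), mean)
```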

### Derivative of $f(x)=x^n$

What is the derivative of $$f(x)=x^n$$ when $n$ is a fixed integer and $x$ is a real variable?

As we all know this derivative is

$$f'(x)=nx^{n-1}$$

But for which values of $n$ and $x$ does this hold true?

Well...

1) this is true for integers $n \gt 0$
2) this is also true for integers $n \lt 0$ if $x \ne 0$

Note that when $x=0$ the symbols $x^0, x^{-1}, x^{-2}, x^{-3}, \dots$ are usually treated as undefined. That is why for 2) we need the additional restriction that $x \ne 0$.
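A small numeric sanity check for a negative exponent (my own addition; the values of $n$ and $x$ are arbitrary):

```python
# Central-difference check of (x**n)' = n * x**(n - 1) for n = -3
n, x0, eps = -3, 1.7, 1e-6
numeric = ((x0 + eps) ** n - (x0 - eps) ** n) / (2 * eps)
exact = n * x0 ** (n - 1)
print(numeric, exact)
```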