Talk:Half-exponential function

{{WikiProject banner shell|class=Stub|
{{WikiProject Computer science|importance=Low}}
}}

Proved only for "positive" combinations?

"Not expressible in terms of elementary functions", really? Following the link (to Scott) I see much weaker statement, and a question: "And of course, how do we handle subtraction and division?" Or is this question answered already? Boris Tsirelson (talk) 19:58, 22 May 2014 (UTC)

: I agree with you - Scott's proof definitely can't handle subtraction and division - so I've changed the wording of the sentence and removed the "factual inaccuracy" template. David9550 (talk) 00:17, 11 April 2016 (UTC)

:: After a bit more searching, I found a MathOverflow post that points to some references that claim to do the full case. So I've added that in. David9550 (talk) 00:29, 11 April 2016 (UTC)

The article says...

It has been proven that for every function ƒ composed of basic arithmetic operations, exponentials, and logarithms, ƒ(ƒ(x)) is either subexponential or superexponential:[3] half-exponential functions are not expressible in terms of elementary functions.

Here is a stronger statement; I would like to know whether it is true:

The smallest set of functions that contains all the elementary functions and is closed under indefinite integration still does not contain the half-exponential functions.

In other words, if y is a half-exponential function, then no function in the sequence y, y′, y″, … (each function after the first being the derivative of the preceding one) is elementary. Georgia guy (talk) 20:25, 22 May 2014 (UTC)

:That could be interesting, but for now the statement in the article seems to be too strong, not too weak, as compared to what is proved. Boris Tsirelson (talk) 04:58, 23 May 2014 (UTC)

Aspiration for smoothness

One would like to have a solution f of f(f(x))=exp(x) which is not merely continuous and strictly increasing, but as smooth as possible. One might hope for C^∞, but even that would still leave the possibility of undulations in the rate of increase. Analyticity would be desirable, but the sources suggest that it is not possible. If there were a place where the function was especially smooth, it could be used to extend the definition of such a smooth function to the whole real line using the facts:

: \exp (f (\ln (x))) = f (f (f (\ln (x)))) = f (\exp (\ln (x))) = f (x), \mbox{ if } x \in (0,\infty)

and

: \ln (f (\exp (x))) = \ln (f (f ( f(x)))) = \ln (\exp (f (x))) = f (x) \,.
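
These identities can be made concrete. Here is a minimal numerical sketch (mine, not from the article or its sources): writing A for f(0), pick an arbitrary increasing seed for f on [0, A] with f(0)=A and f(A)=1; the relation f(x) = exp(f^{-1}(x)) then determines f on [A, 1], and the two identities above propagate it to the rest of the real line. The choices A = 0.5 and a linear seed are hypothetical, and the kinks they introduce are exactly the failure of smoothness at issue.

<syntaxhighlight lang="python">
import math

A = 0.5  # hypothetical choice of f(0); the bounds derived below put A near 1/2

def g(x):
    # arbitrary increasing seed on [0, A] with g(0) = A and g(A) = 1;
    # linear here, which is why the resulting f is merely continuous
    return A + (1.0 - A) * x / A

def g_inv(y):
    # inverse of the seed, defined on [A, 1]
    return A * (y - A) / (1.0 - A)

def f(x):
    if x < 0.0:
        # downward identity: f(x) = ln(f(exp(x)))
        return math.log(f(math.exp(x)))
    if x <= A:
        return g(x)
    if x <= 1.0:
        # the seed forces f on [A, 1]: f(x) = exp(g^{-1}(x))
        return math.exp(g_inv(x))
    # upward identity: f(x) = exp(f(ln(x))); each step strips one
    # exponential, so the recursion reaches [0, 1] and terminates
    return math.exp(f(math.log(x)))

# sanity check: f(f(x)) should reproduce exp(x)
for x in (-2.0, 0.0, 0.5, 1.5, 3.0):
    print(f"{x:5.2f}  {f(f(x)):12.6f}  {math.exp(x):12.6f}")
</syntaxhighlight>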

Define A=f(0). Since f(f(ln(A))) = exp(ln(A)) = A = f(0) and f is injective, f(ln(A)) = 0. As x approaches -∞, exp(x) = f(f(x)) approaches 0, so f(x) approaches its horizontal asymptote ln(A)<0. So we might hope that f will take an especially simple form in this limit. Let us take the derivative, using the chain rule, as x approaches -∞:

: \exp (x) = \exp^{\prime} (x) = f^{\prime} (f (x)) \cdot f^{\prime} (x) \approx f^{\prime} (\ln (A)) \cdot f^{\prime} (x) \,.

So for some constant c1 (namely c1 = 1/f′(ln(A))),

: f^{\prime} (x) \approx c_1 \exp (x) \,.

Integrating from -∞ to x gives

: f (x) \approx \ln (A) + c_1 \exp (x) \,.

If we go to the second derivative, we get

: \exp (x) = \exp^{\prime \prime} (x) = f^{\prime \prime} (f (x)) \cdot (f^{\prime} (x))^2 + f^{\prime} (f (x)) \cdot f^{\prime \prime} (x) \approx f^{\prime \prime} (\ln (A)) \cdot (c_1 \exp (x))^2 + f^{\prime} (\ln (A)) \cdot f^{\prime \prime} (x)\,.

So for some constants c1 and c2,

: f^{\prime \prime} (x) \approx c_1 \exp (x) + 4 c_2 \exp (2x) \,.

Integrating from -∞ to x gives

: f^{\prime} (x) \approx c_1 \exp (x) + 2 c_2 \exp (2x) \,.

Integrating from -∞ to x gives

: f (x) \approx \ln (A) + c_1 \exp (x) + c_2 \exp (2x) \,.
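
As a quick machine check of the two integrations (a trivial sympy sketch; c1, c2, A are just positive symbols here):

<syntaxhighlight lang="python">
import sympy as sp

x, t = sp.symbols('x t')
c1, c2, A = sp.symbols('c_1 c_2 A', positive=True)

# integrate the approximate second derivative twice from -oo to x
fpp = c1 * sp.exp(t) + 4 * c2 * sp.exp(2 * t)
fp = sp.integrate(fpp, (t, -sp.oo, x))
f = sp.log(A) + sp.integrate(fp.subs(x, t), (t, -sp.oo, x))
print(fp)  # c_1*exp(x) + 2*c_2*exp(2*x)
print(f)   # log(A) + c_1*exp(x) + c_2*exp(2*x)
</syntaxhighlight>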

The pattern suggests that

: f (x) = \sum_{n=0}^{\infty} c_n \exp (n x) \,

where c0=ln(A). In other words, we can hope that there is a function h analytic in a neighborhood of 0 such that

: f (x) = h (\exp (x))

since exp(x) approaches 0 as x approaches -∞. Notice that

: h (x) = f (\ln (x)) = f^{-1} (x)

(because f = f^{-1} \circ \exp, so f \circ \ln = f^{-1}); consequently h cannot be entire, because f^{-1}(ln(A)) is undefined: ln(A) is not in the range of f. JRSpriggs (talk) 00:32, 26 May 2014 (UTC)

Since h = f^{-1} is undefined at ln(A) < 0 and below, the radius of convergence of

: h (x) = \sum_{n=0}^{\infty} c_n x^n \,

about 0 is at most -ln(A). The maximum value of A consistent with A itself lying within that radius is Ω, the Omega constant, which satisfies Ω=-ln(Ω). JRSpriggs (talk) 01:26, 27 May 2014 (UTC)
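
For reference, Ω is easy to compute numerically; a minimal check that Ω = -ln(Ω):

<syntaxhighlight lang="python">
import math

# the Omega constant is the fixed point of x = exp(-x), equivalently
# x = -ln(x); fixed-point iteration converges since |d/dx exp(-x)| < 1 there
x = 1.0
for _ in range(200):
    x = math.exp(-x)
print(x, -math.log(x))  # both ~0.5671432904097838
</syntaxhighlight>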

For a while, I explored the possibility that A=Ω=0.5671... . Then I realized that that value was too large to be consistent with my belief that the appropriate f should be C^∞ and that all its derivatives should be positive on the whole real line. According to the mean value theorem, there should be a point in the interval (ln(A),0) where the first derivative is

:\frac{f(0) - f(\ln(A))}{0 - \ln(A)} = \frac{A - 0}{0 - \ln(A)}

and there should be a point in the interval (0,A) where the first derivative is

:\frac{f(A) - f(0)}{A - 0} = \frac{1 - A}{A - 0} \,.

If we require the first of these to be less than the second and cross-multiply, we get

: A^2 < -\ln(A) \cdot (1 - A) \,.

If A were Ω, then, since -ln(Ω) = Ω, this would become

: \Omega^2 < \Omega \cdot (1 - \Omega)

which would imply

: \Omega < \frac12 \,,

a clear falsehood, since Ω = 0.567... > 1/2. JRSpriggs (talk) 04:46, 29 May 2014 (UTC)
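
A one-line numerical restatement of that contradiction, evaluating both sides of the mean-value inequality at A = Ω:

<syntaxhighlight lang="python">
import math

A = 0.5671432904097838           # Omega
lhs = A ** 2                     # ~0.3217
rhs = -math.log(A) * (1.0 - A)   # = A*(1 - A) ~0.2455, since -ln(A) = A
print(lhs < rhs)                 # False: A = Omega is ruled out
</syntaxhighlight>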

From

: \exp (x) = f (f (x))

we get

: \exp (x) = \exp^{\prime} (x) = f^{\prime} (f (x)) \cdot f^{\prime} (x)

by the chain rule. If we then substitute f^{-1}(x) in place of x, we get

: f (x) = \exp (f^{-1} (x)) = f^{\prime} (f (f^{-1} (x))) \cdot f^{\prime} (f^{-1} (x)) = f^{\prime} (x) \cdot f^{\prime} (f^{-1} (x))

from which it follows that

: f^{\prime} (x) = \frac{ f (x) }{ f^{\prime} (f^{-1} (x)) } \,.

This will allow us to calculate the derivative of f at larger arguments, if we know it at smaller arguments. By analogy with

: \exp (x) = 1 + \int_{0}^{x} \exp (t) d t

we could calculate a series expansion for f near x, if we know a series for its derivative at f^{-1}(x).

We begin with

: f^{\prime} (x) = c_1 \exp (x) + 2 c_2 (\exp (x))^2 + 3 c_3 (\exp (x))^3 + \ldots

for

: x \in (-\infty, \ln (- \ln (A))) \supset (-\infty, \ln (A)] \,.

Applying our formula and using analytic continuation, we get

: f^{\prime} (x) = \frac{ 1 }{ c_1 + 2 c_2 f (x) + 3 c_3 (f (x))^2 + \ldots }

for

: x \in (-\infty, f^{-1} (- \ln (A))) \supset (-\infty, 0] \,.

Applying it again

: f^{\prime} (x) = f (x) \cdot ( c_1 + 2 c_2 x + 3 c_3 x^2 + \ldots )

for

: x \in (\ln (A), - \ln (A)) \supset (\ln (A), A] \,.

Applying it yet again

: f^{\prime} (x) = \frac{ f (x) }{ x \cdot ( c_1 + 2 c_2 f^{-1} (x) + 3 c_3 (f^{-1} (x))^2 + \ldots ) }

for

: x \in (0, f^{-1} (A^{-1})) \supset (0, 1] \,.

JRSpriggs (talk) 06:28, 11 June 2014 (UTC)

For the second derivative, the recursion rule is

: f^{\prime\prime} (x) = \frac{ f (x) }{ ( f^{\prime} (f^{-1} (x)) )^2 } - \frac{ f (x) \cdot f^{\prime\prime} (f^{-1} (x)) }{ ( f^{\prime} (f^{-1} (x)) )^3 } \,.

Here is a table of values:

: \begin{array}{c|c|c|c}
x & f(x) & f^{\prime}(x) & f^{\prime\prime}(x) \\
\hline
-\infty & \ln(A) & 0 & 0 \\
\ln(A) & 0 & c_1^{-1} & -2c_2c_1^{-3} \\
0 & A & c_1A & c_1^2A + 2c_2A \\
A & 1 & [c_1A]^{-1} & [c_1A]^{-2} - c_1^{-1}A^{-2} - 2c_1^{-3}c_2A^{-2} \\
1 & e^A & c_1Ae^A & ? \\
e^A & e & e[c_1Ae^A]^{-1} & ? \\
\end{array}

JRSpriggs (talk) 11:04, 11 June 2014 (UTC)
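
The f′ column can be checked mechanically: f maps 0 → A → 1 → e^A → e, so starting from f′(0) = c1·A, the recursion f′(x) = f(x)/f′(f^{-1}(x)) yields each successive entry. A small sympy sketch:

<syntaxhighlight lang="python">
import sympy as sp

A, c1 = sp.symbols('A c_1', positive=True)

# f's orbit of 0 is 0 -> A -> 1 -> exp(A) -> e; by the recursion
# f'(x) = f(x) / f'(f^{-1}(x)), the f-value at each point divided by
# the derivative at the previous point gives the next derivative
f_values = [1, sp.exp(A), sp.E]   # f(A), f(1), f(exp(A))
fprime = c1 * A                   # f'(0), read off from the series
for fx in f_values:
    fprime = sp.simplify(fx / fprime)
    print(fprime)
# 1/(A*c_1), then A*c_1*exp(A), then E*exp(-A)/(A*c_1): matching the table
</syntaxhighlight>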

We need constraints on the possible values of the parameters A, c0, c1, c2, c3, ... :

: f (-\infty) = \ln (A) = c_0

: f (\ln (A)) = 0 = c_0 + c_1 A + c_2 A^2 + c_3 A^3 + \ldots

: f^{\prime} (\ln (A)) = \frac{1}{c_1} = c_1 A + 2 c_2 A^2 + 3 c_3 A^3 + \ldots

: \infty = \lim_{x \searrow \ln (A)} ( c_1 + 2 c_2 x + 3 c_3 x^2 + \ldots )

: f^{\prime\prime} (\ln (A)) = -2c_2c_1^{-3} = c_1 A + 4 c_2 A^2 + 9 c_3 A^3 + \ldots

Assuming that the derivative of f is strictly increasing, we get

: 0 < \frac{1}{ c_1 } < \frac{ A - 0 }{ 0 - \ln(A) } < c_1 A < \frac { 1 - A }{ A - 0 } < \frac{1}{ c_1 A } < \frac{ e^A - 1 }{ 1 - A } < c_1 A e^A < \frac{ e - e^A }{ e^A - 1 } < \frac{ e }{ c_1 A e^A } < \ldots \,.

Assuming that the second derivative of f is strictly increasing, we get

: 0 < \frac{ -2 c_2 }{ c_1^3 } < \frac{ c_1 A - c_1^{-1} }{ 0 - \ln (A) } < c_1^2A + 2c_2A < \frac{ [c_1 A]^{-1} - c_1 A }{ A - 0 } < [c_1A]^{-2} - c_1^{-1}A^{-2} - 2c_1^{-3}c_2A^{-2} < \frac{ c_1 A e^A - [c_1 A]^{-1} }{ 1 - A } < \ldots \,.

And so forth. JRSpriggs (talk) 04:19, 12 June 2014 (UTC)

So far, it appears that

: 0.46 < A < 0.53

: A = \frac12 \Rightarrow 3.0203 < c_1^2 < 3.1112 \,.

JRSpriggs (talk) 07:15, 14 June 2014 (UTC)
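
Those brackets can be reproduced coarsely by brute force. The scan below (my own quick check, not JRSpriggs's method) tests only the first-derivative chain from 12 June, so its bracket is looser than the figures above, which also use the second-derivative chain:

<syntaxhighlight lang="python">
import math

def chain_ok(A, c1):
    # the first-derivative inequality chain, term by term
    eA = math.exp(A)
    terms = [
        0.0,
        1.0 / c1,
        A / (-math.log(A)),
        c1 * A,
        (1.0 - A) / A,
        1.0 / (c1 * A),
        (eA - 1.0) / (1.0 - A),
        c1 * A * eA,
        (math.e - eA) / (eA - 1.0),
        math.e / (c1 * A * eA),
    ]
    return all(s < t for s, t in zip(terms, terms[1:]))

# scan A in (0.3, 0.7) and c1 in (1, 3); keep A values with a viable c1
feasible = [a / 1000.0 for a in range(301, 700)
            if any(chain_ok(a / 1000.0, c / 100.0) for c in range(101, 300))]
print(min(feasible), max(feasible))  # a loose bracket around A
</syntaxhighlight>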

Link to Tetration?

I think this article should include some mention of tetration. While the article defines f(x) piecewise, it would make more sense to define it using tetration.

Let's focus on this for example:

: f(f(x)) = b^x

This can be rewritten using tetration:

: f(x) = ^{0.5+slog_b(x)}b

Here's a proof to show that you indeed get b^x as a result:

: f(f(x)) = ^{0.5+slog_b(^{0.5+slog_b(x)}b)}b

: f(f(x)) = ^{0.5+0.5+slog_b(x)}b

: f(f(x)) = ^{1+slog_b(x)}b

: f(f(x)) = b^{^{slog_b(x)}b}

: f(f(x)) = b^x
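
This formula is easy to experiment with numerically once an explicit slog/tet pair is plugged in. A minimal sketch (mine; the piecewise-linear approximation of fractional heights is an assumption, and smoother constructions such as Kneser's exist), in which tet and slog are built as exact inverses, so f(f(x)) = b^x holds exactly:

<syntaxhighlight lang="python">
import math

B = 2.0  # hypothetical base b

def slog(x):
    # super-logarithm slog_B, linear on the strip (0, 1]
    if x <= 0.0:
        return slog(B ** x) - 1.0
    if x > 1.0:
        return slog(math.log(x, B)) + 1.0
    return x - 1.0

def tet(y):
    # fractional tetration ^y B, the exact inverse of slog
    if y <= -1.0:
        return math.log(tet(y + 1.0), B)
    if y > 0.0:
        return B ** tet(y - 1.0)
    return y + 1.0

def half_exp(x):
    # f(x) = ^(0.5 + slog_B(x)) B, the formula proposed above
    return tet(0.5 + slog(x))

# check that f(f(x)) reproduces B**x
for x in (-1.0, 0.0, 0.7, 2.0):
    print(f"{x:5.2f}  {half_exp(half_exp(x)):12.6f}  {B ** x:12.6f}")
</syntaxhighlight>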

Unfortunately, I can't figure out a way to get that coefficient in the front, so the function above satisfies f(f(x)) = ab^x only when a = 1 . Regardless, I believe it's worth mentioning. Not only would tetration make it easier to define functions such as f(f(f(x))) = b^x , but the existence of the link means that there might be some way to calculate ^{0.5}x , although this assumes that the value of a half-exponential function at x = 0 is known.

Expfac user (talk) 20:05, 20 January 2021 (UTC)

I don't see the point. Functional iteration is a mature, self-standing field, as you can see from the cited bibliography, and no salutary information or techniques can come from tetrations. You might consider linking this one to the iterated function page, however. Cuzkatzimhut (talk) 01:20, 21 January 2021 (UTC)

:"linking this one to the iteration page" as in, linking the half exponential function to the iteration page? Because if that's the case, the page already has a link to iterated functions. As for the comment on tetration...yeah, I guess it isn't practical. Although, would it be worth it to mention half-exponentials on the tetration page, assuming it hasn't been done already?

:Other than that, I appreciate the advice; I suppose there's no point mentioning tetration if it's not practical. Expfac user (talk) 16:03, 21 January 2021 (UTC)

::Apologies; I took my own advice and linked the iterated function page to this one, after my comment above! Personally, more Kneser stuff is in order, but I lack the energy to do that. Cuzkatzimhut (talk) 22:00, 21 January 2021 (UTC)