Jack function

{{Short description|Generalization of the Jack polynomial}}

In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.

Definition

The Jack function J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m) of an integer partition \kappa, parameter \alpha, and arguments x_1,x_2,\ldots,x_m can be recursively defined as follows:

; For m=1:

: J_{k}^{(\alpha )}(x_1)=x_1^k(1+\alpha)\cdots (1+(k-1)\alpha)

; For m>1:

: J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m)=\sum_\mu J_\mu^{(\alpha )}(x_1,x_2,\ldots,x_{m-1})\, x_m^{|\kappa/\mu|}\,\beta_{\kappa\mu},

where the summation is over all partitions \mu such that the skew partition \kappa/\mu is a horizontal strip, namely

: \kappa_1\ge\mu_1\ge\kappa_2\ge\mu_2\ge\cdots\ge\kappa_{m-1}\ge\mu_{m-1}\ge\kappa_m

(\mu_m must be zero or otherwise J_\mu^{(\alpha )}(x_1,\ldots,x_{m-1})=0) and

: \beta_{\kappa\mu}=\frac{\prod_{(i,j)\in \kappa} B_{\kappa\mu}^\kappa(i,j)}{\prod_{(i,j)\in \mu} B_{\kappa\mu}^\mu(i,j)},

where B_{\kappa\mu}^\nu(i,j) equals \nu_j'-i+\alpha(\nu_i-j+1) if \kappa_j'=\mu_j' and \nu_j'-i+1+\alpha(\nu_i-j) otherwise, for \nu equal to \kappa or \mu. Here \nu' denotes the conjugate partition of \nu; in particular \kappa' and \mu' are the conjugate partitions of \kappa and \mu, respectively. The notation (i,j)\in\kappa means that the product is taken over all coordinates (i,j) of boxes in the Young diagram of the partition \kappa.
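
To illustrate the recursion, take \kappa=(2) and two variables. The admissible partitions \mu are (2), (1) and \emptyset, and a direct computation from the definition of \beta_{\kappa\mu} gives \beta_{(2)(2)}=1, \beta_{(2)(1)}=2 and \beta_{(2)\emptyset}=1+\alpha, so that

: J_{(2)}^{(\alpha )}(x_1,x_2)=J_{(2)}^{(\alpha )}(x_1)+2J_{(1)}^{(\alpha )}(x_1)\,x_2+(1+\alpha)x_2^2=(1+\alpha)\left(x_1^2+x_2^2\right)+2x_1x_2.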

Combinatorial formula

In 1997, F. Knop and S. Sahi {{sfn|Knop|Sahi|1997}} gave a purely combinatorial formula for the Jack polynomial J_\lambda^{(\alpha )} in n variables:

:J_\lambda^{(\alpha )} = \sum_{T} d_T(\alpha) \prod_{s \in T} x_{T(s)}.

The sum is taken over all admissible tableaux of shape \lambda, and

:d_T(\alpha) = \prod_{s \in T \text{ critical}} d_\lambda(\alpha)(s)

with

:d_\lambda(\alpha)(s) = \alpha(a_\lambda(s) +1) + (l_\lambda(s) + 1).

Here a_\lambda(s) and l_\lambda(s) denote the arm length and leg length of the box s in \lambda, that is, the number of boxes to the right of s in its row and the number of boxes below s in its column, respectively.

An admissible tableau of shape \lambda is a filling of the Young diagram \lambda with numbers 1,2,…,n such that for any box (i,j) in the tableau,

  • T(i,j) \neq T(i',j) whenever i'>i.
  • T(i,j) \neq T(i',j-1) whenever j>1 and i'>i.

A box s = (i,j) \in \lambda is critical for the tableau T if j > 1 and T(i,j)=T(i,j-1).
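
As a small check of the formula, take \lambda=(2) and n=2. All four fillings of the two boxes with entries from \{1,2\} are admissible, the box (1,2) is critical exactly when the two entries are equal, and d_\lambda(\alpha)((1,2)) = \alpha+1, so

: J_{(2)}^{(\alpha )} = (1+\alpha)x_1^2 + 2x_1x_2 + (1+\alpha)x_2^2,

in agreement with the recursive definition above.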

This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.

C normalization

The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product:

:\langle f,g\rangle = \int_{[0,2\pi]^n} f \left (e^{i\theta_1},\ldots,e^{i\theta_n} \right ) \overline{g \left (e^{i\theta_1},\ldots,e^{i\theta_n} \right )} \prod_{1\le j<k\le n} \left |e^{i\theta_j}-e^{i\theta_k} \right |^{2/\alpha} d\theta_1 \cdots d\theta_n.

This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as

:C_\kappa^{(\alpha)}(x_1,\ldots,x_n) = \frac{\alpha^{|\kappa|} \left (|\kappa| \right )!}{j_\kappa} J_\kappa^{(\alpha)}(x_1,\ldots,x_n),

where

:j_\kappa=\prod_{(i,j)\in \kappa} \left (\kappa_j'-i+\alpha \left (\kappa_i-j+1 \right ) \right ) \left (\kappa_j'-i+1+\alpha \left (\kappa_i-j \right ) \right ).

For \alpha=2, C_\kappa^{(2)}(x_1,\ldots,x_n) is often denoted by C_\kappa(x_1,\ldots,x_n) and called the zonal polynomial.
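
A convenient feature of the C normalization is that the polynomials of a fixed degree k sum to (x_1+\cdots+x_n)^k. For instance, for k=2 and two variables one finds j_{(2)}=2\alpha^2(1+\alpha) and j_{(1,1)}=2\alpha(1+\alpha), hence

: C_{(2)}^{(\alpha)}(x_1,x_2)=x_1^2+x_2^2+\frac{2}{1+\alpha}x_1x_2, \qquad C_{(1,1)}^{(\alpha)}(x_1,x_2)=\frac{2\alpha}{1+\alpha}x_1x_2,

whose sum is (x_1+x_2)^2.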

P normalization

The P normalization is given by the identity J_\lambda = H'_\lambda P_\lambda, where

:H'_\lambda = \prod_{s\in \lambda} (\alpha a_\lambda(s) + l_\lambda(s) + 1)

where a_\lambda(s) and l_\lambda(s) denote the arm and leg lengths of the box s, respectively. At \alpha=1 the factor H'_\lambda is the product of the hook lengths of \lambda, so P_\lambda is the usual Schur function (see the connection with the Schur polynomial below).

Similar to Schur polynomials, P_\lambda can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter \alpha.

Thus, a formula {{sfn|Macdonald|1995|pp=379}} for the Jack function P_\lambda is given by

: P_\lambda = \sum_{T} \psi_T(\alpha) \prod_{s \in \lambda} x_{T(s)}

where the sum is taken over all semistandard Young tableaux T of shape \lambda with entries in \{1,\ldots,n\}, and T(s) denotes the entry in box s of T.

The weight \psi_T(\alpha) can be defined in the following fashion: Each tableau T of shape \lambda can be interpreted as a sequence of partitions

: \emptyset = \nu_0 \to \nu_1 \to \dots \to \nu_n = \lambda

where \nu_i/\nu_{i-1} is the skew shape consisting of the boxes of T that contain the entry i. Then

: \psi_T(\alpha) = \prod_{i=1}^n \psi_{\nu_i/\nu_{i-1}}(\alpha)

where

:\psi_{\lambda/\mu}(\alpha) = \prod_{s \in R_{\lambda/\mu}-C_{\lambda/\mu} } \frac{(\alpha a_\mu(s) + l_\mu(s) +1)}{(\alpha a_\mu(s) + l_\mu(s) + \alpha)} \frac{(\alpha a_\lambda(s) + l_\lambda(s) + \alpha)}{(\alpha a_\lambda(s) + l_\lambda(s) +1)}

where R_{\lambda/\mu} (respectively C_{\lambda/\mu}) denotes the set of boxes of \lambda lying in a row (respectively column) that meets \lambda/\mu. Thus the product is taken over all boxes s of \lambda that have a box of \lambda/\mu in the same row but not in the same column; such boxes necessarily lie in \mu.
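
For example, for \lambda=(2) and n=2 the semistandard tableaux have entries (1,1), (1,2) and (2,2), with weights \psi_T(\alpha)=1, \tfrac{2}{1+\alpha} and 1, respectively, so that

: P_{(2)}^{(\alpha)}(x_1,x_2)=x_1^2+\frac{2}{1+\alpha}x_1x_2+x_2^2,

which agrees with J_{(2)}^{(\alpha)}(x_1,x_2)/H'_{(2)}, where H'_{(2)}=1+\alpha.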

Connection with the Schur polynomial

When \alpha=1, the Jack function is a scalar multiple of the Schur polynomial:

: J^{(1)}_\kappa(x_1,x_2,\ldots,x_n) = H_\kappa s_\kappa(x_1,x_2,\ldots,x_n),

where

: H_\kappa=\prod_{(i,j)\in\kappa} h_\kappa(i,j)=\prod_{(i,j)\in\kappa} (\kappa_i+\kappa_j'-i-j+1)

is the product of all hook lengths of \kappa.
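
For example, for \kappa=(2) the hook lengths are 2 and 1, so H_{(2)}=2 and J^{(1)}_{(2)}=2s_{(2)}; in two variables this is 2\left(x_1^2+x_1x_2+x_2^2\right), matching the expansion (1+\alpha)\left(x_1^2+x_2^2\right)+2x_1x_2 obtained from the recursion above at \alpha=1.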

Properties

If the partition has more parts than the number of variables, then the Jack function is 0:

:J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m)=0, \mbox{ if }\kappa_{m+1}>0.

Matrix argument

In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple: if X is a matrix with eigenvalues x_1,x_2,\ldots,x_m, then

: J_\kappa^{(\alpha )}(X)=J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m).
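
The recursive definition above translates directly into a short numerical routine. The following is a minimal illustrative sketch in Python (the names jack_J, beta and horizontal_strips are ad hoc choices, not taken from the cited references); it evaluates J_\kappa^{(\alpha )} on the eigenvalues of a matrix argument.

<syntaxhighlight lang="python">
# Minimal sketch: evaluate J_kappa^{(alpha)} numerically via the recursion above,
# applied to the eigenvalues of a matrix argument.  Function names are ad hoc.
import numpy as np

def conjugate(kappa):
    """Conjugate partition: kappa'_j = number of parts of kappa that are >= j."""
    return [sum(1 for part in kappa if part >= j)
            for j in range(1, (kappa[0] if kappa else 0) + 1)]

def B(nu, kappa, mu, i, j, alpha):
    """B_{kappa mu}^{nu}(i, j) from the definition above (1-based box coordinates)."""
    kc, mc, nc = conjugate(kappa), conjugate(mu), conjugate(nu)
    kp = kc[j - 1] if j <= len(kc) else 0
    mp = mc[j - 1] if j <= len(mc) else 0
    if kp == mp:
        return nc[j - 1] - i + alpha * (nu[i - 1] - j + 1)
    return nc[j - 1] - i + 1 + alpha * (nu[i - 1] - j)

def beta(kappa, mu, alpha):
    """The coefficient beta_{kappa mu}."""
    num = np.prod([B(kappa, kappa, mu, i, j, alpha)
                   for i, row in enumerate(kappa, 1) for j in range(1, row + 1)])
    den = np.prod([B(mu, kappa, mu, i, j, alpha)
                   for i, row in enumerate(mu, 1) for j in range(1, row + 1)])
    return num / den

def horizontal_strips(kappa):
    """All partitions mu with kappa_1 >= mu_1 >= kappa_2 >= mu_2 >= ..."""
    bounds = [(kappa[i + 1] if i + 1 < len(kappa) else 0, kappa[i])
              for i in range(len(kappa))]
    def rec(i):
        if i == len(bounds):
            yield []
        else:
            lo, hi = bounds[i]
            for v in range(hi, lo - 1, -1):
                for rest in rec(i + 1):
                    yield [v] + rest
    for mu in rec(0):
        yield [v for v in mu if v > 0]

def jack_J(kappa, x, alpha):
    """J_kappa^{(alpha)}(x_1, ..., x_m), recursing on the number of variables m."""
    kappa = [part for part in kappa if part > 0]
    m = len(x)
    if len(kappa) > m:          # more parts than variables: the Jack function is 0
        return 0.0
    if not kappa:               # empty partition
        return 1.0
    if m == 1:                  # base case of the recursion
        k = kappa[0]
        return x[0] ** k * np.prod([1 + i * alpha for i in range(k)])
    return sum(jack_J(mu, x[:-1], alpha)
               * x[-1] ** (sum(kappa) - sum(mu))
               * beta(kappa, mu, alpha)
               for mu in horizontal_strips(kappa))

# Matrix argument: J_kappa^{(alpha)}(X) is J_kappa^{(alpha)} of the eigenvalues of X.
X = np.diag([1.0, 2.0, 3.0])
eigenvalues = list(np.linalg.eigvalsh(X))
print(jack_J([2], eigenvalues, alpha=1.0))   # 2 * s_(2)(1, 2, 3) = 50
</syntaxhighlight>

For \alpha=1 and \kappa=(2) the printed value 50 equals 2s_{(2)}(1,2,3), in line with the connection with the Schur polynomial above.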

References

  • {{citation | last1 = Demmel | first1 = James | author1-link = James Demmel | last2 = Koev | first2 = Plamen | title = Accurate and efficient evaluation of Schur and Jack functions | journal = Mathematics of Computation | volume = 75 | issue = 253 | year = 2006 | pages = 223–239 | doi = 10.1090/S0025-5718-05-01780-1 | citeseerx = 10.1.1.134.5248 | mr = 2176397}}.
  • {{citation | last = Jack | first = Henry | authorlink = Henry Jack | title = A class of symmetric polynomials with a parameter | journal = Proceedings of the Royal Society of Edinburgh | series = Section A. Mathematics | volume = 69 | year = 1970–1971 | pages = 1–18 | mr = 0289462}}.
  • {{citation | last1 = Knop | first1 = Friedrich | last2 = Sahi | first2 = Siddhartha | title = A recursion and a combinatorial formula for Jack polynomials | journal = Inventiones Mathematicae | volume = 128 | issue = 1 | date = 19 March 1997 | pages = 9–22 | doi = 10.1007/s002220050134 | arxiv = q-alg/9610016 | bibcode = 1997InMat.128....9K | s2cid = 7188322}}.
  • {{citation | last = Macdonald | first = I. G. | authorlink = Ian G. Macdonald | title = Symmetric functions and Hall polynomials | edition = 2nd | series = Oxford Mathematical Monographs | publisher = Oxford University Press | location = New York | year = 1995 | isbn = 978-0-19-853489-1 | mr = 1354144}}.
  • {{citation | last = Stanley | first = Richard P. | authorlink = Richard P. Stanley | title = Some combinatorial properties of Jack symmetric functions | journal = Advances in Mathematics | volume = 77 | issue = 1 | year = 1989 | pages = 76–115 | doi = 10.1016/0001-8708(89)90015-7 | doi-access = free | mr = 1014073}}.