# Talk:Fractional calculus

## Untitled

This is a large and multi-faceted topic. This will be the mother-page for a large section. Here's a rough outline:

1. Introduction
2. History
3. Semiotic base
    1. Differintegrals
        1. Riemann-Liouville
        2. Grünwald-Letnikov
        3. Weyl
        4. Interpretation
    2. Relation to Standard Transformations
        1. Laplace transform
        2. Fourier transform
    3. Properties and Techniques
        1. General Properties
        2. Differintegration of some special functions
4. Geometric structure of
    1. Relation to Diffusion
        1. Anomalous (non-Fickian) diffusion
        2. Fractional Brownian motion
    2. Relation to Fractals & Chaos Theory
        1. Multiple-order differintegration
            1. Extraordinary differential equations
            2. Partial fractional derivatives
        2. Special Forms of Fractional Calculus
            1. Initialized fractional calculus
            2. Local fractional derivative (LFD)
        3. Morphological (Synthesis of Structure and Change) aspects
            1. Fractional reaction-diffusion equations
            2. Fractional calculus in continuum mechanics
            3. Fractal operators
6. Applications of Fractional Calculus
    1. Mathematics
    2. Physics
    3. Engineering
7. Contemporary Trends in Fractional Calculus

And, of course, I am open to suggestions. I will, however, be stubborn on there being a 'geometric structure of' section, in whatever form. I hope this helps get this moving.

-User:Kevin_baas 2003.05.06

---

I think some of that may be hard to swallow for an undergraduate math student. (Minor note: fractional calculus deals with complex numbered orders of differintegration as well.) Charles, thank you very much for your contributions to this page! I've been waiting for someone besides me to work in this area. :) Kevin Baas 19:33, 16 Apr 2004 (UTC)

OK - let me explain that I was working today on the basis of the half-page article in the big Soviet mathematical encyclopedia. So it's not going to look like a tutorial, at this point.

Charles Matthews 19:42, 16 Apr 2004 (UTC)

## Alternative version

This page used to be quite different. The current and the older version both have their advantages. I invite contributors to look at the older version here, and combine the best of both worlds, while making the article more in line with the protocols agreed to on the WikiProject Mathematics pages. Kevin Baas | talk 19:56, 2004 Sep 24 (UTC)

I think it would also be helpful to point out that we now have pages on functional calculus and pseudo-differential operator, that contribute significantly to the context; and, less obviously, there is material on the Sobolev space page that also uses fractional differentiation, defined via Fourier transform.

Charles Matthews 20:56, 24 Sep 2004 (UTC)

Kevin, are you done with the page or is this work in progress? Gadykozma 23:25, 24 Sep 2004 (UTC)
This page, or the alternative? Every page is a work in progress. The version here is primarily Charles Matthews', the alternative version is primarily mine, before charles radically altered the page. Why do you ask? Kevin Baas | talk 02:19, 2004 Sep 25 (UTC)
I'm afraid I cannot add much mathematical intuition beyond what I anyway wrote under Sobolev space. Editorially, I only think that it's better to start with modest goals (i.e. the current article) and expand the article step by step. Here it is also important to keep in sync with Differintegral so that there won't be any unnecessary duplication of material. Gadykozma 02:49, 25 Sep 2004 (UTC)

## Hmmm…

Interesting article. I certainly haven't explored the subject in depth—this article is all that I've read on it—, but I'm already beginning to wonder about the uniqueness of ${\displaystyle D^{p/q}}$ (to deal only with the rational case for now; it would be easy to extend that to real- and complex-valued exponents). For a given function ${\displaystyle f}$, might there be two distinct operators ${\displaystyle D_{1}}$ and ${\displaystyle D_{2}}$ such that ${\displaystyle {\big (}D_{1}^{p/q}{\big )}^{q}=f'={\big (}D_{2}^{p/q}{\big )}^{q}}$?

Take the polynomial case:

${\displaystyle f(x)=\sum _{i=0}^{n}a_{i}x^{i}.}$

If we make the desirable (I suppose) assumptions that ${\displaystyle D^{p/q}(f+g)=D^{p/q}(f)+D^{p/q}(g)}$ and that ${\displaystyle D^{p/q}(kf)=kD^{p/q}(f)}$ (for constant ${\displaystyle k}$), then one way to define ${\displaystyle D^{1/q}(f)}$ for a non-zero integer ${\displaystyle q}$ is

${\displaystyle D^{1/q}(f)=\sum _{i=0}^{n}{\Gamma (i+1) \over \Gamma (i+1-1/q)}a_{i}x^{i-1/q}.}$

And that implies that

${\displaystyle D^{p/q}(f)=\sum _{i=0}^{n}{\Gamma (i+1) \over \Gamma (i+1-p/q)}a_{i}x^{i-p/q}}$

for integers ${\displaystyle p}$ and ${\displaystyle q}$ (again, ${\displaystyle q\neq 0}$). But is that definition unique? And is it easy to extend to general functions? How about function composition: do we get ${\displaystyle D^{p/q}{\big (}f(g){\big )}={\big (}D^{p/q}f{\big )}(g)\,D^{p/q}(g)}$ for functions ${\displaystyle f}$ and ${\displaystyle g}$? Do we even want to define ${\displaystyle D^{p/q}}$ that way?
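The power-rule definition above is easy to test numerically. A minimal Python sketch (the function name and the representation of a monomial as a (coefficient, exponent) pair are my own), checking that two half-derivatives of x compose into the ordinary derivative:

```python
from math import gamma

def frac_diff_monomial(a, k, q):
    """Order-q power-rule derivative of the monomial a*x**k.
    Returns the resulting monomial as a (coefficient, exponent) pair."""
    return a * gamma(k + 1) / gamma(k + 1 - q), k - q

# half-derivative of x:  (1/Gamma(3/2)) * x**(1/2)  =  (2/sqrt(pi)) * sqrt(x)
c1, e1 = frac_diff_monomial(1.0, 1.0, 0.5)

# half-derivative of that: should recover the ordinary derivative, D x = 1
c2, e2 = frac_diff_monomial(c1, e1, 0.5)
```

At least for monomials, then, this choice of ${\displaystyle D^{1/2}}$ does square to ${\displaystyle D}$; whether it is the *unique* such choice is a separate question.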

Just some random musings. Forgive me if this is just a lot of ignorant babbling. Shorne 05:37, 16 Oct 2004 (UTC)

(PS: Why do \bigl and the like not work? Shorne 05:37, 16 Oct 2004 (UTC))

Shorne hi. You forgot one important assumption, and that's translation invariance: you want that ${\displaystyle (D^{p/q})(f(x+a))=(D^{p/q}f)(x+a)}$. However, even with this assumption there is more than one solution. The easiest way to see this is in the Fourier domain. There, differentiation is just multiplication by n. So its root must be multiplication by ${\displaystyle {\sqrt {n}}}$. However, any choice of signs would also give you a square root. In other words, for any choice of a sequence ${\displaystyle \epsilon _{n}}$ of ${\displaystyle \pm 1}$, you can construct a "root of differentiation" operator by taking the Fourier transform, multiplying by ${\displaystyle \epsilon _{n}{\sqrt {n}}}$, and taking the inverse Fourier transform.
As for your PS question, the mechanics are explained in meta:Help:Formula, so check there. Gadykozma 14:16, 16 Oct 2004 (UTC)
Gadykozma, that's about what I was thinking: if you have the Fourier series of your function, it's easy to do fractional derivatives because it's just sums of sines and cosines, and if you represent the Fourier series as magnitude and phase instead of a sine and a cosine you don't have to worry about the edges lining up on your interval... and there's the overlap with what you just said: when you take the derivative of sine it shifts by (1+4n)*pi/2, so for a half derivative do you shift by pi/4 or 5pi/4... --Sukisuki (talk) 00:37, 28 August 2009 (UTC)
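The Fourier-domain picture in this thread can be checked numerically. A minimal sketch (function name is mine; it assumes a function sampled on a uniform grid over one period, and picks the principal branch of (i*k)**q, i.e. one particular choice of the epsilon signs discussed above):

```python
import numpy as np

def frac_diff_periodic(samples, q):
    """Fractional derivative of a periodic function sampled on [0, 2*pi):
    multiply the k-th Fourier coefficient by (i*k)**q.  The principal
    branch is used; flipping the sign on any subset of frequencies would
    give another equally valid 'root' of differentiation."""
    n = len(samples)
    k = np.fft.fftfreq(n) * n            # integer wave numbers, FFT order
    mult = (1j * k) ** q                 # principal branch; 0**q == 0
    return np.fft.ifft(np.fft.fft(samples) * mult).real

x = 2 * np.pi * np.arange(64) / 64
half = frac_diff_periodic(np.sin(x), 0.5)   # expect sin(x + pi/4)
full = frac_diff_periodic(half, 0.5)        # expect cos(x)
```

With this branch choice the half-derivative of sin(x) comes out as sin(x + pi/4), and applying it twice reproduces cos(x), matching the phase-shift intuition below.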

## complicated?

"Unfortunately the comparable process for the derivative operator D is significantly more complex..." really? The one in Mathworld isn't especially complicated in concept; integrate up the fraction (integration being so tidy) then differentiate down:

${\displaystyle D^{\mu }=D^{m}I^{m-\mu }}$ with integer ${\displaystyle m\geq \mu >0}$

Y'don't even have to use the least m. Kwantus 2005 July 2 01:18 (UTC)

With m restricted to the least value, Loverro p11 calls that the Lefthand form and the reverse ${\displaystyle D^{\mu }=I^{m-\mu }D^{m}}$ the Righthand or Caputo form. The latter is apparently more practical, producing 0 for constant functions and working better in DEs. Kwantus 2005 July 2 18:17 (UTC)
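For what it's worth, the Lefthand form with m = 1 is straightforward to evaluate numerically; a Python sketch (function names, quadrature scheme, and step sizes are my own choices, not anything from Loverro):

```python
from math import gamma

def rl_integral(f, x, alpha, n=2000):
    """Riemann-Liouville fractional integral I^alpha f(x), lower limit 0.
    Substituting s = (x - t)**alpha turns the weakly singular kernel into
    a constant:  I^alpha f(x) = (1/(alpha*Gamma(alpha))) *
    integral_0^{x**alpha} f(x - s**(1/alpha)) ds  (midpoint rule here)."""
    width = x ** alpha / n
    total = sum(f(x - ((i + 0.5) * width) ** (1.0 / alpha)) for i in range(n))
    return total * width / (alpha * gamma(alpha))

def lefthand_half_derivative(f, x, h=1e-3):
    """Lefthand form with m = 1:  D^(1/2) f = D^1 I^(1/2) f,
    the outer whole derivative taken by central difference."""
    return (rl_integral(f, x + h, 0.5) - rl_integral(f, x - h, 0.5)) / (2 * h)
```

For f(t) = t this reproduces the power-rule answer D^(1/2) x = 2*sqrt(x/pi), which is reassuring since the two constructions agree on polynomials.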

## and now for the Amateur Hour (me)

Question: I am a non-mathematician who chanced upon fractional diff/integ quite by accident many years ago, specifically by noting that since fractional diff/integ is trivial for simple sinusoidal plane waves -- a proportional -/+ phase shift does the trick -- then I could define a meaningful continuum for diff/integ just by taking the Fourier transform of a function and shifting its components proportionally. It's rather fun, as you can see the results approximating the regular fencepost values of -2, -1, 0, 1, 2 as they approach those points. Sort of slinky-thinky, yes, but such visualizations might help some readers. Comments, anyone? --Terry Dactyl 04:53, 15 August 2005 (UTC)

Nice and simple intuitive explanation! If differentiating sine or cosine is shifting by -pi/2, then it is clear that half-differentiating would be shifting by -pi/4, because doing this twice will lead to the original transformation. This quickly and nicely extends to any fractional or real parameter, and finally leads to a frequency-domain interpretation. --91.213.255.7 (talk) 22:38, 9 November 2010 (UTC)
Do we have this? By all means we should. (And, as mentioned below, something about applications, particularly in relation to viscoelasticity.) I chanced upon frac calc myself, as well, in high school, though by way of extending the power law. (I had to find a fractionalization of the factorial function, and hence I learned of the gamma function, so perhaps it was not as elegant, but it worked.) There is a section on "heuristics" there, but that's just from a letter Riemann wrote a long time ago, and nobody really arrives at it that way, and it's not very intuitive (or mathematical).
So in conclusion I say we should add an explanation of the Fourier transform method in the manner just described by Terry. Kevin Baastalk 19:30, 10 November 2010 (UTC)
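As a tiny illustration of the phase-shift method described above (names are mine; this just encodes "fractional differentiation of a sinusoid = proportional phase shift"):

```python
import math

def half_diff_phase(phase):
    """Half-derivative of x -> sin(x + phase), encoded as a phase shift of
    pi/4 -- half of the pi/2 shift of one whole differentiation."""
    return phase + math.pi / 4

# two half-derivatives of sin(x) should compose into its derivative, cos(x)
p = half_diff_phase(half_diff_phase(0.0))
```

Since any reasonable function decomposes into sinusoids (Fourier), applying this shift component-by-component is exactly the frequency-domain method Terry sketches.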

And an amateur question. Any practical applications of this, as there are for ordinary calculus? An example or two, of interest to non-mathematicians, would help this article shine. Derex 19:15, 7 June 2006 (UTC)

The fractional calculus does have uses in modeling viscoelastic phenomena. Both magma flows (very low frequency) and flutter (very high frequency) phenomena can be modeled using fractional models. (Also, the Bessel functions are fractional derivatives of sinusoids.) ComputerGeezer (talk) 03:10, 14 February 2008 (UTC)

### Soapbox: Need for a history section // Why not help newbies more?

1. There is a rich history to this topic that goes back to the foundations of calculus with Leibniz and Newton, as briefly mentioned on this page. None of that seems to show up in this page. I would try to edit the article myself, but frankly, the show of mathematical notational expertise on this page is a bit too intimidating. Still, I find it a bit bizarre that for such a historically rich topic there isn't even a header for the history of this topic. Giving two external references, one in French without any link (!) and one link at the end, just doesn't seem to cut it.
2. Why oh why do so many pages like this in Wikipedia seem absolutely dedicated to scaring the bejeebers out of anyone other than a pure mathematician who might want to come by and actually learn something they can understand about the topic? This article is so notation-rich and method-rich that the only people who would seem to have much chance of following it are the people who already do understand it. What is the point of that? I keep thinking that I'm reading a discussion by expert wine connoisseurs of the various delicate parameters that add richness to the topic. That's fine -- important even -- but wouldn't a more complete article help get the general drift over to less sophisticated tasters? Shouldn't the idea be writing articles that try to bring in new folks and get them interested in learning more, so that someday they, too, can become aware of the full depth of it?
3. I would again in this case point to the example of how fractional calculus could be introduced with minimal notation by going through a discussion of how sinusoidals behave when differentiated or integrated (they shift left or right by wavelength-proportional units), and how that can be used in an intuitive fashion to create continuous definitions of differentiation and integration as left and right shifts of wave forms. Does that oversimplify things? Great gollumpus gumdrops (please pardon the profanity), yes, of course it does, insanely so! But isn't that the whole point when trying to convey a basic concept over to newbies -- simplifying things to the point where they can have that little click of intuition that tells them "Hey, I think I get the drift of this idea after all, wow!"?
4. From sinusoidal-only you can then point out this odd idea that you can try adding just a couple of them with different wavelengths, then pointing out that you still get the right results at the fencepost (integer) differentiations and integrations. You then refer the readers to this wonderful concept of a basis set, and how sinusoidals form such a basis set. (Side Note: Error on my part! I first typed "base set", thought it looked wrong, but for some reason went with it anyway. Spelled correctly, "basis set" is in fact covered in the article Basis (linear_algebra). However, I remain disappointed that even in that case the article starts with the assumption that the reader must understand linear algebra before being able to grasp the concept of a set of elements from which all points of interest can be constructed or expressed.) From that you introduce Fourier transforms, and point out how extraordinarily useful this little concept is -- and, if you don't mind tossing in a bit of physics, point out that this simple concept underlies the entire mysterious-sounding uncertainty principle of quantum physics. Hey, it's a better and a lot more precise intro to such a concept than the wonky philosophy stuff that leads the reader away from precision and mathematical formulation of the concept!
In other words: Lead 'em in! Bring 'em along! Get their curiosity up! To me a topic like this one -- fractional calculus that is, since I've digressed a bit 8^) -- is a perfect example of the kind of fascinating concept-extension that makes math fun.
Another is the beautiful way that complex numbers almost magically encompass wave-related mathematics. (I am stealing Roger Penrose's terminology and perspective there, since I'm currently going through his delightfully exploratory book The Road to Reality. Penrose's take on the topic really is a lot of fun to read, even if you already know it well and were already impressed by it.)

All of the above takes far less writing than it might seem, especially with good use of diagrams and references to other (apparently non-existent in some cases!) Wikipedia articles. But more importantly, it might give a few readers enough interest and self-confidence to actually dig into the real meat of the article that is already here.

Enough... as in "wow, I think I just said waaaaaay too much, but what the heck, I believe what I said and don't particularly feel like taking it back... 8^)

Cheers, Terry Bollinger 03:23, 19 June 2006 (UTC)

## A good introduction to Fractional Calculus:

http://www.xuru.org/fc/toc.asp I didn't see it already in the article, but didn't want to add it if it's already there.

JWhiteheadcc (talk) 15:36, 16 January 2008 (UTC)

## Parametric continuity

Considering parametric continuity, what happens when you take, e.g., the 3/2 derivative of a C1-continuous function? At what fractional order does the non-smoothness make the fractional derivative locally undefined? —Ben FrantzDale (talk) 17:59, 25 August 2008 (UTC)

Multiple integration can be seen as measuring the volume of a solid, the area of a surface, the length of a curve, etc. By extension, fractional integration can be seen as measuring the volume/area/length/whatever of a fractal.
Note, 1st-order integration on a volume will give you a surface of lengths, 2nd-order will give you a curve of areas, and 3rd-order will give you a point of volume. The same holds true for fractional integration.
In a way, integrating over an R-dimensional region is just projecting an N-dimensional function onto an N-R dimensional space. Think of shining light through a translucent (3D) object onto a (2D) piece of paper. The darkness at a point on the (2D) surface of the paper is equal to the (1D) thickness ("length") of the object at that point. 2D+1D=3D. The "regions of integration" in this example are the (1D) rays cast through the object.
Thus, for example, if you do a fractional integration on coral, of order equal to the fractal dimension of the coral, you might get some (0D)growth parameter such as the rate of calcium deposition.
Now if you integrate to a lower order you might get from that a "surface" of growth rates, or something like that (though it might not be a 2-dimensional "surface"). If you look at a point in a region not on the surface of the coral, you might very well get a growth rate that's "locally undefined" - which is all perfectly logical because there's no coral there.
You also might want to consider blowing up a fractal balloon. It's a twist on the oft-cited related rates problem, and the solution is just about as straightforward. The important point is that the intuition is just as applicable, and most of nature's balloons are fractal.
You see, since nature has to follow the laws of mathematics and vice-versa, even very high-level math usually has very simple and intuitive explanations. Which is why visualizing the problems, mathematical operations, etc. can often make math MUCH easier than just using a purely formal/symbolic approach. (And more useful.) Perhaps not too surprisingly, this was Benoit Mandelbrot's approach. Kevin Baastalk 17:10, 14 January 2009 (UTC)
If I still haven't answered your question - "At what fractional order does the non-smoothness make the fractional derivative locally undefined?" - then let me just say that it's the same as non-fractional integration. You're really not doing anything different, spatially speaking. And the formalism only describes what's going on spatially, so if you try to measure the volume of a surface or something like that, you'll have problems. Just so long as you realize what you're actually doing, there should be no surprises. And one more thing: notice that integration has a region of integration, but differentiation doesn't. That's why you have to add that "C" (integration constant) back in when you reverse it (and why the "C" is on the other side of the equation). The same holds true for fractional integration, only the "C" might be a function of fractional dimension. (In a way, the "C" represents information lost in the projection.) NASA invented an "initialized fractional calculus" to deal with this. You might find it interesting. Kevin Baastalk 18:08, 14 January 2009 (UTC)
Thanks for the reply. That sounds very interesting although I must admit I don't fully understand. Let me rephrase my initial question; see if this is a less nonsensical question: Suppose I have the function
${\displaystyle f(x)={\frac {1}{2}}|x|^{2}}$
This function is ${\displaystyle C^{1}}$ continuous. Its derivatives are
${\displaystyle {\frac {d}{dx}}f(x)=|x|}$
and
${\displaystyle {\frac {d^{2}}{dx^{2}}}f(x)={\begin{cases}-1&x<0\\\;\;\,1&x>0\\{\text{undefined}}&x=0\end{cases}}}$
Looking at the derivative at zero as a function of the order of the derivative,
${\displaystyle g(n):=\left.{\frac {d^{n}}{dx^{n}}}f(x)\right|_{x=0}}$
we have
${\displaystyle g(0)=0}$, the function value at zero;
${\displaystyle g(1)=0}$, the parabola is flat;
${\displaystyle g(2)}$ is undefined.
It seems like the fractional derivative should let us fill in values for ${\displaystyle g(n)}$ for ${\displaystyle n\in [0,2)}$.
What would it give us?
Is there a finite-difference approximation of a fractional derivative?
I am particularly curious to understand what happens to this fractional derivative as n goes to two. I feel like I'll learn something by understanding at what order of fractional differentiation ${\displaystyle f^{(n)}(x)}$ "suddenly" gets a sharp point in it.
Thanks. —Ben FrantzDale (talk) 13:36, 15 January 2009 (UTC)
Hmm, well, absolute value isn't really a "natural" function, so to speak. (As far as I can tell nature rarely, if ever, takes the absolute value of something, and I can't conceive of how it would, save inventing humans who do so.) And consequently there aren't really any formulas or identities for integrating/differentiating it, like there are for natural log, cosine, etc. Or any formulas/identities at all involving absolute value, for that matter. In this sense, it's poorly defined.
But in any case, for dx^2, at least, I suppose you could go back to first principles, and take the limit h->0 from the right, versus from the left, giving you a separate "right" and "left" derivative, both of which are defined (g(2) = 0). But really, by just looking at the graph, you can see that the instantaneous acceleration at the origin is infinite -- it goes from -1mph to +1mph in a nanosecond.
So that doesn't help us. I think the problem here is that absolute value is an even function, whereas x^0 is even, x^1 is odd, and x^2 is even. Every time you integrate/differentiate, the odd/evenness of the outer function changes. A fractional differentiation/integration would leave you with something like x^1.5 (which I guess is somewhere between odd and even). For absolute value, the even/oddness doesn't change, which seems to violate what one would expect from differentiation/integration. As far as I'm aware, absolute value is the only function that does this. In fact, I think a single-order differentiation translates to a 90-degree phase shift in all frequencies (and a scaling) when transformed to the Fourier domain (hence d(sin(ax)) = a*cos(ax) and d(x+y) = d(x)+d(y)); this should pretty much guarantee alternation between odd and even functions. In fact, come to think of it, what IS the derivative of absolute value, formally speaking (i.e. derived by manipulating symbols)? One needs to know it because one needs to apply the chain rule. Then one needs to derive the fractional derivative of the absolute value function. Do that, and you'll have your answer. But I'm not going to be holding my breath.
Perhaps an example that didn't use the absolute value function might be better? I would generally say just integrate/differentiate it like any other function and you'll get your answers just as you would with integer-order calculus. You can use the fractionalized formulas listed here, though I notice that the fractional chain rule is missing from there. It exists, you'll just have to do a search for it.
I would guess that there's no greater significance to when fractional integrations become singular than when non-fractional ones do. It all depends on the function, really. And if you're dealing with abstractions like C^1 continuity, you don't have a function, so you just say: well, it's C^1, so at precisely 1 order of differentiation. And if you're working with functions that just aren't defined that well due to their abstract/artificial nature (such as absolute value), then don't expect the results to be well defined. Kevin Baastalk 15:59, 15 January 2009 (UTC)
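On Ben's question above about a finite-difference approximation: there is one. The Grünwald-Letnikov definition mentioned in the outline at the top of this page is itself a finite-difference formula, using generalized binomial weights. A minimal sketch (function name and step size are my own choices):

```python
def gl_frac_diff(f, x, alpha, h=1e-3):
    """Grunwald-Letnikov differintegral with lower terminal 0:
    h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(x - k*h),
    where the generalized binomial weights follow the recurrence
    w_0 = 1,  w_{k+1} = w_k * (k - alpha) / (k + 1)."""
    n = int(round(x / h))
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(x - k * h)
        w *= (k - alpha) / (k + 1)
    return total / h ** alpha

# half-derivative of f(t) = t at t = 1 is 2/sqrt(pi) ~ 1.128
approx = gl_frac_diff(lambda t: t, 1.0, 0.5)
```

For alpha = 1 the weights collapse to 1, -1, 0, 0, ... and the formula reduces to an ordinary backward difference, which is a nice sanity check on the construction.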

Pardon the pun; absolve the alliterations. I think "heuristics" and "Half derivative of a simple function" cover pretty much the same stuff. But I think half-derivative is written more clearly, and with less narrative ("A fairly natural question to ask...", "It turns out..."). I say heuristics should be merged into half-derivative. Kevin Baastalk 17:19, 14 January 2009 (UTC)

## composition / initialization / boundary conditions - explanation.

There are some interesting issues that show up w/fractional calculus concerning boundary conditions, composition, and the like.

When you integrate a function like 5x+3, you get 5x^2/2+3x + C. This is because when you differentiate 5x^2/2+3x + 2 or 5x^2/2+3x - 7 or anything like that you get the same answer: 5x+3. For composition to hold, integration must be the inverse of differentiation, so that many-to-one mapping in differentiation (infinity-to-one) must map back one-to-infinity, and we do this by adding an undefined constant, 'C'. Now when we integrate twice or differentiate twice, this infinity-to-one stuff happens twice, so we get two undefined constants. (This is all assuming an "indefinite" integral right now.) But when we integrate/differentiate one and a half times, do we get one and a half undefined constants? Of course not, so what happens?

It is best to visualize it - picture differentiation/integration as an exponentially-shaped funnel - differentiation is pushing into the funnel, such that the result is smaller and that constant is popped out, while integration is pulling out of the funnel - such that we have to pop a constant in. Notice in typical calculus we are only pushing in discrete steps - for this, picture planes - or slices, if you will - cutting through the funnel, infinitely thin, spaced one unit apart. When we integrate or differentiate by 1, we push all of these planes up or down by exactly one unit. (Thus, in the end, the picture looks the same as before.) We see that when we push down, we lose information because a finite becomes an infinitesimal, and when we pull up, we gain a constant because an infinitesimal becomes finite.

When we do fractional integration/differentiation, we have to picture the whole volume of the funnel filled and the up/down motion being continuous rather than discrete. In our discrete example, each slice represented a polynomial order, the first was a*x^0, the next one higher, b*x^1, etc. In the continuous case, where we fill all the space in between, we have a continuum of polynomials - a(t)*x^t, or a "spectrum," if you will. As we move it up and down we have a continuum of those a(t) constants being pushed out or in.

Think of a function, instead of being a short string of symbols generating a curve, as a spectrum, such as: f(x) = a(t)*sin(x*t)+b(t)*cos(x*t). Integrating or differentiating by a real number, 'q', will shift the entire spectrum - D_x^q ( a(t)*sin(x*t)+b(t)*cos(x*t) ) = a(t)*D_x^q( sin(x*t) ) + b(t)*D_x^q( cos(x*t) ). Notice that in addition to adding a pi/2 phase shift per degree of integration/differentiation, we have to apply a fractionalized version of the chain rule to the cos(x*t) and sin(x*t), bringing 't' out to the front - something like " a(t) * t^q * sin(x*t+q*pi/2) ", I believe.

In one differentiation, some of those t's brought out to the front will be zero, hence our loss of information. If we leave them as t's and integrate back, we will get back our original function. But if we evaluate the expression - replacing all t's with their calculated values, the zeros remain and there are no t's to be pushed back, except that instead of multiplying by t when t=0 we might be dividing by t when it equals zero, resulting in an undefined expression. This undefined expression is our "+ C". This dividing by zero or multiplying by zero is the infinitesimal-to-finite or finite-to-infinitesimal conversion; the infinity-to-one.

And this is how we lose composition: by not leaving everything as t's. As we pull from the funnel we generate a whole continuum of undefined expressions (+ C)'s, just as when we push through it we continuously drive parts of the spectrum to zero.

Kevin Baastalk 16:14, 9 June 2009 (UTC)

## Applications

This [1] may be useful at some point for fleshing out the article. It discusses (briefly!) an application of fractional derivatives.

CRGreathouse (t | c) 23:38, 15 March 2010 (UTC)

## Example plot used is not color blind friendly

Blue and purple are hard do differentiate for people, like myself, with red-green colorblindness. 24.18.246.152 (talk) 02:33, 26 August 2010 (UTC)

## New plot, animation

Hi, I prepared an animation using gnuplot: http://smp.if.uj.edu.pl/~baryluk/fractional_diff/fractional_diff.gif (9MB). It shows derivatives of x^4: D^p x^4, where p varies across the animation from 0 to 5. For negative arguments I show the real part of the complex result. Hope this will help somehow. In the same directory, in the file fractional_diff.gp, one will find simple code for this animation (do whatever you want with it). Witold Baryluk --91.213.255.7 (talk) 22:02, 9 November 2010 (UTC)

I, for one, uploaded it: [2]. Kevin Baastalk 19:44, 10 November 2010 (UTC)

## The bit about "the fractional derivative after the integer derivative"

So I reverted an edit by Kevin_Baas, because it is not a standardized convention that 0 is negative (in fact, Negative integer#Order-theoretic properties claims it is neither positive nor negative) (and I didn't mean to remove the term integer from his edit), but his re-revert made me take another look at the paragraph in question, and I don't think it makes any sense. ${\displaystyle \Gamma (1-\alpha )}$ only has problems as ${\displaystyle \alpha \in \{1,2,3,\dots \}}$, i.e. where we already know what the αth derivative means. Is this paragraph correct? RobHar (talk) 17:50, 18 January 2011 (UTC)

I didn't know zero was defined that way. I come from a computer programming background, in which the sign of zero is undefined and thus it can be either (and in fact in floating point numbers, where the sign is stored separately, sometimes it's positive and other times it's negative). All I was really concerned about was the integer part, though, as opposed to all nonpositive real numbers. "Zero and(or?) negative integers" is fine by me.
As to whether it's correct: well, someone just added it in and I didn't really give it much thought. If alpha is negative, well, that means you're integrating (it's really a differintegral: a combined differentiation/integration operator). The example it gives is completely irrelevant because it shows a positive value of alpha (3/2). It's altogether irrelevant whether differentiation or integration operators are well defined at those points, because we're not using those operators here; we're using a differintegral operator, and we want to make sure _that_ is well defined at those points. We don't want to bother with switching back and forth. So I guess the problem comes in when integrating to integer order with the operator, using that definition of the differintegral, at least.
Those are just my observations. I don't know anything about what the literature says on the matter, if anything. And the gamma-function version of the power rule is really not itself a differintegral operator; it's just a rule that you can use, and if it doesn't work I suppose you can just use something else. So on that note I'm not sure it's notable enough to put on this page. Maybe on a page of differintegration rules, e.g. here. But I'm not sure it's even notable enough there. Kevin Baastalk 19:02, 18 January 2011 (UTC)
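To make RobHar's point concrete: the only trouble spots of the power-rule coefficient Gamma(k+1)/Gamma(k+1-alpha) are the poles of the denominator at the non-positive integers, where 1/Gamma tends to 0 anyway. A quick check (function name is mine; this is just the rule, not any particular differintegral operator):

```python
import math

def power_rule_coeff(k, alpha):
    """Coefficient Gamma(k+1)/Gamma(k+1-alpha) from the power rule for
    D^alpha x^k.  math.gamma raises ValueError exactly at its poles
    0, -1, -2, ...; since 1/Gamma tends to 0 there, we return 0.0 by
    continuity."""
    try:
        return math.gamma(k + 1) / math.gamma(k + 1 - alpha)
    except ValueError:
        return 0.0

# alpha = 2 on x^1: the pole of Gamma(1 - alpha) encodes D^2 x = 0
c = power_rule_coeff(1, 2)
```

So the "problem" points of the formula are exactly the integer orders alpha where the answer is already known from ordinary calculus, and the limiting value agrees with it.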

## Generalizations section

I removed the recently-added "Generalizations" section for the following reasons:

• The so-called "Udita fractional operator" is not referred to in any independent reliable secondary sources. This is a non-negotiable requirement for inclusion in an encyclopedia. Our mathematics articles aim to cover things that can be found in secondary sources (like textbooks), not cutting-edge research like recent PhD dissertations, or even recent journal articles that have not gained widespread acceptance. In particular, the term "Udita fractional operator" does not seem to appear anywhere outside Wikipedia, and this is also against our policies.
• The Erdélyi–Kober operator does not seem to be a generalization of any of the operators mentioned in the article, which each accepts as an input a function f. But there is no f in that section. If we are going to mention it here, then it only makes sense to say in what way this generalizes those operators.

--Sławomir Biały (talk) 18:58, 29 July 2012 (UTC)

## H. Vic Dannon in "References"

An anonymous user removed this reference and was reverted, for not having given a reason. Actually he did give a reason: "Dannon is an amateur mathematician who makes wild mistakes". I don't know if one can express things in this way, but I had a look and I can add this: H. Vic Dannon is not known to Math. Reviews, and the journal "Gauge Institute Journal" is not referenced in Math. Reviews. Apparently more than half of the papers there are written by Vic Dannon. I have serious doubts about the notability of this reference. Bdmy (talk) 22:52, 20 June 2013 (UTC) Bdmy (talk) 22:53, 20 June 2013 (UTC)

## Interpretation of non-integer derivatives

I'm a physicist and there are instances in physics where taking "the square root of a differential operator" comes in handy, for example the Klein-Gordon operator is second order in time while we want it to be first order to get unitary time evolution. Anyhow, I know the interpretation of a derivative of a function is the slope of the tangent at that point, but what is the natural interpretation of a fractional derivative? Is it a kind of deformation of the function? Is this deformation continuous or differentiable in itself? 94.211.47.103 (talk) 11:17, 11 November 2013 (UTC)