# Talk:Logarithm


Logarithm is a featured article; it (or a previous version of it) has been identified as one of the best articles produced by the Wikipedia community. Even so, if you can update or improve it, please do so.
This article appeared on Wikipedia's Main Page as Today's featured article on June 5, 2011.
Article milestones

| Date | Process | Result |
| --- | --- | --- |
| December 27, 2010 | Good article nominee | Not listed |
| January 20, 2011 | Good article nominee | Listed |
| February 22, 2011 | Peer review | Reviewed |
| June 1, 2011 | Featured article candidate | Promoted |

Current status: Featured article

## Need a section on units?

There seems to be a fair amount of controversy around the web about whether you can take the logarithm of a number that has units. A web search for a query like "units in logarithms" returns many pages that all seem to be wrong. In general, the claim is basically that you cannot take the logarithm of a number with units, period. For example at .

This seems incorrect to me. If the log is considered the integral of dx/x, then the units of the integral are the units of that quantity. An integral is a sum with the units of the individual terms. In the case of dx/x the result is always unitless, since dx and x have the same units, so the ratio is always unitless. So log10(100 meters) is 2 and no rules are broken. That also means, though, that the log is a lossy transform, since you cannot recover the unit by exponentiation; e.g., 10^(log10(100 meters)) = 100, not 100 meters.
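The lossiness claimed above can be sketched numerically (a minimal illustration, not from the article; treating only the numeric part of "100 meters"):

```python
import math

# log10 of the numeric part of a quantity with units:
# per the argument above, log10(100 meters) -> 2.
x = 100.0  # numeric value of "100 meters"
log_x = math.log10(x)

# Exponentiating recovers the number but not the unit ("lossy"):
recovered = 10 ** log_x
print(log_x, recovered)  # the unit "meters" is gone
```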

There are many examples in chemistry, physics, and engineering where the logarithm is taken of a quantity with units; for example, in the relation ΔG = −RT ln(K), the equilibrium constant K may have units.

Some claim that there is an implied de-dimensionalization when taking a log. Using the above example, people claim that what is actually being done is log10(100 m / 1 m). This seems to be incorrect and unneeded. Calculus works just fine with units, and the log function is no exception. It is lossy, but many mathematical transforms are lossy (e.g., sqrt(x^2) != x for all x).

Jsluka (talk) 18:47, 10 July 2020 (UTC)

You can meaningfully take a difference of two logarithms of quantities with the same units, just as you can meaningfully take a ratio of quantities with the same units and get a dimensionless result. —David Eppstein (talk) 20:30, 10 July 2020 (UTC)

Jsluka (talk) 21:58, 22 July 2020 (UTC) Exactly, so by definition you can take the logarithm of a quantity that includes units and you (1) aren't breaking any rules and (2) get a unitless result.

NO. You can meaningfully take a DIFFERENCE of logarithms of two quantities that are measured in the same units. The logarithms themselves are only defined up to an additive constant (just like indefinite integrals are defined only up to an additive constant). —David Eppstein (talk) 22:22, 22 July 2020 (UTC)
Agree, sort of. You can take the log of numbers that have units, and those units affect the result, which is nevertheless unitless, being the ratio of the number to the unit (that is, think of the argument as a unitless ratio, with the unit as the reference). The differences of such things (where the original numbers have the same units) are indifferent to the units. In any case, the logs themselves are unitless, unless you adopt the position that natural log has unit 1 and other logs have different units, which is also a valid viewpoint. Dicklyon (talk) 02:20, 23 July 2020 (UTC)
If you say that a certain distance is 10, or that a certain volume is 100, the result is meaningless unless you multiply it by its unit. If you say that a (decimal) log of a certain distance is 1, or that the log of a certain volume is 2, then it's equally meaningless unless you add it to the log of its unit. So you could reasonably say that log10(100 liters) is 2 + log(liter). When you subtract, the logs of units cancel, just like when you divide numbers with the same multiplicative units the units cancel. —David Eppstein (talk) 06:07, 23 July 2020 (UTC)
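The "2 + log(liter)" bookkeeping above can be sketched with a hypothetical pair representation (helper names are illustrative, not from any source): the symbolic log(unit) term is carried along and cancels on subtraction.

```python
import math

# Represent log10(v * unit) as (log10(v), unit),
# i.e. "2 + log(liter)" is stored as (2.0, "liter").
def log10_with_unit(value, unit):
    return (math.log10(value), unit)

def log_difference(a, b):
    # Subtracting logs of quantities in the same unit cancels the
    # symbolic log(unit) term, leaving a plain number.
    (la, ua), (lb, ub) = a, b
    if ua != ub:
        raise ValueError("units differ; log(unit) terms do not cancel")
    return la - lb

a = log10_with_unit(100.0, "liter")  # 2 + log(liter)
b = log10_with_unit(10.0, "liter")   # 1 + log(liter)
print(log_difference(a, b))          # unit-free difference
```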
A prime example, maybe worth including, is dBm, the absolute power level referenced to one milliwatt. The difference between 30 dBm and 40 dBm is 10 dB, a dimensionless ratio. —agr (talk) 07:40, 23 July 2020 (UTC)
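The dBm example above can be sketched as follows (a minimal illustration; the logarithm's argument is the dimensionless ratio P / 1 mW):

```python
import math

# dBm: absolute power level referenced to one milliwatt.
def dbm(power_mw):
    return 10.0 * math.log10(power_mw / 1.0)  # reference: 1 mW

p30 = dbm(1000.0)    # 1 W  -> 30 dBm
p40 = dbm(10000.0)   # 10 W -> 40 dBm
print(p40 - p30)     # difference in dB, a dimensionless ratio
```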
I figured out the idea of additive log units in high school, but was later convinced that we are not supposed to use them. Equations are supposed to be arranged such that only the log of an appropriately unitless combination is used. Arrhenius is an interesting example. While in theory k has units, in practice it is considered arbitrary. In theory the slope depends on an activation energy, but the only measurement of that energy is the Arrhenius plot itself. For log graph paper, the graph value is the log of the desired quantity divided by the value on the graph axis tic mark. One thing I always found interesting is that radioactive decay uses a base-2 log, unlike just about everything else. For most things, actual measurable quantities go into the exponential. A capacitor discharges as exp(-t/RC), where we can measure R and C. But for radioactive decay, there is unmeasurable physics inside the nucleus, such that we only measure the decay time. But it is really easy here. You need to find a WP:RS if you want to add it. Gah4 (talk) 09:00, 23 July 2020 (UTC)
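The base-2 versus natural-base contrast above can be sketched side by side (a minimal illustration with made-up numbers; both forms are equivalent up to a factor of ln 2 in the exponent):

```python
import math

# Radioactive decay is conventionally written with a base-2 form
# (half-life T), while an RC circuit uses the natural base:
#   N(t) = N0 * 2**(-t / T)    equivalently  N0 * exp(-t * ln(2) / T)
#   V(t) = V0 * exp(-t / (R * C))
def decay(n0, t, half_life):
    return n0 * 2.0 ** (-t / half_life)

def rc_discharge(v0, t, r, c):
    return v0 * math.exp(-t / (r * c))

print(decay(1000.0, 2.0, 1.0))            # two half-lives: a quarter remains
print(rc_discharge(5.0, 1.0, 1e3, 1e-3))  # one time constant: factor e^-1
```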

## Original research?

Dear Wikipedians, as part of the development of the Vietnamese version, I've read this article carefully and now I'm afraid that there is some original research, as follows:

1. There are also some other integral representations of the logarithm that are useful in some situations ... with two integral representations of ln(x) --> not sourced
2. The Taylor series of ln(z) provides a particularly useful approximation ... less than 5% off the correct value 0.0953. --> not sourced
3. A closely related method can be used to compute the logarithm of integers ... to the end of the section --> not sourced
4. The non-negative reals not only have a multiplication, but also have addition, and form a semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring. --> not sourced

These were not presented in the 2011 featured version. Another problem: at least for me, the "Motivation and definition" section reads more like a textbook than an encyclopedic article. Are these serious issues? I'm tagging Jakob.scholbach, the nominator for FA back in 2011, and also the WikiProject Mathematics for rapid response. (Note that I'm not a contributor to this article.) Thuyhung2112 (talk) 15:25, 10 October 2020 (UTC)

This may or may not help your question, but in many scientific articles WP:CALC applies more than WP:OR. You might also read WP:SYNTHNOT. For example, many mathematical operations are so common that many textbooks will have them, and so there is no need to cite an actual reference. All this is not necessarily agreeing or disagreeing with what you say, but to keep discussion going. Gah4 (talk) 22:07, 10 October 2020 (UTC)
I think calling these passages original research is a bit exaggerated. Some of these sentences could use an additional reference (or in some cases I would maybe merge them somehow differently into the surrounding text), but OR is a bit of a long shot here. Since 2011, when I nominated this article, it has in a few places mildly deteriorated (IMO), and some of the spots you mention belong to the additions that I personally consider not always helpful. But in any case this is not OR proper, IMO. Jakob.scholbach (talk) 12:30, 11 October 2020 (UTC)
Some of this was also discussed at the ref desk. This is starting to run afoul of WP:TALKFORK, but to summarize, I'm inclined to remove the integrals: they're just two specific cases of a more general result, any of which involves a logarithm. There are lots of other integrals where logarithms pop out unexpectedly. The claim that these are "useful" is dubious, and this much at least does need to be clarified and sourced. It's also poorly explained, but unless someone has any insights into this, I think it would be better to just axe it. –Deacon Vorbis (carbon • videos) 13:26, 11 October 2020 (UTC)

## easy

The article says that logarithms are less easy to compute than roots. Considering that the common way to compute roots, especially non-integer roots, is with logarithms, that isn't so obvious. Should we be judging what is easy and what isn't? Gah4 (talk) 22:13, 10 October 2020 (UTC)

I cannot find in the article any assertion comparing the difficulty of computing logarithms and roots. What I see in the lead is the assertion that before the invention of computers, logarithm tables made scientific computation easier than before. This is blatantly true. Nevertheless, the article contains too many occurrences of "easy" or "easily", per MOS:EDITORIAL. D.Lazard (talk) 09:45, 11 October 2020 (UTC)

## Sources for "Feynman's algorithm"

This entire section seems to be based on a single, vague, two-paragraph description in Physics Today (here), with the details being fleshed out through original research. Physics Today may be a reliable source, but the article in question is obviously meant for a popular audience, and so is extremely lacking in detail, and does not cite any sources itself from which further detail might be found. If someone can find an academic paper, textbook, or reference book that describes this algorithm, then I'd be convinced that it belongs in Wikipedia, but as it stands I find its inclusion highly dubious at best. The section title might be taken to imply that the method is known as "Feynman's algorithm" in the literature, but I can find no evidence that anyone aside from the editor who added this section called it that. I have no doubt that the method is, in fact, valid, but this alone does not merit its inclusion, since much of the detail seems to be the result of original research. --RDBury (talk) 03:11, 15 October 2020 (UTC)

It seems that this is the first attempt (or one of the first attempts) at computing logarithms on a computer, and that this is its only interest, as it seems to be not especially efficient. So I suggest reducing this paragraph to a single line and moving it to the history section. D.Lazard (talk) 03:53, 15 October 2020 (UTC)
@D.Lazard: The algorithm, according to the PT article, was from when Feynman was still at Los Alamos, at which time "computer" meant a person working the computations by hand. The fact that the algorithm itself seems to be optimized for a binary (electronic) computer would seem to contradict this, though. The algorithm does seem efficient, since it eliminates multiplication and division from the computation, but whether more efficient methods are used by modern processors I can't say. There's a trade-off between look-up table size and number of computations, so it's hard to say what 'efficiency' actually means here. For the history section I think we have a similar issue: is there a reference which shows the historical significance of the algorithm? --RDBury (talk) 04:16, 19 October 2020 (UTC)
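For context, a shift-and-add method of the kind discussed above can be sketched as follows. This is a hedged reconstruction of the general technique (approximate x in [1, 2) as a product of factors 1 + 2^-k and sum precomputed table entries log(1 + 2^-k)), not a sourced statement of Feynman's actual procedure; all names are illustrative.

```python
import math

# Precomputed table: log(1 + 2**-k). On binary hardware each trial
# factor needs only a shift, an add, and a comparison.
TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 40)]

def shift_add_log(x):
    """Approximate log(x) for 1 <= x < 2 without multiplication
    or division in the inner loop (the 2**-k is a bit shift)."""
    assert 1.0 <= x < 2.0
    result = 0.0
    y = 1.0  # running product of accepted factors (1 + 2**-k)
    for k, log_factor in enumerate(TABLE, start=1):
        candidate = y + y * 2.0 ** -k  # y * (1 + 2**-k)
        if candidate <= x:
            y = candidate
            result += log_factor
    return result

print(shift_add_log(1.5), math.log(1.5))  # should agree closely
```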