Talk:Dynamic programming

From Wikipedia, the free encyclopedia

Confusion

The first paragraph of the second part, and the article on algorithms, both state that dynamic programming is a bottom-up approach, but later this article says dynamic programming may use either bottom-up or top-down approaches. --zeno 11:46, 9 Feb 2004 (UTC)

I've revised that section. It was very vague and somewhat incorrect. I hope there's less confusion now. Derrick Coetzee 20:18, 5 Oct 2004 (UTC)
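
For concreteness, the two approaches can be contrasted on Fibonacci. Below is a Haskell sketch (my own illustration, not from the article; the names fibTopDown and fibBottomUp are mine): the top-down version recurses from n and caches subresults, while the bottom-up version builds up from the base cases.

-- Top-down: recurse from n, with subresults cached in a shared lazy list.
fibTopDown :: Int -> Integer
fibTopDown n = memo !! n
  where
    memo = map f [0 ..]
    f 0 = 0
    f 1 = 1
    f k = memo !! (k - 1) + memo !! (k - 2)

-- Bottom-up: iterate upward from the base cases 0 and 1.
fibBottomUp :: Int -> Integer
fibBottomUp n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

Both compute the same table of subproblems; they differ only in whether the table is filled on demand (top-down) or in order (bottom-up).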

Weirdness

The page says "i dont know", in exactly those words, which is not only weirdly misplaced but also improperly formatted. Someone should check.

Misplaced section

The part following "The steps for using dynamic program goes as follows:" is out of place. Out of nowhere the article starts using terms like 'gap penalties' and 'sequence alignment'. It seems this was copied from some article on a specific application, such as protein sequencing. Anon, Wed Jul 11 14:11:43 EDT 2007

Too many examples

This is not a textbook. One good example would suffice. Even worse, many of them are just too long (and boring). Put them in wikibooks. Unfortunately, any attempt to fix this article would be blocked as "vandalism". Way to go, Wikipedia!

Why ALGOL

Hello, in

function fib(n)

   if n <= 1 return n
   return fib(n − 1) + fib(n − 2)

I doubt that the return statement would be reached when the if is false, so maybe you should make the condition "if n larger than or equal to 1"? That would also have the nice side effect that the Fibonacci numbers would stay larger than -1, just as the original series stays larger than +1, which I guess is what you intended...
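
For reference, a direct Haskell transcription of the pseudocode (my sketch, not from the article) makes the control flow explicit: n <= 1 is the base case, returning fib 0 = 0 and fib 1 = 1, and the recursive branch handles n >= 2.

fib :: Integer -> Integer
fib n = if n <= 1 then n else fib (n - 1) + fib (n - 2)

With this definition fib 0 == 0, fib 1 == 1, and fib 10 == 55, matching the usual series 0, 1, 1, 2, 3, 5, ...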

Call-by-need is not memoization

The article says: "Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need)."

This is technically correct but quite misleadingly worded. In call-by-need, the value at a particular call site may be (but need not be) memoized. If I call the same function with the same arguments but from a different call site, the result will be computed again. That is, rather than a memo table, call-by-need at most provides a cached value for a particular function call.

For example, in Haskell, a "lazy" language with call-by-need, we can define factorial recursively as

fact n = if n == 0 then 1 else n * fact (n - 1)

If we then call it from a single call site

sum (map (\_ -> fact 5) [1..3])

the call to fact 5 and its child calls will each be evaluated but once, even though the lambda appears to be evaluated three times.

However, in

fact 5 + fact 5 + fact 5

each call to fact 5 and children will be evaluated separately: no memoization will occur.

This distinction makes call-by-need useless for memoization when decomposing a typical recursion: the subproblems are all called from separate call sites, and thus no memoization will occur.
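
To make the contrast concrete, here is one way (my sketch, not from the article) to get genuine memoization in Haskell: define a single table at the top level, so every call site indexes the same shared structure instead of relying on per-site call-by-need caching.

-- One lazy table, defined once, shared by every call site.
factTable :: [Integer]
factTable = map f [0 ..]
  where
    f 0 = 1
    f k = fromIntegral k * factTable !! (k - 1)

memoFact :: Int -> Integer
memoFact n = factTable !! n

Now memoFact 5 + memoFact 5 + memoFact 5 performs the factorial computation only once, because all three call sites force entries of the same shared factTable.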

The article also claims: "Some languages make it possible portably (e.g. Scheme, Common Lisp, Perl or D)."

Citations needed, at the least. Futures / promises are not a memoization mechanism, but a manual implementation of the call-by-need caching described above. In general Scheme and Perl, at least, are strict languages as far as I am aware. Bart Massey (talk) 20:03, 19 February 2025 (UTC)
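
To illustrate that distinction, here is a rough Haskell sketch of a Scheme-style promise (my illustration; the names Promise, delay and force are mine, echoing Scheme's delay/force). Each promise caches its own result, but two promises built from the same computation share nothing, which is exactly per-call-site caching rather than memoization.

import Data.IORef

-- A promise is either a pending computation or its cached result.
newtype Promise a = Promise (IORef (Either (IO a) a))

delay :: IO a -> IO (Promise a)
delay act = Promise <$> newIORef (Left act)

-- force runs the computation the first time and caches the value;
-- forcing the same promise again returns the cache. A fresh promise
-- built from the same computation knows nothing about that cache.
force :: Promise a -> IO a
force (Promise ref) = do
  st <- readIORef ref
  case st of
    Right v  -> return v
    Left act -> do
      v <- act
      writeIORef ref (Right v)
      return v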