Congruum

From Wikipedia, the free encyclopedia

The two right triangles with leg and hypotenuse (7,13) and (13,17) have equal third sides of length √120. The square of this side, 120, is a congruum: it is the difference between consecutive values in the arithmetic progression of squares 7², 13², 17². Equivalently, the two annuli between the three yellow circles have equal areas, π times the congruum.

In number theory, a congruum (plural congrua) is the difference between successive square numbers in an arithmetic progression of three squares. That is, if x², y², and z² (for integers x, y, and z) are three square numbers that are equally spaced apart from each other, then the spacing between them, z² − y² = y² − x², is called a congruum.

The congruum problem is the problem of finding squares in arithmetic progression and their associated congrua.[1] It can be formalized as a Diophantine equation: find integers x, y, and z such that

y² − x² = z² − y².

When this equation is satisfied, both sides of the equation equal the congruum.
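As a quick illustration (a brute-force sketch, not part of the original text; the function name is mine), small solutions of this equation can be enumerated directly:

```python
import math

def progressions(limit):
    """Enumerate integer solutions (x, y, z) of y^2 - x^2 = z^2 - y^2
    with 0 < x < y <= limit, returning the congruum d alongside each."""
    solutions = []
    for y in range(2, limit + 1):
        for x in range(1, y):
            d = y * y - x * x          # candidate congruum
            z = math.isqrt(y * y + d)
            if z * z == y * y + d:     # z^2 = y^2 + d completes the progression
                solutions.append((x, y, z, d))
    return solutions
```

For example, `progressions(5)` finds (1, 5, 7, 24): the progression 1, 25, 49 with congruum 24, the smallest possible.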

Fibonacci solved the congruum problem by finding a parameterized formula for generating all congrua, together with their associated arithmetic progressions. According to this formula, each congruum is four times the area of a Pythagorean triangle. Congrua are also closely connected with congruent numbers: every congruum is a congruent number, and every congruent number is a congruum multiplied by the square of a rational number.


Examples

As an example, the number 96 is a congruum because it is the difference between adjacent squares in the sequence 4, 100, and 196 (the squares of 2, 10, and 14 respectively).

The first few congrua are:

24, 96, 120, 216, 240, 336, 384, 480, 600, 720 … (sequence A256418 in the OEIS).
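This list can be reproduced by a brute-force search (a sketch under the definition above; the search bound follows because the spacing y² − x² is at least y² − (y − 1)² = 2y − 1):

```python
import math

def congrua_up_to(limit):
    """All congrua <= limit: spacings d = y^2 - x^2 such that y^2 + d
    is also a perfect square."""
    found = set()
    for y in range(2, (limit + 1) // 2 + 1):   # smallest spacing is 2y - 1
        for x in range(y - 1, 0, -1):          # d grows as x shrinks
            d = y * y - x * x
            if d > limit:
                break
            r = math.isqrt(y * y + d)
            if r * r == y * y + d:
                found.add(d)
    return sorted(found)
```

Here `congrua_up_to(720)` returns exactly the ten values listed above.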

History

The congruum problem was originally posed in 1225, as part of a mathematical tournament held by Frederick II, Holy Roman Emperor, and answered correctly at that time by Fibonacci, who recorded his work on this problem in his Book of Squares.[2]

Fibonacci was already aware that it is impossible for a congruum to itself be a square, but did not give a satisfactory proof of this fact.[3] Geometrically, this means that it is not possible for the pair of legs of a Pythagorean triangle to be the leg and hypotenuse of another Pythagorean triangle. A proof was eventually given by Pierre de Fermat, and the result is now known as Fermat's right triangle theorem. Fermat also conjectured, and Leonhard Euler proved, that there is no sequence of four squares in arithmetic progression.[4][5]

Parameterized solution

The congruum problem may be solved by choosing two distinct positive integers m and n (with m > n); then the number 4mn(m² − n²) is a congruum. The middle square of the associated arithmetic progression of squares is (m² + n²)², and the other two squares may be found by adding or subtracting the congruum. Additionally, multiplying a congruum by a square number produces another congruum, whose progression of squares is multiplied by the same factor. All solutions arise in one of these two ways.[1] For instance, the congruum 96 can be constructed by these formulas with m = 3 and n = 1, while the congruum 216 is obtained by multiplying the smaller congruum 24 by the square number 9.
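Fibonacci's parameterization can be sketched in a few lines (the function names are mine):

```python
def congruum(m, n):
    """Fibonacci's formula: 4mn(m^2 - n^2) for integers m > n > 0."""
    assert m > n > 0
    return 4 * m * n * (m * m - n * n)

def progression(m, n):
    """The three squares in arithmetic progression spaced by congruum(m, n);
    the middle square is (m^2 + n^2)^2."""
    c = congruum(m, n)
    mid = (m * m + n * n) ** 2
    return (mid - c, mid, mid + c)
```

With m = 3 and n = 1 this gives the congruum 96 and the progression 4, 100, 196; multiplying congruum(2, 1) = 24 by the square number 9 gives 216.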

An equivalent formulation of this solution, given by Bernard Frénicle de Bessy, is that for the three squares in arithmetic progression x², y², and z², the middle number y is the hypotenuse of a Pythagorean triangle and the other two numbers x and z are the difference and sum respectively of the triangle's two legs.[6] The congruum itself is four times the area of the same Pythagorean triangle. The example of an arithmetic progression with the congruum 96 can be obtained in this way from a right triangle with side and hypotenuse lengths 6, 8, and 10.
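Frénicle's construction translates directly into code (a sketch; the function name is mine):

```python
def progression_from_triangle(a, b, c):
    """From a Pythagorean triangle with legs a, b and hypotenuse c, build the
    progression (b - a)^2, c^2, (a + b)^2; its spacing 2ab is four times
    the triangle's area ab/2."""
    assert a * a + b * b == c * c
    congruum = 2 * a * b
    return ((b - a) ** 2, c * c, (a + b) ** 2), congruum
```

The 6-8-10 triangle yields the progression 4, 100, 196 with congruum 96, as in the example above.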

Relation to congruent numbers

A congruent number is defined as the area of a right triangle with rational sides. Because every congruum can be obtained (using the parameterized solution) as the area of a Pythagorean triangle, it follows that every congruum is congruent. Conversely, every congruent number is a congruum multiplied by the square of a rational number.[7] However, testing whether a number is a congruum is much easier than testing whether a number is congruent. For the congruum problem, the parameterized solution reduces this testing problem to checking a finite set of parameter values. In contrast, for the congruent number problem, a finite testing procedure is known only conjecturally, via Tunnell's theorem, under the assumption that the Birch and Swinnerton-Dyer conjecture is true.[8]
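The finite test for being a congruum can be sketched as follows (the function name is mine): if n = y² − x², then (y − x)(y + x) = n, so only the divisor pairs of n need to be examined.

```python
import math

def is_congruum(n):
    """Test whether n is a congruum: n = y^2 - x^2 with y^2 + n also square."""
    for d in range(1, math.isqrt(n) + 1):
        if n % d:
            continue
        e = n // d                  # d = y - x, e = y + x
        if (d + e) % 2:
            continue                # y and x must both be integers
        y, x = (d + e) // 2, (e - d) // 2
        if x > 0 and math.isqrt(y * y + n) ** 2 == y * y + n:
            return True
    return False
```

Here `is_congruum(96)` and `is_congruum(24)` return True, while `is_congruum(25)` returns False, consistent with Fermat's result that no congruum is a square.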

See also

  • Automedian triangle, a triangle for which the squares on the three sides form an arithmetic progression
  • Spiral of Theodorus, formed by right triangles whose (non-integer) sides, when squared, form an infinite arithmetic progression

References

  1. ^ a b Darling, David (2004), The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes, John Wiley & Sons, p. 77, ISBN 978-0-471-66700-1.
  2. ^ Bradley, Michael John (2006), The Birth of Mathematics: Ancient Times to 1300, Infobase Publishing, p. 124, ISBN 978-0-8160-5423-7.
  3. ^ Ore, Øystein (2012), Number Theory and Its History, Courier Dover Corporation, pp. 202–203, ISBN 978-0-486-13643-1.
  4. ^ Erickson, Martin J. (2011), Beautiful Mathematics, MAA Spectrum, Mathematical Association of America, pp. 94–95, ISBN 978-0-88385-576-8.
  5. ^ Euler's proof is not clearly written. An elementary proof is given in Brown, Kevin, "No Four Squares In Arithmetic Progression", MathPages, retrieved 2014-12-06.
  6. ^ Beiler, Albert H. (1964), Recreations in the Theory of Numbers: The Queen of Mathematics Entertains, Courier Corporation, p. 153, ISBN 978-0-486-21096-4.
  7. ^ Conrad, Keith (Fall 2008), "The congruent number problem" (PDF), Harvard College Mathematical Review, 2 (2): 58–73, archived from the original (PDF) on 2013-01-20.
  8. ^ Koblitz, Neal (1984), Introduction to Elliptic Curves and Modular Forms, Graduate Texts in Mathematics, no. 97, Springer-Verlag, ISBN 0-387-97966-2.

This page was last edited on 10 December 2022, at 22:49
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc.