Sunday, February 16, 2014

Square Pyramidal Number


A figurate number of the form
    P_n^(4) = n(n+1)(2n+1)/6,    (1)
corresponding to a configuration of points which form a square pyramid, is called a square pyramidal number (or sometimes, simply a pyramidal number). The first few are 1, 5, 14, 30, 55, 91, 140, 204, ... (Sloane's A000330). The generating function for square pyramidal numbers is
    x(x+1)/(x-1)^4 = x + 5x^2 + 14x^3 + 30x^4 + ....    (2)
The square pyramidal numbers are sums of consecutive pairs of tetrahedral numbers and satisfy
    P_n = (1/3)(2n+1) T_n,    (3)
where T_n is the nth triangular number.
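As a quick sanity check, here is a short Python sketch (function names are my own) that verifies the closed form (1) against the direct sum 1^2 + 2^2 + ... + n^2 and against identity (3):

```python
def square_pyramidal(n):
    """Closed form P_n = n(n+1)(2n+1)/6, equation (1)."""
    return n * (n + 1) * (2 * n + 1) // 6

def triangular(n):
    """Triangular number T_n = n(n+1)/2."""
    return n * (n + 1) // 2

for n in range(1, 9):
    # P_n is the number of points in a square pyramid: sum of the first n squares.
    assert square_pyramidal(n) == sum(k * k for k in range(1, n + 1))
    # Identity (3): 3 P_n = (2n+1) T_n.
    assert 3 * square_pyramidal(n) == (2 * n + 1) * triangular(n)

print([square_pyramidal(n) for n in range(1, 9)])
# → [1, 5, 14, 30, 55, 91, 140, 204]
```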
The only numbers which are simultaneously square S_m = m^2 and square pyramidal P_n = n(n+1)(2n+1)/6 (the cannonball problem) are P_1 = 1 and P_24 = 4900, corresponding to S_1 = 1 and S_70 = 4900 (Ball and Coxeter 1987, p. 59; Ogilvy 1988; Dickson 2005, p. 25), as conjectured by Lucas (1875), partially proved by Moret-Blanc (1876) and Lucas (1877), and proved by Watson (1918). The problem requires solving the Diophantine equation
    m^2 = n(n+1)(2n+1)/6    (4)
(Guy 1994, p. 147). Watson (1918) gave an almost elementary proof, disposing of most cases by elementary means, but resorting to the use of elliptic functions for one pesky case. Entirely elementary proofs have been given by Ma (1985) and Anglin (1990).
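A brute-force search makes the cannonball result easy to check numerically up to a bound (the helper name is my own): enumerate P_n and test each for being a perfect square, per equation (4).

```python
import math

def square_pyramidal(n):
    # P_n = n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

# Collect all (n, m) with m^2 = P_n for n up to a search bound.
hits = []
for n in range(1, 10001):
    p = square_pyramidal(n)
    r = math.isqrt(p)
    if r * r == p:
        hits.append((n, r))

print(hits)  # → [(1, 1), (24, 70)]
```

Of course the search only confirms there are no further solutions below the bound; ruling them out entirely is exactly what Watson's proof accomplishes.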
Numbers which are simultaneously triangular T_m=m(m+1)/2 and square pyramidal P_n=n(n+1)(2n+1)/6 satisfy the Diophantine equation
    m(m+1)/2 = n(n+1)(2n+1)/6.    (5)
Completing the square gives
    (1/2)(m + 1/2)^2 - 1/8 = (1/6)(2n^3 + 3n^2 + n)    (6)
    (1/8)(2m + 1)^2 = (1/6)(2n^3 + 3n^2 + n) + 1/8    (7)
    3(2m + 1)^2 = 8n^3 + 12n^2 + 4n + 3.    (8)
The only solutions are (n, m) = (-1, 0), (0, 0), (1, 1), (5, 10), (6, 13), and (85, 645) (Guy 1994, p. 147), corresponding to the nontrivial triangular square pyramidal numbers 1, 55, 91, and 208335.
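These solutions can also be confirmed by enumeration — a sketch (names are my own) using the fact that x is triangular exactly when 8x + 1 is a perfect square:

```python
import math

def is_triangular(x):
    # x = m(m+1)/2  <=>  8x + 1 is a perfect square, with m = (sqrt(8x+1) - 1)/2.
    s = math.isqrt(8 * x + 1)
    return s * s == 8 * x + 1

hits = []
for n in range(1, 1001):
    p = n * (n + 1) * (2 * n + 1) // 6   # square pyramidal P_n
    if is_triangular(p):
        m = (math.isqrt(8 * p + 1) - 1) // 2
        hits.append((n, m, p))

print(hits)
# → [(1, 1, 1), (5, 10, 55), (6, 13, 91), (85, 645, 208335)]
```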
Numbers which are simultaneously tetrahedral Te_m=m(m+1)(m+2)/6 and square pyramidal P_n=n(n+1)(2n+1)/6 satisfy the Diophantine equation
    m(m+1)(m+2) = n(n+1)(2n+1).    (9)
Beukers (1988) has studied the problem of finding solutions via integral points on an elliptic curve and found that the only solution is the trivial Te_1=P_1=1.

Ref: mathworld.wolfram.com

Friday, February 7, 2014

Know The Complexities

Different algorithms and data structures have different time and space complexities. The tables below group the common ones by category.


Searching

Algorithm                        Data Structure                       Time (Average)           Time (Worst)             Space (Worst)
Depth First Search (DFS)         Graph of |V| vertices, |E| edges     -                        O(|E| + |V|)             O(|V|)
Breadth First Search (BFS)       Graph of |V| vertices, |E| edges     -                        O(|E| + |V|)             O(|V|)
Binary Search                    Sorted array of n elements           O(log(n))                O(log(n))                O(1)
Linear Search (Brute Force)      Array                                O(n)                     O(n)                     O(1)
Dijkstra (min-heap as PQ)        Graph of |V| vertices, |E| edges     O((|V| + |E|) log |V|)   O((|V| + |E|) log |V|)   O(|V|)
Dijkstra (unsorted array as PQ)  Graph of |V| vertices, |E| edges     O(|V|^2)                 O(|V|^2)                 O(|V|)
Bellman-Ford                     Graph of |V| vertices, |E| edges     O(|V| |E|)               O(|V| |E|)               O(|V|)
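The heap-based Dijkstra row can be sketched with Python's standard-library heapq as the priority queue; the graph representation and names below are my own choices:

```python
import heapq

def dijkstra(graph, source):
    """Shortest path distances from source; graph maps node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]          # min-heap of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue              # stale entry left over from an earlier relaxation
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2), ('d', 5)], 'c': [('d', 1)], 'd': []}
print(dijkstra(g, 'a'))  # → {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

Each edge relaxation may push onto the heap and each pop costs O(log |V|) (up to duplicates), which is where the O((|V| + |E|) log |V|) bound comes from.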

Sorting

Algorithm       Data Structure   Time (Best)    Time (Average)   Time (Worst)   Worst-Case Auxiliary Space
Quicksort       Array            O(n log(n))    O(n log(n))      O(n^2)         O(n)
Mergesort       Array            O(n log(n))    O(n log(n))      O(n log(n))    O(n)
Heapsort        Array            O(n log(n))    O(n log(n))      O(n log(n))    O(1)
Bubble Sort     Array            O(n)           O(n^2)           O(n^2)         O(1)
Insertion Sort  Array            O(n)           O(n^2)           O(n^2)         O(1)
Selection Sort  Array            O(n^2)         O(n^2)           O(n^2)         O(1)
Bucket Sort     Array            O(n + k)       O(n + k)         O(n^2)         O(nk)
Radix Sort      Array            O(nk)          O(nk)            O(nk)          O(n + k)
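As one concrete example from this table, here is a short mergesort sketch: O(n log n) in every case, at the cost of O(n) auxiliary space for the merge buffers.

```python
def mergesort(a):
    """Sort a list by recursively splitting and merging — O(n log n) time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(mergesort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

Unlike quicksort, the split point never depends on the data, which is why mergesort has no O(n^2) degenerate case.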

Data Structures

Data Structure      Average Time                                Worst Time                                  Space (Worst)
                    Index     Search    Insert    Delete        Index     Search    Insert    Delete
Basic Array         O(1)      O(n)      -         -             O(1)      O(n)      -         -             O(n)
Dynamic Array       O(1)      O(n)      O(n)      O(n)          O(1)      O(n)      O(n)      O(n)          O(n)
Singly-Linked List  O(n)      O(n)      O(1)      O(1)          O(n)      O(n)      O(1)      O(1)          O(n)
Doubly-Linked List  O(n)      O(n)      O(1)      O(1)          O(n)      O(n)      O(1)      O(1)          O(n)
Skip List           O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(n)      O(n)      O(n)      O(n)          O(n log(n))
Hash Table          -         O(1)      O(1)      O(1)          -         O(n)      O(n)      O(n)          O(n)
Binary Search Tree  O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(n)      O(n)      O(n)      O(n)          O(n)
Cartesian Tree      -         O(log(n)) O(log(n)) O(log(n))     -         O(n)      O(n)      O(n)          O(n)
B-Tree              O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(n)
Red-Black Tree      O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(n)
Splay Tree          -         O(log(n)) O(log(n)) O(log(n))     -         O(log(n)) O(log(n)) O(log(n))     O(n)
AVL Tree            O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(log(n)) O(log(n)) O(log(n)) O(log(n))     O(n)
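To illustrate the binary search tree row, here is a minimal (unbalanced) BST sketch: average O(log n) search and insert, degrading to O(n) when keys arrive in sorted order and the tree degenerates into a linked list. Balanced variants (Red-Black, AVL) avoid that worst case.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key, returning the (possibly new) root. No rebalancing."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Walk down the tree; each step discards one subtree."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))  # → True False
```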

Heaps

Heap                    Heapify   Find Max   Extract Max   Increase Key   Insert      Delete      Merge
Linked List (sorted)    -         O(1)       O(1)          O(n)           O(n)        O(1)        O(m + n)
Linked List (unsorted)  -         O(n)       O(n)          O(1)           O(1)        O(1)        O(1)
Binary Heap             O(n)      O(1)       O(log(n))     O(log(n))      O(log(n))   O(log(n))   O(m + n)
Binomial Heap           -         O(log(n))  O(log(n))     O(log(n))      O(log(n))   O(log(n))   O(log(n))
Fibonacci Heap          -         O(1)       O(log(n))*    O(1)*          O(1)        O(log(n))*  O(1)

* Amortized bound.
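Python's standard-library heapq is a binary min-heap; to get the max-heap operations in the table (Find Max, Extract Max), a common trick is to store negated keys:

```python
import heapq

nums = [3, 1, 4, 1, 5, 9, 2, 6]
heap = [-x for x in nums]      # negate so the min-heap behaves as a max-heap
heapq.heapify(heap)            # Heapify: O(n)

print(-heap[0])                # Find Max: O(1) → 9
print(-heapq.heappop(heap))    # Extract Max: O(log n) → 9
heapq.heappush(heap, -7)       # Insert: O(log n)
print(-heap[0])                # new max → 7
```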

Graphs

Node / Edge Management  Storage         Add Vertex      Add Edge        Remove Vertex   Remove Edge     Query
Adjacency List          O(|V| + |E|)    O(1)            O(1)            O(|V| + |E|)    O(|E|)          O(|V|)
Incidence List          O(|V| + |E|)    O(1)            O(1)            O(|E|)          O(|E|)          O(|E|)
Adjacency Matrix        O(|V|^2)        O(|V|^2)        O(1)            O(|V|^2)        O(1)            O(1)
Incidence Matrix        O(|V| · |E|)    O(|V| · |E|)    O(|V| · |E|)    O(|V| · |E|)    O(|V| · |E|)    O(|E|)
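A short sketch contrasting the two most common rows, adjacency list versus adjacency matrix (the example graph is my own):

```python
n = 4
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]

# Adjacency list: O(|V| + |E|) storage; adding an edge is O(1),
# but answering "is (u, v) an edge?" means scanning a neighbor list, O(|V|).
adj_list = [[] for _ in range(n)]
for u, v in edges:
    adj_list[u].append(v)

# Adjacency matrix: O(|V|^2) storage regardless of how sparse the graph is,
# but the edge query is a single O(1) lookup.
adj_matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    adj_matrix[u][v] = 1

print(adj_list)          # → [[1, 2], [3], [3], []]
print(adj_matrix[0][2])  # → 1
```

The usual rule of thumb: lists for sparse graphs, a matrix when |E| is close to |V|^2 or fast edge queries matter most.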

Notation for asymptotic growth

Letter           Bound                        Growth
Θ (theta)        upper and lower, tight [1]   equal [2]
O (big-oh)       upper, tightness unknown     less than or equal [3]
o (small-oh)     upper, not tight             less than
Ω (big omega)    lower, tightness unknown     greater than or equal
ω (small omega)  lower, not tight             greater than
[1] Big-O is an upper bound and Omega is a lower bound; Theta requires both, which is why it is called a tight bound. For example, an algorithm taking Omega(n log n) time takes at least n log n but has no stated upper bound. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n (Omega) and no more than n log n (Big-O).
[2] f(n) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g as n (the input size) gets large; the growth rate of f(n) is asymptotically proportional to g(n).
[3] Similar, except here the growth rate is no faster than g(n). Big-O is the most useful notation in practice because it represents worst-case behavior.
In short, if an algorithm is __ then its performance is __:

Algorithm   Performance
o(n)        < n
O(n)        ≤ n
Θ(n)        = n
Ω(n)        ≥ n
ω(n)        > n

Big-O Complexity Chart

A Big-O complexity chart plots the number of operations (y-axis) required to obtain a result as the number of elements (x-axis) increases. O(n!) is the worst complexity shown, requiring 720 operations for just 6 elements (since 6! = 720), while O(1) is the best, requiring only a constant number of operations for any number of elements.