## Monday, 28 January 2013

### Computing Communities in Large Networks Using Random Walks

In this post we discuss how to compute dense subgraphs (communities) in the sparse graphs that are common in real-life complex networks. We briefly introduce different ways of computing communities and then focus on the random-walk approach.

Complex networks have become important in many domains, such as sociology (acquaintance or collaboration networks), biology (metabolic networks, gene networks) and computer science (Internet topology, the Web graph, P2P networks). The graphs of these networks are generally globally sparse but locally dense: there exist groups of vertices, called communities, that are highly connected among themselves but have few links to the rest of the graph. This structure carries much information about the network. A real-life example is Facebook: if we draw each friend as a node and add a link between two friends who know each other, we find a sparse graph containing several dense subgraphs (communities) corresponding to college friends, high-school friends, colleagues, and so on.

Community computation is related to the graph partitioning problem [1]: given a graph G = (V, E), a k-way partition splits the vertices into k parts such that the number of edges running between the parts is minimized. However, graph-partitioning algorithms are not well suited to our case, because the number of communities and their sizes are inputs to those algorithms, whereas in community detection they are unknown.

Hierarchical clustering is another classical approach, in which nodes are grouped together based on a similarity measure between them. Newman proposed [2] a greedy algorithm that starts with n communities, one per vertex, and repeatedly merges communities so as to optimize a function called modularity, which measures the quality of a partition.

The approach discussed here is based on the observation that random walks on a graph [3] tend to get "trapped" in densely connected parts corresponding to communities. The graph G has adjacency matrix A: A_ij = 1 if vertices i and j are connected and A_ij = 0 otherwise. The degree d(i) of vertex i is its number of neighbors. A discrete random walk is a process in which a walker sits on a vertex and moves to a vertex chosen uniformly at random among its neighbors. The sequence of visited vertices is a Markov chain whose states are the vertices of the graph. At each step, the transition probability from vertex i to vertex j is P_ij = A_ij / d(i); this defines the transition matrix P of the walk. The probability of going from i to j through a random walk of length t is (P^t)_ij. The walk satisfies the following two properties.

Property 1: As the length t of the walk tends to infinity, the probability that the walker is at vertex j depends only on the degree of vertex j, not on the starting vertex.

Property 2: For a walk of fixed length t, the ratio of the probabilities of going from i to j and from j to i depends only on the degrees d(i) and d(j).

To group vertices into communities, we now introduce a distance r between vertices that captures the community structure of the graph. This distance should be large if two vertices lie in different communities and small if they lie in the same community, and it is computed from the information given by random walks on the graph.

Consider a random walk of length t from vertex i to vertex j, where t is large enough to gather information about the topology of the graph but not so large that Property 1 takes over (in which case (P^t)_ij would depend only on the degrees of the vertices). Each probability (P^t)_ij gives some information about the pair i, j, but Property 2 tells us that (P^t)_ij and (P^t)_ji encode the same information. The information about vertex i is therefore encoded in the n probabilities (P^t)_ik, 1 ≤ k ≤ n, which form the i-th row of the matrix P^t. When comparing two vertices, note the following points.

• If two vertices i and j are in the same community, the probability (P^t)_ij will be high; however, a high (P^t)_ij does not necessarily imply that i and j are in the same community.
• A random walker always has a high probability of reaching a high-degree vertex, so (P^t)_ij is influenced by the degree d(j) of the target vertex.
• Two vertices of the same community "see" all the other vertices in the same way. Thus, if i and j are in the same community, then (P^t)_ik ≈ (P^t)_jk for every vertex k.
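As a small sanity check of these ideas, here is a sketch in Python with NumPy (the toy graph is illustrative, not from the paper) that builds the transition matrix P and shows that a short walk started inside a dense group mostly stays inside it:

```python
import numpy as np

# Toy graph with two dense groups {0,1,2} and {3,4,5} joined by the
# single edge (2,3); the graph is illustrative, not from the paper.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

d = A.sum(axis=1)                  # degrees d(i)
P = A / d[:, None]                 # transition matrix P_ij = A_ij / d(i)

t = 3                              # short walk: enough to feel the topology
Pt = np.linalg.matrix_power(P, t)  # (P^t)_ij = prob. of going i -> j in t steps

# A walker started at node 0 is still mostly inside its own community
print(Pt[0, :3].sum(), Pt[0, 3:].sum())
```

With this graph, roughly four-fifths of the walker's probability mass stays in the community {0, 1, 2} after three steps, which is exactly the "trapping" effect the distance r exploits.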

References
[1] Graph partitioning problem
[2]Newman, M.E.J.: Fast algorithm for detecting community structure in networks. Physical Review E 69 (2004) 066133
[3]Computing Communities in Large Networks Using Random Walks - Pascal Pons and Matthieu Latapy

### Complex Contagion in Complex Network

Following up on the previous post referring to rumor spreading in social networks, I would like to extend the topic to complex contagion in a complex network.

This phenomenon tells us that you will take some action or adopt a new behavior only if a certain fraction of your friends do so. This is different from the spread of diseases or rumors, where a single exposure can suffice. Examples of complex contagion include deciding whether to join a protest, see a movie, adopt a new hairstyle or get a tattoo.

Damon Centola of Harvard University and Michael Macy of Cornell University suggest four properties that necessitate multiple exposures, namely strategic complementarity, legitimacy, credibility and emotional contagion. I'll fall back on a very recent experience of mine, when the movie 'Django Unchained' was released (or when its Blu-ray print became available?). One of my friends (the movie nerd?) told me it was a good one; then another couple of friends supported him. Within a week, six of my friends had watched and recommended it. If this happens to you, I am sure you'll also utter the already famous quote from the movie, "You had my curiosity, but now you have my attention," and go and watch it (I am sorry, I couldn't resist).

#### Mathematical Model

The diffusion through the network, or the cascading effect indicated above, can be explained through a very simple mathematical model.
• Whether you adopt a behavior, or which of two possible alternatives you choose, depends upon a payoff.
• It also depends upon the fraction of your friends/neighbors who have already chosen one.
Suppose a fraction p of your friends play cricket and the other fraction (1-p) play football. The payoff (i.e. your willingness) for playing cricket is c and for playing football is f.

Now, you would choose cricket over football if,
p·c > (1-p)·f
In other words, p > f/(c+f).

So in the network, if a fraction q of your neighbors play cricket, where q > f/(c+f), you will also play cricket, and in doing so influence your other neighbors who play football to switch to cricket. In time, this results in the cascading effect.
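The threshold rule above can be sketched in a few lines of Python; the payoffs, the graph (a small line of players, each linked to the two players on either side) and the seed set are illustrative choices, not from any particular study:

```python
# Minimal sketch of the threshold cascade: a node switches to cricket once
# the fraction of its neighbors playing cricket exceeds f/(c+f).
c, f = 3.0, 1.0
threshold = f / (c + f)                    # adopt cricket if fraction > 0.25

neighbors = {i: [j for j in (i - 2, i - 1, i + 1, i + 2) if 0 <= j < 10]
             for i in range(10)}

cricket = {0, 1}                           # seed adopters
changed = True
while changed:
    changed = False
    for i in range(10):
        if i in cricket:
            continue
        frac = sum(j in cricket for j in neighbors[i]) / len(neighbors[i])
        if frac > threshold:               # enough neighbors already adopted
            cricket.add(i)
            changed = True

print(sorted(cricket))                     # the cascade reaches every node
```

With these numbers, each node beyond the seed sees at least two of its (at most four) neighbors playing cricket by the time the wave reaches it, so the whole line eventually adopts.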

#### Complex Contagion : some interesting aspects

• Long ties that connect socially distant people drastically bring down the degrees of separation and result in fast information diffusion. So a fashion trend arising anywhere in the world should readily affect India too, right? After all, it's a small world. Unfortunately, the answer is no. Long ties do not necessarily speed up complex contagion and may have no effect on it; having many common neighbors helps the process much more. You are far more likely to join a protest if three of your close friends join it than if a foreign friend whom you met during a holiday or internship joins it.
• Social networks have community structures embedded in them, and communities provide an interesting study. Within a community there are many mutual links, so an opinion spreads faster and more easily, encompassing the whole community. New opinions coming from outside the community meet a different response: links between communities are sparse, the differing opinion of another community is met with considerable resistance, and a community is likely to hold on to its own within-group opinion even if it is inferior to the new idea. This might tell you why the parental 'community' finds teenagers' take on life 'rebellious'!!
• Moreover, when faced with two ideas, an individual can choose both at an additional cost. Returning to our example of cricket and football, one (the sporty one!) may choose to play both at the additional cost of his or her own time. These 'bilingual' nodes help both kinds of opinion, here cricket and football, coexist, and prevent one from driving out the other.
• While trying to spread an idea throughout the network, if one chooses the seed nodes carefully and seeds them with the idea by whatever means (like a free cricket bat and ball for all those who start to play cricket first), they may initiate the cascading effect. This has huge implications for viral marketing.

## Sunday, 27 January 2013

### Theory of Rumor Spreading in Complex networks

Today in this blog we will deal with the world's most beautiful and dangerous invention: the rumor. Rumor spreading plays an important part in shaping the world. The spread of rumors can shape public opinion in a country, greatly impact financial markets, and cause panic in a society during wars. One can easily imagine the viral spread of a rumor once it goes over the Internet or any social networking site. Much of the corporate world now uses the WWW to spread rumors over the Internet and by "word-of-email".

A rumor can be viewed as an "infection of the mind". Here we focus on rumor spreading on graphs using the standard model of Daley and Kendall (the DK model) and the Maki–Thompson (MK) model. The DK model divides the population into three categories, in analogy with the S/I/R classes of epidemic models:
S: people who are ignorant of the rumor;
I: people who actively spread the rumor;
R: people who have heard the rumor but are no longer interested in spreading it (stiflers).
Rumors spread via pair-wise contacts between spreaders and others in the population. Any spreader involved in a pair-wise meeting attempts to "infect" the other individual with the rumor. If this other individual is an ignorant, he or she becomes a spreader. In the other two cases, either one or both of those involved in the meeting learn that the rumor is already known and decide not to tell it anymore, thereby turning into stiflers.

In the social-network setting, consider a graph G(V, E). Following the Maki–Thompson model, the population consists of N individuals, and the rumor can only spread by direct contact along the links.

• Whenever a spreader contacts an ignorant, the ignorant becomes a spreader at rate λ.
• When a spreader contacts another spreader or a stifler, the initiating spreader becomes a stifler at rate α.
In the above, the first rule models the tendency of individuals to accept a rumor only with a certain probability which, loosely speaking, depends on the urgency or credibility of a rumor. The second rule, on the other hand, models the tendency of individuals to lose interest in spreading a rumor when they learn, through contacts with others, that the rumor has become stale news, or is false. In both the DK and the MK rumor models, and their variants, stifling is the only mechanism that results in cessation of rumor spreading.
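A rough simulation sketch of these two Maki–Thompson rules in Python; the network size, link probability, and the λ and α values are illustrative choices:

```python
import random

random.seed(1)

# Rough simulation of the two MK rules above on a small Erdos-Renyi-style
# graph; n, the link probability, lam and alpha are illustrative.
n, lam, alpha = 200, 0.8, 0.3
nbrs = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.05:
            nbrs[i].add(j)
            nbrs[j].add(i)

state = {i: "ignorant" for i in range(n)}
state[0] = "spreader"
spreaders = {0}

while spreaders:
    i = random.choice(sorted(spreaders))      # a spreader initiates contact
    if not nbrs[i]:                           # isolated spreader: give up
        state[i] = "stifler"
        spreaders.discard(i)
        continue
    j = random.choice(sorted(nbrs[i]))
    if state[j] == "ignorant":
        if random.random() < lam:             # rule 1: ignorant -> spreader
            state[j] = "spreader"
            spreaders.add(j)
    elif random.random() < alpha:             # rule 2: spreader -> stifler
        state[i] = "stifler"
        spreaders.discard(i)

R = sum(s == "stifler" for s in state.values())
print(f"final rumor size R = {R} of {n}")
```

The process always terminates with no spreaders left, since stifling is the only cessation mechanism; the final number of stiflers R measures how far the rumor reached.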

We describe the above model using the Interacting Markov Chain (IMC) framework and its mean-field equations. The IMC framework was originally introduced as a means of modeling social processes involving many interacting agents. It consists of N nodes, and the internal transition of a node depends not only on its current state but also on the states of the nodes adjacent to it.

Consider now a node j which is in the ignorant state at time t. Denote by p^j_ii the probability that this node stays in the ignorant state during the time interval [t, t + Δt], and by p^j_is = 1 - p^j_ii the probability that it makes a transition to the spreader state. It then follows that
p^j_ii = (1 - λΔt)^g,
where g = g(t) denotes the number of neighbors of node j that are in the spreader state at time t.
The corresponding probability of a transition from the spreader to the stifler state for a node of degree k, p_sr(k, t), is given by
p_sr(k, t) = 1 - p_ss(k, t),
where p_ss(k, t) is the probability that the node stays in the spreader state.

In the paper, the final size of the rumor, R, is shown as a function of the spreading rate λ for an ER network of size 10^6, for several values of the stifling parameter α.
References:
[1] M. Nekovee, Y. Moreno, G. Bianconi, M. Marsili. Theory of rumor spreading in complex networks.
[2] Rumor spread in social network - Wikipedia, http://en.wikipedia.org/wiki/Rumor_spread_in_social_network

### Academic Search Engine Optimization (ASEO)

In this post we take a look at academic search engine optimization: how academic search engines (ASEs) rank documents (scholarly articles) and, most importantly, how one can optimize scholarly literature for academic search engines in general and for Google Scholar in particular.

The basic concept of keyword-based searching is the same for all the major academic search engines, such as Google Scholar, IEEE Xplore, PubMed, and SciPlore.org. The relevance of a document with respect to a query term depends on how many times the term appears in the document and where: an occurrence in the title is weighed more heavily than one in the abstract, which carries more weight than one in a (sub)heading, which in turn counts for more than one in the body, and so on. The metadata associated with the electronic files (such as PDFs) is also important, as it helps the ASE crawler to distinguish an academic article from an ordinary document (by extracting the author and title from the metadata). Other common ranking factors are publication date, citation count, author or journal name and reputation, etc.
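As a toy illustration of such field-weighted relevance, here is a hypothetical Python scorer; the weights are made up for the example and are certainly not the values any real search engine uses:

```python
import re

# Hypothetical per-field weights: a title hit counts more than an abstract
# hit, which counts more than a heading or body hit.
FIELD_WEIGHTS = {"title": 5.0, "abstract": 3.0, "heading": 2.0, "body": 1.0}

def relevance(doc, term):
    """Sum weighted occurrences of `term` over the document's fields."""
    term = term.lower()
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        words = re.findall(r"[a-z0-9]+", doc.get(field, "").lower())
        score += weight * words.count(term)
    return score

paper = {
    "title": "SEO: An Overview",
    "abstract": "We survey SEO techniques.",
    "body": "SEO is discussed throughout. SEO matters.",
}
print(relevance(paper, "seo"))   # the title hit outweighs the two body hits
```

One title occurrence (5.0) plus one abstract occurrence (3.0) already outweighs the two body occurrences (2.0) combined, which is the intuition behind putting keywords in the title.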

So how does Google Scholar do it? Google Scholar is one of those search engines that combine several factors into one ranking algorithm. The most important factors are relevance, citation count, author name(s), and name of publication.
Relevance: Google Scholar gives a lot of importance to the title, and a short, specific title will be ranked above a long, descriptive one. For the search term 'SEO', for example, a document titled 'SEO: An Overview' would be ranked higher than one titled 'Search Engine Optimization (SEO): A Literature Survey of the Current State of the Art.' The total search-term count has minimal effect on ranking, and synonyms and PDF metadata are ignored.
Citation count: As shown in the figure below, the higher the citation count, the higher the ranking. Google Scholar does not differentiate between self-citations and citations by third parties.

Author and publication name: If the search query contains an author or publication name, documents carrying that name get a high rank. Google Scholar also claims to take both the author's and the publication's "reputation" into account.

Every researcher wants to spread his or her work to as many people as possible, but to do that it has become almost necessary to ensure that an article is not only indexed properly but also ranked highly by academic search engines; that's where ASEO comes into the picture. As described in a recent paper by Joeran Beel, Bela Gipp, and Erik Wilde, academic search engine optimization (ASEO) is the creation, publication, and modification of scholarly literature in a way that makes it easier for academic search engines to both crawl and index it.
Preparation: First of all, build a small set of keywords that are highly relevant to the article. The choice of keywords is very important; they should not be the most popular in their category, as that would increase the competition for the article. One can take the help of tools like Google Trends and Google Insights, or use the words suggested by the search engines themselves.
Writing the article: While writing the article, the keywords selected above should be used in the title, the abstract and the body as often as possible (but not so much that it annoys readers). If possible, include synonyms of these keywords in the text as well, so that the article may be found by users unaware of the exact terminology. When writing names, take special care over spellings, as this helps search engines identify the article and its citations correctly. Use the standard scientific layout and structure for the article so that ASEs can easily classify it as scientific.
Preparing for publication: Text in figures and tables should be machine readable so that it can be indexed by ASEs. If the document is converted to PDF, the metadata (author and title) should be correct.
Publishing: When publishing, the choice of venue matters a lot; for example, open-access articles usually receive more citations than articles accessible only by purchase or subscription. Journals or publishers with friendlier policies toward Google Scholar and other ASEs should be preferred.

ASEO has received mixed reviews in the scientific community, as many people see this area of research simply as "how to cheat the search engines to boost your rank". That's why, when Joeran Beel and Bela Gipp sent their paper for review, it was rejected, and they received the following reviews:

“I’m not a big fan of this area of research […]. I know it’s in the call for papers, but I think that’s a mistake.”

“[This] paper seems to encourage scientific paper authors to learn Google scholar’s ranking method and write papers accordingly to boost ranking [which is not] acceptable to scientific communities which are supposed to advocate true technical quality/impact instead of ranking”

But ASEO should instead be viewed as a set of guidelines that help search engines understand articles better, thus making the content more widely and easily available. There will obviously be cases where people take unethical steps to boost their article's rank using ASEO, but the same problem existed with web search, and web search engines eventually managed to contain spam; ASEs will likewise catch up, to the benefit of authors and users alike.

References
Joeran Beel, Bela Gipp, and Erik Wilde. Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar and Co. Journal of Scholarly Publishing, 41 (2): 176–190, January 2010

## Sunday, 20 January 2013

### Dog Programming Language

Developed by Sep Kamvar, Salman Ahmad and Zahan Malkani at MIT, this high-level programming language, Dog, allegedly "makes it easy to create social applications by employing natural language commands".

The language emerged from Kamvar's frustration with writing tonnes of code to define social interactions in conventional languages like Java. He felt that simple, intuitive interactions, say listening for someone's Facebook posts, had to be thought of in terms of data storage and protocols, whereas, according to him, they are better and more intuitively described using natural language.

"I had to write code at a lower level of abstraction than I had to think about the interactions," he says. "And so I thought it would be interesting to start writing a programming language that allowed me to write at the same level of abstraction that I think."

And so he set out to create a new programming language, Dog (named so perhaps to convey the friendliness of the language, except to cynophobes and cat people). Kamvar and his team of students have been developing the compiler for the language along with some demo programs. The public version of Dog is slated for release by summer this year, and the language will be kept free and open source.

Dog treats people as a basic data type; Kamvar believes that the major problem in defining social interactions in conventional languages is the missing notion of people. Following his natural-language approach, he created a syntax for the language using simple words like listen, notify, ask and compute.

The promised product does indeed look much less intimidating than languages like Java. For example, creating a group "students" can be done simply as: students = PEOPLE FROM facebook WHERE university = 'iit' AND degree = 'computer science', which resembles our ever-so-friendly Structured Query Language.
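For contrast, here is a rough Python sketch of what the same "students" query might look like in a conventional language over an in-memory dataset (the records and field names are made up; a real application would be querying a social-network API instead):

```python
# Hypothetical in-memory stand-in for the social data Dog queries directly.
people = [
    {"name": "Asha",  "university": "iit",  "degree": "computer science"},
    {"name": "Ravi",  "university": "iit",  "degree": "physics"},
    {"name": "Meera", "university": "bits", "degree": "computer science"},
]

# The Python equivalent of:
#   students = PEOPLE FROM facebook WHERE university = 'iit'
#              AND degree = 'computer science'
students = [p for p in people
            if p["university"] == "iit" and p["degree"] == "computer science"]

print([p["name"] for p in students])
```

Even in this trivial form, the conventional version forces you to think about where the records live and how to filter them, which is exactly the bookkeeping Dog's PEOPLE type is meant to hide.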

The expected use case of Dog is as follows. Suppose you want to create a social application that requires standard computational tasks as well as a variety of social tasks: listening to news feeds, messaging people, handling interaction events. Doing all this in a traditional language is daunting, especially the social aspects. With Dog, you can express such things as simple code that Dog takes care of, while other programming languages can still be used for the non-social aspects of the application.

According to the MIT media page,
"Dog is a new programming language that makes it easy and intuitive to create social applications. Dog focuses on a unique and small set of features that allows it to achieve the power of a full-blown application development framework. One of Dog’s key features is built-in support for interacting with people. Dog provides a natural framework in which both people and computers can be given instructions and return results. It can perform a long-running computation while also displaying messages, requesting information, or even sending operations to particular individuals or groups. By switching between machine and human computation, developers can create powerful workflows and model complex social processes without worrying about low-level technical details."

As of now, the language is for server side applications, but the team is also developing similar mechanisms for the client side.

Kamvar believes that Dog will enable non-programmers such as interaction designers or product managers to easily understand what the website is doing and what all functions are being used internally.
#### Salient features of the language
• Identifies people using SQL query like commands, like
good_students = PEOPLE FROM iit WHERE gpa < 9
• Makes communication tasks, like sending messages, emails, listening to posts easier,
LISTEN TO students VIA email FOR assignments
• Supports integration with other languages, such as imports from Python
• Simplifies asynchronous state management,
LISTEN TO users FOR tickets

ON ticket DO

Will this be a game changer for the social application developers? We cannot say.
Would it help students enrolled in the Complex Networks course with their projects? Not with these timelines it won’t.
But what it does do, is bring up this interesting debate of whether Programming languages are indeed unnecessarily hard and more work could be put into them, to make them more accessible.

## Wednesday, 16 January 2013

In this blog post, I will be talking about a fairly recent paper by Yong-Yeol Ahn et al., titled 'Link communities reveal multiscale complexity in networks', which appeared in Nature in 2010. Today's lecture, introducing the clustering coefficient, serves as a great introduction to this post.

Networks have become a key approach to understanding systems of interacting objects, unifying the study of diverse phenomena including human society. One crucial step when studying the structure and dynamics of networks is to identify communities: groups of related nodes that correspond to functional subunits such as social spheres. Communities in networks often overlap such that nodes simultaneously belong to several groups. Meanwhile, many networks are known to possess hierarchical organization, where communities are recursively grouped into a hierarchical structure. However, the fact that many real networks have communities with pervasive overlap, where each and every node belongs to more than one group, has the consequence that a global hierarchy of nodes cannot capture the relationships between overlapping groups.

While most of the previous works focused on clustering nodes, this work tries to cluster links instead of nodes and successfully reconcile the antagonistic organizing principles of overlapping communities and hierarchy. When I think about it, it feels obvious, now! :)

The method in its essence, is as follows:
For an undirected, unweighted network, denote a node by i and its inclusive neighborhood (the node together with its neighbors) by n+(i). Limiting ourselves to link pairs that share a node, which are expected to be more similar than disconnected pairs, the similarity S between links e_ik and e_jk is

S(e_ik, e_jk) = |n+(i) ∩ n+(j)| / |n+(i) ∪ n+(j)|.

The shared node k does not appear in S because it provides no additional information and would introduce bias. Single-linkage hierarchical clustering then builds a link dendrogram, and cutting this dendrogram at some clustering threshold yields link communities.
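A minimal Python sketch of this similarity step on a toy graph, assuming the Jaccard-style overlap of inclusive neighborhoods n+(·) used in the paper:

```python
# n_plus(i) is the inclusive neighborhood of i (node i plus its neighbors);
# S is the Jaccard overlap of the inclusive neighborhoods of the two
# unshared endpoints of a link pair. The graph is illustrative.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]       # a triangle with a tail
n_plus = {}
for a, b in edges:
    n_plus.setdefault(a, {a}).add(b)
    n_plus.setdefault(b, {b}).add(a)

def link_similarity(i, j, k):
    """S between links e_ik and e_jk; the shared node k is not used."""
    return len(n_plus[i] & n_plus[j]) / len(n_plus[i] | n_plus[j])

print(link_similarity(0, 1, 2))   # two triangle links: maximal similarity
print(link_similarity(0, 3, 2))   # a triangle link vs. the tail link
```

Links (0,2) and (1,2) inside the triangle get S = 1, while pairing a triangle link with the tail link (2,3) gives a much lower S; feeding these similarities to single-linkage clustering is what produces the link dendrogram.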

Reference:
Ahn Y-Y, Bagrow JP, Lehmann S. Link communities reveal multiscale complexity in networks. Nature. 2010;466:761–764.

## Friday, 11 January 2013

### Bose–Einstein Condensation: a Complex Network Approach

Following up Dastagiri's excellent post is a tough task, but I will try my best not to bore you with the monotony of a Dravid-like defense after that kind of Sehwagian start. After willfully taking up the challenge to contribute to this blogathon, I juggled several random topics and finally chose this one, which I feel will perfectly set the premise to dive into the perplexities of complex networks.
 The Large Hadron Collider

Most of the previous year was a dream come true for any quantum physicist. The search for the so-called God particle at the Large Hadron Collider caught the attention of all kinds of minds, be it Calvin or Mr. Bean. The quarrel among the quarks suddenly became the queen of most daily discussions. And all this hoopla about a giant scientific leap was based upon the Bose–Einstein statistical model of particle physics. To be precise, the modelling of the Bose–Einstein condensate (BEC) shed the first light on some of the deepest secrets of nature. Meanwhile, folks, let's gear up for a bit of physics to get a clear idea about BEC.

## Bose–Einstein condensate

Bose–Einstein condensate is a state of matter that occurs in certain gases at very low temperatures. Any elementary particle, atom, or molecule, can be classified as one of two types: a boson or a fermion. For example, an electron is a fermion, while a photon or a helium atom is a boson.
In quantum mechanics, the energy of a (bound) particle is limited to a set of discrete values, called energy levels. An important characteristic of a fermion is that it obeys the Pauli exclusion principle, which states that no two fermions may occupy the same energy level. Bosons, on the other hand, do not obey the exclusion principle, and any number can exist in the same energy level. As a result, at very low energies (or temperatures), a great majority of the bosons in a Bose gas can be crowded into the lowest energy state, creating a Bose–Einstein condensate.
So, a Bose–Einstein condensate is a state of matter of a dilute gas of bosons cooled to temperatures very near absolute zero (0 K or −273.15 °C). Under such conditions, a large fraction of the bosons occupy the lowest quantum state, at which point quantum effects become apparent on a macroscopic scale. These effects are called macroscopic quantum phenomena.
For all the mathematical freaks: the transition to BEC occurs below a critical temperature, which for a uniform three-dimensional gas of non-interacting particles with no apparent internal degrees of freedom is given by

$T_c=\frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}$

where $T_c$ is the critical temperature, $n$ is the particle density, $m$ is the mass per boson, $\hbar$ is the reduced Planck constant, $k_B$ is the Boltzmann constant, and $\zeta$ is the Riemann zeta function, with $\zeta(3/2)\approx 2.6124$.
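As a back-of-the-envelope check of the critical-temperature formula above, the following Python snippet evaluates $T_c$ for rubidium-87 at an assumed density typical of trapped-atom experiments (the density is my choice, not from the text):

```python
import math

# Physical constants and an assumed experimental density (illustrative).
hbar = 1.0546e-34      # reduced Planck constant, J*s
k_B = 1.3807e-23       # Boltzmann constant, J/K
m = 1.443e-25          # mass of one Rb-87 atom, kg
n = 1.0e20             # assumed particle density, atoms per m^3
zeta_3_2 = 2.6124      # Riemann zeta(3/2), as quoted above

T_c = (2 * math.pi * hbar**2 / (m * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"T_c ~ {T_c:.2e} K")   # lands in the hundreds of nano-kelvin
```

The result is of the same nano-kelvin order as the temperatures in the condensation figure below, which is why BEC experiments need such extreme cooling.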

That, briefly, is a sneak peek into this extremely complicated quantum phenomenon; now let us look at the complex-network side of it.
Bose–Einstein condensation at 400, 200, and 50 nano-kelvin. The peaks show that as the temperature goes down, more and more atoms "condense" into the same energy level.

## Connection with network theory

Despite the irreversible and non-equilibrium nature of complex networks, they can follow Bose statistics and undergo Bose–Einstein condensation. Addressing the dynamical properties of these non-equilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage", "fit-get-rich" and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.

Schematic illustration of the mapping between the network model and the Bose gas.

Starting from the fitness model, the mapping from a network to a Bose gas is done by assigning an energy $\epsilon_i$ to each node, determined by its fitness through the relation
$\epsilon_i=-\frac{1}{\beta}\ln{\eta_i}$
where $\beta=1/T$ plays the role of inverse temperature. In particular, when $\beta=0$ all the nodes have equal fitness, whereas when $\beta\gg 1$ nodes with different "energy" have very different fitness. We assume that the network evolves through a modified preferential-attachment mechanism: at each time step a new node i with energy $\epsilon_i$, drawn from a probability distribution $p(\epsilon)$, enters the network and attaches a new link to a node j chosen with probability:
$\Pi_j=\frac{e^{-\beta\epsilon_j}k_j}{\sum_r e^{-\beta\epsilon_r}k_r}.$
In the mapping to a Bose gas, we assign to every new link attached by preferential attachment to node j a particle in the energy state $\epsilon_j$.
The continuum theory predicts that the rate at which links accumulate on node i with "energy " $\epsilon_i$ is given by
$\frac{\partial k_i(\epsilon_i,t,t_i)}{\partial t}=m\frac{e^{-\beta\epsilon_i}k_i(\epsilon_i,t,t_i)}{Z_t}$
where $k_i(\epsilon_i,t,t_i)$ denotes the number of links attached to node i, which was added to the network at time step $t_i$, and $Z_t$ is the partition function, defined as:
$Z_t=\sum_i e^{-\beta\epsilon_i}k_i(\epsilon_i,t,t_i).$
The solution of this differential equation is:
$k_i(\epsilon_i,t,t_i)=m\left(\frac{t}{t_i}\right)^{f(\epsilon_i)}$
where the dynamic exponent $f(\epsilon)$ satisfies $f(\epsilon)=e^{-\beta(\epsilon-\mu)}$, and $\mu$ plays the role of the chemical potential, satisfying the equation
$\int d\epsilon \, p(\epsilon) \frac{1}{e^{\beta(\epsilon-\mu)}-1}=1$
where $p(\epsilon)$ is the probability that a node has "energy" $\epsilon$ and "fitness" $\eta=e^{-\beta \epsilon}$. In the $t \rightarrow \infty$ limit the occupation number, giving the number of links linked to nodes with "energy" $\epsilon$, follows the familiar Bose statistics
$n(\epsilon)=\frac{1}{e^{\beta(\epsilon -\mu)}-1}.$
The definition of the constant $\mu$ in the network model is surprisingly similar to the definition of the chemical potential in a Bose gas. In particular, for probability distributions $p(\epsilon)$ such that $p(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$, at high enough values of $\beta$ we have a condensation phase transition in the network model. When this occurs, one node, the one with the highest fitness, acquires a finite fraction of all the links. Bose–Einstein condensation in complex networks is therefore a topological phase transition, after which the network has a star-like dominant structure.
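A rough simulation sketch of this fitness model in Python ($\beta$, the network size and the choice of $p(\epsilon)$ are illustrative assumptions) shows links piling up on low-energy nodes:

```python
import math
import random

random.seed(0)

# Fitness-model sketch: each new node draws an energy eps from a density
# p(eps) ~ eps (vanishing at eps -> 0) and attaches one link to an existing
# node j with probability proportional to exp(-beta*eps_j) * k_j.
# beta and N are illustrative choices.
beta, N = 10.0, 2000

eps = [math.sqrt(random.random())]   # sqrt(U) has density proportional to eps
deg = [1]                            # give the first node one "stub"
for _ in range(1, N):
    weights = [math.exp(-beta * e) * k for e, k in zip(eps, deg)]
    r = random.random() * sum(weights)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            deg[j] += 1              # the chosen node gains the new link
            break
    eps.append(math.sqrt(random.random()))
    deg.append(1)

share = max(deg) / sum(deg)          # fraction of links held by the biggest hub
print(f"largest node's share of links: {share:.2f}")
```

At large $\beta$ (low temperature) the low-energy, high-fitness nodes dominate the attachment weights, so a single hub ends up holding a sizeable share of all links, the star-like structure described above; at $\beta=0$ the same code reduces to plain preferential attachment.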

## Bose–Einstein phase transition in complex networks

The mapping to a Bose gas predicts the existence of two distinct phases as a function of the energy distribution. In the fit-get-rich phase, fitter nodes acquire edges at a higher rate than older but less fit nodes. In the end the fittest node has the most edges, but it is not the absolute winner, since its share of the edges (i.e. the ratio of its edges to the total number of edges in the system) goes to zero in the limit of large system size. The unexpected outcome of this mapping is the possibility of Bose–Einstein condensation below a critical temperature $T_{BE}$, when the fittest node acquires a finite fraction of the edges and maintains this share over time.
A representative fitness distribution $\rho(\eta)$ that leads to condensation is
$\rho(\eta)=(1-\eta)^{\lambda}$
with $\lambda = 1$.
However, the existence of Bose–Einstein condensation or of the fit-get-rich phase does not depend on the temperature or $\beta$ of the system; it depends only on the functional form of the fitness distribution $\rho(\eta)$. In the end, $\beta$ falls out of all topologically important quantities. In fact, it can be shown that Bose–Einstein condensation exists in the fitness model even without the mapping to a Bose gas. A similar relation can be seen in models with super-linear preferential attachment; however, it is not clear whether this is an accident or whether a deeper connection lies between these and the fitness model.

## Some other interesting applications of Bose–Einstein condensation

i) Evolutionary Models:
In evolutionary models each species reproduces proportionally to its fitness. Depending on the fitness distribution, the model shows a condensation phase transition. A mapping of this model to a Bose–Einstein condensation was established recently.

ii) Ecological Systems:
When the condensation phenomenon in an ecological system occurs, one species becomes dominant and strongly reduces the biodiversity of the system. This phase transition describes a basic stylized mechanism which is responsible for the large impact of invasive species in many ecological systems.

iii) Memory Explanation:
In 1970 Pascual-Leone showed that memory experiments can be modelled by the Bose–Einstein occupancy model. From this and a large body of other empirical findings, Weiss and Weiss draw the generalized conclusion that memory span can be understood as the quantum number of a harmonic oscillator, where memory is mapped onto an equilibrium Bose gas.

P.S.: Maybe this blog is missing Sehwagian flair, but if you can catch the slightest hint of the class of a Ganguly cover drive, my effort will be justified :)

## References

1. Bianconi, G.; Barabási, A.-L. (2001). "Bose–Einstein Condensation in Complex Networks." Phys. Rev. Lett. 86: 5632–35.
2. Albert, R.; Barabási, A.-L. (2002). "Statistical Mechanics of Complex Networks." Rev. Mod. Phys. 74: 47–97.
3. Bose, S. N. (1924). "Plancks Gesetz und Lichtquantenhypothese." Zeitschrift für Physik 26: 178.
4. Einstein, A. (1925). "Quantentheorie des einatomigen idealen Gases." Sitzungsberichte der Preussischen Akademie der Wissenschaften.
5. http://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate