Interesting. I hadn't heard the term Markov chain used to describe a path, so I checked my copy of "All of Statistics" by Larry Wasserman, and he (slightly) disagrees with both of us and says:
"A Markov chain is a stochastic process for which the distribution of X_t depends only on X_{t-1}."
So he doesn't require it to be a discrete-time process, but he also doesn't think it's a path. I guess the terminology is not 100% standardized. But anyhow, thanks for making me think about this. Always interesting.
But a random walk is precisely a stochastic process for which the _next state_ depends only on the _current state_. In terms of graphs (where _random walk_ comes from), the next node is decided by randomly selecting a neighbor of the current node.
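In code, a single step is just this (a minimal sketch, assuming the graph is stored as an adjacency dict; the layout and the name `next_node` are mine):

```python
import random

# One step of the walk described above: pick uniformly among the neighbors
# of the current node. The graph is assumed to be an adjacency dict mapping
# each node to a list of its neighbors (a layout chosen just for this sketch).
def next_node(graph, current):
    return random.choice(graph[current])
```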
That's not true for random walks in general, I don't think. A random walk is a process formed by taking successive random steps in some mathematical space. It can include jumps, and it can include memory.
Take a "path-avoiding" random walk. At time t the distribution of the next step depends on whether or not I have at some point hit any of the adjacent nodes in the current path. That's not the current state, that's memory.
A random walk on a graph is a stochastic process that starts at a vertex and, at each step, moves to a randomly chosen neighboring vertex. Formally:
Given a graph, a random walk is a sequence of vertices (v_1, v_2, ..., v_k) such that each v_{i+1} is selected uniformly at random from the neighbors of v_i.
In weighted graphs, the next vertex is chosen with probability proportional to edge weights.
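In code, that definition might look something like this (a sketch; the adjacency-dict layout and names are my own, not part of the definition):

```python
import random

# Random walk on a graph given as an adjacency dict. If `weights` is given,
# it maps (u, v) edges to positive numbers and the next vertex is chosen
# with probability proportional to edge weight; otherwise neighbors are
# chosen uniformly.
def random_walk(graph, start, steps, weights=None):
    walk = [start]
    for _ in range(steps):
        nbrs = graph[walk[-1]]
        if weights is None:
            nxt = random.choice(nbrs)  # uniform case
        else:
            w = [weights[(walk[-1], v)] for v in nbrs]  # weighted case
            nxt = random.choices(nbrs, weights=w, k=1)[0]
        walk.append(nxt)
    return walk
```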
I’m pretty sure it’s not a requirement that the distribution be uniform, and it needn’t be path-independent either, as per the example I gave: a random walk where you’re not allowed to visit a node more than once.
Those notes look interesting, thanks. I’ve really never heard anyone say you have to have uniform probabilities for a random walk on a graph. In fact, in the context where I’m most familiar with them (lattice/grid pricers), it’s always specified as something like “the probability of branch a is p and of b is 1 - p” (i.e. explicitly not uniform), and they aren’t weighted graphs in the usual sense.
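Schematically, something like this (the +1/-1 step sizes and names are just for illustration, not any particular pricer):

```python
import random

# Lattice-style walk: at each step the state branches up with probability p
# and down with probability 1 - p, i.e. explicitly non-uniform for p != 0.5.
def binomial_walk(p, steps, x0=0):
    x = x0
    path = [x]
    for _ in range(steps):
        x += 1 if random.random() < p else -1
        path.append(x)
    return path
```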
But also "the memory" of the random walk can be encoded in a state itself. In your example you can just keep accumulating the visited nodes, so your state space will be now the space of tuples of nodes from your initial space.
A Markov chain is commonly understood to be a time-discrete Markov process. Intuitively, it’s a “chain” because you can “single out” its states at discrete points in time rather than over intervals. That’s also Markov’s original definition. Instead of Wasserman, one can look up the notion on Wikipedia. A path is a notion relevant to the state space of a process: it’s a realization of the state at every single point in time over the time span of interest.
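For reference, the discrete-time Markov property under discussion, in the notation of the Wasserman quote above:

```latex
P(X_t \mid X_{t-1}, X_{t-2}, \dots, X_0) = P(X_t \mid X_{t-1})
```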