I will mainly concentrate on the use of neural networks for resource scheduling; job scheduling goes hand in hand with it. One important assumption is the non-preemptive nature of the tasks to which the resources are being allocated.
Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.
Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable. Still, their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task, i.e. no need to understand the internal mechanisms of that task. They are also very well suited for real-time systems because of their fast response and computation times, which are due to their parallel architecture.
RESOURCE SCHEDULING: NEURAL NETWORK STRUCTURE
In general, neural networks can be used as an approximate tool for solving constraint satisfaction problems by mapping constraints of the problem to functions of the neuron states which constitute the energy of the neural network. The mapping is to be done such that minimum values of the energy function correspond to feasible solutions of the problem.
A neural network energy function for constraint satisfaction problems is usually of the form:
E = ∑i Ai · (violation of constraint i)
where each Ai > 0. By minimizing the energy function E, we attempt to maximize the satisfaction of the constraints. There are a few ways to do this minimization. One I came across is the gradient descent method, which accepts only those moves of the neuron states that reduce the energy function. It has its problems, though: the major one is that the neural network may converge to a state that is only a local minimum of the energy function E. There is another approach that eliminates this problem. To escape from local minima, simulated annealing performs occasional hill climbing on the energy function and theoretically guarantees convergence to the global minimum, given infinite time.
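Both strategies can be sketched on a toy energy function. Everything below — the five-neuron binary state, the "exactly two neurons fire" constraint, and the cooling schedule — is an illustrative assumption, not the actual scheduling energy:

```python
import math
import random

def energy(state, A=1.0):
    # Toy penalty energy: the "constraint" is that exactly two neurons fire,
    # so E = A * (violation)^2 is zero only when the constraint holds.
    return A * (sum(state) - 2) ** 2

def greedy_descent(state):
    # Gradient-descent style update: flip a neuron only if it lowers the energy.
    improved = True
    while improved:
        improved = False
        for i in range(len(state)):
            flipped = state[:i] + [1 - state[i]] + state[i + 1:]
            if energy(flipped) < energy(state):
                state = flipped
                improved = True
    return state

def simulated_annealing(state, t0=2.0, cooling=0.95, steps=500):
    # Hill-climbing escape: uphill moves are accepted with probability
    # exp(-dE / T), and T shrinks over time so the search settles down.
    random.seed(0)
    t = t0
    for _ in range(steps):
        i = random.randrange(len(state))
        flipped = state[:i] + [1 - state[i]] + state[i + 1:]
        d_e = energy(flipped) - energy(state)
        if d_e <= 0 or random.random() < math.exp(-d_e / t):
            state = flipped
        t *= cooling
    return state

print(energy(greedy_descent([0, 0, 0, 0, 0])))  # 0 (constraint satisfied)
print(energy(simulated_annealing([0, 0, 0, 0, 0])))
```

On this trivially smooth energy both methods reach a global minimum; the point of annealing only shows on rugged energy landscapes like the scheduling ones above.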
The most widely used is the classical artificial neural network known as the Hopfield net, which uses the gradient descent method.
The proposed model above is from a paper mentioned in the references below, where it is used for scheduling using heuristic models and backtracking (the coupling adaptive function in the image above). The transfer function of the h-type neurons is a sigmoid, meaning the output varies continuously, but not linearly, as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations. The other two, the f and g types, are threshold functions, meaning the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.
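As a quick illustration of the difference between the two kinds of units, here is a minimal sketch (the gain and threshold parameters are arbitrary choices for illustration, not values from the paper):

```python
import math

def sigmoid(u, gain=1.0):
    # h-type style transfer function: output varies continuously
    # (but not linearly) with the input, between 0 and 1.
    return 1.0 / (1.0 + math.exp(-gain * u))

def threshold(u, theta=0.0):
    # f-/g-type style transfer function: output is set at one of two
    # levels, depending on whether the input exceeds the threshold.
    return 1 if u > theta else 0

print(sigmoid(0.0))     # 0.5
print(threshold(0.4))   # 1
print(threshold(-0.4))  # 0
```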
The TSP-type part is the network of h-type neurons, whose output is 1 if the task and processor represented by that neuron are connected (i.e. the task is assigned to that processor), and 0 otherwise. As you might have guessed, the total number of neurons of this type is number_of_processors x number_of_tasks. The f-type neurons enforce the inequality constraints involving two tasks. The output of a g-type neuron is the starting time of the task associated with that neuron.
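A tiny sketch of what the h-type layer encodes (the 3-task, 2-processor instance is hypothetical):

```python
# Hypothetical 3-task, 2-processor example: the h-type layer is a binary
# matrix of number_of_tasks x number_of_processors neurons, where a 1
# means "this task is assigned to this processor".
assignment = [
    [1, 0],  # task 0 -> processor 0
    [0, 1],  # task 1 -> processor 1
    [1, 0],  # task 2 -> processor 0
]

num_neurons = len(assignment) * len(assignment[0])
print(num_neurons)  # 6 = number_of_tasks x number_of_processors

# Decoding the neuron outputs back into a task -> processor mapping:
schedule = {task: row.index(1) for task, row in enumerate(assignment)}
print(schedule)  # {0: 0, 1: 1, 2: 0}
```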
To clarify on the inequality constraints stated above, let me give a few examples of such constraints:
Start to start: Tn + Lnm ≤ Tm — the start time of the nth task plus the lag time between the nth and mth tasks is at most the start time of the mth task.
Finish to start: Tn + dn + Lnm ≤ Tm — dn here is the duration of the nth task.
Resource constraint: The total consumption of type k resource at any project time t must be less than or equal to the maximum number of available resources of type k at time t.
Total duration: The project duration must not exceed a given upper limit.
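These constraints map directly onto penalty terms of the energy function E above: each one contributes its amount of violation, weighted by some Ai > 0. A minimal sketch, with hypothetical task times and unit weights:

```python
def start_to_start_violation(t_n, t_m, lag):
    # Violation of T_n + L_nm <= T_m; zero when the constraint is satisfied.
    return max(0.0, t_n + lag - t_m)

def finish_to_start_violation(t_n, d_n, t_m, lag):
    # Violation of T_n + d_n + L_nm <= T_m, where d_n is task n's duration.
    return max(0.0, t_n + d_n + lag - t_m)

def resource_violation(usage_at_t, available):
    # Consumption of a resource type at time t must not exceed availability.
    return max(0.0, usage_at_t - available)

def duration_violation(project_end, limit):
    # Total project duration must not exceed the given upper limit.
    return max(0.0, project_end - limit)

def total_energy(violations, weights):
    # E = sum_i A_i * (violation of constraint i), with each A_i > 0.
    return sum(a * v for a, v in zip(weights, violations))

violations = [
    start_to_start_violation(t_n=0, t_m=3, lag=2),          # satisfied -> 0
    finish_to_start_violation(t_n=0, d_n=4, t_m=3, lag=0),  # violated by 1
    resource_violation(usage_at_t=5, available=4),          # violated by 1
    duration_violation(project_end=10, limit=12),           # satisfied -> 0
]
print(total_energy(violations, weights=[1.0, 1.0, 1.0, 1.0]))  # 2.0
```

A feasible schedule is exactly one where every term is zero, i.e. where E reaches its minimum.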
A better structure of such a neural network would be:
Though the above neural network is for a dynamic neural model, it gives a better understanding of the whole structure.
But why even use neural networks at all? With their remarkable ability to derive meaning from complicated or imprecise data, neural networks can extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse.
On a completely unrelated note, here is a cool use of neural networks for the TSP problem (the same idea behind the h-type part above):
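To make the TSP connection concrete, here is a sketch of the classic Hopfield/Tank-style energy for a tiny, made-up 4-city instance. Instead of running the network dynamics, it simply evaluates the energy over valid tours to show that the minimum-energy state is the shortest tour (the distance matrix and penalty weights are assumptions for illustration):

```python
import itertools

# Hypothetical 4-city instance; dist[i][j] is the distance between cities.
dist = [
    [0, 1, 4, 3],
    [1, 0, 2, 5],
    [4, 2, 0, 1],
    [3, 5, 1, 0],
]
n = len(dist)

def tsp_energy(v, a=10.0, b=1.0):
    # v[x][t] = 1 if city x is visited at tour position t.
    # The row/column penalties force v to be a permutation matrix
    # (each city once, each position once); the last term is tour length.
    row = sum((sum(v[x]) - 1) ** 2 for x in range(n))
    col = sum((sum(v[x][t] for x in range(n)) - 1) ** 2 for t in range(n))
    length = sum(
        dist[x][y] * v[x][t] * v[y][(t + 1) % n]
        for x in range(n) for y in range(n) for t in range(n)
    )
    return a * (row + col) + b * length

def tour_to_matrix(tour):
    # Encode a tour (a permutation of the cities) as neuron outputs.
    v = [[0] * n for _ in range(n)]
    for t, x in enumerate(tour):
        v[x][t] = 1
    return v

# Exhaustive check: among valid tours, the minimum-energy state is the shortest.
best = min(itertools.permutations(range(n)),
           key=lambda p: tsp_energy(tour_to_matrix(p)))
print(best, tsp_energy(tour_to_matrix(best)))  # (0, 1, 2, 3) 7.0
```

In the actual network, gradient descent (or annealing) on this same energy replaces the exhaustive search; the h-type scheduling layer above minimizes an analogous assignment energy.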
- Ahmed B. Senouci and Hojjat Adeli, "Resource Scheduling Using Neural Dynamics Model of Adeli and Park"
- Ruck Thawonmas, Goutam Chakraborty and Norio Shiratori, "Fast Heuristic Scheduling Based on Neural Networks for Real-Time Systems"
- A. S. Jain and S. Meeran, "Job-Shop Scheduling Using Neural Networks"