I am currently working on some rather powerful additions to Mediocre.
The static exchange evaluation is probably the most needed one. It is a way of determining if a capture wins or loses material without actually playing out the sequence on the board. We simply gather the pieces attacking the square for both sides, and order them so each side always captures with its least valuable piece first.
This way we get a good idea of whether a capture is winning or losing, at very little cost in time.
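The idea can be sketched with the well-known swap algorithm. Everything below is my own illustration, not Mediocre's actual code: the piece values, the method name and the attacker-lists-as-arrays representation are all assumptions, and a real engine would generate the sorted attacker lists from the board (including x-ray attackers).

```java
public class See {
    /**
     * Simplified static exchange evaluation on a single square (sketch).
     *
     * @param captured value of the piece initially standing on the square
     * @param us       values of our attackers, sorted least-valuable-first;
     *                 us[0] is the piece making the initial capture
     * @param them     values of the opponent's attackers, same ordering
     * @return         material outcome for us; positive means a winning capture
     */
    public static int see(int captured, int[] us, int[] them) {
        int[] gain = new int[us.length + them.length];
        gain[0] = captured;      // we win the piece on the square
        int onSquare = us[0];    // and our attacker now stands there
        int ui = 1, ti = 0;      // next attacker index for each side
        int d = 0;
        boolean ourTurn = false; // the opponent recaptures next
        while (ourTurn ? ui < us.length : ti < them.length) {
            d++;
            // Score if the piece currently on the square is recaptured.
            gain[d] = onSquare - gain[d - 1];
            onSquare = ourTurn ? us[ui++] : them[ti++];
            ourTurn = !ourTurn;
        }
        // Negamax the gains backwards: each side may decline to keep capturing.
        while (d > 0) {
            gain[d - 1] = -Math.max(-gain[d - 1], gain[d]);
            d--;
        }
        return gain[0];
    }
}
```

With illustrative values (pawn 100, knight 325, rook 500), rook takes a pawn defended by a pawn gives `see(100, new int[]{500}, new int[]{100}) == -400`, a losing capture, while pawn takes pawn backed up by a bishop against a knight defender comes out winning, since the defender declines to recapture.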
This information can then be used for better sorting of quiescent search moves and also for the main sorting of moves. For the main sorting, winning captures are more likely to produce good results than even the killer moves, and equal captures should obviously be sorted before losing ones.
The problem is writing a fast algorithm for this, since it will be called very often; if it is too slow the gains diminish quickly. I have taken a good look at how the Scorpio and Glaurung engines do this and I think I have it working very well.
The code is quite complex and I will write a thorough guide when the next version of Mediocre is ready.
As opposed to null moves, which cut off moves that are likely to fail high (be too good), late move reductions reduce moves that are likely to fail low (not reach up to alpha).
The way it works is we start by searching the first 3-4 moves to full depth. In Mediocre these should be the hash move, the killer moves, and possibly a move from the history heuristic or a capture sorted high by MVV/LVA (once SEE is implemented, winning captures will be among these as well).
Then we reduce the depth of the remaining moves by 1 and search them with a minimal window. If a move by chance surprises us and returns a value greater than the previous best move, we re-search it at full depth.
It is very rare that one of the remaining moves produces a better result, so this technique saves quite some time. However, we need to be careful not to reduce mindlessly. We do not want to reduce moves that we think lead to something interesting. For example, we cannot reduce checks, since that would cancel out the check extension. Captures are also risky to reduce.
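The move loop can be sketched as below, on a toy game tree standing in for real positions. This is a minimal illustration under my own assumptions, not Mediocre's actual code: the class and constant names are invented, the move list is assumed to be already sorted best-first, and only the check/capture restrictions are shown.

```java
import java.util.List;

public class LmrSketch {
    /** Toy game tree node; scores are from the side to move's perspective. */
    public static class Node {
        final int value;            // leaf score, or static eval for inner nodes
        final List<Node> children;  // moves, assumed sorted best-first
        boolean isCheck, isCapture; // moves we refuse to reduce

        public Node(int value) { this(value, List.of()); }
        public Node(int value, List<Node> children) {
            this.value = value;
            this.children = children;
        }
    }

    static final int FULL_DEPTH_MOVES = 4; // first 3-4 moves get full depth
    static final int REDUCTION = 1;        // reduce the rest by one ply

    public static int search(Node node, int depth, int alpha, int beta) {
        if (depth <= 0 || node.children.isEmpty()) return node.value;
        int moveNumber = 0;
        for (Node child : node.children) {
            int score;
            boolean reduce = moveNumber >= FULL_DEPTH_MOVES
                    && !child.isCheck     // keep the check extension intact
                    && !child.isCapture;  // captures are risky to reduce
            if (reduce) {
                // Reduced depth, minimal window around alpha.
                score = -search(child, depth - 1 - REDUCTION, -alpha - 1, -alpha);
                if (score > alpha) // surprise: re-search at full depth
                    score = -search(child, depth - 1, -beta, -alpha);
            } else {
                score = -search(child, depth - 1, -beta, -alpha);
            }
            if (score > alpha) alpha = score;
            if (alpha >= beta) break; // cutoff
            moveNumber++;
        }
        return alpha;
    }
}
```

Because the first few moves are searched at full depth and a surprising fail high triggers a full re-search, the result matches a plain alpha-beta search when move ordering is good; the saving comes from the late moves that fail low at the reduced depth.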
There are several more ways to decide whether a move may be reduced. For example, a move that has often failed high in the past is not a good candidate for reduction. We can also check the static evaluation to see if reducing is a good idea (if the static evaluation is currently below alpha we should probably not reduce any moves).
There are a few more considerations that can be done as well and I am currently experimenting with what works best for Mediocre. I will get back with a more thorough analysis.
Currently Mediocre uses a fixed fraction of the time on every move (2.5% of the remaining time plus half the increment). This is a very crude way of handling the time assignment.
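For reference, the current fixed scheme as described above amounts to a one-liner. The class and method names here are my own, not Mediocre's:

```java
public class TimeAllocator {
    /** 2.5% of the remaining clock plus half the increment, in milliseconds. */
    public static long millisForMove(long timeLeftMillis, long incrementMillis) {
        return timeLeftMillis * 25 / 1000 + incrementMillis / 2;
    }
}
```

So with one minute left and a one-second increment the engine would spend two seconds on the move, whatever the position looks like.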
I have been looking at a number of ways to make the time management more dynamic: different time usage for different kinds of positions, using more time if the evaluation drops drastically after an iteration, and spending a bit of extra time when coming out of the opening book.
Also using the best move from aborted searches (where we ran out of time and exited the search) is an option as pointed out to me by Greg Schmidt.
All in all there are a lot of things to consider when deciding how much time to use for a move, and the best part is we can have very advanced ways of determining this, since it is only done once per search and will not have a big impact on the total time usage.