I would find a Diplomacy AI very interesting, but I also see some problems in building one.
One of the problems is that Diplomacy has many different Nash equilibria among strategies; by this I mean that there is no single best strategy, because what works best depends on the psychology of the other players.
For example, if none of the other 6 players on the board puts any effort into keeping allies or honoring their word, then it is best for you to act the same way, and if you do, everyone on the board is following the best strategy available against the others. If, however, the other players do try to be somewhat honest and punish liars, then the best strategy is something different. These are just examples; the bottom line is that the best strategy depends on who your opponents are, and no single strategy beats all others.
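The two scenarios above can be sketched as a toy two-strategy model. All payoff numbers here are hypothetical, chosen only to illustrate that the best response flips depending on the opponent population; real Diplomacy payoffs are of course far more complex.

```python
# Toy model: the best strategy depends on the opponent population.
# Payoff numbers are illustrative assumptions, not derived from the game.

PAYOFF = {
    # (my_strategy, opponent_strategy): my payoff per interaction
    ("betray", "betray"): 1,  # mutual backstabbing: small gains for both
    ("betray", "honest"): 0,  # honest players punish known liars
    ("honest", "betray"): 0,  # keeping your word gets exploited
    ("honest", "honest"): 3,  # stable alliances pay off
}

def best_response(p_honest: float) -> str:
    """Best strategy against a population in which a fraction
    p_honest of opponents play 'honest' (and punish liars)."""
    def expected(mine: str) -> float:
        return (p_honest * PAYOFF[(mine, "honest")]
                + (1 - p_honest) * PAYOFF[(mine, "betray")])
    return max(["honest", "betray"], key=expected)

# Against a board of backstabbers, backstabbing is the best response;
# against honest punishers, honesty is.
print(best_response(0.0))  # betray
print(best_response(1.0))  # honest
```

Note that in this toy matrix both "everyone betrays" and "everyone is honest" are equilibria: nobody gains by unilaterally switching. That is exactly why no single strategy can be called best in isolation.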
This means it is possible to build an AI that is extremely difficult to play against when all the other players are AI as well, but easy to play against when only some of them are. Likewise, it is possible to build an AI that is very strong yet does not prepare you for what a match against top human players would look like.
If we want an AI that reflects (top) human play, we should therefore build an AI that performs best against 6 other humans, NOT against 6 other AIs, since the latter could produce strategies that fail when most opponents are human (even though they would be very hard for a human to beat when most opponents are other AIs).
This also makes playtesting difficult: you can't simply let the AI play thousands of games against itself and learn that way. The AI has to learn from playing against humans if you want the end result to actually hold up in a human environment.