
Strategy - Anyone good at AI programming?

joojoobee
I wonder if one could train an AI with hundreds/thousands of bot scenarios to use MQ2 for EQ play. Now... THAT'S a bot army!

https://www.theverge.com/2018/7/4/17533898/deepmind-ai-agent-video-game-quake-iii-capture-the-flag

We use AI at my work a lot, training models on loads of domain-expert examples. After a few days' worth (a few hundred to a few thousand) of expert examples of something, the AIs are at least as good as mid-range humans at classification tasks. It's impressive. Sadly, I am not an AI programming expert or I'd be all over this for my bot team for fun.

JJB
 
Yeah, you could set up a neural network to train an EQ bot. You'd have to get people to play normally with the plugin enabled (if they use KISS or something, it will just reinforce current bot play). Potentially, you could gather a bunch of natural player logs to do a lot of the training, too.

However, I'm not sure it would totally be worth it, because EQ is pretty deterministic and a simple behavior tree or even a decision tree (which is what KISS is) would suffice with enough sophistication. Neural nets are for problems with too many variables to control by hand.
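To make the comparison concrete, here's a rough sketch of the kind of decision-tree logic a KISS-style macro encodes. Every field name, spell, and threshold below is made up for illustration; it's not the actual KISS ruleset or the MQ2 API:

```python
# A minimal sketch of decision-tree-style bot logic, as contrasted with a
# trained neural net. All state fields, spell names, and thresholds are
# hypothetical -- not the actual KISS rules or MQ2 API.

def choose_action(state: dict) -> str:
    """Pick one action per game tick from simple hand-coded rules."""
    if state["my_hp_pct"] < 30:
        return "cast:SelfHeal"
    if state["tank_hp_pct"] < 60:
        return "cast:HealTank"
    if state["target_alive"] and state["my_mana_pct"] > 20:
        return "cast:Nuke"
    if not state["target_alive"]:
        return "loot_or_med"
    return "idle"

# Example tick:
print(choose_action({"my_hp_pct": 85, "tank_hp_pct": 55,
                     "target_alive": True, "my_mana_pct": 70}))
# -> cast:HealTank
```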
 
To me, the nice thing would be something that learned, even from a set of KISS logs. From there, you let it genetically alter its game play until, over time, it learns to get better. One strategy is to have it mimic what humans (or even KISS bots) do; once you have that, you let it learn to improve. Two styles of AI.
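A minimal sketch of the "mimic" half, assuming you've already parsed KISS or player logs into (state, action) rows. The log format and feature names are invented, and scikit-learn is just one convenient library choice:

```python
# Behavioral cloning sketch: learn to imitate logged play.
# The (state, action) rows here are invented for illustration;
# scikit-learn is an assumed dependency, not anything KISS/MQ2 provides.
from sklearn.neural_network import MLPClassifier

# Each row: [my_hp_pct, tank_hp_pct, my_mana_pct, target_alive] -> action taken
X = [
    [85, 55, 70, 1],
    [85, 95, 70, 1],
    [25, 95, 40, 1],
    [90, 90, 80, 0],
]
y = ["HealTank", "Nuke", "SelfHeal", "Med"]

policy = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
policy.fit(X, y)

# The cloned policy now predicts the action a logged player would take:
print(policy.predict([[88, 50, 65, 1]]))  # likely "HealTank"
```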

If KISS were so deterministic, or such a "simple behavior tree", I'm not sure there'd be five years of KISS development with multiple branches by now. ;) "With enough sophistication"... uhm, I'm still waiting!

The problem, of course, with most AI is that you can't dissect what it learned. It's a black box. But people around my work are talking about AI designs where you can go back in and learn what the rules are.
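One "glass box" option along those lines: fit a decision tree to the same logged rows and read the learned rules back out. Same invented data as the sketch above, scikit-learn assumed:

```python
# Clone the logs into a decision tree instead of a neural net, then
# inspect the learned rules. Data and feature names are the same invented
# examples as above; scikit-learn is an assumed dependency.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[85, 55, 70, 1], [85, 95, 70, 1], [25, 95, 40, 1], [90, 90, 80, 0]]
y = ["HealTank", "Nuke", "SelfHeal", "Med"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a neural net, the fitted rules are human-readable:
print(export_text(tree, feature_names=[
    "my_hp_pct", "tank_hp_pct", "my_mana_pct", "target_alive"]))
```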

Anyway, fun to think about.

Happy July 4th everyone!

JJB
 
I'm going to say..... not anytime soon..... "the greater the number of DeepMind bots on a team, the worse they did."

Also, they mention OpenAI. The OpenAI blog has some interesting numbers, just to train the 5v5 teams for Dota 2:
"OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores"

That's a lot of horsepower for training.

The OpenAI blog is a very interesting read. https://blog.openai.com/openai-five/
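For the curious, the "Proximal Policy Optimization" in that quote centers on a surprisingly compact clipped objective. A minimal sketch of just that loss, with toy numbers in NumPy; this is the textbook formula (Schulman et al. 2017), not OpenAI's actual code:

```python
# Minimal sketch of PPO's clipped surrogate loss -- the core of what the
# quote describes. Batch values are made up for illustration.
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective from the PPO paper.
    Returns a loss to minimize (negative of the clipped objective)."""
    ratio = np.exp(new_logp - old_logp)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Toy batch: one action looked better than expected, one worse.
print(ppo_clip_loss(new_logp=np.array([-0.5, -1.2]),
                    old_logp=np.array([-0.7, -1.0]),
                    advantages=np.array([1.0, -0.5])))
```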
 
Might not be a fair comparison... since that was "ground up" learning from scratch, which is a different kind of AI than "mimicry" AI. Again, at my work we use mimicry AI: train on a day's worth of expert classifications and it ends up as good as mid-to-high-range experts at classification (in our hands it only needs a few hundred example images to find the "signature" in cancer diagnostics). That's mimicry AI.

So... you mimic the KISS algorithm with AI (no, it won't be perfect), but it won't take 180 years of play; it's watching and copying rules. Then, when it's good enough, you turn it over to the "get better" part of the AI system.
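A sketch of what that hand-off could look like: take the mimic policy's parameters and hill-climb with random mutations, keeping a change only when a scored play session improves. The fitness function below is a toy stand-in; scoring real EQ sessions is the hard part:

```python
# Sketch of the "get better" phase: simple (1+1) evolutionary hill-climbing
# on a policy's parameter vector. evaluate_fitness() is a stand-in -- in
# practice it would mean actually running the bot and scoring the session.
import numpy as np

rng = np.random.default_rng(0)

def evaluate_fitness(params):
    """Placeholder: higher is better. Real version = score a play session."""
    return -np.sum((params - 1.0) ** 2)  # toy objective, optimum at all-ones

params = np.zeros(8)            # imagine: weights of the cloned policy
best = evaluate_fitness(params)

for generation in range(500):
    candidate = params + rng.normal(scale=0.1, size=params.shape)  # mutate
    score = evaluate_fitness(candidate)
    if score > best:            # keep the mutation only if play improved
        params, best = candidate, score

print(best)  # climbs toward 0.0 as params approach the toy optimum
```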

Anyway, the AI experts at my work say it would work... but that's not saying it's easy. Or that I can do it! LOL...

Just ideas.
 
I've messed around with it quite a bit now. MarI/O (written in Lua) got me interested a few years ago. There are so many different approaches, and probably many more we haven't come up with yet. My first task with MarI/O was to adjust how fitness worked. Out of the box, the AI gains points for moving forward. This is cool and gets him to complete the level; however, he isn't going for the high score! It took a few months of tweaking the script to get a proper high-score fitness (there's a rough sketch of the idea after the links below). Tweaking sometimes resulted in worse genomes/generations, which was to be expected since I was asking the AI to do more. He ended up finding all sorts of fun oddities, though, and in the end maxed out his score on each level.

SethBling (the dude who wrote MarI/O):
https://www.youtube.com/watch?v=qv6UVOQ0F44

Method MarI/O is using:
https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies

Some MarI/O info:
https://www.engadget.com/2015/06/17/super-mario-world-self-learning-ai/

https://www.polygon.com/2017/11/5/16610012/mario-kart-mariflow-neural-network-video
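Roughly what that fitness rewrite amounts to, as promised above. The stats fields and weights are invented for illustration, and the "stock" formula is from memory, not MarI/O's exact code:

```python
# Sketch of the fitness-shaping idea: reward score, not just distance.
# MarI/O's stock fitness rewards rightward progress; these weights and the
# EpisodeStats fields are invented stand-ins, not MarI/O's actual code.
from dataclasses import dataclass

@dataclass
class EpisodeStats:
    distance: int      # how far right Mario got
    score: int         # in-game score (coins, stomps, etc.)
    finished: bool     # reached the end of the level?
    frames: int        # time spent

def stock_fitness(s: EpisodeStats) -> float:
    """Roughly what MarI/O ships with (from memory; exact constants differ):
    forward progress minus a time penalty."""
    return s.distance - s.frames / 10

def high_score_fitness(s: EpisodeStats) -> float:
    """Reweighted: score dominates, with a completion bonus so evolution
    doesn't abandon finishing the level while farming points."""
    return s.score * 5 + s.distance + (1000 if s.finished else 0)

run = EpisodeStats(distance=3200, score=12450, finished=True, frames=5400)
print(stock_fitness(run), high_score_fitness(run))
```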
 
Awesome! Yeah... mimicry then evolution is the key. I've been surprised how little information is needed to "mimic". A question, I guess, would be how much of the MQ2 API you'd want to feed into it. Positions, etc. for sure. HP, mana, etc.
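A sketch of what "feeding the MQ2 API into it" could look like as a feature vector. The field names are hypothetical stand-ins, not real MQ2 TLO/API names; the interesting design question is which state you expose to the learner:

```python
# Sketch of turning game state into a feature vector for a learner.
# These MQ2-ish fields are hypothetical stand-ins -- the real MQ2 API
# names differ; the point is choosing which state the model can see.
def observe(me, target, tank):
    return [
        me["hp_pct"], me["mana_pct"],
        tank["hp_pct"],
        target["hp_pct"] if target else 0.0,
        target["distance"] if target else 999.0,
        1.0 if target else 0.0,           # do we have a live target?
        len(me["nearby_mobs"]),           # crude "danger" count
    ]

obs = observe(
    me={"hp_pct": 92, "mana_pct": 64, "nearby_mobs": ["a_gnoll"]},
    target={"hp_pct": 45, "distance": 18.5},
    tank={"hp_pct": 71},
)
print(obs)
```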

What character did you train?
 