Feels like AI is cheating?
Comments
-
Thanks for the link @madwren!
-
The point of all this, I'm sure, is to spread awareness that the game is not random when it should be. It doesn't matter that the player gets waterfalls just like Greg does. What matters is that the AI gets to choose to make that happen, and that is cheating, especially when it does so selectively to serve certain situations. Cascades are most definitely generated accordingly, and that is not okay, especially for people who spend a ton of money on this game and spend entire weekends playing events, only to get **** when Greg feels like it.
-
I understand the frustration; I've had my fair share of losing to crazy cascades.
What I don't fully understand is what we are trying to say here. So my cascades are not random, they just seem random; it is all manipulated. Greg's cascades are not random either, they just seem random; again, all manipulated.
My card draw is not random, Greg's draw is not random, all manipulated.
So what, basically it is pre-determined how many matches I am going to lose when I sign up for an event? Or is it only pre-determined that I will face "hard, cheating" Greg once, twice, etc. per event? Is it also manipulated which decks I face?
So should I spend less time deck-building and just throw in some half-decent cards, since it does not really matter? When Greg turns on cheat mode, I will draw useless cards anyway and get no gem matches, while he has the perfect cards in hand and full cascades to cast them. Then, when cheat mode is off, I can have whatever I like in my deck; as long as it is not utterly useless 1/1 critters, I will win, because I will cascade like crazy and have my cards in hand.
I can definitely use the extra time I'm spending on deck-building and testing.
-
madwren said: Here's a great thread to save people some lookup time. https://forums.d3go.com/discussion/70603/the-lucky-greg/p1

That's a blast from the past. The best thing from that thread was a link to a study of the perception of luck by Steve Fawkner (the creator of the Puzzle Quest series). He rigged a match-3 game to various degrees and asked players how they perceived their luck compared to the AI's. He found he needed to rig the human player to be twice as "lucky" as the AI before players perceived themselves to be as lucky as the AI.

In other words, it's the nature of human psychology for beliefs in non-randomness favouring the opponent to arise in games which are random and fair. If nobody here thought the AI was rigged, it would strongly suggest that the game cheated gem drops to help the players, or crippled the gem drops for Greg.

It's a fascinating study that's directly relevant to this discussion, and I highly recommend anyone who has views on the topic read it: https://gemsofwar.zendesk.com/hc/en-us/articles/210140183-Does-the-AI-Cheat-
-
Volrak said: ...It's a fascinating study that's directly relevant to this discussion, and I highly recommend anyone who has views on the topic read it: https://gemsofwar.zendesk.com/hc/en-us/articles/210140183-Does-the-AI-Cheat-
I made one attempt to track Greg's mana gains compared to mine. What I found was that I out-gained him on mana by 2 to 1 or more in all but a couple of terrible matches. I quit tracking after 20 matches because I had confirmed what I expected to find. After looking at the results in this study, I have to wonder whether someone else would have come to a different result with the same data I had.
-
madwren said:
JamesGam said: What bugs me is that people are willing to stand on the side of RNG and adamantly negate other potential possibilities, despite there being information from other fields/games/applications/companies that specifically manipulate this data in their favor. And it's rather funny to see that, for the most part, both sides of this debate are incapable of really proving anything. Nevertheless, for all we know, all of us could be correct - they combined RNG and manipulated RNG.
To pivot off your comment, what bugs me is that we seem to get a post every month alleging that the AI is cheating, yet none of the accusers are willing to provide anything but anecdotal evidence (that often conveniently disregards that players experience the same cascade/gem behavior). Why wouldn't one be on the side of RNG?
The burden of proof lies on the person who makes the claim, and those claims are always the same.
Here's a great thread to save people some lookup time. https://forums.d3go.com/discussion/70603/the-lucky-greg/p1

I'd like to think I'm a reasonable person. So you are right: why wouldn't anybody be on the side of RNG? Hell, even I want to be - it would make my life easier.

Also, I agree about needing proof for claims. This is my fault. I merely tried to use what information I had available and logically deduce from it, without going through the trouble of recording my games, isolating variables, etc. Personally, I thought I did a relatively good job, regardless of whether you agree or not (I would actually appreciate feedback from anyone on this so I can re-evaluate my thought process for the future).

And thank you for that thread link. I had totally missed it, and had no idea this issue was such a recurring one, so I apologize if you or others feel like you're beating a dead horse. I read the entire post; from about the end of page 7 I started to skim. I noticed I didn't even post in that thread. I vaguely remember that time period, and I'm pretty sure I didn't have much of an issue with the AI back then. If I'm not mistaken, there were specific AI behavioral trends that could be worked around to minimize the AI's cascades at the time. The first few handfuls of games were brutal, but after that it was manageable.

There are a lot of things I want to say, but I'll break it up into multiple posts.
-
JamesGam said: ...
2. For the most part, the AI lacks a lot of skill (i.e. it can't use card mechanics; it orders cards in hand as creature > support > spell). **AI cycling is a recent feature and will not be considered for now.
2.a. In the past, the AI has self-targeted spells that clearly should be used on the opponent. **I believe quite a bit of this has been resolved.
2.b. Aside from cards that require certain criteria [Fraying Omnipotence / Beacon Bolt (no spells in yard)], the AI will cast cards as they fill with mana. **Sometimes the AI cannot cast card draw/fetch cards unless a slot in its hand is freed.

> Based on this information, we know that the AI does not have complex capacities. In other words, each card is not programmed to be used in a specific way - there are no algorithmic pathway charts (google "adult cardiac arrest algorithm"). Therefore, since there is very little AI "skill", another approach must be used to enhance the AI's "skill" and attain "hard mode". But first we have to define "hard mode", or more importantly, what counts as a "hard" match. We don't know what the devs used as a definition, but I imagine it would be something along the lines of "getting more threats out" than the opponent (the player). Therefore: find a way for the AI to cast more cards in order to increase difficulty, since the AI and individual cards do not have complex capacities.

3. The AI never chooses the cards in its deck.
3.a. Cards have been worded as "opposing" instead of "target" in a lot of cases, to prevent the AI from targeting its own creatures or itself.

> Since the AI is incapable of choosing its own deck, the "hard mode" AI needs to be able to use any deck. Therefore, a method needs to be employed that allows the AI to use any potential card in the MTGPQ collection. And since complex individual programming of cards (see 3.a.) would take far too many resources, including time, a broad-range approach is needed.

Based on this short list, it logically makes sense to employ a gem-cascading AI as the "hard mode" AI. The massive mana generation allows the AI to create enough threats to mimic a challenging match. This broad-range approach also self-compensates for variables such as PW, deck, etc. Despite this, there are so many moving pieces and variables that it is near impossible for players to actually isolate them. And, as is human nature for most, the path of least resistance will be the one chosen, i.e. "it's all RNG". Whoops, this last part was a bit off topic; the point is that there are tons of variables that need to be isolated.

However, the question you need to ask yourself is: "how do they do this?" It is hard to manipulate the AI's programming. So what if they manipulated the formula behind the gem board layout and skyfalling gems instead? What if they used the environment surrounding the AI to create the illusion of a "hard mode" AI? After all, if the player can't get mana to cast their cards while the AI gets tons of mana, we have ourselves a challenge. But it's not really a fair challenge, is it? You can't really blame someone for feeling cheated...

The AI isn't cheating. Instead, it has been provided a more advantageous environment. I believe the term "cheating" is really clouding this discussion. The term I would like to use is AI advantage, or handicap (which, I'm sure many can agree, the AI needs).

Some examples (a purely hypothetical code sketch of this kind of biased RNG follows at the end of this post):
1. In the presence of "hard mode" AI, the starting gem layout has a 20% increased chance of being in favor of the AI's primary colors.
2. In the presence of "hard mode" AI, AI gem matches have a 20% increased chance of "skyfalling" more colored gems corresponding to the most prevalent gem color on the board.
3. In the presence of "hard mode" AI, the AI has a 20% chance to match gems with at least one corresponding match-3 cascade. (more AI behavior than environmental)
4. In the presence of "hard mode" AI, the battle has a 20% increased chance of "skyfalling" gems of the AI's primary colors for "x" consecutive turns.
5. In the presence of "hard mode" AI, the player-side deck has a 20% increased chance of drawing the same card.

So on and so forth. The AI is being helped in order to mimic a challenging battle. Otherwise, the AI performs business as usual, whether it's the gem-matching noob "easy mode" AI or the gem-matching semi-pro "hard mode" AI. I believe the two arguments in this debate are: idealistic RNG vs. modified RNG. Technically, they are both RNG, and this "hard mode" AI advantage doesn't have to lie outside of that.
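For concreteness, here is what one of these hypothetical biases could look like as code - a minimal Python sketch under my own invented assumptions (the colour list, the flat base weights, and the bias knob are all made up; nothing here is from the actual game):

import random

# Invented, simplified colour pool - not the game's real gem set.
COLOURS = ["W", "U", "B", "R", "G", "C"]

def skyfall_gem(favoured=None, bias=0.20):
    """Draw one new gem colour; each `favoured` colour gets `bias` extra
    relative weight (e.g. +20% here), a skew invisible in any one game."""
    weights = {c: 1.0 for c in COLOURS}
    for c in favoured or []:
        weights[c] *= 1.0 + bias
    pool = list(weights)
    return random.choices(pool, weights=[weights[c] for c in pool], k=1)[0]

A nudge this small would be almost impossible to spot in a single match, which is exactly why the "tons of variables" above make it so hard to isolate.

-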
Alright, so, after much thought about all this: honestly, to me, it really doesn't matter either way - whether the AI is provided an advantage (modified RNG) or it's all idealistic RNG. I mean, it won't change anything; they are still going to use that code and make changes to it as necessary. I just wanted to know the answer. Specifically, I wanted to know in order to make sense of this madness that is RNG. And it would be nice to know that some of those bad games had a reason for existing beyond just RNG; but I think my time would probably be better spent just playing the game. After all, whether I know or not, I'm still going to run into good games and bad games. Though the debate and thought exercise were rather enjoyable. I even came up with some weird thought processes (rough drafts) like the following. So many questions, so few answers. Madness.

Brigby said: ...The AI is primarily being toned down for content aimed towards newer players. Events will still have strong AI, especially the further into the event players go, however not to the extent that Ixalan had.

1. The statement itself isn't a bad thing, but for the sake of analysis, let's take a look. To me, this reads: as you play more of an (unspecified) event, the likelihood of running into a hard-mode AI will increase. That is certainly not the fair, idealistic RNG everyone has in mind, but rather a modified RNG. It proves nothing, but it does suggest the existence of modified RNG in places we would not expect.

2. Okay, so the AI difficulty is different for certain events. If the AI has been toned down for events geared towards newer players, does that mean I am less likely to encounter the hard-mode AI there, or that it has been eradicated for that event? But what if those toned-down AIs are actually operating on true RNG, and therefore the hard-mode occurrence is simply lower (since the stars have to align), whereas for events not geared towards newer players, the hard-mode AI occurrence RNG has been modified?

3. Under the assumption that the AI RNG, the gem board layout RNG, the gem skyfall RNG, and the player-side deck shuffling/card draw RNG are all separate: are there really enough combinations that we should end up running into the "hard mode" AI as frequently as we do? It feels like the frequency would be much lower if true RNG were in place, since there are so many different moving parts; how is it achieved? Are some of these linked together?

4. Is the "hard mode" AI really that good, such that its matches and cascades feel that miraculous? Is the true potential of an AI's ability to create seemingly god-level phenomena?

5. What is this hard-mode AI, anyway? A hard-mode AI is really an easy-mode AI if it can't cast anything. How can the "hard mode" AI only be smarter at matching gems, when matching smarter doesn't always lead to casting enough cards to be consistently considered hard?
-
If something is bugging you, you could always ask the developers. It's what Oktagon Q&A is for. The form to submit questions is linked from this thread: https://forums.d3go.com/discussion/83518/oktagon-q-a-session-october-2020
-
Volrak said: ...It's a fascinating study that's directly relevant to this discussion, and I highly recommend anyone who has views on the topic read it: https://gemsofwar.zendesk.com/hc/en-us/articles/210140183-Does-the-AI-Cheat-
Yes, that Fawkner post was incredibly enlightening. I probably should have highlighted that myself; thanks for making sure that part of the thread wasn't missed.
-
I don't doubt that the gem and 40-card library generation are all random; however, they still have to program Greg to make a move, and how he decides which gem to swap is the real question here. From the links provided here, where the devs admitted there is a smarter AI called Manu, and Brigby saying they had to tone down the AI to make it fair, it's clear that Greg was deliberately programmed and is capable of making "better" moves.
I wrote a bot for the Slack chat/messaging app that my team uses to collaborate. It solves a given MTGPQ puzzle board and provides a list of valid moves. Here is the pseudo-code for my algorithm:
function get_all_valid_moves(puzzle_board):
    for all gems in puzzle_board:
        if valid_move:
            append move to move_list
    return move_list
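This isn't the author's actual bot - just a minimal runnable Python sketch of the same idea, under my own assumptions: a square board stored as a list of lists of single-character colour codes, and "valid move" meaning an adjacent swap that creates a run of three or more.

def makes_match(board, r, c):
    """True if the gem at (r, c) sits in a horizontal or vertical run of 3+."""
    colour = board[r][c]
    n = len(board)
    for dr, dc in ((0, 1), (1, 0)):           # horizontal, then vertical
        run = 1
        for sign in (1, -1):                  # extend in both directions
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == colour:
                run += 1
                rr, cc = rr + sign * dr, cc + sign * dc
        if run >= 3:
            return True
    return False

def get_all_valid_moves(board):
    """Return every adjacent swap ((r1, c1), (r2, c2)) that makes a match."""
    n = len(board)
    moves = []
    for r in range(n):
        for c in range(n):
            for dr, dc in ((0, 1), (1, 0)):   # try swapping right, then down
                r2, c2 = r + dr, c + dc
                if r2 >= n or c2 >= n or board[r][c] == board[r2][c2]:
                    continue
                board[r][c], board[r2][c2] = board[r2][c2], board[r][c]
                if makes_match(board, r, c) or makes_match(board, r2, c2):
                    moves.append(((r, c), (r2, c2)))
                board[r][c], board[r2][c2] = board[r2][c2], board[r][c]  # undo
    return moves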
Obviously, this is enough for Greg to make a valid move, but not enough to make a "better" one. For that, Greg needs to know how much each move is worth. This was the first problem I ran into. To solve it, I had to calculate the total mana gain of each move. (I calculated cascades as well, but only from the gems already on the board; I did not generate new gems to fill the gaps left by the matched gems.)
function compute_total_mana_gains(move_list):
    for all moves in move_list:
        calculate total_mana_gains
        append total_mana_gains to mana_gains_list
    return mana_gains_list
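Again as a sketch only, here is a runnable version of that scoring pass, reusing the toy board from the previous snippet. To keep it short it scores just the gems cleared by the immediate match as a stand-in for mana - unlike the author's bot, it skips even the on-board cascade step:

def matched_cells(board, r, c):
    """All cells in runs of 3+ through (r, c), as a set of coordinates."""
    colour = board[r][c]
    n = len(board)
    cells = set()
    for dr, dc in ((0, 1), (1, 0)):
        run = [(r, c)]
        for sign in (1, -1):
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == colour:
                run.append((rr, cc))
                rr, cc = rr + sign * dr, cc + sign * dc
        if len(run) >= 3:
            cells.update(run)
    return cells

def compute_total_mana_gains(board, move_list):
    """Return [(move, score), ...], where score = gems cleared by the swap."""
    scored = []
    for (r1, c1), (r2, c2) in move_list:
        board[r1][c1], board[r2][c2] = board[r2][c2], board[r1][c1]
        cleared = matched_cells(board, r1, c1) | matched_cells(board, r2, c2)
        board[r1][c1], board[r2][c2] = board[r2][c2], board[r1][c1]  # undo
        scored.append((((r1, c1), (r2, c2)), len(cleared)))
    return scored

Sorting that output, e.g. sorted(scored, key=lambda ms: ms[1]), gives exactly the ordering described next.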
Now I can sort the moves by mana gains and identify the lowest-scoring move and the highest-scoring move.
Then the next problem I ran into: after deciding which move to make, I am still not guaranteed that I actually picked the lowest- or highest-scoring move, because cascades from newly generated gems dropping into the board can increase a move's score further. In other words, without knowledge of what gems are going to drop, my move_list is not 100% correctly sorted from best to worst.
Until the devs confirm exactly how they did it, as a programmer myself, here's my best guess as to how it was done:
1) The devs sorted the possible moves from best to worst (without knowledge of what was going to drop from above). Greg was given a list of weights (or probabilities) to determine which move to pick. An "easy" Greg has a high chance of picking a low-scoring move; a "hard" Greg has a high chance of picking a high-scoring move (see the sketch after this list).
2) Testing (or analysis of actual gameplay) determined that every move has a random chance of increased mana gains from new gems dropping from above. Therefore a low-scoring move occasionally made "easy" Greg win matches through lucky cascades, while "hard" Greg could be unlucky and see high-scoring moves turn out badly. The result: "easy" Greg and "hard" Greg were still indistinguishable from each other, and new players gave up quickly when Greg got lucky.
3) The devs decided that, in order to make "easy" Greg < "hard" Greg instead of "easy" Greg ≈ "hard" Greg, Greg needs to know how the new gems dropping in will affect the total mana gains. Only then will the move list be 100% correctly sorted from worst move to best move. "Easy" Greg then chooses a bad move more often, while "hard" Greg picks the best move more often. As a side effect of this new method, "hard" Greg ends up picking the move with the most cascades more frequently than before, because moves with cascades always get sorted high. "Easy" Greg now hardly gets any cascades, but new players love him.
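To make guess 1 concrete, here is one way that weighted pick could look - a Python sketch where the rank-based weighting and the k knob are entirely my invention, not anything the devs have confirmed:

import random

def pick_move(scored_moves, hard=True, k=3):
    """Pick a move from [(move, score), ...] with a rank-biased weighting:
    "hard" Greg skews toward the best moves, "easy" Greg toward the worst.
    k controls how sharp the skew is."""
    ranked = sorted(scored_moves, key=lambda ms: ms[1])  # worst ... best
    n = len(ranked)
    weights = [(i + 1) ** k if hard else (n - i) ** k for i in range(n)]
    move, _score = random.choices(ranked, weights=weights, k=1)[0]
    return move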
Regardless, I don't think this is cheating at all. If true, it simply allows the game to have a consistently bad or good AI, each distinguishable from the other. Cheating would be manipulating the gem drops so that Greg gets matches all the time from the new drops, or manipulating your library so that all your OP cards are at the bottom. A Greg that knows what's going to drop is still very beatable, because the odds of cascades coming from above are low anyway (some multiple of a 1/6 chance).
I think the solution is for the devs to come out and tell us how Greg is programmed to select gem swaps, and then allow us to choose which Greg to play against. I myself would love to play against hard Greg, especially if they also teach him how to prioritize the cards in his hand.
-
Some interesting thoughts, and I like your approach of creating a simulator.

Larz70 said: To solve this, I have to calculate the total mana gains (I calculated cascades as well but only from the gems already on the board.)

This is already better than Greg - unlike Manu, Greg doesn't look at (on-board) cascades when evaluating a match. The idea that, some time post-Manu, the devs' first thought was to implement controversial off-screen cascade evaluation (plus the gem-buffering functionality required to support it) while blithely ignoring the on-screen cascade evaluation they'd already implemented seems kind of inexplicable.
-
Greg has some other quirks too (a toy sketch of this kind of priority scoring follows the list):
Matching the opponent's coloured gems
Matching gems that set up a direct combination
Matching an opponent's support or activated gem
Matching loyalty over coloured gems
Matching something totally stupid, like the colour that is least useful
Matching gems the wrong way, so that a simple two-step combination is ignored
Ignoring 5-matches
Ignoring tokens in favour of nontokens when choosing what to destroy, even if the tokens will kill him next turn
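If priorities like these exist, the simplest implementation would be a bonus table added to the move score. A toy Python sketch, with categories and values invented purely for illustration:

# Hypothetical bonus values - nothing here is from the real game.
PRIORITY_BONUS = {
    "loyalty": 4.0,          # loyalty gems over coloured gems
    "activated_gem": 3.0,    # popping an opponent's activated gem
    "opponent_colour": 2.0,  # denying the opponent's colours
    "own_colour": 1.0,
}

def heuristic_score(move_tags):
    """Sum the bonuses for the category labels that apply to a move."""
    return sum(PRIORITY_BONUS.get(tag, 0.0) for tag in move_tags)

# e.g. heuristic_score({"loyalty", "opponent_colour"}) == 6.0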
-
In the past, the adjustments to Greg's matching priorities tended to match whatever new mechanic was being introduced at the time. When Zendikar was released, Greg prioritized Landfall matches. With Amonkhet, he also started prioritizing activated gems. I wouldn't be surprised if, when clues were introduced, he started taking token supports into account as well. Some of those priorities have been toned down over the years, usually as a result of feedback (complaints?) on these forums. For example, Greg no longer religiously pops activated gems like he used to.
-
If you guys think Greg is cheating now, wait until you play Rise of Adventure and have to deal with random landfalls causing effects at the worst possible time. =p