I believe I saw an issue within the last few months where a ton of apptokens were created because of a bug that was later fixed. We have roughly 216k apptokens in one environment, and they're dragging the database down because lookups do full table scans (adding an index would be a nice update!).
My question is: how do we clean this up? It looks like there are constraints tied to the Job table. I can probably come up with something, but I'd feel better if someone else has already done this…
I tried revoking a single token through a job… putting in some details and running it on occasion. It can grab the token, but the revoke command throws a gRPC error no matter what I do.
I hope it is as easy as setting the Revoked column from 0 to 1… that would make this simple…
You can set the Revoked column from 0 to 1. It's all we look at to determine whether the token is revoked. The revoked date is what the groom job uses to decide when to clean them up; I think it deletes them after 30 days.
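A rough sketch of that update, with the table name (AppTokens), the date column (RevokedDate), and the filter window all assumed for illustration; only the Revoked flag itself is confirmed above:

```sql
-- Sketch only: dbo.AppTokens, RevokedDate, and CreatedDate are assumed names;
-- Revoked is the flag described above. Stamping the date lets the groom job
-- pick these rows up (reportedly ~30 days later).
UPDATE dbo.AppTokens
SET Revoked = 1,
    RevokedDate = GETUTCDATE()
WHERE Revoked = 0
  AND CreatedDate >= '2024-01-01';  -- placeholder window: replace with whatever identifies the bug-created tokens
```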
If you delete the tokens manually, you’ll need to restart the service because we cache them in memory to speed up validation.
I don't know that I'd necessarily trust the "last used" date/time stamp. For example, I have a token that is used literally every 10 minutes, and the last-used date/time stamp says it hasn't been used since March 20th.
If you're sure that they were accurately marked, go for it. Just wanted to point out that the last-used value may have shown NULL for tokens that were actually in use (meaning you may have marked tokens as revoked that shouldn't have been).
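Given that caveat, it might be worth eyeballing the candidates before flipping the flag. A sketch only, with LastUsedDate and CreatedDate as assumed column names:

```sql
-- Sketch only: LastUsedDate and CreatedDate are assumed column names.
-- Lists unrevoked tokens that look idle so they can be spot-checked first,
-- since the last-used value may be stale or NULL for tokens still in use.
SELECT Id, CreatedDate, LastUsedDate
FROM dbo.AppTokens
WHERE Revoked = 0
  AND (LastUsedDate IS NULL
       OR LastUsedDate < DATEADD(DAY, -90, GETUTCDATE()))
ORDER BY CreatedDate;
```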
We ran into the bug. We're on SQL Server. I revoked the token in the table, deleted the entries, and also reset the identity counter so we're back in the expected "normal" ID range.
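For anyone else on SQL Server, the shape of that cleanup is roughly the following; the table name and the ID range are placeholders, and the foreign-key note ties back to the Job table constraints mentioned earlier:

```sql
-- Sketch only: dbo.AppTokens is an assumed table name and the ID range is a
-- placeholder for the bug-created rows. Check for foreign keys from the Job
-- table first; dependent rows may need to be removed or updated before this delete.
DELETE FROM dbo.AppTokens
WHERE Id BETWEEN 100000 AND 316000;

-- Reseed the identity to the current max ID so new tokens continue
-- from the expected "normal" range.
DECLARE @maxId INT = (SELECT ISNULL(MAX(Id), 0) FROM dbo.AppTokens);
DBCC CHECKIDENT ('dbo.AppTokens', RESEED, @maxId);
```

And per the earlier note about in-memory caching, a service restart is still needed after deleting rows by hand.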