
Delay user storage cluster sync event to after completion#48697

Open
pedroigor wants to merge 1 commit into keycloak:main from pedroigor:issue-48666

Conversation

@pedroigor
Contributor


Closes keycloak#48666

Signed-off-by: Pedro Igor <pigor.craveiro@gmail.com>
@pedroigor pedroigor requested a review from a team as a code owner on May 4, 2026 18:52
Member

@ahus1 ahus1 left a comment


Thank you for the change. I think it is incomplete, and this is probably also the reason why the information read on a second node is incomplete.

The current event notification is wrapped inside a new transaction. So when the inner transaction ends, the outer transaction hasn't yet written the information to the database.

I suppose that in the following snippet, the runJobInTransaction() needs to be removed to make this work as expected.

runJobInTransaction(sessionFactory, session -> {
    RealmModel realm = session.realms().getRealm(realmId);
    if (realm == null) {
        return;
    }
    session.getContext().setRealm(realm);
    if (model != null) {
        refreshScheduledTasks(session, model, removed);
        notifyStoreSyncClusterUpdate(session, realm, model, removed);
    } else {
        getUserStorageProvidersStream(realm).forEachOrdered(fedProvider -> {
            refreshScheduledTasks(session, fedProvider, removed);
            notifyStoreSyncClusterUpdate(session, realm, fedProvider, removed);
        });
    }
});

The snippet above was added in commit 5e3c0b6, which might need a different solution. That commit updates some enlistAfterCompletion blocks for removeRealm() and importRealm(). Those might no longer be necessary, as those events are now delivered after the commit to the database has taken place.
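The timing issue described above can be illustrated with a small self-contained sketch (all names here are hypothetical toys, not Keycloak APIs): a notification fired from inside the transaction runs before the pending write reaches the database, so another node reads nothing, while a callback enlisted to run after completion observes the committed data.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of afterCompletion semantics (hypothetical names, not
// Keycloak code): a notification fired eagerly inside the transaction sees
// the database BEFORE the pending write, while a callback enlisted to run
// after completion sees the database AFTER the commit.
public class AfterCompletionSketch {

    // Minimal stand-in for a transaction that buffers a write and
    // collects afterCompletion callbacks.
    static class ToyTransaction {
        private final List<Runnable> afterCompletion = new ArrayList<>();
        private final List<String> db; // shared "database"
        private String pendingWrite;

        ToyTransaction(List<String> db) {
            this.db = db;
        }

        void write(String value) {
            pendingWrite = value; // buffered, not yet visible to readers
        }

        void enlistAfterCompletion(Runnable callback) {
            afterCompletion.add(callback);
        }

        void commit() {
            if (pendingWrite != null) {
                db.add(pendingWrite);            // flush write to the DB
            }
            afterCompletion.forEach(Runnable::run); // only then notify
        }
    }

    // Returns what a "second node" observes when the notification fires.
    static List<String> run(boolean notifyAfterCompletion) {
        List<String> db = new ArrayList<>();
        List<String> observedByOtherNode = new ArrayList<>();
        ToyTransaction tx = new ToyTransaction(db);
        tx.write("provider-config-v2");

        Runnable notify = () -> observedByOtherNode.addAll(db);
        if (notifyAfterCompletion) {
            tx.enlistAfterCompletion(notify); // fires after the DB write
        } else {
            notify.run(); // inner-transaction style: fires before the commit
        }
        tx.commit();
        return observedByOtherNode;
    }

    public static void main(String[] args) {
        System.out.println("eager notify sees:      " + run(false)); // []
        System.out.println("afterCompletion sees:   " + run(true));  // [provider-config-v2]
    }
}
```

The eager case returns an empty list, mirroring the stale read on the second node; the afterCompletion case observes the committed value, which is the behavior the review asks for.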

@ahus1 ahus1 self-assigned this May 5, 2026
@pedroigor
Contributor Author

@ahus1 Do you mean in order to fix the behavior when propagating the full user storage provider model within the cluster event?



Development

Successfully merging this pull request may close these issues.

User Storage Task sync event can lead to deadlocks and outdated information

2 participants