We’re aware of ongoing federation issues for activities being sent to us by lemmy.ml.
We’re currently working on the issue, but we don’t have an ETA right now.
Cloudflare is reporting 520 - Origin Error when lemmy.ml tries to send us activities, but those requests don’t seem to arrive at our proxy server at all. Federation with all other instances is working fine so far, though we have seen a few requests unrelated to activity sending occasionally report the same error.
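For anyone curious what this looks like from the outside, here’s a minimal sketch of the kind of check involved, assuming a placeholder origin IP and Python’s requests library; an unsigned test activity will be rejected by Lemmy with a 4xx regardless, the point is only whether it reaches the origin at all or dies with a 520 at Cloudflare:

```python
# Rough diagnostic sketch of the symptom described above: send the same (unsigned,
# throwaway) activity to the shared inbox through Cloudflare and directly to the
# origin, and compare what comes back. The origin IP below is a placeholder.
import requests

ACTIVITY = '{"@context": "https://www.w3.org/ns/activitystreams", "type": "Like"}'
HEADERS = {"Content-Type": "application/activity+json"}

# Through Cloudflare (the path lemmy.ml's deliveries take).
via_cf = requests.post("https://lemmy.world/inbox", data=ACTIVITY, headers=HEADERS, timeout=30)
print("via Cloudflare:", via_cf.status_code)  # a 520 here would match the reported symptom

# Straight to the origin, bypassing Cloudflare (placeholder IP; TLS verification is
# skipped because the certificate won't match an IP literal).
via_origin = requests.post(
    "https://203.0.113.10/inbox",
    data=ACTIVITY,
    headers={**HEADERS, "Host": "lemmy.world"},
    timeout=30,
    verify=False,
)
print("direct to origin:", via_origin.status_code)  # expected 4xx for an unsigned activity
```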
Right now we’re about 1.25 days behind lemmy.ml.
You can still manually resolve posts from lemmy.ml communities, or comments by lemmy.ml users in our communities, to make them show up here without waiting for federation, but this obviously won’t replace regular federation.
We’ll update this post when there is any new information available.
Update 2024-11-19 17:19 UTC:
Federation has resumed and we’re down to less than 5 hours of lag; the remainder should be caught up soon.
Unfortunately, the root cause has still not been identified.
Update 2024-11-23 00:24 UTC:
We’ve explored several approaches to identify and/or mitigate the issue, including replacing our primary load balancer with a new VM, updating HAproxy from the latest version packaged in Ubuntu 24.04 LTS to the latest upstream version, and finding and removing a configuration option that may have prevented logging of certain errors. Unfortunately, beyond ruling out various potential causes, we still haven’t made any real progress.
We’re currently waiting for the lemmy.ml admins to be available to reset the federation failures at a time when we can capture some traffic, to get more insight into what is actually hitting our load balancer, as the problem seems to be either between Cloudflare and our load balancer or within the load balancer itself. Due to real-life time constraints we weren’t able to find a suitable time this evening; we expect to continue with this tomorrow during the day.
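For reference, the capture would be along these lines. This is only a sketch, not our exact tooling: it needs root and scapy, and the two CIDR ranges are just examples from Cloudflare’s published list at cloudflare.com/ips. It counts inbound TLS connection attempts from Cloudflare ranges to see whether the failing requests reach the machine at all:

```python
# Sketch only: count inbound TLS connection attempts (SYNs to port 443) from
# example Cloudflare ranges on the load balancer. Requires root and scapy;
# use the full range list from cloudflare.com/ips in practice.
from collections import Counter

from scapy.all import IP, sniff

hits = Counter()

def record(pkt):
    hits[pkt[IP].src] += 1

BPF = (
    "tcp dst port 443 and tcp[tcpflags] & tcp-syn != 0 "
    "and (src net 173.245.48.0/20 or src net 103.21.244.0/22)"
)

sniff(filter=BPF, prn=record, store=False, timeout=60)
print(hits.most_common(10))
```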
As of this update we’re about 2.37 days behind lemmy.ml.
We are still not aware of similar issues on other instances.
this comment section is not a place to rant about other instances
The hypocrisy of people exaggerating the scale of the problems and piling on about other instances in their manic drive to turn this place into an echo chamber is depressing.
I don’t see why not.
Yeah a swept clean comment section looks way better…
I mean… if you wanted to defederate from lemmy.ml, I’d be fine with that.
I wouldn’t.
It’s certainly an efficient way to resolve the problem.
There’s too much content on lemmy.ml to defederate. We’d lose like a quarter of all the content.
We’d lose the .ml version of that content.
But I thought the whole point of federation was that if mods or communities become problematic, you could always create your own. Then all the non-problematic people will move to that.
Is that not the very foundation of the concept of the fediverse?
It’s not wrong, but I don’t regard Lemmy.ml as being problematic enough to defederate from. Their moderation practices are questionable and their user base is annoying but it’s otherwise generally tolerable. People can block the instance if they don’t want to see content from it.
I can think of a solution.
Lemm.ee is not loading at all, giving a 503 error. Possibly related?
FWIW the communities on walledgarden.xyz have been having federation issues to lemmy.ml for a few days as well.
Since we were/are working through some things with our host I didn’t want to bother anyone from lemmy.ml about it, but it’s a thing. AFAIK federation is otherwise working normally.
Good luck getting it figured out and resolved!
Do these things usually happen from time to time?
I’ve noticed some lemmy.ml communities looking surprisingly “dead” some days here and there but not thought much of it.
I wouldn’t say usually, but they can happen from time to time for a variety of reasons.
It can be caused by overly aggressive WAF (web application firewall) configurations, proxy server misconfigurations, bugs in Lemmy, and probably a few other things.
Proxy server misconfiguration is a common one we’ve seen other instances struggle with from time to time, especially when federation works between Lemmy instances but e.g. Mastodon -> Lemmy doesn’t, because the proxy configuration only matches Lemmy’s specific behavior rather than any spec-compliant request.
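To illustrate that kind of mismatch (a sketch against a placeholder hostname, not any instance’s actual setup): ActivityPub allows both media types below for activity delivery, and a proxy rule that only matches the one Lemmy itself sends can end up misrouting deliveries from other software.

```python
# Sketch: POST the same body with two media types used for ActivityPub delivery.
# A proxy that only routes the first one to the backend may answer the second
# with HTML or a 404 instead. The hostname is a placeholder.
import requests

BODY = '{"@context": "https://www.w3.org/ns/activitystreams", "type": "Follow"}'
MEDIA_TYPES = [
    "application/activity+json",  # what Lemmy itself sends
    'application/ld+json; profile="https://www.w3.org/ns/activitystreams"',  # also spec-compliant
]

for mt in MEDIA_TYPES:
    r = requests.post(
        "https://example-instance.test/inbox",
        data=BODY,
        headers={"Content-Type": mt},
        timeout=30,
    )
    print(mt, "->", r.status_code, r.headers.get("content-type"))
```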
Overly aggressive WAF configurations are usually the result of instances being attacked or overloaded, either by DDoS or by aggressive AI-service crawlers.
Usually, when there are no configuration changes on either side, issues like this don’t just show up randomly.
In this case there was a change on the lemmy.ml side, and we don’t believe any change on our side falls into the window when this started happening (we don’t have the exact date the underlying issue began). The sending behavior may have changed with the Lemmy update, and other instances might simply happen not to be affected. We currently believe this is most likely exposing an issue on our end that already existed before the changes on lemmy.ml, just one that the previously used logic never exercised.
Hmm, interesting. So I guess even though I can see this hours-old post, my comment should arrive in several days’ time. Hopefully I haven’t responded to anyone on world with anything important recently.
it arrived a few minutes ago, federation is working again (for now)
This probably should explain: https://lemmy.world/post/22166289
Should be a feature, not a bug.
Ah, just this morning I blocked Lemmy.ml. Interesting timing, it seems :)
I’ve never had a positive interaction with Lemmy.ml. For me it serves as a quarantine space, and a set of pre-tagged users I don’t personally enjoy dealing with.
…and I’m not particularly averse to Marxist sentiments either, but they’re certainly not good salespeople, diplomats, or representatives of their cause.
Which is just part of their reputation now. Having a bad experience with a .ml user seems to be part of the lemmy experience. It’s kind of comical how consistent it seems.
That said, I’m sure there’s good people on .ml.
Yeah, just avoid politics there and it’s fine. We all know about their zeal so it’s pointless to discuss it.
Is there a reason people seem to not like .ml? I only joined because it said the instance was for FOSS enthusiasts
Yes. Because it isn’t for FOSS enthusiasts… They use the .ml specifically to refer to an oppressive, violent, ignorant political ideology. Every bit as bad as capitalism. That’s a threat to anyone who disagrees with them, left or right.
A lot of older established communities are there by the circumstance of it being the oldest server. And not by any other virtue. In fact, there are a number, like the KDE project, that have their own instance. Completely detached from political ideology. Which is a wise decision. A lot of official projects don’t want to be associated with the regular hypocritical and disparaging remarks of the admin staff there.
I’ve been a foss enthusiast since the late 80s early 90s when I was in college. Used Linux since 94. Dabbled in BSD a bit before. Am solidly towards the anarchist left. And I block ML on principle alone. Authoritarians aren’t allies. And access to open source communities shouldn’t hinge on not accidentally crossing the fragile and hypocritical political ideologies of such groups. No place or group is perfect. But few are so flawed out of the gate.
So it’s your fault!
In which case, thank you.
No one said it was a global variable. You can hardly blame the user for poor documentation.
Could it be a compatibility issue with lemmy.ml running Lemmy v0.19.7?
I don’t believe it is.
There weren’t any network-related changes from 0.19.6 to 0.19.7, and we haven’t seen this behavior with any of the 0.19.6 instances yet.
The requests are visible with details (domain, path, headers) in Cloudflare, but they don’t show up in our proxy server logs at all.
I’ve read enough posts over at /r/sysadmin; it’s always DNS.