



Why not post here when everything is ready? Fragmenting your big marketing push is just going to hurt you.


Honest comment: Bitsocial sounds an awful lot like Truth Social.
If that comparison is unintentional and not desired, maybe reconsider the name.
If that is intentional, we are probably not your target audience.


You can (and should) just use a password manager to generate and store ~64-byte keys, which offer roughly the same amount of security.
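For example, in Python (just a sketch; any password manager’s generator does the same job):
import secrets

# 64 random bytes is ~512 bits of entropy; nobody is brute-forcing that.
key = secrets.token_urlsafe(64)  # URL-safe text built from 64 random bytes
print(key)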


to the extent Tor is secure
Tor doesn’t automatically secure your app. If your social media instance has 1000 users and one of them is compromised, the other 999 shouldn’t have any of their interactions (beyond those with that one user) leaked.
web crypto can be utilized for group and 1-1s for an additional layer of encryption
Are file uploads encrypted?
How would you ever discover a filename?
Maybe you have a data leak. Maybe they send the filename in plaintext somewhere. Maybe they take advantage of the fact that UUIDs might be deterministic. But if I may flip the question… Why does an inaccessible post even need to return 403 anyway? It just functions as a big footgun that may cause any other exploits to behave worse.
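To make the UUID point concrete, a quick Python sketch:
import uuid

# uuid1() mixes in the host's MAC address and a timestamp, so values are
# guessable by anyone who has seen one of them.
print(uuid.uuid1())
print(uuid.uuid1())  # very close to the previous value

# uuid4() comes from a CSPRNG; this is the one you want for unguessable names.
print(uuid.uuid4())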
Even if you have the correct link, if those two conditions aren’t satisfied you will not be able to view it.
But you can determine whether or not it exists through the status code.
This was a design choice to have consistency in filetypes. What’s the downside? All browsers will support displaying a jpg.
GIFs will lose their animation, and PNGs will lose quality (JPEG is lossy and has no transparency). Also, as far as I can tell, there’s nothing stopping a malicious user from uploading a non-image file.
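The usual pattern is to validate instead of converting. A rough sketch with Pillow (the allow-list is mine, not your code):
from PIL import Image

ALLOWED = {"JPEG", "PNG", "GIF", "WEBP"}  # hypothetical allow-list

def validate_upload(stream):
    img = Image.open(stream)  # Pillow inspects the bytes, not the extension
    img.verify()              # raises if the data isn't a well-formed image
    if img.format not in ALLOWED:
        raise ValueError(f"unsupported format: {img.format}")
    return img.format         # keep the original format instead of forcing jpg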
Which part are you talking about?
There are two steps to making a post: uploading and storing the image, and adding the post to the database. There are also similar steps to deleting a post: removing the uploaded image and removing the post from the database. Are both of these operations atomic?
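For reference, here’s one way to keep the two stores consistent (a sketch reusing your SessionLocal/Post setup; create_post and its arguments are illustrative):
import os

def create_post(db, user, image_bytes, path):
    # Write the file first; if the database insert then fails, remove the
    # orphaned file so the filesystem and the database never disagree.
    with open(path, "wb") as f:
        f.write(image_bytes)
    try:
        db.add(Post(user_id=user.id, image_path=os.path.basename(path)))
        db.commit()
    except Exception:
        db.rollback()
        os.remove(path)
        raise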
Everything except the login page and registration link will be behind these two checks; see (def login), where the @login_required logic is defined for each of the app routes.
It’s not that hard for a sufficiently motivated adversary to get an account on a sufficiently large instance. You need to ensure that one user account being compromised doesn’t result in information leakage from unrelated accounts.
This discussion stems from issues I found in just one function. You’re making a product which requires a very high level of security. You need to understand how to write secure code, and your LLM won’t be able to do it for you.
I don’t want to discourage you from programming in general, but making a very secure social media site is a rather complex undertaking for someone new to programming.


You list “Activist/journalist secure communication” as a use case. Not all countries have freedom of the press.
Looks like you name images based on a random UUID, so that should protect against filename attacks. But if you do have a filename, you can tell whether that image has been uploaded or not.
Also, it looks like all uploads are converted to jpg, regardless of whether the original was a jpg (or even an image at all). Don’t do that.


Had a quick skim and found this little guy:
# ---------- Protected media route ----------
@app.route('/img/<path:name>')
@login_required
def media(name):
    db = SessionLocal()
    try:
        me = current_user(db)
        # Find the post with this image
        post = db.query(Post).filter_by(image_path=name).first()
        if post:
            # Check visibility
            can_view = post.user_id == me.id or db.query(UserVisibility).filter_by(
                owner_id=post.user_id, viewer_id=me.id
            ).first() is not None
            if not can_view:
                abort(403)
        return send_from_directory(UPLOAD_DIR, os.path.basename(name))
    finally:
        db.close()
I’ve not read through everything, but there are some security concerns that jump out to me from just this function. Hopefully you can enlighten me on them.
Firstly, what is stopping a logged-in user from accessing any image that, for whatever reason, doesn’t have an associated post?
Secondly, the status codes for “the image doesn’t exist” (404) and “the image exists but you can’t access it” (403) look to be different. This means that a logged-in user can check whether a given filename (e.g. “epstien_and_trump_cuddling.jpg”) has been uploaded by any user.
Both of these look like pretty bad security issues, especially for a project touting its ability to protect users from nation-states. Am I missing something?
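For what it’s worth, here’s roughly the shape I’d expect, returning 404 in both failure cases so existence can’t be probed (a sketch based on your own route, not a drop-in fix):
@app.route('/img/<path:name>')
@login_required
def media(name):
    db = SessionLocal()
    try:
        me = current_user(db)
        post = db.query(Post).filter_by(image_path=name).first()
        if post is None:
            abort(404)  # no associated post: serve nothing
        can_view = post.user_id == me.id or db.query(UserVisibility).filter_by(
            owner_id=post.user_id, viewer_id=me.id
        ).first() is not None
        if not can_view:
            abort(404)  # forbidden is indistinguishable from "not found"
        return send_from_directory(UPLOAD_DIR, os.path.basename(name))
    finally:
        db.close()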


doesn’t rely on any servers or instances.
Yet it is hosted on GitHub and presumably requires working DNS and HTTPS to download.
Users connect to your node directly, p2p, and nobody can stop you.
Except your ISP and/or government.
the protocol is text only; to embed media, you need to host it on the regular (centralized) internet, and then you link to it like https://example.com/image.jpg, and the host will stop hosting that image and report your IP.
So your supposedly non-centralized project requires external hosting? It’s like NFTs, where the images were just worthless links. :P Also, uh, base64 encoding is a thing, and clients will absolutely start supporting it.
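(Embedding media in a text-only protocol really is just a few lines; a Python sketch:)
import base64

with open("image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

# A data: URI any client can render inline, with no external host involved.
data_uri = f"data:image/jpeg;base64,{encoded}"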
the community creator can assign mods, mods can remove posts from that community.
… Isn’t this what you’ve been trying to avoid?
if a community is badly moderated, the user will never see it; it won’t be recommended to him.
Finally, a mention of content discovery. How is your recommendation system implemented? What decides whether a community is worth being recommended?
Also being p2p, seedit is not private, so it can’t really be used for illegal activity
Wait… Isn’t your whole pitch that it’s censorship-resistant? Can you clarify your threat model here: who are you actually worried about censoring your platform?
[ActivityPub servers] are hard to run and manage.
And using a completely unknown new service and protocol isn’t? I’m sure there’s tons of documentation out there for hosting Mastodon or Lemmy servers.
the problem with federated social media is that each federated instance is just a regular centralized site.
I agree with this, but not for the reasons you’ve stated.
P2P also scales infinitely, which is the reverse of centralized websites like federated instances: the more users there are, the faster it gets.
P2P scales much worse than centralized systems. A centralized service needs O(N) connections in total (one per client), while a full-mesh P2P network needs O(N²): with 1,000 users that’s 1,000 connections versus roughly 500,000 peer links.
You know what, I don’t mind this project. We need a place for far right people to go to to avoid “censorship” (getting banned from a subreddit for doing nothing but throwing slurs at people) and collaborate on their “plans” (killing minorities) on a platform that is “private” (easily traceable, unencrypted and linked to your IP address).
In before this is a big elaborate ad for NordVPN.
(VPNs and tor allow you to mask your IP)