Content Policy
How content moderation works on the Million Finney Homepage.
1. Why This Exists
Legal compliance in multiple jurisdictions requires the ability to remove certain categories of content from the frontend. This policy explains what the admin can and cannot do, the technical safeguards that limit that power, and the categories of content that may be flagged.
2. What Flagging Does
The contract owner (admin) can flag individual pixels whose associated content violates the Acceptable Use Policy below. When a pixel is flagged:
- The pixel's title and media are hidden from the frontend UI.
- The pixel's color, coordinates, and ownership remain fully visible on the grid.
- The admin may also unpin the associated media file from the IPFS pinning service.
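The frontend effect of a flag can be sketched in a few lines. This is a minimal Python model, not the actual frontend code: the `Pixel` fields mirror the attributes listed above, and `render_view` shows how a flagged pixel keeps its color, coordinates, and owner while its title and media are withheld.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pixel:
    token_id: int
    x: int
    y: int
    color: str
    owner: str
    title: str
    media_uri: str

def render_view(pixel: Pixel, flagged: bool) -> dict:
    """Build the frontend view of a pixel, hiding title/media when flagged."""
    view = {
        "token_id": pixel.token_id,
        "x": pixel.x,
        "y": pixel.y,
        "color": pixel.color,
        "owner": pixel.owner,
    }
    if not flagged:
        # Title and media are only surfaced for unflagged pixels.
        view["title"] = pixel.title
        view["media_uri"] = pixel.media_uri
    return view
```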
3. What Flagging Does NOT Do
Flagging is a frontend-only moderation action. It does not modify the blockchain in any way beyond recording the flag itself.
- The on-chain `PixelData` struct is never modified. The original `title` and `mediaURI` remain immutably stored in the smart contract.
- Anyone can still read the original data by calling `getPixelData(tokenId)` directly on the contract.
- Flagging does not burn, transfer, or lock the NFT. The owner retains full ownership.
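The separation between pixel data and flag status can be modeled as two independent pieces of storage. The following Python sketch is an illustration of that design, not the contract itself: data is write-once, flags live in a separate set, and the read path mirrors `getPixelData(tokenId)` by ignoring flags entirely.

```python
class PixelRegistry:
    """Toy model of the contract's storage layout: pixel data is
    write-once, and flags are recorded separately without ever
    touching the underlying data."""

    def __init__(self):
        self._data = {}       # token_id -> (title, media_uri), never rewritten
        self._flagged = set()

    def mint(self, token_id, title, media_uri):
        assert token_id not in self._data, "already minted"
        self._data[token_id] = (title, media_uri)

    def flag(self, token_id):
        # Records the flag and nothing else.
        self._flagged.add(token_id)

    def get_pixel_data(self, token_id):
        # Mirrors getPixelData(tokenId): returns the original data
        # regardless of flag status.
        return self._data[token_id]

    def is_flagged(self, token_id):
        return token_id in self._flagged
```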
4. The 30-Day Safeguard
The admin's ability to flag or unflag pixels is enforced by an on-chain deadline that cannot be extended or bypassed:
- When all 1,000,000 pixels have been sold, a 30-day countdown begins automatically.
- After this 30-day period expires, the `flagPixel` and `unflagPixel` functions permanently revert.
- No one, including the contract owner, can ever flag or unflag any pixel after the deadline passes.
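The deadline mechanics above can be sketched as follows. This is a hedged Python model of the on-chain logic, not the Solidity source: the class name, the `RuntimeError` standing in for a revert, and the timestamp handling are all illustrative assumptions.

```python
THIRTY_DAYS = 30 * 24 * 60 * 60  # seconds

class FlagDeadline:
    """Toy model of the on-chain safeguard: the deadline is set once,
    when the grid sells out, and enforced forever after."""

    def __init__(self):
        self.deadline = None  # unset until all 1,000,000 pixels are sold

    def on_all_pixels_sold(self, now: int):
        # The countdown starts automatically at sellout and is set only once.
        if self.deadline is None:
            self.deadline = now + THIRTY_DAYS

    def flag_pixel(self, now: int):
        # Mirrors a require() guard: permanently reverts once the
        # 30-day window has closed.
        if self.deadline is not None and now > self.deadline:
            raise RuntimeError("revert: moderation window closed")
        # ...flagging logic would go here...
```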
This mechanism ensures the admin has a limited window for content moderation during the initial growth phase, after which the grid becomes fully immutable at the frontend level as well.
5. Acceptable Use Policy
Content associated with a pixel (title and/or media) may be flagged if it falls into any of the following categories:
- Scams, spam, and server misuse — including viruses, phishing, spoofing, or any content designed to deceive users or compromise their devices.
- Intellectual property, privacy, or publicity rights violations — content that infringes copyrights, trademarks, trade secrets, or the privacy and publicity rights of others.
- Unlawful obscene content or solicitation of unlawful services — content that is legally obscene in the relevant jurisdiction, or that solicits or facilitates unlawful services.
- Child sexual abuse material (CSAM) or terrorist content — any content depicting or promoting the sexual exploitation of minors, or content that promotes, incites, or glorifies terrorism.
- Content reasonably likely to cause or increase risk of harm — content that could foreseeably lead to physical, psychological, or financial harm to individuals or groups.
6. Transparency
All flagging actions are recorded on-chain and are publicly auditable:
- `PixelFlagged(uint256 indexed tokenId)`: emitted when a pixel is flagged.
- `PixelUnflagged(uint256 indexed tokenId)`: emitted when a flag is removed.
Anyone can query these events to build a complete, verifiable history of every moderation action taken. The on-chain `isPixelFlagged(tokenId)` function returns the current flag status for any pixel.
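Rebuilding the current flag set from the event history is a simple replay. The sketch below is a local Python illustration: it assumes the events have already been fetched from the chain in order and reduced to `(event_name, token_id)` pairs.

```python
def replay_flag_events(events):
    """Rebuild the current flag status of every pixel from an ordered
    list of (event_name, token_id) pairs, as obtained by querying the
    PixelFlagged / PixelUnflagged logs from the first block onward."""
    flagged = set()
    for name, token_id in events:
        if name == "PixelFlagged":
            flagged.add(token_id)
        elif name == "PixelUnflagged":
            flagged.discard(token_id)
    return flagged
```

Because every moderation action emits an event, the result of this replay must agree with `isPixelFlagged(tokenId)` for every pixel, which is what makes the history independently auditable.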
This policy may be updated. The on-chain safeguards (immutable pixel data, 30-day deadline) cannot be changed regardless of any policy update.