resyndicator 0.5.4 2017-05-20 ✔ PY3

resyndicator on PyPI  

Aggregates data from many sources into merged and filtered Atom feeds.

Author: Denis Drescher
License: Apache 2.0


The Resyndicator aggregates data from various sources into Atom feeds. If you have a list of a couple hundred data sources – such as feeds, sitemaps, and Twitter users – and want to share the aggregate of those entries or updates between your various devices (computers, phones, etc.), your colleagues, or even the visitors of your website, then that’s just what the Resyndicator is for.

  • It lets you filter your aggregate feed with queries as sophisticated as SQLAlchemy allows.
  • It allows you to subclass the fetchers, so you can write fetchers for endpoints as obscure as Adobe’s AMF.
  • It keeps all entries in Postgres, so you have a backup.
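Since the filters are ordinary SQLAlchemy constructs, here is a self-contained sketch of what such a query can look like. The real Entry model ships with the Resyndicator; the toy model and its column names below are only illustrative:

```python
from sqlalchemy import Column, String, or_
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Entry(Base):
    """Toy stand-in for the Resyndicator's Entry model (columns illustrative)."""
    __tablename__ = 'entry'
    id = Column(String, primary_key=True)
    title = Column(String)
    author = Column(String)

# A where clause combining several conditions; anything SQLAlchemy
# supports (and_, or_, like, in_, ...) can go here.
query = or_(
    Entry.author == 'Open Philanthropy Project',
    Entry.title.ilike('%effective altruism%'),
)
```

Such a clause renders to plain SQL (`entry.author = ... OR lower(entry.title) LIKE ...`) and is what a resyndicator uses to pick its entries.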


When you’ve installed it through Buildout or pip, you should get an endpoint like bin/resyndicator. If you don’t and you know why, then please tell me, because I have the same problem. Otherwise just copy the entry_points parameter from the Resyndicator’s setup.py to your own to create a new one.
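For reference, a console-script entry point in a setup.py generally takes this shape. The target to the right of the `=` is hypothetical here, so copy the exact value from the Resyndicator’s own setup.py:

```python
# entry_points fragment for your setup.py.  The 'resyndicator.commands:main'
# target is a hypothetical placeholder -- copy the real one from the
# Resyndicator's setup.py.
entry_points = {
    'console_scripts': [
        'resyndicator = resyndicator.commands:main',
    ],
}
```

setuptools turns each `name = module:function` line into a bin/ script of that name.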

In your own package, you’ll need to create at least a settings module and a second module listing your fetchers and resyndicators. In the settings module, you can specify your database credentials with something like DATABASE = 'postgresql://foo:bar@localhost/impactfeeder' (you may need to create the database and grant the user access rights). For more options, see the default settings included in the Resyndicator.
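A minimal settings module might then look like this. Only the DATABASE value comes from the text above; anything beyond it is covered by the defaults that ship with the Resyndicator:

```python
# mypackage/settings.py -- minimal sketch.  Only DATABASE is required
# here; further options fall back to the Resyndicator's own defaults.
DATABASE = 'postgresql://foo:bar@localhost/impactfeeder'
```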

In the second module, you list the feeds and (eponymous) resyndicators, for example:

from datetime import timedelta
from sqlalchemy.sql import or_
from resyndicator import settings
from resyndicator.models import Entry
from resyndicator.fetchers import (
    FeedFetcher, SitemapIndexFetcher, SitemapFetcher,
    TwitterStreamer, ContentFetcher)
from resyndicator.resyndicators import Resyndicator

PAST = timedelta(days=7)

CONTENT_FETCHER = ContentFetcher(past=PAST, timeout=10)

# The list names, query columns, and URLs below are illustrative.
RESYNDICATORS = [
    Resyndicator(
        title='Effective Altruism',
        past=PAST,
        query=or_(
            Entry.author == 'Open Philanthropy Project',
            Entry.title.ilike('%effective altruism%'),
        ),
    ),
]

FETCHERS = [
    FeedFetcher('https://www.openphilanthropy.org/feed/'),
    SitemapFetcher('https://www.openphilanthropy.org/sitemap.xml',
                   defaults={'title': 'Open Phil Sitemap',
                             'author': 'Open Philanthropy Project'}),
]

For each resyndicator, you define a query and a title. The title determines the feed’s ID and thus its identity, so if you change the title, you create a different feed. The query determines the entries of the feed and is written as an SQLAlchemy where clause.
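Since the title determines the feed’s ID, a stable, name-based UUID is one common way such an ID can be derived; the sketch below is purely an illustration of why a changed title means a different feed, not necessarily the Resyndicator’s actual scheme:

```python
import uuid

def feed_id(title):
    """Derive a stable, name-based ID from a feed title (illustrative)."""
    return uuid.uuid5(uuid.NAMESPACE_URL, title)

# The same title always yields the same ID ...
assert feed_id('Effective Altruism') == feed_id('Effective Altruism')
# ... while a changed title yields a different ID, i.e. a different feed.
assert feed_id('Effective Altruism') != feed_id('Effective Altruism 2')
```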

You can then start the scheduler of the fetchers with bin/resyndicator -s mypackage.settings fetchers, the first stream with bin/resyndicator -s mypackage.settings stream 0 (further streams analogously), and the content fetcher with bin/resyndicator -s mypackage.settings content, unless your Buildout is configured in some weird way.