    list_0 = []
    list_1 = []

    def sort_data():
        # client is a python-binance Client, set up elsewhere
        trades = client.get_recent_trades(symbol='BTCUSDT', limit=50)
        for t in trades:
            # [ID, isBuyerMaker, quoteQty rounded to 2 decimals]
            id_zero = [int(t["id"]), int(t["isBuyerMaker"]), round(float(t["quoteQty"]), 2)]
            list_0.append(id_zero)
        dup = [x[0] for x in list_0]
        for x in dup:
            if x not in list_1:
                ?  # <-- this is the part I don't know how to write

    while True:
        sort_data()
I am connected to the Binance API and I want to check recent trades. So far, so good: I can fetch them with client.get_recent_trades.
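For context, here is a minimal sketch of the setup I am assuming (python-binance's Client; the credentials are placeholders):

    from binance.client import Client

    client = Client("YOUR_API_KEY", "YOUR_API_SECRET")  # placeholder credentials
    trades = client.get_recent_trades(symbol='BTCUSDT', limit=50)
    # each trade is a dict along the lines of:
    # {'id': 234543234, 'price': '...', 'qty': '...', 'quoteQty': '4543.45',
    #  'time': ..., 'isBuyerMaker': True, 'isBestMatch': True}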
I have to download 50 trades at a time, because otherwise it would be too slow and I would lose most of the trades. I can see that from the trade IDs.
IDs like 560, 565, 576, 587, ... for example are useless to me, because I lose all the trades in between.
Each batch of data contains only a few "fresh" trades. I don't want to append duplicates to my lists, so I tried to extract the IDs, in order to filter out duplicates, with
    dup = [x[0] for x in list_0]

Example sublist: [234543234, 1, 4543.45]
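In other words, with made-up values (the second sublist is invented for illustration):

    list_0 = [[234543234, 1, 4543.45], [234543235, 0, 12.80]]
    dup = [x[0] for x in list_0]   # -> [234543234, 234543235], just the IDs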
I don't know how to do the following: check the ID for every entry in list_0 (the ID is at index 0 in each sublist), and when that ID is not yet in list_1, copy the whole dataset / sublist [ID, isBuyerMaker, quoteQty] into list_1.
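For reference, one way the missing "?" branch could look, as a sketch: it iterates over the sublists instead of over dup so the whole dataset can be copied, and ids_in_list_1 is a helper name I made up:

    ids_in_list_1 = {entry[0] for entry in list_1}   # IDs already stored
    for sub in list_0:
        if sub[0] not in ids_in_list_1:
            list_1.append(sub)            # copy [ID, isBuyerMaker, quoteQty]
            ids_in_list_1.add(sub[0])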
If this were a "static" routine, I would know what to do, but here the routine gets fresh data every second and I don't know how to deal with that. I can't just check against the last element with [-1], because that is too slow.
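A sketch of how the whole routine might run continuously, assuming a set of seen IDs kept across iterations is acceptable (set membership checks are O(1), so they stay fast as the data grows; the time.sleep(1) polling interval is an assumption based on "fresh data every second"):

    import time

    seen_ids = set()   # IDs already copied, persists across iterations
    list_1 = []        # deduplicated [ID, isBuyerMaker, quoteQty] entries

    def sort_data():
        trades = client.get_recent_trades(symbol='BTCUSDT', limit=50)
        for t in trades:
            trade_id = int(t["id"])
            if trade_id not in seen_ids:      # O(1) lookup, no list scan
                seen_ids.add(trade_id)
                list_1.append([trade_id, int(t["isBuyerMaker"]),
                               round(float(t["quoteQty"]), 2)])

    while True:
        sort_data()
        time.sleep(1)

With 50 trades per call, seen_ids only grows by the genuinely new IDs each second, so the membership check stays cheap.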