rest - KairosDB in Java: using the client to delete high volumes of data
Let me know if I've posted this in the wrong place. (Note: KairosDB is running on top of Cassandra and uses Hector.)
I'm using the KairosDB Java client to dump large amounts of sample data into the datastore. I dumped about 6 million data points in, and am now attempting to delete all of them with the method that follows:
    public static void purgeData(String metricsType, HttpClient c, int num, TimeUnit units) {
        try {
            System.out.println("Beginning method");
            c = new HttpClient("http://localhost:8080/api/v1/datapoints/delete");
            QueryBuilder builder = QueryBuilder.getInstance();
            System.out.println("Preparing delete info");
            builder.setStart(20, TimeUnit.MONTHS).setEnd(1, TimeUnit.SECONDS).addMetric(metricsType);
            System.out.println("Attempted delete info");
            QueryResponse response = c.query(builder);
            //System.out.println("JSON: " + response.getJson());
        } catch (Exception e) {
            System.out.println("Adding data points produced error");
            e.printStackTrace();
        }
    }
Note that I removed the time-interval parameters to try and delete all of the data at once.
When executing the method, no points are seemingly deleted. I then opted to curl-query the JSON form of the data and received a HectorException stating "All host pools marked down. Retry burden pushed out to client".
My personal conclusion is that 6 million points is too many to delete at once. I'm thinking of deleting pieces at a time, but I don't know how to restrict how many rows the KairosDB Java client deletes. I know KairosDB is used in production. How do people delete large amounts of data from the Java client?
Thanks for your time!
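One way to implement the "pieces at a time" idea is to issue the delete in fixed time windows rather than with a single query spanning everything. Below is a rough sketch, assuming your kairosdb-client version exposes HttpClient.delete(QueryBuilder) (check the API of the release you're on); the metric name, the 20-month lookback, and the one-day window size are placeholder values:

    import java.util.Date;

    import org.kairosdb.client.HttpClient;
    import org.kairosdb.client.builder.QueryBuilder;

    public class ChunkedPurge {

        // Delete one metric's data in fixed-size time windows instead of one huge request.
        public static void purgeInChunks(HttpClient client, String metricName,
                                         long startMillis, long endMillis, long windowMillis) throws Exception {
            for (long windowStart = startMillis; windowStart < endMillis; windowStart += windowMillis) {
                long windowEnd = Math.min(windowStart + windowMillis, endMillis);

                // Build a query that covers only this window for the given metric.
                QueryBuilder builder = QueryBuilder.getInstance();
                builder.setStart(new Date(windowStart))
                       .setEnd(new Date(windowEnd))
                       .addMetric(metricName);

                // Ask KairosDB to delete the data points matched by the query.
                client.delete(builder);
                System.out.println("Deleted " + new Date(windowStart) + " -> " + new Date(windowEnd));
            }
        }

        public static void main(String[] args) throws Exception {
            HttpClient client = new HttpClient("http://localhost:8080");
            long now = System.currentTimeMillis();
            long twentyMonthsAgo = now - 20L * 30 * 24 * 60 * 60 * 1000; // rough 20-month lookback
            purgeInChunks(client, "my.metric.name", twentyMonthsAgo, now, 24L * 60 * 60 * 1000); // one-day windows
        }
    }

Smaller windows mean more HTTP requests but keep each delete from touching millions of rows at once, which is the kind of load that appears to be knocking the Hector host pools over.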
You can use cqlsh or cassandra-cli to truncate KairosDB's tables (data_points, row_key_index, string_index). I'm not familiar enough with KairosDB to know whether that's going to cause issues or not, though.
    truncate {your keyspace}.data_points;
It might take a few seconds to complete.
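If you'd rather stay in the Java client than touch Cassandra directly, the kairosdb-client also appears to offer a deleteMetric(String) call that drops every data point stored under a metric. A minimal sketch, assuming that method exists in your client version; the URL and metric name are placeholders:

    import org.kairosdb.client.HttpClient;

    // Minimal sketch: wipe an entire metric through the KairosDB Java client
    // instead of truncating the Cassandra tables. Assumes the client version
    // in use provides HttpClient.deleteMetric(String).
    public class DropMetric {
        public static void main(String[] args) throws Exception {
            HttpClient client = new HttpClient("http://localhost:8080");
            client.deleteMetric("my.metric.name"); // removes every data point stored under this metric
        }
    }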