import tensorflow as tf

def keras2tflite(keras_model, tflitefile):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    # Indicate that we want to perform the default optimizations
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open(tflitefile, "wb") as f:
        f.write(tflite_model)
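A lightweight sanity check on the generated file, without loading any TFLite tooling (a sketch, not from the original post): a .tflite file is a FlatBuffer whose file identifier "TFL3" sits at byte offset 4, so inspecting those four bytes catches most broken conversions.

```python
def looks_like_tflite(data: bytes) -> bool:
    """Heuristic check: a .tflite FlatBuffer carries the
    file identifier b"TFL3" at byte offset 4."""
    return len(data) >= 8 and data[4:8] == b"TFL3"

# e.g. looks_like_tflite(open("model.tflite", "rb").read())
```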
Once the model has been converted, it can be invoked on the mobile side through the corresponding TFLite tools. For example, to call it from C, the converted model can be turned into C source code:
apt-get -qq install xxd
xxd -i model.tflite > model.cc
cat model.cc  # inspect the generated model file
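Where `xxd` is not available (e.g. on Windows), the same dump can be reproduced in a few lines of Python. The `bytes_to_c_array` helper below is a hypothetical sketch, not part of the original post; it mimics `xxd -i`, which names the array after the input file with dots replaced by underscores:

```python
def bytes_to_c_array(data: bytes, name: str = "model_tflite") -> str:
    """Render raw bytes as a C unsigned-char array, like `xxd -i`."""
    lines = [f"unsigned char {name}[] = {{"]
    for i in range(0, len(data), 12):  # 12 bytes per row, as xxd emits
        row = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {row},")
    lines.append("};")
    lines.append(f"unsigned int {name}_len = {len(data)};")
    return "\n".join(lines)

# e.g. open("model.cc", "w").write(bytes_to_c_array(open("model.tflite", "rb").read()))
```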
import torch

def pytorch2onnx(model, onnxfile, cpu=True):
    device = torch.device("cpu" if cpu else "cuda")
    net = model.to(device)
    # WIDTH and HEIGHT are the model's expected input dimensions
    inputs = torch.randn(1, 3, WIDTH, HEIGHT).to(device)
    # use the public export API and write to the given output file
    torch.onnx.export(net, inputs, onnxfile, export_params=True, verbose=False)
Original article: https://www.cnblogs.com/xueliangliu/p/12268037.html