tvm-main/gallery/how_to/deploy_models/deploy_sparse.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ Deploy a Hugging Face Pruned Model on CPU ========================================= **Author**: `Josh Fromm <https://github.com/jwfromm>`_ This tutorial demonstrates how to take any pruned model, in this case `PruneBert from Hugging Face <https://huggingface.co/huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad>`_, and use TVM to leverage the model's sparsity support to produce real speedups. Although the primary purpose of this tutorial is to realize speedups on already pruned models, it may also be useful to estimate how fast a model would be *if* it were pruned. To this end, we also provide a function that takes an unpruned model and replaces its weights with random and pruned weights at a specified sparsity. This may be a useful feature when trying to decide if a model is worth pruning or not. Before we get into the code, it's useful to discuss sparsity and pruning and dig into the two different types of sparsity: **structured** and **unstructured**. Pruning is a technique primarily used to reduce the parameter size of a model by replacing weight values with 0s. 
Although many methods exist for choosing which weights should be set to 0, the most straightforward is picking the weights with the smallest value. Typically, weights are pruned to a desired sparsity percentage. For example, a 95% sparse model would have only 5% of its weights non-zero. Pruning to very high sparsities often requires fine-tuning or full retraining as it tends to be a lossy approximation. Although parameter size benefits are quite easy to obtain from a pruned model through simple compression, leveraging sparsity to yield runtime speedups is more complicated. In structured sparsity, weights are pruned with the goal of clustering pruned weights together. In other words, they are pruned using both their value and location. The benefit of bunching up pruned weights is that it allows an algorithm such as matrix multiplication to skip entire blocks. It turns out that some degree of *block sparsity* is very important to realizing significant speedups on most hardware available today. This is because when loading memory in most CPUs or GPUs, it doesn't save any work to skip reading a single value at a time; instead, an entire chunk or tile is read in and executed using something like vectorized instructions. Unstructured sparse weights are those that are pruned based only on the value of the original weights. They may appear to be scattered randomly throughout a tensor rather than in chunks like we'd see in block sparse weights. At low sparsities, unstructured pruning techniques are difficult to accelerate. However, at high sparsities many blocks of all 0 values will naturally appear, making it possible to accelerate. This tutorial interacts with both structured and unstructured sparsity. Hugging Face's PruneBert model is unstructured but 95% sparse, allowing us to apply TVM's block sparse optimizations to it, even if not optimally. When generating random sparse weights for an unpruned model, we do so with structured sparsity. 
A fun exercise is comparing the real speed of PruneBert with the block sparse speed using fake weights to see the benefit of structured sparsity. """ ############################################################################### # Load Required Modules # --------------------- # Other than TVM, scipy, the latest transformers, and # tensorflow 2.2+ are required. import os import tvm import time import itertools import numpy as np import tensorflow as tf from tvm import relay, runtime from tvm.contrib import graph_executor from tvm.relay import data_dep_optimization as ddo from tensorflow.python.framework.convert_to_constants import ( convert_variables_to_constants_v2, ) import scipy.sparse as sp # Ask tensorflow to limit its GPU memory to what's actually needed # instead of gobbling everything that's available. # https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth # This way this tutorial is a little more friendly to sphinx-gallery. gpus = tf.config.list_physical_devices("GPU") if gpus: try: for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) print("tensorflow will use experimental.set_memory_growth(True)") except RuntimeError as e: print("experimental.set_memory_growth option is not available: {}".format(e)) ############################################################################### # Configure Settings # ------------------ # Let's start by defining some parameters that define the type of model # and sparsity to run. # The name of the transformer model to download and run. name = "huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad" # The number of batches in an input. batch_size = 1 # The length of each input sequence. seq_len = 128 # TVM platform identifier. Note that best cpu performance can be achieved by setting -mcpu # appropriately for your specific machine. CUDA and ROCm are also supported. target = "llvm" # Which device to run on. Should be one of tvm.cpu() or tvm.cuda(). 
dev = tvm.cpu() # If true, then a sparse variant of the network will be run and # benchmarked. measure_sparse = True # The block size of structured sparsity to convert weight tensors # into. Changing this parameter may yield speedups for some platforms. bs_r = 1 # For models besides PruneBert (which is 95% sparse), this parameter # determines how sparse the generated weights should be. The higher # the sparsity, the faster the result. sparsity = 0.85 ############################################################################### # Download and Convert Transformers Model # --------------------------------------- # Now we'll grab a model from the transformers module, download it, and # convert it into a TensorFlow graphdef in preparation for converting that graphdef into # a relay graph that we can optimize and deploy. def load_keras_model(module, name, seq_len, batch_size, report_runtime=True): model = module.from_pretrained(name) dummy_input = tf.keras.Input(shape=[seq_len], batch_size=batch_size, dtype="int32") dummy_out = model(dummy_input) # Propagate shapes through the keras model. if report_runtime: np_input = np.random.uniform(size=[batch_size, seq_len], low=0, high=seq_len).astype( "int32" ) start = time.time() repeats = 50 for i in range(repeats): np_out = model(np_input) end = time.time() print("Keras Runtime: %f ms." 
% (1000 * ((end - start) / repeats))) return model def convert_to_graphdef(model, batch_size, seq_len): model_func = tf.function(lambda x: model(x)) input_dict = model._saved_model_inputs_spec input_spec = input_dict[list(input_dict.keys())[0]] model_func = model_func.get_concrete_function( tf.TensorSpec([batch_size, seq_len], input_spec.dtype) ) frozen_func = convert_variables_to_constants_v2(model_func) return frozen_func.graph.as_graph_def() def download_model(name, batch_size, seq_len): import transformers module = getattr(transformers, "TFBertForSequenceClassification") model = load_keras_model(module, name=name, batch_size=batch_size, seq_len=seq_len) return convert_to_graphdef(model, batch_size, seq_len) ############################################################################### # Convert to Relay Graph # ---------------------- # We now have all the tooling to get a transformers model in the right format # for relay conversion. Let's import it! In the following function we # save the imported graph in relay's json format so that we don't have # to reimport from tensorflow each time this script is run. 
def import_graphdef( name, batch_size, seq_len, save_relay=True, relay_file="model.json", relay_params="model.params", ): abs_path = os.path.dirname(os.path.abspath(__file__)) shape_dict = {"input_1": (batch_size, seq_len)} relay_file = ("%s_%d_%d_%s" % (name, batch_size, seq_len, relay_file)).replace("/", "_") relay_params = ("%s_%d_%d_%s" % (name, batch_size, seq_len, relay_params)).replace("/", "_") if os.path.exists(os.path.join(abs_path, relay_file)) and os.path.exists( os.path.join(abs_path, relay_params) ): with open(os.path.join(abs_path, relay_file), "r") as fi: mod = tvm.ir.load_json(fi.read()) with open(os.path.join(abs_path, relay_params), "rb") as fi: params = relay.load_param_dict(fi.read()) else: graph_def = download_model(name, batch_size, seq_len) mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict) if save_relay: with open(os.path.join(abs_path, relay_file), "w") as fo: fo.write(tvm.ir.save_json(mod)) with open(os.path.join(abs_path, relay_params), "wb") as fo: fo.write(runtime.save_param_dict(params)) return mod, dict(params.items()), shape_dict ############################################################################### # Run the Dense Graph # ------------------- # Let's run the default version of the imported model. Note that even if # the weights are sparse, we won't see any speedup because we are using # regular dense matrix multiplications on these dense (but mostly zero) # tensors instead of sparse aware kernels. 
def run_relay_graph(mod, params, shape_dict, target, dev): with relay.build_config(opt_level=3): lib = relay.build(mod, target=target, params=params) input_shape = shape_dict["input_1"] dummy_data = np.random.uniform(size=input_shape, low=0, high=input_shape[1]).astype("int32") m = graph_executor.GraphModule(lib["default"](dev)) m.set_input(0, dummy_data) m.run() tvm_output = m.get_output(0) print(m.benchmark(dev, repeat=5, number=5)) return tvm_output def run_dense(mod, params, shape_dict, target, dev): print("Dense Model Benchmark:") return run_relay_graph(mod, params, shape_dict, target, dev) ############################################################################### # Run the Sparse Graph # -------------------- # Next we'll convert the graph into a sparse representation and generate # fake sparse weights if needed. Then we'll use the same benchmarking # script as dense to see how much faster we go! We apply a few relay passes # to the graph to get it leveraging sparsity. First we use # `simplify_fc_transpose` to fold transposes on the weights of dense layers # into the parameters themselves. This makes it easier to convert the matrix multiplies # to sparse versions. Next we apply `bsr_dense.convert` to identify all # weight matrices that can be sparse, and automatically replace them. # # The `bsr_dense.convert` call below is doing the heavy lifting of identifying # which weights in the model can be made sparse by checking if they are # at least `sparsity_threshold` percent sparse. If so, it converts those # weights into *Block Sparse Row (BSR) format*. BSR is essentially # a representation that indexes into the nonzero chunks of the tensor, # making it easy for an algorithm to load those non-zero chunks and ignore # the rest of the tensor. Once the sparse weights are in BSR format, # `relay.transform.DenseToSparse` is applied to actually replace # `relay.dense` operations with `relay.sparse_dense` calls that can be # run faster. 
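The BSR layout described above is easy to see on a toy example. The helper below is an illustrative sketch (`dense_to_bsr` is a hypothetical name, not a TVM or scipy API; `scipy.sparse.bsr_matrix` does this job for real workloads): it converts a small dense matrix into the `(data, indices, indptr)` triple that BSR stores.

```python
import numpy as np


def dense_to_bsr(mat, bs_r, bs_c):
    """Illustrative sketch: convert a dense matrix to Block Sparse Row arrays."""
    m, n = mat.shape
    assert m % bs_r == 0 and n % bs_c == 0
    data, indices, indptr = [], [], [0]
    for br in range(m // bs_r):
        for bc in range(n // bs_c):
            block = mat[br * bs_r : (br + 1) * bs_r, bc * bs_c : (bc + 1) * bs_c]
            if np.any(block != 0):  # keep only the non-zero blocks
                data.append(block)
                indices.append(bc)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)


# A 4x4 matrix whose non-zeros cluster into two 2x2 blocks.
dense = np.zeros((4, 4), dtype="float32")
dense[0:2, 0:2] = 1.0
dense[2:4, 2:4] = 2.0
data, indices, indptr = dense_to_bsr(dense, 2, 2)
print(data.shape)  # only the two non-zero 2x2 blocks are stored
print(indices)     # block-column of each stored block
print(indptr)      # block-row pointers into `indices`
```

Only the two non-zero blocks are stored; `indices` records each stored block's block-column and `indptr` delimits each block-row, which is exactly what lets a sparse kernel load the non-zero chunks and skip the rest.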
def random_bsr_matrix(M, N, BS_R, BS_C, density, dtype="float32"): Y = np.zeros((M, N), dtype=dtype) assert M % BS_R == 0 assert N % BS_C == 0 nnz = int(density * M * N) num_blocks = int(nnz / (BS_R * BS_C)) + 1 candidate_blocks = np.asarray(list(itertools.product(range(0, M, BS_R), range(0, N, BS_C)))) assert candidate_blocks.shape[0] == M // BS_R * N // BS_C chosen_blocks = candidate_blocks[ np.random.choice(candidate_blocks.shape[0], size=num_blocks, replace=False) ] for i in range(len(chosen_blocks)): r, c = chosen_blocks[i] Y[r : r + BS_R, c : c + BS_C] = np.random.uniform(-0.1, 0.1, (BS_R, BS_C)) s = sp.bsr_matrix(Y, blocksize=(BS_R, BS_C)) assert s.data.shape == (num_blocks, BS_R, BS_C) assert s.data.size >= nnz assert s.indices.shape == (num_blocks,) assert s.indptr.shape == (M // BS_R + 1,) return s.todense() def random_sparse_bert_params(func, params, density, BS_R, BS_C): def deepcopy(param_dic): ret = {} for k, v in param_dic.items(): ret[k] = tvm.nd.array(v.numpy()) return ret new_params = deepcopy(params) dense_weight_names = relay.analysis.sparse_dense._search_dense_op_weight(func) for item in dense_weight_names: name = str(item) shape = new_params[name].shape if shape[0] % BS_R == 0 and shape[1] % BS_C == 0: new_w = random_bsr_matrix(shape[0], shape[1], BS_R, BS_C, density) new_params[name] = tvm.nd.array(new_w) return new_params def run_sparse(mod, params, shape_dict, target, dev, bs_r, sparsity, gen_weights): mod, params = ddo.simplify_fc_transpose.convert(mod["main"], params) if gen_weights: params = random_sparse_bert_params(mod, params, BS_R=bs_r, BS_C=1, density=1 - sparsity) mod, params = ddo.bsr_dense.convert(mod, params, (bs_r, 1), sparsity_threshold=0.8) print("Block Sparse Model with {blocksize}x1 blocks:".format(blocksize=bs_r)) return run_relay_graph(mod, params, shape_dict, target, dev) ############################################################################### # Run All the Code! # ----------------- # And that's it! 
Now we'll simply call all the needed functions to benchmark # the model according to the set parameters. Note that to run this code # you'll need to uncomment the last line first. def benchmark(): mod, params, shape_dict = import_graphdef(name, batch_size, seq_len) run_dense(mod, params, shape_dict, target, dev) if measure_sparse: gen_weights = "prune" not in name run_sparse(mod, params, shape_dict, target, dev, bs_r, sparsity, gen_weights) # benchmark() ############################################################################### # Sample Output # ------------- # For reference, below is the output of the script when run on an AMD CPU; # it shows about a 2.5X speedup from using sparsity. # Dense Model Benchmark: # Cannot find config for target=llvm, workload=('dense_nopack.x86', ('TENSOR', (1, 768), 'float32'), ('TENSOR', (2, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=llvm, workload=('dense_nopack.x86', ('TENSOR', (1, 768), 'float32'), ('TENSOR', (768, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=llvm, workload=('dense_nopack.x86', ('TENSOR', (128, 3072), 'float32'), ('TENSOR', (768, 3072), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=llvm, workload=('dense_nopack.x86', ('TENSOR', (128, 768), 'float32'), ('TENSOR', (3072, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=llvm, workload=('dense_nopack.x86', ('TENSOR', (128, 768), 'float32'), ('TENSOR', (768, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. 
# Cannot find config for target=llvm, workload=('batch_matmul.x86', ('TENSOR', (12, 128, 128), 'float32'), ('TENSOR', (12, 64, 128), 'float32')). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=llvm, workload=('batch_matmul.x86', ('TENSOR', (12, 128, 64), 'float32'), ('TENSOR', (12, 128, 64), 'float32')). A fallback configuration is used, which may bring great performance regression. # Runtime: 165.26 ms (12.83 ms) # Block Sparse Model with 1x1 blocks: # Runtime: 67.75 ms (8.83 ms) # Here is the output of this script on a GPU (GTX 1070) with the target "cuda -libs=cublas". # # Dense Model Benchmark: # Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('dense_cublas.cuda', ('TENSOR', (1, 768), 'float32'), ('TENSOR', (2, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('dense_cublas.cuda', ('TENSOR', (1, 768), 'float32'), ('TENSOR', (768, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('dense_cublas.cuda', ('TENSOR', (128, 3072), 'float32'), ('TENSOR', (768, 3072), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('dense_cublas.cuda', ('TENSOR', (128, 768), 'float32'), ('TENSOR', (3072, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. 
# Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('dense_cublas.cuda', ('TENSOR', (128, 768), 'float32'), ('TENSOR', (768, 768), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('batch_matmul_cublas.cuda', ('TENSOR', (12, 128, 128), 'float32'), ('TENSOR', (12, 64, 128), 'float32'), (12, 128, 64)). A fallback configuration is used, which may bring great performance regression. # Cannot find config for target=cuda -keys=cuda,gpu -libs=cublas -max_num_threads=1024 -thread_warp_size=32, workload=('batch_matmul_cublas.cuda', ('TENSOR', (12, 128, 64), 'float32'), ('TENSOR', (12, 128, 64), 'float32'), (12, 128, 128)). A fallback configuration is used, which may bring great performance regression. # Runtime: 10.64 ms (0.29 ms) # Block Sparse Model with 1x1 blocks: # Runtime: 6.46 ms (0.05 ms)
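As a closing note, whether a given weight will clear the `sparsity_threshold` check in `bsr_dense.convert` can be estimated ahead of time. The sketch below uses a hypothetical helper (`block_sparsity` is not a TVM API) to measure the fraction of zero blocks at a candidate block size:

```python
import numpy as np


def block_sparsity(weight, bs_r, bs_c):
    # Fraction of (bs_r x bs_c) blocks in `weight` that are entirely zero.
    m, n = weight.shape
    assert m % bs_r == 0 and n % bs_c == 0
    blocks = weight.reshape(m // bs_r, bs_r, n // bs_c, bs_c)
    return float(np.all(blocks == 0, axis=(1, 3)).mean())


w = np.zeros((4, 6), dtype="float32")
w[0, 0] = 1.0  # a single non-zero value
print(block_sparsity(w, 1, 1))  # 23 of 24 1x1 blocks are zero
print(block_sparsity(w, 2, 2))  # one value poisons a whole 2x2 block: 5 of 6
```

Larger blocks always report equal or lower block sparsity, which is the trade-off behind `bs_r`: bigger blocks do more useful work per non-zero chunk but leave fewer chunks that can be skipped.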
tvm-main/gallery/how_to/deploy_models/deploy_model_on_rasp.py
""" .. _tutorial-deploy-model-on-rasp: Deploy the Pretrained Model on Raspberry Pi =========================================== **Author**: `Ziheng Jiang <https://ziheng.org/>`_, \ `Hiroyuki Makino <https://makihiro.github.io/>`_ This is an example of using Relay to compile a ResNet model and deploy it on a Raspberry Pi. """ import tvm from tvm import te import tvm.relay as relay from tvm import rpc from tvm.contrib import utils, graph_executor as runtime from tvm.contrib.download import download_testdata ###################################################################### # .. _build-tvm-runtime-on-device: # # Build TVM Runtime on Device # --------------------------- # # The first step is to build the TVM runtime on the remote device. # # .. note:: # # All instructions in both this section and the next section should be # executed on the target device, e.g. a Raspberry Pi, and we assume it # is running Linux. # # Since we do compilation on the local machine, the remote device is only used # for running the generated code. We only need to build the TVM runtime on # the remote device. # # .. 
code-block:: bash # # git clone --recursive https://github.com/apache/tvm tvm # cd tvm # mkdir build # cp cmake/config.cmake build # cd build # cmake .. # make runtime -j4 # # After building the runtime successfully, we need to set environment variables # in the :code:`~/.bashrc` file. We can edit :code:`~/.bashrc` # using :code:`vi ~/.bashrc` and add the line below (assuming your TVM # directory is in :code:`~/tvm`): # # .. code-block:: bash # # export PYTHONPATH=$PYTHONPATH:~/tvm/python # # To update the environment variables, execute :code:`source ~/.bashrc`. ###################################################################### # Set Up RPC Server on Device # --------------------------- # To start an RPC server, run the following command on your remote device # (which is a Raspberry Pi in our example). # # .. code-block:: bash # # python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090 # # If you see the line below, it means the RPC server started # successfully on your device. # # .. code-block:: bash # # INFO:root:RPCServer: bind to 0.0.0.0:9090 # ###################################################################### # Prepare the Pre-trained Model # ----------------------------- # Back on the host machine, which should have a full TVM installed (with LLVM). # # We will use a pre-trained model from the # `MXNet Gluon model zoo <https://mxnet.apache.org/api/python/gluon/model_zoo.html>`_. # You can find more details about this part in the tutorial :ref:`tutorial-from-mxnet`. from mxnet.gluon.model_zoo.vision import get_model from PIL import Image import numpy as np # one line to get the model block = get_model("resnet18_v1", pretrained=True) ###################################################################### # In order to test our model, here we download an image of a cat and # transform its format. 
img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true" img_name = "cat.png" img_path = download_testdata(img_url, img_name, module="data") image = Image.open(img_path).resize((224, 224)) def transform_image(image): image = np.array(image) - np.array([123.0, 117.0, 104.0]) image /= np.array([58.395, 57.12, 57.375]) image = image.transpose((2, 0, 1)) image = image[np.newaxis, :] return image x = transform_image(image) ###################################################################### # synset is used to transform the label from an ImageNet class number to # a word humans can understand. synset_url = "".join( [ "https://gist.githubusercontent.com/zhreshold/", "4d0b62f3d01426887599d4f7ede23ee5/raw/", "596b27d23537e5a1b5751d2b0481ef172f58b539/", "imagenet1000_clsid_to_human.txt", ] ) synset_name = "imagenet1000_clsid_to_human.txt" synset_path = download_testdata(synset_url, synset_name, module="data") with open(synset_path) as f: synset = eval(f.read()) ###################################################################### # Now we would like to port the Gluon model to a portable computational graph. # It's as easy as several lines. # We support MXNet static graph (symbol) and HybridBlock in mxnet.gluon shape_dict = {"data": x.shape} mod, params = relay.frontend.from_mxnet(block, shape_dict) # we want a probability so add a softmax operator func = mod["main"] func = relay.Function(func.params, relay.nn.softmax(func.body), None, func.type_params, func.attrs) ###################################################################### # Here are some basic data workload configurations. batch_size = 1 num_classes = 1000 image_shape = (3, 224, 224) data_shape = (batch_size,) + image_shape ###################################################################### # Compile The Graph # ----------------- # To compile the graph, we call the :py:func:`relay.build` function # with the graph configuration and parameters. 
However, you cannot # deploy an x86 program on a device with an ARM instruction set. This means # Relay also needs to know the compilation options of the target device, # apart from the arguments :code:`net` and :code:`params` that specify the # deep learning workload. The options matter: different options # will lead to very different performance. ###################################################################### # If we run the example on our x86 server for demonstration, we can simply # set it as :code:`llvm`. If running it on the Raspberry Pi, we need to # specify its instruction set. Set :code:`local_demo` to False if you want # to run this tutorial with a real device. local_demo = True if local_demo: target = tvm.target.Target("llvm") else: target = tvm.target.arm_cpu("rasp3b") # The above line is a simple form of # target = tvm.target.Target('llvm -device=arm_cpu -model=bcm2837 -mtriple=armv7l-linux-gnueabihf -mattr=+neon') with tvm.transform.PassContext(opt_level=3): lib = relay.build(func, target, params=params) # After `relay.build`, you will get three return values: graph, # library and the new parameters, since we do some optimization that will # change the parameters but keep the result of the model the same. # Save the library at a local temporary directory. tmp = utils.tempdir() lib_fname = tmp.relpath("net.tar") lib.export_library(lib_fname) ###################################################################### # Deploy the Model Remotely by RPC # -------------------------------- # With RPC, you can deploy the model remotely from your host machine # to the remote device. # obtain an RPC session from remote device. 
if local_demo: remote = rpc.LocalSession() else: # The following is my environment, change this to the IP address of your target device host = "10.77.1.162" port = 9090 remote = rpc.connect(host, port) # upload the library to remote device and load it remote.upload(lib_fname) rlib = remote.load_module("net.tar") # create the remote runtime module dev = remote.cpu(0) module = runtime.GraphModule(rlib["default"](dev)) # set input data module.set_input("data", tvm.nd.array(x.astype("float32"))) # run module.run() # get output out = module.get_output(0) # get top1 result top1 = np.argmax(out.numpy()) print("TVM prediction top-1: {}".format(synset[top1]))
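For reference, the softmax operator we appended to the Relay function earlier is, in plain NumPy terms, the following. This is an illustrative sketch, not the TVM implementation:

```python
import numpy as np


def softmax(logits):
    # Subtract the max first for numerical stability, then normalize.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)


scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
# The probabilities sum to 1, and softmax is monotonic, so the argmax
# (and hence the top-1 prediction) is unchanged by adding the operator.
```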
tvm-main/gallery/how_to/deploy_models/deploy_prequantized_tflite.py
""" Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite) ================================================================ **Author**: `Siju Samuel <https://github.com/siju-samuel>`_ Welcome to part 3 of the Deploy Framework-Prequantized Model with TVM tutorial. In this part, we will start with a Quantized TFLite graph and then compile and execute it via TVM. For more details on quantizing the model using TFLite, readers are encouraged to go through `Converting Quantized Models <https://www.tensorflow.org/lite/convert/quantization>`_. The TFLite models can be downloaded from this `link <https://www.tensorflow.org/lite/guide/hosted_models>`_. To get started, the TensorFlow and TFLite packages need to be installed as prerequisites. .. 
code-block:: bash # install tensorflow and tflite pip install tensorflow==2.1.0 pip install tflite==2.1.0 Now please check if TFLite package is installed successfully, ``python -c "import tflite"`` """ ############################################################################### # Necessary imports # ----------------- import os import numpy as np import tflite import tvm from tvm import relay ###################################################################### # Download pretrained Quantized TFLite model # ------------------------------------------ # Download mobilenet V2 TFLite model provided by Google from tvm.contrib.download import download_testdata model_url = ( "https://storage.googleapis.com/download.tensorflow.org/models/" "tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz" ) # Download model tar file and extract it to get mobilenet_v2_1.0_224.tflite model_path = download_testdata( model_url, "mobilenet_v2_1.0_224_quant.tgz", module=["tf", "official"] ) model_dir = os.path.dirname(model_path) ###################################################################### # Utils for downloading and extracting zip files # ---------------------------------------------- def extract(path): import tarfile if path.endswith("tgz") or path.endswith("gz"): dir_path = os.path.dirname(path) tar = tarfile.open(path) tar.extractall(path=dir_path) tar.close() else: raise RuntimeError("Could not decompress the file: " + path) extract(model_path) ###################################################################### # Load a test image # ----------------- ####################################################################### # Get a real image for e2e testing # -------------------------------- def get_real_image(im_height, im_width): from PIL import Image repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/" img_name = "elephant-299.jpg" image_url = os.path.join(repo_base, img_name) img_path = download_testdata(image_url, img_name, module="data") 
image = Image.open(img_path).resize((im_height, im_width)) x = np.array(image).astype("uint8") data = np.reshape(x, (1, im_height, im_width, 3)) return data data = get_real_image(224, 224) ###################################################################### # Load a tflite model # ------------------- ###################################################################### # Now we can open mobilenet_v2_1.0_224.tflite tflite_model_file = os.path.join(model_dir, "mobilenet_v2_1.0_224_quant.tflite") tflite_model_buf = open(tflite_model_file, "rb").read() # Get TFLite model from buffer try: import tflite tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0) except AttributeError: import tflite.Model tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0) ############################################################################### # Let's run the TFLite pre-quantized model inference and get the TFLite prediction. def run_tflite_model(tflite_model_buf, input_data): """Generic function to execute TFLite""" try: from tensorflow import lite as interpreter_wrapper except ImportError: from tensorflow.contrib import lite as interpreter_wrapper input_data = input_data if isinstance(input_data, list) else [input_data] interpreter = interpreter_wrapper.Interpreter(model_content=tflite_model_buf) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # set input assert len(input_data) == len(input_details) for i in range(len(input_details)): interpreter.set_tensor(input_details[i]["index"], input_data[i]) # Run interpreter.invoke() # get output tflite_output = list() for i in range(len(output_details)): tflite_output.append(interpreter.get_tensor(output_details[i]["index"])) return tflite_output ############################################################################### # Let's run the TVM-compiled pre-quantized model inference and get the TVM prediction. 
def run_tvm(lib):
    from tvm.contrib import graph_executor

    rt_mod = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))
    rt_mod.set_input("input", data)
    rt_mod.run()
    tvm_res = rt_mod.get_output(0).numpy()
    tvm_pred = np.squeeze(tvm_res).argsort()[-5:][::-1]
    return tvm_pred, rt_mod


###############################################################################
# TFLite inference
# ----------------

###############################################################################
# Run TFLite inference on the quantized model.
tflite_res = run_tflite_model(tflite_model_buf, data)
tflite_pred = np.squeeze(tflite_res).argsort()[-5:][::-1]

###############################################################################
# TVM compilation and inference
# -----------------------------

###############################################################################
# We use the TFLite-Relay parser to convert the TFLite pre-quantized graph into Relay IR. Note that
# the frontend parser call for a pre-quantized model is exactly the same as for an FP32 model. We
# encourage you to remove the comment from print(mod) and inspect the Relay module. You will see
# many QNN operators, like Requantize, Quantize and QNN Conv2D.
dtype_dict = {"input": data.dtype.name}
shape_dict = {"input": data.shape}

mod, params = relay.frontend.from_tflite(tflite_model, shape_dict=shape_dict, dtype_dict=dtype_dict)
# print(mod)

###############################################################################
# Let's now compile the Relay module. We use the "llvm" target here. Please replace it with the
# target platform that you are interested in.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build_module.build(mod, target=target, params=params)

###############################################################################
# Finally, let's run inference on the TVM compiled module.
tvm_pred, rt_mod = run_tvm(lib)

###############################################################################
# Accuracy comparison
# -------------------

###############################################################################
# Print the top-5 labels for TFLite and TVM inference.
# We check the labels because the requantize implementation differs between
# TFLite and Relay. This causes the final output numbers to mismatch, so we
# test accuracy via the labels instead.

print("TVM Top-5 labels:", tvm_pred)
print("TFLite Top-5 labels:", tflite_pred)


##########################################################################
# Measure performance
# -------------------
# Here we give an example of how to measure performance of TVM compiled models.
n_repeat = 100  # should be bigger to make the measurement more accurate
dev = tvm.cpu(0)
print(rt_mod.benchmark(dev, number=1, repeat=n_repeat))

######################################################################
# .. note::
#
#   Unless the hardware has special support for fast 8 bit instructions, quantized models are
#   not expected to be any faster than FP32 models. Without fast 8 bit instructions, TVM does
#   quantized convolution in 16 bit, even if the model itself is 8 bit.
#
#   For x86, the best performance can be achieved on CPUs with the AVX512 instruction set.
#   In this case, TVM utilizes the fastest available 8 bit instructions for the given target.
#   This includes support for the VNNI 8 bit dot product instruction (Cascade Lake or newer).
#   For an EC2 C5.12xlarge instance, TVM latency for this tutorial is ~2 ms.
#
#   The Intel conv2d NCHWc schedule on ARM gives better end-to-end latency compared to the ARM
#   NCHW conv2d spatial pack schedule for many TFLite networks. ARM winograd performance is higher
#   but it has a high memory footprint.
#
#   Moreover, the following general tips for CPU performance apply equally here:
#
#      * Set the environment variable TVM_NUM_THREADS to the number of physical cores
#      * Choose the best target for your hardware, such as "llvm -mcpu=skylake-avx512" or
#        "llvm -mcpu=cascadelake" (more CPUs with AVX512 support will come in the future)
#      * Perform autotuning - :ref:`Auto-tuning a convolution network for x86 CPU
#        <tune_relay_x86>`.
#      * To get the best inference performance on ARM CPU, change the target argument
#        according to your device and follow :ref:`Auto-tuning a convolution
#        network for ARM CPU <tune_relay_arm>`.
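The threading and target tips above can be sketched in a few lines. This is an illustration only, not part of the original tutorial; the `-mcpu=cascadelake` string is an assumption and must match your actual hardware, and `TVM_NUM_THREADS` must be set before TVM spawns its thread pool.

```python
import os

# TVM_NUM_THREADS must be exported before the TVM runtime starts its
# thread pool, so set it before importing tvm in your own scripts.
os.environ["TVM_NUM_THREADS"] = str(os.cpu_count() or 1)


def pick_target(has_avx512=False):
    # "llvm" alone works everywhere but leaves fast 8 bit instructions
    # (AVX512/VNNI) unused; a -mcpu flag enables them on matching CPUs.
    return "llvm -mcpu=cascadelake" if has_avx512 else "llvm"


print(pick_target(False))  # llvm
print(pick_target(True))   # llvm -mcpu=cascadelake
```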
tvm-main/gallery/how_to/deploy_models/deploy_quantized.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ Deploy a Quantized Model on Cuda ================================ **Author**: `Wuwei Lin <https://github.com/vinx13>`_ This article is an introductory tutorial of automatic quantization with TVM. Automatic quantization is one of the quantization modes in TVM. More details on the quantization story in TVM can be found `here <https://discuss.tvm.apache.org/t/quantization-story/3920>`_. In this tutorial, we will import a GluonCV pre-trained model on ImageNet to Relay, quantize the Relay model and then perform the inference. """ import tvm from tvm import te from tvm import relay import mxnet as mx from tvm.contrib.download import download_testdata from mxnet import gluon import logging import os batch_size = 1 model_name = "resnet18_v1" target = "cuda" dev = tvm.device(target) ############################################################################### # Prepare the Dataset # ------------------- # We will demonstrate how to prepare the calibration dataset for quantization. # We first download the validation set of ImageNet and pre-process the dataset. 
calibration_rec = download_testdata( "http://data.mxnet.io.s3-website-us-west-1.amazonaws.com/data/val_256_q90.rec", "val_256_q90.rec", ) def get_val_data(num_workers=4): mean_rgb = [123.68, 116.779, 103.939] std_rgb = [58.393, 57.12, 57.375] def batch_fn(batch): return batch.data[0].asnumpy(), batch.label[0].asnumpy() img_size = 299 if model_name == "inceptionv3" else 224 val_data = mx.io.ImageRecordIter( path_imgrec=calibration_rec, preprocess_threads=num_workers, shuffle=False, batch_size=batch_size, resize=256, data_shape=(3, img_size, img_size), mean_r=mean_rgb[0], mean_g=mean_rgb[1], mean_b=mean_rgb[2], std_r=std_rgb[0], std_g=std_rgb[1], std_b=std_rgb[2], ) return val_data, batch_fn ############################################################################### # The calibration dataset should be an iterable object. We define the # calibration dataset as a generator object in Python. In this tutorial, we # only use a few samples for calibration. calibration_samples = 10 def calibrate_dataset(): val_data, batch_fn = get_val_data() val_data.reset() for i, batch in enumerate(val_data): if i * batch_size >= calibration_samples: break data, _ = batch_fn(batch) yield {"data": data} ############################################################################### # Import the model # ---------------- # We use the Relay MxNet frontend to import a model from the Gluon model zoo. def get_model(): gluon_model = gluon.model_zoo.vision.get_model(model_name, pretrained=True) img_size = 299 if model_name == "inceptionv3" else 224 data_shape = (batch_size, 3, img_size, img_size) mod, params = relay.frontend.from_mxnet(gluon_model, {"data": data_shape}) return mod, params ############################################################################### # Quantize the Model # ------------------ # In quantization, we need to find the scale for each weight and intermediate # feature map tensor of each layer. 
#
# For weights, the scales are directly calculated based on the value of the
# weights. Two modes are supported: `power2` and `max`. Both modes find the
# maximum value within the weight tensor first. In `power2` mode, the maximum
# is rounded down to a power of two. If the scales of both weights and
# intermediate feature maps are powers of two, we can leverage bit shifting for
# multiplications. This makes it computationally more efficient. In `max` mode,
# the maximum is used as the scale. Without rounding, `max` mode might have
# better accuracy in some cases. When the scales are not powers of two, fixed
# point multiplications will be used.
#
# For intermediate feature maps, we can find the scales with data-aware
# quantization. Data-aware quantization takes a calibration dataset as the
# input argument. Scales are calculated by minimizing the KL divergence between
# the distributions of activations before and after quantization.
# Alternatively, we can also use pre-defined global scales. This saves the time
# needed for calibration, but the accuracy might be impacted.


def quantize(mod, params, data_aware):
    if data_aware:
        with relay.quantize.qconfig(calibrate_mode="kl_divergence", weight_scale="max"):
            mod = relay.quantize.quantize(mod, params, dataset=calibrate_dataset())
    else:
        with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
            mod = relay.quantize.quantize(mod, params)
    return mod


###############################################################################
# Run Inference
# -------------
# We create a Relay VM to build and execute the model.
def run_inference(mod): model = relay.create_executor("vm", mod, dev, target).evaluate() val_data, batch_fn = get_val_data() for i, batch in enumerate(val_data): data, label = batch_fn(batch) prediction = model(data) if i > 10: # only run inference on a few samples in this tutorial break def main(): mod, params = get_model() mod = quantize(mod, params, data_aware=True) run_inference(mod) if __name__ == "__main__": main()
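To make the two weight-scale modes described earlier concrete, here is a small standalone NumPy sketch. It is an illustration of the idea only (the `weight_scale` helper is hypothetical, not TVM's internal implementation): both modes start from the maximum absolute weight, and `power2` first rounds that maximum down to a power of two so multiplications can become bit shifts.

```python
import numpy as np


def weight_scale(weights, mode="max", num_bits=8):
    """Derive a quantization scale from a weight tensor (simplified sketch)."""
    amax = float(np.abs(weights).max())
    if mode == "power2":
        # Round the maximum down to a power of two, so downstream
        # multiplications can be lowered to bit shifts.
        amax = 2.0 ** np.floor(np.log2(amax))
    # Map the float range onto the signed 8-bit grid [-127, 127].
    return amax / (2 ** (num_bits - 1) - 1)


w = np.array([-0.9, 0.3, 0.75], dtype="float32")
print(weight_scale(w, "max"))     # scale derived from max |w| = 0.9
print(weight_scale(w, "power2"))  # 0.9 is first rounded down to 0.5
```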
tvm-main/web/emcc/decorate_as_wasi.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Decorate emcc generated js to a WASI compatible API."""

import sys

template_head = """
function EmccWASI() {
"""

template_tail = """
this.Module = Module;
this.start = Module.wasmLibraryProvider.start;
this.imports = Module.wasmLibraryProvider.imports;
this.wasiImport = this.imports["wasi_snapshot_preview1"];
}

if (typeof module !== "undefined" && module.exports) {
  module.exports = EmccWASI;
}
"""

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: <file-in> <file-out>")
        sys.exit(1)

    result = template_head + open(sys.argv[1]).read() + template_tail
    with open(sys.argv[2], "w") as fo:
        fo.write(result)
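A quick way to sanity-check the decoration above is to run its core logic (concatenating a head, the emcc output, and a tail) on a throwaway file. This is a minimal sketch; the file names and JS content are hypothetical.

```python
import os
import tempfile

# Simplified stand-ins for the script's template_head / template_tail.
head = "\nfunction EmccWASI() {\n"
tail = "\n}\n"

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "emcc_output.js")  # hypothetical emcc output
    dst = os.path.join(tmp, "emcc_wasi.js")    # decorated result
    with open(src, "w") as f:
        f.write("var Module = {};")
    # Same concatenation the script performs: head + original JS + tail.
    with open(dst, "w") as fo:
        fo.write(head + open(src).read() + tail)
    wrapped = open(dst).read()

print("function EmccWASI()" in wrapped)  # True
print("var Module = {};" in wrapped)     # True
```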
tvm-main/web/tests/python/webgpu_rpc_test.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Simple testcode to test Javascript RPC To use it, start a rpc proxy with "python -m tvm.exec.rpc_proxy". Connect javascript end to the websocket port and connect to the RPC. """ import tvm from tvm import te from tvm import rpc from tvm.contrib import utils, emcc from tvm.relay.backend import Runtime import numpy as np proxy_host = "127.0.0.1" proxy_port = 9090 def test_rpc(): if not tvm.runtime.enabled("rpc"): return # generate the wasm library target = tvm.target.Target("webgpu", host="llvm -mtriple=wasm32-unknown-unknown-wasm") runtime = Runtime("cpp", {"system-lib": True}) n = 2048 A = te.placeholder((n,), name="A") B = te.compute(A.shape, lambda *i: te.log(te.abs(A(*i)) + 1.0), name="B") s = te.create_schedule(B.op) num_thread = 2 xo, xi = s[B].split(B.op.axis[0], factor=num_thread) s[B].bind(xi, te.thread_axis("threadIdx.x")) s[B].bind(xo, te.thread_axis("blockIdx.x")) fadd = tvm.build(s, [A, B], target, runtime=runtime, name="addone") temp = utils.tempdir() wasm_path = temp.relpath("addone_gpu.wasm") fadd.export_library(wasm_path, emcc.create_tvmjs_wasm) wasm_binary = open(wasm_path, "rb").read() remote = rpc.connect( proxy_host, proxy_port, key="wasm", 
        session_constructor_args=["rpc.WasmSession", wasm_binary],
    )

    def check(remote):
        # basic function checks.
        dev = remote.webgpu(0)
        adata = np.random.uniform(size=n).astype(A.dtype)
        a = tvm.nd.array(adata, dev)
        b = tvm.nd.array(np.zeros(n, dtype=A.dtype), dev)

        np.testing.assert_equal(a.numpy(), adata)
        f1 = remote.system_lib()
        addone = f1.get_function("addone")
        addone(a, b)
        np.testing.assert_allclose(b.numpy(), np.log(np.abs(a.numpy()) + 1), atol=1e-5, rtol=1e-5)
        print("Test passed.")

    check(remote)


test_rpc()
tvm-main/web/tests/python/websock_rpc_test.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Simple test code for the JavaScript RPC.

To use it, start an RPC proxy with "python -m tvm.exec.rpc_proxy".
Connect the JavaScript end to the websocket port and connect to the RPC.
"""

import tvm
from tvm import te
from tvm import rpc
from tvm.contrib import utils, emcc
from tvm.relay.backend import Runtime
import numpy as np

proxy_host = "127.0.0.1"
proxy_port = 9090


def test_rpc():
    if not tvm.runtime.enabled("rpc"):
        return
    # generate the wasm library
    runtime = Runtime("cpp", {"system-lib": True})
    target = "llvm -mtriple=wasm32-unknown-unknown-wasm"
    if not tvm.runtime.enabled(target):
        raise RuntimeError("Target %s is not enabled" % target)
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute(A.shape, lambda *i: A(*i) + 1.0, name="B")
    s = te.create_schedule(B.op)

    fadd = tvm.build(s, [A, B], target, runtime=runtime, name="addone")
    temp = utils.tempdir()

    wasm_path = temp.relpath("addone.wasm")
    fadd.export_library(wasm_path, emcc.create_tvmjs_wasm)

    wasm_binary = open(wasm_path, "rb").read()

    remote = rpc.connect(
        proxy_host,
        proxy_port,
        key="wasm",
        session_constructor_args=["rpc.WasmSession", wasm_binary],
    )

    def check(remote):
        # basic function checks.
faddone = remote.get_function("testing.asyncAddOne") fecho = remote.get_function("testing.echo") assert faddone(100) == 101 assert fecho(1, 2, 3) == 1 assert fecho(1, 2, 3) == 1 assert fecho(100, 2, 3) == 100 assert fecho("xyz") == "xyz" assert bytes(fecho(bytearray(b"123"))) == b"123" # run the generated library. f1 = remote.system_lib() dev = remote.cpu(0) a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), dev) b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), dev) # invoke the function addone = f1.get_function("addone") addone(a, b) # time evaluator time_f = f1.time_evaluator("addone", dev, number=100, repeat=10) time_f(a, b) cost = time_f(a, b).mean print("%g secs/op" % cost) np.testing.assert_equal(b.numpy(), a.numpy() + 1) check(remote) test_rpc()
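The `time_evaluator` call above returns per-call statistics, and `cost` is the mean time in seconds per call. A tiny helper (illustrative only, not part of the test itself) can turn that into more readable latency and throughput figures:

```python
def describe_cost(mean_seconds):
    # Convert a mean per-call cost in seconds into latency (ms/op)
    # and throughput (ops/s) strings.
    return "%.3f ms/op, %.1f ops/s" % (mean_seconds * 1e3, 1.0 / mean_seconds)


print(describe_cost(0.002))  # 2.000 ms/op, 500.0 ops/s
```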
tvm-main/web/tests/python/prepare_test_libs.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# Prepare test library for standalone wasm runtime test.

import tvm
from tvm import te
from tvm.contrib import emcc
from tvm.relay.backend import Runtime
import os


def prepare_test_libs(base_path):
    runtime = Runtime("cpp", {"system-lib": True})
    target = "llvm -mtriple=wasm32-unknown-unknown-wasm"
    if not tvm.runtime.enabled(target):
        raise RuntimeError("Target %s is not enabled" % target)
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute(A.shape, lambda *i: A(*i) + 1.0, name="B")
    s = te.create_schedule(B.op)
    fadd = tvm.build(s, [A, B], target, runtime=runtime, name="add_one")

    wasm_path = os.path.join(base_path, "test_addone.wasm")
    fadd.export_library(wasm_path, emcc.create_tvmjs_wasm)


if __name__ == "__main__":
    curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
    prepare_test_libs(os.path.join(curr_path, "../../dist/wasm"))
tvm-main/python/setup.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, exec-used """Setup TVM package.""" import os import pathlib import shutil import sys import sysconfig from setuptools import find_packages from setuptools.dist import Distribution # need to use distutils.core for correct placement of cython dll if "--inplace" in sys.argv: from distutils.core import setup from distutils.extension import Extension else: from setuptools import setup from setuptools.extension import Extension CURRENT_DIR = os.path.dirname(__file__) FFI_MODE = os.environ.get("TVM_FFI", "auto") CONDA_BUILD = os.getenv("CONDA_BUILD") is not None def get_lib_path(): """Get library path, name and version""" # We can not import `libinfo.py` in setup.py directly since __init__.py # Will be invoked which introduces dependencies libinfo_py = os.path.join(CURRENT_DIR, "./tvm/_ffi/libinfo.py") libinfo = {"__file__": libinfo_py} exec(compile(open(libinfo_py, "rb").read(), libinfo_py, "exec"), libinfo, libinfo) version = libinfo["__version__"] if not CONDA_BUILD: lib_path = libinfo["find_lib_path"]() libs = [lib_path[0]] if "runtime" not in libs[0]: for name in lib_path[1:]: if "runtime" in name: libs.append(name) break # Add standalone_crt, if present for name in 
lib_path: candidate_path = os.path.join(os.path.dirname(name), "standalone_crt") if os.path.isdir(candidate_path): libs.append(candidate_path) break # Add microTVM template projects for name in lib_path: candidate_path = os.path.join(os.path.dirname(name), "microtvm_template_projects") if os.path.isdir(candidate_path): libs.append(candidate_path) break # Add tvmc configuration json files for name in lib_path: candidate_path = os.path.abspath(os.path.join(os.path.dirname(name), "..", "configs")) if os.path.isdir(candidate_path): libs.append(candidate_path) break for dir in [ "3rdparty", "jvm", "web", "rust", "golang", "include", "src", "cmake", "CMakeLists.txt", ]: for name in lib_path: candidate_path = os.path.abspath(os.path.join(os.path.dirname(name), "..", dir)) if os.path.exists(candidate_path): libs.append(candidate_path) if dir == "3rdparty": # remove large files _remove_path(os.path.join(candidate_path, "cutlass", "docs")) _remove_path(os.path.join(candidate_path, "cutlass", "media")) _remove_path( os.path.join(candidate_path, "cutlass_fpA_intB_gemm", "cutlass", "docs") ) _remove_path( os.path.join( candidate_path, "cutlass_fpA_intB_gemm", "cutlass", "media" ) ) break else: libs = None return libs, version def git_describe_version(original_version): """Get git describe version.""" ver_py = os.path.join(CURRENT_DIR, "..", "version.py") libver = {"__file__": ver_py} exec(compile(open(ver_py, "rb").read(), ver_py, "exec"), libver, libver) _, gd_version = libver["git_describe_version"]() if gd_version != original_version and "--inplace" not in sys.argv: print("Use git describe based version %s" % gd_version) return gd_version def _remove_path(path): if os.path.exists(path): if os.path.isfile(path): os.remove(path) elif os.path.isdir(path): shutil.rmtree(path) LIB_LIST, __version__ = get_lib_path() __version__ = git_describe_version(__version__) def config_cython(): """Try to configure cython and return cython configuration""" if FFI_MODE not in ("cython"): if 
os.name == "nt" and not CONDA_BUILD: print("WARNING: Cython is not supported on Windows, will compile without cython module") return [] sys_cflags = sysconfig.get_config_var("CFLAGS") if sys_cflags and "i386" in sys_cflags and "x86_64" in sys_cflags: print("WARNING: Cython library may not be compiled correctly with both i386 and x64") return [] try: from Cython.Build import cythonize # from setuptools.extension import Extension if sys.version_info >= (3, 0): subdir = "_cy3" else: subdir = "_cy2" ret = [] path = "tvm/_ffi/_cython" extra_compile_args = ["-std=c++17", "-DDMLC_USE_LOGGING_LIBRARY=<tvm/runtime/logging.h>"] if os.name == "nt": library_dirs = ["tvm", "../build/Release", "../build"] libraries = ["tvm"] extra_compile_args = [ "/std:c++17", "/D DMLC_USE_LOGGING_LIBRARY=<tvm/runtime/logging.h>", ] # library is available via conda env. if CONDA_BUILD: library_dirs = [os.environ["LIBRARY_LIB"]] else: library_dirs = None libraries = None for fn in os.listdir(path): if not fn.endswith(".pyx"): continue ret.append( Extension( "tvm._ffi.%s.%s" % (subdir, fn[:-4]), ["tvm/_ffi/_cython/%s" % fn], include_dirs=[ "../include/", "../3rdparty/dmlc-core/include", "../3rdparty/dlpack/include", ], extra_compile_args=extra_compile_args, library_dirs=library_dirs, libraries=libraries, language="c++", ) ) return cythonize(ret, compiler_directives={"language_level": 3}) except ImportError as error: if FFI_MODE == "cython": raise error print("WARNING: Cython is not installed, will compile without cython module") return [] class BinaryDistribution(Distribution): def has_ext_modules(self): return True def is_pure(self): return False setup_kwargs = {} if not CONDA_BUILD: with open("MANIFEST.in", "w") as fo: for path in LIB_LIST: if os.path.isfile(path): shutil.copy(path, os.path.join(CURRENT_DIR, "tvm")) _, libname = os.path.split(path) fo.write(f"include tvm/{libname}\n") if os.path.isdir(path): _, libname = os.path.split(path) shutil.copytree(path, os.path.join(CURRENT_DIR, "tvm", 
libname)) fo.write(f"recursive-include tvm/{libname} *\n") setup_kwargs = {"include_package_data": True} def get_package_data_files(): # Relay standard libraries return ["relay/std/prelude.rly", "relay/std/core.rly"] def long_description_contents(): with open(pathlib.Path(CURRENT_DIR).resolve().parent / "README.md", encoding="utf-8") as readme: description = readme.read() return description # Temporarily add this directory to the path so we can import the requirements generator # tool. sys.path.insert(0, os.path.dirname(__file__)) import gen_requirements sys.path.pop(0) requirements = gen_requirements.join_requirements() extras_require = { piece: deps for piece, (_, deps) in requirements.items() if piece not in ("all", "core") } setup( name="tvm", version=__version__, description="TVM: An End to End Tensor IR/DSL Stack for Deep Learning Systems", long_description=long_description_contents(), long_description_content_type="text/markdown", url="https://tvm.apache.org/", download_url="https://github.com/apache/tvm/tags", author="Apache TVM", license="Apache", # See https://pypi.org/classifiers/ classifiers=[ "License :: OSI Approved :: Apache Software License", "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Education", "Intended Audience :: Science/Research", ], keywords="machine learning", zip_safe=False, entry_points={"console_scripts": ["tvmc = tvm.driver.tvmc.main:main"]}, install_requires=requirements["core"][1], extras_require=extras_require, packages=find_packages(), package_dir={"tvm": "tvm"}, package_data={"tvm": get_package_data_files()}, distclass=BinaryDistribution, ext_modules=config_cython(), **setup_kwargs, ) if not CONDA_BUILD: # Wheel cleanup os.remove("MANIFEST.in") for path in LIB_LIST: _, libname = os.path.split(path) _remove_path(f"tvm/{libname}")
tvm-main/python/gen_requirements.py
#!/usr/bin/env python3 # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """TVM Python requirements.txt generator. This script generates a set of requirements.txt files (stored in `./requirements`) that describe TVM's Python dependencies. ## Pieces TVM can be roughly broken into these named pieces along the lines of Python dependencies: - "core": A core piece, which is intended to be buildable with very few external dependencies. Users can use Relay, compile models, and run autotuning with this part. - "importer-<tool>": Model importers, which convert models defined in various other tools (i.e. TensorFlow, PyTorch, etc) into Relay models. - Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, but aren't required for basic operation. ## What this tool does From these pieces, this tool builds: - requirements/<name>.txt - Python dependencies for each named piece above, `<name>` is the same as the quoted piece name. - requirements/all.txt - Consolidated Python dependencies for all pieces, excluding dev below. - requirements/dev.txt - Python dependencies needed to develop TVM, such as lint and test tools. The data representing each piece is contained in the two maps below. 
""" import argparse import collections import os import re import sys import textwrap import typing RequirementsByPieceType = typing.List[typing.Tuple[str, typing.Tuple[str, typing.List[str]]]] # Maps named TVM piece (see description above) to a list of names of Python packages. Please use # alphabetical order for each package list, and do not add version constraints here! REQUIREMENTS_BY_PIECE: RequirementsByPieceType = [ # Base requirements needed to install tvm. ( "core", ( "Base requirements needed to install tvm", [ "attrs", "cloudpickle", "decorator", "ml_dtypes", "numpy", "psutil", "scipy", "tornado", "typing_extensions", ], ), ), # Provide support for Arm(R) Ethos(TM)-U NPU. ( "ethosu", ( "Requirements for using Arm(R) Ethos(TM)-U NPU", [ "ethos-u-vela", ], ), ), # Relay frontends. ( "importer-caffe", ( "Requirements for the Caffe importer", [ "numpy", "protobuf", "scikit-image", "six", ], ), ), ( "importer-caffe2", ( "Requirements for the Caffe2 importer", [ "future", # Hidden dependency of torch. "torch", ], ), ), ("importer-coreml", ("Requirements for the CoreML importer", ["coremltools"])), ("importer-darknet", ("Requirements for the DarkNet importer", ["opencv-python"])), ( "importer-keras", ("Requirements for the Keras importer", ["tensorflow", "tensorflow-estimator"]), ), ( "importer-onnx", ( "Requirements for the ONNX importer", [ "future", # Hidden dependency of torch. "onnx", "onnxoptimizer", "onnxruntime", "torch", "torchvision", ], ), ), ( "importer-paddle", ("Requirements for the PaddlePaddle importer", ["paddlepaddle"]), ), ( "importer-pytorch", ( "Requirements for the PyTorch importer", [ "future", # Hidden dependency of torch. 
"torch", "torchvision", ], ), ), ( "importer-tensorflow", ("Requirements for the TensorFlow importer", ["tensorflow", "tensorflow-estimator"]), ), ( "importer-tflite", ("Requirements for the TFLite importer", ["tensorflow", "tensorflow-estimator", "tflite"]), ), ( "tvmc", ( "Requirements for the tvmc command-line tool", [ "ethos-u-vela", "future", # Hidden dependency of torch. "onnx", "onnxoptimizer", "onnxruntime", "paddlepaddle", "tensorflow", "tflite", "torch", "torchvision", "xgboost", ], ), ), # Vitis AI requirements ( "vitis-ai", ( "Requirements for the Vitis AI codegen", [ "h5py", "progressbar", ], ), ), # XGBoost, useful for autotuning on some targets. ( "xgboost", ( "Requirements for XGBoost autotuning", [ "future", # Hidden dependency of torch. "torch", "xgboost", ], ), ), # Development requirements ( "dev", ( "Requirements to develop TVM -- lint, docs, testing, etc.", [ "astroid", # pylint requirement, listed so a hard constraint can be included. "autodocsumm", "black", "commonmark", "cpplint", "docutils", "image", "matplotlib", "pillow", "pylint", "sphinx", "sphinx_autodoc_annotation", "sphinx_gallery", "sphinx_rtd_theme", "types-psutil", ], ), ), ] ConstraintsType = typing.List[typing.Tuple[str, typing.Union[None, str]]] # Maps a named Python package (which should appear in REQUIREMENTS_BY_PIECE above) to a # semver or pip version constraint. Semver constraints are translated into requirements.txt-friendly # constraints. # # These constraints serve only to record technical reasons why a particular version can't be used. # They are the default install_requires used in setup.py. These can be further narrowed to restrict # dependencies to those tested or used in CI; however, that process is not done here. # # Policy for constraints listed here: # 1. Each package specified in REQUIREMENTS_BY_PIECE must be included here. # 2. If TVM will functionally break against an old version of a dependency, specify a >= relation # here. 
Include a comment linking to context or explaining why the constraint is in place. CONSTRAINTS = [ ("astroid", None), ("attrs", None), ("autodocsumm", None), ("black", "==20.8b1"), ("cloudpickle", None), ("commonmark", ">=0.7.3"), # From PR #213. ("coremltools", None), ("cpplint", None), ("decorator", None), ( "docutils", "<0.17", ), # Work around https://github.com/readthedocs/sphinx_rtd_theme/issues/1115 ("ethos-u-vela", "==3.8.0"), ("future", None), ("h5py", "==2.10.0"), ("image", None), ("matplotlib", None), ("numpy", None), ("onnx", None), ("onnxoptimizer", None), ("onnxruntime", None), ("opencv-python", None), ("paddlepaddle", None), ("pillow", None), ("progressbar", None), ("protobuf", None), ("psutil", None), ("pylint", None), ("scikit-image", None), ("scipy", None), ("six", None), ("sphinx", None), ("sphinx_autodoc_annotation", None), ("sphinx_gallery", None), ("sphinx_rtd_theme", None), ("tensorflow", None), ("tensorflow-estimator", None), ("tflite", None), ("torch", None), ("torchvision", None), ("tornado", None), ("typing_extensions", None), ("xgboost", ">=1.1.0"), # From PR #4953 & Issue #12009 ] ################################################################################ # End of configuration options. ################################################################################ # Required keys in REQUIREMENTS_BY_PIECE. REQUIRED_PIECES: typing.List[str] = ["core", "dev"] # Regex to validates piece names. PIECE_REGEX: typing.Pattern = re.compile(r"^[a-z0-9][a-z0-9-]*", re.IGNORECASE) # Regex to match a constraint specification. Multiple constraints are not supported. CONSTRAINT_REGEX: typing.Pattern = re.compile(r"(?:\^|\<|(?:~=)|(?:<=)|(?:==)|(?:>=)|\>)[^<>=\^,]+") # Regex for parsing semantic versions. 
See # https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string SEMVER_REGEX: typing.Pattern = re.compile( r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$" ) def validate_requirements_by_piece() -> typing.List[str]: """Validate REQUIREMENTS_BY_PIECE, returning a list of problems. Returns ------- list[str] : A list of strings, each one describing a distinct problem with REQUIREMENTS_BY_PIECE. """ problems = [] unseen_required_pieces = set(REQUIRED_PIECES) seen_pieces = set() # Ensure that core is listed first and dev is listed last. saw_core = False saw_dev = False if not isinstance(REQUIREMENTS_BY_PIECE, (list, tuple)): problems.append(f"must be list or tuple, see {REQUIREMENTS_BY_PIECE!r}") return problems for piece, value in REQUIREMENTS_BY_PIECE: if not isinstance(piece, str): problems.append(f"piece {piece!r}: must be str") continue if piece in unseen_required_pieces: unseen_required_pieces.remove(piece) piece_lower = piece.lower() if piece_lower in seen_pieces: problems.append(f"piece {piece}: listed twice") seen_pieces.add(piece_lower) if not saw_core and piece != "core": problems.append(f'piece {piece}: must list after "core" (core must be first)') elif piece == "core": saw_core = True if saw_dev: problems.append(f'piece {piece}: must list before "dev" (dev must be last)') elif piece == "dev": saw_dev = True if not isinstance(value, (tuple, list)) or len(value) != 2: problems.append( f'piece {piece}: should be formatted like ("{piece}", ("<requirements.txt comment>", ["dep1", "dep2", ...])). 
got: {value!r}' ) continue description, deps = value if not isinstance(description, str): problems.append(f"piece {piece}: description should be a string, got {description!r}") if not isinstance(deps, (list, tuple)) or any(not isinstance(d, str) for d in deps): problems.append(f"piece {piece}: deps should be a list of strings, got {deps!r}") continue if list(sorted(deps)) != list(deps): problems.append( f"piece {piece}: deps must be sorted. Correct order:\n {list(sorted(deps))!r}" ) piece_deps = set() for d in deps: if CONSTRAINT_REGEX.search(d): problems.append( f"piece {piece}: dependency {d} should not specify a version. " "Add it to CONSTRAINTS instead." ) if d.lower() in piece_deps: problems.append(f"piece {piece}: dependency {d} listed twice") piece_deps.add(d.lower()) extras_pieces = [ k for (k, _) in REQUIREMENTS_BY_PIECE if k not in ("dev", "core") if isinstance(k, str) ] sorted_extras_pieces = list(sorted(extras_pieces)) if sorted_extras_pieces != list(extras_pieces): problems.append( 'pieces other than "core" and "dev" must appear in alphabetical order: ' f"{sorted_extras_pieces}" ) return problems def parse_semver( package: str, constraint: str, problems: typing.List[str] ) -> typing.Tuple[typing.List[str], int, int]: """Parse a semantic versioning constraint of the form "^X.[.Y[.Z[...]]]]" Parameters ---------- package : str Name of the package specifying this constraint, for reporting problems. constraint : str The semver constraint. Must start with "^" problems : List[str] A list of strings describing problems that have occurred validating the configuration. Problems encountered while validating constraint are appended to this list. Returns ------- tuple[list[str], int, int] : A 3-tuple. The first element is a list containing an entry for each component in the semver string (components separated by "."). The second element is the index of the component in the list which must not change to meet the semver constraint. 
The third element is an integer, the numeric value of the changing component (this can be non-trivial when the patch is the changing part but pre-, post-release, or build metadta. See "Caret requirements" at https://python-poetry.org/docs/versions/. """ m = SEMVER_REGEX.match(constraint[1:]) if not m: problems.append(f"{package}: invalid semver constraint {constraint}") return [], 0, 0 min_ver_parts = [ m.group("major"), m.group("minor"), m.group("patch") + (f"-{m.group('prerelease')}" if m.group("prerelease") else "") + (f"+{m.group('buildmetadata')}" if m.group("buildmetadata") else ""), ] # Major/minor version handling is simple for i, p in enumerate(min_ver_parts[:2]): x = int(p.strip()) if x: return min_ver_parts, i, x # For patch version, consult only the numeric patch if m.group("patch"): patch_int = int(m.group("patch")) if patch_int or min_ver_parts[2] != m.group("patch"): return min_ver_parts, 2, patch_int # All 0's return min_ver_parts, 0, 0 def validate_constraints() -> typing.List[str]: """Validate CONSTRAINTS, returning a list of problems found. Returns ------- list[str] : A list of strings, each one describing a distinct problem found in CONSTRAINTS. """ problems = [] if not isinstance(CONSTRAINTS, (list, tuple)): problems.append(f"must be list or tuple, see: {CONSTRAINTS!r}") seen_packages = set() all_deps = set() for _, (_, deps) in REQUIREMENTS_BY_PIECE: for d in deps: all_deps.add(d.lower()) for package, constraint in CONSTRAINTS: if package in seen_packages: problems.append(f"{package}: specified twice") seen_packages.add(package) if package.lower() not in all_deps: problems.append(f"{package}: not specified in REQUIREMENTS_BY_PIECE") if constraint is None: # None is just a placeholder that allows for comments. 
continue if not CONSTRAINT_REGEX.match(constraint): problems.append( f'{package}: constraint "{constraint}" does not look like a valid constraint' ) if constraint.startswith("^"): parse_semver(package, constraint, problems) all_constrained_packages = [p for (p, _) in CONSTRAINTS] sorted_constrained_packages = list(sorted(all_constrained_packages)) if sorted_constrained_packages != all_constrained_packages: problems.append( "CONSTRAINTS entries should be in this sorted order: " f"{sorted_constrained_packages}" ) return problems class ValidationError(Exception): """Raised when a validation error occurs.""" @staticmethod def format_problems(config: str, problems: typing.List[str]) -> str: """Format a list of problems with a global config variable into human-readable output. Parameters ---------- config : str Name of the global configuration variable of concern. Prepended to the output. problems: list[str] A list of strings, each one a distinct problem with that config variable. Returns ------- str : A human-readable string suitable for console, listing the problems as bullet points. """ formatted = [] for p in problems: assert isinstance(p, str), f"problems element not a str: {p}" formatted.append( "\n".join( textwrap.wrap( f"{config}: {p}", width=80, initial_indent=" * ", subsequent_indent=" " ) ) ) return "\n".join(formatted) def __init__(self, config: str, problems: typing.List[str]): """Describes an error that occurs validating one of the global config variables. Parameters ---------- config : str Name of the global configuration variable of concern. Prepended to the output. problems: list[str] A list of strings, each one a distinct problem with that config variable. 
""" super(ValidationError, self).__init__(self.format_problems(config, problems)) self.problems = problems def validate_or_raise(): problems = validate_requirements_by_piece() if problems: raise ValidationError("REQUIREMENTS_BY_PIECE", problems) problems = validate_constraints() if problems: raise ValidationError("CONSTRAINTS", problems) def semver_to_requirements(dep: str, constraint: str, joined_deps: typing.List[str]): """Convert a SemVer-style constraint to a setuptools-compatible constraint. Parameters ---------- dep : str Name of the PyPI package to depend on. constraint : str The SemVer constraint, of the form "^<semver constraint>" joined_deps : list[str] A list of strings, each a setuptools-compatible constraint which could be written to a line in requirements.txt. The converted constraint is appended to this list. """ problems: typing.List[str] = [] min_ver_parts, fixed_index, fixed_part = parse_semver(dep, constraint, problems) text_problems = "\n" + "\n".join(f" * {p}" for p in problems) assert ( not problems ), f"should not happen: validated semver {constraint} parses with problems:{text_problems}" max_ver_parts = ( min_ver_parts[:fixed_index] + [str(fixed_part + 1)] + ["0" for _ in min_ver_parts[fixed_index + 1 :]] ) joined_deps.append(f'{dep}>={".".join(min_ver_parts)},<{".".join(max_ver_parts)}') def join_requirements() -> typing.Dict[str, typing.Tuple[str, typing.List[str]]]: """Validate, then join REQUIRMENTS_BY_PIECE against CONSTRAINTS and return the result. Returns ------- An OrderedDict containing REQUIREMENTS_BY_PIECE, except any dependency mentioned in CONSTRAINTS is replaced by a setuptools-compatible constraint. 
""" validate_or_raise() constraints_map = collections.OrderedDict([(p.lower(), c) for (p, c) in CONSTRAINTS]) to_return = collections.OrderedDict() all_deps = set() for piece, (description, deps) in REQUIREMENTS_BY_PIECE: joined_deps = [] for d in deps: constraint = constraints_map.get(d.lower()) if constraint is None: joined_deps.append(d) continue if constraint[0] == "^": semver_to_requirements(d, constraint, joined_deps) else: joined_deps.append(f"{d}{constraint}") if piece != "dev": all_deps.update(joined_deps) to_return[piece] = (description, joined_deps) to_return["all-prod"] = ( "Combined dependencies for all TVM pieces, excluding dev", list(sorted(all_deps)), ) return to_return def join_and_write_requirements(args: argparse.Namespace): try: joined_deps = join_requirements() except ValidationError as e: print(f"ERROR: invalid requirements configuration in {__file__}:", file=sys.stderr) print(str(e), file=sys.stderr) sys.exit(2) if args.lint: sys.exit(0) output_dir = os.path.join(os.path.dirname(__file__), "requirements") if not os.path.exists(output_dir): os.makedirs(output_dir) elif not os.path.isdir(output_dir): print( f"ERROR: output directory {output_dir} exists but is not a dir. Delete it", file=sys.stderr, ) sys.exit(2) for piece, (description, deps) in joined_deps.items(): with open(os.path.join(output_dir, f"{piece}.txt"), "w") as f: f.write( f"# AUTOGENERATED by python/gen_requirements.py{os.linesep}" f"#{os.linesep}" f"# {description}{os.linesep}" ) for d in deps: f.write(f"{d}{os.linesep}") def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser() parser.add_argument( "--lint", action="store_true", help="Just lint dependencies, don't generate anything" ) return parser.parse_args() def main(): args = parse_args() join_and_write_requirements(args) if __name__ == "__main__": main()
tvm-main/python/tvm/error.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Structured error classes in TVM. Each error class takes an error message as its input. See the example sections for suggested message conventions. To make the code more readable, we recommended developers to copy the examples and raise errors with the same message convention. .. note:: Please also refer to :ref:`error-handling-guide`. """ from tvm._ffi.base import TVMError, register_error @register_error class InternalError(TVMError): """Internal error in the system. Examples -------- .. code :: c++ // Example code C++ LOG(FATAL) << "InternalError: internal error detail."; .. 
code :: python # Example code in python raise InternalError("internal error detail") """ def __init__(self, msg): super(InternalError, self).__init__(msg) register_error("ValueError", ValueError) register_error("TypeError", TypeError) register_error("AttributeError", AttributeError) register_error("KeyError", KeyError) register_error("IndexError", IndexError) @register_error class RPCError(TVMError): """Error thrown by the remote server handling the RPC call.""" @register_error class RPCSessionTimeoutError(RPCError, TimeoutError): """Error thrown by the remote server when the RPC session has expired.""" @register_error class OpError(TVMError): """Base class of all operator errors in frontends.""" @register_error class OpNotImplemented(OpError, NotImplementedError): """Operator is not implemented. Example ------- .. code:: python raise OpNotImplemented( "Operator {} is not supported in {} frontend".format( missing_op, frontend_name)) """ @register_error class OpAttributeRequired(OpError, AttributeError): """Required attribute is not found. Example ------- .. code:: python raise OpAttributeRequired( "Required attribute {} not found in operator {}".format( attr_name, op_name)) """ @register_error class OpAttributeInvalid(OpError, AttributeError): """Attribute value is invalid when taking in a frontend operator. Example ------- .. code:: python raise OpAttributeInvalid( "Value {} in attribute {} of operator {} is not valid".format( value, attr_name, op_name)) """ @register_error class OpAttributeUnImplemented(OpError, NotImplementedError): """Attribute is not supported in a certain frontend. Example ------- .. code:: python raise OpAttributeUnImplemented( "Attribute {} is not supported in operator {}".format( attr_name, op_name)) """ @register_error class DiagnosticError(TVMError): """Error diagnostics were reported during the execution of a pass. See the configured diagnostic renderer for detailed error information. """
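The multiple-inheritance pattern above (e.g. `OpNotImplemented(OpError, NotImplementedError)`) lets callers catch a frontend error either as a TVM operator error or as its builtin counterpart. A minimal standalone illustration — the classes below are simplified stand-ins, without the FFI `register_error` machinery:

```python
class OpError(Exception):
    """Stand-in for tvm.error.OpError (no FFI registration here)."""

class OpNotImplemented(OpError, NotImplementedError):
    """Mirrors tvm.error.OpNotImplemented's dual inheritance."""

def convert_op(op_name, frontend_name):
    # Hypothetical frontend helper, following the message convention above.
    raise OpNotImplemented(
        f"Operator {op_name} is not supported in {frontend_name} frontend")

# The same exception is catchable under either base class.
for base in (OpError, NotImplementedError):
    try:
        convert_op("my_op", "onnx")
    except base as e:
        print(f"caught as {base.__name__}: {e}")
```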
tvm-main/python/tvm/parser.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name
"""The legacy TVM parser."""
from .ir.base import deprecated

# pylint: disable=import-outside-toplevel


@deprecated("tvm.parser.parse", "tvm.relay.parse")
def parse(*args, **kwargs):
    """Deprecated, use `tvm.relay.parse` instead"""
    from tvm.relay import parse as _impl

    return _impl(*args, **kwargs)


@deprecated("tvm.parser.parse_expr", "tvm.relay.parse_expr")
def parse_expr(*args, **kwargs):
    """Deprecated, use `tvm.relay.parse_expr` instead"""
    from tvm.relay import parse_expr as _impl

    return _impl(*args, **kwargs)


@deprecated("tvm.parser.fromtext", "tvm.relay.fromtext")
def fromtext(*args, **kwargs):
    """Deprecated, use `tvm.relay.fromtext` instead"""
    from tvm.relay import fromtext as _impl

    return _impl(*args, **kwargs)


@deprecated("tvm.parser.SpanCheck", "tvm.relay.SpanCheck")
def SpanCheck(*args, **kwargs):
    """Deprecated, use `tvm.relay.SpanCheck` instead"""
    from tvm.relay import SpanCheck as _impl

    return _impl(*args, **kwargs)
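Each wrapper above relies on a `deprecated` decorator from `tvm.ir.base` (not shown in this dump). A minimal stand-in sketching the pattern — warn with the old and new names, then delegate — under the assumption that the real implementation behaves similarly:

```python
import functools
import warnings

def deprecated(old_name, new_name):
    """Minimal stand-in for tvm.ir.base.deprecated: warn, then delegate."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{old_name} is deprecated, use {new_name} instead",
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical deprecated entry point forwarding to a new implementation.
@deprecated("legacy.parse", "new.parse")
def parse(text):
    return f"parsed: {text}"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = parse("fn main() {}")
print(result)              # parsed: fn main() {}
print(caught[0].category)  # <class 'DeprecationWarning'>
```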
tvm-main/python/tvm/support.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Support infra of TVM.""" import json import textwrap import ctypes import os import sys import tvm import tvm._ffi from .runtime.module import Module from . import get_global_func def libinfo(): """Returns a dictionary containing compile-time info, including cmake flags and git commit hash Returns ------- info: Dict[str, str] The dictionary of compile-time info. 
""" get_lib_info_func = get_global_func("support.GetLibInfo", allow_missing=True) if get_lib_info_func is not None: lib_info = get_lib_info_func() if lib_info is None: return {} else: return {} return dict(lib_info.items()) def describe(): """ Print out information about TVM and the current Python environment """ info = list((k, v) for k, v in libinfo().items()) info = dict(sorted(info, key=lambda x: x[0])) print("Python Environment") sys_version = sys.version.replace("\n", " ") uname = os.uname() uname = f"{uname.sysname} {uname.release} {uname.version} {uname.machine}" lines = [ f"TVM version = {tvm.__version__}", f"Python version = {sys_version} ({sys.maxsize.bit_length() + 1} bit)", f"os.uname() = {uname}", ] print(textwrap.indent("\n".join(lines), prefix=" ")) print("CMake Options:") print(textwrap.indent(json.dumps(info, indent=2), prefix=" ")) class FrontendTestModule(Module): """A tvm.runtime.Module whose member functions are PackedFunc.""" def __init__(self, entry_name=None): underlying_mod = get_global_func("testing.FrontendTestModule")() handle = underlying_mod.handle # Set handle to NULL to avoid cleanup in c++ runtime, transferring ownership. # Both cython and ctypes FFI use c_void_p, so this is safe to assign here. underlying_mod.handle = ctypes.c_void_p(0) super(FrontendTestModule, self).__init__(handle) if entry_name is not None: self.entry_name = entry_name def add_function(self, name, func): self.get_function("__add_function")(name, func) def __setitem__(self, key, value): self.add_function(key, value) tvm._ffi._init_api("support", __name__)
tvm-main/python/tvm/generic.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""Generic operators."""
# pylint:disable=unused-wildcard-import, wildcard-import
from .tir.generic import *
tvm-main/python/tvm/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=redefined-builtin, wildcard-import """TVM: Open Deep Learning Compiler Stack.""" import multiprocessing import sys import os import traceback # top-level alias # tvm._ffi from ._ffi.base import TVMError, __version__, _RUNTIME_ONLY from ._ffi.runtime_ctypes import DataTypeCode, DataType from ._ffi import register_object, register_func, register_extension, get_global_func # top-level alias # tvm.runtime from .runtime.object import Object from .runtime.ndarray import device, cpu, cuda, gpu, opencl, cl, vulkan, metal, mtl from .runtime.ndarray import vpi, rocm, ext_dev, hexagon from .runtime import ndarray as nd # tvm.error from . import error # tvm.ir from .ir import IRModule from .ir import transform from .ir import instrument from .ir import container from .ir import PoolInfo from .ir import WorkspacePoolInfo from .ir import ConstantPoolInfo from .ir import PoolInfoProperties from .ir import WorkspaceMemoryPools from .ir import ConstantMemoryPools from . import ir # tvm.tir from . import tir # tvm.target from . import target # tvm.te from . import te # tvm.driver from .driver import build, lower # tvm.parser from . import parser # others from . import arith # support infra from . 
import support # Contrib initializers from .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel if not _RUNTIME_ONLY and support.libinfo().get("USE_MICRO", "OFF") == "ON": from . import micro # NOTE: This file should be python2 compatible so we can # raise proper error message when user run the package using # an older version of the python def _should_print_backtrace(): in_pytest = "PYTEST_CURRENT_TEST" in os.environ tvm_backtrace = os.environ.get("TVM_BACKTRACE", "0") try: tvm_backtrace = bool(int(tvm_backtrace)) except ValueError: raise ValueError( "invalid value for TVM_BACKTRACE {}, please set to 0 or 1.".format(tvm_backtrace) ) return in_pytest or tvm_backtrace def tvm_wrap_excepthook(exception_hook): """Wrap given excepthook with TVM additional work.""" def wrapper(exctype, value, trbk): """Clean subprocesses when TVM is interrupted.""" if exctype is error.DiagnosticError and not _should_print_backtrace(): # TODO(@jroesch): consider moving to C++? print("note: run with `TVM_BACKTRACE=1` environment variable to display a backtrace.") else: exception_hook(exctype, value, trbk) if hasattr(multiprocessing, "active_children"): # pylint: disable=not-callable for p in multiprocessing.active_children(): p.terminate() return wrapper sys.excepthook = tvm_wrap_excepthook(sys.excepthook)
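The `tvm_wrap_excepthook` function above chains a new hook in front of whatever `sys.excepthook` was previously installed. The pattern can be sketched independently — a toy version where the extra work is just a print (TVM's real hook terminates multiprocessing children), demonstrated with a recording stand-in rather than by actually crashing the interpreter:

```python
def wrap_excepthook(original_hook):
    """Sketch of the hook-chaining pattern used by tvm_wrap_excepthook:
    perform extra work, then defer to the previously installed hook."""
    def wrapper(exctype, value, trbk):
        print(f"[cleanup] uncaught {exctype.__name__}: {value}")
        original_hook(exctype, value, trbk)
    return wrapper

# Demonstrate with a recording stand-in instead of the real sys.excepthook.
seen = []
hook = wrap_excepthook(lambda t, v, tb: seen.append(t.__name__))
hook(RuntimeError, RuntimeError("boom"), None)
print(seen)  # ['RuntimeError']
# Real installation would be: sys.excepthook = wrap_excepthook(sys.excepthook)
```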
tvm-main/python/tvm/te/autodiff.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Automatic differentiation of tensor expressions.""" from . import _ffi_api def gradient(output, inputs, head=None): """Perform reverse-mode automatic differentiation. Parameters ---------- output : Tensor The tensor to differentiate. inputs : List[Tensor] The list of input tensors to be differentiated wrt. head : Tensor The adjoint of the output, in other words, some tensor, by which the Jacobians will be multiplied. Its shape must be of the form `prefix + output.shape`. If `None` is passed, the identity tensor of shape `output.shape + output.shape` will be used. Returns ------- tensors: List[Tensor] The result gradient, in the same order as the inputs Example ------- .. 
code-block:: python x = tvm.placeholder((32, 3, 28, 28), name='x') w1 = tvm.placeholder((10, 3, 3, 3), name='w1') w2 = tvm.placeholder((10, 10, 3, 3), name='w2') z1 = topi.nn.conv2d(x, w1, 1, 1, 1) z2 = topi.nn.conv2d(z1, w2, 1, 1, 1) y = topi.sum(z2) # produce gradients [dw1, dw2] = tvm.gradient(y, [w1, w2]) # produce Jacobians [jw1, jw2] = tvm.gradient(z2, [w1, w2]) # produce gradients, the head adjoint for z2 is provided manually [dw1, dw2] = tvm.gradient(z2, [w1, w2], topi.full_like(z2, 1.0)) """ if not isinstance(inputs, list): inputs = [inputs] return _ffi_api.Gradient(output, inputs, head)
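The adjoint ("head") idea behind `gradient` can be illustrated on scalars. This toy example is unrelated to TVM's tensor-level implementation; it only shows the reverse-mode bookkeeping: seed the output adjoint (the head, here `1.0`), then propagate adjoints backwards, accumulating a contribution per use of each input:

```python
def grad_f(x1, x2):
    """Toy scalar reverse-mode AD for y = x1 * x2 + x1."""
    # Forward pass
    t = x1 * x2
    y = t + x1
    # Backward pass: seed the output adjoint (the "head")
    dy = 1.0
    dt = dy          # d y / d t = 1
    dx1 = dy         # direct x1 term in the sum
    dx1 += dt * x2   # through t = x1 * x2
    dx2 = dt * x1
    return y, dx1, dx2

y, dx1, dx2 = grad_f(3.0, 4.0)
print(y, dx1, dx2)  # 15.0 5.0 3.0
```

Passing a different head value would scale both adjoints, which is exactly what the optional `head` tensor does for `te.gradient` above.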
tvm-main/python/tvm/te/tag.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Tag class for TVM operators.""" import warnings from tvm._ffi.base import decorate class TagScope(object): """Tag scope object to set tag for operators, working as context manager and decorator both. See also tag_scope. """ _current = None @classmethod def get_current(cls): if cls._current: cls._current.accessed = True return cls._current def __init__(self, tag): self._old_scope = None self.tag = tag self.accessed = False def __enter__(self): if TagScope._current is not None: raise ValueError("nested op_tag is not allowed for now") self._old_scope = TagScope._current TagScope._current = self return self def __exit__(self, ptype, value, trace): assert self._old_scope is None if not self.accessed: warnings.warn(f"Tag '{self.tag}' declared via TagScope was not used.") TagScope._current = self._old_scope def __call__(self, fdecl): def tagged_fdecl(func, *args, **kwargs): with self: return func(*args, **kwargs) return decorate(fdecl, tagged_fdecl) def tag_scope(tag): """The operator tag scope. Parameters ---------- tag: str The tag name. Returns ------- tag_scope: TagScope The tag scope object, which can be used as decorator or context manger. Example ------- .. 
code-block:: python n = te.var('n') m = te.var('m') l = te.var('l') A = te.placeholder((n, l), name='A') B = te.placeholder((m, l), name='B') k = te.reduce_axis((0, l), name='k') with tvm.te.tag_scope(tag='matmul'): C = te.compute((n, m), lambda i, j: te.sum(A[i, k] * B[j, k], axis=k)) # or use tag_scope as decorator @tvm.te.tag_scope(tag="conv") def compute_relu(data): return te.compute(data.shape, lambda *i: tvm.tir.Select(data(*i) < 0, 0.0, data(*i))) """ return TagScope(tag)
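The dual context-manager/decorator behaviour of `TagScope` can be shown standalone. A simplified sketch (it uses `functools.wraps` instead of TVM's `decorate` helper, and omits the unused-tag warning):

```python
import functools

class Scope:
    """Simplified sketch of the TagScope pattern: one object usable
    both as a context manager and as a decorator."""
    _current = None

    def __init__(self, tag):
        self.tag = tag
        self._old = None

    def __enter__(self):
        if Scope._current is not None:
            raise ValueError("nested scopes are not allowed")
        self._old, Scope._current = Scope._current, self
        return self

    def __exit__(self, *exc):
        Scope._current = self._old

    def __call__(self, func):
        # Decorator use: run the wrapped function inside this scope.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with self:
                return func(*args, **kwargs)
        return wrapper

with Scope("matmul"):
    assert Scope._current.tag == "matmul"

@Scope("conv")
def compute():
    return Scope._current.tag

print(compute())  # conv
```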
tvm-main/python/tvm/te/_ffi_api.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""FFI APIs for tvm.te"""
import tvm._ffi

tvm._ffi._init_api("te", __name__)
tvm-main/python/tvm/te/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-import, redefined-builtin, wildcard-import
"""Namespace for Tensor Expression Language"""
# expose all operators in tvm tir.op
from tvm.tir import any, all, min_value, max_value, trace
from tvm.tir import exp, erf, tanh, sigmoid, log, tan, cos, sin, sqrt, rsqrt, floor, ceil
from tvm.tir import sinh, cosh, log2, log10
from tvm.tir import asin, asinh, acos, acosh, atan, atanh
from tvm.tir import trunc, abs, round, nearbyint, power, popcount, fmod, if_then_else
from tvm.tir import isnan, isfinite, isinf
from tvm.tir import div, indexdiv, indexmod, truncdiv, truncmod, floordiv, floormod
from tvm.tir import comm_reducer, min, max, sum
from tvm.tir import add, subtract, multiply

from .schedule import (
    Schedule,
    Stage,
    create_schedule,
    SpecializedCondition,
    AXIS_SEPARATOR,
)
from .tensor import TensorSlice, Tensor
from .tensor_intrin import decl_tensor_intrin
from .tag import tag_scope
from .operation import placeholder, compute, scan, extern, var, size_var, const
from .operation import thread_axis, reduce_axis
from .operation import create_prim_func
from .operation import extern_primfunc
from .tensor import PlaceholderOp, ComputeOp, TensorComputeOp, ScanOp, ExternOp, HybridOp
from .autodiff import gradient

from . import hybrid
# File: tvm-main/python/tvm/te/schedule.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-import
"""The computation schedule API of TVM."""
import collections
import inspect
from typing import Callable, List

import tvm._ffi
from tvm._ffi.base import string_types
from tvm.ir import container as _container
from tvm.runtime import Object, convert
from tvm.tir import Buffer, IndexMap, IterVar, Var

from . import _ffi_api
from . import tensor as _tensor


@tvm._ffi.register_object
class Split(Object):
    """Split operation on axis."""


@tvm._ffi.register_object
class Fuse(Object):
    """Fuse operation on axis."""


@tvm._ffi.register_object
class Singleton(Object):
    """Singleton axis."""


def create_schedule(ops):
    """Create a schedule for a list of ops.

    Parameters
    ----------
    ops : list of Operations
        The source expression.

    Returns
    -------
    sch : schedule.Schedule
        The created schedule.
    """
    if not isinstance(ops, (list, _container.Array)):
        ops = [ops]
    return _ffi_api.CreateSchedule(ops)


@tvm._ffi.register_object
class Schedule(Object):
    """Schedule for all the stages."""

    def __getitem__(self, k):
        if isinstance(k, _tensor.Tensor):
            k = k.op
        if not isinstance(k, _tensor.Operation):
            raise ValueError("Expect schedule key to be Tensor or Operation")
        if k not in self.stage_map:
            raise ValueError(f"Cannot find the operation {k} in schedule")
        return self.stage_map[k]

    def normalize(self):
        """Build a normalized schedule from the current schedule.

        Insert necessary rebase to make certain iter vars start from 0.
        This is needed before bound inference and follow-up steps.

        Returns
        -------
        sch : Schedule
            The normalized schedule.
        """
        return _ffi_api.ScheduleNormalize(self)

    def create_group(self, outputs, inputs, include_inputs=False):
        """Create a stage group by giving output and input boundaries.

        The operators between outputs and inputs are placed as members of the group.
        outputs are included in the group, while inputs are not.

        Parameters
        ----------
        outputs : list of Tensors
            The outputs of the group.

        inputs : list of Tensors
            The inputs of the group.

        include_inputs : boolean, optional
            Whether to include input operations in the group if they are used by outputs.

        Returns
        -------
        group : Stage
            A virtual stage that represents the group; the user can use compute_at
            to move the attachment point of the group.
        """
        if isinstance(outputs, _tensor.Tensor):
            outputs = [outputs]
        if isinstance(inputs, _tensor.Tensor):
            inputs = [inputs]
        return _ffi_api.ScheduleCreateGroup(self, outputs, inputs, include_inputs)

    def cache_read(self, tensor, scope, readers):
        """Create a cache read of the original tensor for readers.

        This will mutate the body of the readers.
        A new cache stage will be created for the tensor.
        Call this before doing any split/fuse schedule.

        Parameters
        ----------
        tensor : Tensor
            The tensor to be cached.
        scope : str
            The scope of the cache.

        readers : list of Tensor or Operation
            The readers to read the cache.

        Returns
        -------
        cache : Tensor
            The created cache tensor.
        """
        if isinstance(readers, (_tensor.Tensor, _tensor.Operation)):
            readers = [readers]
        readers = [t.op if isinstance(t, _tensor.Tensor) else t for t in readers]
        return _ffi_api.ScheduleCacheRead(self, tensor, scope, readers)

    def cache_write(self, tensor, scope):
        """Create a cache write of the original tensor, before storing into tensor.

        This will mutate the body of the tensor.
        A new cache stage will be created before feeding into the tensor.

        This function can be used to support data layout transformation.
        If there is a split/fuse/reorder on the data parallel axis of the tensor
        before cache_write is called, the intermediate cache stores
        the data in the layout given by the iteration order of the leaf axes.
        The data will be transformed back to the original layout in the original tensor.
        The user can further call compute_inline to inline the original layout and keep
        the data stored in the transformed layout.

        Parameters
        ----------
        tensor : Tensor, list or tuple
            The tensors to be fed to. All the tensors must be produced by one computeOp.

        scope : str
            The scope of the cache.

        Returns
        -------
        cache : Tensor
            The created cache tensor.
        """
        return _ffi_api.ScheduleCacheWrite(self, tensor, scope)

    def rfactor(self, tensor, axis, factor_axis=0):
        """Factor a reduction axis in tensor's schedule to be an explicit axis.

        This will create a new stage that generates the new tensor with axis
        as the first dimension. The tensor's body will be rewritten as a reduction
        over the factored tensor.

        Parameters
        ----------
        tensor : Tensor
            The tensor to be factored.

        axis : IterVar
            The reduction axis in the schedule to be factored.

        factor_axis : int
            The position where the new axis is placed.

        Returns
        -------
        tfactor : Tensor or Array of Tensor
            The created factored tensor.
        """
        factored = _ffi_api.ScheduleRFactor(self, tensor, axis, factor_axis)
        return factored[0] if len(factored) == 1 else factored


@tvm._ffi.register_object
class Stage(Object):
    """A Stage represents the schedule for one operation."""

    def split(self, parent, factor=None, nparts=None):
        """Split the stage either by factor, providing the outer scope, or by nparts.

        Parameters
        ----------
        parent : IterVar
            The parent iter var.

        factor : Expr, optional
            The splitting factor.

        nparts : Expr, optional
            The number of outer parts.

        Returns
        -------
        outer : IterVar
            The outer variable of iteration.

        inner : IterVar
            The inner variable of iteration.
        """
        if nparts is not None:
            if factor is not None:
                raise ValueError("Do not need to provide both outer and nparts")
            outer, inner = _ffi_api.StageSplitByNParts(self, parent, nparts)
        else:
            if factor is None:
                raise ValueError("Either nparts or factor need to be provided")
            outer, inner = _ffi_api.StageSplitByFactor(self, parent, factor)
        return outer, inner

    def fuse(self, *args):
        """Fuse multiple consecutive iteration variables into a single iteration variable.

        fused = fuse(...fuse(fuse(args[0], args[1]), args[2]),..., args[-1])
        The order is from outer to inner.

        Parameters
        ----------
        args : list of IterVars
            Itervars that precede each other.

        Returns
        -------
        fused : IterVar
            The fused variable of iteration.
        """
        fused = _ffi_api.StageFuse(self, args)
        return fused

    def set_scope(self, scope):
        """Set the thread scope of this stage.

        Parameters
        ----------
        scope : str
            The thread scope of this stage.
        """
        return _ffi_api.StageSetScope(self, scope)

    def bind(self, ivar, thread_ivar):
        """Bind ivar to thread index thread_ivar.

        Parameters
        ----------
        ivar : IterVar
            The iteration to be bound to the thread.

        thread_ivar : IterVar
            The thread to be bound.
        """
        _ffi_api.StageBind(self, ivar, thread_ivar)

    def env_threads(self, threads):
        """Mark threads to be launched at the outer scope of the composed op.

        Parameters
        ----------
        threads : list of threads
            The threads to be launched.
        """
        if isinstance(threads, IterVar):
            threads = [threads]
        _ffi_api.StageEnvThreads(self, threads)

    def set_store_predicate(self, predicate):
        """Set the predicate under which store to the array can be performed.

        Use this when there are duplicated threads doing the same store and we only
        need one of them to do the store.

        Parameters
        ----------
        predicate : Expr
            The guard condition for the store.
        """
        _ffi_api.StageSetStorePredicate(self, predicate)

    def compute_at(self, parent, scope):
        """Attach the stage at the parent's scope.

        Parameters
        ----------
        parent : Stage
            The parent stage.

        scope : IterVar
            The loop scope to be attached to.
        """
        _ffi_api.StageComputeAt(self, parent, scope)

    def compute_inline(self):
        """Mark the stage as inline."""
        _ffi_api.StageComputeInline(self)

    def compute_root(self):
        """Attach the stage at root and mark it as root."""
        _ffi_api.StageComputeRoot(self)

    def reorder(self, *args):
        """Reorder the iteration in the specified order.

        Parameters
        ----------
        args : list of IterVar
            The order to be reordered to.
        """
        _ffi_api.StageReorder(self, args)

    def tile(self, x_parent, y_parent, x_factor, y_factor):
        """Perform tiling on two dimensions.

        The final loop order from outermost to innermost is
        [x_outer, y_outer, x_inner, y_inner].

        Parameters
        ----------
        x_parent : IterVar
            The original x dimension.
        y_parent : IterVar
            The original y dimension.
        x_factor : Expr
            The stride factor on x axis.
        y_factor : Expr
            The stride factor on y axis.

        Returns
        -------
        x_outer : IterVar
            Outer axis of x dimension.
        y_outer : IterVar
            Outer axis of y dimension.
        x_inner : IterVar
            Inner axis of x dimension.
        y_inner : IterVar
            Inner axis of y dimension.
        """
        x_outer, y_outer, x_inner, y_inner = _ffi_api.StageTile(
            self, x_parent, y_parent, x_factor, y_factor
        )
        return x_outer, y_outer, x_inner, y_inner

    def vectorize(self, var):
        """Vectorize the iteration.
        Parameters
        ----------
        var : IterVar
            The iteration to be vectorized.
        """
        _ffi_api.StageVectorize(self, var)

    def tensorize(self, var, tensor_intrin):
        """Tensorize the computation enclosed by var with tensor_intrin.

        Parameters
        ----------
        var : IterVar
            The iteration boundary of tensorization.

        tensor_intrin : TensorIntrin
            The tensor intrinsic used for computation.
        """
        _ffi_api.StageTensorize(self, var, tensor_intrin)

    def unroll(self, var):
        """Unroll the iteration.

        Parameters
        ----------
        var : IterVar
            The iteration to be unrolled.
        """
        _ffi_api.StageUnroll(self, var)

    def parallel(self, var):
        """Parallelize the iteration.

        Parameters
        ----------
        var : IterVar
            The iteration to be parallelized.
        """
        _ffi_api.StageParallel(self, var)

    def pragma(self, var, pragma_type, pragma_value=None):
        """Annotate the iteration with pragma.

        This will translate to a pragma_scope surrounding
        the corresponding loop generated.
        Useful to support experimental features and extensions.

        Parameters
        ----------
        var : IterVar
            The iteration to be annotated.

        pragma_type : str
            The pragma string to be annotated.

        pragma_value : Expr, optional
            The pragma value to pass along the pragma.

        Note
        ----
        Most pragmas are advanced/experimental features
        and may be subject to change. List of supported pragmas:

        - **debug_skip_region**

          Force skip the region marked by the axis and turn it into no-op.
          This is useful for debug purposes.

        - **parallel_launch_point**

          Specify to launch parallel threads outside the
          specified iteration loop. By default the threads
          launch at the point of the parallel construct.
          This pragma moves the launching point to an even outer scope.
          The threads are launched once and reused across multiple
          parallel constructs as a BSP style program.

        - **parallel_barrier_when_finish**

          Insert a synchronization barrier between working threads
          after the specified loop iteration finishes.

        - **parallel_stride_pattern**

          Hint the parallel loop to execute in strided pattern.
          :code:`for (int i = task_id; i < end; i += num_task)`

        """
        if isinstance(pragma_value, string_types):
            pragma_value = convert(pragma_value)
        _ffi_api.StagePragma(self, var, pragma_type, pragma_value)

    def prefetch(self, tensor, var, offset):
        """Prefetch the specified variable.

        Parameters
        ----------
        tensor : Tensor
            The tensor to be prefetched.
        var : IterVar
            The loop point at which the prefetching is applied.
        offset : Expr
            The number of iterations to be prefetched before actual execution.
        """
        _ffi_api.StagePrefetch(self, tensor, var, offset)

    def storage_align(self, axis, factor, offset):
        """Set an alignment requirement for a specific axis.

        This ensures that stride[axis] == k * factor + offset for some k.
        This is useful to set the memory layout for a more friendly memory
        access pattern. For example, we can set alignment to be
        factor=2, offset=1 to avoid bank conflicts for thread access on
        the higher dimension in GPU shared memory.

        Parameters
        ----------
        axis : IterVar
            The axis dimension to be aligned.
        factor : int
            The factor in the alignment specification.
        offset : int
            The offset in the alignment specification.
        """
        _ffi_api.StageStorageAlign(self, axis, factor, offset)

    def double_buffer(self):
        """Compute the current stage via double buffering.

        This can only be applied to an intermediate stage.
        This will double the storage cost of the current stage.
        Can be useful to hide load latency.
        """
        _ffi_api.StageDoubleBuffer(self)

    def rolling_buffer(self):
        """Compute the current stage via rolling buffering.

        This can only be applied to an intermediate stage.
        This will change the storage cost of the current stage.
        """
        _ffi_api.StageRollingBuffer(self)

    def transform_layout(self, mapping_function: Callable[..., List[tvm.tir.PrimExpr]]):
        """Defines the layout transformation for the current stage's tensor.

        The map from initial_indices to final_indices must be an
        invertible affine transformation. This method may be called
        more than once for a given tensor, in which case each
        transformation is applied sequentially.
        If the stage is a ComputeOp, then the iteration order of the
        compute stage is rewritten to be a row-major traversal of the
        tensor, and the new loop iteration variables are returned.
        For all other stages, the loop iteration order is unmodified,
        and the return value is None.

        Parameters
        ----------
        mapping_function : Callable[..., List[tvm.tir.PrimExpr]]
            A callable that accepts N arguments of type tvm.tir.Var,
            and outputs a list of PrimExpr. The input arguments
            represent the location of a value in the current stage's
            tensor, using the pre-transformation layout. The return
            value of the function gives the location of that value in
            the current stage's tensor, using the post-transformation
            layout.

        Returns
        -------
        new_iter_vars : Optional[List[tvm.tir.IterVar]]
            If the stage is a ComputeOp, then the return will be the
            updated loop iteration variables over the data array, in
            the same order as the output values from the
            `mapping_function`.

            Otherwise, the return value is None.

        Examples
        --------
        .. code-block:: python

            # ``A`` is a tensor whose compute definition is in NHWC
            # format, and should be transformed into NCHWc format.

            s[A].transform_layout(
                lambda n, h, w, c: [n, c // 4, h, w, c % 4]
            )

        .. code-block:: python

            # ``A`` is a tensor whose compute definition is in an
            # arbitrary format, and should be transformed such that
            # the last index is split, with the slower-changing index
            # of the split placed at the slowest changing dimension.

            s[A].transform_layout(
                lambda *indices, i: [i // 4, *indices, i % 4]
            )

        .. code-block:: python

            # ``B`` is a tensor defined by te.compute to be a copy of
            # ``A``, and should be transformed such that ``B``'s layout
            # is a transpose of ``A``'s layout. The loop iteration
            # that computes ``B`` will correspond to ``B``'s memory
            # layout.
            A = te.placeholder([n, m])
            B = te.compute(A.shape, lambda i, j: A[i, j])
            s = te.create_schedule(B.op)

            s[B].transform_layout(lambda i, j: [j, i])

        """
        ndim = len(self.op.output(0).shape)
        index_map, axis_separators = IndexMap.from_func_with_separators(
            mapping_function, ndim=ndim, index_dtype="int32"
        )

        new_iter_vars = _ffi_api.StageTransformLayout(
            self, index_map.initial_indices, index_map.final_indices
        )
        _ffi_api.StageSetAxisSeparators(self, axis_separators)

        return new_iter_vars or None


@tvm._ffi.register_object
class SpecializedCondition(Object):
    """Specialized condition to enable op specialization."""

    def __init__(self, conditions):
        """Create a specialized condition.

        .. note::
            Conditions are represented in conjunctive normal form (CNF).
            Each condition should be a simple expression, e.g., n > 16,
            m % 8 == 0, etc., where n, m are tvm.Var that represent a
            dimension in the tensor shape.

        Parameters
        ----------
        conditions : List of tvm.Expr
            List of conditions in conjunctive normal form (CNF).
        """
        if not isinstance(conditions, (list, _container.Array)):
            conditions = [conditions]
        self.__init_handle_by_constructor__(_ffi_api.CreateSpecializedCondition, conditions)

    @staticmethod
    def current():
        """Returns the current specialized condition."""
        return _ffi_api.GetCurrentSpecialization()

    def __enter__(self):
        _ffi_api.EnterSpecializationScope(self)
        return self

    def __exit__(self, ptype, value, trace):
        _ffi_api.ExitSpecializationScope(self)


# Sentinel value used to indicate which groups of pre-flattening axes
# should be used for post-flattening axes. Moved from
# te.AXIS_SEPARATOR to tir.IndexMap.AXIS_SEPARATOR for general use,
# maintained here for backwards compatibility.
AXIS_SEPARATOR = IndexMap.AXIS_SEPARATOR


tvm._ffi._init_api("schedule", __name__)
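The `split`, `fuse`, and `tile` primitives in the `Stage` class above all reduce to simple index arithmetic on the iteration space: a split by `factor` rewrites a loop `i` into `outer`/`inner` with `i == outer * factor + inner`, and fuse is its inverse. As a rough stand-alone illustration (plain Python, not TVM; the helper names `split_iter` and `fuse_index` are made up for this sketch):

```python
# Illustration (plain Python, not TVM) of the index arithmetic behind
# Stage.split and Stage.fuse. The helper names are hypothetical.

def split_iter(extent, factor):
    """Yield (outer, inner, i) for a loop of `extent` split by `factor`."""
    for outer in range((extent + factor - 1) // factor):
        for inner in range(factor):
            i = outer * factor + inner
            if i < extent:  # guard the tail when factor does not divide extent
                yield outer, inner, i

def fuse_index(outer, inner, inner_extent):
    """Recover the fused index from (outer, inner), mirroring Stage.fuse."""
    return outer * inner_extent + inner

# The split loop nest visits the original iteration space in order:
visited = [i for _, _, i in split_iter(10, 4)]
assert visited == list(range(10))

# fuse is the inverse of split:
for outer, inner, i in split_iter(10, 4):
    assert fuse_index(outer, inner, 4) == i
```

`tile` is the same split applied on two axes followed by a reorder into `[x_outer, y_outer, x_inner, y_inner]`.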
# File: tvm-main/python/tvm/te/tensor_intrin.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Tensor intrinsics"""
import tvm._ffi
import tvm.tir

from tvm.runtime import Object, convert
from tvm.ir import Range
from .tensor import PlaceholderOp

from . import tensor as _tensor
from . import _ffi_api


def _get_region(tslice):
    region = []
    for idx in tslice.indices:
        if isinstance(idx, slice):
            assert idx.step is None
            region.append(Range(idx.start, idx.stop))
        else:
            if isinstance(idx, tvm.tir.IterVar):
                begin = idx.var
            else:
                begin = idx
            region.append(Range.from_min_extent(begin, 1))
    return region


@tvm._ffi.register_object
class TensorIntrin(Object):
    """Tensor intrinsic functions for certain computation.
    See Also
    --------
    decl_tensor_intrin: Construct a TensorIntrin
    """

    def __call__(self, *args, **kwargs):
        tensors = [x.tensor for x in args if isinstance(x, _tensor.TensorSlice)]
        scalar_inputs = [x for x in args if not isinstance(x, _tensor.TensorSlice)]
        regions = [_get_region(x) for x in args if isinstance(x, _tensor.TensorSlice)]
        reduce_axis = []
        if "reduce_axis" in kwargs:
            reduce_axis = kwargs["reduce_axis"]
            if not isinstance(reduce_axis, (list, tuple)):
                reduce_axis = [reduce_axis]
            reduce_axis = convert(reduce_axis)
        if scalar_inputs:
            scalar_inputs = convert(scalar_inputs)
        return _ffi_api.TensorIntrinCall(self, tensors, regions, reduce_axis, scalar_inputs)


def decl_tensor_intrin(
    op, fcompute, name="tensor_intrin", binds=None, scalar_params=None, default_buffer_params=None
):
    """Declare a tensor intrinsic function.

    Parameters
    ----------
    op: Operation
        The symbolic description of the intrinsic operation

    fcompute: lambda function of inputs, outputs-> stmt
        Specifies the IR statement to do the computation.
        See the following note for the function signature of fcompute.

        .. note::
            **Parameters**

            - **ins** (list of :any:`tvm.tir.Buffer`) - Placeholder for each input
            - **outs** (list of :any:`tvm.tir.Buffer`) - Placeholder for each output

            **Returns**

            - **stmt** (:any:`tvm.tir.Stmt`, or tuple of three stmts)
            - If a single stmt is returned, it represents the body
            - If a tuple of three stmts is returned, they correspond to body,
              reduce_init, reduce_update

    name: str, optional
        The name of the intrinsic.

    binds: dict of :any:`Tensor` to :any:`tvm.tir.Buffer`, optional
        Dictionary that maps the Tensor to Buffer which specifies the data layout
        requirement of the function. By default, a new compact buffer is created
        for each tensor in the argument.

    scalar_params: a list of variables used by op, whose values will be passed
                   as scalar_inputs when the tensor intrinsic is called.

    default_buffer_params: Optional[dict]
        Dictionary of buffer arguments to be passed when constructing a buffer.
    Returns
    -------
    intrin: TensorIntrin
        A TensorIntrin that can be used in a tensorize schedule.
    """
    if not isinstance(op, _tensor.Operation):
        raise TypeError("expect Operation")
    inputs = op.input_tensors
    binds = binds if binds else {}
    tensors = list(inputs)
    for i in range(op.num_outputs):
        tensors.append(op.output(i))

    binds_list = []
    for t in inputs:
        if not isinstance(t.op, PlaceholderOp):
            raise ValueError("Do not yet support composition op")

    default_buffer_params = {} if default_buffer_params is None else default_buffer_params
    for t in tensors:
        buf = (
            binds[t]
            if t in binds
            else tvm.tir.decl_buffer(t.shape, t.dtype, t.op.name, **default_buffer_params)
        )
        binds_list.append(buf)

    if scalar_params:
        body = fcompute(binds_list[: len(inputs)], binds_list[len(inputs) :], scalar_params)
    else:
        body = fcompute(binds_list[: len(inputs)], binds_list[len(inputs) :])
        scalar_params = []
    if isinstance(body, (tvm.tir.PrimExpr, tvm.tir.Stmt)):
        body = [body]
    body = [tvm.tir.Evaluate(x) if isinstance(x, tvm.tir.PrimExpr) else x for x in body]
    if len(body) < 3:
        body += [None] * (3 - len(body))
    return _ffi_api.TensorIntrin(name, op, inputs, binds_list, scalar_params, *body)
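The tail of `decl_tensor_intrin` normalizes `fcompute`'s return value: it accepts either a single statement or a tuple of up to three statements (body, reduce_init, reduce_update) and pads the missing entries with None. A stand-alone sketch of that convention (plain Python; `pad_intrin_body` is a hypothetical helper, not part of TVM):

```python
def pad_intrin_body(body):
    # Mirror decl_tensor_intrin's normalization: a single value becomes
    # [body, None, None]; a (body, reduce_init) pair becomes
    # [body, reduce_init, None]; a full triple is left as-is.
    if not isinstance(body, (list, tuple)):
        body = [body]
    body = list(body)
    if len(body) < 3:
        body += [None] * (3 - len(body))
    return body

assert pad_intrin_body("body") == ["body", None, None]
assert pad_intrin_body(("body", "init")) == ["body", "init", None]
assert pad_intrin_body(("body", "init", "update")) == ["body", "init", "update"]
```

This is why a plain (non-reduction) intrinsic can return one statement while a reduction intrinsic returns the triple.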
# File: tvm-main/python/tvm/te/operation.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Operation class for computation declaration."""
import inspect

# pylint: disable=invalid-name
from numbers import Integral as _Integral
from typing import List, Optional

import tvm._ffi
import tvm.arith._ffi_api
import tvm.tir
import tvm.tir._ffi_api
from tvm._ffi.base import string_types
from tvm.ir import Array
from tvm.runtime import convert

from . import _ffi_api
from . import tag as _tag
from . import tensor as _tensor


def placeholder(shape, dtype=None, name="placeholder"):
    """Construct an empty tensor object.

    Parameters
    ----------
    shape: Tuple of Expr
        The shape of the tensor

    dtype: str, optional
        The data type of the tensor

    name: str, optional
        The name hint of the tensor

    Returns
    -------
    tensor: Tensor
        The created tensor
    """
    shape = (shape,) if isinstance(shape, tvm.tir.PrimExpr) else shape
    dtype = "float32" if dtype is None else dtype
    return _ffi_api.Placeholder(shape, dtype, name)


def compute(shape, fcompute, name="compute", tag="", attrs=None, varargs_names=None):
    """Construct a new tensor by computing over the shape domain.
    The compute rule is result[axis] = fcompute(axis)

    Parameters
    ----------
    shape: Tuple of Expr
        The shape of the tensor

    fcompute: lambda function of indices-> value
        Specifies the input source expression

    name: str, optional
        The name hint of the tensor

    tag: str, optional
        Additional tag information about the compute.

    attrs: dict, optional
        The additional auxiliary attributes about the compute.

    varargs_names: list, optional
        The names to use for each of the varargs. If not supplied, the varargs
        will be called i1, i2, ...

    Returns
    -------
    tensor: Tensor
        The created tensor
    """
    if _tag.TagScope.get_current() is not None:
        if tag != "":
            raise ValueError("nested tag is not allowed for now")
        tag = _tag.TagScope.get_current().tag
    shape = (shape,) if isinstance(shape, tvm.tir.PrimExpr) else shape
    # for python3
    shape = tuple([int(s) if isinstance(s, float) else s for s in shape])
    out_ndim = len(shape)

    argspec = inspect.getfullargspec(fcompute)
    if len(argspec.args) == 0 and argspec.varargs is None:
        arg_names = [f"i{i}" for i in range(out_ndim)]
    elif argspec.varargs is not None:
        # if there is a varargs, it takes the remaining dimensions of out_ndim
        num_remaining_args = out_ndim - len(argspec.args)
        if varargs_names is not None:
            if len(varargs_names) != num_remaining_args:
                raise RuntimeError(
                    f"Number of varargs ({num_remaining_args}) does not match number "
                    f"of varargs_names ({len(varargs_names)})"
                )
            arg_names = argspec.args + varargs_names
        else:
            arg_names = argspec.args + [f"i{i}" for i in range(out_ndim - len(argspec.args))]
    else:
        arg_names = argspec.args
        # if there are fewer args than out dimensions, the remaining dimensions
        # are implicitly broadcast
        out_ndim = len(arg_names)
    assert argspec.varkw is None, "Variable keyword arguments not supported in fcompute"
    assert argspec.defaults is None, "Default arguments not supported in fcompute"
    assert len(argspec.kwonlyargs) == 0, "Keyword arguments are not supported in fcompute"

    if out_ndim != len(arg_names):
        raise ValueError(
            "Number of args to fcompute does not match dimension, "
            f"args={len(arg_names)}, dimension={out_ndim}"
        )

    dim_var = [tvm.tir.IterVar((0, s), x, 0) for x, s in zip(arg_names, shape[:out_ndim])]
    body = fcompute(*[v.var for v in dim_var])

    if isinstance(body, _tensor.TensorIntrinCall):
        for i, s in enumerate(shape[out_ndim:]):
            var_name = "ax" + str(i)
            dim_var.append(tvm.tir.IterVar((0, s), var_name, 4))
        op_node = _ffi_api.TensorComputeOp(
            name,
            tag,
            dim_var,
            body.reduce_axis,
            out_ndim,
            body.intrin,
            body.tensors,
            body.regions,
            body.scalar_inputs,
        )
    else:
        if not isinstance(body, (list, tuple)):
            body = [body]
        body = convert(body)
        op_node = _ffi_api.ComputeOp(name, tag, attrs, dim_var, body)

    num = op_node.num_outputs
    outputs = tuple(op_node.output(i) for i in range(num))
    return outputs[0] if num == 1 else outputs


def scan(init, update, state_placeholder, inputs=None, name="scan", tag="", attrs=None):
    """Construct new tensors by scanning over axis.

    Parameters
    ----------
    init: Tensor or list of Tensor
        The initial condition of the first init.shape[0] timestamps

    update: Tensor or list of Tensor
        The update rule of the scan given by symbolic tensor.

    state_placeholder: Tensor or list of Tensor
        The placeholder variables used by update.

    inputs: Tensor or list of Tensor, optional
        The list of inputs to the scan. This is not required, but can
        be useful for the compiler to detect the scan body faster.

    name: str, optional
        The name hint of the tensor

    tag: str, optional
        Additional tag information about the compute.

    attrs: dict, optional
        The additional auxiliary attributes about the compute.

    Returns
    -------
    tensor: Tensor or list of Tensors
        The created tensor or tuple of tensors containing multiple outputs.

    Example
    -------
    .. code-block:: python

        # The following code is equivalent to numpy.cumsum
        m = te.var("m")
        n = te.var("n")
        X = te.placeholder((m, n), name="X")
        s_state = te.placeholder((m, n))
        s_init = te.compute((1, n), lambda _, i: X[0, i])
        s_update = te.compute((m, n), lambda t, i: s_state[t-1, i] + X[t, i])
        res = tvm.te.scan(s_init, s_update, s_state, X)
    """
    if _tag.TagScope.get_current() is not None:
        if tag != "":
            raise ValueError("nested tag is not allowed for now")
        tag = _tag.TagScope.get_current().tag
    if isinstance(init, _tensor.Tensor):
        init = [init]
    if isinstance(update, _tensor.Tensor):
        update = [update]
    if isinstance(state_placeholder, _tensor.Tensor):
        state_placeholder = [state_placeholder]
    if isinstance(inputs, _tensor.Tensor):
        inputs = [inputs]
    if inputs is None:
        inputs = []
    if len(init) != len(update) or len(init) != len(state_placeholder):
        raise ValueError("init, update, state_placeholder must have same length")
    axis = tvm.tir.IterVar((init[0].shape[0], update[0].shape[0]), f"{name}.idx", 3)
    op = _ffi_api.ScanOp(name, tag, attrs, axis, init, update, state_placeholder, inputs)
    res = [op.output(i) for i in range(len(update))]
    return res[0] if len(res) == 1 else res


def extern(
    shape,
    inputs,
    fcompute,
    name="extern",
    dtype=None,
    in_buffers=None,
    out_buffers=None,
    tag="",
    attrs=None,
):
    """Compute several tensors via an extern function.

    Parameters
    ----------
    shape: tuple or list of tuples.
        The shape of the outputs.

    inputs: list of Tensor
        The inputs

    fcompute: lambda function of inputs, outputs-> stmt
        Specifies the IR statement to do the computation.
        See the following note for the function signature of fcompute.

        .. note::
            **Parameters**

            - **ins** (list of :any:`tvm.tir.Buffer`) - Placeholder for each input
            - **outs** (list of :any:`tvm.tir.Buffer`) - Placeholder for each output

            **Returns**

            - **stmt** (:any:`tvm.tir.Stmt`) - The statement that carries out array computation.
name: str, optional The name hint of the tensor dtype: str or list of str, optional The data types of outputs, by default dtype will be same as inputs. in_buffers: tvm.tir.Buffer or list of tvm.tir.Buffer, optional Input buffers. out_buffers: tvm.tir.Buffer or list of tvm.tir.Buffer, optional Output buffers. tag: str, optional Additonal tag information about the compute. attrs: dict, optional The additional auxiliary attributes about the compute. Returns ------- tensor: Tensor or list of Tensors The created tensor or tuple of tensors contains multiple outputs. Example ------- In the code below, C is generated by calling external PackedFunc `tvm.contrib.cblas.matmul` .. code-block:: python A = te.placeholder((n, l), name="A") B = te.placeholder((l, m), name="B") C = te.extern((n, m), [A, B], lambda ins, outs: tvm.tir.call_packed( "tvm.contrib.cblas.matmul", ins[0], ins[1], outs[0], 0, 0), name="C") """ if _tag.TagScope.get_current() is not None: if tag != "": raise ValueError("nested tag is not allowed for now") tag = _tag.TagScope.get_current().tag shape = (shape,) if isinstance(shape, (tvm.tir.PrimExpr, _Integral)) else shape if shape == () or isinstance(shape[0], (tvm.tir.PrimExpr, _Integral)): shape = [shape] if in_buffers is not None: in_buffers = [in_buffers] if not isinstance(in_buffers, list) else in_buffers if len(inputs) != len(in_buffers): raise RuntimeError( "Number of inputs and in_buffers mismatch: %d vs %d." % (len(inputs), len(in_buffers)) ) if out_buffers is not None: out_buffers = [out_buffers] if not isinstance(out_buffers, list) else out_buffers if len(shape) != len(out_buffers): raise RuntimeError( "Number of outputs and out_buffers mismatch: %d vs %d." 
% (len(shape), len(out_buffers)) ) input_placeholders = in_buffers or [] output_placeholders = out_buffers or [] types = set() for t in inputs: if not isinstance(t, _tensor.Tensor): raise ValueError("expect inputs to be tensor") if in_buffers is None: input_placeholders.append( tvm.tir.decl_buffer( t.shape, t.dtype, t.op.name, elem_offset=tvm.tir.Var("elem_offset", "int32") ) ) types.add(t.dtype) if dtype is None: if len(types) != 1: raise ValueError("Cannot infer output type, please provide dtype argument") infered_type = types.pop() dtype = [infered_type for _ in shape] if isinstance(dtype, str): dtype = [dtype] if out_buffers is None: for shp, dt in zip(shape, dtype): output_placeholders.append( tvm.tir.decl_buffer(shp, dt, name, elem_offset=tvm.tir.Var("elem_offset", "int32")) ) body = fcompute(input_placeholders, output_placeholders) if isinstance(body, tvm.tir.PrimExpr): body = tvm.tir.Evaluate(body) if not isinstance(body, tvm.tir.Stmt): raise ValueError( f"Function '{fcompute.__name__}' should return PrimExpr or Stmt, but it returned " f"'{type(body)}'" ) op = _ffi_api.ExternOp(name, tag, attrs, inputs, input_placeholders, output_placeholders, body) res = [op.output(i) for i in range(len(output_placeholders))] return res[0] if len(res) == 1 else res def extern_primfunc(input_tensors: List[_tensor.Tensor], primfunc: tvm.tir.PrimFunc, **kwargs): """Compute tensors via a schedulable TIR PrimFunc Parameters ---------- input_tensors: list of Tensor Input tensors that map to the corresponding primfunc input params. primfunc: PrimFunc The TIR PrimFunc Returns ------- tensor: Tensor or list of Tensors The created tensor or tuple of tensors if it contains multiple outputs. Example ------- In the code below, a TVMScript defined TIR PrimFunc is inlined into a TE ExternOp. Applying te.create_prim_func on this .. 
code-block:: python

        A = te.placeholder((128, 128), name="A")
        B = te.placeholder((128, 128), name="B")

        @T.prim_func
        def before_split(a: T.handle, b: T.handle) -> None:
            A = T.match_buffer(a, (128, 128))
            B = T.match_buffer(b, (128, 128))
            for i, j in T.grid(128, 128):
                with T.block("B"):
                    vi, vj = T.axis.remap("SS", [i, j])
                    B[vi, vj] = A[vi, vj] * 2.0

        C = te.extern_primfunc([A, B], before_split)
    """
    # dt_access_map and primfunc.buffer_map are unordered, so use order from primfunc.params
    dt_access_map = tvm.arith._ffi_api.DomainTouchedAccessMap(primfunc)
    ordered_buffers = [primfunc.buffer_map[param] for param in primfunc.params]
    in_buffers = [buf for buf in ordered_buffers if len(dt_access_map[buf][0])]
    out_buffers = [buf for buf in ordered_buffers if len(dt_access_map[buf][1])]
    assert in_buffers, "PrimFunc has no input buffers"
    assert out_buffers, "PrimFunc has no output buffers"

    outputs = []
    inplace = []
    input_buffers = in_buffers
    for obuf in out_buffers:
        if obuf in in_buffers:
            inplace.append(obuf)
        else:
            outputs.append(obuf)

    if not outputs:
        iobuf = inplace.pop()
        input_buffers.remove(iobuf)
        outputs = [iobuf]

    assert len(input_buffers) == len(input_tensors), (
        "The number of provided input tensors does not match the number of "
        "input buffers in the primfunc"
    )
    for tensor, buffer in zip(input_tensors, input_buffers):
        # TODO(csullivan): Can a stronger comparison between Tensor<>Buffer be made?
        assert len(tensor.shape) == len(buffer.shape)
        for d1, d2 in zip(tensor.shape, buffer.shape):
            assert d1 == d2, (
                "The input tensors provided do not match the input buffers in the ",
                "primfunc.
Please check that the order of input te.Input_Tensors and the ", "order of the primfunc variables in the params list agree.", ) output = extern( [buf.shape for buf in outputs], input_tensors, lambda ins, outs: primfunc.body, in_buffers=input_buffers, out_buffers=outputs, **kwargs, ) return output def var(name="tindex", dtype="int32", span=None): """Create a new variable with specified name and dtype Parameters ---------- name : str The name dtype : str The data type span : Optional[Span] The location of this variable in the source. Returns ------- var : Var The result symbolic variable. """ return tvm.tir.Var(name, dtype, span) def const(value, dtype="int32", span=None): """Create a new constant with specified value and dtype Parameters ---------- value : Union[bool, int, float, numpy.ndarray, tvm.nd.NDArray] The constant value. dtype : str The data type span : Optional[Span] The location of this variable in the source. Returns ------- const : PrimExpr The result constant expr. """ return tvm.tir.const(value, dtype, span) def size_var(name="size", dtype="int32", span=None): """Create a new variable represents a tensor shape size, which is non-negative. Parameters ---------- name : str The name dtype : str The data type span : Optional[Span] The location of this variable in the source. Returns ------- var : SizeVar The result symbolic shape variable. """ return tvm.tir.SizeVar(name, dtype, span) def thread_axis(dom=None, tag="", name="", span=None): """Create a new IterVar to represent thread index. Parameters ---------- dom : Range or str The domain of iteration When str is passed, dom is set to None and str is used as tag tag : str, optional The thread tag name : str, optional The name of the var. span : Optional[Span] The location of this variable in the source. Returns ------- axis : IterVar The thread itervar. 
""" if isinstance(dom, string_types): tag, dom = dom, None if not tag: raise ValueError("tag must be given as Positional or keyword argument") name = name if name else tag return tvm.tir.IterVar(dom, name, 1, tag, span) def reduce_axis(dom, name="rv", thread_tag="", span=None): """Create a new IterVar for reduction. Parameters ---------- dom : Range The domain of iteration. name : str The name of the variable. thread_tag : Optional[str] The name of the thread_tag. span : Optional[Span] The location of this variable in the source. Returns ------- axis : IterVar An iteration variable representing the value. """ return tvm.tir.IterVar(dom, name, 2, thread_tag, span) def create_prim_func( ops: List[_tensor.Tensor], index_dtype_override: Optional[str] = None ) -> tvm.tir.PrimFunc: """Create a TensorIR PrimFunc from tensor expression Parameters ---------- ops : List[Tensor] The source expression. Example ------- We define a matmul kernel using following code: .. code-block:: python import tvm from tvm import te from tvm.te import create_prim_func import tvm.script A = te.placeholder((128, 128), name="A") B = te.placeholder((128, 128), name="B") k = te.reduce_axis((0, 128), "k") C = te.compute((128, 128), lambda x, y: te.sum(A[x, k] * B[y, k], axis=k), name="C") func = create_prim_func([A, B, C]) print(func.script()) If we want to use TensorIR schedule to do transformations on such kernel, we need to use `create_prim_func([A, B, C])` to create a schedulable PrimFunc. The generated function looks like: .. code-block:: python @T.prim_func def tir_matmul(a: T.handle, b: T.handle, c: T.handle) -> None: A = T.match_buffer(a, (128, 128)) B = T.match_buffer(b, (128, 128)) C = T.match_buffer(c, (128, 128)) for i, j, k in T.grid(128, 128, 128): with T.block(): vi, vj, vk = T.axis.remap("SSR", [i, j, k]) with T.init(): C[vi, vj] = 0.0 C[vi, vj] += A[vi, vk] * B[vj, vk] Returns ------- func : tir.PrimFunc The created function. 
""" if not isinstance(ops, (list, tuple, Array)): ops = [ops] return _ffi_api.CreatePrimFunc(ops, index_dtype_override)
# File: tvm-main/python/tvm/te/tensor.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Tensor class for computation declaration.""" # pylint: disable=invalid-name import tvm._ffi from tvm.runtime import Object, ObjectGeneric, convert_to_object from tvm.tir import expr as _expr, DataProducer from . import _ffi_api class TensorSlice(ObjectGeneric, _expr.ExprOp): """Auxiliary data structure for enable slicing syntax from tensor.""" def __init__(self, tensor, indices): if not isinstance(indices, tuple): indices = (indices,) self.tensor = tensor self.indices = indices def __getitem__(self, indices): if not isinstance(indices, tuple): indices = (indices,) return TensorSlice(self.tensor, self.indices + indices) def asobject(self): """Convert slice to object.""" return self.tensor.__call__(*self.indices) @property def dtype(self): """Data content of the tensor.""" return self.tensor.dtype @tvm._ffi.register_object class TensorIntrinCall(Object): """Intermediate structure for calling a tensor intrinsic.""" @tvm._ffi.register_object class Tensor(DataProducer, _expr.ExprOp): """Tensor object, to construct, see function.Tensor""" def __call__(self, *indices): ndim = self.ndim if len(indices) != ndim: raise ValueError( f"Need to provide {ndim} index in tensor but {len(indices)} was provided" ) 
indices = convert_to_object(indices) args = [] for x in indices: if isinstance(x, _expr.PrimExpr): args.append(x) elif isinstance(x, _expr.IterVar): args.append(x.var) else: raise ValueError("The indices must be expression") return _expr.ProducerLoad(self, args) def __getitem__(self, indices): return TensorSlice(self, indices) def __hash__(self): return _ffi_api.TensorHash(self) def __eq__(self, other): if not isinstance(other, Tensor): if isinstance(other, _expr.ExprOp): return _expr.EqualOp(self, other) return False if self.ndim == 0 and other.ndim == 0: raise ValueError( "Equal == comparison among rank-0 tensor is ambiguous, " "use Tensor.equal for content expression equvalence, " "use Tensor.same_as for exact reference comparison" ) return _ffi_api.TensorEqual(self, other) @property def ndim(self): """Dimension of the tensor.""" return len(self.shape) @property def axis(self): """Axis of the tensor.""" return self.__getattr__("axis") @property def op(self): """The corressponding :py:class:`Operation`.""" return self.__getattr__("op") @property def value_index(self): """The output value index the tensor corresponds to.""" return self.__getattr__("value_index") @property def shape(self): """The output shape of the tensor.""" return self.__getattr__("shape") @property def name(self): op = self.op if op.num_outputs == 1: return op.name return f"{op.name}.v{self.value_index}" class Operation(Object): """Represent an operation that generates a tensor""" def output(self, index): """Get the index-th output of the operation Parameters ---------- index : int The index size. Returns ------- out : Tensor The i-th output. 
""" return _ffi_api.OpGetOutput(self, index) @property def num_outputs(self): """Number of outputs from this op.""" return _ffi_api.OpNumOutputs(self) @property def input_tensors(self): """List of input tensors to this op.""" return _ffi_api.OpInputTensors(self) @tvm._ffi.register_object class PlaceholderOp(Operation): """Placeholder operation.""" @tvm._ffi.register_object class BaseComputeOp(Operation): """Compute operation.""" @property def axis(self): """Represent the IterVar axis, defined when it is a ComputeOp""" return self.__getattr__("axis") @property def reduce_axis(self): """Represent axis of reductions, only defined when it is a ComputeOp""" return self.__getattr__("reduce_axis") @tvm._ffi.register_object class ComputeOp(BaseComputeOp): """Scalar operation.""" @tvm._ffi.register_object class TensorComputeOp(BaseComputeOp): """Tensor operation.""" @tvm._ffi.register_object class ScanOp(Operation): """Scan operation.""" @property def scan_axis(self): """Represent the scan axis, only defined when it is a ScanOp""" return self.__getattr__("scan_axis") @tvm._ffi.register_object class ExternOp(Operation): """External operation.""" @tvm._ffi.register_object class HybridOp(Operation): """Hybrid operation.""" @property def axis(self): """Represent the IterVar axis, also defined when it is a HybridOp""" return self.__getattr__("axis")
# File: tvm-main/python/tvm/te/hybrid/parser.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Hybrid Script Parser""" import ast import operator import logging import sys import numbers from enum import Enum from tvm.ir import Array, Range import tvm.runtime import tvm.tir import tvm.te import tvm.te._ffi_api import tvm.arith from tvm.tir import expr as _expr from tvm.tir import stmt as _stmt from tvm.te.tensor import Tensor, Operation from tvm.tir import all as _all from tvm.tir import any as _any from .utils import _internal_assert from . import calls from . 
import utils from .preprocessor import determine_variable_usage def concat_list_to_block(lst): """Concatenate a list of Python IR nodes to HalideIR Block""" if not lst: return utils.make_nop() n = len(lst) if n == 1: return lst[0] return _stmt.SeqStmt(lst) def visit_list_to_block(visit, lst): """Visit and concatenate a list of Python IR nodes to HalideIR Block""" lst = [visit(stmt) for stmt in lst if not utils.is_docstring(stmt)] lst = [stmt for stmt in lst if not tvm.ir.structural_equal(stmt, utils.make_nop())] if not lst: return utils.make_nop() return concat_list_to_block(lst) class Symbol(Enum): """Enumerates types in the symbol table""" Callable = 0 Input = 1 OutputBuffer = 2 GlobalBuffer = 3 LocalBuffer = 4 SharedBuffer = 5 ConstVar = 6 BufferVar = 7 LoopVar = 8 ConstLoopVar = 9 ThreadBind = 10 def _floordiv(x, y): if isinstance(x, _expr.ExprOp) or isinstance(y, _expr.ExprOp): return tvm.tir.floordiv(x, y) return operator.floordiv(x, y) def _floormod(x, y): if isinstance(x, _expr.ExprOp) or isinstance(y, _expr.ExprOp): return tvm.tir.floormod(x, y) return operator.mod(x, y) class HybridParser(ast.NodeVisitor): """Python AST visitor pass which finally lowers it to HalideIR""" _binop_maker = { ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.div if sys.version_info[0] == 2 else operator.truediv, ast.FloorDiv: _floordiv, ast.Mod: _floormod, ast.BitOr: operator.or_, ast.BitAnd: operator.and_, ast.BitXor: operator.xor, ast.Gt: operator.gt, ast.GtE: operator.ge, ast.Lt: operator.lt, ast.LtE: operator.le, ast.Eq: operator.eq, ast.NotEq: operator.ne, ast.And: _all, ast.Or: _any, } _unaryop_maker = {ast.USub: operator.neg, ast.Invert: operator.invert, ast.Not: operator.not_} def __init__(self, args, usage, symbols, closure_vars, func_name=None): """ Parameters ---------- args: A list of tvm.te.placeholder or te.var Provided by the user, the argument list of the function to be lowered. 
usage: A dict of variables used in last in this function Provided by last lower pass, which collects this information symbols : list of str The symbol list of the global context of the function. closure_vars: dict A dict of external name reference captured by this function. Returns ------- func_name: str The name of the function to be lowered; if not provided, the compiler will use the name in the AST """ self.args = list(args) self.usage = usage.copy() self.symbols = {} # Symbol table for k, v in symbols.items(): if callable(v): self.add_symbol(k, Symbol.Callable, v) self.closure_vars = closure_vars self.binds = {} # Thread binds self.device = 0 # Is it generating device self.func_name = func_name # The name of the function to be lowered self.outputs = [] # Output tensors' name self.side_effect = set() # Tensors with side effects self.parsed_body = None # The parsed HalideIR body self.analyzer = tvm.arith.Analyzer() self.returned = False # If this function has a valid return def add_symbol(self, key, ty, val): # pylint: disable=invalid-name """Add value to the symbol table context""" if key in self.symbols.keys(): old = str(self.symbols[key]) new = str((ty, val)) _internal_assert(False, f"Name conflict in symbol table! 
[{key}] {old} -> {new}") self.symbols[key] = ty, val if ty == Symbol.ThreadBind: if val.var.name not in self.binds.keys(): self.binds[val.var.name] = val return val_ = self.binds[val.var.name] _internal_assert( tvm.tir.analysis.expr_deep_equal(val_.dom.extent, val.dom.extent), "Thread extents should be uniform!", ) self.symbols[key] = ty, val_ def wrap_up_realize(self, node, body): """Wrap up all the variables which will no longer be used""" to_pop = [] for key, val in self.usage.items(): _, level, _ = val if key not in self.symbols: # don't realize the symbols that are never visited continue if level != node: continue _internal_assert(key in self.symbols.keys(), f"Unknown symbol {key}!") ty, entry = self.symbols[key] # pylint: disable=invalid-name if ty in [Symbol.Input, Symbol.OutputBuffer]: continue if "Buffer" in ty.name: _buf = entry _scope = "global" if ty is Symbol.BufferVar else ty.name[:-6].lower() to_pop.append(key) else: continue if _scope == "global": body = self.wrap_up_binds(body) _domain = [Range.from_min_extent(0, i) for i in _buf.shape] _dtype = _buf.dtype _true = tvm.runtime.convert(True) body = tvm.tir.ProducerRealize(_buf, _domain, _true, body, tvm.runtime.convert(_scope)) for elem in to_pop: self.symbols.pop(elem) return body def wrap_up_binds(self, body): for _, iter_var in self.binds.items(): ext = iter_var.dom.extent body = tvm.tir.AttrStmt(iter_var, "thread_extent", ext, body) self.binds = {} return body # pylint: disable=invalid-name, missing-docstring def visit_Module(self, node): _internal_assert( len(node.body) == 1, "Only one-function source code will be fed to this parser!" 
)
        return self.visit(node.body[0])

    def visit_FunctionDef(self, node):
        _internal_assert(
            len(node.args.args) == len(self.args),
            "The number of arguments passed to the function "
            "should be the same as it is defined!",
        )
        if self.func_name is None:
            self.func_name = node.name
        for idx, arg in enumerate(node.args.args):
            _attr = "id" if sys.version_info[0] < 3 else "arg"  # To make py2 and 3 compatible
            self.add_symbol(getattr(arg, _attr), Symbol.Input, self.args[idx])
        res = visit_list_to_block(self.visit, node.body)
        res = self.wrap_up_realize(node, res)
        return self.wrap_up_binds(res)

    def visit_Expr(self, node):
        return self.visit(node.value)

    def visit_Name(self, node):
        name = node.id
        if sys.version_info[0] == 2 and name in ["True", "False"]:
            return tvm.runtime.convert(ast.literal_eval(name))
        if name in self.closure_vars:
            return tvm.runtime.convert(self.closure_vars[name])
        # Check membership before the lookup, so an unknown name fails with a
        # clear message instead of a raw KeyError.
        _internal_assert(name in self.symbols, f"Unknown symbol {name}!")
        ty, entry = self.symbols[name]
        if ty in [Symbol.LoopVar, Symbol.Input, Symbol.ConstLoopVar]:
            return entry
        if ty is Symbol.ThreadBind:
            return entry.var
        if ty is Symbol.ConstVar:
            return entry if isinstance(node.ctx, ast.Load) else None
        if ty is Symbol.BufferVar:
            if isinstance(node.ctx, ast.Load):
                return tvm.tir.ProducerLoad(entry, [tvm.runtime.const(0, "int32")])
            return entry, [tvm.runtime.const(0, "int32")]
        # Do I need any assertion here?
return entry def visit_Num(self, node): if isinstance(node.n, numbers.Integral): dtype = "int32" elif isinstance(node.n, float): dtype = "float32" else: _internal_assert( isinstance(node.n, bool), "The data type should be one of (int, float, bool)" ) dtype = "bool" return tvm.runtime.const(node.n, dtype) def visit_NameConstant(self, node): return tvm.runtime.convert(node.value) def visit_AugAssign(self, node): buf = self.visit(node.target) rhs = self.visit(node.value) if isinstance(buf, tuple): _internal_assert(len(buf) == 2, "LHS is supposed to be (buf, args)!") buf, args = buf else: args = [tvm.runtime.const(0, "int32")] _internal_assert(isinstance(buf, Tensor), "LHS is supposed to be Tensor!") read = tvm.tir.ProducerLoad(buf, args) value = HybridParser._binop_maker[type(node.op)](read, rhs) return tvm.tir.ProducerStore(buf, value, args) def visit_Assign(self, node): rhs = self.visit(node.value) if isinstance(rhs, Operation): rmap = {} _internal_assert( len(node.targets) == rhs.num_outputs, "Unable to detuple the outs to targets" ) for i in range(rhs.num_outputs): _internal_assert( isinstance(node.targets[i], ast.Name), "You should bind a pure name to the tensors", ) self.add_symbol(node.targets[i].id, Symbol.GlobalBuffer, rhs.output(i)) rmap[rhs.outputs[i].op] = rhs.output(i) return utils.replace_io(rhs.body, rmap) _internal_assert(len(node.targets) == 1, "So far only one-valued assignment is supported!") lhs = node.targets[0] if isinstance(rhs, _expr.PrimExpr): rhs = self.analyzer.simplify(rhs) if isinstance(lhs, ast.Name): # TODO: support defined intermediate buffer later lhs_ = lhs lhs = lhs.id if lhs in self.symbols.keys(): ty, _ = self.symbols[lhs] _internal_assert(ty != Symbol.LoopVar, "Loop variable cannot be overwritten!") decl, _, rw = self.usage[lhs] if decl == lhs_: _internal_assert( lhs not in self.symbols.keys(), "This value should not be defined before this point!", ) if isinstance(rhs, tuple): shape, dtype, scope = rhs ph = 
tvm.te.placeholder(shape, dtype=dtype, name=lhs) self.add_symbol(lhs, getattr(Symbol, scope.title() + "Buffer"), ph) if scope == "output": self.outputs.append(lhs) return utils.make_nop() if isinstance(rhs, utils.halide_imm_types) and ast.Store not in rw: self.add_symbol(lhs, Symbol.ConstVar, rhs) else: _internal_assert( self.device == 0, "Single variable not supported in devices' side!\n" + "If you are using GPU, please allocate a 'local' spad " + "outside the bind body", ) ph = tvm.te.placeholder((1,), dtype=rhs.dtype, name=lhs) self.add_symbol(lhs, Symbol.BufferVar, ph) lhs = self.visit(lhs_) if lhs is not None: buf, args = lhs return tvm.tir.ProducerStore(buf, rhs, args) return utils.make_nop() lhs, args = self.visit(lhs) _internal_assert( isinstance(lhs, Tensor), "An array access's LHS is expected to be a expr.Call!" ) res = tvm.tir.ProducerStore(lhs, rhs, args) return res def visit_Index(self, node): if isinstance(node.value, ast.Tuple): return self.visit(node.value) return [self.visit(node.value)] def visit_Attribute(self, node): buf = self.visit(node.value) return getattr(buf, node.attr) def visit_Subscript(self, node): args = self.visit(node.slice) if sys.version_info >= (3, 9): if not isinstance(node.slice, ast.Tuple): args = [args] arr = self.visit(node.value) if isinstance(arr, Array): for i in args: if isinstance(i, numbers.Integral): arr = arr[i] else: _internal_assert( isinstance(i, (_expr.IntImm,)), "All indices are supposed to be constants" ) arr = arr[i.value] return arr if isinstance(node.ctx, ast.Load): return tvm.tir.ProducerLoad(arr, args) return arr, args def visit_With(self, node): if sys.version_info[0] < 3: context = node.context_expr option = node.optional_vars else: _internal_assert(len(node.items) == 1, "Only one with element is supported so far!") context = node.items[0].context_expr option = node.items[0].optional_vars _internal_assert(isinstance(context, ast.Call), "The object must be a Python func call!") 
_internal_assert(isinstance(option, ast.Name), "The object after 'as' must be an id!") self.annotation[option.id] = context.func.id return visit_list_to_block(self.visit, node.body) def visit_If(self, node): cond = self.analyzer.simplify(self.visit(node.test)) # Return no IfThenElse if proven if isinstance(cond, _expr.IntImm): if cond.value: return visit_list_to_block(self.visit, node.body) if node.orelse: return visit_list_to_block(self.visit, node.orelse) return utils.make_nop() if_body = visit_list_to_block(self.visit, node.body) if node.orelse: else_body = visit_list_to_block(self.visit, node.orelse) else: else_body = None return tvm.tir.IfThenElse(cond, if_body, else_body) def visit_IfExp(self, node): cond = self.visit(node.test) if_body = self.visit(node.body) else_body = self.visit(node.orelse) return tvm.tir.Select(cond, if_body, else_body) def visit_Compare(self, node): _internal_assert(len(node.ops) == len(node.comparators), "#compare ops != #comparators") ops = [self.visit(node.left)] ops += [self.visit(i) for i in node.comparators] res = [] for i in range(len(node.ops)): lhs = ops[i] rhs = ops[i + 1] res.append(HybridParser._binop_maker[type(node.ops[i])](lhs, rhs)) return _all(*res) def visit_BoolOp(self, node): n = len(node.values) if n == 1: _internal_assert(isinstance(node.op, ast.Not), "Unary is supposed to be not!") return operator.not_(self.visit(node.values[0])) _internal_assert(isinstance(node.op, (ast.And, ast.Or)), "Binary is supposed to be and/or!") values = [self.visit(i) for i in node.values] return HybridParser._binop_maker[type(node.op)](*values) def visit_UnaryOp(self, node): operand = self.visit(node.operand) return HybridParser._unaryop_maker[type(node.op)](operand) def visit_BinOp(self, node): lhs = self.visit(node.left) rhs = self.visit(node.right) return HybridParser._binop_maker[type(node.op)](lhs, rhs) def visit_Call(self, node): # Yet, no function pointer supported _internal_assert( isinstance(node.func, ast.Name), "Only 
id-function function call is supported so far!" ) func_id = node.func.id args = [self.visit(i) for i in node.args] # Intrinsics' if hasattr(calls, func_id): return getattr(calls, func_id)(func_id, args) # Contexts' _internal_assert( func_id in self.symbols.keys(), f"The function called ({func_id}) is not in the context either!", ) ty, entry = self.symbols[func_id] _internal_assert(ty is Symbol.Callable, "Are you sure what you call is a function?!") outs = entry(*args) op = outs.op if isinstance(outs, Tensor) else outs[0].op return op def visit_For(self, node): iter_var, low, ext, kind = self.visit(node.iter) _internal_assert( isinstance(node.target, ast.Name), "The loop iterator should be a variable!" ) _name = node.target.id if isinstance(kind, tuple): low = self.analyzer.simplify(low) ext = self.analyzer.simplify(ext) _internal_assert( isinstance(low, _expr.ConstExpr) and isinstance(ext, _expr.ConstExpr), "Const range should start from a const " + "and iterate const times", ) low, ext = low.value, ext.value if ext > 114514: logging.log( logging.CRITICAL, "[Warning] Are you sure to unroll a large loop in Python?" 
) bodies = [] for i in range(low, low + ext): self.add_symbol(_name, Symbol.ConstLoopVar, i) body = visit_list_to_block(self.visit, node.body) body = self.wrap_up_realize(node, body) bodies.append(body) self.symbols.pop(_name) return concat_list_to_block(bodies) if iter_var is None: _internal_assert(kind is not None, "The loop iterating function parse error!") if isinstance(ext, _expr.PrimExpr): dtype = ext.dtype elif isinstance(ext, int): dtype = "int32" else: raise NotImplementedError(f"Unsupported type of ext: {type(ext)}") offset = iter_var = tvm.te.var(_name, dtype=dtype) if not tvm.tir.analysis.expr_deep_equal(low, tvm.runtime.const(0, "int32")): offset = iter_var + low self.add_symbol(_name, Symbol.LoopVar, offset) _body = visit_list_to_block(self.visit, node.body) else: _internal_assert(kind is None, "The loop bind function parse error!") self.add_symbol(_name, Symbol.ThreadBind, iter_var) self.device += 1 _body = visit_list_to_block(self.visit, node.body) self.device -= 1 _body = self.wrap_up_realize(node, _body) if kind is None: res = _body else: _internal_assert( not isinstance(kind, tuple), "Micro expansion should be handled before!" ) res = tvm.tir.For(iter_var, tvm.runtime.const(0, "int32"), ext, kind, _body) self.symbols.pop(_name) return res def visit_Return(self, node): _internal_assert( all(ty != Symbol.LoopVar for ty, _ in self.symbols.values()), "Return should not be in a loop body!", ) ids = [] if isinstance(node.value, ast.Name): ids = [node.value.id] else: _internal_assert( isinstance(node.value, ast.Tuple), "You should return either a single tensor or a tuple", ) _internal_assert( all(isinstance(i, ast.Name) for i in node.value.elts), "What do you return?" 
) ids = [i.id for i in node.value.elts] _internal_assert(len(set(ids)) == len(ids), "Duplicated tensors in the return tuples") if len(ids) < len(self.outputs): logging.log(logging.CRITICAL, "[Warning] Not all the output buffers returned!") self.outputs = [self.symbols[i][1] for i in ids] self.returned = True return utils.make_nop() def visit_Tuple(self, node): return tuple(self.visit(i) for i in node.elts) def visit_Str(self, node): return node.s def visit_Assert(self, node): test = self.visit(node.test) mesg = tvm.runtime.convert(self.visit(node.msg)) return tvm.tir.AssertStmt(test, mesg, utils.make_nop()) def parse_python(src, args, symbols, closure_vars): """The helper function of calling the AST visitor Parameters ---------- src : ast.node or str If an ast.node, then directly lower it. If a str, then parse it to ast and lower it. args : list of Tensors or Vars The argument lists to the function. It is NOT encouraged to write a function without arguments. It is NOT encouraged to write a function with side effect. symbols : list of str The symbol list of the global context of the function. closure_vars: dict A dict of external name reference captured by this function. Returns ------- root : Stmt The result Halide IR and the parser class instance. """ root = ast.parse(src) if isinstance(src, str) else src _internal_assert(root, ast.AST) var_usage = determine_variable_usage(root, args, symbols, closure_vars) parser = HybridParser(args, var_usage, symbols, closure_vars) parser.parsed_body = parser.visit(root) _internal_assert(parser.returned, "No valid return found in the function body!") return parser def source_to_op(src, args, symbols, closure_vars): """Another level of wrapper Parameters ---------- src : ast.node or str If an ast.node, then directly lower it. If a str, then parse it to ast and lower it. args : list of Tensors or Vars The argument lists to the function. It is NOT encouraged to write a function without arguments. 
It is NOT encouraged to write a function with side effect. symbols : list of str The symbol list of the global context of the function. closure_vars: dict A dict of external name reference captured by this function. Returns ------- res : list of output tensors The result of output tensors of the formed OpNode. """ parser = parse_python(src, args, symbols, closure_vars) input_tensors = [] def get_input_tensors(arg): if isinstance(arg, Tensor): input_tensors.append(arg) elif isinstance(arg, Array): for i in arg: get_input_tensors(i) for i in args: get_input_tensors(i) op = tvm.te._ffi_api.HybridOp( parser.func_name, "HybridOp", None, input_tensors, parser.outputs, parser.parsed_body ) res = [op.output(i) for i in range(len(parser.outputs))] return res[0] if len(res) == 1 else res
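The symbol-table discipline that `HybridParser.add_symbol` enforces above (a name may be bound only once per scope, and lookups of unknown names fail loudly) can be sketched without TVM. This `SymbolTable` is a hypothetical, minimal stand-in, not the parser's actual bookkeeping.

```python
class SymbolTable:
    """Minimal sketch of HybridParser's symbol bookkeeping (illustrative only)."""

    def __init__(self):
        self.symbols = {}

    def add(self, key, ty, val):
        # Mirrors add_symbol's name-conflict assertion: rebinding is an error.
        if key in self.symbols:
            raise ValueError(f"Name conflict in symbol table! [{key}]")
        self.symbols[key] = (ty, val)

    def lookup(self, key):
        # Mirrors the "Unknown symbol" check in visit_Name.
        if key not in self.symbols:
            raise KeyError(f"Unknown symbol {key}!")
        return self.symbols[key]
```

The real parser additionally special-cases `ThreadBind` symbols so that repeated binds to the same thread axis reuse one `IterVar` (with equal extents), rather than erroring.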
# File: tvm-main/python/tvm/te/hybrid/calls.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Intrinsics of TVM-Python Hybrid Script for Python compilation time semantic support."""

from tvm.runtime import const, convert
import tvm.te
from tvm.ir.container import Array
from tvm.target import Target
from tvm.tir import expr as _expr
from tvm.tir import call_intrin
from tvm.tir.stmt import ForKind

from .utils import _internal_assert

# pylint: disable=redefined-builtin,invalid-name

LOOP_INTRIN = {
    "range": ForKind.SERIAL,
    "unroll": ForKind.UNROLLED,
    "parallel": ForKind.PARALLEL,
    "vectorize": ForKind.VECTORIZED,
    "const_range": (ForKind.UNROLLED,),
}


def _range(annotation, args):
    """Handling TVM loop types"""
    n = args.__len__()
    if n == 1:
        low, ext = const(0, dtype="int32"), args[0]
    else:
        _internal_assert(n == 2, "A loop intrinsic should only have 1 or 2 arguments!")
        low, ext = args[0], args[1]
    if not tvm.tir.analysis.expr_deep_equal(low, const(0, dtype="int32")):
        ext = ext - low
    kind = LOOP_INTRIN[annotation]
    iter_var = None
    return iter_var, low, ext, kind


range = unroll = vectorize = parallel = const_range = _range  # pylint: disable=invalid-name


def bind(func_id, args):
    """Handling TVM thread binding"""
    _internal_assert(func_id == "bind", "This function cannot be directly invoked!")
    _internal_assert(args.__len__() == 2, "A loop bind should only have 2 arguments!")
    _internal_assert(isinstance(args[0], str), "A loop bind's first argument should be a string!")

    low, ext = const(0, "int32"), args[1]
    iter_var = tvm.te.thread_axis((low, ext), args[0])
    kind = None
    return iter_var, low, ext, kind


def _math_intrin(func_id, args):
    # pylint: disable=import-outside-toplevel
    from tvm.tir import op

    return getattr(op, func_id)(*args)


sqrt = log = exp = tanh = sigmoid = power = popcount = round = _math_intrin  # pylint: disable=invalid-name


def _min_max(func_id, args):
    _internal_assert(args.__len__() == 2, "Max/Min function should have 2 elements")
    return getattr(_expr, func_id.title())(args[0], args[1])


min = max = _min_max  # pylint: disable=invalid-name


def _allocate_tensor(func_id, args):
    """Handling TVM tensor allocation.
    You may refer to hybrid.intrin.allocate for more details."""
    n = args.__len__()
    _internal_assert(
        isinstance(convert(args[0]), Array), "allocate's first argument should be a tuple of shape!"
    )
    shape = args[0]
    for i in shape:
        _internal_assert(isinstance(i, _expr.PrimExpr), "The shape should be an expression")
    if n > 1:
        _internal_assert(isinstance(args[1], str), "The data type should be a str")
        _internal_assert(
            args[1].startswith("int") or args[1].startswith("float"),
            "The data type should be either int or float!",
        )
        dtype = args[1]
    else:
        dtype = "float32"
    if n > 2:
        _internal_assert(isinstance(args[2], str), "The data scope should be a string")
        _internal_assert(func_id != "output_tensor", "Output tensor cannot specify scope")
        scope = args[2]
    else:
        scope = "global" if func_id != "output_tensor" else "output"
    return (shape, dtype, scope)


output_tensor = allocate = _allocate_tensor  # pylint: disable=invalid-name


def len(func_id, args):
    """Interpret the len function"""
    _internal_assert(args.__len__() == 1, "Only 1 argument is expected!")
    _internal_assert(func_id == "len", "This function cannot be directly invoked!")
    try:
        return convert(args[0].__len__())
    except:  # pylint: disable=bare-except
        _internal_assert(args[0].shape.__len__() == 1, "Only one-dimension array can get len")
        return convert(args[0].shape[0])


def _cast(func_id, args):
    _internal_assert(
        args.__len__() == 1 and isinstance(args[0], _expr.PrimExpr),
        "Only one expression can be cast",
    )
    return _expr.Cast(func_id, args[0])


float16 = float32 = float64 = _cast  # pylint: disable=invalid-name
int8 = int16 = int32 = int64 = _cast  # pylint: disable=invalid-name
uint8 = uint16 = uint32 = uint64 = _cast  # pylint: disable=invalid-name


def ceil_div(func_id, args):
    _internal_assert(func_id == "ceil_div", "This function cannot be directly invoked!")
    _internal_assert(args.__len__() == 2, "2 arguments expected for division!")
    _internal_assert(isinstance(args[0], _expr.PrimExpr), "Only expressions can div")
    _internal_assert(isinstance(args[1], _expr.PrimExpr), "Only expressions can div")
    a, b = args[0], args[1]
    return (a + b - 1) // b


def likely(func_id, args):
    _internal_assert(args.__len__() == 1, "Only one expression can be likely")
    _internal_assert(func_id == "likely", "This function cannot be directly invoked!")
    return call_intrin(args[0].dtype, "tir.likely", *args)


def max_num_threads(func_id, args):
    """Set the maximum number of threads."""
    _internal_assert(func_id == "max_num_threads", "This function cannot be directly invoked!")
    _internal_assert(args.__len__() <= 1, "At most one argument accepted!")
    if args.__len__() == 0:
        res = Target.current().max_num_threads
    else:
        _internal_assert(isinstance(args[0], _expr.IntImm), "In tvm bool should be uint")
        res = Target.current(args[0].value).max_num_threads
    return convert(res)


def inf(func_id, args):
    """Infinity"""
    _internal_assert(func_id == "inf", "This function cannot be directly invoked!")
    _internal_assert(args.__len__() == 1, "One argument accepted!")
    return tvm.tir.max_value(args[0])


def ninf(func_id, args):
    """Negative infinity"""
    _internal_assert(func_id == "ninf", "This function cannot be directly invoked!")
    _internal_assert(args.__len__() == 1, "One argument accepted!")
    return tvm.tir.min_value(args[0])
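The `ceil_div` intrinsic above lowers to the integer identity `(a + b - 1) // b`. A standalone plain-Python sketch (not TVM code) checking that identity against `math.ceil` for positive integers:

```python
import math


def ceil_div(a: int, b: int) -> int:
    # Same formula the intrinsic emits, applied to ordinary Python ints.
    return (a + b - 1) // b


# The identity holds for all positive integers.
assert all(
    ceil_div(a, b) == math.ceil(a / b) for a in range(1, 100) for b in range(1, 20)
)
print(ceil_div(10, 3))  # → 4
```

The formula avoids floating point entirely, which is why it is also the form used to emulate `ceil_div` in the hybrid runtime globals.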
# ---- tvm :: tvm-main/python/tvm/te/hybrid/utils.py ----
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=inconsistent-return-statements
"""Internal utilities for parsing Python subset to TIR"""

import ast
import inspect
import logging
import sys
import numpy

import tvm.runtime
from tvm._ffi.base import numeric_types
from tvm.ir.container import Array
from tvm.tir import expr as _expr
from tvm.tir import stmt as _stmt
from tvm.te.tensor import Tensor


# pylint: disable=invalid-name
np_arg_types = tuple(list(numeric_types) + [numpy.ndarray])
tvm_arg_types = (Tensor, Array, _expr.Var, _expr.ConstExpr)
halide_imm_types = (_expr.IntImm, _expr.FloatImm)


def _internal_assert(cond, err):
    """Simplify code segments like `if not XXX then raise an error`"""
    if not cond:
        raise ValueError(err)


# Useful constants. To avoid runtime dependencies, we use function calls to return them.
def make_nop():
    """Returns a 'no operation' node in HalideIR."""
    return _stmt.Evaluate(tvm.runtime.const(0, dtype="int32"))


def is_docstring(node):
    """Checks if a Python AST node is a docstring"""
    return isinstance(node, ast.Expr) and isinstance(node.value, ast.Str)


def _pruned_source(func):
    """Prune the source code's extra leading spaces"""
    try:
        lines = inspect.getsource(func).split("\n")
        leading_space = len(lines[0]) - len(lines[0].lstrip(" "))
        lines = [line[leading_space:] for line in lines]
        return "\n".join(lines)
    except IOError as err:
        if sys.version_info[0] == 2 and str(err) == "could not get source code":
            logging.log(
                logging.CRITICAL,
                "This module is not fully operated under Python2... "
                "Please move to Python3!",
            )
        raise err


def replace_io(body, rmap):
    """Replace tensor usages according to the given dict"""
    # pylint: disable=import-outside-toplevel
    from tvm.tir import stmt_functor

    def replace(op):
        if isinstance(op, _stmt.ProducerStore) and op.producer.op in rmap.keys():
            buf = rmap[op.producer.op]
            return _stmt.ProducerStore(buf, op.value, op.indices)
        if isinstance(op, _expr.ProducerLoad) and op.producer.op in rmap.keys():
            buf = rmap[op.producer.op]
            return _expr.ProducerLoad(buf, op.indices)
        return None

    return stmt_functor.ir_transform(body, None, replace, ["tir.ProducerStore", "tir.ProducerLoad"])


def _is_tvm_arg_types(args):
    """Determine whether a list of elements consists of tvm arguments or
    of numpy arguments. If neither is true, raise a ValueError."""
    if isinstance(args[0], tvm_arg_types):
        for elem in args[1:]:
            _internal_assert(
                isinstance(elem, tvm_arg_types),
                f"Expecting a Var, Tensor or ConstExpr instance but {type(elem)} get!",
            )
        return True

    _internal_assert(
        isinstance(args[0], np_arg_types), f"Expect a numpy type but {type(args[0])} get!"
    )
    for elem in args[1:]:
        _internal_assert(
            isinstance(elem, np_arg_types), f"Expect a numpy type but {type(elem)} get!"
        )
    return False
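`_pruned_source` exists so that a hybrid function defined inside a class or another function still parses as top-level code: it measures the indentation of the first source line and strips that much from every line. A minimal standalone sketch of that logic (the name `prune_leading_spaces` is illustrative, not TVM's):

```python
def prune_leading_spaces(source: str) -> str:
    # Measure the indentation of the first line and strip it from all lines,
    # mirroring what _pruned_source does with inspect.getsource output.
    lines = source.split("\n")
    leading = len(lines[0]) - len(lines[0].lstrip(" "))
    return "\n".join(line[leading:] for line in lines)


src = "    def f(x):\n        return x + 1"
print(prune_leading_spaces(src))
# → def f(x):
#       return x + 1  (dedented by 4 spaces)
```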
# ---- tvm :: tvm-main/python/tvm/te/hybrid/module.py ----
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Methods and data structures to support dumping HalideIR to Hybrid Script.
This allows users to quickly hack the generated HalideIR and cast it back
into TVM modules.

To enable this feature, you need to build with -DUSE_HYBRID_DUMP=ON.
"""

import ast

from tvm.contrib import utils

from .utils import _internal_assert
from .utils import _is_tvm_arg_types
from .parser import source_to_op


class HybridModule(object):
    """The usage of Hybrid Module is very similar to conventional TVM module,
    but a conventional TVM module requires a function body which is already fully
    lowered. This contradicts the fact that Hybrid Module is originally a text
    format for Phase 0 HalideIR. Thus, a totally separate module is defined."""

    def __init__(self, src=None, name=None):
        """The constructor of a hybrid module

        Parameters
        ----------
        src : str
            The source code of this module
        name : str
            The name of this module
        """
        self.src_ = self.name = self.func_ = self.root_ = None
        if src is not None:
            temp = utils.tempdir()
            dst = temp.relpath("script.py")
            with open(dst, "w") as f:
                f.write(f"import tvm\n@tvm.te.hybrid.script\n{src}")

            if name is not None:
                self.name = name
            self.load(dst)

    def __call__(self, *args):
        if _is_tvm_arg_types(args):
            return source_to_op(self.root_, args, globals(), {})
        return self.func_(*args)

    def get_source(self):
        return self.src_

    def save(self, path):
        if not path.endswith(".py"):
            path = path + ".py"
        with open(path, "w") as f:
            f.write(self.src_)

    def load(self, path):
        """Load the module from a python file

        Parameters
        ----------
        path : str
            Path to the given python file
        """
        with open(path, "r") as f:
            self.src_ = f.read()

        src = self.src_

        class FindFunc(ast.NodeVisitor):
            """Find the function to be loaded in the module."""

            # pylint: disable=invalid-name
            def __init__(self):
                self.name = None
                self.root = None

            def visit_FunctionDef(self, node):
                _internal_assert(self.name is None, "For now, only one function supported!")
                self.name = node.name
                _internal_assert(self.root is None, "For now, only one function supported!")
                self.root = node

        root = ast.parse(src)
        finder = FindFunc()
        finder.visit(root)
        _internal_assert(finder.name is not None and finder.root is not None, "No function found!")
        if self.name is None:
            self.name = finder.name
        self.root_ = finder.root

        _, local_ = {}, {}
        exec(self.src_, _, local_)  # pylint: disable=exec-used
        local_.pop("tvm")
        assert len(local_) == 1
        self.func_ = list(local_.values())[0]
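`HybridModule.load` locates the single function definition in the saved script with an `ast.NodeVisitor` subclass. A self-contained sketch of that `FindFunc` pattern, using a plain `assert` in place of `_internal_assert`:

```python
import ast


class FindFunc(ast.NodeVisitor):
    """Locate the (single) top-level function definition in parsed source."""

    def __init__(self):
        self.name = None
        self.root = None

    def visit_FunctionDef(self, node):
        # Mirrors the "only one function supported" restriction above.
        assert self.name is None, "For now, only one function supported!"
        self.name = node.name
        self.root = node


src = "def outer_product(a, b):\n    return a\n"
finder = FindFunc()
finder.visit(ast.parse(src))
print(finder.name)  # → outer_product
```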
# ---- tvm :: tvm-main/python/tvm/te/hybrid/runtime.py ----
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Intrinsics of TVM-Python Hybrid Script for Python emulation runtime"""

import numpy

from tvm.target import Target


class bind(object):  # pylint: disable=invalid-name
    """GPU bind software emulation runtime."""

    def __init__(self, _, ext):
        self.ext = ext

    def __iter__(self):
        i = 0
        while i < self.ext:
            yield i
            i += 1


def allocate(shape, dtype="float32", scope="global"):  # pylint: disable=unused-argument
    """Allocate a buffer with given shape

    Parameters
    ----------
    shape: Tuple
        The shape of the tensor to be allocated
    dtype: string
        The data type of the tensor
    scope: string
        The storage scope of the tensor

    Returns
    -------
    tensor: numpy.array
        The tensor allocated
    """
    return numpy.zeros(shape).astype(dtype)


def rsqrt(x):
    """
    Computes the reciprocal of the square root of x element-wise

    Parameters
    ----------
    x: Tensor

    Returns
    -------
    res: Tensor
        The result of the reciprocal of the square root of x
    """
    return numpy.ones_like(x) / numpy.sqrt(x)


def popcount(x):
    """
    Count the ones in the binary representation of number x

    Parameters
    ----------
    x: Integer
        The number to be counted

    Returns
    -------
    cnt: Integer
        The number of ones in the binary representation of number x
    """
    cnt = 0
    while x:
        x -= x & -x
        cnt += 1
    return cnt


def sigmoid(x):
    """
    Sigmoid function of x, aka 1/(1+exp(-x)).

    Parameters
    ----------
    x: a real number

    Returns
    -------
    res: a real number
        The result of the sigmoid function
    """
    return 1 / (1 + numpy.exp(-x))


def max_num_threads(allow_none=True):
    """Get the max number of threads for GPU targets."""
    return Target.current(allow_none).max_num_threads


def inf(dtype):
    return numpy.iinfo(dtype).max


def ninf(dtype):
    return numpy.iinfo(dtype).min


HYBRID_GLOBALS = {
    "unroll": range,
    "vectorize": range,
    "parallel": range,
    "const_range": range,
    "bind": bind,
    "allocate": allocate,
    "output_tensor": allocate,
    "sqrt": numpy.sqrt,
    "rsqrt": rsqrt,
    "log": numpy.log,
    "tanh": numpy.tanh,
    "power": numpy.power,
    "exp": numpy.exp,
    "sigmoid": sigmoid,
    "popcount": popcount,
    "round": round,
    "likely": lambda cond: cond,
    "uint8": numpy.uint8,
    "uint16": numpy.uint16,
    "uint32": numpy.uint32,
    "uint64": numpy.uint64,
    "int8": numpy.int8,
    "int16": numpy.int16,
    "int32": numpy.int32,
    "int64": numpy.int64,
    "float16": numpy.float16,
    "float32": numpy.float32,
    "float64": numpy.float64,
    "ceil_div": lambda a, b: (a + b - 1) // b,
    "max_num_threads": max_num_threads,
    "inf": inf,
    "ninf": ninf,  # was "ninf": inf, which mapped negative infinity to the max value
}


def _enter_hybrid_runtime(func):
    """Put hybrid runtime variables into the global scope"""
    _globals = func.__globals__
    intersect = []
    for elem in list(HYBRID_GLOBALS.keys()):
        if elem in _globals.keys():
            intersect.append((elem, _globals[elem]))
        _globals[elem] = HYBRID_GLOBALS[elem]
    return intersect


def _restore_runtime(func, intersect):
    """Rollback the modification caused by the hybrid runtime"""
    _globals = func.__globals__
    for elem in list(HYBRID_GLOBALS.keys()):
        _globals.pop(elem)
    for k, v in intersect:
        _globals[k] = v
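The emulation `popcount` above repeatedly clears the lowest set bit with `x -= x & -x` (Kernighan's trick) and counts the iterations. A quick standalone check against Python's own bit counting:

```python
def popcount(x: int) -> int:
    # Clear the lowest set bit until none remain, counting iterations.
    cnt = 0
    while x:
        x -= x & -x
        cnt += 1
    return cnt


assert all(popcount(x) == bin(x).count("1") for x in range(2048))
print(popcount(0b10110))  # → 3
```

The loop runs once per set bit rather than once per bit position, which is why the trick is a common popcount idiom for small integers.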
# ---- tvm :: tvm-main/python/tvm/te/hybrid/__init__.py ----
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Hybrid Programming APIs of TVM Python Package.

This package maps a subset of python to HalideIR so that:
1. Users can write some preliminary versions of computation patterns
   that have not been supported yet and verify them across the real
   execution and the python semantic emulation.
2. So far, it is a text format dedicated to HalideIR Phase 0. Refer to
   tvm.lower for more details. A larger ambition of this module is to
   support all levels of HalideIR.
"""

# TODO(@were): Make this module more complete.
# 1. Support HalideIR dumping to Hybrid Script
# 2. Support multi-level HalideIR

import inspect

import tvm._ffi
import tvm.te.schedule
from tvm._ffi.base import decorate

from .module import HybridModule
from .parser import source_to_op
from .utils import _pruned_source


def script(pyfunc):
    """Decorate a python function as hybrid script.

    The hybrid function supports emulation mode and parsing to
    the internal language IR.

    Returns
    -------
    hybrid_func : function
        A decorated hybrid script function.
    """

    # pylint: disable=import-outside-toplevel, missing-docstring
    def wrapped_func(func, *args, **kwargs):
        from .utils import _is_tvm_arg_types

        if _is_tvm_arg_types(args):
            src = _pruned_source(func)
            closure_vars = inspect.getclosurevars(func).nonlocals
            closure_vars.update(inspect.getclosurevars(func).globals)
            return source_to_op(src, args, func.__globals__, closure_vars)

        from .runtime import _enter_hybrid_runtime, _restore_runtime

        intersect = _enter_hybrid_runtime(func)
        value = func(*args, **kwargs)
        _restore_runtime(func, intersect)
        return value

    return decorate(pyfunc, wrapped_func)


def build(sch, inputs, outputs, name="hybrid_func"):
    """Dump the current schedule to a hybrid module

    Parameters
    ----------
    sch: tvm.te.Schedule
        The schedule to be dumped
    inputs: An array of Tensors or Vars
        The inputs of the function body
    outputs: An array of Tensors
        The outputs of the function body

    Returns
    -------
    module: HybridModule
        The built result is wrapped in a HybridModule.
        The usage of HybridModule is roughly the same as normal TVM-built modules.
    """
    sch = sch.normalize()
    bounds = tvm.te.schedule.InferBound(sch)
    stmt = tvm.te.schedule.ScheduleOps(sch, bounds)

    src = _Dump(stmt, inputs, outputs, name)

    return HybridModule(src, name)


tvm._ffi._init_api("tvm.hybrid", __name__)
# ---- tvm :: tvm-main/python/tvm/te/hybrid/preprocessor.py ----
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Determines the declaration, r/w status, and last use of each variable"""

import ast
import sys

from .runtime import HYBRID_GLOBALS
from .utils import _internal_assert


class PyVariableUsage(ast.NodeVisitor):
    """The visitor class to determine the declaration, r/w status, and last use of each variable"""

    # pylint: disable=invalid-name
    # pylint: disable=missing-docstring
    def __init__(self, args, symbols, closure_vars):
        self.status = {}
        self.scope_level = []
        self._args = {}
        self.args = args
        self.aug_assign_ = False
        self.symbols = symbols
        self.closure_vars = closure_vars

    def visit_FunctionDef(self, node):
        self.scope_level.append(node)
        _internal_assert(
            len(node.args.args) == len(self.args),
            "#arguments passed should be the same as #arguments defined",
        )
        for idx, arg in enumerate(node.args.args):
            _attr = "id" if sys.version_info[0] < 3 else "arg"  # To make py2 and 3 compatible
            self._args[getattr(arg, _attr)] = self.args[idx]
        for i in node.body:
            self.visit(i)

    def visit_For(self, node):
        _internal_assert(isinstance(node.target, ast.Name), "For's iterator should be an id")
        self.visit(node.iter)
        self.scope_level.append(node)
        for i in node.body:
            self.visit(i)
        self.scope_level.pop()

    def visit_Call(self, node):
        # No function pointer supported so far
        _internal_assert(isinstance(node.func, ast.Name), "Function call should be an id")
        func_id = node.func.id
        _internal_assert(
            func_id
            in list(HYBRID_GLOBALS.keys())
            + ["range", "max", "min", "len"]
            + list(self.symbols.keys()),
            "Function call id " + func_id + " not in intrinsics' list",
        )
        for elem in node.args:
            self.visit(elem)

    def visit_AugAssign(self, node):
        self.aug_assign_ = True
        self.generic_visit(node)
        self.aug_assign_ = False

    def visit_Name(self, node):
        # If it is True or False, we do not worry about it!
        if sys.version_info[0] == 2 and node.id in ["True", "False"]:
            return
        # If it is from the argument list or a loop variable, we do not worry about it!
        if node.id in self._args.keys():
            return
        fors = [loop.target.id for loop in self.scope_level if isinstance(loop, ast.For)]
        if node.id in fors:
            return
        # The loop variable cannot be overwritten during iteration
        _internal_assert(
            not isinstance(node.ctx, ast.Store) or node.id not in fors,
            "Iter var cannot be overwritten",
        )

        if node.id not in self.status.keys():
            # It is a captured value in a closure
            if node.id in self.closure_vars:
                try:
                    ast.literal_eval(str(self.closure_vars[node.id]))
                except ValueError:
                    raise ValueError("Only support capturing constant values in closure")
                return

            _internal_assert(isinstance(node.ctx, ast.Store), f"Undeclared variable {node.id}")
            if self.aug_assign_:
                raise ValueError('"First store" cannot be an AugAssign')
            self.status[node.id] = (node, self.scope_level[-1], set())
        else:
            decl, loop, usage = self.status[node.id]
            usage.add(type(node.ctx))
            _internal_assert(
                loop in self.scope_level, f"{node.id} is used out of the scope it is defined!"
            )
            self.status[node.id] = (decl, loop, usage)


def determine_variable_usage(root, args, symbols, closure_vars):
    """The helper function for calling the dedicated visitor."""
    visitor = PyVariableUsage(args, symbols, closure_vars)
    visitor.visit(root)
    return visitor.status
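What `determine_variable_usage` ultimately records for each name is its declaration site plus the set of `ast.Load`/`ast.Store` contexts it appears in. A stripped-down standalone illustration of that bookkeeping using a plain `ast.walk` instead of the full visitor:

```python
import ast

src = (
    "def f(n):\n"
    "    s = 0\n"
    "    for i in range(n):\n"
    "        s = s + i\n"
    "    return s\n"
)

usage = {}
for node in ast.walk(ast.parse(src)):
    if isinstance(node, ast.Name):
        # node.ctx is ast.Store for writes and ast.Load for reads.
        usage.setdefault(node.id, set()).add(type(node.ctx).__name__)

print(sorted(usage["s"]))  # → ['Load', 'Store']
```

The real visitor additionally tracks scope (which loop a variable was declared in) so it can reject uses outside the defining scope.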
# ---- tvm :: tvm-main/python/tvm/testing/plugin.py ----
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-argument
"""Pytest plugin for using tvm testing extensions.

TVM provides utilities for testing across all supported targets, and
to more easily parametrize across many inputs. For more information
on usage of these features, see documentation in the tvm.testing
module.

These are enabled by default in all pytests provided by tvm, but may
be useful externally for one-off testing. To enable, add the
following line to the test script, or to the conftest.py in the same
directory as the test scripts.

    pytest_plugins = ['tvm.testing.plugin']

"""

import pytest
import _pytest

import tvm
from tvm.testing import utils


try:
    from xdist.scheduler.loadscope import LoadScopeScheduling

    HAVE_XDIST = True
except ImportError:
    HAVE_XDIST = False


MARKERS = {
    "gpu": "mark a test as requiring a gpu",
    "tensorcore": "mark a test as requiring a tensorcore",
    "cuda": "mark a test as requiring cuda",
    "opencl": "mark a test as requiring opencl",
    "rocm": "mark a test as requiring rocm",
    "vulkan": "mark a test as requiring vulkan",
    "metal": "mark a test as requiring metal",
    "llvm": "mark a test as requiring llvm",
    "ethosn": "mark a test as requiring ethosn",
    "hexagon": "mark a test as requiring hexagon",
    "corstone300": "mark a test as requiring Corstone300 FVP",
}


def pytest_configure(config):
    """Runs at pytest configure time, defines marks to be used later."""
    for feature in utils.Feature._all_features.values():
        feature._register_marker(config)

    print("enabled targets:", "; ".join(map(lambda x: x[0], utils.enabled_targets())))
    print("pytest marker:", config.option.markexpr)


def pytest_addoption(parser):
    """Add pytest options."""
    parser.addoption("--gtest_args", action="store", default="")


def pytest_generate_tests(metafunc):
    """Called once per unit test, modifies/parametrizes it as needed."""
    _parametrize_correlated_parameters(metafunc)
    _auto_parametrize_target(metafunc)
    _add_target_specific_marks(metafunc)

    # Process gtest arguments
    option_value = metafunc.config.option.gtest_args
    if "gtest_args" in metafunc.fixturenames and option_value is not None:
        metafunc.parametrize("gtest_args", [option_value])


def pytest_collection_modifyitems(config, items):
    """Called after all tests are chosen, currently used for bookkeeping."""
    # pylint: disable=unused-argument
    _count_num_fixture_uses(items)
    _remove_global_fixture_definitions(items)
    _sort_tests(items)


@pytest.fixture
def dev(target):
    """Give access to the device to tests that need it."""
    return tvm.device(target)


def pytest_sessionfinish(session, exitstatus):
    # Don't exit with an error if we select a subset of tests that doesn't
    # include anything
    if session.config.option.markexpr != "":
        if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
            session.exitstatus = pytest.ExitCode.OK


def _auto_parametrize_target(metafunc):
    """Automatically applies parametrize_targets

    Used if a test function uses the "target" fixture, but isn't
    already marked with @tvm.testing.parametrize_targets. Intended
    for use in the pytest_generate_tests() handler of a conftest.py
    file.
    """

    if "target" in metafunc.fixturenames:
        # Check if any explicit parametrizations exist, and apply one
        # if they do not. If the function is marked with either
        # excluded or known failing targets, use these to determine
        # the targets to be used.
        parametrized_args = [
            arg.strip()
            for mark in metafunc.definition.iter_markers("parametrize")
            for arg in mark.args[0].split(",")
        ]
        if "target" not in parametrized_args:
            excluded_targets = getattr(metafunc.function, "tvm_excluded_targets", [])

            # Add a parametrize marker instead of calling
            # metafunc.parametrize so that the parametrize rewriting
            # can still occur.
            mark = pytest.mark.parametrize(
                "target",
                [
                    t["target"]
                    for t in utils._get_targets()
                    if t["target_kind"] not in excluded_targets
                ],
                scope="session",
            )
            metafunc.definition.add_marker(mark)


def _add_target_specific_marks(metafunc):
    """Add any target-specific marks to parametrizations over target"""

    def update_parametrize_target_arg(
        mark,
        argnames,
        argvalues,
        *args,
        **kwargs,
    ):
        args = [arg.strip() for arg in argnames.split(",") if arg.strip()]
        if "target" in args:
            target_i = args.index("target")

            new_argvalues = []
            for argvalue in argvalues:
                if isinstance(argvalue, _pytest.mark.structures.ParameterSet):
                    # The parametrized value is already a
                    # pytest.param, so track any marks already
                    # defined.
                    param_set = argvalue.values
                    target = param_set[target_i]
                    additional_marks = argvalue.marks
                elif len(args) == 1:
                    # Single value parametrization, argvalue is a list of values.
                    target = argvalue
                    param_set = (target,)
                    additional_marks = []
                else:
                    # Multiple correlated parameters, argvalue is a list of tuples of values.
                    param_set = argvalue
                    target = param_set[target_i]
                    additional_marks = []

                if mark in metafunc.definition.own_markers:
                    xfail_targets = getattr(metafunc.function, "tvm_known_failing_targets", [])
                    target_kind = target.split()[0] if isinstance(target, str) else target.kind.name
                    if target_kind in xfail_targets:
                        additional_marks.append(
                            pytest.mark.xfail(
                                reason=f'Known failing test for target "{target_kind}"'
                            )
                        )

                new_argvalues.append(
                    pytest.param(
                        *param_set, marks=_target_to_requirement(target) + additional_marks
                    )
                )

            try:
                argvalues[:] = new_argvalues
            except TypeError as err:
                pyfunc = metafunc.definition.function
                filename = pyfunc.__code__.co_filename
                line_number = pyfunc.__code__.co_firstlineno
                msg = (
                    f"Unit test {metafunc.function.__name__} ({filename}:{line_number}) "
                    "is parametrized using a tuple of parameters instead of a list "
                    "of parameters."
                )
                raise TypeError(msg) from err

    if "target" in metafunc.fixturenames:
        # Update any explicit use of @pytest.mark.parametrize to
        # parametrize over targets. This adds the appropriate
        # @tvm.testing.requires_* markers for each target.
        for mark in metafunc.definition.iter_markers("parametrize"):
            update_parametrize_target_arg(mark, *mark.args, **mark.kwargs)


def _count_num_fixture_uses(items):
    # Helper function, counts the number of tests that use each cached
    # fixture. Should be called from pytest_collection_modifyitems().
    for item in items:
        is_skipped = item.get_closest_marker("skip") or any(
            mark.args[0] for mark in item.iter_markers("skipif")
        )
        if is_skipped:
            continue

        for fixturedefs in item._fixtureinfo.name2fixturedefs.values():
            # Only increment the active fixturedef, in case a name has been overridden.
            fixturedef = fixturedefs[-1]
            if hasattr(fixturedef.func, "num_tests_use_this_fixture"):
                fixturedef.func.num_tests_use_this_fixture[0] += 1


def _remove_global_fixture_definitions(items):
    # Helper function, removes fixture definitions from the global
    # variables of the modules they were defined in. This is intended
    # to improve readability of error messages by giving a NameError
    # if a test function accesses a pytest fixture but doesn't include
    # it as an argument. Should be called from
    # pytest_collection_modifyitems().

    modules = set(item.module for item in items)

    for module in modules:
        for name in dir(module):
            obj = getattr(module, name)
            if hasattr(obj, "_pytestfixturefunction") and isinstance(
                obj._pytestfixturefunction, _pytest.fixtures.FixtureFunctionMarker
            ):
                delattr(module, name)


def _sort_tests(items):
    """Sort tests by file/function.

    By default, pytest will sort tests to maximize the re-use of
    fixtures. However, this assumes that all fixtures have an equal
    cost to generate, and no caches outside of those managed by
    pytest. A tvm.testing.parameter is effectively free, while
    reference data for testing may be quite large. Since most of the
    TVM fixtures are specific to a python function, sort the test
    ordering by python function, so that
    tvm.testing.utils._fixture_cache can be cleared sooner rather
    than later.

    Should be called from pytest_collection_modifyitems.
    """

    def sort_key(item):
        filename, lineno, test_name = item.location
        test_name = test_name.split("[")[0]
        return filename, lineno, test_name

    items.sort(key=sort_key)


def _target_to_requirement(target):
    if isinstance(target, str):
        target = tvm.target.Target(target)

    # mapping from target to decorator
    if target.kind.name == "cuda" and "cudnn" in target.attrs.get("libs", []):
        return utils.requires_cudnn.marks()
    if target.kind.name == "cuda" and "cublas" in target.attrs.get("libs", []):
        return utils.requires_cublas.marks()
    if target.kind.name == "cuda":
        return utils.requires_cuda.marks()
    if target.kind.name == "rocm":
        return utils.requires_rocm.marks()
    if target.kind.name == "vulkan":
        return utils.requires_vulkan.marks()
    if target.kind.name == "nvptx":
        return utils.requires_nvptx.marks()
    if target.kind.name == "metal":
        return utils.requires_metal.marks()
    if target.kind.name == "opencl":
        return utils.requires_opencl.marks()
    if target.kind.name == "llvm":
        return utils.requires_llvm.marks()
    if target.kind.name == "hexagon":
        return utils.requires_hexagon.marks()
    return []


def _parametrize_correlated_parameters(metafunc):
    parametrize_needed = {}
    for name, fixturedefs in metafunc.definition._fixtureinfo.name2fixturedefs.items():
        fixturedef = fixturedefs[-1]
        if hasattr(fixturedef.func, "parametrize_group") and hasattr(
            fixturedef.func, "parametrize_values"
        ):
            group = fixturedef.func.parametrize_group
            values = fixturedef.func.parametrize_values
            ids = fixturedef.func.parametrize_ids
            if group in parametrize_needed:
                assert ids == parametrize_needed[group]["ids"]
            else:
                parametrize_needed[group] = {"ids": ids, "params": []}
            parametrize_needed[group]["params"].append((name, values))

    for parametrize_group in parametrize_needed.values():
        params = parametrize_group["params"]
        ids = parametrize_group["ids"]
        if len(params) == 1:
            name, values = params[0]
            metafunc.parametrize(name, values, indirect=True, ids=ids)
        else:
            names = ",".join(name for name, values in params)
            value_sets = zip(*[values for name, values in params])
            metafunc.parametrize(names, value_sets, indirect=True, ids=ids)


# pytest-xdist isn't required but is used in CI, so guard on its presence
if HAVE_XDIST:

    def pytest_xdist_make_scheduler(config, log):
        """
        Serialize certain tests for pytest-xdist that have inter-test
        dependencies
        """

        class TvmTestScheduler(LoadScopeScheduling):
            """
            Scheduler to serialize tests
            """

            def _split_scope(self, nodeid):
                """
                Returns a specific string for classes of nodeids
                """
                # NOTE: these tests contain inter-test dependencies and must be
                # serialized
                items = {
                    "test_tvm_testing_features": "functional-tests",
                    "tests/python/unittest/test_crt": "crt-tests",
                    "tests/python/driver/tvmc": "tvmc-tests",
                }

                for nodeid_pattern, suite_name in items.items():
                    if nodeid_pattern in nodeid:
                        return suite_name

                return nodeid

        return TvmTestScheduler(config, log)
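`_sort_tests` orders tests by `(filename, lineno, base test name)`, with the parametrization suffix `[...]` stripped so that all variants of one function share a sort key and stay adjacent. A standalone sketch of that ordering over plain location tuples:

```python
locations = [
    ("test_b.py", 30, "test_x[cuda]"),
    ("test_a.py", 10, "test_y[llvm]"),
    ("test_a.py", 10, "test_y[cuda]"),
]


def sort_key(location):
    filename, lineno, test_name = location
    # Strip the "[param]" suffix so all parametrizations share one key.
    return filename, lineno, test_name.split("[")[0]


locations.sort(key=sort_key)
print([name for _, _, name in locations])
# → ['test_y[llvm]', 'test_y[cuda]', 'test_x[cuda]']
```

Because Python's sort is stable, parametrizations with equal keys keep their original relative order, so this only groups variants together without reshuffling them.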
file_length: 13,992
avg_line_length: 36.314667
max_line_length: 100
extension_type: py
repo: tvm
file: tvm-main/python/tvm/testing/auto_scheduler.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, missing-function-docstring
"""Common functions for auto_scheduler test cases"""

import tvm
from tvm import auto_scheduler, te, topi
from tvm.topi.nn.winograd_util import winograd_transform_matrices
from tvm.topi.utils import get_const_tuple


@auto_scheduler.register_workload
def matmul_auto_scheduler_test(N, M, K):
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute(
        (N, M),
        lambda i, j: te.sum(A[i][k] * B[k][j], axis=[k]),
        name="C",
        attrs={"layout_free_placeholders": [B]},
    )
    return [A, B, C]


@auto_scheduler.register_workload
def double_matmul_auto_scheduler_test(N):
    A = te.placeholder((N, N), name="A", dtype="float32")
    B = te.placeholder((N, N), name="B", dtype="float32")
    C = te.placeholder((N, N), name="C", dtype="float32")
    k = te.reduce_axis((0, N), name="k")
    D = te.compute((N, N), lambda i, j: te.sum(A[i][k] * B[k][j], axis=[k]), name="D")
    k = te.reduce_axis((0, N), name="k")
    E = te.compute((N, N), lambda i, j: te.sum(D[i][k] * C[k][j], axis=[k]), name="E")
    return [A, B, C, E]


@auto_scheduler.register_workload
def parallel_matmul_auto_scheduler_test(N):
    """Two parallel matmuls with shared A."""
    A = te.placeholder((N, N), name="A", dtype="float32")
    B = te.placeholder((N, N), name="B", dtype="float32")
    C = te.placeholder((N, N), name="C", dtype="float32")
    k = te.reduce_axis((0, N), name="k")
    D = te.compute((N, N), lambda i, j: te.sum(A[i][k] * B[k][j], axis=[k]), name="D")
    k = te.reduce_axis((0, N), name="k")
    E = te.compute((N, N), lambda i, j: te.sum(A[i][k] * C[k][j], axis=[k]), name="E")
    return [A, B, C, D, E]


# Test for register_workload with different name
@auto_scheduler.register_workload("matmul_auto_scheduler_test_rename_1")
def matmul_auto_scheduler_test_rename_0(N, M, K):
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i][k] * B[k][j], axis=[k]), name="C")
    return [A, B, C]


@auto_scheduler.register_workload
def conv2d_nchw_bn_relu_auto_scheduler_test(
    N, H, W, CI, CO, kernel_size, strides, padding, dilation=1
):
    data = te.placeholder((N, CI, H, W), name="Data")
    kernel = te.placeholder((CO, CI, kernel_size, kernel_size), name="Kernel")
    bias = te.placeholder((CO, 1, 1), name="Bias")
    bn_scale = te.placeholder((CO, 1, 1), name="Bn_scale")
    bn_offset = te.placeholder((CO, 1, 1), name="Bn_offset")

    OH = (H + 2 * padding - (kernel_size - 1) * dilation - 1) // strides + 1
    OW = (W + 2 * padding - (kernel_size - 1) * dilation - 1) // strides + 1

    conv = topi.nn.conv2d_nchw(data, kernel, strides, padding, dilation)
    conv = te.compute(
        (N, CO, OH, OW), lambda i, j, k, l: conv[i, j, k, l] + bias[j, 0, 0], name="Bias_add"
    )
    conv = te.compute(
        (N, CO, OH, OW), lambda i, j, k, l: conv[i, j, k, l] * bn_scale[j, 0, 0], name="Bn_mul"
    )
    conv = te.compute(
        (N, CO, OH, OW), lambda i, j, k, l: conv[i, j, k, l] + bn_offset[j, 0, 0], name="Bn_add"
    )
    out = topi.nn.relu(conv)

    return [data, kernel, bias, bn_offset, bn_scale, out]


@auto_scheduler.register_workload
def max_pool2d_auto_scheduler_test(N, H, W, CI, padding):
    data = te.placeholder((N, CI, H, W), name="Data")
    out = topi.nn.pool2d(data, [2, 2], [1, 1], [1, 1], [padding, padding, padding, padding], "max")
    return [data, out]


@auto_scheduler.register_workload
def min_nm_auto_scheduler_test(N, M):
    A = te.placeholder((N, M), name="A")
    B = topi.min(A, axis=-1)
    return [A, B]


@auto_scheduler.register_workload
def softmax_nm_auto_scheduler_test(N, M):
    A = te.placeholder((N, M), name="A")
    B = topi.nn.softmax(A, axis=1)
    return [A, B]


@auto_scheduler.register_workload
def softmax_abcd_auto_scheduler_test(a, b, c, d):
    A = te.placeholder((a, b, c, d), name="A")
    B = topi.nn.softmax(A, axis=-1)
    return [A, B]


@auto_scheduler.register_workload
def invalid_compute_definition():
    A = te.placeholder((10, 10), name="A")
    # The names of the following two iterators are the same.
    # This is invalid.
    r1 = te.reduce_axis((0, 2), name="r1")
    r2 = te.reduce_axis((0, 2), name="r1")
    B = te.compute((10,), lambda i: te.sum(A[i][r1 + r2], axis=[r1, r2]), name="B")
    return [A, B]


@auto_scheduler.register_workload
def zero_rank_reduce_auto_scheduler_test(N):
    A = tvm.te.placeholder((N,), name="A")
    k = tvm.te.reduce_axis((0, N), name="k")
    B = tvm.te.compute((), lambda: tvm.te.sum(A[k], k), name="B")
    return [A, B]


@auto_scheduler.register_workload
def zero_rank_compute_auto_scheduler_test(N):
    A = tvm.te.placeholder((N,), name="A")
    B = tvm.te.compute((), lambda: A[0], name="B")
    return [A, B]


@auto_scheduler.register_workload
def conv2d_winograd_nhwc_auto_scheduler_test(
    N, H, W, CI, CO, kernel_size=3, stride=1, padding=0, dilation=1
):
    tile_size = 4
    inputs = te.placeholder((N, H, W, CI), name="inputs")
    N, H, W, CI = get_const_tuple(inputs.shape)
    if isinstance(dilation, int):
        dilation_h = dilation_w = dilation
    else:
        dilation_h, dilation_w = dilation
    assert (dilation_h, dilation_w) == (1, 1), "Does not support dilation"

    KH = KW = kernel_size
    HPAD, WPAD, _, _ = topi.nn.get_pad_tuple(padding, (KH, KW))
    HSTR, WSTR = (stride, stride) if isinstance(stride, int) else stride
    assert HSTR == 1 and WSTR == 1 and KH == KW

    data_pad = topi.nn.pad(inputs, (0, HPAD, WPAD, 0), (0, HPAD, WPAD, 0), name="data_pad")

    r = KW
    m = tile_size
    alpha = m + r - 1
    A, B, _ = winograd_transform_matrices(m, r, "float32")

    H = (H + 2 * HPAD - KH) // HSTR + 1
    W = (W + 2 * WPAD - KW) // WSTR + 1
    nH, nW = (H + m - 1) // m, (W + m - 1) // m
    P = N * nH * nW

    kshape = (alpha, alpha, CI, CO)
    kernel_pack = te.placeholder(kshape, inputs.dtype, name="weight")

    idxdiv = te.indexdiv
    idxmod = te.indexmod
    # pack input tile
    input_tile = te.compute(
        (alpha, alpha, P, CI),
        lambda eps, nu, p, ci: data_pad[idxdiv(p, (nH * nW))][idxmod(idxdiv(p, nW), nH) * m + eps][
            idxmod(p, nW) * m + nu
        ][ci],
        name="input_tile",
    )

    # transform data
    r_a = te.reduce_axis((0, alpha), "r_a")
    r_b = te.reduce_axis((0, alpha), "r_b")
    data_pack = te.compute(
        (alpha, alpha, P, CI),
        lambda eps, nu, p, ci: te.sum(
            input_tile[r_a][r_b][p][ci] * B[r_a][eps] * B[r_b][nu], axis=[r_a, r_b]
        ),
        name="data_pack",
        attrs={"auto_scheduler_simplify_const_tensor_indices": ["eps", "nu", "r_a", "r_b"]},
    )

    # do batch gemm
    ci = te.reduce_axis((0, CI), name="ci")
    bgemm = te.compute(
        (alpha, alpha, P, CO),
        lambda eps, nu, p, co: te.sum(
            data_pack[eps][nu][p][ci] * kernel_pack[eps][nu][ci][co], axis=[ci]
        ),
        name="bgemm",
    )

    # inverse transform
    r_a = te.reduce_axis((0, alpha), "r_a")
    r_b = te.reduce_axis((0, alpha), "r_b")
    inverse = te.compute(
        (m, m, P, CO),
        lambda vh, vw, p, co: te.sum(
            bgemm[r_a][r_b][p][co] * A[r_a][vh] * A[r_b][vw], axis=[r_a, r_b]
        ),
        name="inverse",
        attrs={"auto_scheduler_simplify_const_tensor_indices": ["vh", "vw", "r_a", "r_b"]},
    )

    # output
    output = te.compute(
        (N, H, W, CO),
        lambda n, h, w, co: inverse[
            idxmod(h, m), idxmod(w, m), n * nH * nW + idxdiv(h, m) * nW + idxdiv(w, m), co
        ],
        name="conv2d_winograd",
    )

    return [inputs, kernel_pack, output]


def get_tiled_matmul():
    """Get a compute dag and a state for tiled matmul"""
    A, B, C = matmul_auto_scheduler_test(512, 512, 512)
    dag = auto_scheduler.ComputeDAG([A, B, C])

    s0 = dag.get_init_state()
    its0 = s0.split(C, s0[C].iters[0], [4, 8, 8])
    its1 = s0.split(C, s0[C].iters[4], [8, 4, 4])
    s0.reorder(
        C, [its0[0], its1[0], its0[1], its1[1], its0[2], its1[2], its0[3], its1[3], s0[C].iters[8]]
    )

    return dag, s0
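The `matmul_auto_scheduler_test` workload above declares an ordinary matrix product, C[i, j] = sum over k of A[i, k] * B[k, j], as a TE compute. As a reference, the same math in plain Python (an illustrative sketch, not part of the TVM file):

```python
def matmul_reference(A, B):
    # C[i][j] = sum over k of A[i][k] * B[k][j],
    # mirroring the te.sum(A[i][k] * B[k][j], axis=[k]) compute above.
    N, K, M = len(A), len(B), len(B[0])
    return [
        [sum(A[i][k] * B[k][j] for k in range(K)) for j in range(M)]
        for i in range(N)
    ]


A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_reference(A, B))  # [[19, 22], [43, 50]]
```

The TE version only *describes* this computation; the schedule (loop order, tiling) is chosen later by the auto-scheduler, which is why the workload returns tensors rather than values.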
file_length: 9,066
avg_line_length: 32.958801
max_line_length: 99
extension_type: py
repo: tvm
file: tvm-main/python/tvm/testing/tir.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, import-outside-toplevel, unused-variable
"""Common utility functions in TVM tir"""


def mma_schedule(
    workload,
    k_inner,
    in_dtype,
    b_transposed,
    i_factors,
    j_factors,
    k_factors,
    index_map_A,
    index_map_B,
    index_map_C,
    ldmatrix_a_intrin,
    ldmatrix_b_intrin,
    mma_intrin,
    mma_fill_intrin,
    mma_store_intrin,
    shared_scope="shared",
):
    """Create a tensorized schedule for GEMM with MMA intrinsics."""
    import tvm  # pylint: disable=import-outside-toplevel

    ir_module = tvm.IRModule({"main": workload})
    sch = tvm.tir.Schedule(ir_module)

    block = sch.get_block("C")
    i, j, k = sch.get_loops(block)
    i, i_tc = sch.split(i, factors=[None, 16])
    j, j_tc = sch.split(j, factors=[None, 16])
    k, k_tc = sch.split(k, factors=[None, k_inner])

    sch.reorder(i, j, k, i_tc, j_tc, k_tc)

    block_inner = sch.blockize(i_tc)
    block_outer, block_inner = block_inner, block

    num_ty = i_factors[2] * j_factors[2]

    i0, i1, i2, i3, i4 = sch.split(i, factors=i_factors)
    j0, j1, j2, j3, j4 = sch.split(j, factors=j_factors)
    k0, k1, k2 = sch.split(k, k_factors)

    sch.reorder(i0, j0, i1, j1, j2, i2, k0, k1, i3, j3, k2, i4, j4)

    block_idx = sch.fuse(i0, j0)
    block_idy = sch.fuse(i1, j1)
    thread_idy = sch.fuse(j2, i2)
    sch.bind(block_idx, "blockIdx.x")
    sch.bind(block_idy, "blockIdx.y")
    sch.bind(thread_idy, "threadIdx.y")

    def fetch_to_shared(block, idx, ndim):
        block_read = sch.cache_read(block, idx, shared_scope)
        sch.compute_at(block_read, k0)
        vector_size = 16 if in_dtype == "int8" else 8
        warp_size = 32
        fused = sch.fuse(*sch.get_loops(block_read)[-ndim:])
        _, f_1, f_2, f_3 = sch.split(fused, factors=[None, num_ty, warp_size, vector_size])
        sch.bind(f_2, "threadIdx.x")
        sch.bind(f_1, "threadIdx.y")
        sch.vectorize(f_3)
        offset = 8 if in_dtype == "float16" else 16
        sch.storage_align(block_read, 0, axis=-2, factor=32, offset=offset)

        return block_read

    fetch_to_shared(block_outer, 0, 2)
    fetch_to_shared(block_outer, 1, 2)

    A_warp = sch.cache_read(block_outer, 0, "warp")
    B_warp = sch.cache_read(block_outer, 1, "warp")

    sch.compute_at(A_warp, k1)
    sch.compute_at(B_warp, k1)

    C_warp = sch.cache_write(block_outer, 0, "warp")
    sch.reverse_compute_at(C_warp, thread_idy)

    ii, jj = sch.get_loops(C_warp)[-2:]
    io, ii = sch.split(ii, factors=[None, 16])
    jo, ji = sch.split(jj, factors=[None, 16])
    sch.reorder(io, jo, ii, ji)

    sch.decompose_reduction(block_outer, sch.get_loops(block_outer)[3])
    block_init_c = sch.get_block("C_init")

    def tile_wmma_fragment(block_read, height, width):
        i, j = sch.get_loops(block_read)[-2:]
        i0, i1 = sch.split(i, factors=[None, height])
        j0, j1 = sch.split(j, factors=[None, width])
        sch.reorder(i0, j0, i1, j1)
        return i1

    loop_a = tile_wmma_fragment(A_warp, 16, k_inner)

    if b_transposed:
        loop_b = tile_wmma_fragment(B_warp, 16, k_inner)
    else:
        loop_b = tile_wmma_fragment(B_warp, k_inner, 16)

    sch.transform_layout(A_warp, ("write", 0), index_map_A)
    sch.transform_layout(B_warp, ("write", 0), index_map_B)
    sch.transform_layout(C_warp, ("read", 0), index_map_C)

    sch.tensorize(loop_a, ldmatrix_a_intrin)
    sch.tensorize(loop_b, ldmatrix_b_intrin)
    sch.tensorize(sch.get_loops(block_inner)[-3], mma_intrin)
    sch.tensorize(sch.get_loops(block_init_c)[-2], mma_fill_intrin)
    sch.tensorize(sch.get_loops(C_warp)[-2], mma_store_intrin)

    return sch


def mfma_schedule(
    workload,
    k_inner,
    in_dtype,
    b_transposed,
    i_factors,
    j_factors,
    k_factors,
    index_map_A,
    index_map_B,
    index_map_C,
    ldmatrix_a_intrin,
    ldmatrix_b_intrin,
    mfma_intrin,
    mfma_fill_intrin,
    mfma_store_intrin,
    shared_scope="shared",
):
    """Create a tensorized schedule for GEMM with MFMA intrinsics."""
    import tvm  # pylint: disable=import-outside-toplevel

    ir_module = tvm.IRModule({"main": workload})
    sch = tvm.tir.Schedule(ir_module)

    wmma_m = 16
    wmma_n = 16
    wmma_k = k_inner
    warp_size = 64

    block = sch.get_block("C")
    i, j, k = sch.get_loops(block)
    i, i_tc = sch.split(i, factors=[None, wmma_m])
    j, j_tc = sch.split(j, factors=[None, wmma_n])
    k, k_tc = sch.split(k, factors=[None, wmma_k])

    sch.reorder(i, j, k, i_tc, j_tc, k_tc)

    block_inner = sch.blockize(i_tc)
    block_outer, block_inner = block_inner, block

    num_ty = i_factors[2] * j_factors[2]

    i0, i1, i2, i3, i4 = sch.split(i, factors=i_factors)
    j0, j1, j2, j3, j4 = sch.split(j, factors=j_factors)
    k0, k1, k2 = sch.split(k, k_factors)

    sch.reorder(i0, j0, i1, j1, j2, i2, k0, k1, i3, j3, k2, i4, j4)

    block_idx = sch.fuse(i0, j0)
    block_idy = sch.fuse(i1, j1)
    thread_idy = sch.fuse(j2, i2)
    sch.bind(block_idx, "blockIdx.x")
    sch.bind(block_idy, "blockIdx.y")
    sch.bind(thread_idy, "threadIdx.y")

    def fetch_to_shared(block, idx, ndim):
        block_read = sch.cache_read(block, idx, shared_scope)
        sch.compute_at(block_read, k0)
        vector_size = 16 if in_dtype == "int8" else 8
        fused = sch.fuse(*sch.get_loops(block_read)[-ndim:])
        _, f_1, f_2, f_3 = sch.split(fused, factors=[None, num_ty, warp_size, vector_size])
        sch.bind(f_2, "threadIdx.x")
        sch.bind(f_1, "threadIdx.y")
        sch.vectorize(f_3)

        return block_read

    fetch_to_shared(block_outer, 0, 2)
    fetch_to_shared(block_outer, 1, 2)

    A_warp = sch.cache_read(block_outer, 0, "warp")
    B_warp = sch.cache_read(block_outer, 1, "warp")

    sch.compute_at(A_warp, k1)
    sch.compute_at(B_warp, k1)

    C_warp = sch.cache_write(block_outer, 0, "warp")
    sch.reverse_compute_at(C_warp, thread_idy)

    ii, jj = sch.get_loops(C_warp)[-2:]
    io, ii = sch.split(ii, factors=[None, 16])
    jo, ji = sch.split(jj, factors=[None, 16])
    sch.reorder(io, jo, ii, ji)

    sch.decompose_reduction(block_outer, sch.get_loops(block_outer)[3])
    block_init_c = sch.get_block("C_init")

    def tile_wmma_fragment(block_read, height, width):
        i, j = sch.get_loops(block_read)[-2:]
        i0, i1 = sch.split(i, factors=[None, height])
        j0, j1 = sch.split(j, factors=[None, width])
        sch.reorder(i0, j0, i1, j1)
        return i1

    loop_a = tile_wmma_fragment(A_warp, 16, k_inner)

    if b_transposed:
        loop_b = tile_wmma_fragment(B_warp, 16, k_inner)
    else:
        loop_b = tile_wmma_fragment(B_warp, k_inner, 16)

    sch.transform_layout(A_warp, ("write", 0), index_map_A)
    sch.transform_layout(B_warp, ("write", 0), index_map_B)
    sch.transform_layout(C_warp, ("read", 0), index_map_C)

    sch.tensorize(loop_a, ldmatrix_a_intrin)
    sch.tensorize(loop_b, ldmatrix_b_intrin)
    sch.tensorize(sch.get_loops(block_inner)[-3], mfma_intrin)
    sch.tensorize(sch.get_loops(block_init_c)[-2], mfma_fill_intrin)
    sch.tensorize(sch.get_loops(C_warp)[-2], mfma_store_intrin)

    return sch
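The `sch.split` and `sch.reorder` calls in the schedules above never change *what* is computed, only the loop nest's iteration order: a split of extent `E` by factor `f` rewrites index `i` as `i_outer * f + i_inner`, and a reorder just permutes the loops. A pure-Python sketch (hypothetical, no TVM required) checks that both transformations still visit every index exactly once:

```python
def split_traversal(extent, factor):
    # Mirrors sch.split(i, factors=[None, factor]): i = i_outer * factor + i_inner
    outer = extent // factor
    return [
        i_outer * factor + i_inner
        for i_outer in range(outer)
        for i_inner in range(factor)
    ]


def reordered_traversal(extent, factor):
    # After a sch.reorder that swaps the two loops, each index is still
    # visited exactly once, just in a different order.
    outer = extent // factor
    return [
        i_outer * factor + i_inner
        for i_inner in range(factor)
        for i_outer in range(outer)
    ]


assert split_traversal(32, 16) == list(range(32))
assert sorted(reordered_traversal(32, 16)) == list(range(32))
```

This is the invariant that makes the later `sch.tensorize` calls legal: the tiled innermost loops cover exactly one 16x16 (or 16x`k_inner`) fragment, which is then replaced by a hardware intrinsic.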
file_length: 7,909
avg_line_length: 31.68595
max_line_length: 91
extension_type: py
repo: tvm
file: tvm-main/python/tvm/testing/utils.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name,unnecessary-comprehension """TVM testing utilities Organization ************ This file contains functions expected to be called directly by a user while writing unit tests. Integrations with the pytest framework are in plugin.py. Testing Markers *************** We use pytest markers to specify the requirements of test functions. Currently there is a single distinction that matters for our testing environment: does the test require a gpu. For tests that require just a gpu or just a cpu, we have the decorator :py:func:`requires_gpu` that enables the test when a gpu is available. To avoid running tests that don't require a gpu on gpu nodes, this decorator also sets the pytest marker `gpu` so we can use select the gpu subset of tests (using `pytest -m gpu`). Unfortunately, many tests are written like this: .. python:: def test_something(): for target in all_targets(): do_something() The test uses both gpu and cpu targets, so the test needs to be run on both cpu and gpu nodes. But we still want to only run the cpu targets on the cpu testing node. The solution is to mark these tests with the gpu marker so they will be run on the gpu nodes. 
But we also modify all_targets (renamed to enabled_targets) so that it only returns gpu targets on gpu nodes and cpu targets on cpu nodes (using an environment variable). Instead of using the all_targets function, future tests that would like to test against a variety of targets should use the :py:func:`tvm.testing.parametrize_targets` functionality. This allows us greater control over which targets are run on which testing nodes. If in the future we want to add a new type of testing node (for example fpgas), we need to add a new marker in `tests/python/pytest.ini` and a new function in this module. Then targets using this node should be added to the `TVM_TEST_TARGETS` environment variable in the CI. """ import inspect import copy import copyreg import ctypes import functools import hashlib import itertools import logging import os import pickle import platform import sys import textwrap import time import shutil import subprocess from pathlib import Path from typing import Optional, Callable, Union, List, Tuple import pytest import numpy as np import tvm import tvm.arith import tvm.tir import tvm.te import tvm._ffi from tvm.contrib import nvcc, cudnn, rocm import tvm.contrib.hexagon._ci_env_check as hexagon from tvm.driver.tvmc.frontends import load_model from tvm.error import TVMError SKIP_SLOW_TESTS = os.getenv("SKIP_SLOW_TESTS", "").lower() in {"true", "1", "yes"} IS_IN_CI = os.getenv("CI", "") == "true" skip_if_wheel_test = pytest.mark.skipif( os.getenv("WHEEL_TEST", "").lower() in {"true", "1", "yes"}, reason="Test not supported in wheel.", ) def assert_allclose(actual, desired, rtol=1e-7, atol=1e-7): """Version of np.testing.assert_allclose with `atol` and `rtol` fields set in reasonable defaults. Arguments `actual` and `desired` are not interchangeable, since the function compares the `abs(actual-desired)` with `atol+rtol*abs(desired)`. Since we often allow `desired` to be close to zero, we generally want non-zero `atol`. 
""" actual = np.asanyarray(actual) desired = np.asanyarray(desired) np.testing.assert_allclose(actual.shape, desired.shape) np.testing.assert_allclose(actual, desired, rtol=rtol, atol=atol, verbose=True) def check_numerical_grads( function, input_values, grad_values, function_value=None, delta=1e-3, atol=1e-2, rtol=0.1 ): """A helper function that checks that numerical gradients of a function are equal to gradients computed in some different way (analytical gradients). Numerical gradients are computed using finite difference approximation. To reduce the number of function evaluations, the number of points used is gradually increased if the error value is too high (up to 5 points). Parameters ---------- function A function that takes inputs either as positional or as keyword arguments (either `function(*input_values)` or `function(**input_values)` should be correct) and returns a scalar result. Should accept numpy ndarrays. input_values : Dict[str, numpy.ndarray] or List[numpy.ndarray] A list of values or a dict assigning values to variables. Represents the point at which gradients should be computed. grad_values : Dict[str, numpy.ndarray] or List[numpy.ndarray] Gradients computed using a different method. function_value : float, optional Should be equal to `function(**input_values)`. delta : float, optional A small number used for numerical computation of partial derivatives. The default 1e-3 is a good choice for float32. atol : float, optional Absolute tolerance. Gets multiplied by `sqrt(n)` where n is the size of a gradient. rtol : float, optional Relative tolerance. 
""" # If input_values is a list then function accepts positional arguments # In this case transform it to a function taking kwargs of the form {"0": ..., "1": ...} if not isinstance(input_values, dict): input_len = len(input_values) input_values = {str(idx): val for idx, val in enumerate(input_values)} def _function(_input_len=input_len, _orig_function=function, **kwargs): return _orig_function(*(kwargs[str(i)] for i in range(input_len))) function = _function grad_values = {str(idx): val for idx, val in enumerate(grad_values)} if function_value is None: function_value = function(**input_values) # a helper to modify j-th element of val by a_delta def modify(val, j, a_delta): val = val.copy() val.reshape(-1)[j] = val.reshape(-1)[j] + a_delta return val # numerically compute a partial derivative with respect to j-th element of the var `name` def derivative(x_name, j, a_delta): modified_values = { n: modify(val, j, a_delta) if n == x_name else val for n, val in input_values.items() } return (function(**modified_values) - function_value) / a_delta def compare_derivative(j, n_der, grad): der = grad.reshape(-1)[j] return np.abs(n_der - der) < atol + rtol * np.abs(n_der) for x_name, grad in grad_values.items(): if grad.shape != input_values[x_name].shape: raise AssertionError( "Gradient wrt '{}' has unexpected shape {}, expected {} ".format( x_name, grad.shape, input_values[x_name].shape ) ) ngrad = np.zeros_like(grad) wrong_positions = [] # compute partial derivatives for each position in this variable for j in range(np.prod(grad.shape)): # forward difference approximation nder = derivative(x_name, j, delta) # if the derivative is not equal to the analytical one, try to use more # precise and expensive methods if not compare_derivative(j, nder, grad): # central difference approximation nder = (derivative(x_name, j, -delta) + nder) / 2 if not compare_derivative(j, nder, grad): # central difference approximation using h = delta/2 cnder2 = ( derivative(x_name, j, delta / 2) 
+ derivative(x_name, j, -delta / 2) ) / 2 # five-point derivative nder = (4 * cnder2 - nder) / 3 # if the derivatives still don't match, add this position to the # list of wrong positions if not compare_derivative(j, nder, grad): wrong_positions.append(np.unravel_index(j, grad.shape)) ngrad.reshape(-1)[j] = nder wrong_percentage = int(100 * len(wrong_positions) / np.prod(grad.shape)) dist = np.sqrt(np.sum((ngrad - grad) ** 2)) grad_norm = np.sqrt(np.sum(ngrad**2)) if not (np.isfinite(dist) and np.isfinite(grad_norm)): raise ValueError( "NaN or infinity detected during numerical gradient checking wrt '{}'\n" "analytical grad = {}\n numerical grad = {}\n".format(x_name, grad, ngrad) ) # we multiply atol by this number to make it more universal for different sizes sqrt_n = np.sqrt(float(np.prod(grad.shape))) if dist > atol * sqrt_n + rtol * grad_norm: raise AssertionError( "Analytical and numerical grads wrt '{}' differ too much\n" "analytical grad = {}\n numerical grad = {}\n" "{}% of elements differ, first 10 of wrong positions: {}\n" "distance > atol*sqrt(n) + rtol*grad_norm\n" "distance {} > {}*{} + {}*{}".format( x_name, grad, ngrad, wrong_percentage, wrong_positions[:10], dist, atol, sqrt_n, rtol, grad_norm, ) ) max_diff = np.max(np.abs(ngrad - grad)) avg_diff = np.mean(np.abs(ngrad - grad)) logging.info( "Numerical grad test wrt '%s' of shape %s passes, " "dist = %f, max_diff = %f, avg_diff = %f", x_name, grad.shape, dist, max_diff, avg_diff, ) def assert_prim_expr_equal(lhs, rhs): """Assert lhs and rhs equals to each iother. Parameters ---------- lhs : tvm.tir.PrimExpr The left operand. rhs : tvm.tir.PrimExpr The left operand. """ ana = tvm.arith.Analyzer() if not ana.can_prove_equal(lhs, rhs): raise ValueError("{} and {} are not equal".format(lhs, rhs)) def check_bool_expr_is_true(bool_expr, vranges, cond=None): """Check that bool_expr holds given the condition cond for every value of free variables from vranges. 
for example, 2x > 4y solves to x > 2y given x in (0, 10) and y in (0, 10) here bool_expr is x > 2y, vranges is {x: (0, 10), y: (0, 10)}, cond is 2x > 4y We creates iterations to check, for x in range(10): for y in range(10): assert !(2x > 4y) || (x > 2y) Parameters ---------- bool_expr : tvm.ir.PrimExpr Boolean expression to check vranges: Dict[tvm.tir.expr.Var, tvm.ir.Range] Free variables and their ranges cond: tvm.ir.PrimExpr extra conditions needs to be satisfied. """ if cond is not None: bool_expr = tvm.te.any(tvm.tir.Not(cond), bool_expr) def _run_expr(expr, vranges): """Evaluate expr for every value of free variables given by vranges and return the tensor of results. """ def _compute_body(*us): vmap = {v: u + r.min for (v, r), u in zip(vranges.items(), us)} return tvm.tir.stmt_functor.substitute(expr, vmap) A = tvm.te.compute([r.extent.value for v, r in vranges.items()], _compute_body) args = [tvm.nd.empty(A.shape, A.dtype)] sch = tvm.te.create_schedule(A.op) mod = tvm.build(sch, [A]) mod(*args) return args[0].numpy() res = _run_expr(bool_expr, vranges) if not np.all(res): indices = list(np.argwhere(res == 0)[0]) counterex = [(str(v), i + r.min) for (v, r), i in zip(vranges.items(), indices)] counterex = sorted(counterex, key=lambda x: x[0]) counterex = ", ".join([v + " = " + str(i) for v, i in counterex]) ana = tvm.arith.Analyzer() raise AssertionError( "Expression {}\nis not true on {}\n" "Counterexample: {}".format(ana.simplify(bool_expr), vranges, counterex) ) def check_int_constraints_trans_consistency(constraints_trans, vranges=None): """Check IntConstraintsTransform is a bijective transformation. 
Parameters ---------- constraints_trans : arith.IntConstraintsTransform Integer constraints transformation vranges: Dict[tvm.tir.Var, tvm.ir.Range] Free variables and their ranges """ if vranges is None: vranges = {} def _check_forward(constraints1, constraints2, varmap, backvarmap): ana = tvm.arith.Analyzer() all_vranges = vranges.copy() all_vranges.update({v: r for v, r in constraints1.ranges.items()}) # Check that the transformation is injective cond_on_vars = tvm.tir.const(1, "bool") for v in constraints1.variables: if v in varmap: # variable mapping is consistent v_back = ana.simplify(tvm.tir.stmt_functor.substitute(varmap[v], backvarmap)) cond_on_vars = tvm.te.all(cond_on_vars, v == v_back) # Also we have to check that the new relations are true when old relations are true cond_subst = tvm.tir.stmt_functor.substitute( tvm.te.all(tvm.tir.const(1, "bool"), *constraints2.relations), backvarmap ) # We have to include relations from vranges too for v in constraints2.variables: if v in constraints2.ranges: r = constraints2.ranges[v] range_cond = tvm.te.all(v >= r.min, v < r.min + r.extent) range_cond = tvm.tir.stmt_functor.substitute(range_cond, backvarmap) cond_subst = tvm.te.all(cond_subst, range_cond) cond_subst = ana.simplify(cond_subst) check_bool_expr_is_true( tvm.te.all(cond_subst, cond_on_vars), all_vranges, cond=tvm.te.all(tvm.tir.const(1, "bool"), *constraints1.relations), ) _check_forward( constraints_trans.src, constraints_trans.dst, constraints_trans.src_to_dst, constraints_trans.dst_to_src, ) _check_forward( constraints_trans.dst, constraints_trans.src, constraints_trans.dst_to_src, constraints_trans.src_to_dst, ) def _get_targets(target_names=None): if target_names is None: target_names = _tvm_test_targets() if not target_names: target_names = DEFAULT_TEST_TARGETS targets = [] for target in target_names: target_kind = target.split()[0] if target_kind == "cuda" and "cudnn" in tvm.target.Target(target).attrs.get("libs", []): is_enabled = 
tvm.support.libinfo()["USE_CUDNN"].lower() in ["on", "true", "1"] is_runnable = is_enabled and cudnn.exists() elif target_kind == "hexagon": is_enabled = tvm.support.libinfo()["USE_HEXAGON"].lower() in ["on", "true", "1"] # If Hexagon has compile-time support, we can always fall back is_runnable = is_enabled and "ANDROID_SERIAL_NUMBER" in os.environ else: is_enabled = tvm.runtime.enabled(target_kind) is_runnable = is_enabled and tvm.device(target_kind).exist targets.append( { "target": target, "target_kind": target_kind, "is_enabled": is_enabled, "is_runnable": is_runnable, } ) if all(not t["is_runnable"] for t in targets): if tvm.runtime.enabled("llvm"): logging.warning( "None of the following targets are supported by this build of TVM: %s." " Try setting TVM_TEST_TARGETS to a supported target. Defaulting to llvm.", target_names, ) return _get_targets(["llvm"]) raise TVMError( "None of the following targets are supported by this build of TVM: %s." " Try setting TVM_TEST_TARGETS to a supported target." " Cannot default to llvm, as it is not enabled." % target_names ) return targets DEFAULT_TEST_TARGETS = [ "llvm", "cuda", "nvptx", "vulkan -from_device=0", "opencl", "opencl -device=mali,aocl_sw_emu", "opencl -device=intel_graphics", "metal", "rocm", "hexagon", ] def device_enabled(target): """Check if a target should be used when testing. It is recommended that you use :py:func:`tvm.testing.parametrize_targets` instead of manually checking if a target is enabled. This allows the user to control which devices they are testing against. In tests, this should be used to check if a device should be used when said device is an optional part of the test. Parameters ---------- target : str Target string to check against Returns ------- bool Whether or not the device associated with this target is enabled. Example ------- >>> @tvm.testing.uses_gpu >>> def test_mytest(): >>> for target in ["cuda", "llvm"]: >>> if device_enabled(target): >>> test_body... 
Here, `test_body` will only be reached by with `target="cuda"` on gpu test nodes and `target="llvm"` on cpu test nodes. """ assert isinstance(target, str), "device_enabled requires a target as a string" # only check if device name is found, sometime there are extra flags target_kind = target.split(" ")[0] return any(target_kind == t["target_kind"] for t in _get_targets() if t["is_runnable"]) def enabled_targets(): """Get all enabled targets with associated devices. In most cases, you should use :py:func:`tvm.testing.parametrize_targets` instead of this function. In this context, enabled means that TVM was built with support for this target, the target name appears in the TVM_TEST_TARGETS environment variable, and a suitable device for running this target exists. If TVM_TEST_TARGETS is not set, it defaults to variable DEFAULT_TEST_TARGETS in this module. If you use this function in a test, you **must** decorate the test with :py:func:`tvm.testing.uses_gpu` (otherwise it will never be run on the gpu). Returns ------- targets: list A list of pairs of all enabled devices and the associated context """ return [(t["target"], tvm.device(t["target"])) for t in _get_targets() if t["is_runnable"]] class Feature: """A feature that may be required to run a test. Parameters ---------- name: str The short name of the feature. Should match the name in the requires_* decorator. This is applied as a mark to all tests using this feature, and can be used in pytests ``-m`` argument. long_name: Optional[str] The long name of the feature, to be used in error messages. If None, defaults to the short name. cmake_flag: Optional[str] The flag that must be enabled in the config.cmake in order to use this feature. If None, no flag is required to use this feature. target_kind_enabled: Optional[str] The target kind that must be enabled to run tests using this feature. 
        If present, the target_kind must appear in the
        TVM_TEST_TARGETS environment variable, or in
        tvm.testing.DEFAULT_TEST_TARGETS if TVM_TEST_TARGETS is
        undefined.

        If None, this feature does not require a specific target to be
        enabled.

    compile_time_check: Optional[Callable[[], Union[bool,str]]]
        A check that returns True if the feature can be used at
        compile-time. (e.g. Validating the version number of the nvcc
        compiler.)

        If the feature does not have support to perform compile-time
        tests, the check should return False to display a generic
        error message, or a string to display a more specific error
        message.

        If None, no additional check is performed.

    target_kind_hardware: Optional[str]
        The target kind that must have available hardware in order to
        run tests using this feature. This is checked using
        tvm.device(target_kind_hardware).exist. If a feature requires
        a different check, this should be implemented using
        run_time_check.

        If None, this feature does not require a specific tvm.device
        to exist.

    run_time_check: Optional[Callable[[], Union[bool,str]]]
        A check that returns True if the feature can be used at
        run-time. (e.g. Validating the compute version supported by a
        GPU.)

        If the feature does not have support to perform run-time
        tests, the check should return False to display a generic
        error message, or a string to display a more specific error
        message.

        If None, no additional check is performed.

    parent_features: Optional[Union[str,List[str]]]
        The short name of a feature or features that are required in
        order to use this feature. (e.g. Using cuDNN requires using
        CUDA) This feature should inherit all checks of the parent
        feature, with the exception of the `target_kind_enabled`
        checks.

        If None, this feature does not require any other parent
        features.
""" _all_features = {} def __init__( self, name: str, long_name: Optional[str] = None, cmake_flag: Optional[str] = None, target_kind_enabled: Optional[str] = None, compile_time_check: Optional[Callable[[], Union[bool, str]]] = None, target_kind_hardware: Optional[str] = None, run_time_check: Optional[Callable[[], Union[bool, str]]] = None, parent_features: Optional[Union[str, List[str]]] = None, ): self.name = name self.long_name = long_name or name self.cmake_flag = cmake_flag self.target_kind_enabled = target_kind_enabled self.compile_time_check = compile_time_check self.target_kind_hardware = target_kind_hardware self.run_time_check = run_time_check if parent_features is None: self.parent_features = [] elif isinstance(parent_features, str): self.parent_features = [parent_features] else: self.parent_features = parent_features self._all_features[self.name] = self def _register_marker(self, config): config.addinivalue_line("markers", f"{self.name}: Mark a test as using {self.long_name}") def _uses_marks(self): for parent in self.parent_features: yield from self._all_features[parent]._uses_marks() yield getattr(pytest.mark, self.name) def _compile_only_marks(self): for parent in self.parent_features: yield from self._all_features[parent]._compile_only_marks() if self.compile_time_check is not None: res = self.compile_time_check() if isinstance(res, str): yield pytest.mark.skipif(True, reason=res) else: yield pytest.mark.skipif( not res, reason=f"Compile-time support for {self.long_name} not present" ) if self.target_kind_enabled is not None: target_kind = self.target_kind_enabled.split()[0] yield pytest.mark.skipif( all(enabled.split()[0] != target_kind for enabled in _tvm_test_targets()), reason=( f"{self.target_kind_enabled} tests disabled " f"by TVM_TEST_TARGETS environment variable" ), ) if self.cmake_flag is not None: yield pytest.mark.skipif( not _cmake_flag_enabled(self.cmake_flag), reason=( f"{self.long_name} support not enabled. 
" f"Set {self.cmake_flag} in config.cmake to enable." ), ) def _run_only_marks(self): for parent in self.parent_features: yield from self._all_features[parent]._run_only_marks() if self.run_time_check is not None: res = self.run_time_check() if isinstance(res, str): yield pytest.mark.skipif(True, reason=res) else: yield pytest.mark.skipif( not res, reason=f"Run-time support for {self.long_name} not present" ) if self.target_kind_hardware is not None: yield pytest.mark.skipif( not tvm.device(self.target_kind_hardware).exist, reason=f"No device exists for target {self.target_kind_hardware}", ) def marks(self, support_required="compile-and-run"): """Return a list of marks to be used Parameters ---------- support_required: str Allowed values: "compile-and-run" (default), "compile-only", or "optional". See Feature.__call__ for details. """ if support_required not in ["compile-and-run", "compile-only", "optional"]: raise ValueError(f"Unknown feature support type: {support_required}") if support_required == "compile-and-run": marks = itertools.chain( self._run_only_marks(), self._compile_only_marks(), self._uses_marks() ) elif support_required == "compile-only": marks = itertools.chain(self._compile_only_marks(), self._uses_marks()) elif support_required == "optional": marks = self._uses_marks() else: raise ValueError(f"Unknown feature support type: {support_required}") return list(marks) def __call__(self, func=None, *, support_required="compile-and-run"): """Mark a pytest function as requiring this feature Can be used either as a bare decorator, or as a decorator with arguments. Parameters ---------- func: Callable The pytest test function to be marked support_required: str Allowed values: "compile-and-run" (default), "compile-only", or "optional". If "compile-and-run", the test case is marked as using the feature, and is skipped if the environment lacks either compile-time or run-time support for the feature. 
        If "compile-only", the test case is marked as using the
        feature, and is skipped if the environment lacks compile-time
        support.

        If "optional", the test case is marked as using the feature,
        but isn't skipped. This is kept for backwards compatibility
        for tests that use `enabled_targets()`, and should be avoided
        in new test code. Instead, prefer parametrizing over the
        target using the `target` fixture.

        Examples
        --------

        .. code-block:: python

            @feature
            def test_compile_and_run():
                ...

            @feature(support_required="compile-only")
            def test_compile_only():
                ...

        """
        if support_required not in ["compile-and-run", "compile-only", "optional"]:
            raise ValueError(f"Unknown feature support type: {support_required}")

        def wrapper(func):
            for mark in self.marks(support_required=support_required):
                func = mark(func)
            return func

        if func is None:
            return wrapper

        return wrapper(func)

    @classmethod
    def require(cls, name, support_required="compile-and-run"):
        """Returns a decorator that marks a test as requiring a feature

        Parameters
        ----------
        name: str
            The name of the feature that is used by the test

        support_required: str
            Allowed values: "compile-and-run" (default), "compile-only", or
            "optional".

            See Feature.__call__ for details.

        Examples
        --------

        .. code-block:: python

            @Feature.require("cuda")
            def test_compile_and_run():
                ...

            @Feature.require("cuda", support_required="compile-only")
            def test_compile_only():
                ...
        """
        return cls._all_features[name](support_required=support_required)


def _any_gpu_exists():
    return (
        tvm.cuda().exist
        or tvm.rocm().exist
        or tvm.opencl().exist
        or tvm.metal().exist
        or tvm.vulkan().exist
    )


# Mark a test as requiring llvm to run
requires_llvm = Feature(
    "llvm", "LLVM", cmake_flag="USE_LLVM", target_kind_enabled="llvm", target_kind_hardware="llvm"
)

# Mark a test as requiring a GPU to run.
requires_gpu = Feature("gpu", run_time_check=_any_gpu_exists)

# Mark to differentiate tests that use the GPU in some capacity.
#
# These tests will be run on CPU-only test nodes and on test nodes with GPUs.
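The decorator machinery above (`Feature.__call__` stacking the decorators returned by `marks()`) can be illustrated with a tvm-free sketch. `DummyFeature` and `make_mark` below are hypothetical stand-ins, with plain record-keeping decorators in place of real pytest marks:

```python
def make_mark(name):
    """Return a decorator that records `name` on the decorated function."""

    def mark(func):
        func.applied_marks = getattr(func, "applied_marks", []) + [name]
        return func

    return mark


class DummyFeature:
    """Hypothetical stand-in for tvm.testing.Feature's decorator behavior."""

    def __init__(self, name):
        self.name = name

    def marks(self, support_required="compile-and-run"):
        # "optional" only tags the test; the other modes add skip-style checks.
        marks = [make_mark(self.name)]
        if support_required in ("compile-and-run", "compile-only"):
            marks.append(make_mark(f"{self.name}-compile-check"))
        if support_required == "compile-and-run":
            marks.append(make_mark(f"{self.name}-run-check"))
        return marks

    def __call__(self, func=None, *, support_required="compile-and-run"):
        def wrapper(func):
            for mark in self.marks(support_required=support_required):
                func = mark(func)
            return func

        # Usable both as a bare decorator and with keyword arguments,
        # mirroring Feature.__call__.
        return wrapper if func is None else wrapper(func)


gpu = DummyFeature("gpu")


@gpu(support_required="optional")
def test_optional():
    pass


@gpu
def test_full():
    pass
```

With `support_required="optional"` only the feature tag is applied, while the bare decorator also stacks the compile- and run-time checks.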
# To mark a test that must have a GPU present to run, use # :py:func:`tvm.testing.requires_gpu`. uses_gpu = requires_gpu(support_required="optional") # Mark a test as requiring the x86 Architecture to run. requires_x86 = Feature( "x86", "x86 Architecture", run_time_check=lambda: platform.machine() == "x86_64" ) # Mark a test as requiring the CUDA runtime. requires_cuda = Feature( "cuda", "CUDA", cmake_flag="USE_CUDA", target_kind_enabled="cuda", target_kind_hardware="cuda", parent_features="gpu", ) # Mark a test as requiring a tensorcore to run requires_tensorcore = Feature( "tensorcore", "NVIDIA Tensor Core", run_time_check=lambda: tvm.cuda().exist and nvcc.have_tensorcore(tvm.cuda().compute_version), parent_features="cuda", ) # Mark a test as requiring the cuDNN library. requires_cudnn = Feature("cudnn", "cuDNN", cmake_flag="USE_CUDNN", parent_features="cuda") # Mark a test as requiring the cuBLAS library. requires_cublas = Feature("cublas", "cuBLAS", cmake_flag="USE_CUBLAS", parent_features="cuda") # Mark a test as requiring the NVPTX compilation on the CUDA runtime requires_nvptx = Feature( "nvptx", "NVPTX", target_kind_enabled="nvptx", target_kind_hardware="nvptx", parent_features=["llvm", "cuda"], ) # Mark a test as requiring the CUDA Graph Feature requires_cudagraph = Feature( "cudagraph", "CUDA Graph", target_kind_enabled="cuda", compile_time_check=nvcc.have_cudagraph, parent_features="cuda", ) # Mark a test as requiring the OpenCL runtime requires_opencl = Feature( "opencl", "OpenCL", cmake_flag="USE_OPENCL", target_kind_enabled="opencl", target_kind_hardware="opencl" if "RPC_TARGET" not in os.environ else None, parent_features="gpu" if "RPC_TARGET" not in os.environ else None, ) # Mark a test as requiring the rocm runtime requires_rocm = Feature( "rocm", "ROCm", cmake_flag="USE_ROCM", target_kind_enabled="rocm", target_kind_hardware="rocm", parent_features="gpu", ) # Mark a test as requiring a matrixcore to run requires_matrixcore = Feature( "matrixcore", 
"AMD Matrix Core", run_time_check=lambda: tvm.rocm().exist and rocm.have_matrixcore(tvm.rocm().compute_version), parent_features="rocm", ) # Mark a test as requiring the metal runtime requires_metal = Feature( "metal", "Metal", cmake_flag="USE_METAL", target_kind_enabled="metal", target_kind_hardware="metal", parent_features="gpu", ) # Mark a test as requiring the vulkan runtime requires_vulkan = Feature( "vulkan", "Vulkan", cmake_flag="USE_VULKAN", target_kind_enabled="vulkan", target_kind_hardware="vulkan", parent_features="gpu", ) # Mark a test as requiring OpenCLML support in build. requires_openclml = Feature( "OpenCLML", "CLML", cmake_flag="USE_CLML", target_kind_enabled="opencl", ) # Mark a test as requiring microTVM to run requires_micro = Feature("micro", "MicroTVM", cmake_flag="USE_MICRO") # Mark a test as requiring CUTLASS to run requires_cutlass = Feature("cutlass", "CUTLASS", cmake_flag="USE_CUTLASS") # Mark a test as requiring rpc to run requires_rpc = Feature("rpc", "RPC", cmake_flag="USE_RPC") # Mark a test as requiring Arm(R) Ethos(TM)-N to run requires_ethosn = Feature("ethosn", "Arm(R) Ethos(TM)-N", cmake_flag="USE_ETHOSN") # Mark a test as requiring Arm(R) Ethos(TM)-U to run requires_ethosu = Feature("ethosu", "Arm(R) Ethos(TM)-U", cmake_flag="USE_ETHOSU") # Mark a test as requiring libtorch to run requires_libtorch = Feature("libtorch", "LibTorch", cmake_flag="USE_LIBTORCH") # Mark a test as requiring Hexagon to run requires_hexagon = Feature( "hexagon", "Hexagon", cmake_flag="USE_HEXAGON", target_kind_enabled="hexagon", compile_time_check=hexagon._compile_time_check, run_time_check=hexagon._run_time_check, parent_features="llvm", ) # Mark a test as requiring the CMSIS NN library requires_cmsisnn = Feature("cmsisnn", "CMSIS NN", cmake_flag="USE_CMSISNN") def _corstone300_compile_time_check(): if shutil.which("arm-none-eabi-gcc") is None: return "ARM embedded toolchain unavailable" return True # Mark a test as requiring the corstone300 FVP 
requires_corstone300 = Feature( "corstone300", "Corstone-300", compile_time_check=_corstone300_compile_time_check, parent_features="cmsisnn", ) # Mark a test as requiring Vitis AI to run requires_vitis_ai = Feature("vitis_ai", "Vitis AI", cmake_flag="USE_VITIS_AI") def _arm_dot_supported(): arch = platform.machine() if arch not in ["arm64", "aarch64"]: return False if sys.platform.startswith("darwin"): cpu_info = subprocess.check_output("sysctl -a", shell=True).strip().decode() for line in cpu_info.split("\n"): if line.startswith("hw.optional.arm.FEAT_DotProd"): return bool(int(line.split(":", 1)[1])) elif sys.platform.startswith("linux"): return True return False def _is_intel(): # Only linux is supported for now. if sys.platform.startswith("linux"): with open("/proc/cpuinfo", "r") as content: return "Intel" in content.read() return False def _has_vnni(): arch = platform.machine() # Only linux is supported for now. if arch == "x86_64" and sys.platform.startswith("linux"): with open("/proc/cpuinfo", "r") as content: return "avx512_vnni" in content.read() return False # check avx512 intrinsic groups for SkyLake X def _has_slavx512(): # Check LLVM support llvm_version = tvm.target.codegen.llvm_version_major() is_llvm_support = llvm_version >= 8 arch = platform.machine() # Only linux is supported for now. 
if arch == "x86_64" and sys.platform.startswith("linux"): with open("/proc/cpuinfo", "r") as content: ctx = content.read() check = ( "avx512f" in ctx and "avx512cd" in ctx and "avx512bw" in ctx and "avx512dq" in ctx and "avx512vl" in ctx ) return check and is_llvm_support return False requires_arm_dot = Feature("arm_dot", "ARM dot product", run_time_check=_arm_dot_supported) requires_cascadelake = Feature( "cascadelake", "x86 CascadeLake", run_time_check=lambda: _has_vnni() and _is_intel() ) requires_skylake_avx512 = Feature( "skylake_avx512", "x86 SkyLake AVX512", run_time_check=lambda: _has_slavx512() and _is_intel(), ) def _cmake_flag_enabled(flag): flag = tvm.support.libinfo()[flag] # Because many of the flags can be library flags, we check if the # flag is not disabled, rather than checking if it is enabled. return flag.lower() not in ["off", "false", "0"] def _tvm_test_targets(): target_str = os.environ.get("TVM_TEST_TARGETS", "").strip() if target_str: # Use dict instead of set for de-duplication so that the # targets stay in the order specified. return list({t.strip(): None for t in target_str.split(";") if t.strip()}) return DEFAULT_TEST_TARGETS def _compose(args, decs): """Helper to apply multiple markers""" if len(args) > 0: f = args[0] for d in reversed(decs): f = d(f) return f return decs slow = pytest.mark.skipif( SKIP_SLOW_TESTS, reason="Skipping slow test since the SKIP_SLOW_TESTS environment variable is 'true'", ) def requires_nvcc_version(major_version, minor_version=0, release_version=0): """Mark a test as requiring at least a specific version of nvcc. Unit test marked with this decorator will run only if the installed version of NVCC is at least `(major_version, minor_version, release_version)`. This also marks the test as requiring a cuda support. Parameters ---------- major_version: int The major version of the (major,minor,release) version tuple. minor_version: int The minor version of the (major,minor,release) version tuple. 
release_version: int The release version of the (major,minor,release) version tuple. """ try: nvcc_version = nvcc.get_cuda_version() except RuntimeError: nvcc_version = (0, 0, 0) min_version = (major_version, minor_version, release_version) version_str = ".".join(str(v) for v in min_version) requires = [ pytest.mark.skipif(nvcc_version < min_version, reason=f"Requires NVCC >= {version_str}"), *requires_cuda.marks(), ] def inner(func): return _compose([func], requires) return inner def requires_cuda_compute_version(major_version, minor_version=0): """Mark a test as requiring at least a compute architecture Unit test marked with this decorator will run only if the CUDA compute architecture of the GPU is at least `(major_version, minor_version)`. This also marks the test as requiring a cuda support. Parameters ---------- major_version: int The major version of the (major,minor) version tuple. minor_version: int The minor version of the (major,minor) version tuple. """ min_version = (major_version, minor_version) try: arch = tvm.contrib.nvcc.get_target_compute_version() compute_version = tvm.contrib.nvcc.parse_compute_version(arch) except ValueError: # No GPU present. This test will be skipped from the # requires_cuda() marks as well. compute_version = (0, 0) min_version_str = ".".join(str(v) for v in min_version) compute_version_str = ".".join(str(v) for v in compute_version) requires = [ pytest.mark.skipif( compute_version < min_version, reason=f"Requires CUDA compute >= {min_version_str}, but have {compute_version_str}", ), *requires_cuda.marks(), ] def inner(func): return _compose([func], requires) return inner def skip_if_32bit(reason): def decorator(*args): if "32bit" in platform.architecture()[0]: return _compose(args, [pytest.mark.skip(reason=reason)]) return _compose(args, []) return decorator def requires_package(*packages): """Mark a test as requiring python packages to run. 
    If the packages listed are not available, tests marked with
    `requires_package` will appear in the pytest results as being skipped.
    This is equivalent to using ``foo = pytest.importorskip('foo')`` inside
    the test body.

    Parameters
    ----------
    packages : List[str]
        The python packages that should be available for the test to
        run.

    Returns
    -------
    mark: pytest mark
        The pytest mark to be applied to unit tests that require these
        packages.
    """

    def has_package(package):
        try:
            __import__(package)
            return True
        except ImportError:
            return False

    marks = [
        pytest.mark.skipif(not has_package(package), reason=f"Cannot import '{package}'")
        for package in packages
    ]

    def wrapper(func):
        for mark in marks:
            func = mark(func)
        return func

    return wrapper


def parametrize_targets(*args):
    """Parametrize a test over a specific set of targets.

    Use this decorator when you want your test to be run over a
    specific set of targets and devices. It is intended for use where
    a test is applicable only to a specific target, and is
    inapplicable to any others (e.g. verifying target-specific
    assembly code matches known assembly code). In most circumstances,
    :py:func:`tvm.testing.exclude_targets` or
    :py:func:`tvm.testing.known_failing_targets` should be used
    instead.

    If used as a decorator without arguments, the test will be
    parametrized over all targets in
    :py:func:`tvm.testing.enabled_targets`. This behavior is
    automatically enabled for any test function that accepts arguments
    of ``target`` or ``dev``, so the explicit use of the bare
    decorator is no longer needed, and is maintained for backwards
    compatibility.

    Parameters
    ----------
    f : function
        Function to parametrize. Must be of the form `def test_xxxxxxxxx(target, dev):`,
        where `xxxxxxxxx` is any name.
    targets : list[str], optional
        Set of targets to run against. If not supplied,
        :py:func:`tvm.testing.enabled_targets` will be used.

    Example
    -------
    >>> @tvm.testing.parametrize_targets("llvm", "cuda")
    >>> def test_mytest(target, dev):
    >>>     ...
# do something """ # Backwards compatibility, when used as a decorator with no # arguments implicitly parametrizes over "target". The # parametrization is now handled by _auto_parametrize_target, so # this use case can just return the decorated function. if len(args) == 1 and callable(args[0]): return args[0] return pytest.mark.parametrize("target", list(args), scope="session") def exclude_targets(*args): """Exclude a test from running on a particular target. Use this decorator when you want your test to be run over a variety of targets and devices (including cpu and gpu devices), but want to exclude some particular target or targets. For example, a test may wish to be run against all targets in tvm.testing.enabled_targets(), except for a particular target that does not support the capabilities. Applies pytest.mark.skipif to the targets given. Parameters ---------- f : function Function to parametrize. Must be of the form `def test_xxxxxxxxx(target, dev)`:, where `xxxxxxxxx` is any name. targets : list[str] Set of targets to exclude. Example ------- >>> @tvm.testing.exclude_targets("cuda") >>> def test_mytest(target, dev): >>> ... # do something Or >>> @tvm.testing.exclude_targets("llvm", "cuda") >>> def test_mytest(target, dev): >>> ... # do something """ def wraps(func): func.tvm_excluded_targets = args return func return wraps def known_failing_targets(*args): """Skip a test that is known to fail on a particular target. Use this decorator when you want your test to be run over a variety of targets and devices (including cpu and gpu devices), but know that it fails for some targets. For example, a newly implemented runtime may not support all features being tested, and should be excluded. Applies pytest.mark.xfail to the targets given. Parameters ---------- f : function Function to parametrize. Must be of the form `def test_xxxxxxxxx(target, dev)`:, where `xxxxxxxxx` is any name. targets : list[str] Set of targets to skip. 
Example ------- >>> @tvm.testing.known_failing_targets("cuda") >>> def test_mytest(target, dev): >>> ... # do something Or >>> @tvm.testing.known_failing_targets("llvm", "cuda") >>> def test_mytest(target, dev): >>> ... # do something """ def wraps(func): func.tvm_known_failing_targets = args return func return wraps def parameter(*values, ids=None, by_dict=None): """Convenience function to define pytest parametrized fixtures. Declaring a variable using ``tvm.testing.parameter`` will define a parametrized pytest fixture that can be used by test functions. This is intended for cases that have no setup cost, such as strings, integers, tuples, etc. For cases that have a significant setup cost, please use :py:func:`tvm.testing.fixture` instead. If a test function accepts multiple parameters defined using ``tvm.testing.parameter``, then the test will be run using every combination of those parameters. The parameter definition applies to all tests in a module. If a specific test should have different values for the parameter, that test should be marked with ``@pytest.mark.parametrize``. Parameters ---------- values : Any A list of parameter values. A unit test that accepts this parameter as an argument will be run once for each parameter given. ids : List[str], optional A list of names for the parameters. If None, pytest will generate a name from the value. These generated names may not be readable/useful for composite types such as tuples. by_dict : Dict[str, Any] A mapping from parameter name to parameter value, to set both the values and ids. Returns ------- function A function output from pytest.fixture. Example ------- >>> size = tvm.testing.parameter(1, 10, 100) >>> def test_using_size(size): >>> ... # Test code here Or >>> shape = tvm.testing.parameter((5,10), (512,1024), ids=['small','large']) >>> def test_using_size(shape): >>> ... 
# Test code here

    Or

    >>> shape = tvm.testing.parameter(by_dict={'small': (5,10), 'large': (512,1024)})
    >>> def test_using_size(shape):
    >>>     ... # Test code here

    """
    if by_dict is not None:
        if values or ids:
            raise RuntimeError(
                "The by_dict parameter cannot be used alongside positional arguments"
            )

        ids, values = zip(*by_dict.items())

    # Optional cls parameter in case a parameter is defined inside a
    # class scope.
    @pytest.fixture(params=values, ids=ids)
    def as_fixture(*_cls, request):
        return request.param

    return as_fixture


_parametrize_group = 0


def parameters(*value_sets, ids=None):
    """Convenience function to define pytest parametrized fixtures.

    Declaring a variable using tvm.testing.parameters will define a
    parametrized pytest fixture that can be used by test functions.
    Like :py:func:`tvm.testing.parameter`, this is intended for cases
    that have no setup cost, such as strings, integers, tuples, etc.
    For cases that have a significant setup cost, please use
    :py:func:`tvm.testing.fixture` instead.

    Unlike :py:func:`tvm.testing.parameter`, if a test function
    accepts multiple parameters defined using a single call to
    ``tvm.testing.parameters``, then the test will only be run once
    for each set of parameters, not for all combinations of
    parameters.

    These parameter definitions apply to all tests in a module. If a
    specific test should have different values for some parameters,
    that test should be marked with ``@pytest.mark.parametrize``.

    Parameters
    ----------
    values : List[tuple]
        A list of parameter value sets. Each set of values represents
        a single combination of values to be tested. A unit test that
        accepts these parameters will be run once for every set of
        parameters in the list.

    ids : List[str], optional
        A list of names for the parameter sets. If None, pytest will
        generate a name from each parameter set. These generated
        names may not be readable/useful for composite types such as
        tuples.

    Returns
    -------
    List[function]
        Function outputs from pytest.fixture.
These should be unpacked into individual named parameters. Example ------- >>> size, dtype = tvm.testing.parameters( (16,'float32'), (512,'float16') ) >>> def test_feature_x(size, dtype): >>> # Test code here >>> assert( (size,dtype) in [(16,'float32'), (512,'float16')]) """ global _parametrize_group parametrize_group = _parametrize_group _parametrize_group += 1 outputs = [] for param_values in zip(*value_sets): # Optional cls parameter in case a parameter is defined inside a # class scope. def fixture_func(*_cls, request): return request.param fixture_func.parametrize_group = parametrize_group fixture_func.parametrize_values = param_values fixture_func.parametrize_ids = ids outputs.append(pytest.fixture(fixture_func)) return outputs def fixture(func=None, *, cache_return_value=False): """Convenience function to define pytest fixtures. This should be used as a decorator to mark functions that set up state before a function. The return value of that fixture function is then accessible by test functions as that accept it as a parameter. Fixture functions can accept parameters defined with :py:func:`tvm.testing.parameter`. By default, the setup will be performed once for each unit test that uses a fixture, to ensure that unit tests are independent. If the setup is expensive to perform, then the cache_return_value=True argument can be passed to cache the setup. The fixture function will be run only once (or once per parameter, if used with tvm.testing.parameter), and the same return value will be passed to all tests that use it. If the environment variable TVM_TEST_DISABLE_CACHE is set to a non-zero value, it will disable this feature and no caching will be performed. Example ------- >>> @tvm.testing.fixture >>> def cheap_setup(): >>> return 5 # Setup code here. 
>>> >>> def test_feature_x(target, dev, cheap_setup) >>> assert(cheap_setup == 5) # Run test here Or >>> size = tvm.testing.parameter(1, 10, 100) >>> >>> @tvm.testing.fixture >>> def cheap_setup(size): >>> return 5*size # Setup code here, based on size. >>> >>> def test_feature_x(cheap_setup): >>> assert(cheap_setup in [5, 50, 500]) Or >>> @tvm.testing.fixture(cache_return_value=True) >>> def expensive_setup(): >>> time.sleep(10) # Setup code here >>> return 5 >>> >>> def test_feature_x(target, dev, expensive_setup): >>> assert(expensive_setup == 5) """ force_disable_cache = bool(int(os.environ.get("TVM_TEST_DISABLE_CACHE", "0"))) cache_return_value = cache_return_value and not force_disable_cache # Deliberately at function scope, so that caching can track how # many times the fixture has been used. If used, the cache gets # cleared after the fixture is no longer needed. scope = "function" def wraps(func): if cache_return_value: func = _fixture_cache(func) func = pytest.fixture(func, scope=scope) return func if func is None: return wraps return wraps(func) class _DeepCopyAllowedClasses(dict): def __init__(self, allowed_class_list): self.allowed_class_list = allowed_class_list super().__init__() def get(self, key, *args, **kwargs): """Overrides behavior of copy.deepcopy to avoid implicit copy. By default, copy.deepcopy uses a dict of id->object to track all objects that it has seen, which is passed as the second argument to all recursive calls. This class is intended to be passed in instead, and inspects the type of all objects being copied. Where copy.deepcopy does a best-effort attempt at copying an object, for unit tests we would rather have all objects either be copied correctly, or to throw an error. Classes that define an explicit method to perform a copy are allowed, as are any explicitly listed classes. Classes that would fall back to using object.__reduce__, and are not explicitly listed as safe, will throw an exception. 
""" obj = ctypes.cast(key, ctypes.py_object).value cls = type(obj) if ( cls in copy._deepcopy_dispatch or issubclass(cls, type) or getattr(obj, "__deepcopy__", None) or copyreg.dispatch_table.get(cls) or cls.__reduce__ is not object.__reduce__ or cls.__reduce_ex__ is not object.__reduce_ex__ or cls in self.allowed_class_list ): return super().get(key, *args, **kwargs) rfc_url = ( "https://github.com/apache/tvm-rfcs/blob/main/rfcs/0007-parametrized-unit-tests.md" ) raise TypeError( ( f"Cannot copy fixture of type {cls.__name__}. TVM fixture caching " "is limited to objects that explicitly provide the ability " "to be copied (e.g. through __deepcopy__, __getstate__, or __setstate__)," "and forbids the use of the default `object.__reduce__` and " "`object.__reduce_ex__`. For third-party classes that are " "safe to use with copy.deepcopy, please add the class to " "the arguments of _DeepCopyAllowedClasses in tvm.testing._fixture_cache.\n" "\n" f"For discussion on this restriction, please see {rfc_url}." ) ) def _fixture_cache(func): cache = {} # Can't use += on a bound method's property. Therefore, this is a # list rather than a variable so that it can be accessed from the # pytest_collection_modifyitems(). num_tests_use_this_fixture = [0] num_times_fixture_used = 0 # Using functools.lru_cache would require the function arguments # to be hashable, which wouldn't allow caching fixtures that # depend on numpy arrays. For example, a fixture that takes a # numpy array as input, then calculates uses a slow method to # compute a known correct output for that input. Therefore, # including a fallback for serializable types. 
def get_cache_key(*args, **kwargs): try: hash((args, kwargs)) return (args, kwargs) except TypeError: pass try: return pickle.dumps((args, kwargs)) except TypeError as e: raise TypeError( "TVM caching of fixtures requires arguments to the fixture " "to be either hashable or serializable" ) from e @functools.wraps(func) def wrapper(*args, **kwargs): if num_tests_use_this_fixture[0] == 0: raise RuntimeError( "Fixture use count is 0. " "This can occur if tvm.testing.plugin isn't registered. " "If using outside of the TVM test directory, " "please add `pytest_plugins = ['tvm.testing.plugin']` to your conftest.py" ) try: cache_key = get_cache_key(*args, **kwargs) try: cached_value = cache[cache_key] except KeyError: cached_value = cache[cache_key] = func(*args, **kwargs) yield copy.deepcopy( cached_value, # allowed_class_list should be a list of classes that # are safe to copy using copy.deepcopy, but do not # implement __deepcopy__, __reduce__, or # __reduce_ex__. _DeepCopyAllowedClasses(allowed_class_list=[]), ) finally: # Clear the cache once all tests that use a particular fixture # have completed. nonlocal num_times_fixture_used num_times_fixture_used += 1 if num_times_fixture_used >= num_tests_use_this_fixture[0]: cache.clear() # Set in the pytest_collection_modifyitems(), by _count_num_fixture_uses wrapper.num_tests_use_this_fixture = num_tests_use_this_fixture return wrapper def identity_after(x, sleep): """Testing function to return identity after sleep Parameters ---------- x : int The input value. 
sleep : float The amount of time to sleep Returns ------- x : object The original value """ if sleep: time.sleep(sleep) return x def terminate_self(): """Testing function to terminate the process.""" sys.exit(-1) def is_ampere_or_newer(): """Check if the target environment has an NVIDIA Ampere GPU or newer.""" arch = tvm.contrib.nvcc.get_target_compute_version() major, _ = tvm.contrib.nvcc.parse_compute_version(arch) return major >= 8 def install_request_hook(depth: int) -> None: """Add a wrapper around urllib.request for CI tests""" if not IS_IN_CI: return # https://sphinx-gallery.github.io/stable/faq.html#why-is-file-not-defined-what-can-i-use base = None msg = "" try: base = __file__ msg += f"found file {__file__}\n" except NameError: msg += "no file\n" if base is None: hook_script_dir = Path.cwd().resolve() msg += "used path.cwd()\n" else: hook_script_dir = Path(base).resolve().parent msg += "used base()\n" msg += f"using depth {depth}\n" if depth <= 0: raise ValueError(f"depth less than 1 not supported, found: {depth}") # Go up the parent directories while depth > 0: msg += f"[depth={depth}] dir={hook_script_dir}\n" hook_script_dir = hook_script_dir.parent depth -= 1 # Ensure the specified dir is valid hook_script_dir = hook_script_dir / "tests" / "scripts" / "request_hook" if not hook_script_dir.exists(): raise RuntimeError(f"Directory {hook_script_dir} does not exist:\n{msg}") # Import the hook and start it up (it's not included here directly to avoid # keeping a database of URLs inside the tvm Python package sys.path.append(str(hook_script_dir)) # This import is intentionally delayed since it should only happen in CI import request_hook # pylint: disable=import-outside-toplevel request_hook.init() def fetch_model_from_url( url: str, model_format: str, sha256: str, ) -> Tuple[tvm.ir.module.IRModule, dict]: """Testing function to fetch a model from a URL and return it as a Relay model. Downloaded files are cached for future re-use. 
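The integrity check performed in the function body below hashes the downloaded file in fixed-size blocks, keeping memory bounded for large models. A minimal stdlib-only sketch of the same chunked-digest pattern, with a hypothetical `sha256_of` helper applied to an in-memory stream:

```python
# Sketch of block-wise SHA-256 verification; `sha256_of` is a
# hypothetical helper name, not part of tvm.testing.
import hashlib
import io


def sha256_of(stream, block_size=2**24):
    """Hash a readable binary stream in fixed-size blocks."""
    digest = hashlib.sha256()
    for block in iter(lambda: stream.read(block_size), b""):
        digest.update(block)
    return digest.hexdigest()


# Verify a small in-memory "file" against its expected digest.
payload = b"example model bytes"
expected = hashlib.sha256(payload).hexdigest()
if sha256_of(io.BytesIO(payload)) != expected:
    raise FileNotFoundError("SHA-256 hash for model does not match")
```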
    Parameters
    ----------
    url : str
        The URL or list of URLs to try downloading the model from.

    model_format : str
        The file extension of the model format used.

    sha256 : str
        The sha256 hex hash to compare the downloaded model against.

    Returns
    -------
    (mod, params) : object
        The Relay representation of the downloaded model.
    """

    rel_path = f"model_{sha256}.{model_format}"
    file = tvm.contrib.download.download_testdata(url, rel_path, overwrite=False)

    # Check SHA-256 hash
    file_hash = hashlib.sha256()
    with open(file, "rb") as f:
        for block in iter(lambda: f.read(2**24), b""):
            file_hash.update(block)

    if file_hash.hexdigest() != sha256:
        raise FileNotFoundError("SHA-256 hash for model does not match")

    tvmc_model = load_model(file, model_format)
    return tvmc_model.mod, tvmc_model.params


def _mark_parameterizations(*params, marker_fn, reason):
    """
    Mark tests with nodeid parameters that exactly match one in params.
    Useful for quickly marking tests as xfail when they have a large
    combination of parameters.
""" params = set(params) def decorator(func): @functools.wraps(func) def wrapper(request, *args, **kwargs): if "[" in request.node.name and "]" in request.node.name: # Strip out the test name and the [ and ] brackets params_from_name = request.node.name[len(request.node.originalname) + 1 : -1] if params_from_name in params: marker_fn( reason=f"{marker_fn.__name__} on nodeid {request.node.nodeid}: " + reason ) return func(request, *args, **kwargs) return wrapper return decorator def xfail_parameterizations(*xfail_params, reason): return _mark_parameterizations(*xfail_params, marker_fn=pytest.xfail, reason=reason) def skip_parameterizations(*skip_params, reason): return _mark_parameterizations(*skip_params, marker_fn=pytest.skip, reason=reason) def main(): test_file = inspect.getsourcefile(sys._getframe(1)) sys.exit(pytest.main([test_file] + sys.argv[1:])) class CompareBeforeAfter: """Utility for comparing before/after of TIR transforms A standard framework for writing tests that take a TIR PrimFunc as input, apply a transformation, then either compare against an expected output or assert that the transformation raised an error. A test should subclass CompareBeforeAfter, defining class members `before`, `transform`, and `expected`. CompareBeforeAfter will then use these members to define a test method and test fixture. `transform` may be one of the following. - An instance of `tvm.ir.transform.Pass` - A method that takes no arguments and returns a `tvm.ir.transform.Pass` - A pytest fixture that returns a `tvm.ir.transform.Pass` `before` may be any one of the following. - An instance of `tvm.tir.PrimFunc`. This is allowed, but is not the preferred method, as any errors in constructing the `PrimFunc` occur while collecting the test, preventing any other tests in the same file from being run. - An TVMScript function, without the ``@T.prim_func`` decoration. The ``@T.prim_func`` decoration will be applied when running the test, rather than at module import. 
    - A method that takes no arguments and returns a `tvm.tir.PrimFunc`

    - A pytest fixture that returns a `tvm.tir.PrimFunc`

    `expected` may be any one of the following.  The type of
    `expected` defines the test being performed.  If `expected`
    provides a `tvm.tir.PrimFunc`, the result of the transformation
    must match `expected`.  If `expected` is an exception, then the
    transformation must raise that exception type.

    - Any option supported for `before`.

    - The `Exception` class object, or a class object that inherits
      from `Exception`.

    - A method that takes no arguments and returns `Exception` or a
      class object that inherits from `Exception`.

    - A pytest fixture that returns `Exception` or a class object
      that inherits from `Exception`.

    Examples
    --------

    .. python::

        class TestRemoveIf(tvm.testing.CompareBeforeAfter):
            transform = tvm.tir.transform.Simplify()

            def before(A: T.Buffer(1, "int32")):
                if True:
                    A[0] = 42
                else:
                    A[0] = 5

            def expected(A: T.Buffer(1, "int32")):
                A[0] = 42
    """

    def __init_subclass__(cls):
        if hasattr(cls, "before"):
            cls.before = cls._normalize_before(cls.before)
        if hasattr(cls, "expected"):
            cls.expected = cls._normalize_expected(cls.expected)
        if hasattr(cls, "transform"):
            cls.transform = cls._normalize_transform(cls.transform)

    @classmethod
    def _normalize_ir_module(cls, func):
        if isinstance(func, tvm.tir.PrimFunc):

            def inner(self):  # pylint: disable=unused-argument
                return func

        elif cls._is_method(func):

            def inner(self):  # pylint: disable=unused-argument
                return func(self)

        elif inspect.isclass(func):

            def inner(self):  # pylint: disable=unused-argument
                func_dict = {}
                for name, method in func.__dict__.items():
                    if name.startswith("_"):
                        pass
                    elif isinstance(method, tvm.ir.function.BaseFunc):
                        func_dict[name] = method
                    else:
                        source_code = "@T.prim_func\n" + textwrap.dedent(inspect.getsource(method))
                        prim_func = tvm.script.from_source(source_code)
                        func_dict[name] = prim_func
                return tvm.IRModule(func_dict)

        else:

            def inner(self):  # pylint: disable=unused-argument
                source_code = "@T.prim_func\n" + textwrap.dedent(inspect.getsource(func))
                return tvm.script.from_source(source_code)

        return pytest.fixture(inner)

    @classmethod
    def _normalize_before(cls, func):
        if hasattr(func, "_pytestfixturefunction"):
            return func
        else:
            return cls._normalize_ir_module(func)

    @classmethod
    def _normalize_expected(cls, func):
        if hasattr(func, "_pytestfixturefunction"):
            return func
        elif inspect.isclass(func) and issubclass(func, Exception):

            def inner(self):  # pylint: disable=unused-argument
                return func

            return pytest.fixture(inner)

        else:
            return cls._normalize_ir_module(func)

    @classmethod
    def _normalize_transform(cls, transform):
        def apply(module_transform):
            def inner(obj):
                if isinstance(obj, tvm.IRModule):
                    return module_transform(obj)
                elif isinstance(obj, tvm.tir.PrimFunc):
                    mod = tvm.IRModule({"main": obj})
                    mod = module_transform(mod)
                    return mod["main"]
                else:
                    raise TypeError(f"Expected IRModule or PrimFunc, but received {type(obj)}")

            return inner

        if hasattr(transform, "_pytestfixturefunction"):
            if not hasattr(cls, "_transform_orig"):
                cls._transform_orig = transform

            def inner(self, _transform_orig):  # pylint: disable=unused-argument
                return apply(_transform_orig)

        elif isinstance(transform, tvm.ir.transform.Pass):

            def inner(self):  # pylint: disable=unused-argument
                return apply(transform)

        elif cls._is_method(transform):

            def inner(self):  # pylint: disable=unused-argument
                return apply(transform(self))

        else:
            raise TypeError(
                "Expected transform to be a tvm.ir.transform.Pass, or a method returning a Pass"
            )

        return pytest.fixture(inner)

    @staticmethod
    def _is_method(func):
        sig = inspect.signature(func)
        return "self" in sig.parameters

    def test_compare(self, before, expected, transform):
        """Unit test to compare the expected TIR PrimFunc to actual"""
        if inspect.isclass(expected) and issubclass(expected, Exception):
            with pytest.raises(expected):
                after = transform(before)

                # This portion through pytest.fail isn't strictly
                # necessary, but gives a better error message that
                # includes the before/after.
                before_str = before.script(name="before")
                after_str = after.script(name="after")

                pytest.fail(
                    msg=(
                        f"Expected {expected.__name__} to be raised from transformation, "
                        f"instead received TIR\n:{before_str}\n{after_str}"
                    )
                )

        elif isinstance(expected, (tvm.tir.PrimFunc, tvm.ir.IRModule)):
            after = transform(before)

            try:
                tvm.ir.assert_structural_equal(after, expected)
            except ValueError as err:
                before_str = before.script(name="before")
                after_str = after.script(name="after")
                expected_str = expected.script(name="expected")
                raise ValueError(
                    f"TIR after transformation did not match expected:\n"
                    f"{before_str}\n{after_str}\n{expected_str}"
                ) from err

        else:
            raise TypeError(
                "tvm.testing.CompareBeforeAfter requires the `expected` fixture "
                "to return either `Exception`, an `Exception` subclass, "
                "or an instance of `tvm.tir.PrimFunc`.  "
                f"Instead, received {type(expected)}."
            )


class _control_span_filling:
    def __init__(self, on=True):
        self._on = on
        self._pass_ctx = tvm.transform.PassContext(
            config={"relay.frontend.fill_span": self._on}
        )

    def __enter__(self):
        self._pass_ctx.__enter__()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._pass_ctx.__exit__(exc_type, exc_val, exc_tb)


class enable_span_filling(_control_span_filling):
    def __init__(self):
        super().__init__()


class disable_span_filling(_control_span_filling):
    def __init__(self):
        super().__init__(on=False)
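The download check in `fetch_model_from_url` above hashes the file in fixed-size blocks rather than reading it into memory at once. That pattern can be exercised on its own; the helper name `sha256_of_file` below is mine for illustration and is not part of `tvm.testing`:

```python
import hashlib


def sha256_of_file(path, block_size=2**24):
    """Hash a file in fixed-size blocks so large model files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel calls f.read(block_size) until it returns b""
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()
```

Comparing the returned hexdigest against an expected constant, and raising when it differs, mirrors the verification step in `fetch_model_from_url`.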
tvm-main/python/tvm/testing/aot.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=use-list-literal, consider-using-with, f-string-without-interpolation
"""Common functions for AOT test cases"""
import contextlib
import datetime
import os
import pathlib
import re
import subprocess
import tarfile
import logging
from typing import Any, NamedTuple, Union, Tuple, Optional, List, Dict, Callable

import numpy as np

import tvm
from tvm import relay
from tvm import autotvm
from tvm.contrib import utils, graph_executor
from tvm.relay.backend import Executor, Runtime
from tvm.relay.backend.utils import mangle_module_name
from tvm.micro import export_model_library_format
from tvm.micro.testing.utils import mlf_extract_workspace_size_bytes

_LOG = logging.getLogger(__name__)

NP_TYPE_TO_C = {
    "int8": "int8_t",
    "uint8": "uint8_t",
    "int16": "int16_t",
    "uint16": "uint16_t",
    "int32": "int32_t",
    "uint32": "uint32_t",
    "float32": "float",
}

AOT_SUCCESS_TOKEN = "AOT_TEST_SUCCESS"
AOT_FAILURE_TOKEN = "AOT_TEST_FAILURE"


class AOTTestModel(NamedTuple):
    """Class to describe a model under test

    Parameters
    ----------
    module: tvm.IRModule
        IRModule to generate AOT executor for
    inputs: Dict[str, np.array]
        Dict of input names to value arrays
    outputs: Dict[str, np.array]
        Dict of output names to value
        arrays
    output_tolerance: Optional[Union[int, float]]
        Allowed tolerance of the output
    name: str
        Name to use for this model
    params: Optional[Dict[str, np.array]]
        Dict of parameter names to value arrays
    extra_memory_in_bytes: int
        Extra memory to allocate after planned memory
    """

    module: tvm.IRModule
    inputs: Dict[str, np.array]
    outputs: Dict[str, np.array]
    output_tolerance: Optional[Union[int, float]] = None
    name: str = "default"
    params: Optional[Dict[str, np.array]] = None
    extra_memory_in_bytes: int = 0


class AOTCompiledTestModel(NamedTuple):
    """A compiled AOTTestModel with associated module

    Parameters
    ----------
    model: AOTTestModel
        Input model to be compiled
    module: tvm.runtime.Module
        The compiled Module for the associated AOTTestModel
    """

    model: AOTTestModel
    executor_factory: tvm.relay.backend.executor_factory.AOTExecutorFactoryModule


class AOTDataLinkage(NamedTuple):
    """Data linkage information for a compiled AOTTestModel

    Parameters
    ----------
    section: str
        Named section to place data into
    alignment: int
        Section alignment
    """

    section: str
    alignment: int


class AOTTestRunner(NamedTuple):
    """Class to describe a test runner for AOT code

    Parameters
    ----------
    makefile: str
        Premade Makefile to use from the AOT test folder
    prologue: str
        Code to prepend to the main function
    epilogue: str
        Code to append to the main function
    includes: List[str]
        Additional includes required to run the AOT test runner
    parameters: Dict[str, str]
        Additional parameters to pass to the make command
    pass_config: Dict[str, Any]
        Additional pass configuration when building the model
    """

    makefile: str = "default"
    prologue: str = ""
    epilogue: str = ""
    includes: List[str] = []
    parameters: Dict[str, str] = {}
    pass_config: Dict[str, Any] = {}


def _subprocess_check_log_output(cmd, cwd, logfile):
    """
    This method runs a process and logs the output to both a log file and stdout
    """
    _LOG.info("Execute (%s): %s", cwd, cmd)
    cmd_base = cmd[0] if isinstance(cmd, (list, tuple)) else cmd.split(" ", 1)[0]
    proc = 
subprocess.Popen(
        cmd,
        cwd=cwd,
        shell=True,
        bufsize=0,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        encoding="utf-8",
    )
    stdout = ""
    with open(logfile, "a") as f:
        msg = (
            "\n"
            + "-" * 80
            + f"{datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}: Execute ({cwd}): {cmd}\n"
            + "-" * 80
        )
        f.write(msg)
        stdout += msg + "\n"
        while True:
            data = proc.stdout.readline()
            stdout += data
            _LOG.debug("%s: %s", cmd_base, data.rstrip("\n"))
            f.write(data)

            # process is done if there is no data and the result is valid
            if not data:  # EOF
                break

    proc.wait()
    if proc.returncode != 0:
        raise RuntimeError(f"Subprocess failed: {cmd}\nstdout:\n{stdout}")


def _mangle_name(mod_name, name):
    mod_name = mangle_module_name(mod_name)
    return mod_name + "_" + name


# TODO: Move to linker script with list of symbols rather than coding into source
def _emit_data_linkage(output_file, data_linkage):
    if data_linkage is not None:
        output_file.write(
            f'__attribute__((section("{data_linkage.section}"), '
            f"aligned({data_linkage.alignment}))) "
        )


def _emit_main_prologue(
    main_file,
    custom_prologue,
    workspace_bytes,
    data_linkage,
    compiled_models,
    interface_api,
    use_stack_allocator=True,
    debug_last_error=False,
):
    if use_stack_allocator:
        workspace_define = f"#define WORKSPACE_SIZE ({workspace_bytes}"
        if interface_api == "c":
            for compiled_model in compiled_models:
                model = compiled_model.model
                workspace_define += f" + TVMGEN_{model.name.upper()}_WORKSPACE_SIZE"
        # Add TVM_RUNTIME_ALLOC_ALIGNMENT_BYTES because of memory alignment.
workspace_define += " + TVM_RUNTIME_ALLOC_ALIGNMENT_BYTES)\n" main_file.write(workspace_define) _emit_data_linkage(main_file, data_linkage) main_file.write("static uint8_t g_aot_memory[WORKSPACE_SIZE];\n") main_file.write("tvm_workspace_t app_workspace;\n") main_file.write( """\n tvm_crt_error_t TVMPlatformMemoryAllocate(size_t num_bytes, DLDevice dev, void** out_ptr) { return StackMemoryManager_Allocate(&app_workspace, num_bytes, out_ptr); } tvm_crt_error_t TVMPlatformMemoryFree(void* ptr, DLDevice dev) { return StackMemoryManager_Free(&app_workspace,ptr); } """ ) else: # An implementation is not needed for these if the stack allocator is not used main_file.write( """\n tvm_crt_error_t TVMPlatformMemoryAllocate(size_t num_bytes, DLDevice dev, void** out_ptr) { return kTvmErrorFunctionCallNotImplemented; } tvm_crt_error_t TVMPlatformMemoryFree(void* ptr, DLDevice dev) { return kTvmErrorFunctionCallNotImplemented; } """ ) main_file.write( """\n void TVMPlatformAbort(tvm_crt_error_t code) { exit(-1); } void TVMLogf(const char* msg, ...) 
{ va_list args; va_start(args, msg); vfprintf(stdout, msg, args); va_end(args); } """ ) if debug_last_error: main_file.write( """\n tvm_crt_error_t TVMPlatformTimerStart() { return kTvmErrorFunctionCallNotImplemented; } tvm_crt_error_t TVMPlatformTimerStop(double* elapsed_time_seconds) { return kTvmErrorFunctionCallNotImplemented; } const TVMModule* TVMSystemLibEntryPoint(void) { return NULL; } """ ) else: main_file.write( """\n TVM_DLL int TVMFuncRegisterGlobal(const char* name, TVMFunctionHandle f, int override) {} """ ) main_file.write("\nint main(){\n") main_file.write(custom_prologue) def _emit_main_data(main_file, input_map, output_map, mod_name): for key in input_map: sanitized_tensor_name = re.sub(r"\W", "_", key) main_file.write( f'#include "{_mangle_name(mod_name,"input_data")}_{sanitized_tensor_name}.h"\n' ) for key in output_map: sanitized_tensor_name = re.sub(r"\W", "_", key) main_file.write( f'#include "{_mangle_name(mod_name,"expected_output_data")}_' f'{sanitized_tensor_name}.h"\n' f'#include "{_mangle_name(mod_name,"output_data")}_' f'{sanitized_tensor_name}.h"\n' ) def _emit_main_device_structs(main_file, devices, mod_name): if devices: main_file.write( f"struct {_mangle_name(mod_name, 'devices')} {_mangle_name(mod_name, 'devices')} = {{" ) for device in devices: main_file.write(f"\t.{device} = {device},\n") main_file.write("};\n") def _emit_main_workspace_pool_structs(main_file, workspace_pool_names, mod_name): if workspace_pool_names and len(workspace_pool_names) > 0: main_file.write( f"struct {_mangle_name(mod_name, 'workspace_pools')} " f"{_mangle_name(mod_name, 'workspace_pools')} = {{" ) for workspace_pool_name in workspace_pool_names.keys(): main_file.write( f"\t.{workspace_pool_name} = {workspace_pool_names[workspace_pool_name]}" f"{workspace_pool_name},\n" ) main_file.write("};\n") def _emit_main_data_structs(main_file, input_map, output_map, mod_name): main_file.write( f"struct {_mangle_name(mod_name, 'inputs')} {_mangle_name(mod_name, 
'inputs')} = {{" ) for key in input_map: sanitized_tensor_name = re.sub(r"\W", "_", key) main_file.write( f"\t.{sanitized_tensor_name} = " f"{_mangle_name(mod_name, 'input_data')}_{sanitized_tensor_name},\n" ) main_file.write("};\n") main_file.write( f"struct {_mangle_name(mod_name, 'outputs')} {_mangle_name(mod_name, 'outputs')} = {{" ) for key in output_map: sanitized_tensor_name = re.sub(r"\W", "_", key) main_file.write( f"\t.{sanitized_tensor_name} = {_mangle_name(mod_name, 'output_data')}_" f"{sanitized_tensor_name},\n" ) main_file.write("};\n") def _emit_main_data_setup(main_file, input_map, output_map, mod_name): num_outputs = len(output_map) num_inputs = len(input_map) main_file.write(f'void* {_mangle_name(mod_name,"inputs")}[{num_inputs}] = {{ ') for key in input_map: sanitized_tensor_name = re.sub(r"\W", "_", key) main_file.write(f'{_mangle_name(mod_name,"input_data")}_{sanitized_tensor_name}, ') main_file.write("};\n") main_file.write(f'void* {_mangle_name(mod_name,"outputs")}[{num_outputs}] = {{ ') for key in output_map: sanitized_tensor_name = re.sub(r"\W", "_", key) main_file.write(f'{_mangle_name(mod_name, "output_data")}_{sanitized_tensor_name}, ') main_file.write("};\n") def _emit_main_c_interface_call( main_file, devices, workspace_pool_names, mod_name, use_workspace_io, debug_last_error ): sub_strings = list() sub_strings.append(f'if ({_mangle_name(mod_name,"run")}(') if not use_workspace_io: sub_strings.append(f'&{_mangle_name(mod_name,"inputs")}, ') sub_strings.append(f'&{_mangle_name(mod_name,"outputs")}, ') if workspace_pool_names: sub_strings.append(f'&{_mangle_name(mod_name,"workspace_pools")}, ') if devices: sub_strings.append(f'&{_mangle_name(mod_name,"devices")}, ') # Removing the last two characters that is a comma and a space sub_strings[-1] = sub_strings[-1][:-2] # Adding brackets and newline instead sub_strings[-1] = sub_strings[-1] + ") == -1) {\n" main_file_string = "".join(sub_strings) main_file.write(main_file_string) if 
debug_last_error: main_file.write(f'\tprintf("ERROR: %s\\n", TVMGetLastError());\n') main_file.write(f'\tprintf("{AOT_FAILURE_TOKEN}\\n");\n') main_file.write("\treturn -1;\n") main_file.write("}\n") def _emit_main_fake_packed_values(main_file): main_file.write( """ static DLDevice fake_device = {kDLCPU, 0}; static int64_t fake_dims = 0; static int64_t fake_shape = {0}; """ ) def _emit_main_packed_call(main_file, input_map, output_list, mod_name): tensors_name = _mangle_name(mod_name, "tensors") values_name = _mangle_name(mod_name, "values") typeids_name = _mangle_name(mod_name, "typeids") def fake_tensor(source, source_index, packed_index): main_file.write( f""" {tensors_name}[{packed_index}].device = fake_device; {tensors_name}[{packed_index}].data = {source}[{source_index}]; {tensors_name}[{packed_index}].shape = &fake_shape; {tensors_name}[{packed_index}].ndim = fake_dims; {tensors_name}[{packed_index}].byte_offset = 0; {tensors_name}[{packed_index}].strides = NULL; {values_name}[{packed_index}].v_handle = &{tensors_name}[{packed_index}]; """ ) num_outputs = len(output_list) num_inputs = len(input_map) num_tensors = num_inputs + num_outputs main_file.write( f""" DLTensor {tensors_name}[{num_tensors}]; TVMValue {values_name}[{num_tensors}]; int32_t {typeids_name}[{num_tensors}]; """ ) for i in range(0, num_inputs): fake_tensor(_mangle_name(mod_name, "inputs"), i, i) for i in range(0, num_outputs): fake_tensor(_mangle_name(mod_name, "outputs"), i, i + num_inputs) main_file.write( f'{_mangle_name(mod_name, "run")}({values_name}, {typeids_name}, 0, NULL, 0, NULL);\n' ) main_file.write("\n") def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_interface_c=False): for key in outputs: sanitized_tensor_name = re.sub(r"\W", "_", key) expected_data_name = _mangle_name(mod_name, f"expected_output_data_{sanitized_tensor_name}") is_float_dtype = outputs[key].dtype == "float32" comparison_function = "abs" tolerance = output_tolerance or 0 if 
is_float_dtype: comparison_function = "fabs" tolerance = output_tolerance or 0.001 data_length_var_name = ( _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}") + "_len" ) if use_interface_c: c_type = NP_TYPE_TO_C[str(outputs[key].dtype)] actual_data_name = f"(({c_type}*)" + _mangle_name( mod_name, f"outputs.{sanitized_tensor_name})" ) else: actual_data_name = _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}") main_file.write( f"for (int i = 0; i<{data_length_var_name}; i++) {{\n" f"\tif ({comparison_function}({actual_data_name}[i]-" f"{expected_data_name}[i]) > {tolerance}) {{\n" f'\t\tprintf("{AOT_FAILURE_TOKEN}\\n");\n' f"\t\treturn -1;\n" f"\t}}\n" f"}}" ) def _emit_main_init_memory_manager(main_file): main_file.write("StackMemoryManager_Init(&app_workspace, g_aot_memory, WORKSPACE_SIZE);") main_file.write("\n") def _emit_main_epilogue(main_file, custom_epilogue): main_file.write(custom_epilogue) main_file.write(f'printf("{AOT_SUCCESS_TOKEN}\\n");') main_file.write("return 0;") main_file.write("}\n") def _emit_main_common_includes(main_file, custom_includes, debug_last_error): main_file.write("#include <stdio.h>\n") main_file.write("#include <stdarg.h>\n") main_file.write("#include <stdlib.h>\n") main_file.write("#include <math.h>\n") main_file.write('#include "tvm/runtime/c_runtime_api.h"\n') main_file.write('#include "tvm/runtime/crt/stack_allocator.h"\n') if debug_last_error: main_file.write('#include "tvm/runtime/crt/module.h"\n') for include in custom_includes: main_file.write(f'#include "{include}"\n') def _emit_main_micro_include(main_file, mod_name): main_file.write(f"#include <{mangle_module_name(mod_name)}.h>\n") def _create_main( test_name, compiled_models, output_path, custom_includes, custom_prologue, custom_epilogue, data_linkage, interface_api, workspace_bytes, use_stack_allocator=True, use_workspace_io=False, debug_last_error=False, ): file_path = pathlib.Path(f"{output_path}/" + test_name).resolve() # create header file 
raw_path = file_path.with_suffix(".c").resolve() with open(raw_path, "w") as main_file: _emit_main_common_includes(main_file, custom_includes, debug_last_error) if interface_api == "c": for compiled_model in compiled_models: model = compiled_model.model _emit_main_micro_include(main_file, model.name) for compiled_model in compiled_models: model = compiled_model.model _emit_main_data(main_file, model.inputs, model.outputs, model.name) _emit_main_prologue( main_file, custom_prologue, workspace_bytes, data_linkage, compiled_models, interface_api, use_stack_allocator, debug_last_error, ) if use_stack_allocator: _emit_main_init_memory_manager(main_file) if interface_api == "c": for compiled_model in compiled_models: model = compiled_model.model executor_codegen_metadata = ( compiled_model.executor_factory.executor_codegen_metadata ) devices = compiled_model.executor_factory.get_devices() workspace_pool_names = {} if executor_codegen_metadata.pool_inputs: workspace_pool_names = { allocated_pool.pool_info.pool_name: "&" if isinstance( allocated_pool.pool_info, tvm.ir.memory_pools.ConstantPoolInfo ) else "" for allocated_pool in dict(executor_codegen_metadata.pool_inputs).values() if not allocated_pool.pool_info.is_internal } _emit_main_device_structs(main_file, devices, model.name) if not use_workspace_io: _emit_main_workspace_pool_structs(main_file, workspace_pool_names, model.name) _emit_main_data_structs(main_file, model.inputs, model.outputs, model.name) _emit_main_c_interface_call( main_file, devices, list(workspace_pool_names.keys()), model.name, use_workspace_io, debug_last_error, ) else: _emit_main_fake_packed_values(main_file) for compiled_model in compiled_models: model = compiled_model.model _emit_main_data_setup(main_file, model.inputs, model.outputs, model.name) _emit_main_packed_call(main_file, model.inputs, model.outputs, model.name) for compiled_model in compiled_models: model = compiled_model.model _emit_main_compare( main_file, model.outputs, 
model.output_tolerance, model.name, interface_api == "c" ) _emit_main_epilogue(main_file, custom_epilogue) def _create_header_file(tensor_name, npy_data, output_path, data_linkage): """ This method generates a header file containing the data contained in the numpy array provided. It is used to capture the tensor data (for both inputs and expected outputs) to be bundled into the standalone application. """ file_path = pathlib.Path(f"{output_path}/" + tensor_name).resolve() # create header file raw_path = file_path.with_suffix(".h").resolve() with open(raw_path, "w") as header_file: header_file.write("#include <stddef.h>\n") header_file.write("#include <stdint.h>\n") header_file.write("#include <dlpack/dlpack.h>\n") header_file.write(f"const size_t {tensor_name}_len = {npy_data.size};\n") _emit_data_linkage(header_file, data_linkage) header_file.write(f"{NP_TYPE_TO_C[str(npy_data.dtype)]} {tensor_name}[] =") header_file.write("{") for i in np.ndindex(npy_data.shape): header_file.write(f"{npy_data[i]}, ") header_file.write("};\n\n") def convert_to_relay(tflite_model_buf, bind_params_by_name=True): """Convert a tflite model buffer in a Relay module""" # TFLite.Model.Model has changed to TFLite.Model from 1.14 to 2.1 try: import tflite.Model # pylint: disable=import-outside-toplevel tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0) except AttributeError: import tflite # pylint: disable=import-outside-toplevel tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0) except ImportError: raise ImportError("The tflite package must be installed") mod, params = relay.frontend.from_tflite(tflite_model) if bind_params_by_name: mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params) return mod, params def compile_models( models: Union[List[AOTTestModel], AOTTestModel], interface_api: str, use_unpacked_api: bool, workspace_byte_alignment: int = 8, constant_byte_alignment: int = 8, enable_op_fusion: bool = True, pass_config: Dict[str, 
Any] = None, use_runtime_executor: bool = True, target: tvm.target.Target = tvm.target.Target("c"), workspace_memory_pools=None, constant_memory_pools=None, schedule_name: str = None, ) -> List[AOTCompiledTestModel]: """ This method generates runtime.Modules for the tests """ if not isinstance(models, list): models = [models] runtime = Runtime("crt") executor = Executor( "aot", { "workspace-byte-alignment": workspace_byte_alignment, "constant-byte-alignment": constant_byte_alignment, "interface-api": interface_api, "unpacked-api": use_unpacked_api, }, ) config = {"tir.disable_vectorize": True} if pass_config: config = {**config, **pass_config} if not enable_op_fusion: config["relay.FuseOps.max_depth"] = 1 compiled_mods = list() for model in models: with contextlib.ExitStack() as context_stack: if schedule_name: # Testing with deterministic schedule task_list = autotvm.task.extract_from_program( model.module, target=target, params=model.params ) context_stack.enter_context( tvm.autotvm.apply_fixed_config(task_list, schedule_name) ) context_stack.enter_context(tvm.transform.PassContext(opt_level=3, config=config)) build_kwargs = dict( ir_mod=model.module, params=model.params, mod_name=model.name, ) # TODO(Mousius) - Remove once executor/runtime are fully removed from Target if use_runtime_executor: build_kwargs.update( dict( target=target, executor=executor, runtime=runtime, workspace_memory_pools=workspace_memory_pools, constant_memory_pools=constant_memory_pools, ) ) else: build_kwargs.update(dict(target=tvm.target.Target(target, host=target))) executor_factory = tvm.relay.build(**build_kwargs) compiled_mods.append( AOTCompiledTestModel(model=model, executor_factory=executor_factory) ) return compiled_mods def run_and_check( models: List[AOTCompiledTestModel], runner: AOTTestRunner, interface_api: str, debug_calculated_workspaces=False, workspace_byte_alignment=8, constant_byte_alignment=8, data_linkage: AOTDataLinkage = None, test_dir: str = None, verbose: bool = 
False, use_workspace_io: bool = False, debug_last_error: bool = False, checker: Optional[Callable[[str], bool]] = None, ): """ This method uses the original test data and compiled runtime.Modules to run in the test runner to verify the results. """ def run_and_check_body(base_path): cflags = ( f"-DTVM_RUNTIME_ALLOC_ALIGNMENT_BYTES={workspace_byte_alignment} " f" -DTVM_RUNTIME_CONST_ALLOC_ALIGNMENT_BYTES={constant_byte_alignment} " ) # The calculated workspaces will not account for stack allocator tags used for debugging if debug_calculated_workspaces: cflags += "-DTVM_CRT_STACK_ALLOCATOR_ENABLE_LIFO_CHECK " base_path = os.path.abspath(base_path) build_path = os.path.join(base_path, "build") os.makedirs(build_path, exist_ok=True) include_path = os.path.join(base_path, "include") os.mkdir(include_path) tvm.micro.copy_crt_config_header("crt", include_path) workspace_bytes = 0 for compiled_model in models: model = compiled_model.model tar_file = os.path.join(base_path, f"{model.name}.tar") export_model_library_format(compiled_model.executor_factory, tar_file) t = tarfile.open(tar_file) t.extractall(base_path) # Interface C APIs does not need compiler generated # workspace to generate the test application, because # workspace size is codegen'd as a macro to # tvmgen_<model_name>.h. 
if interface_api != "c": workspace_bytes += mlf_extract_workspace_size_bytes(tar_file) workspace_bytes += model.extra_memory_in_bytes for key in model.inputs: sanitized_tensor_name = re.sub(r"\W", "_", key) _create_header_file( f'{_mangle_name(model.name, "input_data")}_{sanitized_tensor_name}', model.inputs[key], include_path, data_linkage, ) for key in model.outputs: sanitized_tensor_name = re.sub(r"\W", "_", key) _create_header_file( f'{_mangle_name(model.name, "output_data")}_{sanitized_tensor_name}', np.zeros(model.outputs[key].shape, model.outputs[key].dtype), include_path, data_linkage, ) _create_header_file( f'{_mangle_name(model.name, "expected_output_data")}_{sanitized_tensor_name}', model.outputs[key], include_path, data_linkage, ) use_usmp = runner.pass_config.get("tir.usmp.enable", False) # We only need the stack allocator if USMP is not used use_stack_allocator = not use_usmp _create_main( "test.c", models, build_path, runner.includes, runner.prologue, runner.epilogue, data_linkage, interface_api, workspace_bytes, use_stack_allocator, use_workspace_io, debug_last_error, ) if checker and (not checker(base_path)): return False # Verify that compiles fine file_dir = os.path.dirname(os.path.abspath(__file__)) makefile_dir = os.path.join(file_dir, "../../../tests/python/relay/aot") codegen_path = os.path.join(base_path, "codegen") makefile = os.path.join(makefile_dir, f"{runner.makefile}.mk") fvp_dir = "/opt/arm/FVP_Corstone_SSE-300/models/Linux64_GCC-6.4/" # TODO(@grant-arm): Remove once ci_cpu docker image has been updated to FVP_Corstone_SSE if not os.path.isdir(fvp_dir): fvp_dir = "/opt/arm/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/" custom_params = " ".join( [f" {param}='{value}'" for param, value in runner.parameters.items()] ) make_command = ( f"make -f {makefile} build_dir={build_path}" + f" CFLAGS='{cflags}'" + f" TVM_ROOT={file_dir}/../../.." 
+ f" AOT_TEST_ROOT={makefile_dir}" + f" CODEGEN_ROOT={codegen_path}" + f" STANDALONE_CRT_DIR={tvm.micro.get_standalone_crt_dir()}" + f" FVP_DIR={fvp_dir}" + custom_params ) compile_log_path = os.path.join(build_path, "test_compile.log") compile_command = f"{make_command} aot_test_runner" if verbose: print("Compile command:\n", compile_command) _subprocess_check_log_output(compile_command, ".", compile_log_path) # Verify that runs fine run_log_path = os.path.join(build_path, "test_run.log") run_command = f"{make_command} run" if verbose: print("Run command:\n", run_command) _subprocess_check_log_output(run_command, build_path, run_log_path) with open(run_log_path) as run_log: assert AOT_SUCCESS_TOKEN in run_log.read() return True if test_dir is None: tmpdir = utils.tempdir() return run_and_check_body(os.path.join(tmpdir.path, "test")) else: return run_and_check_body(test_dir) def compile_and_run( models: Union[List[AOTTestModel], AOTTestModel], runner: AOTTestRunner, interface_api: str, use_unpacked_api: bool, debug_calculated_workspaces: bool = False, workspace_byte_alignment: int = 8, constant_byte_alignment: int = 8, enable_op_fusion: bool = True, data_linkage: AOTDataLinkage = None, use_runtime_executor: bool = True, target: Union[str, tvm.target.Target, List[tvm.target.Target]] = "c", target_opts: Dict = None, test_dir: str = None, verbose: bool = False, schedule_name: str = None, debug_last_error: bool = False, checker: Optional[Callable[[str], bool]] = None, ) -> bool: """This is a wrapper API to compile and run models as test for AoT Parameters ---------- test_dir : str This path will contain build, codegen, include directories verbose: bool Prints commands to build and run AOT test runner """ if target_opts: for key, val in target_opts.items(): target += f" {key}={val}" if isinstance(target, str): target = tvm.target.Target(target) compiled_test_mods = compile_models( models=models, interface_api=interface_api, use_unpacked_api=use_unpacked_api, 
workspace_byte_alignment=workspace_byte_alignment, constant_byte_alignment=constant_byte_alignment, enable_op_fusion=enable_op_fusion, pass_config=runner.pass_config, use_runtime_executor=use_runtime_executor, target=target, schedule_name=schedule_name, ) return run_and_check( models=compiled_test_mods, runner=runner, interface_api=interface_api, debug_calculated_workspaces=debug_calculated_workspaces, workspace_byte_alignment=workspace_byte_alignment, constant_byte_alignment=constant_byte_alignment, data_linkage=data_linkage, test_dir=test_dir, verbose=verbose, debug_last_error=debug_last_error, checker=checker, ) def get_dtype_range(dtype: str) -> Tuple[int, int]: """ Produces the min, max for a given data type. Parameters ---------- dtype : str a type string (e.g., int8, float64) Returns ------- type_info.min : int the minimum of the range type_info.max : int the maximum of the range """ type_info = None np_dtype = np.dtype(dtype) kind = np_dtype.kind if kind == "f": type_info = np.finfo(np_dtype) elif kind in ["i", "u"]: type_info = np.iinfo(np_dtype) else: raise TypeError(f"dtype ({dtype}) must indicate some floating-point or integral data type.") return type_info.min, type_info.max def generate_ref_data(mod, input_data, params=None, target="llvm"): """Generate reference data by executing the relay module""" with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}): lib = relay.build(mod, target=target, params=params) lib_name = "mod.so" temp = utils.tempdir() lib_path = temp.relpath(lib_name) lib.export_library(lib_path) lib = tvm.runtime.load_module(lib_path) grt_mod = graph_executor.GraphModule(lib["default"](tvm.cpu())) grt_mod.set_input(**input_data) grt_mod.run() output_count = grt_mod.get_num_outputs() out = [grt_mod.get_output(i).numpy() for i in range(output_count)] if isinstance(mod, tvm.relay.Function): main = mod else: main = mod["main"] if main.attrs is None or main.attrs["output_tensor_names"] is None: 
output_tensor_names = ( ["output"] if output_count == 1 else [f"output{i}" for i in range(output_count)] ) else: output_tensor_names = main.attrs["output_tensor_names"] return dict(zip(output_tensor_names, out)) def create_relay_module_and_inputs_from_tflite_file(tflite_model_file, bind_params_by_name=True): """A helper function to create a Relay IRModule with inputs and params from a tflite file""" with open(tflite_model_file, "rb") as f: tflite_model_buf = f.read() mod, params = convert_to_relay(tflite_model_buf, bind_params_by_name) inputs = dict() for param in mod["main"].params: name = str(param.name_hint) data_shape = [int(i) for i in param.type_annotation.shape] dtype = str(param.type_annotation.dtype) if np.issubdtype(dtype, np.floating): # Since np.random.uniform only allows the ranges of float32, # at first float16 is used and scaled afterwards, if necessary. in_min, in_max = (np.finfo("float16").min, np.finfo("float16").max) data = np.random.uniform(low=in_min, high=in_max, size=data_shape).astype(dtype) scale = np.finfo(dtype).min / np.finfo("float16").min data *= scale elif np.issubdtype(dtype, np.integer): in_min, in_max = (np.iinfo(dtype).min, np.iinfo(dtype).max) data = np.random.randint(in_min, high=in_max, size=data_shape, dtype=dtype) else: raise TypeError(f"Type {dtype} not supported") inputs[name] = data return mod, inputs, params
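The `get_dtype_range` helper above is a thin dispatch over NumPy's `finfo`/`iinfo` based on the dtype kind. A minimal standalone sketch of the same logic (the `dtype_range` name here is illustrative, not part of the TVM API):

```python
import numpy as np

def dtype_range(dtype: str):
    # Floats dispatch to finfo, signed/unsigned ints to iinfo,
    # mirroring the kind-based branches in get_dtype_range.
    np_dtype = np.dtype(dtype)
    if np_dtype.kind == "f":
        info = np.finfo(np_dtype)
    elif np_dtype.kind in ("i", "u"):
        info = np.iinfo(np_dtype)
    else:
        raise TypeError(f"dtype ({dtype}) must be floating-point or integral")
    return info.min, info.max

print(dtype_range("int8"))    # (-128, 127)
print(dtype_range("uint16"))  # (0, 65535)
```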
tvm-main/python/tvm/testing/_ffi_api.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """FFI APIs for tvm.testing""" import tvm._ffi tvm._ffi._init_api("testing", __name__)
tvm-main/python/tvm/testing/popen_pool.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, missing-function-docstring """Common functions for popen_pool test cases""" import tvm from . import _ffi_api TEST_GLOBAL_STATE_1 = 0 TEST_GLOBAL_STATE_2 = 0 TEST_GLOBAL_STATE_3 = 0 def initializer(test_global_state_1, test_global_state_2, test_global_state_3): global TEST_GLOBAL_STATE_1, TEST_GLOBAL_STATE_2, TEST_GLOBAL_STATE_3 TEST_GLOBAL_STATE_1 = test_global_state_1 TEST_GLOBAL_STATE_2 = test_global_state_2 TEST_GLOBAL_STATE_3 = test_global_state_3 def after_initializer(): global TEST_GLOBAL_STATE_1, TEST_GLOBAL_STATE_2, TEST_GLOBAL_STATE_3 return TEST_GLOBAL_STATE_1, TEST_GLOBAL_STATE_2, TEST_GLOBAL_STATE_3 @tvm._ffi.register_func("testing.identity_py") def identity_py(arg): return arg def register_ffi(): @tvm._ffi.register_func("testing.nested_identity_py") def _identity_py(arg): # pylint: disable=unused-variable return arg def call_py_ffi(arg): _identity_py = tvm._ffi.get_global_func("testing.nested_identity_py") return _identity_py(arg) def call_cpp_ffi(arg): return tvm.testing.echo(arg) def call_cpp_py_ffi(arg): return tvm.testing.identity_cpp(arg) def fast_summation(n): return n * (n + 1) // 2 def slow_summation(n): r = 0 for i in range(0, n + 1): r += i 
return r def timeout_job(n): _ffi_api.sleep_in_ffi(n * 1.5)
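The two summation helpers at the end of this file are meant to agree for any non-negative n; a quick standalone check of the closed form against the explicit loop (copied here for illustration):

```python
def fast_summation(n):
    # Gauss closed form for the sum 0 + 1 + ... + n.
    return n * (n + 1) // 2

def slow_summation(n):
    # Explicit loop over the same range.
    r = 0
    for i in range(n + 1):
        r += i
    return r

for n in (0, 1, 7, 100):
    assert fast_summation(n) == slow_summation(n)
print(fast_summation(100))  # 5050
```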
tvm-main/python/tvm/testing/runner.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, missing-function-docstring """A utility method to run a TVM module on a remote device.""" from typing import TYPE_CHECKING, Callable, List, Optional, Tuple, Union from typing_extensions import Literal if TYPE_CHECKING: import numpy as np from tvm.meta_schedule.runner import EvaluatorConfig, RPCConfig from tvm.runtime import Device, Module, NDArray # pylint: disable=import-outside-toplevel,protected-access def _args_to_device(args, device): import numpy as np from tvm.runtime.ndarray import NDArray, empty uploaded_args = [] for arg in args: if isinstance(arg, (np.ndarray, NDArray)): uploaded_args.append(empty(arg.shape, dtype=arg.dtype, device=device).copyfrom(arg)) elif isinstance(arg, (int, float)): uploaded_args.append(arg) else: raise ValueError(f"Unsupported input type: {type(arg)}") return uploaded_args def _args_to_numpy(args): from tvm.runtime.ndarray import NDArray downloaded_args = [] for arg in args: if isinstance(arg, NDArray): downloaded_args.append(arg.numpy()) else: downloaded_args.append(arg) return downloaded_args def _normalize_export_func(export_func, output_format) -> Tuple[Callable, str]: from tvm.contrib import ndk, tar def export_with(func): return 
lambda mod, path: mod.export_library(path, func) if export_func == "tar": export_func = export_with(tar.tar) output_format = "tar" elif export_func == "ndk": export_func = export_with(ndk.create_shared) output_format = "so" elif callable(export_func): if output_format is None: raise ValueError("output_format must be specified if `export_func` is callable") else: raise ValueError(f"Unsupported export_func: {export_func}") return export_func, output_format def local_run( # pylint: disable=too-many-arguments,too-many-locals mod: "Module", device_type: str, args: List[Union["np.ndarray", "NDArray", int, float]], evaluator_config: Optional["EvaluatorConfig"] = None, export_func: Union[Callable[["Module", str], None], Literal["tar", "ndk"]] = "tar", output_format: Optional[str] = None, ): """Run a TVM module on a local device. Parameters ---------- mod : Module The TVM module to run. device_type : str The device type to run the module on. args : List[Union[np.ndarray, NDArray, int, float]] The arguments to be fed to the module. evaluator_config : Optional[EvaluatorConfig] The evaluator configuration to use. export_func : Union[Callable[Module, str], Literal["tar", "ndk"]] The function to export the module to a file. If callable, it must be a function that takes two arguments: the module to export and the path to export to. If "tar", the module will be exported to a tar file. If "ndk", the module will be exported to a shared library. output_format : Optional[str] The format of the exported module. If not specified, it will be inferred from the `export_func` argument. Returns ------- args : List[Union[np.ndarray, NDArray, int, float]] The results of running the module. profile_result : tvm.runtime.BenchmarkResult The profiling result of running the module. 
""" import os.path as osp import tempfile from tvm.meta_schedule.runner import EvaluatorConfig from tvm.runtime import device, load_module evaluator_config = EvaluatorConfig._normalized(evaluator_config) export_func, output_format = _normalize_export_func(export_func, output_format) with tempfile.TemporaryDirectory() as tmp_dir: artifact_path = osp.join(tmp_dir, "tvm_tmp_mod." + output_format) export_func(mod, artifact_path) device: Device = device(device_type, 0) try: args = _args_to_device(args, device) remote_mod = load_module(artifact_path) profile_result = remote_mod.time_evaluator( func_name=remote_mod.entry_name, dev=device, number=evaluator_config.number, repeat=evaluator_config.repeat, min_repeat_ms=evaluator_config.min_repeat_ms, f_preproc="cache_flush_cpu_non_first_arg" if evaluator_config.enable_cpu_cache_flush else "", )(*args) remote_mod(*args) args = _args_to_numpy(args) finally: pass return args, profile_result def rpc_run( # pylint: disable=too-many-arguments,too-many-locals mod: "Module", device_type: str, args: List[Union["np.ndarray", "NDArray", int, float]], evaluator_config: Optional["EvaluatorConfig"] = None, rpc_config: Optional["RPCConfig"] = None, export_func: Union[Callable[["Module", str], None], Literal["tar", "ndk"]] = "tar", output_format: Optional[str] = None, ): """Run a TVM module on a remote device. Parameters ---------- mod : Module The TVM module to run. device_type : str The device type to run the module on. args : List[Union[np.ndarray, NDArray, int, float]] The arguments to be fed to the module. evaluator_config : Optional[EvaluatorConfig] The evaluator configuration to use. rpc_config : Optional[RPCConfig] The RPC configuration to connect to the remote device. 
If not specified, the default RPC configuration will be used, which reads the following environment variables: - TVM_TRACKER_HOST - TVM_TRACKER_PORT - TVM_TRACKER_KEY export_func : Union[Callable[Module, str], Literal["tar", "ndk"]] The function to export the module to a file. If callable, it must be a function that takes two arguments: the module to export and the path to export to. If "tar", the module will be exported to a tar file. If "ndk", the module will be exported to a shared library. output_format : Optional[str] The format of the exported module. If not specified, it will be inferred from the `export_func` argument. Returns ------- args : List[Union[np.ndarray, NDArray, int, float]] The results of running the module. profile_result : tvm.runtime.BenchmarkResult The profiling result of running the module. """ import os.path as osp import tempfile from tvm.meta_schedule.runner import EvaluatorConfig, RPCConfig evaluator_config = EvaluatorConfig._normalized(evaluator_config) rpc_config = RPCConfig._normalized(rpc_config) export_func, output_format = _normalize_export_func(export_func, output_format) with tempfile.TemporaryDirectory() as tmp_dir: artifact_path = osp.join(tmp_dir, "tvm_tmp_mod." + output_format) _, remote_path = osp.split(artifact_path) session = rpc_config.connect_server() device: Device = session.device(device_type, 0) export_func(mod, artifact_path) try: session.upload(artifact_path, remote_path) args = _args_to_device(args, device) remote_mod = session.load_module(remote_path) profile_result = remote_mod.time_evaluator( func_name=remote_mod.entry_name, dev=device, number=evaluator_config.number, repeat=evaluator_config.repeat, min_repeat_ms=evaluator_config.min_repeat_ms, f_preproc="cache_flush_cpu_non_first_arg" if evaluator_config.enable_cpu_cache_flush else "", )(*args) remote_mod(*args) args = _args_to_numpy(args) finally: session.remove(remote_path) session.remove(remote_path + "." 
+ output_format) session.remove("") return args, profile_result
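The dispatch in `_normalize_export_func` above pairs each named exporter with a fixed output format and requires a user-supplied callable to declare its own. A TVM-free sketch of that contract (the `*_exporter` placeholder strings are hypothetical stand-ins for the real `tar.tar`/`ndk.create_shared` exporters):

```python
def normalize_export_func(export_func, output_format):
    # "tar" and "ndk" select a bundled exporter and fix the extension;
    # a callable must come with an explicit output_format.
    if export_func == "tar":
        return "tar_exporter", "tar"
    if export_func == "ndk":
        return "ndk_exporter", "so"
    if callable(export_func):
        if output_format is None:
            raise ValueError("output_format must be specified if `export_func` is callable")
        return export_func, output_format
    raise ValueError(f"Unsupported export_func: {export_func}")

print(normalize_export_func("ndk", None))  # ('ndk_exporter', 'so')
```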
tvm-main/python/tvm/testing/autotvm.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, missing-function-docstring, missing-class-docstring """Common utilities for testing autotvm""" import time import numpy as np import tvm from tvm import te from tvm import autotvm from tvm.autotvm import MeasureInput, MeasureResult from tvm.autotvm.measure.measure import Runner class DummyRunner(Runner): def __init__(self): super(DummyRunner, self).__init__(1, 1) def run(self, measure_inputs, build_results): return [ MeasureResult((np.random.random(),), 0, 0.2, time.time()) for _ in range(len(measure_inputs)) ] def get_build_kwargs(self): return {} @autotvm.template("testing/matmul") def matmul(N, L, M, dtype): A = te.placeholder((N, L), name="A", dtype=dtype) B = te.placeholder((L, M), name="B", dtype=dtype) k = te.reduce_axis((0, L), name="k") C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C") s = te.create_schedule(C.op) # schedule y, x = s[C].op.axis k = s[C].op.reduce_axis[0] ##### define space begin ##### cfg = autotvm.get_config() cfg.define_split("tile_y", y, num_outputs=2) cfg.define_split("tile_x", x, num_outputs=2) ##### define space end ##### # schedule according to config yo, yi = cfg["tile_y"].apply(s, C, y) # Make sure 
configurations have a varied number of itervars. Splitting adds # new itervars, so conditionally splitting will cause the number of # itervars to depend on the tile size. if cfg["tile_x"].size[-1] > 1: xo, xi = cfg["tile_x"].apply(s, C, x) s[C].reorder(yo, xo, k, yi, xi) else: s[C].reorder(yo, k, yi, x) return s, [A, B, C] @autotvm.template("testing/bad_matmul") def bad_matmul(N, L, M, dtype): if "bad_device" in tvm.target.Target.current().keys: A = te.placeholder((N, L), name="A", dtype=dtype) B = te.placeholder((L, M), name="B", dtype=dtype) k = te.reduce_axis((0, L - 1), name="k") C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C") s = te.create_schedule(C.op) # schedule y, x = s[C].op.axis cfg = autotvm.get_config() cfg.define_split("tile_y", y, num_outputs=2) cfg.define_split("tile_x", x, num_outputs=2) return s, [A, B, C] return matmul(N, L, M, dtype) def get_sample_task(n=128): """return a sample task for testing""" target = tvm.target.Target("llvm") task = autotvm.task.create("testing/matmul", args=(n, n, n, "float32"), target=target) return task, target def get_sample_records(n): """get sample records for testing""" tsk, target = get_sample_task() inps, ress = [], [] for i in range(n): inps.append(MeasureInput(target, tsk, tsk.config_space.get(i % len(tsk.config_space)))) ress.append(MeasureResult((i + 1,), 0, i, time.time())) return list(zip(inps, ress))
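`cfg.define_split("tile_x", x, num_outputs=2)` in the template above defines a tuning knob over every (outer, inner) factorization of the axis extent. A small sketch of what that 2-way candidate space looks like (illustrative only, not AutoTVM's actual enumeration code):

```python
def split_candidates(extent):
    # All 2-way splits: (outer, inner) pairs with outer * inner == extent.
    return [(extent // inner, inner)
            for inner in range(1, extent + 1)
            if extent % inner == 0]

print(split_candidates(8))  # [(8, 1), (4, 2), (2, 4), (1, 8)]
```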
tvm-main/python/tvm/testing/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=redefined-builtin, wildcard-import """Utility Python functions for TVM testing""" from . import auto_scheduler, autotvm from ._ffi_api import ( ErrorTest, FrontendTestModule, device_test, echo, identity_cpp, nop, object_use_count, run_check_signal, test_check_eq_callback, test_raise_error_callback, test_wrap_callback, ) from .popen_pool import ( after_initializer, call_cpp_ffi, call_cpp_py_ffi, call_py_ffi, fast_summation, initializer, register_ffi, slow_summation, timeout_job, ) from .runner import local_run, rpc_run from .utils import *
tvm-main/python/tvm/testing/usmp.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ This file contains USMP test harnesses.""" import tvm def is_tvm_backendallocworkspace_calls(mod: tvm.runtime.module) -> bool: """TVMBackendAllocWorkspace call check. This checker checks whether any c-source produced has TVMBackendAllocWorkspace calls. If USMP is invoked, none of them should have TVMBAW calls. """ dso_modules = mod._collect_dso_modules() for dso_mod in dso_modules: if dso_mod.type_key not in ["c", "llvm"]: assert ( False ), 'Current AoT codegen flow should only produce type "c" or "llvm" runtime modules' source = dso_mod.get_source() if source.count("TVMBackendAllocWorkspace") != 0: return True return False
tvm-main/python/tvm/runtime/container.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Runtime container structures.""" import tvm._ffi from .object import Object, PyNativeObject from .object_generic import ObjectTypes from . import _ffi_api def getitem_helper(obj, elem_getter, length, idx): """Helper function to implement a pythonic getitem function. Parameters ---------- obj: object The original object elem_getter : function A simple function that takes an index and returns a single element. length : int The size of the array idx : int or slice The argument passed to getitem Returns ------- result : object The result of getitem """ if isinstance(idx, slice): start = idx.start if idx.start is not None else 0 stop = idx.stop if idx.stop is not None else length step = idx.step if idx.step is not None else 1 if start < 0: start += length if stop < 0: stop += length return [elem_getter(obj, i) for i in range(start, stop, step)] if idx < -length or idx >= length: raise IndexError(f"Index out of range. size: {length}, got index {idx}") if idx < 0: idx += length return elem_getter(obj, idx) @tvm._ffi.register_object("runtime.ADT") class ADT(Object): """Algebraic data type (ADT) object. Parameters ---------- tag : int The tag of ADT. fields : list[Object] or tuple[Object] The source tuple. 
""" def __init__(self, tag, fields): for f in fields: assert isinstance( f, ObjectTypes ), f"Expect object or tvm NDArray type, but received : {type(f)}" self.__init_handle_by_constructor__(_ffi_api.ADT, tag, *fields) @property def tag(self): return _ffi_api.GetADTTag(self) def __getitem__(self, idx): return getitem_helper(self, _ffi_api.GetADTFields, len(self), idx) def __len__(self): return _ffi_api.GetADTSize(self) def tuple_object(fields=None): """Create a ADT object from source tuple. Parameters ---------- fields : list[Object] or tuple[Object] The source tuple. Returns ------- ret : ADT The created object. """ fields = fields if fields else [] for f in fields: assert isinstance( f, ObjectTypes ), f"Expect object or tvm NDArray type, but received : {type(f)}" return _ffi_api.Tuple(*fields) @tvm._ffi.register_object("runtime.String") class String(str, PyNativeObject): """TVM runtime.String object, represented as a python str. Parameters ---------- content : str The content string used to construct the object. """ __slots__ = ["__tvm_object__"] def __new__(cls, content): """Construct from string content.""" val = str.__new__(cls, content) val.__init_tvm_object_by_constructor__(_ffi_api.String, content) return val # pylint: disable=no-self-argument def __from_tvm_object__(cls, obj): """Construct from a given tvm object.""" content = _ffi_api.GetFFIString(obj) val = str.__new__(cls, content) val.__tvm_object__ = obj return val @tvm._ffi.register_object("runtime.ShapeTuple") class ShapeTuple(Object): """TVM runtime ShapeTuple object. Parameters ---------- shape : list[int] The shape list used to construct the object. 
""" def __init__(self, shape): assert isinstance( shape, (list, tuple) ), f"Expect list of tuple, but received : {type(shape)}" for x in shape: assert isinstance(x, int), f"Expect int type, but received : {type(x)}" self.__init_handle_by_constructor__(_ffi_api.ShapeTuple, *shape) def __len__(self): return _ffi_api.GetShapeTupleSize(self) def __getitem__(self, idx): return getitem_helper(self, _ffi_api.GetShapeTupleElem, len(self), idx) def __eq__(self, other): if self.same_as(other): return True if len(self) != len(other): return False for a, b in zip(self, other): if a != b: return False return True
tvm-main/python/tvm/runtime/profiler_vm.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=no-else-return, unidiomatic-typecheck, undefined-variable, invalid-name, redefined-builtin """ The Relay Virtual Machine profiler. Provides extra APIs for profiling vm execution. """ import warnings from tvm.runtime import _ffi_api from tvm.rpc import base as rpc_base from . 
import vm from .profiling import Report def enabled(): """Whether vm profiler is enabled.""" return hasattr(_ffi_api, "_VirtualMachineDebug") class VirtualMachineProfiler(vm.VirtualMachine): """Relay profile VM runtime.""" def __init__(self, exe, device, memory_cfg=None): super(VirtualMachineProfiler, self).__init__(exe, device, memory_cfg) # Make sure the constructor of the VM module is on the proper device # Remote devices have device_type of their actual device_type + RPC_SESS_MASK if device.device_type >= rpc_base.RPC_SESS_MASK: self.module = device._rpc_sess.get_function("runtime._VirtualMachineDebug")(exe) else: self.module = _ffi_api._VirtualMachineDebug(exe.module) self._init = self.module["init"] self._invoke = self.module["invoke"] self._profile = self.module["profile"] self._profile_rpc = self.module["profile_rpc"] self._set_input = self.module["set_input"] self._setup_device(device, memory_cfg) def get_stat(self, sort_by_time=True): # pylint: disable=unused-argument """Get the statistics of executed ops. REMOVED, use profile method instead. """ warnings.warn("get_stat has been removed, use profile instead") return "" def profile(self, *args, func_name="main", collectors=None, **kwargs): """Profile a function call. Parameters ---------- func_name : str The name of the function. collectors : Optional[Sequence[MetricCollector]] Extra metrics to collect. If profiling over RPC, collectors must be `None`. args : list[tvm.runtime.NDArray] or list[np.ndarray] The arguments to the function. kwargs: dict of str to tvm.runtime.NDArray or np.ndarray Named arguments to the function. Returns ------- timing_results : str Overall and per-op timing results formatted in a table. 
""" if args or kwargs: self.set_input(func_name, *args, **kwargs) if self.module.type_key == "rpc": # We cannot serialize MetricCollectors over RPC assert collectors is None, "Profiling with collectors is not supported over RPC" return Report.from_json(self._profile_rpc(func_name)) return self._profile(func_name, collectors)
tvm-main/python/tvm/runtime/object.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, unused-import """Runtime Object API""" import ctypes from tvm._ffi.base import _FFI_MODE, _LIB, _RUNTIME_ONLY, c_str, check_call from tvm._ffi.runtime_ctypes import ObjectRValueRef from . import _ffi_api, _ffi_node_api try: # pylint: disable=wrong-import-position,unused-import if _FFI_MODE == "ctypes": raise ImportError() from tvm._ffi._cy3.core import ( ObjectBase, PyNativeObject, _set_class_object, _set_class_object_generic, ) except (RuntimeError, ImportError) as error: # pylint: disable=wrong-import-position,unused-import if _FFI_MODE == "cython": raise error from tvm._ffi._ctypes.object import ObjectBase, PyNativeObject from tvm._ffi._ctypes.packed_func import _set_class_object, _set_class_object_generic def _new_object(cls): """Helper function for pickle""" return cls.__new__(cls) class Object(ObjectBase): """Base class for all tvm's runtime objects.""" __slots__ = [] def __repr__(self): return _ffi_node_api.AsRepr(self) def legacy_repr(self): return _ffi_node_api.AsLegacyRepr(self) def __dir__(self): class_names = dir(self.__class__) fnames = _ffi_node_api.NodeListAttrNames(self) size = fnames(-1) return sorted([fnames(i) for i in range(size)] + class_names) def 
__getattr__(self, name): # specially check handle since # this is required for PackedFunc calls if name == "handle": raise AttributeError("handle is not set") try: return _ffi_node_api.NodeGetAttr(self, name) except AttributeError: raise AttributeError(f"{type(self)} has no attribute {name}") from None def __hash__(self): return _ffi_api.ObjectPtrHash(self) def __eq__(self, other): return self.same_as(other) def __ne__(self, other): return not self.__eq__(other) def __reduce__(self): cls = type(self) return (_new_object, (cls,), self.__getstate__()) def __getstate__(self): handle = self.handle if handle is not None: return {"handle": _ffi_node_api.SaveJSON(self)} return {"handle": None} def __setstate__(self, state): # pylint: disable=assigning-non-slot, assignment-from-no-return handle = state["handle"] self.handle = None if handle is not None: self.__init_handle_by_constructor__(_ffi_node_api.LoadJSON, handle) def _move(self): """Create an RValue reference to the object and mark the object as moved. This is a advanced developer API that can be useful when passing an unique reference to an Object that you no longer needed to a function. A unique reference can trigger copy on write optimization that avoids copy when we transform an object. Note ---- All the reference of the object becomes invalid after it is moved. Be very careful when using this feature. Examples -------- .. code-block:: python x = tvm.tir.Var("x", "int32") x0 = x some_packed_func(x._move()) # both x0 and x will points to None after the function call. Returns ------- rvalue : The rvalue reference. """ return ObjectRValueRef(self) _set_class_object(Object)
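`Object.__reduce__`/`__getstate__` above implement the standard "recreate an empty instance, then restore it from a serialized payload" pickling pattern (TVM serializes the handle through `SaveJSON`/`LoadJSON`). A TVM-free sketch of the same shape; the `Saveable` class and its string payload are made up for illustration:

```python
import pickle

def _new_object(cls):
    # Helper used by __reduce__: allocate without running __init__.
    return cls.__new__(cls)

class Saveable:
    def __init__(self, payload):
        self.payload = payload

    def __reduce__(self):
        # (callable, args, state): pickle calls _new_object(Saveable),
        # then feeds the state dict to __setstate__.
        return (_new_object, (type(self),), self.__getstate__())

    def __getstate__(self):
        return {"handle": self.payload}

    def __setstate__(self, state):
        self.payload = state["handle"]

restored = pickle.loads(pickle.dumps(Saveable("hello")))
print(restored.payload)  # hello
```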
# tvm-main/python/tvm/runtime/vm.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=no-else-return, unidiomatic-typecheck, undefined-variable, invalid-name, redefined-builtin
"""
The Relay Virtual Machine runtime.

Implements a Python interface to executing the compiled VM object.
"""
import numpy as np

import tvm
from tvm.runtime import Module
from tvm._ffi.runtime_ctypes import TVMByteArray
from tvm._ffi import base as _base

from .object import Object
from . import _ffi_api, container
from ..rpc.base import RPC_SESS_MASK


def _convert(arg, cargs):
    def _gettype(arg):
        if isinstance(arg, np.float16):
            return "float16"
        elif isinstance(arg, (_base.integer_types, bool)):
            return "int32"
        else:
            return "float32"

    if isinstance(arg, Object):
        cargs.append(arg)
    elif arg is None:
        cargs.append(tvm.nd.array([], device=tvm.cpu(0)))
    elif isinstance(arg, np.ndarray):
        nd_arr = tvm.nd.array(arg, device=tvm.cpu(0))
        cargs.append(nd_arr)
    elif isinstance(arg, tvm.runtime.NDArray):
        cargs.append(arg)
    elif isinstance(arg, (tuple, list)):
        field_args = []
        for field in arg:
            _convert(field, field_args)
        cargs.append(container.tuple_object(field_args))
    elif isinstance(arg, (_base.numeric_types, bool)):
        dtype = _gettype(arg)
        value = tvm.nd.array(np.array(arg, dtype=dtype), device=tvm.cpu(0))
        cargs.append(value)
    elif isinstance(arg, str):
        cargs.append(arg)
    else:
        raise TypeError(f"Unsupported type: {type(arg)}")


def convert(args):
    cargs = []
    for arg in args:
        _convert(arg, cargs)
    return cargs


class Executable(object):
    """Relay VM executable"""

    def __init__(self, mod):
        self.mod = mod
        self._function_params = {}
        self._save = self.mod["save"]
        self._get_lib = self.mod["get_lib"]
        self._get_bytecode = self.mod["get_bytecode"]
        self._get_constants = self.mod["get_constants"]
        self._get_virtual_devices = self.mod["get_virtual_devices"]
        self._get_primitives = self.mod["get_primitives"]
        self._get_stats = self.mod["get_stats"]
        self._get_function_arity = self.mod["get_function_arity"]
        self._get_function_param_name = self.mod["get_function_param_name"]
        self._move_late_bound_consts = self.mod["move_late_bound_consts"]
        self._get_late_bound_consts = self.mod["get_late_bound_consts"]
        self._load_late_bound_consts = self.mod["load_late_bound_consts"]
        self._load_late_bound_consts_from_map = self.mod["load_late_bound_consts_from_map"]

    def save(self):
        """Save the Relay VM Executable.

        Returns
        -------
        code : bytearray
            The binary blob representing a serialized Relay VM executable. It
            can then be saved to disk and later deserialized into a new
            Executable.

        lib : :py:class:`~tvm.runtime.Module`
            The runtime module that contains the generated code. It is
            basically a library that is composed of hardware dependent code.

        Notes
        -----
        The returned code is organized with the following sections in order.

        - Global section. This section contains the globals used by the
          virtual machine.
        - Constant section. This section is used to store the constant pool of
          a virtual machine.
        - Primitive name section. This section is introduced to accommodate
          the list of primitive operator names that will be invoked by the
          virtual machine.
        - Code section. The VM functions, including bytecode, are sitting in
          this section.

        Examples
        --------

        .. code-block:: python

            import numpy as np
            import tvm
            from tvm import te
            from tvm import relay

            # define a simple network.
            x = relay.var('x', shape=(10, 10))
            f = relay.Function([x], x + x)
            mod = tvm.IRModule({"main": f})

            # create a Relay VM.
            dev = tvm.cpu()
            target = "llvm"
            executable = relay.vm.compile(mod, target)
            code, lib = executable.save()

            # save and load the code and lib file.
            tmp = tvm.contrib.utils.tempdir()
            path_lib = tmp.relpath("lib.so")
            lib.export_library(path_lib)
            with open(tmp.relpath("code.ro"), "wb") as fo:
                fo.write(code)
            loaded_lib = tvm.runtime.load_module(path_lib)
            loaded_code = bytearray(open(tmp.relpath("code.ro"), "rb").read())

            # deserialize.
            des_exec = tvm.runtime.vm.Executable.load_exec(loaded_code, loaded_lib)

            # execute the deserialized executable.
            x_data = np.random.rand(10, 10).astype('float32')
            des_vm = tvm.runtime.vm.VirtualMachine(des_exec, dev)
            res = des_vm.run(x_data)
            print(res.numpy())
        """
        return self._save(), self._get_lib()

    @staticmethod
    def load_exec(bytecode, lib):
        """Construct an executable from saved artifacts.

        Parameters
        ----------
        bytecode : bytearray
            The binary blob representing the Relay VM bytecode.

        lib : :py:class:`~tvm.runtime.Module`
            The runtime module that contains the generated code.

        Returns
        -------
        exec: Executable
            An executable constructed using the provided artifacts.
        """
        if isinstance(bytecode, (bytes, str)):
            bytecode = bytearray(bytecode)
        elif not isinstance(bytecode, (bytearray, TVMByteArray)):
            raise TypeError(
                "bytecode is expected to be the type of bytearray or TVMByteArray, but received "
                f"{type(bytecode)}"
            )

        if lib is not None and not isinstance(lib, tvm.runtime.Module):
            raise TypeError(
                f"lib is expected to be the type of tvm.runtime.Module, but received {type(lib)}"
            )

        return Executable(_ffi_api.Load_Executable(bytecode, lib))

    @property
    def lib(self):
        """Get the library that contains hardware dependent code.

        Returns
        -------
        ret : :py:class:`~tvm.runtime.Module`
            The runtime module that contains hardware dependent code.
        """
        return self._get_lib()

    @property
    def stats(self):
        """Get the statistics of the Relay VM executable.

        Returns
        -------
        ret : String
            The statistic information of the VM executable.
        """
        return self._get_stats()

    @property
    def primitive_ops(self):
        """Get the name of the primitive ops contained in the executable.

        Returns
        -------
        ret : List[String]
            The list of primitive ops.
        """
        ret = []
        num_primitives = _ffi_api.GetNumOfPrimitives(self.module)
        for i in range(num_primitives):
            ret.append(_ffi_api.GetPrimitiveFields(self.module, i))
        return ret

    @property
    def bytecode(self):
        """Get the bytecode of the Relay VM executable.

        Returns
        -------
        ret : String
            The bytecode of the executable.

        Notes
        -----
        The bytecode is in the following format:

            func_name reg_file_size num_instructions
            param1 param2 ... paramM
            instruction1
            instruction2
            ...
            instructionN

        Each instruction is printed in the following format:

            hash opcode field1 ... fieldX # The text format.

        The part starting from # is only used for visualization and debugging.
        The real serialized code doesn't contain it, therefore the deserializer
        doesn't need to deal with it as well.
        """
        return self._get_bytecode()

    @property
    def constants(self):
        """Returns a human-readable description of all the constants in the executable.
        Useful for debugging and diffing generated executables in unit tests."""
        return self._get_constants()

    @property
    def virtual_devices(self):
        """Returns a human-readable description of all the (virtual) devices in the executable."""
        return self._get_virtual_devices()

    @property
    def primitives(self):
        """Returns a human-readable description of all the primitives (ie PackedFuncs) in the
        executable"""
        return self._get_primitives()

    @property
    def globals(self):
        """Get the globals used by the Relay VM executable.

        Returns
        -------
        ret : List[String]
            The globals contained in the executable.
        """
        ret = []
        num_globals = _ffi_api.GetNumOfGlobals(self.module)
        for i in range(num_globals):
            ret.append(_ffi_api.GetGlobalFields(self.module, i))
        return ret

    @property
    def module(self):
        """Return the runtime module contained in a virtual machine executable."""
        return self.mod

    def get_function_params(self, func_name):
        """Get VM Function parameters"""
        if func_name in self._function_params:
            return self._function_params[func_name]
        arity = self._get_function_arity(func_name)
        assert arity >= 0
        params = []
        for i in range(arity):
            p = self._get_function_param_name(func_name, i)
            assert p
            params.append(p)
        self._function_params[func_name] = params
        return params

    def move_late_bound_consts(self, path, byte_limit):
        """Move all constants of byte size greater or equal to byte_limit to file at path"""
        return self._move_late_bound_consts(path, byte_limit)

    def get_late_bound_consts(self, byte_limit):
        """Return all constants of byte size greater or equal to byte_limit"""
        return self._get_late_bound_consts(byte_limit)

    def load_late_bound_consts(self, path):
        """Re-load constants previously saved to file at path"""
        return self._load_late_bound_consts(path)

    def load_late_bound_consts_from_map(self, map):
        """Re-load constants supplied in map"""
        return self._load_late_bound_consts_from_map(map)


class VirtualMachine(object):
    """Relay VM runtime.

    Parameters
    ----------
    exe : Executable
        The VM executable.

    device : tvm.runtime.Device or List[tvm.runtime.Device]
        The device(s) on which the model will run.
        Currently at most one device per device type is supported.

    memory_cfg : str or Dict[tvm.runtime.Device, str], optional
        Config the type of memory allocator. The allocator type can be ["naive",
        "pooled"]. If memory_cfg is None, all devices will use pooled allocator
        by default. If memory_cfg is string, all devices will use the specified
        allocator type. If memory_cfg is a dict, each device uses the allocator
        type specified in the dict, or pooled allocator if not specified in the
        dict.
    """

    NAIVE_ALLOCATOR = 1
    POOLED_ALLOCATOR = 2

    def __init__(self, exe, device, memory_cfg=None):
        """
        Construct a VirtualMachine wrapper class which provides a simple
        interface over the raw C++ Module based API.

        Parameters
        ----------
        exe: Union[Executable, Module]
            The executable either with the wrapper Python type or the raw runtime.Module.

            In most cases this will be the Python wrapper class tvm.runtime.vm.Executable but
            if you instead get the underlying runtime.Module subclass (i.e `exe.mod`) you
            can directly pass it to this method.
            This case can occur when doing things such as RPC where TVM's module APIs
            return the raw modules, not the wrapped modules. This constructor will
            handle this internally.

        device: Union[Device, List[Device]]
            The device, or devices on which to execute the VM code.

        memory_cfg: Optional[str]
            The allocator behavior to use for the VM.

        Returns
        -------
        vm: VirtualMachine
            A VM wrapper object.
        """
        if not isinstance(exe, Executable) and not isinstance(exe, Module):
            raise TypeError(
                f"exe is expected to be the type of Executable, but received {type(exe)}"
            )
        if not isinstance(exe, Executable):
            exe = Executable(exe)

        self.module = exe.mod["vm_load_executable"]()
        self._exec = exe
        self._init = self.module["init"]
        self._invoke = self.module["invoke"]
        self._invoke_stateful = self.module["invoke_stateful"]
        self._get_output = self.module["get_output"]
        self._get_num_outputs = self.module["get_num_outputs"]
        self._get_input_index = self.module["get_input_index"]
        self._set_input = self.module["set_input"]
        self._set_one_input = self.module["set_one_input"]
        self._set_outputs = self.module["set_outputs"]
        self._setup_device(device, memory_cfg)

    def _setup_device(self, dev, memory_cfg):
        """Init devices and allocators."""
        devs = dev
        if not isinstance(dev, (list, tuple)):
            if not isinstance(dev, tvm.runtime.Device):
                raise TypeError("dev is expected to be Device or List[Device]")
            devs = [dev]

        # CPU is required for executing shape functions
        if not any(c.device_type % RPC_SESS_MASK == tvm.cpu().device_type for c in devs):
            devs.append(tvm.cpu())

        default_alloc_type = VirtualMachine.POOLED_ALLOCATOR
        if memory_cfg is None:
            memory_cfg = {}
        elif isinstance(memory_cfg, str):
            assert memory_cfg in ["naive", "pooled"]
            if memory_cfg == "naive":
                default_alloc_type = VirtualMachine.NAIVE_ALLOCATOR
            memory_cfg = {}
        elif not isinstance(memory_cfg, dict):
            raise TypeError(
                f"memory_cfg is expected to be string or dictionary, but received {type(memory_cfg)}"
            )
        init_args = []
        for device in devs:
            init_args.append(device.device_type % RPC_SESS_MASK)
            init_args.append(device.device_id)
            alloc_type = memory_cfg[device] if device in memory_cfg else default_alloc_type
            init_args.append(alloc_type)
        self._init(*init_args)

    def set_input(self, func_name, *args, **kwargs):
        """Set the input to a function.

        If the device type and device id of an input tensor are the same as
        those of the target device, zero copy is used: the internal tensor is
        a reference to the memory allocated for the input one. Otherwise a new
        internal NDArray is created and the data is copied.

        Parameters
        ----------
        func_name : str
            The name of the function.
        args : list[tvm.runtime.NDArray] or list[np.ndarray]
            The arguments to the function.
        kwargs: dict of str to tvm.runtime.NDArray or np.ndarray
            Named arguments to the function.
        """
        if kwargs:
            # kwargs is a super set of the required function parameters. We
            # only find the ones that are needed.
            func_params = self._exec.get_function_params(func_name)
            new_args = [None] * len(func_params)
            cnt = 0
            for k in kwargs:
                if k in func_params:
                    idx = func_params.index(k)
                    new_args[idx] = kwargs[k]
                    cnt += 1
            assert len(args) + cnt == len(func_params)
            idx = 0
            for i, arg in enumerate(new_args):
                if arg is None:
                    new_args[i] = args[idx]
                    idx += 1
            args = new_args
        cargs = convert(args)
        self._set_input(func_name, *cargs)

    def set_one_input(self, func_name, *args, **kwargs):
        """Set the one input tensor with tag to a function.

        Parameters
        ----------
        func_name : str
            The name of the function.
        args : [str or int, tvm.runtime.NDArray]
            name or index of tensor and input tensor, optional
        kwargs: dict of str or int to tvm.runtime.NDArray, optional
            tagged arguments to the function.
        Only args or kwargs should exist
        """
        if kwargs:
            assert len(kwargs) == 1
            tag = next(iter(kwargs))
            if isinstance(tag, str):
                func_params = self._exec.get_function_params(func_name)
                assert tag in func_params
            self._set_one_input(func_name, tag, kwargs[tag])
        else:
            assert len(args) == 2
            self._set_one_input(func_name, args[0], args[1])

    def invoke(self, func_name, *args, **kwargs):
        """Invoke a function.

        Parameters
        ----------
        func_name : str
            The name of the function.
        args : list[tvm.runtime.NDArray] or list[np.ndarray]
            The arguments to the function.
        kwargs: dict of str to tvm.runtime.NDArray or np.ndarray
            Named arguments to the function.

        Returns
        -------
        result : Object
            The output.
        """
        if args or kwargs:
            self.set_input(func_name, *args, **kwargs)
        return self._invoke(func_name)

    def run(self, *args, **kwargs):
        """Run the main function.

        Parameters
        ----------
        args : list[tvm.runtime.NDArray] or list[np.ndarray]
            The arguments to the function.
        kwargs: dict of str to tvm.runtime.NDArray or np.ndarray
            Named arguments to the function.

        Returns
        -------
        result : Object
            The output.
        """
        return self.invoke("main", *args, **kwargs)

    def invoke_stateful(self, func_name, *args, **kwargs):
        """Invoke a function and ignore the returned result.

        Use this function when running over rpc because it is currently
        impossible to return a ADT object over rpc. To get the outputs, use
        :py:func`get_outputs`.

        Parameters
        ----------
        func_name : str
            The name of the function.
        args : list[tvm.runtime.NDArray] or list[np.ndarray]
            The arguments to the function.
        kwargs: dict of str to tvm.runtime.NDArray or np.ndarray
            Named arguments to the function.
        """
        if args or kwargs:
            self.set_input(func_name, *args, **kwargs)
        self._invoke_stateful(func_name)

    def invoke_with_outputs(self, func_name, input_args, output_args):
        # TODO(vvchernov): consider scenario then output tensors set once
        """Invoke a function with pre-allocated output tensors.

        The output tensors should be set every invocation.
        input_args can be None if the set_input method was used before.
        This invoke method avoids excess copying if memory for the output
        tensors was allocated before inference.

        Parameters
        ----------
        func_name : str
            The name of the function.
        input_args: dict of str to tvm.runtime.NDArray or np.ndarray
            Named arguments to the function.
        output_args : list[tvm.runtime.NDArray] or list[DLTensor]
            The output tensors of the function.
        """
        if input_args:
            func_params = self._exec.get_function_params(func_name)
            new_args = [None] * len(func_params)
            cnt = 0
            for k in input_args:
                if k in func_params:
                    idx = func_params.index(k)
                    new_args[idx] = input_args[k]
                    cnt += 1
            assert cnt == len(func_params)
            cargs = convert(new_args)
            self._set_input(func_name, *cargs)
        self._set_outputs(func_name, *output_args)
        self._invoke(func_name)

    def get_outputs(self):
        """Get the outputs from a call to :py:func`invoke_stateful`.

        Returns
        -------
        outputs : List[NDArray]
        """
        return [self._get_output(i) for i in range(self._get_num_outputs())]

    def get_input_index(self, input_name, func_name="main"):
        """Get inputs index via input name.

        Parameters
        ----------
        name : str
            The input key name
        func_name : str
            The function name

        Returns
        -------
        index: int
            The input index. -1 will be returned if the given input name is
            not found.
        """
        return self._get_input_index(input_name, func_name)

    def benchmark(
        self,
        device,
        *args,
        func_name="main",
        repeat=5,
        number=5,
        min_repeat_ms=None,
        limit_zero_time_iterations=100,
        end_to_end=False,
        cooldown_interval_ms=0,
        repeats_to_cooldown=1,
        **kwargs,
    ):
        """Calculate runtime of a function by repeatedly calling it.

        Use this function to get an accurate measurement of the runtime of a function. The function
        is run multiple times in order to account for variability in measurements, processor speed
        or other external factors.  Mean, median, standard deviation, min and max runtime are all
        reported. On GPUs, CUDA and ROCm specifically, special on-device timers are used so that
        synchronization and data transfer operations are not counted towards the runtime. This
        allows for fair comparison of runtimes across different functions and models. The
        `end_to_end` flag switches this behavior to include data transfer operations in the
        runtime.

        The benchmarking loop looks approximately like so:

        .. code-block:: python

            for r in range(repeat):
                time_start = now()
                for n in range(number):
                    func_name()
                time_end = now()
                total_times.append((time_end - time_start)/number)


        Parameters
        ----------
        func_name : str
            The function to benchmark

        repeat : int
            Number of times to run the outer loop of the timing code (see above). The output will
            contain `repeat` number of datapoints.

        number : int
            Number of times to run the inner loop of the timing code. This inner loop is run in
            between the timer starting and stopping. In order to amortize any timing overhead,
            `number` should be increased when the runtime of the function is small (less than
            1/10 of a millisecond).

        min_repeat_ms : Optional[int]
            If set, the inner loop will be run until it takes longer than `min_repeat_ms`
            milliseconds. This can be used to ensure that the function is run enough to get an
            accurate measurement.

        limit_zero_time_iterations : Optional[int]
            The maximum number of repeats when measured time is equal to 0.
            It helps to avoid hanging during measurements.

        end_to_end : bool
            If set, include time to transfer input tensors to the device and time to transfer
            returned tensors in the total runtime. This will give accurate timings for end to end
            workloads.

        cooldown_interval_ms: Optional[int]
            The cooldown interval in milliseconds between the number of repeats defined by
            `repeats_to_cooldown`.

        repeats_to_cooldown: Optional[int]
            The number of repeats before the cooldown is activated.

        args : Sequence[Object]
            Arguments to the function. These are cached before running timing code, so that data
            transfer costs are not counted in the runtime.

        kwargs : Dict[str, Object]
            Named arguments to the function. These are cached like `args`.

        Returns
        -------
        timing_results : BenchmarkResult
            Runtimes of the function. Use `.mean` to access the mean runtime, use `.results` to
            access the individual runtimes (in seconds).
        """
        min_repeat_ms = 0 if min_repeat_ms is None else min_repeat_ms
        if end_to_end:
            # We need to unpack keyword arguments into positional arguments
            packed_args = list(args)
            for k, v in kwargs.items():
                i = self.get_input_index(k, func_name)
                if i < 0:
                    raise TypeError(f"{func_name}() got an unexpected keyword argument '{k}'")
                while i >= len(packed_args):
                    packed_args.append(None)
                packed_args[i] = v
            return self.module.time_evaluator(
                "invoke_return_to_device",
                device,
                repeat=repeat,
                number=number,
                min_repeat_ms=min_repeat_ms,
                limit_zero_time_iterations=limit_zero_time_iterations,
            )(func_name, device.device_type % RPC_SESS_MASK, device.device_id, *packed_args)
        if args or kwargs:
            self.set_input(func_name, *args, **kwargs)
        return self.module.time_evaluator(
            "invoke",
            device,
            repeat=repeat,
            number=number,
            min_repeat_ms=min_repeat_ms,
            limit_zero_time_iterations=limit_zero_time_iterations,
            cooldown_interval_ms=cooldown_interval_ms,
            repeats_to_cooldown=repeats_to_cooldown,
        )(func_name)
# tvm-main/python/tvm/runtime/module.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, unused-import, import-outside-toplevel, inconsistent-return-statements """Runtime Module namespace.""" import os import ctypes import struct from typing import Sequence import numpy as np import tvm._ffi from tvm._ffi.base import _LIB, check_call, c_str, string_types, _RUNTIME_ONLY from tvm._ffi.libinfo import find_include_path from .packed_func import PackedFunc, PackedFuncHandle, _set_class_module from . import _ffi_api class BenchmarkResult: """Runtimes from benchmarking""" def __init__(self, results: Sequence[float]): """Construct a new BenchmarkResult from a sequence of runtimes. Parameters ---------- results : Sequence[float] Raw times from benchmarking Attributes ---------- min : float Minimum runtime in seconds of all results. mean : float Mean runtime in seconds of all results. If py:meth:`Module.time_evaluator` or `benchmark` is called with `number` > 0, then each result is already the mean of a `number` of runtimes, so this becomes the mean of means. median : float Median runtime in seconds of all results. 
If py:meth:`Module.time_evaluator` is called with `number` > 0, then each result is already the mean of a `number` of runtimes, so this becomes the median of means. max : float Maximum runtime in seconds of all results. If py:meth:`Module.time_evaluator` is called with `number` > 0, then each result is already the mean of a `number` of runtimes, so this becomes the maximum of those means. std : float Standard deviation in seconds of runtimes. If py:meth:`Module.time_evaluator` is called with `number` > 0, then each result is already the mean of a `number` of runtimes, so this becomes the standard deviation of means. results : Sequence[float] The collected runtimes (in seconds). This may be a series of mean runtimes if py:meth:`Module.time_evaluator` or `benchmark` was run with `number` > 1. """ self.results = results self.mean = np.mean(self.results) self.std = np.std(self.results) self.median = np.median(self.results) self.min = np.min(self.results) self.max = np.max(self.results) def __repr__(self): return ( f"BenchmarkResult(min={self.min}, mean={self.mean}, median={self.median}, " f"max={self.max}, std={self.std}, results={self.results})" ) def __str__(self): return ( f"Execution time summary:\n" f"{'mean (ms)':^12} {'median (ms)':^12} {'max (ms)':^12} " f"{'min (ms)':^12} {'std (ms)':^12}\n" f"{self.mean * 1000:^12.4f} {self.median * 1000:^12.4f} {self.max * 1000:^12.4f} " f"{self.min * 1000:^12.4f} {self.std * 1000:^12.4f}" " " ) class ModulePropertyMask(object): """Runtime Module Property Mask.""" BINARY_SERIALIZABLE = 0b001 RUNNABLE = 0b010 DSO_EXPORTABLE = 0b100 class Module(object): """Runtime Module.""" __slots__ = ["handle", "_entry", "entry_name"] def __init__(self, handle): self.handle = handle self._entry = None self.entry_name = "__tvm_main__" def __del__(self): if _LIB: check_call(_LIB.TVMModFree(self.handle)) def __hash__(self): return ctypes.cast(self.handle, ctypes.c_void_p).value @property def entry_func(self): """Get the entry function Returns 
------- f : tvm.runtime.PackedFunc The entry function if exist """ if self._entry: return self._entry self._entry = self.get_function(self.entry_name) return self._entry def implements_function(self, name, query_imports=False): """Returns True if the module has a definition for the global function with name. Note that has_function(name) does not imply get_function(name) is non-null since the module may be, eg, a CSourceModule which cannot supply a packed-func implementation of the function without further compilation. However, get_function(name) non null should always imply has_function(name). Parameters ---------- name : str The name of the function query_imports : bool Whether to also query modules imported by this module. Returns ------- b : Bool True if module (or one of its imports) has a definition for name. """ return _ffi_api.ModuleImplementsFunction(self, name, query_imports) def get_function(self, name, query_imports=False): """Get function from the module. Parameters ---------- name : str The name of the function query_imports : bool Whether also query modules imported by this module. Returns ------- f : tvm.runtime.PackedFunc The result function. """ ret_handle = PackedFuncHandle() check_call( _LIB.TVMModGetFunction( self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle) ) ) if not ret_handle.value: raise AttributeError(f"Module has no function '{name}'") return PackedFunc(ret_handle, False) def import_module(self, module): """Add module to the import list of current one. Parameters ---------- module : tvm.runtime.Module The other module. 
""" check_call(_LIB.TVMModImport(self.handle, module.handle)) def __getitem__(self, name): if not isinstance(name, string_types): raise ValueError("Can only take string as function name") return self.get_function(name) def __eq__(self, other): return self.handle.value == other.handle.value def __call__(self, *args): if self._entry: return self._entry(*args) # pylint: disable=not-callable return self.entry_func(*args) def __repr__(self): return f"Module({self.type_key}, {self.handle.value:x})" @property def type_key(self): """Get type key of the module.""" return _ffi_api.ModuleGetTypeKey(self) @property def format(self): """Get the format of the module.""" return _ffi_api.ModuleGetFormat(self) def get_source(self, fmt=""): """Get source code from module, if available. Parameters ---------- fmt : str, optional The specified format. Returns ------- source : str The result source code. """ return _ffi_api.ModuleGetSource(self, fmt) @property def imported_modules(self): """Get imported modules Returns ---------- modules : list of Module The module """ nmod = _ffi_api.ModuleImportsSize(self) return [_ffi_api.ModuleGetImport(self, i) for i in range(nmod)] def get_property_mask(self): """Get the runtime module property mask. The mapping is stated in ModulePropertyMask. Returns ------- mask : int Bitmask of runtime module property """ return _ffi_api.ModuleGetPropertyMask(self) @property def is_binary_serializable(self): """Returns true if module is 'binary serializable', ie can be serialzed into binary stream and loaded back to the runtime module. Returns ------- b : Bool True if the module is binary serializable. """ return (self.get_property_mask() & ModulePropertyMask.BINARY_SERIALIZABLE) != 0 @property def is_runnable(self): """Returns true if module is 'runnable'. ie can be executed without any extra compilation/linking steps. Returns ------- b : Bool True if the module is runnable. 
""" return (self.get_property_mask() & ModulePropertyMask.RUNNABLE) != 0 @property def is_dso_exportable(self): """Returns true if module is 'DSO exportable', ie can be included in result of export_library by the external compiler directly. Returns ------- b : Bool True if the module is DSO exportable. """ return (self.get_property_mask() & ModulePropertyMask.DSO_EXPORTABLE) != 0 def save(self, file_name, fmt=""): """Save the module to file. This do not save the dependent device modules. See also export_shared Parameters ---------- file_name : str The name of the file. fmt : str The format of the file. See Also -------- runtime.Module.export_library : export the module to shared library. """ _ffi_api.ModuleSaveToFile(self, file_name, fmt) def time_evaluator( self, func_name, dev, number=10, repeat=1, min_repeat_ms=0, limit_zero_time_iterations=100, cooldown_interval_ms=0, repeats_to_cooldown=1, f_preproc="", ): """Get an evaluator that measures time cost of running function. Parameters ---------- func_name: str The name of the function in the module. dev: Device The device we should run this function on. number: int The number of times to run this function for taking average. We call these runs as one `repeat` of measurement. repeat: int, optional The number of times to repeat the measurement. In total, the function will be invoked (1 + number x repeat) times, where the first one is warm up and will be discarded. The returned result contains `repeat` costs, each of which is an average of `number` costs. min_repeat_ms: int, optional The minimum duration of one `repeat` in milliseconds. By default, one `repeat` contains `number` runs. If this parameter is set, the parameters `number` will be dynamically adjusted to meet the minimum duration requirement of one `repeat`. i.e., When the run time of one `repeat` falls below this time, the `number` parameter will be automatically increased. 
limit_zero_time_iterations: int, optional The maximum number of repeats when measured time is equal to 0. It helps to avoid hanging during measurements. cooldown_interval_ms: int, optional The cooldown interval in milliseconds between the number of repeats defined by `repeats_to_cooldown`. repeats_to_cooldown: int, optional The number of repeats before the cooldown is activated. f_preproc: str, optional The preprocess function name we want to execute before executing the time evaluator. Note ---- The function will be invoked (1 + number x repeat) times, with the first call discarded in case there is lazy initialization. Returns ------- ftimer : function The function that takes same argument as func and returns a BenchmarkResult. The ProfileResult reports `repeat` time costs in seconds. """ try: feval = _ffi_api.RPCTimeEvaluator( self, func_name, dev.device_type, dev.device_id, number, repeat, min_repeat_ms, limit_zero_time_iterations, cooldown_interval_ms, repeats_to_cooldown, f_preproc, ) def evaluator(*args): """Internal wrapped evaluator.""" # Wrap feval so we can add more stats in future. blob = feval(*args) fmt = "@" + ("d" * repeat) results = struct.unpack(fmt, blob) return BenchmarkResult(results) return evaluator except NameError: raise NameError("time_evaluator is only supported when RPC is enabled") def _collect_from_import_tree(self, filter_func): """Helper function to collect modules from the tree matching a filter_func, then return it. Parameters ---------- filter_func : Callable[[Module], bool] A function which is invoked for each Module discovered in the import tree (including self). Returns ------- list[Module] : A list of matching Module. """ visited, stack, dso_modules = set(), [], [] # append root module visited.add(self) stack.append(self) while stack: module = stack.pop() assert ( module.is_dso_exportable or module.is_binary_serializable ), f"Module {module.type_key} should be either dso exportable or binary serializable." 
            if filter_func(module):
                dso_modules.append(module)
            for m in module.imported_modules:
                if m not in visited:
                    visited.add(m)
                    stack.append(m)
        return dso_modules

    def _collect_dso_modules(self):
        return self._collect_from_import_tree(lambda m: m.is_dso_exportable)

    def export_library(self, file_name, fcompile=None, addons=None, workspace_dir=None, **kwargs):
        """
        Export the module and all imported modules into a single device library.

        This function only works on host LLVM modules. Other runtime::Module
        subclasses will work with this API, but they must implement the save
        and load mechanisms of modules completely, including saving from
        streams and files. This will pack your non-shared library module
        into a single shared library which can later be loaded by TVM.

        Parameters
        ----------
        file_name : str
            The name of the shared library.

        fcompile : function(target, file_list, kwargs), optional
            The compilation function to use to create the final library object during
            export.

            For example, when fcompile=_cc.create_shared, or when it is not
            supplied but module is "llvm," this is used to link all produced
            artifacts into a final dynamic library.

            This behavior is controlled by the type of object exported.
            If fcompile has attribute object_format, will compile host library
            to that format. Otherwise, will use default format "o".

        workspace_dir : str, optional
            The path of the directory used to create the intermediate
            artifacts when exporting the module.
            If this is not provided a temporary dir will be created.

        kwargs : dict, optional
            Additional arguments passed to fcompile

        Returns
        -------
        result of fcompile() : unknown, optional
            If the compilation function returns an artifact, it is returned
            via export_library.
        """
        # NOTE: this function depends on contrib library features
        # which are only available when the TVM compiler is available.
        if _RUNTIME_ONLY:
            raise RuntimeError("Cannot call export_library in runtime only mode")
        # Extra dependencies during runtime.
        from pathlib import Path

        from tvm.contrib import cc as _cc, tar as _tar, utils as _utils

        if isinstance(file_name, Path):
            file_name = str(file_name)

        if self.type_key == "stackvm":
            if not file_name.endswith(".stackvm"):
                raise ValueError(
                    f"Module[{self.type_key}]: can only be saved as stackvm format. "
                    "Did you build with LLVM enabled?"
                )
            self.save(file_name)
            return

        modules = self._collect_dso_modules()
        if workspace_dir is None:
            temp = _utils.tempdir()
            workspace_dir = temp.temp_dir
        files = addons if addons else []
        is_system_lib = False
        has_c_module = False
        system_lib_prefix = None
        llvm_target_string = None
        global_object_format = "o"
        for index, module in enumerate(modules):
            if fcompile is not None and hasattr(fcompile, "object_format"):
                if module.type_key == "c":
                    assert module.format in [
                        "c",
                        "cc",
                        "cpp",
                        "cu",
                    ], "The module.format needs to be either c, cc, cpp or cu."
                    object_format = module.format
                    has_c_module = True
                else:
                    global_object_format = object_format = fcompile.object_format
            else:
                if module.type_key == "c":
                    if len(module.format) > 0:
                        assert module.format in [
                            "c",
                            "cc",
                            "cpp",
                            "cu",
                        ], "The module.format needs to be either c, cc, cpp, or cu."
object_format = module.format else: object_format = "c" if "cc" in kwargs: if kwargs["cc"] == "nvcc": object_format = "cu" has_c_module = True else: assert module.type_key == "llvm" or module.type_key == "static_library" global_object_format = object_format = "o" path_obj = os.path.join(workspace_dir, f"lib{index}.{object_format}") module.save(path_obj) files.append(path_obj) if module.type_key == "llvm": is_system_lib = module.get_function("__tvm_is_system_module")() llvm_target_string = module.get_function("_get_target_string")() system_lib_prefix = module.get_function("__tvm_get_system_lib_prefix")() if not fcompile: if file_name.endswith(".tar"): fcompile = _tar.tar else: fcompile = _cc.create_shared if llvm_target_string is None and hasattr(fcompile, "get_target_triple"): triple = fcompile.get_target_triple() assert triple, "Target triple should not be empty" llvm_target_string = "llvm -mtriple " + triple if getattr(fcompile, "need_system_lib", False) and not is_system_lib: raise ValueError(f"{str(fcompile)} need --system-lib option") if self.imported_modules: pack_lib_prefix = system_lib_prefix if system_lib_prefix else "" if enabled("llvm") and llvm_target_string: path_obj = os.path.join( workspace_dir, f"{pack_lib_prefix}devc.{global_object_format}" ) m = _ffi_api.ModulePackImportsToLLVM( self, is_system_lib, llvm_target_string, pack_lib_prefix ) m.save(path_obj) files.append(path_obj) else: path_cc = os.path.join(workspace_dir, f"{pack_lib_prefix}devc.c") with open(path_cc, "w") as f: f.write(_ffi_api.ModulePackImportsToC(self, is_system_lib, pack_lib_prefix)) files.append(path_cc) # The imports could contain a c module but the object format could be tar # Thus, it would not recognize the following include paths as options # which are there assuming a c compiler is the fcompile. 
        if has_c_module and not file_name.endswith(".tar"):
            options = []
            if "options" in kwargs:
                opts = kwargs["options"]
                options = opts if isinstance(opts, (list, tuple)) else [opts]
            opts = options + ["-I" + path for path in find_include_path()]
            kwargs.update({"options": opts})

        return fcompile(file_name, files, **kwargs)


def system_lib(symbol_prefix=""):
    """Get system-wide library module singleton.

    The system lib is a global module whose functions register themselves at
    startup, unlike normal DSO modules, which need to be loaded explicitly.
    It is useful in environments where a dynamic loading API like dlopen is
    banned.

    To build a system lib, simply specify the target option ``llvm --system-lib``.
    The system lib will be available as long as the result code is linked by the program.

    The system lib is intended to be linked and loaded during the entire life-cycle of the program.
    If you want dynamic loading features, use dso modules instead.

    Parameters
    ----------
    symbol_prefix: Optional[str]
        Optional symbol prefix that can be used for search. When we look up a symbol,
        symbol_prefix + name is searched first, then the name without symbol_prefix.

    Returns
    -------
    module : runtime.Module
        The system-wide library module.
    """
    return _ffi_api.SystemLib(symbol_prefix)


def load_module(path, fmt=""):
    """Load module from file.

    Parameters
    ----------
    path : str
        The path to the module file.

    fmt : str, optional
        The format of the file, if not specified
        it will be inferred from suffix of the file.

    Returns
    -------
    module : runtime.Module
        The loaded module

    Note
    ----
    This function will automatically call
    cc.create_shared if the path is in format .o or .tar
    """
    if os.path.isfile(path):
        path = os.path.realpath(path)
    else:
        raise ValueError(f"cannot find file {path}")

    # High level handling for .o and .tar file.
    # We support this to be consistent with RPC module load.
    if path.endswith(".o"):
        # Extra dependencies during runtime.
from tvm.contrib import cc as _cc _cc.create_shared(path + ".so", path) path += ".so" elif path.endswith(".tar"): # Extra dependencies during runtime. from tvm.contrib import cc as _cc, utils as _utils, tar as _tar tar_temp = _utils.tempdir(custom_path=path.replace(".tar", "")) _tar.untar(path, tar_temp.temp_dir) files = [tar_temp.relpath(x) for x in tar_temp.listdir()] _cc.create_shared(path + ".so", files) path += ".so" # Redirect to the load API return _ffi_api.ModuleLoadFromFile(path, fmt) def load_static_library(path, func_names): """Load the .o library at path which implements functions with func_names. Unlike the generic load_module the result will remain as a static_library and will not be relinked on-the-fly into a .so library.""" return _ffi_api.ModuleLoadStaticLibrary(path, func_names) def enabled(target): """Whether module runtime is enabled for target Parameters ---------- target : str The target device type. Returns ------- enabled : bool Whether runtime is enabled. Examples -------- The following code checks if gpu is enabled. >>> tvm.runtime.enabled("gpu") """ return _ffi_api.RuntimeEnabled(target) def num_threads() -> int: """Get the number of threads in use by the TVM runtime. Returns ------- int Number of threads in use. """ return _ffi_api.NumThreads() _set_class_module(Module)
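The `_collect_from_import_tree` helper above walks the module import graph iteratively, using a visited set so a module shared by several importers is collected only once. A minimal stand-alone sketch of the same traversal, using a hypothetical `MockModule` in place of `runtime.Module` (names and the diamond graph are illustrative, not part of TVM):

```python
# Sketch of the iterative DFS used by Module._collect_from_import_tree.
# MockModule is a hypothetical stand-in for runtime.Module; only the
# traversal logic mirrors the real implementation.
class MockModule:
    def __init__(self, name, imported_modules=(), dso_exportable=True):
        self.name = name
        self.imported_modules = list(imported_modules)
        self.is_dso_exportable = dso_exportable


def collect_from_import_tree(root, filter_func):
    visited, stack, matched = {root}, [root], []
    while stack:
        module = stack.pop()
        if filter_func(module):
            matched.append(module)
        for m in module.imported_modules:
            if m not in visited:  # each module is visited once, even in diamonds
                visited.add(m)
                stack.append(m)
    return matched


# A diamond-shaped import graph: root imports a and b, both import shared.
shared = MockModule("shared")
a = MockModule("a", [shared])
b = MockModule("b", [shared], dso_exportable=False)
root = MockModule("root", [a, b])

dso = collect_from_import_tree(root, lambda m: m.is_dso_exportable)
print(sorted(m.name for m in dso))  # ['a', 'root', 'shared']
```

Note that `shared` appears once in the result despite being reachable along two import edges, which is exactly why the real code keeps a `visited` set rather than recursing naively.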
tvm-main/python/tvm/runtime/object_path.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ ObjectPath class that represents a path from a root object to one of its descendants via attribute access, array indexing etc. """ from typing import Optional import tvm._ffi from tvm.runtime import Object from . import _ffi_node_api __all__ = ( "ObjectPath", "RootPath", "AttributeAccessPath", "UnknownAttributeAccessPath", "ArrayIndexPath", "MissingArrayElementPath", "MapValuePath", "MissingMapEntryPath", "ObjectPathPair", ) @tvm._ffi.register_object("ObjectPath") class ObjectPath(Object): """ Path to an object from some root object. """ def __init__(self) -> None: super().__init__() raise ValueError( "ObjectPath can't be initialized directly. 
" "Use ObjectPath.root() to create a path to the root object" ) @staticmethod def root(root_name: Optional[str] = None) -> "ObjectPath": return _ffi_node_api.ObjectPathRoot(root_name) def __eq__(self, other): return _ffi_node_api.ObjectPathEqual(self, other) def __ne__(self, other): return not _ffi_node_api.ObjectPathEqual(self, other) @property def parent(self) -> "ObjectPath": return _ffi_node_api.ObjectPathGetParent(self) def __len__(self) -> int: return _ffi_node_api.ObjectPathLength(self) def get_prefix(self, length) -> "ObjectPath": return _ffi_node_api.ObjectPathGetPrefix(self, length) def is_prefix_of(self, other) -> "ObjectPath": return _ffi_node_api.ObjectPathIsPrefixOf(self, other) def attr(self, attr_key) -> "ObjectPath": return _ffi_node_api.ObjectPathAttr(self, attr_key) def array_index(self, index) -> "ObjectPath": return _ffi_node_api.ObjectPathArrayIndex(self, index) def missing_array_element(self, index) -> "ObjectPath": return _ffi_node_api.ObjectPathMissingArrayElement(self, index) def map_value(self, key) -> "ObjectPath": return _ffi_node_api.ObjectPathMapValue(self, tvm.runtime.convert(key)) def missing_map_entry(self) -> "ObjectPath": return _ffi_node_api.ObjectPathMissingMapEntry(self) __hash__ = Object.__hash__ @tvm._ffi.register_object("RootPath") class RootPath(ObjectPath): pass @tvm._ffi.register_object("AttributeAccessPath") class AttributeAccessPath(ObjectPath): pass @tvm._ffi.register_object("UnknownAttributeAccessPath") class UnknownAttributeAccessPath(ObjectPath): pass @tvm._ffi.register_object("ArrayIndexPath") class ArrayIndexPath(ObjectPath): pass @tvm._ffi.register_object("MissingArrayElementPath") class MissingArrayElementPath(ObjectPath): pass @tvm._ffi.register_object("MapValuePath") class MapValuePath(ObjectPath): pass @tvm._ffi.register_object("MissingMapEntryPath") class MissingMapEntryPath(ObjectPath): pass @tvm._ffi.register_object("ObjectPathPair") class ObjectPathPair(Object): """ Pair of ObjectPaths, one for each 
object being tested for structural equality. """ @property def lhs_path(self) -> ObjectPath: return _ffi_node_api.ObjectPathPairLhsPath(self) @property def rhs_path(self) -> ObjectPath: return _ffi_node_api.ObjectPathPairRhsPath(self)
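The FFI-backed `ObjectPath` above is an immutable, singly linked chain: each accessor (`attr`, `array_index`, `map_value`, ...) returns a new node pointing back at its parent, and `__len__`/`parent` walk that chain. The real implementation lives in C++ behind `_ffi_node_api`; a rough pure-Python sketch of the same structure (hypothetical `SimplePath`, for illustration only):

```python
# Pure-Python sketch of ObjectPath's linked structure; the real class
# delegates every operation to C++ via _ffi_node_api.
class SimplePath:
    def __init__(self, parent, label):
        self.parent = parent  # None only for the root node
        self.label = label

    @staticmethod
    def root(name="<root>"):
        return SimplePath(None, name)

    def attr(self, key):
        # Each accessor returns a NEW node; paths are never mutated in place.
        return SimplePath(self, f".{key}")

    def array_index(self, index):
        return SimplePath(self, f"[{index}]")

    def __len__(self):
        node, n = self, 0
        while node is not None:
            node, n = node.parent, n + 1
        return n

    def __str__(self):
        parts, node = [], self
        while node is not None:
            parts.append(node.label)
            node = node.parent
        return "".join(reversed(parts))


p = SimplePath.root().attr("body").array_index(2).attr("value")
print(str(p), len(p))  # <root>.body[2].value 4
```

Because nodes are immutable and share parents, keeping many paths into the same object tree is cheap, which is what makes `ObjectPath` practical for structural-equality error reporting.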
tvm-main/python/tvm/runtime/name_transforms.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ Name transformation functions shared in Backend and Runtime """ from . import _ffi_api def sanitize_name(original_name: str): """Sanitize name for output into compiler artifacts Parameters ---------- original_name : str Original name to sanitize """ return _ffi_api.SanitizeName(original_name)
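`sanitize_name` delegates to the C++ `SanitizeName` routine, whose job is roughly to turn an arbitrary operator or tensor name into something legal in C-style identifiers. A hedged Python approximation of that behavior (an assumption about the exact rules — details such as handling of leading digits may differ from the real C++ implementation):

```python
import re


def sanitize_name_sketch(original_name: str) -> str:
    # Replace every character outside [A-Za-z0-9_] with an underscore.
    # This approximates, but is not guaranteed to match, the C++ SanitizeName.
    return re.sub(r"[^A-Za-z0-9_]", "_", original_name)


print(sanitize_name_sketch("conv2d/add:0"))  # conv2d_add_0
```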
tvm-main/python/tvm/runtime/_ffi_api.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """FFI APIs for tvm.runtime""" import tvm._ffi # Exports functions registered via TVM_REGISTER_GLOBAL with the "runtime" prefix. # e.g. TVM_REGISTER_GLOBAL("runtime.ModuleLoadFromFile") tvm._ffi._init_api("runtime", __name__)
tvm-main/python/tvm/runtime/ndarray.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, unused-import, redefined-outer-name
"""Runtime NDArray API"""
import ctypes
import warnings
import numpy as np

try:
    import ml_dtypes
except ImportError:
    ml_dtypes = None

import tvm._ffi
from tvm._ffi.base import _LIB, check_call, c_array, string_types, _FFI_MODE
from tvm._ffi.runtime_ctypes import DataType, Device, TVMArray, TVMArrayHandle
from tvm._ffi.runtime_ctypes import DataTypeCode, tvm_shape_index_t
from . import _ffi_api

try:
    # pylint: disable=wrong-import-position
    if _FFI_MODE == "ctypes":
        raise ImportError()
    from tvm._ffi._cy3.core import _set_class_ndarray, _make_array, _from_dlpack
    from tvm._ffi._cy3.core import NDArrayBase
except (RuntimeError, ImportError) as error:
    # pylint: disable=wrong-import-position
    if _FFI_MODE == "cython":
        raise error
    from tvm._ffi._ctypes.ndarray import _set_class_ndarray, _make_array, _from_dlpack
    from tvm._ffi._ctypes.ndarray import NDArrayBase


@tvm._ffi.register_object("runtime.NDArray")
class NDArray(NDArrayBase):
    """Lightweight NDArray class of TVM runtime.

    Strictly speaking, this is only an array container (a buffer object);
    no arithmetic operations are defined.
    All operations are performed by TVM functions.

    The goal is not to re-build yet another array library. Instead, this is
    a minimal data structure to demonstrate how we can use TVM in an
    existing project that might have its own array containers.
    """

    @property
    def dtype(self):
        """Type of this array"""
        return str(self.handle.contents.dtype)

    @property
    def device(self):
        """Device of this array"""
        return self.handle.contents.device

    def __dlpack__(self, stream=None):  # pylint: disable=unused-argument
        """Export the array for consumption by from_dlpack() as a DLPack capsule.

        Parameters
        ----------
        stream : int, optional
            A Python integer representing a pointer to a stream.
            Stream is provided by the consumer to the producer to instruct the producer
            to ensure that operations can safely be performed on the array.

        Returns
        -------
        capsule : PyCapsule
            A DLPack capsule for the array, containing a DLPackManagedTensor.
        """
        return self.to_dlpack()

    def __dlpack_device__(self):
        """Return a tuple of device_type, device_id in DLPack convention"""
        return (self.handle.contents.device.device_type, self.handle.contents.device.device_id)

    def __hash__(self):
        return ctypes.cast(self.handle, ctypes.c_void_p).value

    def __eq__(self, other):
        return self.same_as(other)

    def __ne__(self, other):
        return not self.__eq__(other)

    def same_as(self, other):
        """Check object identity equality

        Parameters
        ----------
        other : object
            The other object to compare to

        Returns
        -------
        same : bool
            Whether other is same as self.
""" if not isinstance(other, NDArrayBase): return False return self.__hash__() == other.__hash__() def __setitem__(self, in_slice, value): """Set ndarray value""" if ( not isinstance(in_slice, slice) or in_slice.start is not None or in_slice.stop is not None ): raise ValueError("Array only support set from numpy array") if isinstance(value, NDArrayBase): if value.handle is not self.handle: value.copyto(self) elif isinstance(value, (np.ndarray, np.generic)): self.copyfrom(value) else: raise TypeError(f"type {type(value)} not supported") def copyfrom(self, source_array): """Perform a synchronous copy from the array. Parameters ---------- source_array : array_like The data source we should like to copy from. Returns ------- arr : NDArray Reference to self. """ if isinstance(source_array, NDArrayBase): source_array.copyto(self) return self if not isinstance(source_array, np.ndarray): try: source_array = np.array(source_array, dtype=self.dtype) except: raise TypeError( f"array must be an array_like data, type {type(source_array)} is not supported" ) t = DataType(self.dtype) shape, dtype = self.shape, self.dtype if t.lanes > 1: shape = shape + (t.lanes,) t.lanes = 1 dtype = str(t) if source_array.shape != shape: raise ValueError( f"array shape do not match the shape of NDArray {source_array.shape} vs {shape}" ) numpy_str_map = DataType.NUMPY2STR np_dtype_str = ( numpy_str_map[source_array.dtype] if source_array.dtype in numpy_str_map else str(source_array.dtype) ) if (not source_array.flags["C_CONTIGUOUS"]) or ( dtype == "bfloat16" or dtype != np_dtype_str ): source_array = np.ascontiguousarray( source_array, dtype="uint16" if dtype == "bfloat16" else dtype ) assert source_array.flags["C_CONTIGUOUS"] data = source_array.ctypes.data_as(ctypes.c_void_p) nbytes = ctypes.c_size_t(source_array.size * source_array.dtype.itemsize) check_call(_LIB.TVMArrayCopyFromBytes(self.handle, data, nbytes)) return self def __repr__(self): res = f"<tvm.nd.NDArray shape={self.shape}, 
{self.device}>\n" res += self.numpy().__repr__() return res def __str__(self): return str(self.numpy()) def asnumpy(self): """Convert this array to numpy array. This API will be deprecated in TVM v0.8 release. Please use `numpy` instead.""" warnings.warn( "NDArray.asnumpy() will be deprecated in TVM v0.8 release. " "Please use NDArray.numpy() instead.", DeprecationWarning, ) return self.numpy() def numpy(self): """Convert this array to numpy array Returns ------- np_arr : numpy.ndarray The corresponding numpy array. """ t = DataType(self.dtype) shape, dtype = self.shape, self.dtype old_dtype = dtype if t.lanes > 1: shape = shape + (t.lanes,) t.lanes = 1 dtype = str(t) if dtype == "int4": dtype = "int8" if dtype == "bfloat16": dtype = "uint16" if dtype == "e4m3_float8": if ml_dtypes is not None: dtype = ml_dtypes.float8_e4m3fn else: raise RuntimeError( "ml_dtypes is not installed, cannot convert e4m3_float8 array to numpy." ) if dtype == "e5m2_float8": if ml_dtypes is not None: dtype = ml_dtypes.float8_e5m2 else: raise RuntimeError( "ml_dtypes is not installed, cannot convert e5m2_float8 array to numpy." ) np_arr = np.empty(shape, dtype=dtype) assert np_arr.flags["C_CONTIGUOUS"] data = np_arr.ctypes.data_as(ctypes.c_void_p) nbytes = ctypes.c_size_t(np_arr.size * np_arr.dtype.itemsize) check_call(_LIB.TVMArrayCopyToBytes(self.handle, data, nbytes)) if old_dtype == "int4": length = np_arr.size np_arr_ret = np.empty((length,), dtype="int8") np_arr = np_arr.reshape((length,)) old_index = np.bitwise_and(np_arr, 0x0F) even_index = np.bitwise_and(np_arr >> 4, 0x0F) np_arr_ret[1::2] = old_index[0 : length // 2] np_arr_ret[0::2] = even_index[0 : length // 2] return np_arr_ret.reshape(shape) return np_arr def copyto(self, target, mem_scope=None): """Copy array to target Parameters ---------- target : NDArray The target array to be copied, must have same shape as this array. mem_scope : Optional[str] The memory scope of the array. 
""" if isinstance(target, NDArrayBase): return self._copyto(target) if isinstance(target, Device): res = empty(self.shape, self.dtype, target, mem_scope) return self._copyto(res) raise ValueError(f"Unsupported target type {type(target)}") def _create_view(self, shape): """Create a view into an existing array. The view shares the same allocation and datatype as the existing array, but can have a different array shape. This is useful for runtimes that support non-flat memory, where both the physical shape of an allocation and the logical shape of the tensor it represents may need to be independently specified. Warning: This function should not be used outside of low-level manipulations, as it breaks non-aliasing assumptions made by TVM. This function may also be removed/replaced in the future. Parameters ---------- shape: Union[tvm.runtime.ShapeTuple, Sequence[typing.SupportsInt]] The shape of the view. """ if not isinstance(shape, tvm.runtime.ShapeTuple): shape = tvm.runtime.ShapeTuple([int(dim) for dim in shape]) return _ffi_api.TVMArrayCreateView(self, shape) def device(dev_type, dev_id=0): """Construct a TVM device with given device type and id. Parameters ---------- dev_type: int or str The device type mask or name of the device. dev_id : int, optional The integer device id Returns ------- dev: tvm.runtime.Device The corresponding device. Examples -------- Device can be used to create reflection of device by string representation of the device type. .. 
code-block:: python

      assert tvm.device("cpu", 1) == tvm.cpu(1)
      assert tvm.device("cuda", 0) == tvm.cuda(0)
    """
    if isinstance(dev_type, string_types):
        dev_type = dev_type.split()[0]
        if dev_type not in Device.STR2MASK:
            raise ValueError(f"Unknown device type {dev_type}")
        dev_type = Device.STR2MASK[dev_type]
    return Device(dev_type, dev_id)


def numpyasarray(np_data):
    """Return a TVMArray representation of a numpy array."""
    data = np_data
    assert data.flags["C_CONTIGUOUS"]
    arr = TVMArray()
    shape = c_array(tvm_shape_index_t, data.shape)
    arr.data = data.ctypes.data_as(ctypes.c_void_p)
    arr.shape = shape
    arr.strides = None
    arr.dtype = DataType(np.dtype(data.dtype).name)
    arr.ndim = data.ndim
    # CPU device
    arr.device = device(Device.kDLCPU, 0)
    return arr, shape


def empty(shape, dtype="float32", device=device(Device.kDLCPU, 0), mem_scope=None):
    """Create an empty array given shape and device

    Parameters
    ----------
    shape : Union[tvm.runtime.ShapeTuple, Sequence[typing.SupportsInt]]
        The shape of the array.

    dtype : type or str
        The data type of the array.

    device : Device
        The device of the array.

    mem_scope : Optional[str]
        The memory scope of the array.

    Returns
    -------
    arr : tvm.nd.NDArray
        The array tvm supported.
    """
    if not isinstance(shape, tvm.runtime.ShapeTuple):
        shape = tvm.runtime.ShapeTuple([int(dim) for dim in shape])
    dtype = DataType(dtype)
    arr = _ffi_api.TVMArrayAllocWithScope(shape, dtype, device, mem_scope)
    return arr


def from_dlpack(dltensor):
    """Produces an array from an object with a __dlpack__ method or a DLPack tensor
    without memory copy.
    Retrieves the underlying DLPack tensor's pointer to create an array from the
    data. Removes the original DLPack tensor's destructor as now the array is
    responsible for destruction.

    Parameters
    ----------
    dltensor : object with __dlpack__ attribute or a DLPack capsule

    Returns
    -------
    arr: tvm.nd.NDArray
        The array view of the tensor data.
""" t = type(dltensor) if t.__module__ == "builtins" and t.__name__ == "PyCapsule": return _from_dlpack(dltensor) if hasattr(dltensor, "__dlpack__"): dlpack_caps = dltensor.__dlpack__() return _from_dlpack(dlpack_caps) raise AttributeError("Required attribute __dlpack__ not found") def cpu(dev_id=0): """Construct a CPU device Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ return Device(Device.kDLCPU, dev_id) def cuda(dev_id=0): """Construct a CUDA GPU device Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ return Device(Device.kDLCUDA, dev_id) def gpu(dev_id=0): """Construct a CUDA GPU device deprecated:: 0.9.0 Use :py:func:`tvm.cuda` instead. Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ warnings.warn( "Please use tvm.cuda() instead of tvm.gpu(). tvm.gpu() is going to be deprecated in 0.9.0" ) return Device(Device.kDLCUDA, dev_id) def rocm(dev_id=0): """Construct a ROCM device Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ return Device(Device.kDLROCM, dev_id) def opencl(dev_id=0): """Construct a OpenCL device Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ return Device(Device.kDLOpenCL, dev_id) def metal(dev_id=0): """Construct a metal device Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ return Device(Device.kDLMetal, dev_id) def vpi(dev_id=0): """Construct a VPI simulated device Parameters ---------- dev_id : int, optional The integer device id Returns ------- dev : Device The created device """ return Device(Device.kDLVPI, dev_id) def vulkan(dev_id=0): """Construct a Vulkan device Parameters ---------- dev_id : int, optional The integer 
device id

    Returns
    -------
    dev : Device
        The created device
    """
    return Device(Device.kDLVulkan, dev_id)


def ext_dev(dev_id=0):
    """Construct an extension device

    Parameters
    ----------
    dev_id : int, optional
        The integer device id

    Returns
    -------
    dev : Device
        The created device

    Note
    ----
    This API is reserved for quick testing of new
    devices plugged in through the device API as ext_dev.
    """
    return Device(Device.kDLExtDev, dev_id)


def hexagon(dev_id=0):
    """Construct a Hexagon device

    Parameters
    ----------
    dev_id : int, optional
        The integer device id

    Returns
    -------
    dev : Device
        The created device
    """
    return Device(Device.kDLHexagon, dev_id)


def webgpu(dev_id=0):
    """Construct a webgpu device.

    Parameters
    ----------
    dev_id : int, optional
        The integer device id

    Returns
    -------
    dev : Device
        The created device
    """
    return Device(Device.kDLWebGPU, dev_id)


cl = opencl
mtl = metal


def array(arr, device=cpu(0), mem_scope=None):
    """Create an array from source arr.

    Parameters
    ----------
    arr : numpy.ndarray
        The array to be copied from

    device : Device, optional
        The device on which to create the array

    mem_scope : Optional[str]
        The memory scope of the array

    Returns
    -------
    ret : NDArray
        The created array
    """
    if isinstance(arr, tvm.ir.container.Array):
        raise AttributeError("arr is an instance of", type(arr))

    if not isinstance(arr, (np.ndarray, NDArray)):
        arr = np.array(arr)
    return empty(arr.shape, arr.dtype, device, mem_scope).copyfrom(arr)


# Register back to FFI
_set_class_ndarray(NDArray)
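`NDArray.numpy()` above widens `"int4"` data by reading two 4-bit values out of each stored byte: the high nibble becomes the even-indexed element and the low nibble the odd-indexed one. The same nibble arithmetic on plain Python ints (a numpy-free sketch; the real code vectorizes this with `np.bitwise_and` and strided assignment):

```python
def unpack_int4(packed_bytes):
    # Each byte stores two unsigned 4-bit values: the high nibble is the
    # even-indexed element, the low nibble the odd-indexed one, mirroring
    # the layout NDArray.numpy() assumes for "int4" arrays (sign handling
    # of negative int4 values is out of scope for this sketch).
    out = []
    for b in packed_bytes:
        out.append((b >> 4) & 0x0F)  # even position
        out.append(b & 0x0F)         # odd position
    return out


# 0x12 holds the pair (1, 2); 0xAB holds the pair (10, 11).
print(unpack_int4(b"\x12\xab"))  # [1, 2, 10, 11]
```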
tvm-main/python/tvm/runtime/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """TVM runtime namespace.""" # class exposures from .packed_func import PackedFunc from .object import Object from .object_path import ObjectPath, ObjectPathPair from .script_printer import Scriptable from .object_generic import ObjectGeneric, ObjectTypes from .ndarray import NDArray, DataType, DataTypeCode, Device from .module import Module, num_threads from .profiling import Report # function exposures from .object_generic import convert_to_object, convert, const from .ndarray import device, cpu, cuda, gpu, opencl, cl, vulkan, metal, mtl from .ndarray import vpi, rocm, ext_dev from .module import load_module, enabled, system_lib, load_static_library from .container import String, ShapeTuple from .params import ( save_param_dict, load_param_dict, save_param_dict_to_file, load_param_dict_from_file, ) from . import executor
tvm-main/python/tvm/runtime/packed_func.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, unused-import
"""Packed Function namespace."""
import ctypes
from tvm._ffi.base import _LIB, check_call, c_str, string_types, _FFI_MODE

try:
    # pylint: disable=wrong-import-position
    if _FFI_MODE == "ctypes":
        raise ImportError()
    from tvm._ffi._cy3.core import _set_class_packed_func, _set_class_module
    from tvm._ffi._cy3.core import PackedFuncBase
    from tvm._ffi._cy3.core import convert_to_tvm_func
except (RuntimeError, ImportError) as error:
    # pylint: disable=wrong-import-position
    if _FFI_MODE == "cython":
        raise error
    from tvm._ffi._ctypes.packed_func import _set_class_packed_func, _set_class_module
    from tvm._ffi._ctypes.packed_func import PackedFuncBase
    from tvm._ffi._ctypes.packed_func import convert_to_tvm_func


PackedFuncHandle = ctypes.c_void_p


class PackedFunc(PackedFuncBase):
    """The PackedFunc object used in TVM.

    Function plays a key role in bridging the frontend and backend in TVM.
    Function provides a type-erased interface; you can call a function with
    positional arguments.

    The compiled module returns Function.
    TVM backend also registers and exposes its API as Functions.

    The following is a list of common usage scenarios of tvm.runtime.PackedFunc.
- Automatic exposure of C++ API into python - To call PackedFunc from python side - To call python callbacks to inspect results in generated code - Bring python hook into C++ backend See Also -------- tvm.register_func: How to register global function. tvm.get_global_func: How to get global function. """ _set_class_packed_func(PackedFunc)
2,442
36.015152
94
py
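The type-erased, name-keyed calling convention behind PackedFunc can be illustrated without TVM at all. The sketch below is a toy pure-Python analog of a global function registry, in the spirit of `tvm.register_func` / `tvm.get_global_func`; the names `register_func`, `get_global_func`, and `"demo.add"` here are local illustrations, not TVM's actual FFI.

```python
# Toy analog of TVM's global PackedFunc registry: functions are stored
# type-erased (as plain callables taking positional args) under a string
# name, so callers need no compile-time knowledge of the signature.
_GLOBAL_FUNCS = {}


def register_func(name, func):
    """Register `func` under `name`; returns func for decorator-style use."""
    _GLOBAL_FUNCS[name] = func
    return func


def get_global_func(name):
    """Look up a previously registered function by name."""
    return _GLOBAL_FUNCS[name]


register_func("demo.add", lambda *args: sum(args))
adder = get_global_func("demo.add")
result = adder(1, 2, 3)  # type-erased positional call
```

In real TVM the registry lives in C++, which is what lets compiled modules, Python callbacks, and RPC all share the same calling convention.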
tvm
tvm-main/python/tvm/runtime/params.py
# (standard ASF Apache-2.0 license header, identical to the one above)
# pylint: disable=invalid-name
"""Helper utility to save and load parameter dicts."""
from . import _ffi_api, ndarray, NDArray


def _to_ndarray(params):
    transformed = {}
    for (k, v) in params.items():
        if not isinstance(v, NDArray):
            transformed[k] = ndarray.array(v)
        else:
            transformed[k] = v
    return transformed


def save_param_dict(params):
    """Save parameter dictionary to binary bytes.

    The result binary bytes can be loaded by the
    GraphModule with API "load_params".

    Parameters
    ----------
    params : dict of str to NDArray
        The parameter dictionary.

    Returns
    -------
    param_bytes: bytearray
        Serialized parameters.

    Examples
    --------
    .. code-block:: python

       # set up the parameter dict
       params = {"param0": arr0, "param1": arr1}
       # save the parameters as byte array
       param_bytes = tvm.runtime.save_param_dict(params)
       # We can serialize the param_bytes and load it back later.
       # Pass in byte array to module to directly set parameters
       tvm.runtime.load_param_dict(param_bytes)
    """
    return _ffi_api.SaveParams(_to_ndarray(params))


def save_param_dict_to_file(params, path):
    """Save parameter dictionary to file.

    Parameters
    ----------
    params : dict of str to NDArray
        The parameter dictionary.

    path: str
        The path to the parameter file.
    """
    return _ffi_api.SaveParamsToFile(_to_ndarray(params), path)


def load_param_dict(param_bytes):
    """Load parameter dictionary from binary bytes.

    Parameters
    ----------
    param_bytes: bytearray
        Serialized parameters.

    Returns
    -------
    params : dict of str to NDArray
        The parameter dictionary.
    """
    if isinstance(param_bytes, (bytes, str)):
        param_bytes = bytearray(param_bytes)
    return _ffi_api.LoadParams(param_bytes)


def load_param_dict_from_file(path):
    """Load parameter dictionary from file.

    Parameters
    ----------
    path: str
        The path to the parameter file to load from.

    Returns
    -------
    params : dict of str to NDArray
        The parameter dictionary.
    """
    return _ffi_api.LoadParamsFromFile(path)
3,041
26.405405
65
py
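The `_to_ndarray` helper above follows a common normalization pattern: values that already have the target array type pass through untouched, everything else is wrapped. A standalone sketch of that pattern, with a hypothetical `FakeNDArray` standing in for `tvm.runtime.NDArray` so it runs without TVM:

```python
# Standalone sketch of the normalization in `_to_ndarray`: pass through
# values of the target type, wrap everything else. FakeNDArray is a stand-in
# for tvm.runtime.NDArray.
class FakeNDArray:
    def __init__(self, data):
        self.data = list(data)


def to_ndarray(params):
    transformed = {}
    for k, v in params.items():
        # Already the right type: keep the same object (no copy).
        transformed[k] = v if isinstance(v, FakeNDArray) else FakeNDArray(v)
    return transformed


arr = FakeNDArray([1.0, 2.0])
out = to_ndarray({"w0": [3, 4], "w1": arr})
```

Note that already-wrapped values are returned by identity, so no data is copied for parameters that are already NDArrays.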
tvm
tvm-main/python/tvm/runtime/script_printer.py
# (standard ASF Apache-2.0 license header, identical to the one above)
"""Configuration of TVMScript printer"""
from typing import Dict, List, Optional, Sequence

from tvm._ffi import get_global_func, register_object
from tvm.runtime import Object

from . import _ffi_node_api
from .object_path import ObjectPath


@register_object("node.PrinterConfig")
class PrinterConfig(Object):
    """Configuration of TVMScript printer"""

    binding_names: Sequence[str]
    show_meta: bool
    ir_prefix: str
    tir_prefix: str
    buffer_dtype: str
    int_dtype: str
    float_dtype: str
    verbose_expr: bool
    indent_spaces: int
    print_line_numbers: bool
    num_context_lines: int
    syntax_sugar: bool
    path_to_underline: Optional[List[ObjectPath]]
    path_to_annotate: Optional[Dict[ObjectPath, str]]
    obj_to_underline: Optional[List[Object]]
    obj_to_annotate: Optional[Dict[Object, str]]

    def __init__(
        self,
        *,
        name: Optional[str] = None,
        show_meta: bool = False,
        ir_prefix: str = "I",
        tir_prefix: str = "T",
        buffer_dtype: str = "float32",
        int_dtype: str = "int32",
        float_dtype: str = "void",
        verbose_expr: bool = False,
        indent_spaces: int = 4,
        print_line_numbers: bool = False,
        num_context_lines: Optional[int] = None,
        syntax_sugar: bool = True,
        path_to_underline: Optional[List[ObjectPath]] = None,
        path_to_annotate: Optional[Dict[ObjectPath, str]] = None,
        obj_to_underline: Optional[List[Object]] = None,
        obj_to_annotate: Optional[Dict[Object, str]] = None,
    ) -> None:
        if num_context_lines is None:
            num_context_lines = -1
        cfg = {
            "show_meta": show_meta,
            "ir_prefix": ir_prefix,
            "tir_prefix": tir_prefix,
            "buffer_dtype": buffer_dtype,
            "int_dtype": int_dtype,
            "float_dtype": float_dtype,
            "verbose_expr": verbose_expr,
            "indent_spaces": indent_spaces,
            "print_line_numbers": print_line_numbers,
            "num_context_lines": num_context_lines,
            "syntax_sugar": syntax_sugar,
            "path_to_underline": path_to_underline,
            "path_to_annotate": path_to_annotate,
            "obj_to_underline": obj_to_underline,
            "obj_to_annotate": obj_to_annotate,
        }
        if name is not None:
            cfg["name"] = name
        self.__init_handle_by_constructor__(
            _ffi_node_api.PrinterConfig, cfg  # type: ignore  # pylint: disable=no-member
        )


def _script(obj: Object, config: PrinterConfig) -> str:
    return _ffi_node_api.TVMScriptPrinterScript(obj, config)  # type: ignore  # pylint: disable=no-member


def _relax_script(obj: Object, config: PrinterConfig) -> str:
    func = get_global_func("script.printer.ReprPrintRelax")
    return func(obj, config)


class Scriptable:
    """A base class that enables the script() and show() method."""

    def script(
        self,
        *,
        name: Optional[str] = None,
        show_meta: bool = False,
        ir_prefix: str = "I",
        tir_prefix: str = "T",
        buffer_dtype: str = "float32",
        int_dtype: str = "int32",
        float_dtype: str = "void",
        verbose_expr: bool = False,
        indent_spaces: int = 4,
        print_line_numbers: bool = False,
        num_context_lines: int = -1,
        syntax_sugar: bool = True,
        path_to_underline: Optional[List[ObjectPath]] = None,
        path_to_annotate: Optional[Dict[ObjectPath, str]] = None,
        obj_to_underline: Optional[List[Object]] = None,
        obj_to_annotate: Optional[Dict[Object, str]] = None,
    ) -> str:
        """Print TVM IR into TVMScript text format

        Parameters
        ----------
        name : Optional[str] = None
            The name of the object
        show_meta : bool = False
            Whether to print the meta data of the object
        ir_prefix : str = "I"
            The prefix of AST nodes from tvm.ir
        tir_prefix : str = "T"
            The prefix of AST nodes from tvm.tir
        buffer_dtype : str = "float32"
            The default data type of buffer
        int_dtype : str = "int32"
            The default data type of integer
        float_dtype : str = "void"
            The default data type of float
        verbose_expr : bool = False
            Whether to print the detailed definition of each variable in the expression
        indent_spaces : int = 4
            The number of spaces for indentation
        print_line_numbers : bool = False
            Whether to print line numbers
        num_context_lines : int = -1
            The number of lines of context to print before and after the line to underline.
        syntax_sugar: bool = True
            Whether to output with syntax sugar, set false for complete printing.
        path_to_underline : Optional[List[ObjectPath]] = None
            Object path to be underlined
        path_to_annotate : Optional[Dict[ObjectPath, str]] = None
            Object path to be annotated
        obj_to_underline : Optional[List[Object]] = None
            Object to be underlined
        obj_to_annotate : Optional[Dict[Object, str]] = None
            Object to be annotated

        Returns
        -------
        script : str
            The TVM Script of the given TVM IR
        """
        return _script(
            self,
            PrinterConfig(
                name=name,
                show_meta=show_meta,
                ir_prefix=ir_prefix,
                tir_prefix=tir_prefix,
                buffer_dtype=buffer_dtype,
                int_dtype=int_dtype,
                float_dtype=float_dtype,
                verbose_expr=verbose_expr,
                indent_spaces=indent_spaces,
                print_line_numbers=print_line_numbers,
                num_context_lines=num_context_lines,
                syntax_sugar=syntax_sugar,
                path_to_underline=path_to_underline,
                path_to_annotate=path_to_annotate,
                obj_to_underline=obj_to_underline,
                obj_to_annotate=obj_to_annotate,
            ),
        )

    def show(
        self,
        style: Optional[str] = None,
        black_format: bool = True,
        *,
        name: Optional[str] = None,
        show_meta: bool = False,
        ir_prefix: str = "I",
        tir_prefix: str = "T",
        buffer_dtype: str = "float32",
        int_dtype: str = "int32",
        float_dtype: str = "void",
        verbose_expr: bool = False,
        indent_spaces: int = 4,
        print_line_numbers: bool = False,
        num_context_lines: int = -1,
        syntax_sugar: bool = True,
        path_to_underline: Optional[List[ObjectPath]] = None,
        path_to_annotate: Optional[Dict[ObjectPath, str]] = None,
        obj_to_underline: Optional[List[Object]] = None,
        obj_to_annotate: Optional[Dict[Object, str]] = None,
    ) -> None:
        """Syntactic sugar for printing highlighted TVM script.

        Parameters
        ----------
        style : str, optional
            Pygmentize printing style, auto-detected if None.  See
            `tvm.script.highlight.cprint` for more details.
        black_format: bool
            If true (default), use the formatter Black to format the TVMScript
        name : Optional[str] = None
            The name of the object
        show_meta : bool = False
            Whether to print the meta data of the object
        ir_prefix : str = "I"
            The prefix of AST nodes from tvm.ir
        tir_prefix : str = "T"
            The prefix of AST nodes from tvm.tir
        buffer_dtype : str = "float32"
            The default data type of buffer
        int_dtype : str = "int32"
            The default data type of integer
        float_dtype : str = "void"
            The default data type of float
        verbose_expr : bool = False
            Whether to print the detailed definition of each variable in the expression
        indent_spaces : int = 4
            The number of spaces for indentation
        print_line_numbers : bool = False
            Whether to print line numbers
        num_context_lines : int = -1
            The number of lines of context to print before and after the line to underline.
        syntax_sugar: bool = True
            Whether to output with syntax sugar, set false for complete printing.
        path_to_underline : Optional[List[ObjectPath]] = None
            Object path to be underlined
        path_to_annotate : Optional[Dict[ObjectPath, str]] = None
            Object path to be annotated
        obj_to_underline : Optional[List[Object]] = None
            Object to be underlined
        obj_to_annotate : Optional[Dict[Object, str]] = None
            Object to be annotated
        """
        from tvm.script.highlight import (  # pylint: disable=import-outside-toplevel
            cprint,
        )

        cprint(
            self.script(
                name=name,
                show_meta=show_meta,
                ir_prefix=ir_prefix,
                tir_prefix=tir_prefix,
                buffer_dtype=buffer_dtype,
                int_dtype=int_dtype,
                float_dtype=float_dtype,
                verbose_expr=verbose_expr,
                indent_spaces=indent_spaces,
                print_line_numbers=print_line_numbers,
                num_context_lines=num_context_lines,
                syntax_sugar=syntax_sugar,
                path_to_underline=path_to_underline,
                path_to_annotate=path_to_annotate,
                obj_to_underline=obj_to_underline,
                obj_to_annotate=obj_to_annotate,
            ),
            style=style,
            black_format=black_format,
        )
10,547
36.272085
104
py
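Two small behaviors of `PrinterConfig.__init__` are easy to miss: `num_context_lines=None` is normalized to `-1` (meaning "all context"), and the `"name"` key is only added to the config dict when a name is explicitly given. A minimal standalone sketch of that assembly logic (the helper `make_printer_config` is illustrative, not part of TVM):

```python
# Sketch of PrinterConfig.__init__'s config-dict assembly: normalize the
# None default for num_context_lines to -1, and only include "name" when
# the caller provided one.
def make_printer_config(name=None, num_context_lines=None, indent_spaces=4):
    if num_context_lines is None:
        num_context_lines = -1
    cfg = {
        "indent_spaces": indent_spaces,
        "num_context_lines": num_context_lines,
    }
    if name is not None:
        cfg["name"] = name
    return cfg


default_cfg = make_printer_config()
named_cfg = make_printer_config(name="main")
```

Omitting `"name"` entirely (rather than passing `None`) lets the C++ side distinguish "no name requested" from "name explicitly set".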
tvm
tvm-main/python/tvm/runtime/object_generic.py
# (standard ASF Apache-2.0 license header, identical to the one above)
"""Common implementation of object generic related logic"""
# pylint: disable=unused-import, invalid-name
from numbers import Number, Integral
from tvm._ffi.base import string_types
from tvm._ffi.runtime_ctypes import ObjectRValueRef

from . import _ffi_node_api, _ffi_api
from .object import ObjectBase, PyNativeObject, _set_class_object_generic
from .ndarray import NDArrayBase
from .packed_func import PackedFuncBase, convert_to_tvm_func
from .module import Module


class ObjectGeneric(object):
    """Base class for all classes that can be converted to object."""

    def asobject(self):
        """Convert value to object"""
        raise NotImplementedError()


ObjectTypes = (ObjectBase, NDArrayBase, Module, ObjectRValueRef, PackedFuncBase, PyNativeObject)


def convert_to_object(value, span=None):
    """Convert a Python value to corresponding object type.

    Parameters
    ----------
    value : str
        The value to be inspected.

    span : Optional[Span]
        The location of this itervar in the source code.

    Returns
    -------
    obj : Object
        The corresponding object value.
    """
    if isinstance(value, ObjectTypes):
        return value
    if isinstance(value, bool):
        return const(value, "uint1x1", span=span)
    if isinstance(value, Number):
        return const(value, span=span)
    if isinstance(value, string_types):
        return _ffi_api.String(value)
    if isinstance(value, (list, tuple)):
        value = [convert_to_object(x) for x in value]
        return _ffi_api.Array(*value)
    if isinstance(value, dict):
        vlist = []
        for item in value.items():
            if (
                not isinstance(item[0], ObjectTypes)
                and not isinstance(item[0], string_types)
                and not isinstance(item[0], Number)
            ):
                raise ValueError("key of map must already be a container type")
            vlist.append(convert_to_object(item[0]))
            vlist.append(convert_to_object(item[1]))
        return _ffi_api.Map(*vlist)
    if isinstance(value, ObjectGeneric):
        return value.asobject()
    if callable(value):
        return convert_to_tvm_func(value)
    if value is None:
        return None

    raise ValueError(f"don't know how to convert type {type(value)} to object")


def convert(value, span=None):
    """Convert value to TVM object or function.

    Parameters
    ----------
    value : python value

    span : Optional[Span]
        The location of this statement in the source code.

    Returns
    -------
    tvm_val : Object or Function
        Converted value in TVM

    Note
    ----
    This function is redirected to `convert_to_object` as it is widely used
    in the codebase.  We can choose one to keep and discard the other one
    later.
    """
    return convert_to_object(value, span=span)


def _scalar_type_inference(value):
    if hasattr(value, "dtype"):
        dtype = str(value.dtype)
    elif isinstance(value, bool):
        dtype = "bool"
    elif isinstance(value, float):
        # We intentionally prefer converting the float to float32 since it's more common in DL.
        if -3.40282347e38 <= value <= 3.40282347e38:
            dtype = "float32"
        else:
            dtype = "float64"
    elif isinstance(value, int):
        # We intentionally prefer converting the python int to int32 since it's more common in DL.
        if -2147483648 <= value <= 2147483647:
            dtype = "int32"
        else:
            dtype = "int64"
    else:
        raise NotImplementedError(f"Cannot automatically infer the type. value={value}")
    return dtype


def const(value, dtype=None, span=None):
    """Construct a constant.

    Parameters
    ----------
    value : number
        The content of the constant number.

    dtype : str or None, optional
        The data type.

    span : Optional[Span]
        The location of the constant value in the source.

    Returns
    -------
    const_val: tvm.Expr
        The result expression.
    """
    if dtype is None:
        dtype = _scalar_type_inference(value)
    if dtype == "uint64" and value >= (1 << 63):
        return _ffi_node_api.LargeUIntImm(dtype, value & ((1 << 32) - 1), value >> 32, span)
    return _ffi_node_api._const(value, dtype, span)


_set_class_object_generic(ObjectGeneric, convert_to_object)
5,146
30.771605
96
py
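The dtype-inference rules in `_scalar_type_inference` above can be exercised without TVM: floats inside the float32 range map to `"float32"`, ints inside the signed 32-bit range map to `"int32"`, and larger values fall back to the 64-bit dtypes. A standalone reproduction (pure Python, no TVM dependency):

```python
# The dtype-inference cutoffs from `_scalar_type_inference`, reproduced
# standalone. The bool check must come before the int check, because
# Python's bool is a subclass of int.
def scalar_type_inference(value):
    if isinstance(value, bool):
        return "bool"
    if isinstance(value, float):
        # Prefer float32, the more common dtype in deep learning.
        return "float32" if -3.40282347e38 <= value <= 3.40282347e38 else "float64"
    if isinstance(value, int):
        return "int32" if -2147483648 <= value <= 2147483647 else "int64"
    raise NotImplementedError(f"Cannot automatically infer the type. value={value}")
```

The boundaries matter in practice: `2147483647` still fits `int32`, while `2147483648` is promoted to `int64`.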
tvm
tvm-main/python/tvm/runtime/_ffi_node_api.py
# (standard ASF Apache-2.0 license header, identical to the one above)
# pylint: disable=invalid-name, unused-argument
"""FFI for tvm.node"""
import tvm._ffi

# The implementations below are default ones when the corresponding
# functions are not available in the runtime-only mode.
# They will be overridden via _init_api by the ones registered
# via TVM_REGISTER_GLOBAL in the compiler mode.


def AsRepr(obj):
    return obj.type_key() + "(" + obj.handle.value + ")"


def AsLegacyRepr(obj):
    return obj.type_key() + "(" + obj.handle.value + ")"


def NodeListAttrNames(obj):
    return lambda x: 0


def NodeGetAttr(obj, name):
    raise AttributeError()


def SaveJSON(obj):
    raise RuntimeError("Do not support object serialization in runtime only mode")


def LoadJSON(json_str):
    raise RuntimeError("Do not support object serialization in runtime only mode")


# Exports functions registered via TVM_REGISTER_GLOBAL with the "node" prefix.
# e.g. TVM_REGISTER_GLOBAL("node.AsRepr")
tvm._ffi._init_api("node", __name__)
1,748
31.388889
82
py
tvm
tvm-main/python/tvm/runtime/profiling/_ffi_api.py
# (standard ASF Apache-2.0 license header, identical to the one above)
"""FFI for profiling"""
from ... import _ffi

_ffi._init_api("runtime.profiling", __name__)
877
40.809524
62
py
tvm
tvm-main/python/tvm/runtime/profiling/__init__.py
# (standard ASF Apache-2.0 license header, identical to the one above)
"""Registration of profiling objects in python."""

from typing import Dict, Sequence, Optional
from ... import _ffi
from . import _ffi_api
from .. import Object, Device


@_ffi.register_object("runtime.profiling.Report")
class Report(Object):
    """A container for information gathered during a profiling run.

    Attributes
    ----------
    calls : Array[Dict[str, Object]]
        Per-call profiling metrics (function name, runtime, device, ...).

    device_metrics : Dict[Device, Dict[str, Object]]
        Per-device metrics collected over the entire run.
    """

    def __init__(
        self,
        calls: Sequence[Dict[str, Object]],
        device_metrics: Dict[str, Dict[str, Object]],
        configuration: Dict[str, Object],
    ):
        """Construct a profiling report from a list of metrics and per-device metrics.

        Parameters
        ----------
        calls : Sequence[Dict[str, Object]]
            Per function call metrics.

        device_metrics : Dict[str, Dict[str, Object]]
            Per device metrics.

        configuration : Dict[str, Object]
            Configuration of TVM for this profiling run.  Includes number of
            threads, executor.
        """
        self.__init_handle_by_constructor__(_ffi_api.Report, calls, device_metrics, configuration)

    def csv(self):
        """Convert this profiling report into CSV format.

        This only includes calls and not overall metrics.

        Returns
        -------
        csv : str
            `calls` in CSV format.
        """
        return _ffi_api.AsCSV(self)

    def table(self, sort=True, aggregate=True, col_sums=True):
        """Generate a human-readable table

        Parameters
        ----------
        sort : bool
            If aggregate is true, whether to sort call frames by descending
            duration.  If aggregate is False, whether to sort frames by order
            of appearance in the program.

        aggregate : bool
            Whether to join multiple calls to the same op into a single line.

        col_sums : bool
            Whether to include the sum of each column.

        Returns
        -------
        table : str
            A human-readable table
        """
        return _ffi_api.AsTable(self, sort, aggregate, col_sums)

    def json(self):
        """Convert this profiling report into JSON format.

        Example output:

        .. code-block:

            {
              "calls": [
                {
                  "Duration (us)": {"microseconds": 12.3},
                  "Name": "fused_dense",
                  "Count": {"count": 1},
                  "Percent": {"percent": 10.3}
                }
              ],
              "device_metrics": {
                "cpu": {
                  "Duration (us)": {"microseconds": 334.2},
                  "Percent": {"percent": 100.0}
                }
              }
            }

        Returns
        -------
        json : str
            Formatted JSON
        """
        return _ffi_api.AsJSON(self)

    @classmethod
    def from_json(cls, s):
        """Deserialize a report from JSON.

        Parameters
        ----------
        s : str
            Report serialized via :py:meth:`json`.

        Returns
        -------
        report : Report
            The deserialized report.
        """
        return _ffi_api.FromJSON(s)


@_ffi.register_object("runtime.profiling.Count")
class Count(Object):
    """An integer count of something"""

    def __init__(self, count: int):
        self.__init_handle_by_constructor__(_ffi_api.Count, count)


@_ffi.register_object("runtime.profiling.Duration")
class Duration(Object):
    """A duration of something"""

    def __init__(self, duration: float):
        self.__init_handle_by_constructor__(_ffi_api.Duration, duration)


@_ffi.register_object("runtime.profiling.Percent")
class Percent(Object):
    """A percent of something"""

    def __init__(self, percent: float):
        self.__init_handle_by_constructor__(_ffi_api.Percent, percent)


@_ffi.register_object("runtime.profiling.Ratio")
class Ratio(Object):
    """A ratio of two things"""

    def __init__(self, ratio: float):
        self.__init_handle_by_constructor__(_ffi_api.Ratio, ratio)


@_ffi.register_object("runtime.profiling.MetricCollector")
class MetricCollector(Object):
    """Interface for user defined profiling metric collection."""


@_ffi.register_object("runtime.profiling.DeviceWrapper")
class DeviceWrapper(Object):
    """Wraps a tvm.runtime.Device"""

    def __init__(self, dev: Device):
        self.__init_handle_by_constructor__(_ffi_api.DeviceWrapper, dev)


def profile_function(mod, dev, collectors, func_name=None, warmup_iters=10):
    """Collect performance information of a function execution.  Usually used
    with a compiled PrimFunc.

    This information can include performance counters like cache hits and
    FLOPs that are useful in debugging performance issues of individual
    PrimFuncs.  Different metrics can be collected depending on which
    MetricCollector is used.

    Example
    -------

    .. code-block: python

        f = tvm.build(my_func, target="llvm", name="my_func")
        prof = tvm.runtime.profiling.profile_function(
            f,
            tvm.cpu(),
            [tvm.runtime.profiling.PAPIMetricCollector({tvm.cpu(): ["PAPI_FP_OPS"]})],
        )
        counters = prof(*args)
        print(counters)

    Parameters
    ----------
    mod: Module
        Module containing the function to profile.
    dev: Device
        Device to run the function on.
    collectors: List[MetricCollector]
        :py:class:`MetricCollector`s which will collect performance information.
    func_name: Optional[str]
        Name of the function in `mod` to profile.  Defaults to the
        `entry_name` of `mod`.
    warmup_iters: int
        Number of iterations to run the function before collecting
        performance information.  Recommended to set this larger than 0 for
        consistent cache effects.  Defaults to 10.

    Returns
    -------
    prof: PackedFunc[args, Dict[str, ObjectRef]]
        PackedFunc which takes the same arguments as `mod[func_name]` and
        returns performance metrics as a `Dict[str, ObjectRef]` where values
        can be `CountNode`, `DurationNode`, `PercentNode`.
    """
    if func_name is None:
        func_name = mod.entry_name
    return _ffi_api.ProfileFunction(
        mod, func_name, dev.device_type, dev.device_id, warmup_iters, collectors
    )


# We only enable this class when TVM is built with PAPI support
if _ffi.get_global_func("runtime.profiling.PAPIMetricCollector", allow_missing=True) is not None:

    @_ffi.register_object("runtime.profiling.PAPIMetricCollector")
    class PAPIMetricCollector(MetricCollector):
        """Collects performance counter information using the Performance
        Application Programming Interface (PAPI).
        """

        def __init__(self, metric_names: Optional[Dict[Device, Sequence[str]]] = None):
            """
            Parameters
            ----------
            metric_names : Optional[Dict[Device, Sequence[str]]]
                List of per-device metrics to collect.  You can find a list
                of valid metrics by running `papi_native_avail` from the
                command line.
            """
            metric_names = {} if metric_names is None else metric_names
            wrapped = dict()
            for dev, names in metric_names.items():
                wrapped[DeviceWrapper(dev)] = names
            self.__init_handle_by_constructor__(_ffi_api.PAPIMetricCollector, wrapped)
9,085
29.904762
98
py
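The JSON layout documented in `Report.json` is plain enough to post-process with the standard library: `"calls"` is a list of per-call metric dicts (each metric wrapped in a typed object like `{"microseconds": ...}`), and `"device_metrics"` maps a device name to its metrics. The sketch below parses a report shaped like the docstring's example; the report string is constructed by hand here, not produced by TVM.

```python
# Parse a profiling report shaped like the Report.json docstring example
# and aggregate total call duration, using only the stdlib json module.
import json

report_str = """
{"calls": [{"Duration (us)": {"microseconds": 12.3},
            "Name": "fused_dense",
            "Count": {"count": 1},
            "Percent": {"percent": 10.3}}],
 "device_metrics": {"cpu": {"Duration (us)": {"microseconds": 334.2},
                            "Percent": {"percent": 100.0}}}}
"""
report = json.loads(report_str)

# Each metric value is wrapped in a small typed dict, so unwrap one level.
total_us = sum(call["Duration (us)"]["microseconds"] for call in report["calls"])
```

The same unwrapping applies to `Count`, `Percent`, and `Ratio` metrics, whose inner keys are `"count"`, `"percent"`, and `"ratio"`-style scalars.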
tvm
tvm-main/python/tvm/runtime/executor/aot_executor.py
# (standard ASF Apache-2.0 license header, identical to the one above)
"""A Python wrapper for the Module-based Model Runtime Interface for Ahead-of-Time compilation."""

import numpy as np


class AotModule(object):
    """Wraps the AOT executor runtime.Module.

    This is a thin wrapper of the underlying TVM module.  You can also
    directly call set_input, run, and get_output of the underlying module
    functions.

    Parameters
    ----------
    module : tvm.runtime.Module
        The internal tvm module that holds the implemented model functions.

    Attributes
    ----------
    module : tvm.runtime.Module
        The internal tvm module that holds the implemented model functions.

    Examples
    --------

    .. code-block:: python

        import tvm
        from tvm import relay
        from tvm.contrib import graph_executor

        # build the library using graph executor
        lib = relay.build(...)
        lib.export_library("compiled_lib.so")
        # load it back as a runtime
        lib: tvm.runtime.Module = tvm.runtime.load_module("compiled_lib.so")
        # Call the library factory function for default and create
        # a new runtime.Module, wrap with aot module.
        gmod = tvm.runtime.executor.AotModule(lib["default"](dev))
        # use the aot module.
        gmod.set_input("x", data)
        gmod.run()
    """

    def __init__(self, module):
        self.module = module
        self._set_input = module["set_input"]
        self._run = module["run"]
        self._get_output = module["get_output"]
        self._get_input = module["get_input"]
        self._get_num_outputs = module["get_num_outputs"]
        self._get_input_index = module["get_input_index"]
        self._get_num_inputs = module["get_num_inputs"]
        self._get_input_name = module["get_input_name"]

    def set_input(self, key=None, value=None, **params):
        """Set inputs to the module via kwargs

        Parameters
        ----------
        key : int or str
            The input key

        value : the input value.
            The input value

        params : dict of str to NDArray
            Additional arguments
        """
        if key is not None:
            v = self._get_input(key)
            if v is None:
                raise RuntimeError(f"Could not find '{key}' in model's inputs")
            v.copyfrom(value)

        if params:
            # upload big arrays first to avoid memory issue in rpc mode
            keys = list(params.keys())
            keys.sort(key=lambda x: -np.prod(params[x].shape))
            for k in keys:
                # TODO(zhiics) Skip the weights for submodule in a better way.
                # We should use MetadataModule for initialization and remove
                # params from set_input
                val = self._get_input(k)
                if val:
                    self._get_input(k).copyfrom(params[k])

    def run(self, **input_dict):
        """Run forward execution of the model

        Parameters
        ----------
        input_dict: dict of str to NDArray
            List of input values to be fed to the model
        """
        if input_dict:
            self.set_input(**input_dict)
        self._run()

    def get_num_outputs(self):
        """Get the number of outputs from the model

        Returns
        -------
        count : int
            The number of outputs.
        """
        return self._get_num_outputs()

    def get_num_inputs(self):
        """Get the number of inputs to the model

        Returns
        -------
        count : int
            The number of inputs.
        """
        return self._get_num_inputs()

    def get_input(self, index, out=None):
        """Get index-th input to out

        Parameters
        ----------
        index : int
            The input index

        out : NDArray
            The output array container
        """
        if out:
            self._get_input(index).copyto(out)
            return out

        return self._get_input(index)

    def get_input_index(self, name):
        """Get inputs index via input name.

        Parameters
        ----------
        name : str
            The input key name

        Returns
        -------
        index: int
            The input index.  -1 will be returned if the given input name is
            not found.
        """
        return self._get_input_index(name)

    def get_output(self, index, out=None):
        """Get index-th output to out

        Parameters
        ----------
        index : int
            The output index

        out : NDArray
            The output array container
        """
        if out:
            self._get_output(index, out)
            return out

        return self._get_output(index)

    def get_input_name(self, index: int) -> str:
        """Return the name of input with index `index`"""
        return self._get_input_name(index)

    def get_input_info(self):
        """Return the 'shape' and 'dtype' dictionaries of the module."""
        self.get_input_name(0)
        shape_dict = dict()
        dtype_dict = dict()
        for ind in range(0, self.get_num_inputs()):
            input_name = self.get_input_name(ind)
            input_tensor = self.get_input(ind)
            shape_dict[input_name] = input_tensor.shape
            dtype_dict[input_name] = input_tensor.dtype
        return shape_dict, dtype_dict
6,108
29.242574
98
py
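One detail of `AotModule.set_input` worth noting: when uploading a parameter dict it sorts keys by descending element count (`-np.prod(shape)`) so the biggest arrays go first, which avoids memory issues in RPC mode. The ordering logic can be sketched standalone with plain shape tuples instead of NDArrays (`upload_order` is an illustrative helper, not TVM API):

```python
# Sketch of set_input's "biggest arrays first" upload order, using plain
# shape tuples instead of NDArrays. math.prod replaces np.prod here.
from math import prod


def upload_order(shapes):
    """Return parameter names sorted so the largest arrays come first."""
    keys = list(shapes.keys())
    keys.sort(key=lambda k: -prod(shapes[k]))
    return keys


order = upload_order({"bias": (128,), "weight": (1024, 1024), "scale": (1,)})
```

With these shapes, `weight` (1,048,576 elements) is uploaded before `bias` (128) and `scale` (1).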
tvm
tvm-main/python/tvm/runtime/executor/__init__.py
# (standard ASF Apache-2.0 license header, identical to the one above)
"""This module contains Python wrappers for the TVM C++ Executor implementations.

NOTE: at present, only AOT Executor is contained here.  The others are:
 - GraphExecutor, in python/tvm/contrib/graph_executor.py
 - VM Executor, in python/tvm/runtime/vm.py

TODO(areusch): Consolidate these into this module.
"""
from .aot_executor import AotModule
1,134
41.037037
81
py
tvm
tvm-main/python/tvm/arith/_ffi_api.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """FFI APIs for tvm.arith""" import tvm._ffi tvm._ffi._init_api("arith", __name__)
870
38.590909
62
py
tvm
tvm-main/python/tvm/arith/int_set.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Integer set.""" import tvm._ffi from tvm.runtime import Object from . import _ffi_api class IntSet(Object): """Represent a set of integer in one dimension.""" def is_nothing(self): """Whether the set represent nothing""" return _ffi_api.IntSetIsNothing(self) def is_everything(self): """Whether the set represent everything""" return _ffi_api.IntSetIsEverything(self) @staticmethod def vector(vec): """Construct an integer set that covers the vector expr Parameters ---------- vec : PrimExpr The vector expression. Returns ------- rset : IntSet The result set. """ return _ffi_api.intset_vector(vec) @staticmethod def single_point(point): """Construct a point set. Parameters ---------- point : PrimExpr The vector expression. Returns ------- rset : IntSet The result set. """ return _ffi_api.intset_single_point(point) @tvm._ffi.register_object("arith.IntervalSet") class IntervalSet(IntSet): """Represent set of continuous interval [min_value, max_value] Parameters ---------- min_value : PrimExpr The minimum value in the interval. max_value : PrimExpr The maximum value in the interval. 
""" def __init__(self, min_value, max_value): self.__init_handle_by_constructor__(_ffi_api.IntervalSet, min_value, max_value) def estimate_region_lower_bound(region, var_dom, predicate): """Analyze the region with affine map, given the domain of variables and their predicate Some subregion may be discarded during the lower-bound analysis. Parameters ---------- region : List[Range] The region to be analyzed. var_dom : Dict[Var, Range] The ranges of the variables predicate : PrimExpr The predicate for the affine map Returns ---------- region_int_set : Optional[List[IntSet]] None if the detection fails, or an array of IntSets as the result of analysis """ return _ffi_api.EstimateRegionLowerBound(region, var_dom, predicate) def estimate_region_strict_bound(region, var_dom, predicate): """Analyze the region with affine map, given the domain of variables and their predicate The result should be strict, i.e. no region is discarded or relaxed. Parameters ---------- region : List[Range] The region to be analyzed. var_dom : Dict[Var, Range] The ranges of the variables predicate : PrimExpr The predicate for the affine map Returns ---------- region_int_set : Optional[List[IntSet]] None if the detection fails, or an array of IntSets as the result of analysis """ return _ffi_api.EstimateRegionStrictBound(region, var_dom, predicate) def estimate_region_upper_bound(region, var_dom, predicate): """Analyze the region with affine map, given the domain of variables and their predicate Relaxation of the region may be used in upper-bound analysis, i.e. some extra region may be added to the result. Parameters ---------- region : List[Range] The region to be analyzed. 
    var_dom : Dict[Var, Range]
        The ranges of the variables

    predicate : PrimExpr
        The predicate for the affine map

    Returns
    ----------
    region_int_set : List[IntSet]
        an array of IntSets as the result of analysis
    """
    return _ffi_api.EstimateRegionUpperBound(region, var_dom, predicate)


def pos_inf():
    """Returns the symbolic positive infinity

    Returns
    ----------
    pos_inf : Var
        A symbolic var that indicates positive infinity
    """
    return _ffi_api.PosInf()


def neg_inf():
    """Returns the symbolic negative infinity

    Returns
    ----------
    neg_inf : Var
        A symbolic var that indicates negative infinity
    """
    return _ffi_api.NegInf()


def union_lower_bound(sets):
    """Create a lower-bound of union set, where some of the segments may be dropped

    Parameters
    ----------
    sets : List[IntSet]
        The sets to be combined

    Returns
    ----------
    union_lower_bound : List[IntSet]
        An N-dimensional integer set, the lower bound of the union
    """
    return _ffi_api.UnionLowerBound(sets)
5,198
26.363158
92
py
tvm
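The `IntervalSet` class above represents a continuous interval `[min_value, max_value]`, and `union_lower_bound` shows that set unions in this domain are approximations. A minimal plain-Python analogue, for intuition only (TVM's real `IntSet` works on symbolic `PrimExpr`s, not concrete ints):

```python
# Toy interval domain: single_point(p) is the degenerate interval [p, p],
# and union returns the convex hull, an over-approximation of the set union.
class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    @staticmethod
    def single_point(p):
        # A point set is the interval [p, p]
        return Interval(p, p)

    def union(self, other):
        # Convex hull of two intervals; may include values in neither input
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def contains(self, x):
        return self.lo <= x <= self.hi


a = Interval(0, 4)
b = Interval.single_point(10)
u = a.union(b)
```

Note that `u` contains 7 even though neither input does: interval union is a sound over-approximation, which is why the API above distinguishes lower-bound and upper-bound analyses.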
tvm-main/python/tvm/arith/iter_affine_map.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ Iterator (quasi)affine mapping patterns.""" from enum import IntEnum import tvm._ffi from tvm.runtime import Object from tvm.ir import PrimExpr from . import _ffi_api class IterMapExpr(PrimExpr): """Base class of all IterMap expressions.""" @tvm._ffi.register_object("arith.IterMark") class IterMark(Object): """Mark the source as an iterator in [0, extent). Parameters ---------- source : PrimExpr. The source expression. extent : PrimExpr The extent of the iterator. """ def __init__(self, source, extent): self.__init_handle_by_constructor__(_ffi_api.IterMark, source, extent) @tvm._ffi.register_object("arith.IterSplitExpr") class IterSplitExpr(IterMapExpr): """Split of an iterator. result = floormod(floordiv(source, lower_factor), extent) * scale Parameters ---------- source : IterMark The source marked iterator. lower_factor : PrimExpr The lower factor to split the domain. extent : PrimExpr The extent of the split. scale : PrimExpr Additional scale to the split. 
""" def __init__(self, source, lower_factor, extent, scale): self.__init_handle_by_constructor__( _ffi_api.IterSplitExpr, source, lower_factor, extent, scale ) @tvm._ffi.register_object("arith.IterSumExpr") class IterSumExpr(IterMapExpr): """Fuse multiple iterators by summing them with scaling. result = sum(args) + base Parameters ---------- args : List[IterSplitExpr] The input to the sum expression. base : PrimExpr The base offset. """ def __init__(self, args, base): self.__init_handle_by_constructor__(_ffi_api.IterSumExpr, args, base) class IterMapLevel(IntEnum): """Possible kinds of iter mapping check level.""" Bijective = 0 Surjective = 1 NoCheck = 3 @staticmethod def from_str(name: str): """Helper to create level enum from string""" if name is None: return IterMapLevel.NoCheck name = name.lower() if name == "bijective": check_level = IterMapLevel.Bijective elif name == "surjective": check_level = IterMapLevel.Surjective elif name == "nocheck": check_level = IterMapLevel.NoCheck else: raise ValueError(f"Unknown check level {name}") return check_level def detect_iter_map( indices, input_iters, predicate=True, check_level=IterMapLevel.Surjective, simplify_trivial_iterators=True, ): """Detect if indices can be written as mapped iters from input iters Parameters ---------- indices : List[PrimExpr] The input indices input_iters : Map[Var, Range] The domain of each input iterators. predicate : PrimExpr The predicate constraints on the input iterators check_level : Union[str, IterMapLevel] Checking level of iteration mapping simplify_trivial_iterators: bool If true, iterators with extent of 1 will be replaced with a constant value. Returns ------- results : IterMapResult The iter map matching result. The result's .indices is empty array if no match can be found. 
""" if isinstance(check_level, str): check_level = IterMapLevel.from_str(check_level) elif check_level is None: check_level = IterMapLevel.NoCheck return _ffi_api.DetectIterMap( indices, input_iters, predicate, check_level, simplify_trivial_iterators ) def normalize_to_iter_sum(index, input_iters): """Normalize expr to iter sum. The normalized result ensures that each scale is in the form of (symbol_prod) * cscale It will also sort in desc order by cscale then len(symbol_prod). Parameters ---------- index : PrimExpr The input index input_iters : Map[Var, Range] The domain of each input iterators. Returns ------- iter_sum: IterSumExpr The result iter sum Note ---- This function does best effort detection, so some undetected part can go into iter_sum.base This function is useful to decide the stride multiplier and division factor in buffer access patterns. """ return _ffi_api.NormalizeToIterSum(index, input_iters) def iter_map_simplify( indices, input_iters, predicate=True, check_level=IterMapLevel.Surjective, simplify_trivial_iterators=True, ): """Simplify the indices using iter map detection. Parameters ---------- indices : List[PrimExpr] The input indices input_iters : Map[Var, Range] The domain of each input iterators. predicate : PrimExpr The predicate constraints on the input iterators check_level : Union[str, IterMapLevel] Checking level of iteration mapping simplify_trivial_iterators: bool If true, iterators with extent of 1 will be replaced with a constant value. Returns ------- results : IterMapResult The iter map matching result. The result's .indices is empty array if no match can be found. 
""" if isinstance(check_level, str): check_level = IterMapLevel.from_str(check_level) elif check_level is None: check_level = IterMapLevel.NoCheck return _ffi_api.IterMapSimplify( indices, input_iters, predicate, check_level, simplify_trivial_iterators ) def normalize_iter_map_to_expr(expr): """Given an IterMapExpr, transform it to normal PrimExpr Parameters ---------- expr : IterMapExpr the input IterMapExpr Returns ------- result : PrimExpr the corresponding normal PrimExpr """ return _ffi_api.NormalizeIterMapToExpr(expr) def subspace_divide( bindings, input_iters, sub_iters, predicate=True, check_level=IterMapLevel.Surjective, simplify_trivial_iterators=True, ): """Detect if bindings can be written as [a_0*e_0 + b_0 + c_0, a_1*e_1 + b_1, ..., a_n*e_n + b_n] where a = some-quasi-affine-iter-map(input_iters set_minus sub_iters) b = some-quasi-affine-iter-map(sub_iters) c is constant symbols e is the extent of b For example, z*12 + y*3 + x + c = (z*4+y)*3 + x bindings = [z*12 + y*3 + x + c] input_iters = [z, y, x] sub_iter = [x] Then the result will be [a, b] where a = [z*4 + y] b = [x] Parameters ---------- bindings : List[PrimExpr] The input bindings input_iters : Map[Var, Range] The domain of input iterator, which is the basis of the whole space sub_iters : Array[Var] The subset of input_iters, which is the basis of the subspace predicate : PrimExpr The predicate constraints on the input iterators check_level : Union[str, IterMapLevel] Checking level of iteration mapping simplify_trivial_iterators: bool If true, iterators with extent of 1 will be replaced with a constant value. Returns ------- results : List[List[PrimExpr]] The result list has length len(bindings) + 1 [0, len(bindings)): The iter map matching result. The inner list is of length 2. The first expr is the basis of the quotient space. The second expr is the basis of the subspace. len(bindings): the predicate of outer space and inner space Empty array if no match can be found. 
""" if isinstance(check_level, str): check_level = IterMapLevel.from_str(check_level) return _ffi_api.SubspaceDivide( bindings, input_iters, sub_iters, predicate, check_level, simplify_trivial_iterators ) def inverse_affine_iter_map(iter_map, outputs): """Apply the inverse of the affine transformation to the outputs. Similar to the back-propagation, starting from the outputs, it visits the DAG of the expressions in reverse topology order and applies the inverse of the affine transformation until it reaches the input. The affine iter map is required to be bijective. For example, iter_map = [l0 // 16, l0 % 16], outputs = [output_0, output_1], the affine transformation specified by `iter_map` will be applied to `outputs` and the result will be {l0: ((output_0*16) + output_1)}. See also :any:`detect_iter_map`. Parameters ---------- iter_map : List[IterSumExpr] The bijective affine iter map. outputs : List[PrimExpr] The outputs of the affine transformation. Returns ------- results : Map[Var, PrimExpr] The map from the input to the transformed result. """ return _ffi_api.InverseAffineIterMap(iter_map, outputs)
9,684
27.997006
100
py
tvm
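The `subspace_divide` docstring above claims that the binding `z*12 + y*3 + x` factors into an outer iterator `z*4 + y` and an inner iterator `x`. A quick numeric sanity check of that identity, with illustrative extents 2, 4, 3 chosen for `z`, `y`, `x` (any extents consistent with the split would do):

```python
# Verify z*12 + y*3 + x == (z*4 + y)*3 + x over the full iteration space.
for z in range(2):
    for y in range(4):
        for x in range(3):
            fused = z * 12 + y * 3 + x
            outer = z * 4 + y  # quotient-space iterator, "a" in the docstring
            inner = x          # subspace iterator, "b" in the docstring
            assert fused == outer * 3 + inner

ok = True
```

The scale 3 on the outer iterator is exactly the extent of the inner subspace, which is the "e is the extent of b" relation stated in the docstring.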
tvm-main/python/tvm/arith/analyzer.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name """Arithmetic data structure and utility""" from enum import IntEnum import tvm._ffi from tvm.runtime import Object from . import _ffi_api class ProofStrength(IntEnum): """Proof strength of the analysis""" DEFAULT = 0 SYMBOLIC_BOUND = 1 @tvm._ffi.register_object("arith.ModularSet") class ModularSet(Object): """Represent range of (coeff * x + base) for x in Z""" def __init__(self, coeff, base): self.__init_handle_by_constructor__(_ffi_api.ModularSet, coeff, base) @tvm._ffi.register_object("arith.ConstIntBound") class ConstIntBound(Object): """Represent constant integer bound Parameters ---------- min_value : int The minimum value of the bound. max_value : int The maximum value of the bound. """ POS_INF = (1 << 63) - 1 NEG_INF = -POS_INF def __init__(self, min_value, max_value): self.__init_handle_by_constructor__(_ffi_api.ConstIntBound, min_value, max_value) class ConstraintScope: """Constraint scope. Parameters ---------- fenter : function A function that will be called to create an enter context. 
Note ---- Do not create object directly, use Analyzer.constraint_scope """ def __init__(self, fenter): self._fenter = fenter self._fexit = None def __enter__(self): self._fexit = self._fenter() def __exit__(self, ptype, value, trace): self._fexit() class Analyzer: """Integer arithmetic analyzer This is a stateful analyzer class that can be used to perform various symbolic integer analysis. """ def __init__(self): _mod = _ffi_api.CreateAnalyzer() self._const_int_bound = _mod("const_int_bound") self._const_int_bound_update = _mod("const_int_bound_update") self._bind = _mod("bind") self._modular_set = _mod("modular_set") self._simplify = _mod("Simplify") self._rewrite_simplify = _mod("rewrite_simplify") self._get_rewrite_simplify_stats = _mod("get_rewrite_simplify_stats") self._reset_rewrite_simplify_stats = _mod("reset_rewrite_simplify_stats") self._canonical_simplify = _mod("canonical_simplify") self._int_set = _mod("int_set") self._enter_constraint_context = _mod("enter_constraint_context") self._can_prove_equal = _mod("can_prove_equal") self._can_prove = _mod("can_prove") def const_int_bound(self, expr): """Find constant integer bound for expr. Parameters ---------- expr : PrimExpr The expression. Returns ------- bound : ConstIntBound The result bound """ return self._const_int_bound(expr) def modular_set(self, expr): """Find a modular set that expr belongs to. Parameters ---------- expr : PrimExpr The expression. Returns ------- result : ModularSet The result. """ return self._modular_set(expr) def simplify(self, expr, steps=2): """Simplify expression via both rewrite and canonicalization. Parameters ---------- expr : PrimExpr The expression. steps : The simplification runs in the order of rewrite_simplify (step 1) -> canonical_simplify (step 2) -> rewrite_simplify (step 3) -> canonical_simplify (step 4) -> ... param steps controls how many steps to run. Default is 2, i.e., rewrite_simplify + canonical_simplify. Returns ------- result : Expr The result. 
""" return self._simplify(expr, steps) def rewrite_simplify(self, expr): """Simplify expression via rewriting rules. Parameters ---------- expr : PrimExpr The expression. Returns ------- result : Expr The result. """ return self._rewrite_simplify(expr) @property def rewrite_simplify_stats(self): return self._get_rewrite_simplify_stats() def reset_rewrite_simplify_stats(self): self._reset_rewrite_simplify_stats() def canonical_simplify(self, expr): """Simplify expression via canonicalization. Parameters ---------- expr : PrimExpr The expression. Returns ------- result : Expr The result. """ return self._canonical_simplify(expr) def int_set(self, expr, dom_map): """Compute a symbolic IntSet that covers expr for all values in dom_map. Parameters ---------- expr : PrimExpr The expression. dom_map : Dict[Var, tvm.arith.IntSet] The domain for variables to be relaxed. Returns ------- result : IntSet The result. """ return self._int_set(expr, dom_map) def can_prove(self, expr, strength=ProofStrength.DEFAULT): """Check whether we can prove expr to be true. Parameters ---------- expr : PrimExpr The expression. strength: ProofStrength The proof strength Returns ------- result : Expr The result. """ return self._can_prove(expr, strength) def bind(self, var, expr): """Bind a variable to the expression. Parameters ---------- var : tvm.tir.Var The variable. expr : PrimExpr The expression. """ return self._bind(var, expr) def constraint_scope(self, constraint): """Create a constraint scope. Parameters ---------- constraint : PrimExpr The constraint expression. returns ------- scope : ConstraintScope The constraint scope Examples -------- .. 
code-block:: python

          x = te.var("x")
          analyzer = tvm.arith.Analyzer()
          with analyzer.constraint_scope(x % 3 == 0):
              # constraint in effect
              assert analyzer.modular_set(x).coeff == 3
          # constraint no longer in effect
          assert analyzer.modular_set(x).coeff != 3
        """

        def _fenter():
            return self._enter_constraint_context(constraint)

        return ConstraintScope(_fenter)

    def update(self, var, info, override=False):
        """Update information about var

        Parameters
        ----------
        var : tvm.tir.Var
            The variable.

        info : tvm.Object
            Related information.

        override : bool
            Whether allow override.
        """
        if isinstance(info, ConstIntBound):
            self._const_int_bound_update(var, info, override)
        else:
            raise TypeError("Do not know how to handle type {}".format(type(info)))

    def can_prove_equal(self, lhs: "PrimExpr", rhs: "PrimExpr"):
        """Whether we can prove that lhs == rhs

        Parameters
        ----------
        lhs: PrimExpr
            The left-hand side of the comparison

        rhs: PrimExpr
            The right-hand side of the comparison

        Returns
        -------
        result: bool
            Whether we can prove that lhs == rhs
        """
        return self._can_prove_equal(lhs, rhs)
8,309
25.806452
89
py
tvm
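`Analyzer.modular_set` above tracks which residue class an expression belongs to (values of the form `coeff * x + base`). A toy version of that analysis for intuition only; the `ModSet` class, `add`, and `mul_const` helpers are hypothetical and not TVM's implementation:

```python
# Toy modular-set analysis: each expression is modeled as coeff * k + base.
# Addition combines via gcd of the coefficients; coeff == 0 models a constant.
from math import gcd


class ModSet:
    def __init__(self, coeff, base):
        self.coeff, self.base = coeff, base


def add(a, b):
    # (c1*k + b1) + (c2*m + b2) lies in gcd(c1, c2)*n + (b1 + b2)
    c = gcd(a.coeff, b.coeff)
    return ModSet(c, (a.base + b.base) % c if c else a.base + b.base)


def mul_const(a, s):
    # Scaling by a constant scales both the coefficient and the base
    return ModSet(a.coeff * s, a.base * s)


x = ModSet(3, 0)  # models the constraint x % 3 == 0
e = add(mul_const(x, 2), ModSet(0, 6))  # the expression 2*x + 6
```

Under `x % 3 == 0`, the expression `2*x + 6` is always a multiple of 6, so the analysis yields coefficient 6 and base 0.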
tvm-main/python/tvm/arith/pattern.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Detect common patterns.""" from typing import Dict from tvm.tir import PrimExpr from . import _ffi_api def detect_linear_equation(expr, var_list): """Match `expr = sum_{i=0}^{n-1} var[i] * coeff[i] + coeff[n]` Where coeff[i] and base are invariant of var[j] for all i and j. Parameters ---------- expr : PrimExpr The expression to be matched. var_list : List[tvm.tir.Var] A list of variables. Returns ------- coeff : List[PrimExpr] A list of co-efficients if the match is successful. An empty list if the match failed. """ return _ffi_api.DetectLinearEquation(expr, var_list) def detect_clip_bound(expr, var_list): """Detect if expression corresponds to clip bound of the vars Parameters ---------- expr : PrimExpr The expression to be matched. var_list : List[tvm.tir.Var] A list of variables. Returns ------- coeff : List[PrimExpr] `concat([min_value[i], max_value[i]] for i, v in enumerate(var_list))` An empty list if the match failed. """ return _ffi_api.DetectClipBound(expr, var_list) def detect_common_subexpr(expr: PrimExpr, threshold: int) -> Dict[PrimExpr, int]: """Detect common sub expression which shows up more than a threshold times Parameters ---------- expr : PrimExpr The expression to be analyzed. 
threshold : int The threshold of repeat times that determines a common sub expression Returns ------- cse_dict : Dict[PrimExpr, int] The detected common sub expression dict, with sub expression and repeat times """ return _ffi_api.DetectCommonSubExpr(expr, threshold)
2,504
28.821429
85
py
tvm
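`detect_linear_equation` above matches `expr = sum(var[i] * coeff[i]) + coeff[n]` symbolically. The same idea can be illustrated numerically: for a concrete linear Python function, the coefficients fall out of evaluations at zero and at unit vectors. This `detect_linear` helper is a hypothetical stand-in, not TVM's API:

```python
# Recover coefficients of f(v) = sum(coeff[i]*v[i]) + base by evaluating f
# at the zero vector (gives base) and at each unit vector (gives coeff[i]).
def detect_linear(f, nvars):
    zero = [0] * nvars
    base = f(zero)
    coeffs = []
    for i in range(nvars):
        unit = zero.copy()
        unit[i] = 1
        coeffs.append(f(unit) - base)
    return coeffs, base


coeffs, base = detect_linear(lambda v: 2 * v[0] + 3 * v[1] + 5, 2)
```

TVM's version additionally verifies that each coefficient is invariant of all the variables, which the numeric trick cannot check; a non-linear `f` would silently produce wrong coefficients here.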
tvm-main/python/tvm/arith/int_solver.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""integer constraints data structures and solvers"""
import tvm._ffi
from tvm.runtime import Object
from . import _ffi_api


@tvm._ffi.register_object("arith.IntGroupBounds")
class IntGroupBounds(Object):
    """Represent integer grouped bounds which are classified into
    lower bounds (include), upper bounds (include) and equalities.

    Parameters
    ----------
    coef : tvm.ir.PrimExpr
        The coefficient. Must be integer type.
        coef * var >= lower
        coef * var == equal
        coef * var <= upper
    lower : List[tvm.ir.PrimExpr]
        the lower bounds (include)
    equal : List[tvm.ir.PrimExpr]
        equalities
    upper : List[tvm.ir.PrimExpr]
        the upper bounds (include)
    """

    def __init__(self, coef, lower, equal, upper):
        self.__init_handle_by_constructor__(_ffi_api.IntGroupBounds, coef, lower, equal, upper)

    @staticmethod
    def from_range(rng):
        """Construct a IntGroupedBounds by Range.

        Parameters
        ----------
        rng : tvm.ir.Range

        Returns
        -------
        ret : Range
            The constructed range.
        """
        return _ffi_api.IntGroupBounds_from_range(rng)

    def find_best_range(self):
        """Return the best range from the grouped bounds.
        None if (-inf, +inf).
        """
        return _ffi_api.IntGroupBounds_FindBestRange(self)


@tvm._ffi.register_object("arith.IntConstraints")
class IntConstraints(Object):
    """Represent a set of integer constraints including variables, their ranges and
    the relations between them (either equations or inequalities)

    Parameters
    ----------
    variables : List[tvm.tir.Var]
        The variables in the constraints. Must be integers

    ranges : Map[tvm.tir.Var, tvm.ir.Range]
        The ranges of the variables.

    relations : List[tvm.ir.PrimExpr]
        The relations between the variables (either equations or inequalities)
    """

    def __init__(self, variables, ranges, relations):
        self.__init_handle_by_constructor__(_ffi_api.IntConstraints, variables, ranges, relations)


@tvm._ffi.register_object("arith.IntConstraintsTransform")
class IntConstraintsTransform(Object):
    """We can have different sets of variables to represent the same integer constraints.
    For example, the following two constraints are equivalent,
    {a + b = 0 | a >= 0, b >= 0} and
    {m - n = 0 | m >= 0, n <= 0}
    This data structure represents the transformation
    between two equivalent integer constraints.
    In the above example,
    src        : {a + b = 0 | a >= 0, b >= 0}
    dst        : {m - n = 0 | m >= 0, n <= 0}
    src_to_dst : {a -> m, b -> -n}
    dst_to_src : {m -> a, n -> -b}

    Parameters
    ----------
    src : arith.IntConstraints
        source integer constraints, e.g., {a + b = 0 | a >= 0, b >= 0}

    dst : arith.IntConstraints
        integer constraints equivalent to the source,
        e.g., {m - n = 0 | m >= 0, n <= 0}

    src_to_dst : Map[tvm.tir.Var, tvm.ir.PrimExpr]
        mapping from variables in the src to the variables in the dst,
        e.g., {a -> m, b -> -n}

    dst_to_src : Map[tvm.tir.Var, tvm.ir.PrimExpr]
        mapping from variables in the dst to the variables in the src,
        e.g., {m -> a, n -> -b}
    """

    def __init__(self, src, dst, src_to_dst, dst_to_src):
        self.__init_handle_by_constructor__(
            _ffi_api.IntConstraintsTransform, src, dst, src_to_dst, dst_to_src
        )


def solve_linear_equations(equations, variables=None, ranges=None):
    """Solve linear equations.
    Parameters
    ----------
    equations: List[tvm.ir.PrimExpr] or IntConstraints
        The equations of the variables

    variables : Optional[List[tvm.tir.Var]]
        The variables in the system.

    ranges : Optional[Map[tvm.tir.Var, tvm.ir.Range]]
        The ranges of the variables.

    Returns
    -------
    int_constraints_transform : IntConstraintsTransform
        New integer constraints, with fewer variables (if the problem is NOT of full rank),
        or no variable (if the problem is of full rank),
        or an empty integer constraints (if the problem is unsolvable).
        It also provides the ranges of the variables in the new system,
        as well as inequalities inferred from the problem.
        You can get the mapping from the original variables to the solution via
        int_constraints_transform.src_to_dst.
    """
    if isinstance(equations, IntConstraints):
        return _ffi_api.SolveLinearEquations(equations)
    return _ffi_api.SolveLinearEquations(variables, ranges, equations)


def solve_linear_inequalities(equations, variables=None, ranges=None, deskew_range=False):
    """Solve linear inequalities.

    Parameters
    ----------
    equations : List[tvm.ir.PrimExpr] or IntConstraints
        The inequalities of the variables

    variables : Optional[List[tvm.tir.Var]]
        The variables in the system.

    ranges : Optional[Map[tvm.tir.Var, tvm.ir.Range]]
        The ranges of the variables.

    deskew_range: Optional[bool]
        Whether to deskew the result ranges to start from zero.
        Default false.

    Returns
    -------
    ret_ranges: IntConstraints or IntConstraintsTransform
        The result ranges for each variable.
        Constraints that cannot be transformed to Range will be stored in
        IntConstraints.relations.
        If deskew_range is set (=True), the result ranges will be deskewed
        to start from zero. New variables are created accordingly,
        therefore IntConstraintsTransform is returned.
""" solver = ( _ffi_api.SolveInequalitiesDeskewRange if deskew_range else _ffi_api.SolveInequalitiesToRange ) if isinstance(equations, IntConstraints): assert variables is None assert ranges is None return solver(equations) return solver(variables, ranges, equations)
6,743
36.259669
100
py
tvm
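The `deskew_range` option of `solve_linear_inequalities` above shifts each variable's range so it starts at zero, recording substitutions in both directions (conceptually the `src_to_dst` / `dst_to_src` maps of an `IntConstraintsTransform`). A small plain-Python sketch of that idea, with a hypothetical `deskew` helper and a `{name: (min, extent)}` range encoding chosen for illustration:

```python
# Shift each variable's range to start at zero, keeping offset maps so the
# original and deskewed variables can be converted into one another.
def deskew(ranges):
    # ranges: {name: (min, extent)} -> (new_ranges, src_to_dst, dst_to_src)
    new_ranges, src_to_dst, dst_to_src = {}, {}, {}
    for name, (lo, extent) in ranges.items():
        new_ranges[name] = (0, extent)
        src_to_dst[name] = -lo  # old_var + (-lo) == new_var
        dst_to_src[name] = lo   # new_var + lo == old_var
    return new_ranges, src_to_dst, dst_to_src


new_ranges, fwd, bwd = deskew({"i": (5, 7), "j": (-2, 4)})
```

So a loop variable `i` in `[5, 12)` becomes a fresh variable in `[0, 7)` with `i = new_i + 5`, which is why the deskewed form returns a transform rather than plain constraints.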
tvm-main/python/tvm/arith/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Integer bound analysis, simplification and pattern detection.""" from .int_set import ( IntSet, IntervalSet, estimate_region_lower_bound, estimate_region_strict_bound, estimate_region_upper_bound, ) from .analyzer import ModularSet, ConstIntBound, Analyzer, ProofStrength from .bound import deduce_bound from .pattern import detect_linear_equation, detect_clip_bound, detect_common_subexpr from .int_solver import solve_linear_equations, solve_linear_inequalities from .iter_affine_map import IterMapExpr, IterMark, IterSplitExpr, IterSumExpr from .iter_affine_map import ( detect_iter_map, iter_map_simplify, normalize_iter_map_to_expr, normalize_to_iter_sum, subspace_divide, inverse_affine_iter_map, )
1,538
38.461538
85
py
tvm
tvm-main/python/tvm/arith/bound.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""Bound deduction."""
from . import _ffi_api


def deduce_bound(var, cond, hint_map, relax_map):
    """Deduce the bound of the target variable in the cond.

    Parameters
    ----------
    var : Var
        The target variable to be deduced.

    cond : PrimExpr
        The condition

    hint_map : Map[Var, IntSet]
        Domain of variables used to help deduction.

    relax_map : Map[Var, IntSet]
        The domain of the variables to be relaxed using the provided domain.
    """
    return _ffi_api.DeduceBound(var, cond, hint_map, relax_map)
1,350
32.775
63
py
tvm
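`deduce_bound` above derives a bound on a variable from a condition, e.g. from `x + 3 < 10` it can deduce `x <= 6`. A brute-force plain-Python illustration of the same question (the `deduce_upper_bound` helper and the finite search window are assumptions of this sketch; TVM does it symbolically):

```python
# Scan a finite window for the largest B such that cond(x) holds for
# every integer x up to and including B.
def deduce_upper_bound(cond, lo=-100, hi=100):
    best = None
    for x in range(lo, hi + 1):
        if cond(x):
            best = x
        else:
            break
    return best


# For the condition x + 3 < 10, the deduced bound is x <= 6.
bound = deduce_upper_bound(lambda x: x + 3 < 10)
```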
tvm-main/python/tvm/relay/base.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=no-else-return, unidiomatic-typecheck, unused-import
"""The base node types for the Relay language."""
import os

import tvm._ffi
from tvm.ir import Node as RelayNode
from tvm.ir import SourceName, Span, SequentialSpan
from tvm.runtime import Object

from . import _ffi_api

__STD_PATH__ = os.path.join(os.path.dirname(os.path.realpath(__file__)), "std")


def pretty_print(obj: Object) -> str:
    """Pretty print the object."""
    return _ffi_api.PrettyPrint(obj)  # type: ignore # pylint: disable=no-member


def astext(obj: Object, show_meta_data=True, annotate=None):
    """Get the text format of the expression.

    Parameters
    ----------
    obj : Object
        The object to be printed.
    show_meta_data : bool
        Whether to include meta data section in the text
        if there is meta data.
    annotate: Optional[Object->str]
        Optionally annotate function to provide additional
        information in the comment block.

    Returns
    -------
    text : str
        The text format of the expression.

    Notes
    -----
    The meta data section is necessary to fully parse the text format.
    However, it can contain dumps that are big (e.g. constant weights),
    so it can be helpful to skip printing the meta data section.
""" return _ffi_api.AsText(obj, show_meta_data, annotate) # type: ignore # pylint: disable=no-member @tvm._ffi.register_func("tvm.relay.std_path") def _std_path(): return __STD_PATH__ @tvm._ffi.register_object("relay.Id") class Id(Object): """Unique identifier(name) used in Var. Guaranteed to be stable across all passes. """ def __init__(self): raise RuntimeError("Cannot directly construct Id")
tvm-main/python/tvm/relay/parser.py
# pylint: disable=invalid-name
"""The relay parser."""
from . import _ffi_api_parser


def parse(source, source_name="from_string", init_module=None, init_meta_table=None):
    if init_meta_table is None:
        init_meta_table = {}
    return _ffi_api_parser.ParseModuleInContext(  # type: ignore # pylint: disable=no-member
        source_name,
        source,
        init_module,
        init_meta_table,
    )


def parse_expr(source):
    return _ffi_api_parser.ParseExpr("string", source)  # type: ignore # pylint: disable=no-member


def fromtext(source, source_name="from_string"):
    return parse(source, source_name)


def SpanCheck():
    """A debugging utility for reporting missing span information."""
    return _ffi_api_parser.SpanCheck()  # type: ignore # pylint: disable=no-member
tvm-main/python/tvm/relay/scope_builder.py
"""The scope builder interface."""
from __future__ import absolute_import

from . import ty as _ty
from . import expr as _expr
from .._ffi import base as _base


class WithScope(object):
    """A wrapper for builder methods which introduce scoping.

    Parameters
    ----------
    enter_value: object
        The value returned by enter.
    """

    def __init__(self, enter_value, exit_cb):
        self._enter_value = enter_value
        self._exit_cb = exit_cb

    def __enter__(self):
        return self._enter_value

    def __exit__(self, ptype, value, trace):
        if value:
            raise value
        self._exit_cb()


def _make_lets(bindings, ret_value):
    """Make a nested let expression.

    Parameters
    ----------
    bindings: List[Tuple[tvm.relay.Var, tvm.relay.Expr]]
        The sequence of let bindings.

    ret_value: tvm.relay.Expr
        The final value of the expression.

    Returns
    -------
    lets: tvm.relay.Expr
        A nested let expression.
    """
    if ret_value is None:
        raise RuntimeError("ret is not called in this scope")
    if isinstance(ret_value, _expr.If) and ret_value.false_branch is None:
        raise RuntimeError("Creating an If expression without else.")
    let_expr = ret_value
    for var, value in reversed(bindings):
        let_expr = _expr.Let(var, value, let_expr)
    return let_expr


class ScopeBuilder(object):
    """Scope builder class.

    Enables users to build up a nested scope(let, if) expression easily.

    Examples
    --------
    .. code-block:: python

        sb = relay.ScopeBuilder()
        cond = relay.var("cond", 'bool')
        x = relay.var("x")
        y = relay.var("y")

        with sb.if_scope(cond):
            one = relay.const(1, "float32")
            t1 = sb.let("t1", relay.add(x, one))
            sb.ret(t1)
        with sb.else_scope():
            sb.ret(y)

        print(sb.get().astext())
    """

    def __init__(self):
        self._bindings = [[]]
        self._ret_values = [None]

    def _enter_scope(self):
        self._bindings.append([])
        self._ret_values.append(None)

    def _exit_scope(self):
        bindings = self._bindings.pop()
        ret_value = self._ret_values.pop()
        return bindings, ret_value

    def let(self, var, value):
        """Create a new let binding.

        Parameters
        ----------
        var: Union[Tuple[str, relay.Type], tvm.relay.Var]
            The variable or name of variable.

        value: tvm.relay.Expr
            The value to be bound.
        """
        if isinstance(var, (tuple, list)):
            if len(var) > 2:
                raise ValueError("Expect var to be Tuple[str, relay.Type]")
            var = _expr.var(*var)
        elif isinstance(var, _base.string_types):
            var = _expr.var(var)
        self._bindings[-1].append((var, value))
        return var

    def if_scope(self, cond):
        """Create a new if scope.

        Parameters
        ----------
        cond: tvm.relay.expr.Expr
            The condition.

        Returns
        -------
        scope: WithScope
            The if scope.

        Note
        ----
        The user must follow with an else scope.
        """
        self._enter_scope()

        def _on_exit():
            bindings, ret_value = self._exit_scope()
            if self._ret_values[-1] is not None:
                raise RuntimeError("result already returned before if scope")
            true_branch = _make_lets(bindings, ret_value)
            self._ret_values[-1] = _expr.If(cond, true_branch, None)

        return WithScope(None, _on_exit)

    def else_scope(self):
        """Create a new else scope.

        Returns
        -------
        scope: WithScope
            The else scope.
        """
        self._enter_scope()

        def _on_exit():
            bindings, ret_value = self._exit_scope()
            partial_if = self._ret_values[-1]
            no_else = not isinstance(partial_if, _expr.If) or partial_if.false_branch is not None
            if no_else:
                raise RuntimeError("else scope must follow an if scope")
            false_branch = _make_lets(bindings, ret_value)
            self._ret_values[-1] = _expr.If(partial_if.cond, partial_if.true_branch, false_branch)

        return WithScope(None, _on_exit)

    def type_of(self, expr):
        """Compute the type of an expression.

        Parameters
        ----------
        expr: relay.Expr
            The expression to compute the type of.
        """
        if isinstance(expr, _expr.Var):
            return expr.type_annotation
        ity = _ty.IncompleteType()
        var = _expr.var("unify", ity)
        self.let(var, expr)
        return ity

    def ret(self, value):
        """Set the return value of this scope.

        Parameters
        ----------
        value: tvm.relay.expr.Expr
            The return value.
        """
        if self._ret_values[-1] is not None:
            raise RuntimeError("ret value is already set in this scope.")
        self._ret_values[-1] = value

    def get(self):
        """Get the generated result.

        Returns
        -------
        value: tvm.relay.expr.Expr
            The final result of the expression.
        """
        if len(self._bindings) != 1:
            raise RuntimeError("can only call get at the outermost scope")
        return _make_lets(self._bindings[-1], self._ret_values[-1])
tvm-main/python/tvm/relay/build_module.py
"""
Construct the necessary state for the TVM graph executor
from a Relay expression.
"""
import warnings

import numpy as np

from tvm.ir import IRModule
from tvm.target import Target

from .. import autotvm
from .. import nd as _nd
from .. import register_func
from ..contrib import graph_executor as _graph_executor
from ..contrib import utils as contrib_utils
from ..runtime import load_module
from ..runtime.executor import aot_executor as _aot_executor
from ..target import Target
from . import _build_module
from . import expr as _expr
from . import function as _function
from . import ty as _ty
from .backend import Executor, Runtime
from .backend import executor_factory as _executor_factory
from .backend import interpreter as _interpreter
from .backend.utils import mangle_module_name
from .backend.vm import VMExecutor
from .transform import InferType


def _convert_param_map(params):
    inputs = {}
    for name, param in params.items():
        if isinstance(param, np.ndarray):
            param = _nd.array(param)
        inputs[name] = _expr.const(param)
    return inputs


class BuildModule(object):
    """Build an IR module to run on TVM graph executor.

    This class is used to expose the `RelayBuildModule` APIs
    implemented in C++.
    """

    def __init__(self):
        self.mod = _build_module._BuildModule()
        self._get_graph_json = self.mod["get_graph_json"]
        self._get_module = self.mod["get_module"]
        self._build = self.mod["build"]
        self._optimize = self.mod["optimize"]
        self._set_params_func = self.mod["set_params"]
        self._get_params_func = self.mod["get_params"]
        self._get_function_metadata = self.mod["get_function_metadata"]
        self._get_executor_codegen_metadata = self.mod["get_executor_codegen_metadata"]
        self._get_devices = self.mod["get_devices"]
        self._get_irmodule = self.mod["get_irmodule"]

    def build(
        self,
        mod,
        target=None,
        target_host=None,
        executor=Executor("graph"),
        runtime=Runtime("cpp"),
        workspace_memory_pools=None,
        constant_memory_pools=None,
        params=None,
        mod_name=None,
    ):
        """
        Parameters
        ----------
        mod : :py:class:`~tvm.IRModule`
            The IRModule to build.

        target : any multi-target like object, see Target.canon_multi_target
            For homogeneous compilation, the unique build target.
            For heterogeneous compilation, a dictionary or list of possible
            build targets.

        target_host : None, or any target-like object, see Target.canon_target
            Host compilation target, if target is device.
            When TVM compiles device specific program such as CUDA,
            we also need host(CPU) side code to interact with the driver
            to setup the dimensions and parameters correctly.
            target_host is used to specify the host side codegen target.
            By default, llvm is used if it is enabled,
            otherwise a stackvm interpreter is used.

        executor : Optional[Executor]
            The executor configuration with which to build the model.
            Defaults to "graph" if no executor specified.

        runtime : Optional[Runtime]
            Runtime configuration to use when building the model.
            Defaults to "cpp" if no runtime specified.

        workspace_memory_pools : Optional[WorkspaceMemoryPools]
            The object that contains an Array of WorkspacePoolInfo objects
            that hold properties of read-write workspace pools that could be
            used by the inference.

        constant_memory_pools : Optional[ConstantMemoryPools]
            The object that contains an Array of ConstantPoolInfo objects
            that hold properties of read-only memory pools that could be
            used by the inference.

        params : dict of str to NDArray
            Input parameters to the graph that do not change
            during inference time. Used for constant folding.

        mod_name: Optional[str]
            The module name we will build.

        Returns
        -------
        graph_json : str
            The json string that can be accepted by graph executor.

        mod : tvm.Module
            The module containing necessary libraries.

        params : dict
            The parameters of the final graph.
        """
        # pylint: disable=import-outside-toplevel
        from tvm.auto_scheduler import is_auto_scheduler_enabled
        from tvm.meta_schedule import is_meta_schedule_enabled

        # pylint: enable=import-outside-toplevel
        # Setup the params.
        if params:
            self._set_params(params)

        # Build the IR module. If auto_scheduler is not enabled,
        # then use the TOPI-defined schedule.
        # Turn off AutoTVM config not found warnings if auto_scheduler is enabled.
        old_autotvm_silent = autotvm.GLOBAL_SCOPE.silent
        autotvm.GLOBAL_SCOPE.silent = (
            is_auto_scheduler_enabled() or is_meta_schedule_enabled() or old_autotvm_silent
        )

        mod_name = mangle_module_name(mod_name)

        self._build(
            mod,
            target,
            target_host,
            executor,
            runtime,
            workspace_memory_pools,
            constant_memory_pools,
            mod_name,
        )
        autotvm.GLOBAL_SCOPE.silent = old_autotvm_silent

        # Get artifacts
        mod = self.get_module()
        params = self.get_params()
        executor_config = self.get_graph_json() if executor.name == "graph" else None

        return executor_config, mod, params

    def optimize(self, mod, target=None, target_host=None, params=None):
        """
        Parameters
        ----------
        mod : :py:class:`~tvm.IRModule`
            The IR module to build.

        target : any multi-target like object, see Target.canon_multi_target
            For homogeneous compilation, the unique build target.
            For heterogeneous compilation, a dictionary or list of possible
            build targets.

        target_host : None, or any target-like object, see Target.canon_target
            Host compilation target, if target is device.

        params : dict of str to NDArray
            Input parameters to the graph that do not change
            during inference time. Used for constant folding.

        Returns
        -------
        mod : :py:class:`~tvm.IRModule`
            The optimized relay module.

        params : dict
            The parameters of the final graph.
        """
        raw_targets = Target.canon_multi_target_and_host(target, target_host)

        # Setup the params.
        if params:
            self._set_params(params)
        mod = self._optimize(mod, raw_targets)
        # Get artifacts
        params = self.get_params()

        return mod, params

    def _set_params(self, params):
        self._set_params_func(_convert_param_map(params))

    def get_graph_json(self):
        """Return the json file of the built program."""
        return self._get_graph_json()

    def get_module(self):
        """Return the built module."""
        return self._get_module()

    def get_function_metadata(self):
        """Return the compiled function metadata.
        Currently, the metadata contains workspace size required by
        each PrimFunc"""
        return self._get_function_metadata()

    def get_executor_codegen_metadata(self):
        """Return the metadata produced after executor codegen"""
        return self._get_executor_codegen_metadata()

    def get_devices(self):
        """Returns a list of devices configured in this module"""
        return self._get_devices()

    def get_params(self):
        """Return the updated weights."""
        params = self._get_params_func()
        ret = {}
        for key, value in params.items():
            ret[key] = value.data
        return ret

    def get_irmodule(self):
        """Returns the TargetIRModule's post-lowering"""
        return self._get_irmodule()


@register_func("tvm.relay.module_export_library")
def _module_export(module, file_name):  # fcompile, addons, kwargs?
    return module.export_library(file_name)


@register_func("tvm.relay.build")
def _build_module_no_factory_impl(mod, target, target_host, params, mod_name):
    return build(
        mod, target=target, target_host=target_host, params=params, mod_name=mod_name
    ).module


def _build_module_no_factory(mod, target=None, target_host=None, params=None, mod_name="default"):
    """A wrapper around build which discards the Python GraphFactoryRuntime.
    This wrapper is suitable to be used from other programming languages as
    the runtime::Module can be freely passed between language boundaries.
    """
    return _build_module_no_factory_impl(mod, target, target_host, params, mod_name)


def build(
    ir_mod,
    target=None,
    target_host=None,
    executor=Executor("graph"),
    runtime=Runtime("cpp"),
    workspace_memory_pools=None,
    constant_memory_pools=None,
    params=None,
    mod_name="default",
):
    # fmt: off
    # pylint: disable=line-too-long
    """Helper function that builds a Relay function to run on TVM graph executor.

    Parameters
    ----------
    ir_mod : :py:class:`~tvm.IRModule`
        The IR module to build. Using relay.Function is deprecated.

    target : None, or any multi-target like object, see Target.canon_multi_target
        For homogeneous compilation, the unique build target.
        For heterogeneous compilation, a dictionary or list of possible
        build targets. Defaults to the current target in the environment if None.

    target_host : None, or any target like object, see Target.canon_target
        Host compilation target, if target is device.

    executor : Optional[Executor]
        The executor configuration with which to build the model.
        Defaults to "graph" if no executor specified.

    runtime : Optional[Runtime]
        Runtime configuration to use when building the model.
        Defaults to "cpp" if no runtime specified.

    workspace_memory_pools : Optional[WorkspaceMemoryPools]
        The object that contains an Array of WorkspacePoolInfo objects
        that hold properties of read-write workspace pools that could be
        used by the inference.

    constant_memory_pools : Optional[ConstantMemoryPools]
        The object that contains an Array of ConstantPoolInfo objects
        that hold properties of read-only pools that could be
        used by the inference.

    params : dict of str to NDArray
        Input parameters to the graph that do not change
        during inference time. Used for constant folding.

    mod_name: Optional[str]
        The module name we will build.

    Returns
    -------
    factory_module : tvm.relay.backend.executor_factory.ExecutorFactoryModule
        The runtime factory for the TVM graph executor.
    """
    # pylint: enable=line-too-long
    # fmt: on

    if not isinstance(ir_mod, (IRModule, _function.Function)):
        raise ValueError("Type of input parameter mod must be tvm.IRModule")

    if isinstance(ir_mod, _function.Function):
        if params:
            ir_mod = bind_params_by_name(ir_mod, params)
        ir_mod = IRModule.from_expr(ir_mod)
        warnings.warn(
            "Please use input parameter mod (tvm.IRModule) "
            "instead of deprecated parameter mod (tvm.relay.function.Function)",
            DeprecationWarning,
        )

    raw_targets = Target.canon_multi_target_and_host(Target.target_or_current(target), target_host)
    assert len(raw_targets) > 0
    target_host = raw_targets[0].host

    # If current dispatch context is fallback context (the default root context),
    # then load pre-tuned parameters from TopHub
    if isinstance(autotvm.DispatchContext.current, autotvm.FallbackContext):
        tophub_context = autotvm.tophub.context(list(raw_targets))
    else:
        tophub_context = autotvm.utils.EmptyContext()

    with tophub_context:
        bld_mod = BuildModule()
        graph_json, runtime_mod, params = bld_mod.build(
            mod=ir_mod,
            target=raw_targets,
            params=params,
            executor=executor,
            runtime=runtime,
            workspace_memory_pools=workspace_memory_pools,
            constant_memory_pools=constant_memory_pools,
            mod_name=mod_name,
        )
        func_metadata = bld_mod.get_function_metadata()
        devices = bld_mod.get_devices()
        lowered_ir_mods = bld_mod.get_irmodule()
        executor_codegen_metadata = bld_mod.get_executor_codegen_metadata()

        if executor.name == "aot":
            executor_factory = _executor_factory.AOTExecutorFactoryModule(
                ir_mod,
                lowered_ir_mods,
                raw_targets,
                executor,
                runtime,
                runtime_mod,
                mod_name,
                params,
                func_metadata,
                executor_codegen_metadata,
                devices,
            )
        elif executor.name == "graph":
            executor_factory = _executor_factory.GraphExecutorFactoryModule(
                ir_mod,
                raw_targets,
                executor,
                graph_json,
                runtime_mod,
                mod_name,
                params,
                func_metadata,
            )
        else:
            assert False, f"Executor {executor} not supported"

    return executor_factory


def optimize(mod, target=None, params=None):
    """Helper function that optimizes a Relay module.

    Parameters
    ----------
    mod : :py:class:`~tvm.IRModule`
        The module to build. Using relay.Function is deprecated.

    target : None, or any multi-target like object, see Target.canon_multi_target
        For homogeneous compilation, the unique build target.
        For heterogeneous compilation, a dictionary or list of possible
        build targets. Defaults to the current target in the environment if None.

    params : dict of str to NDArray
        Input parameters to the graph that do not change
        during inference time. Used for constant folding.

    Returns
    -------
    mod : :py:class:`~tvm.IRModule`
        The optimized relay module.

    params : dict
        The parameters of the final graph.
    """
    if not isinstance(mod, (IRModule, _function.Function)):
        raise ValueError("Type of input parameter mod must be tvm.IRModule")

    if isinstance(mod, _function.Function):
        if params:
            mod = bind_params_by_name(mod, params)
        mod = IRModule.from_expr(mod)
        warnings.warn(
            "Please use input parameter mod (tvm.IRModule) "
            "instead of deprecated parameter func (tvm.relay.function.Function)",
            DeprecationWarning,
        )

    raw_targets = Target.canon_multi_target_and_host(Target.target_or_current(target))

    # If current dispatch context is fallback context (the default root context),
    # then load pre-tuned parameters from TopHub
    if isinstance(autotvm.DispatchContext.current, autotvm.FallbackContext):
        tophub_context = autotvm.tophub.context(raw_targets)
    else:
        tophub_context = autotvm.utils.EmptyContext()

    with tophub_context:
        bld_mod = BuildModule()
        mod, params = bld_mod.optimize(mod, target=raw_targets, params=params)
    return mod, params


def bind_params_by_name(func, params):
    """Bind params to function by name.
    This could be useful when assembling custom Relay optimization
    passes that involve constant folding.

    Parameters
    ----------
    func : relay.Function
        The function to bind parameters to.

    params : dict of str to NDArray
        Input parameters to the graph that do not change
        during inference time. Used for constant folding.

    Returns
    -------
    func : relay.Function
        The function with parameters bound.
    """
    inputs = _convert_param_map(params)
    return _build_module.BindParamsByName(func, inputs)


class GraphExecutor(_interpreter.Executor):
    """Wrapper around Executor interface.

    This executor is used for debug and testing purposes.

    Parameters
    ----------
    mod : :py:class:`~tvm.IRModule`
        The module to support the execution.

    device : :py:class:`Device`
        The runtime device to run the code on.

    target : any multi-target like object, see Target.canon_multi_target
        For homogeneous compilation, the unique build target.
        For heterogeneous compilation, a dictionary or list of possible
        build targets.
""" def __init__(self, mod, device, target): assert mod is not None self.mod = mod self.device = device self.target = target def _make_executor(self, expr=None): if expr: self.mod["main"] = expr self.mod = InferType()(self.mod) ret_type = self.mod["main"].checked_type.ret_type if _ty.is_dynamic(ret_type): raise ValueError( "Graph Executor only supports static graphs, got output type", ret_type ) mod = build(self.mod, target=self.target) gmodule = _graph_executor.GraphModule(mod["default"](self.device)) def _unflatten(flat_iter, cur_type): if isinstance(cur_type, _ty.TensorType): return next(flat_iter) if isinstance(cur_type, _ty.TupleType): fields = [] for field_type in cur_type.fields: field = _unflatten(flat_iter, field_type) fields.append(field) return fields raise ValueError("Return type", ret_type, "contains unsupported type", cur_type) def _graph_wrapper(*args, **kwargs): args = self._convert_args(self.mod["main"], args, kwargs) # Create map of inputs. for i, arg in enumerate(args): gmodule.set_input(i, arg) # Run the module, and fetch the output. gmodule.run() flattened = [] for i in range(gmodule.get_num_outputs()): flattened.append(gmodule.get_output(i).copyto(_nd.cpu(0))) unflattened = _unflatten(iter(flattened), ret_type) return unflattened return _graph_wrapper class AotExecutor(_interpreter.Executor): """Implements the Executor interface for AOT. Parameters ---------- mod : :py:class:`~tvm.IRModule` The module to support the execution. device : :py:class:`Device` The runtime device to run the code on. target : any multi-target like object, see Target.canon_multi_target For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets. 
""" def __init__(self, mod, device, target): assert mod is not None self.mod = mod self.device = device self.target = target def _make_executor(self, expr=None): if expr: self.mod["main"] = expr self.mod = InferType()(self.mod) ret_type = self.mod["main"].checked_type.ret_type if _ty.is_dynamic(ret_type): raise ValueError("AOT Executor only supports static graphs, got output type", ret_type) mod = build(self.mod, target=self.target, executor=Executor("aot")) # NOTE: Given AOT requires use of the "c" backend, must export/import to compile the # generated code. temp_so_dir = contrib_utils.TempDirectory() temp_so = temp_so_dir / "temp.so" mod.export_library(temp_so, cc="gcc", options=["-std=c11"]) mod = load_module(temp_so) aot_mod = mod["default"](self.device) gmodule = _aot_executor.AotModule(aot_mod) def _unflatten(flat_iter, cur_type): if isinstance(cur_type, _ty.TensorType): return next(flat_iter) if isinstance(cur_type, _ty.TupleType): fields = [] for field_type in cur_type.fields: field = _unflatten(flat_iter, field_type) fields.append(field) return fields raise ValueError("Return type", ret_type, "contains unsupported type", cur_type) def _aot_wrapper(*args, **kwargs): args = self._convert_args(self.mod["main"], args, kwargs) # Create map of inputs. for i, arg in enumerate(args): gmodule.set_input(i, arg) # Run the module, and fetch the output. gmodule.run() flattened = [] for i in range(gmodule.get_num_outputs()): flattened.append(gmodule.get_output(i).copyto(_nd.cpu(0))) unflattened = _unflatten(iter(flattened), ret_type) return unflattened return _aot_wrapper # TODO(mbs): Collapse the create_executor/evaluate phases together since a) most callers don't # reuse the executor for multiple expressions and b) any preparation necessary for the expression # evaluation needs to (currently) be done along with preparation for the module. def create_executor(kind="debug", mod=None, device=None, target="llvm", params=None): """Factory function to create an executor. 
    Example
    -------

    .. code-block:: python

        import tvm.relay
        import numpy as np

        x = tvm.relay.var("x", tvm.relay.TensorType([1], dtype="float32"))
        expr = tvm.relay.add(x, tvm.relay.Constant(tvm.nd.array(np.array([1], dtype="float32"))))
        tvm.relay.create_executor(
            kind="vm", mod=tvm.IRModule.from_expr(tvm.relay.Function([x], expr))
        ).evaluate()(np.array([2], dtype="float32"))
        # returns `array([3.], dtype=float32)`

    Parameters
    ----------
    kind : str
        The type of executor. Available options are `debug` for the
        interpreter, `graph` for the graph executor, `aot` for the aot
        executor, and `vm` for the virtual machine.

    mod : :py:class:`~tvm.IRModule`
        The Relay module containing collection of functions.

    device : :py:class:`Device`
        The device to execute the code.

    target : any multi-target like object, see Target.canon_multi_target
        For homogeneous compilation, the unique build target.
        For heterogeneous compilation, a dictionary or list of possible
        build targets.
        CAUTION: Though this API allows multiple targets, it does not allow
        multiple devices, so heterogeneous compilation is not yet supported.

    params : dict of str to NDArray
        Input parameters to the graph that do not change
        during inference time.

    Returns
    -------
    executor : :py:class:`~tvm.relay.backend.interpreter.Executor`
    """
    raw_targets = Target.canon_multi_target(target)
    if mod is None:
        mod = IRModule()
    if device is not None:
        assert device.device_type == raw_targets[0].get_target_device_type()
    else:
        # Derive the default device from the first target.
        device = _nd.device(raw_targets[0].get_target_device_type(), 0)

    if params is not None:
        mod = IRModule.from_expr(bind_params_by_name(mod["main"], params))

    assert "executor" not in raw_targets[0].attrs or raw_targets[0].attrs["executor"] == kind

    if kind == "debug":
        assert len(raw_targets) == 1, "The interpreter currently only supports a single target"
        return _interpreter.Interpreter(mod, device, raw_targets[0])
    if kind == "graph":
        return GraphExecutor(mod, device, raw_targets)
    if kind == "vm":
        return VMExecutor(mod, device, raw_targets)
    if kind == "aot":
        return AotExecutor(mod, device, raw_targets)
    raise RuntimeError(f"unknown execution strategy: {kind}")
tvm-main/python/tvm/relay/loops.py
# pylint: disable=invalid-name
"""
Utilities for building Relay loops.
"""
from .scope_builder import ScopeBuilder
from . import expr as _expr
from . import function as _function


def while_loop(cond, loop_vars, loop_bodies):
    """
    Construct a while loop.

    Parameters
    ----------
    cond: Callable[Tuple[relay.Expr], relay.Expr]
        The condition of the loop.

    loop_vars: Tuple[relay.Expr]
        The variables being looped over.
        The initial values of the loop; they will be used to
        construct the loop variables.

    loop_bodies: Callable[Tuple[relay.Expr], Tuple[relay.Expr]]
        The body of the loop; should be a function which given
        loop variables produces the output result also as a tuple.

    Returns
    -------
    loop: relay.Expr
        The loop expression.
    """
    sb = ScopeBuilder()
    loop = _expr.Var("while_loop")
    fresh_vars = []

    for i, loop_var in enumerate(loop_vars):
        name = loop_var.name_hint if isinstance(loop_var, _expr.Var) else f"arg{i}"
        new_var = _expr.var(name, type_annotation=sb.type_of(loop_var), span=loop_var.span)
        fresh_vars.append(new_var)

    with sb.if_scope(cond(*fresh_vars)):
        sb.ret(loop(*loop_bodies(*fresh_vars)))
    with sb.else_scope():
        sb.ret(_expr.Tuple(fresh_vars))

    func = _function.Function(fresh_vars, sb.get())
    let = _expr.Let(loop, func, loop)
    return let
tvm-main/python/tvm/relay/type_functor.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""The type functor of Relay."""
from .ty import (
    TypeVar,
    IncompleteType,
    TensorType,
    FuncType,
    TupleType,
    TypeRelation,
    RefType,
    GlobalTypeVar,
    TypeCall,
)
from .adt import TypeData


class TypeFunctor:
    """
    An abstract visitor defined over Type.

    Defines the default dispatch over types.
    """

    def __init__(self):
        # TODO(weberlo): make type vars hashable, so we can memoize
        pass

    # pylint: disable=no-else-return
    def visit(self, typ):
        """Apply the visitor to a type."""
        if isinstance(typ, TypeVar):
            return self.visit_type_var(typ)
        elif isinstance(typ, IncompleteType):
            return self.visit_incomplete_type(typ)
        elif isinstance(typ, TensorType):
            return self.visit_tensor_type(typ)
        elif isinstance(typ, FuncType):
            return self.visit_func_type(typ)
        elif isinstance(typ, TupleType):
            return self.visit_tuple_type(typ)
        elif isinstance(typ, TypeRelation):
            return self.visit_type_relation(typ)
        elif isinstance(typ, RefType):
            return self.visit_ref_type(typ)
        elif isinstance(typ, GlobalTypeVar):
            return self.visit_global_type_var(typ)
        elif isinstance(typ, TypeCall):
            return self.visit_type_call(typ)
        elif isinstance(typ, TypeData):
            return self.visit_type_data(typ)
        else:
            raise Exception(f"unhandled case: {type(typ)}")

    def visit_type_var(self, _):
        raise NotImplementedError()

    def visit_incomplete_type(self, _):
        raise NotImplementedError()

    def visit_tensor_type(self, _):
        raise NotImplementedError()

    def visit_func_type(self, _):
        raise NotImplementedError()

    def visit_tuple_type(self, _):
        raise NotImplementedError()

    def visit_type_relation(self, _):
        raise NotImplementedError()

    def visit_ref_type(self, _):
        raise NotImplementedError()

    def visit_global_type_var(self, _):
        raise NotImplementedError()

    def visit_type_call(self, _):
        raise NotImplementedError()

    def visit_type_data(self, _):
        raise NotImplementedError()


class TypeVisitor(TypeFunctor):
    """
    A visitor over Type.

    The default behavior recursively traverses the AST.
    """

    def visit_type_var(self, tv):
        pass

    def visit_incomplete_type(self, it):
        pass

    def visit_tensor_type(self, tt):
        pass

    def visit_func_type(self, ft):
        for arg_type in ft.arg_types:
            self.visit(arg_type)
        self.visit(ft.ret_type)
        for type_param in getattr(ft, "type_params", []):
            self.visit(type_param)
        for type_constraint in getattr(ft, "type_constraints", []):
            self.visit(type_constraint)

    def visit_tuple_type(self, tt):
        for field in tt.fields:
            self.visit(field)

    def visit_type_relation(self, tr):
        for arg in tr.args:
            self.visit(arg)

    def visit_ref_type(self, rt):
        self.visit(rt.value)

    def visit_global_type_var(self, gtv):
        pass

    def visit_type_call(self, tc):
        self.visit(tc.func)
        for arg in tc.args:
            self.visit(arg)

    def visit_type_data(self, td):
        self.visit(td.header)
        for type_var in td.type_vars:
            self.visit(type_var)


class TypeMutator(TypeFunctor):
    """
    A functional visitor over Type.

    The default behavior recursively traverses the AST
    and reconstructs the AST.
    """

    def visit_type_var(self, tv):
        return TypeVar(tv.name_hint, tv.kind)

    def visit_incomplete_type(self, it):
        return IncompleteType(it.kind)

    def visit_tensor_type(self, tt):
        return TensorType(tt.shape, tt.dtype)

    def visit_func_type(self, ft):
        new_arg_types = [self.visit(arg_type) for arg_type in ft.arg_types]
        new_ret_type = self.visit(ft.ret_type)
        new_type_params = [self.visit(type_param) for type_param in getattr(ft, "type_params", [])]
        new_type_constraints = [
            self.visit(type_constraint) for type_constraint in getattr(ft, "type_constraints", [])
        ]
        return FuncType(new_arg_types, new_ret_type, new_type_params, new_type_constraints)

    def visit_tuple_type(self, tt):
        return TupleType([self.visit(field) for field in tt.fields])

    def visit_type_relation(self, tr):
        return TypeRelation(tr.func, [self.visit(arg) for arg in tr.args], tr.num_inputs, tr.attrs)

    def visit_ref_type(self, rt):
        return RefType(self.visit(rt.value))

    def visit_global_type_var(self, gtv):
        return GlobalTypeVar(gtv.name_hint, gtv.kind)

    def visit_type_call(self, tc):
        return TypeCall(self.visit(tc.func), [self.visit(arg) for arg in tc.args])

    def visit_type_data(self, td):
        return TypeData(
            self.visit(td.header),
            [self.visit(type_var) for type_var in td.type_vars],
            td.constructors,
        )
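The single-dispatch scheme above — a `visit` that branches on node class and calls an overridable `visit_*` method — can be exercised without TVM. The sketch below is a minimal, self-contained imitation of `TypeMutator`: the `Toy*` classes and `Float32Caster` are hypothetical stand-ins invented for illustration, not TVM APIs; only the dispatch/rebuild pattern mirrors the code above.

```python
# Toy stand-ins for Relay type nodes (hypothetical, not tvm.relay classes).
class ToyTensorType:
    def __init__(self, shape, dtype):
        self.shape, self.dtype = shape, dtype


class ToyTupleType:
    def __init__(self, fields):
        self.fields = fields


class ToyTypeMutator:
    """Rebuilds a type tree, mirroring TypeMutator's dispatch-and-reconstruct scheme."""

    def visit(self, typ):
        if isinstance(typ, ToyTensorType):
            return self.visit_tensor_type(typ)
        elif isinstance(typ, ToyTupleType):
            return self.visit_tuple_type(typ)
        raise Exception(f"unhandled case: {type(typ)}")

    def visit_tensor_type(self, tt):
        return ToyTensorType(tt.shape, tt.dtype)

    def visit_tuple_type(self, tt):
        return ToyTupleType([self.visit(f) for f in tt.fields])


class Float32Caster(ToyTypeMutator):
    """Override exactly one case: rewrite every tensor dtype to float32."""

    def visit_tensor_type(self, tt):
        return ToyTensorType(tt.shape, "float32")


tup = ToyTupleType([ToyTensorType((2, 2), "int8"), ToyTensorType((4,), "float16")])
new_tup = Float32Caster().visit(tup)
print([f.dtype for f in new_tup.fields])  # -> ['float32', 'float32']
```

Subclasses only override the cases they care about; every other node is rebuilt unchanged by the base class, which is what makes these functors convenient for targeted IR rewrites.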
5,878
28.84264
99
py
tvm
tvm-main/python/tvm/relay/param_dict.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name
"""Helper utility to save parameter dicts."""
import tvm.runtime


def save_param_dict(params):
    """Save parameter dictionary to binary bytes.

    The result binary bytes can be loaded by the
    GraphModule with API "load_params".

    .. deprecated:: 0.9.0
        Use :py:func:`tvm.runtime.save_param_dict` instead.

    Parameters
    ----------
    params : dict of str to NDArray
        The parameter dictionary.

    Returns
    -------
    param_bytes: bytearray
        Serialized parameters.

    Examples
    --------
    .. code-block:: python

       # set up the parameter dict
       params = {"param0": arr0, "param1": arr1}
       # save the parameters as byte array
       param_bytes = tvm.runtime.save_param_dict(params)
       # We can serialize the param_bytes and load it back later.
       # Pass in byte array to module to directly set parameters
       tvm.runtime.load_param_dict(param_bytes)
    """
    return tvm.runtime.save_param_dict(params)


def load_param_dict(param_bytes):
    """Load parameter dictionary from binary bytes.

    .. deprecated:: 0.9.0
        Use :py:func:`tvm.runtime.load_param_dict` instead.

    Parameters
    ----------
    param_bytes: bytearray
        Serialized parameters.

    Returns
    -------
    params : dict of str to NDArray
        The parameter dictionary.
    """
    return tvm.runtime.load_param_dict(param_bytes)
2,213
29.328767
65
py
tvm
tvm-main/python/tvm/relay/_ffi_api_parser.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""FFI APIs for Relay parser."""
import tvm._ffi

tvm._ffi._init_api("relay.parser", __name__)
880
40.952381
62
py
tvm
tvm-main/python/tvm/relay/_ffi_api.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""FFI APIs for Relay program IR."""
import tvm._ffi

tvm._ffi._init_api("relay.ir", __name__)
880
40.952381
62
py
tvm
tvm-main/python/tvm/relay/expr_functor.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name
"""The expression functor of Relay."""
from tvm.ir import Op

from .function import Function, FunctionWithFields
from .expr import Call, Let, Var, GlobalVar
from .expr import If, Tuple, TupleGetItem, Constant
from .expr import RefCreate, RefRead, RefWrite
from .adt import Constructor, Match, Clause


class ExprFunctor:
    """
    An abstract visitor defined over Expr.

    Defines the default dispatch over expressions, and
    implements memoization.
    """

    def __init__(self):
        self.memo_map = {}

    # pylint: disable=no-else-return
    def visit(self, expr):
        """Apply the visitor to an expression."""
        if expr in self.memo_map:
            return self.memo_map[expr]

        if isinstance(expr, Function):
            res = self.visit_function(expr)
        elif isinstance(expr, Call):
            res = self.visit_call(expr)
        elif isinstance(expr, Let):
            res = self.visit_let(expr)
        elif isinstance(expr, Var):
            res = self.visit_var(expr)
        elif isinstance(expr, GlobalVar):
            res = self.visit_global_var(expr)
        elif isinstance(expr, If):
            res = self.visit_if(expr)
        elif isinstance(expr, Tuple):
            res = self.visit_tuple(expr)
        elif isinstance(expr, TupleGetItem):
            res = self.visit_tuple_getitem(expr)
        elif isinstance(expr, Constant):
            res = self.visit_constant(expr)
        elif isinstance(expr, Op):
            res = self.visit_op(expr)
        elif isinstance(expr, RefCreate):
            res = self.visit_ref_create(expr)
        elif isinstance(expr, RefRead):
            res = self.visit_ref_read(expr)
        elif isinstance(expr, RefWrite):
            res = self.visit_ref_write(expr)
        elif isinstance(expr, Constructor):
            res = self.visit_constructor(expr)
        elif isinstance(expr, Match):
            res = self.visit_match(expr)
        else:
            raise Exception(f"unhandled case: {type(expr)}")

        self.memo_map[expr] = res

        return res

    def visit_function(self, _):
        raise NotImplementedError()

    def visit_let(self, _):
        raise NotImplementedError()

    def visit_call(self, _):
        raise NotImplementedError()

    def visit_var(self, _):
        raise NotImplementedError()

    def visit_type(self, typ):
        return typ

    def visit_if(self, _):
        raise NotImplementedError()

    def visit_tuple(self, _):
        raise NotImplementedError()

    def visit_tuple_getitem(self, _):
        raise NotImplementedError()

    def visit_global_var(self, _):
        raise NotImplementedError()

    def visit_op(self, _):
        raise NotImplementedError()

    def visit_constant(self, _):
        raise NotImplementedError()

    def visit_ref_create(self, _):
        raise NotImplementedError()

    def visit_ref_write(self, _):
        raise NotImplementedError()

    def visit_ref_read(self, _):
        raise NotImplementedError()

    def visit_constructor(self, _):
        raise NotImplementedError()

    def visit_match(self, _):
        raise NotImplementedError()


class ExprVisitor(ExprFunctor):
    """
    A visitor over Expr.

    The default behavior recursively traverses the AST.
    """

    def visit_tuple(self, tup):
        for x in tup.fields:
            self.visit(x)

    def visit_call(self, call):
        self.visit(call.op)
        for a in call.args:
            self.visit(a)

    def visit_var(self, var):
        pass

    def visit_let(self, let):
        self.visit(let.var)
        self.visit(let.value)
        self.visit(let.body)

    def visit_function(self, fn):
        for x in fn.params:
            self.visit(x)
        self.visit(fn.body)

    def visit_if(self, i):
        self.visit(i.cond)
        self.visit(i.true_branch)
        self.visit(i.false_branch)

    def visit_global_var(self, gv):
        pass

    def visit_constructor(self, c):
        pass

    def visit_op(self, op):
        pass

    def visit_constant(self, const):
        pass

    def visit_ref_create(self, r):
        self.visit(r.value)

    def visit_ref_read(self, r):
        self.visit(r.ref)

    def visit_ref_write(self, r):
        self.visit(r.ref)
        self.visit(r.value)

    def visit_tuple_getitem(self, t):
        self.visit(t.tuple_value)

    def visit_match(self, m):
        self.visit(m.data)
        for c in m.clauses:
            self.visit(c.rhs)


class ExprMutator(ExprFunctor):
    """
    A functional visitor over Expr.

    The default behavior recursively traverses the AST
    and reconstructs the AST.
    """

    def visit_function(self, fn):
        new_params = [self.visit(x) for x in fn.params]
        new_body = self.visit(fn.body)
        if new_params == list(fn.params) and new_body == fn.body:
            return fn
        return FunctionWithFields(fn, list(new_params), new_body)

    def visit_let(self, let):
        new_var = self.visit(let.var)
        new_val = self.visit(let.value)
        new_body = self.visit(let.body)
        if new_var == let.var and new_val == let.value and new_body == let.body:
            return let
        return Let(new_var, new_val, new_body)

    def visit_call(self, call):
        new_fn = self.visit(call.op)
        new_args = [self.visit(arg) for arg in call.args]
        if new_fn == call.op and new_args == list(call.args):
            return call
        return Call(new_fn, new_args, call.attrs, call.type_args, call.span)

    def visit_var(self, var):
        return var

    def visit_global_id(self, global_var):
        return global_var

    def visit_if(self, ite):
        new_cond = self.visit(ite.cond)
        new_true_branch = self.visit(ite.true_branch)
        new_false_branch = self.visit(ite.false_branch)
        if (
            new_cond == ite.cond
            and new_true_branch == ite.true_branch
            and new_false_branch == ite.false_branch
        ):
            return ite
        return If(new_cond, new_true_branch, new_false_branch)

    def visit_tuple(self, tup):
        new_fields = [self.visit(field) for field in tup.fields]
        if new_fields == list(tup.fields):
            return tup
        return Tuple(new_fields, tup.span)

    def visit_tuple_getitem(self, op):
        new_tuple_value = self.visit(op.tuple_value)
        if new_tuple_value == op.tuple_value:
            return op
        return TupleGetItem(new_tuple_value, op.index)

    def visit_global_var(self, gvar):
        return gvar

    def visit_op(self, op):
        return op

    def visit_constant(self, const):
        return const

    def visit_constructor(self, con):
        return con

    def visit_match(self, m):
        new_data = self.visit(m.data)
        new_clauses = [Clause(c.lhs, self.visit(c.rhs)) for c in m.clauses]
        if new_data == m.data and all(x.rhs == y.rhs for x, y in zip(new_clauses, m.clauses)):
            return m
        return Match(new_data, new_clauses, complete=m.complete)

    def visit_ref_create(self, r):
        new_value = self.visit(r.value)
        if new_value == r.value:
            return r
        return RefCreate(new_value)

    def visit_ref_write(self, r):
        new_ref = self.visit(r.ref)
        new_value = self.visit(r.value)
        if new_ref == r.ref and new_value == r.value:
            return r
        return RefWrite(new_ref, new_value)

    def visit_ref_read(self, r):
        new_ref = self.visit(r.ref)
        if new_ref == r.ref:
            return r
        return RefRead(new_ref)
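Unlike `TypeFunctor`, `ExprFunctor` memoizes: each expression object is visited once and its result cached in `memo_map`, so shared subexpressions in a DAG-shaped AST are not reprocessed. The following self-contained sketch imitates just that caching behavior with toy node classes (`Leaf`/`Pair` are invented for illustration, not TVM types); dictionary keys fall back to Python's default identity-based hashing, which is what makes "same object" the cache key.

```python
class Leaf:
    """Toy leaf node (stand-in for a Var/Constant)."""


class Pair:
    """Toy interior node with two children (stand-in for a Call/Tuple)."""

    def __init__(self, a, b):
        self.a, self.b = a, b


class CountingVisitor:
    """Mirrors ExprFunctor's memoized visit; counts how many nodes it actually processes."""

    def __init__(self):
        self.memo_map = {}  # same dict-based cache as ExprFunctor
        self.visits = 0

    def visit(self, expr):
        if expr in self.memo_map:
            return self.memo_map[expr]
        self.visits += 1
        if isinstance(expr, Pair):
            res = Pair(self.visit(expr.a), self.visit(expr.b))
        else:
            res = expr
        self.memo_map[expr] = res
        return res


shared = Leaf()
tree = Pair(shared, Pair(shared, shared))  # `shared` appears three times
v = CountingVisitor()
v.visit(tree)
print(v.visits)  # -> 3: the root, the inner Pair, and `shared` exactly once
```

Note the caveat in `ExprFunctor.__init__`'s sibling `TypeFunctor`: the type-side visitor cannot memoize until type vars are hashable, which is why only the expression functor carries a `memo_map`.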
8,457
27.866894
94
py
tvm
tvm-main/python/tvm/relay/adt.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, unused-import
"""Algebraic data types in Relay."""
from tvm.ir import Constructor, TypeData
from tvm.runtime import Object
import tvm._ffi

from .base import RelayNode
from . import _ffi_api
from .ty import Type
from .expr import ExprWithOp, RelayExpr, Call


class Pattern(RelayNode):
    """Base type for pattern matching constructs."""


@tvm._ffi.register_object("relay.PatternWildcard")
class PatternWildcard(Pattern):
    """Wildcard pattern in Relay: Matches any ADT and binds nothing."""

    def __init__(self):
        """Constructs a wildcard pattern.

        Parameters
        ----------
        None

        Returns
        -------
        wildcard: PatternWildcard
            a wildcard pattern.
        """
        self.__init_handle_by_constructor__(_ffi_api.PatternWildcard)


@tvm._ffi.register_object("relay.PatternVar")
class PatternVar(Pattern):
    """Variable pattern in Relay: Matches anything and binds it to the variable."""

    def __init__(self, var):
        """Construct a variable pattern.

        Parameters
        ----------
        var: tvm.relay.Var

        Returns
        -------
        pv: PatternVar
            A variable pattern.
        """
        self.__init_handle_by_constructor__(_ffi_api.PatternVar, var)


@tvm._ffi.register_object("relay.PatternConstructor")
class PatternConstructor(Pattern):
    """Constructor pattern in Relay: Matches an ADT of the given constructor, binds recursively."""

    def __init__(self, constructor, patterns=None):
        """Construct a constructor pattern.

        Parameters
        ----------
        constructor: Constructor
            The constructor.
        patterns: Optional[List[Pattern]]
            Optional subpatterns: for each field of the constructor,
            match to the given subpattern (treated as a variable pattern by default).

        Returns
        -------
        pattern: PatternConstructor
            A constructor pattern.
        """
        if patterns is None:
            patterns = []
        self.__init_handle_by_constructor__(_ffi_api.PatternConstructor, constructor, patterns)


@tvm._ffi.register_object("relay.PatternTuple")
class PatternTuple(Pattern):
    """Tuple pattern in Relay: Matches a tuple, binds recursively."""

    def __init__(self, patterns=None):
        """Construct a tuple pattern.

        Parameters
        ----------
        patterns: Optional[List[Pattern]]
            Optional subpatterns: for each field of the tuple,
            match to the given subpattern (treated as a variable pattern by default).

        Returns
        -------
        pattern: PatternTuple
            A tuple pattern.
        """
        if patterns is None:
            patterns = []
        self.__init_handle_by_constructor__(_ffi_api.PatternTuple, patterns)


@tvm._ffi.register_object("relay.Clause")
class Clause(Object):
    """Clause for pattern matching in Relay."""

    def __init__(self, lhs, rhs):
        """Construct a clause.

        Parameters
        ----------
        lhs: tvm.relay.Pattern
            Left-hand side of match clause.
        rhs: tvm.relay.Expr
            Right-hand side of match clause.

        Returns
        -------
        clause: Clause
            The Clause.
        """
        self.__init_handle_by_constructor__(_ffi_api.Clause, lhs, rhs)


@tvm._ffi.register_object("relay.Match")
class Match(ExprWithOp):
    """Pattern matching expression in Relay."""

    def __init__(self, data, clauses, complete=True):
        """Construct a Match.

        Parameters
        ----------
        data: tvm.relay.Expr
            The value being deconstructed and matched.
        clauses: List[tvm.relay.Clause]
            The pattern match clauses.
        complete: Optional[Bool]
            Should the match be complete (cover all cases)?
            If yes, the type checker will generate an error if there are any missing cases.

        Returns
        -------
        match: tvm.relay.Expr
            The match expression.
        """
        self.__init_handle_by_constructor__(_ffi_api.Match, data, clauses, complete)
4,992
29.078313
99
py
tvm
tvm-main/python/tvm/relay/function.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=no-else-return, invalid-name, unused-import
"""The expression nodes of Relay."""
from __future__ import absolute_import

import tvm._ffi
from tvm.ir import BaseFunc
from tvm.runtime import convert

from . import _ffi_api
from .base import astext, pretty_print
from .expr import Call


@tvm._ffi.register_object("relay.Function")
class Function(BaseFunc):
    """A function declaration expression.

    Parameters
    ----------
    params: List[tvm.relay.Var]
        List of input parameters to the function.

    body: tvm.relay.Expr
        The body of the function.

    ret_type: Optional[tvm.relay.Type]
        The return type annotation of the function.

    type_params: Optional[List[tvm.relay.TypeParam]]
        The additional type parameters, this is only
        used in advanced usecase of template functions.

    span: Optional[tvm.relay.Span]
        Span that points to original source code.
    """

    def __init__(self, params, body, ret_type=None, type_params=None, attrs=None, span=None):
        if type_params is None:
            type_params = convert([])

        self.__init_handle_by_constructor__(
            _ffi_api.Function, params, body, ret_type, type_params, attrs, span
        )

    def __call__(self, *args):
        """Invoke the global function.

        Parameters
        ----------
        args: List[relay.Expr]
            Arguments.
        """
        return Call(self, args, None, None)

    def __str__(self):
        return pretty_print(self)

    def astext(self, show_meta_data=True, annotate=None):
        """Get the text format of the expression.

        Parameters
        ----------
        show_meta_data : bool
            Whether to include meta data section in the text
            if there is meta data.

        annotate: Optional[Object->str]
            Optionally annotate function to provide additional
            information in the comment block.

        Returns
        -------
        text : str
            The text format of the expression.

        Notes
        -----
        The meta data section is necessary to fully parse the text format.
        However, it can contain dumps that are big (e.g constant weights),
        so it can be helpful to skip printing the meta data section.
        """
        return astext(self, show_meta_data, annotate)


def FunctionWithFields(
    function,
    params=None,
    body=None,
    ret_type=None,
    ty_params=None,
    attrs=None,
    virtual_device=None,
    span=None,
):
    """
    Returns function with the given properties. A None property denotes 'no change'.
    Returns function if all properties are unchanged. Otherwise, returns a copy with the new
    fields.
    """
    return _ffi_api.FunctionWithFields(
        function, params, body, ret_type, ty_params, attrs, virtual_device, span
    )
3,641
29.605042
93
py
tvm
tvm-main/python/tvm/relay/expr.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=no-else-return, invalid-name, unused-import """The expression nodes of Relay.""" from __future__ import absolute_import from numbers import Number as _Number import numpy as _np import tvm._ffi from tvm._ffi import base as _base from tvm.ir import GlobalVar, Node, RelayExpr from tvm.runtime import NDArray from tvm.runtime import ndarray as _nd from . import _ffi_api from . import ty as _ty from .base import RelayNode, astext, pretty_print # alias relay expr as Expr. Expr = RelayExpr # will be registered afterwards _op_make = None class ExprWithOp(RelayExpr): """Basetype of all relay expressions that defines op overloading.""" def astype(self, dtype): """Cast the content type of the current data to dtype. Parameters ---------- dtype : str The target data type. Note ---- This function only works for TensorType Exprs. Returns ------- result : tvm.relay.Expr The result expression. """ return _ffi_api.cast(self, dtype) def __str__(self): return pretty_print(self) def astext(self, show_meta_data=True, annotate=None): """Get the text format of the expression. Parameters ---------- show_meta_data : bool Whether to include meta data section in the text if there is meta data. 
annotate: Optional[Object->str] Optionally annotate function to provide additional information in the comment block. Returns ------- text : str The text format of the expression. Notes ----- The meta data section is necessary to fully parse the text format. However, it can contain dumps that are big (e.g constant weights), so it can be helpful to skip printing the meta data section. """ return astext(self, show_meta_data, annotate) def __neg__(self): return _op_make.negative(self) def __lt__(self, other): if isinstance(other, Expr): return _op_make.less(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __gt__(self, other): if isinstance(other, Expr): return _op_make.greater(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __ge__(self, other): if isinstance(other, Expr): return _op_make.greater_equal(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __le__(self, other): if isinstance(other, Expr): return _op_make.less_equal(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __add__(self, other): if isinstance(other, Expr): return _op_make.add(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __radd__(self, other): return self.__add__(other) def __sub__(self, other): if isinstance(other, Expr): return _op_make.subtract(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not 
supported") def __rsub__(self, other): if isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') raise TypeError(f"type {type(other)} not supported") def __mul__(self, other): if isinstance(other, Expr): return _op_make.multiply(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __rmul__(self, other): return self.__mul__(other) def __div__(self, other): if isinstance(other, Expr): return _op_make.divide(self, other) elif isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') else: raise TypeError(f"type {type(other)} not supported") def __rdiv__(self, other): if isinstance(other, _Number): raise TypeError(f'convert "{str(other)}" with `const` first') raise TypeError(f"type {type(other)} not supported") def __truediv__(self, other): return self.__div__(other) def __rtruediv__(self, other): return self.__rdiv__(other) def __call__(self, *args): """Call the variable (if it represents a function). Parameters ---------- args: List[relay.Expr] The arguments to the call. Returns ------- call: Call A call taking the variable as a function. """ return Call(self, args) @tvm._ffi.register_object("relay.Constant") class Constant(ExprWithOp): """A constant expression in Relay. Parameters ---------- data : tvm.nd.NDArray The data content of the constant expression. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, data, span=None): self.__init_handle_by_constructor__(_ffi_api.Constant, data, span) @tvm._ffi.register_func("relay.ConstantWithFields") def ConstantWithFields(constant, data=None, virtual_device=None, span=None): """ Returns constant with the given properties. A None property denotes 'no change'. Returns constant if all properties are unchanged. Otherwise, returns a copy with the new fields. 
""" return _ffi_api.ConstantWithFields(constant, data, virtual_device, span) @tvm._ffi.register_object("relay.Tuple") class Tuple(ExprWithOp): """Tuple expression that groups several fields together. Parameters ---------- fields : List[tvm.relay.Expr] The fields in the tuple. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, fields, span=None): self.__init_handle_by_constructor__(_ffi_api.Tuple, fields, span) def __getitem__(self, index): if index >= len(self): raise IndexError("Tuple index out of range") return self.fields[index] def __len__(self): return len(self.fields) def astype(self, _): raise TypeError("astype cannot be used on tuple") @tvm._ffi.register_func("relay.TupleWithFields") def TupleWithFields(tup, fields=None, virtual_device=None, span=None): """ Returns tuple with the given properties. A None property denotes 'no change'. Returns tuple if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.TupleWithFields(tup, fields, virtual_device, span) @tvm._ffi.register_object("relay.Var") class Var(ExprWithOp): """A local variable in Relay. Local variable can be used to declare input arguments to a function, or intermediate variables. Parameters ---------- name_hint: str The name of the variable. This name only acts as a hint, and is not used for equality. type_annotation: tvm.relay.Type, optional The type annotation on the variable. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, name_hint, type_annotation=None, span=None): self.__init_handle_by_constructor__(_ffi_api.Var, name_hint, type_annotation, span) @property def name_hint(self): """Get name hint of the current var.""" name = str(self.vid.name_hint) return name @tvm._ffi.register_func("relay.VarWithFields") def VarWithFields(variable, vid=None, type_annotation=None, virtual_device=None, span=None): """ Returns var with the given properties. 
A None property denotes 'no change'. Returns var if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.VarWithFields(variable, vid, type_annotation, virtual_device, span) @tvm._ffi.register_object("relay.Call") class Call(ExprWithOp): """Function call node in Relay. Call node corresponds the operator application node in computational graph terminology. Parameters ---------- op: tvm.ir.Op or any tvm.relay.Expr with function type. The operation to be called. args: List[tvm.relay.Expr] The arguments to the call. attrs: Optional[tvm.Attrs] Attributes to the call, can be None type_args: Optional[List[tvm.relay.Type]] The additional type arguments, this is only used in advanced usecase of template functions. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, op, args, attrs=None, type_args=None, span=None): if not type_args: type_args = [] self.__init_handle_by_constructor__(_ffi_api.Call, op, args, attrs, type_args, span) @tvm._ffi.register_func("relay.CallWithFields") def CallWithFields( call, op=None, args=None, attrs=None, type_args=None, virtual_device=None, span=None ): """ Returns call with the given properties. A None property denotes 'no change'. Returns call if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.CallWithFields(call, op, args, attrs, type_args, virtual_device, span) @tvm._ffi.register_object("relay.Let") class Let(ExprWithOp): """Let variable binding expression. Parameters ---------- variable: tvm.relay.Var The local variable to be bound. value: tvm.relay.Expr The value to be bound. body: tvm.relay.Expr The body of the let binding. span: Optional[tvm.relay.Span] Span that points to original source code. 
""" def __init__(self, variable, value, body, span=None): self.__init_handle_by_constructor__(_ffi_api.Let, variable, value, body, span) @tvm._ffi.register_func("relay.LetWithFields") def LetWithFields(let, variable=None, value=None, body=None, virtual_device=None, span=None): """ Returns let with the given properties. A None property denotes 'no change'. Returns let if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.LetWithFields(let, variable, value, body, virtual_device, span) @tvm._ffi.register_object("relay.If") class If(ExprWithOp): """A conditional expression in Relay. Parameters ---------- cond: tvm.relay.Expr The condition. true_branch: tvm.relay.Expr The expression evaluated when condition is true. false_branch: tvm.relay.Expr The expression evaluated when condition is false. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, cond, true_branch, false_branch, span=None): self.__init_handle_by_constructor__(_ffi_api.If, cond, true_branch, false_branch, span) @tvm._ffi.register_func("relay.IfWithFields") def IfWithFields( if_expr, cond=None, true_branch=None, false_branch=None, virtual_device=None, span=None ): """ Returns if with the given properties. A None property denotes 'no change'. Returns if if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.IfWithFields(if_expr, cond, true_branch, false_branch, virtual_device, span) @tvm._ffi.register_object("relay.TupleGetItem") class TupleGetItem(ExprWithOp): """Get index-th item from a tuple. Parameters ---------- tuple_value: tvm.relay.Expr The input tuple expression. index: int The index. span: Optional[tvm.relay.Span] Span that points to original source code. 
""" def __init__(self, tuple_value, index, span=None): self.__init_handle_by_constructor__(_ffi_api.TupleGetItem, tuple_value, index, span) @tvm._ffi.register_func("relay.TupleGetItemWithFields") def TupleGetItemWithFields( tuple_get_item, tuple_value=None, index=None, virtual_device=None, span=None ): """ Returns tuple_get_item with the given properties. A None property denotes 'no change'. Returns tuple_get_item if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.TupleGetItemWithFields(tuple_get_item, tuple_value, index, virtual_device, span) @tvm._ffi.register_object("relay.RefCreate") class RefCreate(ExprWithOp): """Create a new reference from initial value. Parameters ---------- value: tvm.relay.Expr The initial value. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, value, span=None): self.__init_handle_by_constructor__(_ffi_api.RefCreate, value, span) @tvm._ffi.register_func("relay.RefCreateWithFields") def RefCreateWithFields(ref_create, value=None, virtual_device=None, span=None): """ Returns ref_create with the given properties. A None property denotes 'no change'. Returns ref_create if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.RefCreateWithFields(ref_create, value, virtual_device, span) @tvm._ffi.register_object("relay.RefRead") class RefRead(ExprWithOp): """Get the value inside the reference. Parameters ---------- ref: tvm.relay.Expr The reference. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, ref, span=None): self.__init_handle_by_constructor__(_ffi_api.RefRead, ref, span) @tvm._ffi.register_func("relay.RefReadWithFields") def RefReadWithFields(ref_read, ref=None, virtual_device=None, span=None): """ Returns ref_read with the given properties. A None property denotes 'no change'. Returns ref_read if all properties are unchanged. 
Otherwise, returns a copy with the new fields. """ return _ffi_api.RefReadWithFields(ref_read, ref, virtual_device, span) @tvm._ffi.register_object("relay.RefWrite") class RefWrite(ExprWithOp): """ Update the value inside the reference. The whole expression will evaluate to an empty tuple. Parameters ---------- ref: tvm.relay.Expr The reference. value: tvm.relay.Expr The new value. span: Optional[tvm.relay.Span] Span that points to original source code. """ def __init__(self, ref, value, span=None): self.__init_handle_by_constructor__(_ffi_api.RefWrite, ref, value, span) @tvm._ffi.register_func("relay.RefWriteWithFields") def RefWriteWithFields(ref_write, ref=None, value=None, virtual_device=None, span=None): """ Returns ref_write with the given properties. A None property denotes 'no change'. Returns ref_write if all properties are unchanged. Otherwise, returns a copy with the new fields. """ return _ffi_api.RefWriteWithFields(ref_write, ref, value, virtual_device, span) class TempExpr(ExprWithOp): """Baseclass of all TempExpr. TempExprs are pass specific expression that can be useful to define intermediate result in the rewriting pass such as layout or type transformation. """ def realize(self): """Convert the expression to a normal(non-temp) Expr. Returns ------- The corresponding normal expression. """ return _ffi_api.TempExprRealize(self) class TupleWrapper(object): """TupleWrapper. This class is a Python wrapper for a Relay tuple of known size. It allows for accessing the fields of the Relay tuple as though it were a Python tuple. Parameters ---------- tuple_value: tvm.relay.Expr The input tuple size: int The size of the tuple. """ def __init__(self, tuple_value, size): self.tuple_value = tuple_value self.size = size def astuple(self): """Returns the underlying Relay tuple if this wrapper is passed as an argument to an FFI function.""" return self.tuple_value def astext(self): """Get the text format of the tuple expression. 
Returns ------- text : str The text format of the tuple expression. """ return self.tuple_value.astext() def __getitem__(self, index): if index >= len(self): raise IndexError("Tuple index out of range") return TupleGetItem(self.tuple_value, index, span=self.tuple_value.span) def __len__(self): return self.size def __repr__(self): return "TupleWrapper(" + self.tuple_value.__repr__() + ", " + str(self.size) + ")" def astype(self, _): raise TypeError("astype cannot be used on tuple") def var(name_hint, type_annotation=None, shape=None, dtype="float32", span=None): """Create a new tvm.relay.Var. This is a simple wrapper function that allows specify shape and dtype directly. Parameters ---------- name_hint: str The name of the variable. This name only acts as a hint, and is not used for equality. type_annotation: Optional[tvm.relay.Type, str] The type annotation on the variable. When type_annotation is a str, we will create a scalar variable. shape: Optional[List[tvm.Expr]] The shape of the tensor type. dtype: str, optional The data type of the tensor. span: Optional[tvm.relay.Span] Span that points to original source code. Examples -------- .. code-block:: python # The following 4 lines are equivalent to each other x = tvm.relay.Var("x", tvm.relay.TensorType([1, 2])) x = tvm.relay.var("x", tvm.relay.TensorType([1, 2])) x = tvm.relay.var("x", shape=[1, 2]) x = tvm.relay.var("x", shape=[1, 2], dtype="float32") # The following 2 lines are equivalent to each other. y = tvm.relay.var("x", "float32") y = tvm.relay.var("x", shape=(), dtype="float32") """ if type_annotation is not None and shape is not None: raise ValueError("Can only specify either type_annotation or shape.") if shape is not None: type_annotation = _ty.TensorType(shape, dtype) elif isinstance(type_annotation, str): type_annotation = _ty.TensorType((), type_annotation) return Var(name_hint, type_annotation, span) def const(value, dtype=None, span=None): """Create a constant value. 
    Parameters
    ----------
    value: Union[bool, int, float, numpy.ndarray, tvm.nd.NDArray]
        The constant value.

    dtype: str, optional
        The data type of the resulting constant.

    span: Optional[tvm.relay.Span]
        Span that points to original source code.

    Note
    ----
    When dtype is None, we use the following rule:

    - int maps to "int32"
    - float maps to "float32"
    - bool maps to "bool"
    - otherwise, use the same default rule as numpy.
    """
    if isinstance(value, (_base.numeric_types, (bool, list))):
        value = _np.array(value, dtype=dtype)
        if not dtype:
            # when dtype is None: int maps to "int32", float maps to "float32"
            dtype = {_np.dtype("int64"): _np.int32, _np.dtype("float64"): _np.float32}.get(
                value.dtype, None
            )

    if isinstance(value, (_np.ndarray, _np.generic)):
        if dtype is not None:
            value = value.astype(dtype)
        value = _nd.array(value)

    if not isinstance(value, _nd.NDArray):
        raise ValueError("value has to be scalar or NDArray")

    return Constant(value, span)


def bind(expr, binds):
    """Bind free variables in expr or in function arguments.

    We can bind parameters of expr if it is a function.

    Parameters
    ----------
    expr : tvm.relay.Expr
        The input expression.

    binds : Map[tvm.relay.Var, tvm.relay.Expr]
        The specific bindings.

    Returns
    -------
    result : tvm.relay.Expr
        The expression or function after binding.
    """
    return _ffi_api.Bind(expr, binds)


@tvm._ffi.register_object("relay.StorageInfo")
class StorageInfo(Node):
    """StorageInfo

    The static storage information produced by memory planning.
Contains the storage ids where expressions are stored, the type of the "virtual devices" the expressions are stored on, and the sizes of each storage element.""" def __init__(self, sids, dev_types, sizes): self.__init_handle_by_constructor__(_ffi_api.StorageInfo, sids, dev_types, sizes) def __str__(self): return pretty_print(self) @property def storage_ids(self): return _ffi_api.StorageInfoStorageIds(self) @property def device_types(self): return _ffi_api.StorageInfoDeviceTypes(self) @property def storage_sizes(self): return _ffi_api.StorageInfoStorageSizes(self) @property def virtual_devices(self): return _ffi_api.StorageInfoVirtualDevices(self) @tvm._ffi.register_object("relay.StaticMemoryPlan") class StaticMemoryPlan(Node): """StaticMemoryPlan The result of static memory planning.""" def __init__(self, expr_to_storage_info): self.__init_handle_by_constructor__(_ffi_api.StaticMemoryPlan, expr_to_storage_info) def __str__(self): return pretty_print(self)
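The various `*WithFields` helpers above all follow the same copy-on-write convention for immutable IR nodes: `None` means "keep the existing field", and the original object is returned unchanged when no field differs. A minimal pure-Python sketch of that convention — not TVM's FFI-backed implementation; the `LetNode` class and `let_with_fields` helper are hypothetical stand-ins:

```python
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class LetNode:
    """Immutable stand-in for a relay.Let node (hypothetical)."""

    variable: Any
    value: Any
    body: Any


def let_with_fields(let: LetNode, variable=None, value=None, body=None) -> LetNode:
    """Return `let` with the given properties; a None property denotes 'no change'.

    Returns `let` itself when every property is unchanged, otherwise a copy.
    """
    variable = let.variable if variable is None else variable
    value = let.value if value is None else value
    body = let.body if body is None else body
    if (variable, value, body) == (let.variable, let.value, let.body):
        return let  # identity preserved: callers can use `is` to detect no-ops
    return LetNode(variable, value, body)


let = LetNode("x", 1, "x + x")
assert let_with_fields(let) is let  # no change -> same object back
updated = let_with_fields(let, value=2)
assert updated is not let and updated.value == 2
```

Preserving identity on no-op updates lets rewriting passes cheaply detect that a subtree was untouched and skip rebuilding its parents.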
tvm-main/python/tvm/relay/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=wildcard-import, redefined-builtin, invalid-name """The Relay IR namespace containing the IR definition and compiler.""" import os from sys import setrecursionlimit from . import base from . import ty from . import expr from . import function from . import type_functor from . import expr_functor from . import adt from . import prelude from . import loops from . import scope_builder from .base import pretty_print, astext from . import transform from . import analysis from . import collage from .build_module import build, create_executor, optimize from .transform import build_config from . import debug from . import param_dict from .backend import vm # Root operators from .op import nn from .op import image from .op import annotation from .op import vision from .op import contrib from .op import dyn from .op import random from .op.reduce import * from .op.tensor import * from .op.transform import * from .op.algorithm import * from . import frontend from . import backend from . import quantize from . import data_dep_optimization # Dialects from . 
import qnn from .scope_builder import ScopeBuilder # Load Memory Passes from .transform import memory_plan # Parser from .parser import parse, parse_expr, fromtext, SpanCheck # Required to traverse large programs setrecursionlimit(10000) # Span Span = base.Span SequentialSpan = base.SequentialSpan SourceName = base.SourceName # Type Type = ty.Type TupleType = ty.TupleType TensorType = ty.TensorType TypeKind = ty.TypeKind TypeVar = ty.TypeVar ShapeVar = ty.ShapeVar TypeConstraint = ty.TypeConstraint FuncType = ty.FuncType TypeRelation = ty.TypeRelation IncompleteType = ty.IncompleteType scalar_type = ty.scalar_type RefType = ty.RefType GlobalTypeVar = ty.GlobalTypeVar TypeCall = ty.TypeCall Any = ty.Any # Expr Expr = expr.RelayExpr Constant = expr.Constant Tuple = expr.Tuple Var = expr.Var GlobalVar = expr.GlobalVar Function = function.Function Call = expr.Call Let = expr.Let If = expr.If TupleGetItem = expr.TupleGetItem RefCreate = expr.RefCreate RefRead = expr.RefRead RefWrite = expr.RefWrite # ADT Pattern = adt.Pattern PatternWildcard = adt.PatternWildcard PatternVar = adt.PatternVar PatternConstructor = adt.PatternConstructor PatternTuple = adt.PatternTuple Constructor = adt.Constructor TypeData = adt.TypeData Clause = adt.Clause Match = adt.Match # helper functions var = expr.var const = expr.const bind = expr.bind # TypeFunctor TypeFunctor = type_functor.TypeFunctor TypeVisitor = type_functor.TypeVisitor TypeMutator = type_functor.TypeMutator # ExprFunctor ExprFunctor = expr_functor.ExprFunctor ExprVisitor = expr_functor.ExprVisitor ExprMutator = expr_functor.ExprMutator # Prelude Prelude = prelude.Prelude # Scope Builder ScopeBuilder = scope_builder.ScopeBuilder # Param Serialization save_param_dict = param_dict.save_param_dict load_param_dict = param_dict.load_param_dict
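The `setrecursionlimit(10000)` call above exists because Relay passes written as recursive Python visitors (`ExprVisitor`, `ExprMutator`) consume one Python frame per nested expression, so a deeply nested program overflows CPython's default limit of roughly 1000 frames. A small self-contained illustration of the failure mode and the mitigation (no TVM required):

```python
import sys


def depth(node):
    """Recursively measure nesting depth, one Python frame per level."""
    return 0 if node is None else 1 + depth(node[0])


# Build a structure nested far deeper than CPython's default limit (~1000);
# with the default limit, depth(nested) would raise RecursionError.
nested = None
for _ in range(3000):
    nested = (nested,)

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(10000)  # same mitigation relay/__init__.py applies
try:
    assert depth(nested) == 3000
finally:
    sys.setrecursionlimit(old_limit)
```

Raising the limit trades a hard failure for deeper native-stack usage, which is why TVM applies it once at import time rather than per pass.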
tvm-main/python/tvm/relay/debug.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=wildcard-import, redefined-builtin, invalid-name, forgotten-debug-statement """The Relay IR namespace containing the IR definition and compiler.""" import tvm._ffi # pylint: disable=unused-argument, import-outside-toplevel def _debugger_init(expr, stack): import pdb pdb.set_trace() @tvm._ffi.register_func("relay.debug") def _debug(*args): import pdb pdb.set_trace() # pylint: disable=unused-argument @tvm._ffi.register_func("relay.debug_interp") def _debug_interp(*args): _, _, _, ist = args print("Relay Debugger") print(" You can manipulate the expression under evaluation with the name `expr`.") print(" You can manipulate the call stack with the name `stack`.") print("--------------") print("--------------") _debugger_init(ist.current_expr, ist.stack)
tvm-main/python/tvm/relay/_build_module.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=no-else-return, unidiomatic-typecheck, undefined-variable """The interface for building Relay functions exposed from C++.""" import tvm._ffi tvm._ffi._init_api("relay.build_module", __name__)
tvm-main/python/tvm/relay/ty.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, unused-import
"""The type nodes of the Relay language."""
from tvm.ir import Type, TypeKind, TypeVar, GlobalTypeVar
from tvm.ir import TypeConstraint, FuncType, TupleType, IncompleteType
from tvm.ir import TypeCall, TypeRelation, TensorType, RelayRefType as RefType

from .base import RelayNode
from . import _ffi_api

Any = _ffi_api.Any


def is_dynamic(tensor_type):
    """Check whether a type has Any or symbolic variables as a shape.

    Parameters
    ----------
    tensor_type : Type
        The type to be inspected.

    Returns
    -------
    has_any : bool
        The check result.
    """
    return _ffi_api.IsDynamic(tensor_type)


def ShapeVar(name):
    """A helper that constructs a type variable of shape kind.

    Parameters
    ----------
    name : str
        The name of the shape variable.

    Returns
    -------
    type_var : tvm.relay.TypeVar
        The shape variable.
    """
    return TypeVar(name, kind=TypeKind.ShapeVar)


def scalar_type(dtype):
    """Creates a scalar type.

    This function returns TensorType((), dtype).

    Parameters
    ----------
    dtype : str
        The content data type.

    Returns
    -------
    s_type : tvm.relay.TensorType
        The result type.
    """
    return TensorType((), dtype)
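`is_dynamic` returns True when any dimension of a tensor type is `Any` or a symbolic variable rather than a concrete integer. A pure-Python approximation of that check over a plain shape tuple — the `ANY` sentinel and `shape_is_dynamic` helper are hypothetical; TVM's version inspects the actual `TensorType` node on the C++ side:

```python
ANY = object()  # stand-in for tvm.relay.Any(), an unknown dimension


def shape_is_dynamic(shape):
    """True if any dimension is not a concrete Python int."""
    return any(not isinstance(dim, int) for dim in shape)


assert shape_is_dynamic((1, ANY, 224)) is True  # unknown middle dim
assert shape_is_dynamic((1, 3, 224, 224)) is False  # fully static
assert shape_is_dynamic(()) is False  # scalar: TensorType((), dtype)
```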
tvm-main/python/tvm/relay/prelude.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name """A prelude containing useful global functions and ADT definitions.""" from tvm.ir import IRModule, TypeCall from tvm.tir import Any from tvm.relay.transform import ToANormalFormExpr from .ty import GlobalTypeVar, TensorType, scalar_type from .expr import Var, GlobalVar, If, const from .function import Function from .op.tensor import add, subtract, equal from .adt import Constructor, TypeData, Clause, Match from .adt import PatternConstructor, PatternVar, PatternWildcard from . import op, transform from .analysis import free_vars def get_tensor_array_shape(expr, dtype, prelude): """Get the static shape of a tensor array if it has fixed rank shape. By design, static ADT tensor in TVM has type name in the format of static_tensor_dim0_dim1_..._dimN_t. Parameters ---------- expr : Relay Expr Input expression. dtype : str Data type. prelude : Prelude Tensor array prelude Returns ------- shape : tuple of (int, Any) or None The output shape. None if input tensor array has dynamic shape. 
""" mod = prelude.mod mod["main"] = Function(free_vars(expr), expr) mod = transform.InferType()(mod) checked_type = mod["main"].body.checked_type assert isinstance(checked_type, TypeCall), "Input must be a tensor array." ta_type_str = checked_type.args[0].func.name_hint static_ta_ty_start = f"static_tensor_{dtype}" if ta_type_str.startswith(static_ta_ty_start): shape_str = ta_type_str.replace(f"{static_ta_ty_start}_", "").replace("_t", "") shape = [] if "scalar" not in shape_str: for dim_str in shape_str.split("_"): if dim_str in ["?", "any"]: shape.append(Any()) else: shape.append(int(dim_str)) return tuple(shape) return None def _get_name_static(canonical, dtype, shape, batch_dim=None, extra_shapes=None): """Get name for static shape tensor array op By design, static ADT tensor in TVM has type name in the format of static_tensor_dim0_dim1_..._dimN_t or static_tensor_batch1_dim0_dim1_..._dimN_t if tensorlist stack only have one item. Parameters ---------- canonical : String Tensor array op name dtype : str Data type. shape : tuple of (int, Any) or None Tensor array shape batch_dim: None or int 1 if tensorlist stack only have one item. 
None by default Returns ------- name : String The tensor array op name """ shape_str = _to_str(shape) if extra_shapes is not None: for n, s in extra_shapes.items(): extra_shape_str = f"_{n}_{_to_str(s)}" shape_str += extra_shape_str if len(shape_str) == 0: shape_str = "scalar" if canonical == "tensor_t": return f"static_tensor_{dtype}_{shape_str}_t" if batch_dim is None or canonical in ["tensor_constructor", "tensor_nil"]: return f"{canonical}_{dtype}_{shape_str}" if batch_dim != 1: return f"{canonical}_{dtype}_{shape_str}" return f"{canonical}_{dtype}_batch{batch_dim}_{shape_str}" def _to_str(shape): dim_names = [] for dim in shape: if isinstance(dim, Any): dim_names.append("any") else: dim_names.append(str(dim)) return "_".join(dim_names) class StaticTensorArrayOps(object): """Contains tensor array related ops for fixed rank tensor array""" def __init__(self, prelude, dtype, shape, batch_dim=None): """Create tensor array ops registry""" self.prelude = prelude self.dtype = dtype self.shape = shape self.batch_dim = batch_dim self.list, self.cons, self.nil = self.prelude.mod.get_type("List") def get_name(self, canonical, extra_shapes=None): """Get name corresponding to the canonical name""" return _get_name_static(canonical, self.dtype, self.shape, self.batch_dim, extra_shapes) def get_global_var(self, canonical): """Get global corresponding to the canonical name""" return self.prelude.get_global_var_static(canonical, self.dtype, self.shape, self.batch_dim) def get_type(self, canonical): """Get type corresponding to the canonical name""" return self.prelude.get_type_static(canonical, self.dtype, self.shape) def get_ctor(self, canonical): """Get ctor corresponding to the canonical name""" return self.prelude.get_ctor_static("tensor_t", canonical, self.dtype, self.shape) def define_tensor_adt(self): """Defines the static tensor ADT, which is the container for tensors with fixed shapes.""" tensor_type_name = self.get_name("tensor_t") # This is effectively functioning 
as a monomorphizer. # TODO(@jroesch): we should add full shape polymoprhism # and do monomorphization. # # Skip register if tensor type is already registered. global_type_names = set() for g_ty_var in self.prelude.mod.get_global_type_vars(): global_type_names.add(g_ty_var.name_hint) if tensor_type_name in global_type_names: self.tensor_type_var = self.get_type("tensor_t") return self.tensor_type_var = GlobalTypeVar(tensor_type_name) tensor_type = TensorType(self.shape, self.dtype) tensor_constructor_name = self.get_name("tensor_constructor") tensor_nil_name = self.get_name("tensor_nil") tensor_nil_case = Constructor(tensor_nil_name, [], self.tensor_type_var) tensor_case = Constructor(tensor_constructor_name, [tensor_type], self.tensor_type_var) self.prelude.mod[self.tensor_type_var] = TypeData( self.tensor_type_var, [], [tensor_nil_case, tensor_case] ) def define_tensor_array(self): """Defines a function to create a tensor array with size n. tensor_array(n) : Tensor[(), int32] -> list[tensor_t] """ tensor_array_constructor_name = self.get_name("tensor_array") tensor_array_constructor_var = self._create_global_var(tensor_array_constructor_name) tensor_nil_var = self.get_ctor("tensor_nil") tensor_type_var = self.get_ctor("tensor_t") n = Var("x", scalar_type("int32")) body = If( equal(n, const(0)), self.nil(), self.cons(tensor_nil_var(), tensor_array_constructor_var(subtract(n, const(1)))), ) self.prelude.mod[tensor_array_constructor_var] = Function( [n], body, self.list(tensor_type_var()), [] ) def define_tensor_take(self): """Defines a function to return a range of tensor_t on axis 0. tensor_take(t, lower, upper) : tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t """ # We don't register take for scalar tensor. 
ndim = len(self.shape) if ndim == 0: return take_name = self.get_name("tensor_take") if self.is_cached(take_name): return take_var = GlobalVar(take_name) origin_tensor_constructor = self.get_ctor("tensor_constructor") output_shape = [Any()] + list(self.shape[1:]) tensor_type_var, tensor_constructor, _ = self._get_adt_by_shape(output_shape) t = Var("tensor", self.tensor_type_var()) lower = Var("lower", scalar_type("int32")) upper = Var("upper", scalar_type("int32")) tvar = Var("t") case = Clause( PatternConstructor(origin_tensor_constructor, [PatternVar(tvar)]), tensor_constructor(op.take(tvar, op.arange(lower, upper, dtype="int32"), axis=0)), ) self.prelude.mod[take_var] = Function( [t, lower, upper], Match(t, [case], False), tensor_type_var(), [] ) def define_tensor_concatenate(self): """Defines a function to concatenate two tensor_t on axis 0. tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t """ # We don't register concatenate for scalar tensor. ndim = len(self.shape) if ndim == 0: return concat_name = self.get_name("tensor_concatenate") concat_var = GlobalVar(concat_name) if self.is_cached(concat_name): return output_shape = [Any()] + list(self.shape[1:]) tensor_type_var, tensor_constructor, _ = self._get_adt_by_shape(output_shape) origin_tensor_constructor = self.get_ctor("tensor_constructor") origin_tensor_type_var = self.tensor_type_var x = Var("x", origin_tensor_type_var()) y = Var("y", origin_tensor_type_var()) t1 = Var("t1") t2 = Var("t2") case = Clause( PatternConstructor(origin_tensor_constructor, [PatternVar(t1)]), Match( y, [ Clause( PatternConstructor(origin_tensor_constructor, [PatternVar(t2)]), tensor_constructor(op.concatenate([t1, t2], axis=0)), ) ], False, ), ) self.prelude.mod[concat_var] = Function( [x, y], Match(x, [case], False), tensor_type_var(), [] ) def define_tensor_expand_dims(self): """Defines a function to grow a tensor_t's rank by adding one dimension in front of the original tensor_t. 
tensor_expand_dims(t) : tensor_t -> tensor_t """ expand_dims_name = self.get_name("tensor_expand_dims") expand_dims_var = self._create_global_var(expand_dims_name) setattr(self.prelude, expand_dims_name, expand_dims_var) origin_tensor_type_var = self.tensor_type_var origin_tensor_constructor = self.get_ctor("tensor_constructor") x = Var("x", origin_tensor_type_var()) # Note: we set the added axis to be Any() instead of 1 due to # in stack op, we need to recursively concatenate. new_axis = Any() if self.batch_dim is None or self.batch_dim != 1 else self.batch_dim tensor_type_var, tensor_constructor, _ = self._get_adt_by_shape( [new_axis] + list(self.shape) ) t = Var("t") case = Clause( PatternConstructor(origin_tensor_constructor, [PatternVar(t)]), tensor_constructor(op.expand_dims(t, 0, 1)), ) self.prelude.mod[expand_dims_var] = Function( [x], Match(x, [case], False), tensor_type_var(), [] ) def define_tensor_array_read(self): """Defines a function to get the nth element of a list. Assume the list has at least one element. tensor_array_read(ta, n) : list[static_tensor_t] -> Tensor[(), int32] -> Tensor[self.shape, self.dtype] """ read_name = self.get_name("tensor_array_read") if self.is_cached(read_name): return read_var = GlobalVar(read_name) tensor_array = Var("tensor_array", self.list(self.tensor_type_var())) n = Var("x", scalar_type("int32")) self.prelude.mod[read_var] = Function( [tensor_array, n], self.prelude.nth(tensor_array, n), self.tensor_type_var(), [] ) def is_cached(self, name): try: self.prelude.mod.get_global_var(name) return True except ValueError: return False def define_tensor_array_write(self): """Defines a function to update a tensor array at index n with value v. 
tensor_array_write(ta, n, v) : list[static_tensor_t] -> Tensor[(), int32] -> Tensor[self.shape, self.dtype] -> list[static_tensor_t] """ write_name = self.get_name("tensor_array_write") if self.is_cached(write_name): return write_var = GlobalVar(write_name) tensor_array = Var("tensor_array", self.list(self.tensor_type_var())) n = Var("x", scalar_type("int32")) v = Var("v", self.tensor_type_var()) self.prelude.mod[write_var] = Function( [tensor_array, n, v], self.prelude.update(tensor_array, n, v), self.list(self.tensor_type_var()), [], ) def define_tensor_array_unstack(self): """Defines a function to unstack the values of a tensor_t in a tensor array. tensor_array_unstack_tensor(t) : tensor_t -> list[tensor_t] """ ndim = len(self.shape) # We don't register unstack for scalar tensor array if ndim == 0: return helper_name = self.get_name("tensor_array_unstack_helper") helper_var = self._create_global_var(helper_name) setattr(self.prelude, helper_name, helper_var) tensor = Var("t", TensorType(self.shape, self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) tensor_var = Var("tensor", TensorType(self.shape, self.dtype)) reduced_tensor_type_var, tensor_constructor, _ = self._get_adt_by_shape(self.shape[1:]) helper_body = If( equal(i, up), self.nil(), self.cons( tensor_constructor(op.take(tensor, i, axis=0)), helper_var(add(i, const(1)), up, tensor), ), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(reduced_tensor_type_var()), [] ) unstack_name = self.get_name("tensor_array_unstack") unstack_var = self._create_global_var(unstack_name) setattr(self.prelude, unstack_name, unstack_var) shape = op.shape_of(tensor_var) unstack_length = op.take(shape, const(0)) self.prelude.mod[unstack_var] = Function( [tensor_var], helper_var(const(0), unstack_length, tensor_var), self.list(reduced_tensor_type_var()), [], ) def define_tensor_array_scatter(self, indices_shape=None, force_update=False): """Defines a function to 
scatter the values of a tensor_t in indices of a tensor array. tensor_array_scatter(ta, indices, value) : list[tensor_t] -> Tensor[(Any), int32] -> tensor_t -> list[tensor_t] Set static indices shape by specifying indices_shape. Set force_update to get static indices shape operator. """ # When this operator has already been registered, only update # when force_update is set. This should be used only when we need to # redefine this op for static indices shape. extra_shapes = {"indices": indices_shape} if indices_shape is not None else None tensor_array_scatter_name = self.get_name("tensor_array_scatter", extra_shapes) if hasattr(self.prelude, tensor_array_scatter_name) and not force_update: return tensor_array_scatter_helper_name = self.get_name( "tensor_array_scatter_helper", extra_shapes ) tensor_array_scatter_helper_var = self._create_global_var(tensor_array_scatter_helper_name) ta = Var("ta", self.list(self.tensor_type_var())) current = Var("current", scalar_type("int32")) limit = Var("limit", scalar_type("int32")) indices_ = Var("indices_", TensorType(indices_shape or [Any()], "int32")) values_ = Var("values_", self.list(self.tensor_type_var())) write_var = self.get_global_var("tensor_array_write") read_var = self.get_global_var("tensor_array_read") helper_body = If( equal(current, limit), ta, tensor_array_scatter_helper_var( write_var(ta, op.take(indices_, current), read_var(values_, current)), add(current, const(1)), limit, indices_, values_, ), ) self.prelude.mod[tensor_array_scatter_helper_var] = Function( [ta, current, limit, indices_, values_], helper_body, self.list(self.tensor_type_var()), [], ) tensor_array_scatter_var = self._create_global_var(tensor_array_scatter_name) setattr(self.prelude, tensor_array_scatter_name, tensor_array_scatter_var) tensor_array = Var("tensor_array", self.list(self.tensor_type_var())) indices = Var("indices", TensorType(indices_shape or [Any()], "int32")) values = Var("values", self.list(self.tensor_type_var())) if 
indices_shape is None: indices_shape = op.shape_of(indices) limit = op.take(indices_shape, const(0)) else: limit = const(indices_shape[0]) body = tensor_array_scatter_helper_var(tensor_array, const(0), limit, indices, values) self.prelude.mod[tensor_array_scatter_var] = Function( [tensor_array, indices, values], body, self.list(self.tensor_type_var()), [] ) def define_tensor_array_split(self, value_shape=None, lengths_shape=None, force_update=False): """Defines a function to split the values of a tensor_t into a tensor array. tensor_array_split(ta, value, lengths) : list[tensor_t] -> tensor_t -> Tensor[(Any), int32] -> list[tensor_t] Set static value and lengths shapes by specifying value_shape and lengths_shape. Set force_update to get static value and lengths shape operator. """ # Skip scalar case ndim = len(self.shape) if ndim == 0: return # When this operator has already been registered, only update # when force_update is set. This should be used only when we need to # redefine this op for static value/indices shape. 
split_name = self.get_name("tensor_array_split") if self.is_cached(split_name): if not force_update: return tensor_array_split_helper_var = self.get_global_var("ta_split_helper") split_var = self.get_global_var("tensor_array_split") else: tensor_array_split_helper_name = self.get_name("ta_split_helper") tensor_array_split_helper_var = GlobalVar(tensor_array_split_helper_name) split_var = GlobalVar(split_name) output_shape = [Any()] + list(self.shape[1:]) output_tensor_type_var, _, output_ops = self._get_adt_by_shape(output_shape) output_ops.define_tensor_array_write() write_var = output_ops.get_global_var("tensor_array_write") if value_shape is None: value_type_var = self.tensor_type_var take_var = self.get_global_var("tensor_take") else: value_type_var, _, value_adts = self._get_adt_by_shape(value_shape) value_adts.define_tensor_take() take_var = value_adts.get_global_var("tensor_take") ta1 = Var("tensor_array", self.list(output_tensor_type_var())) value1 = Var("value1", value_type_var()) offset1 = Var("offset1", scalar_type("int32")) current1 = Var("current1", scalar_type("int32")) limit1 = Var("limit1", scalar_type("int32")) lengths1 = Var("lengths", TensorType(lengths_shape or [Any()], "int32")) helper1_body = If( equal(current1, limit1), ta1, write_var( tensor_array_split_helper_var( ta1, value1, add(offset1, op.take(lengths1, current1)), add(current1, const(1)), limit1, lengths1, ), current1, take_var(value1, offset1, add(op.take(lengths1, current1), offset1)), ), ) self.prelude.mod[tensor_array_split_helper_var] = Function( [ta1, value1, offset1, current1, limit1, lengths1], helper1_body, self.list(output_tensor_type_var()), [], ) tensor_array = Var("tensor_array", self.list(output_tensor_type_var())) value = Var("value", value_type_var()) lengths = Var("lengths", TensorType(lengths_shape or [Any()], "int32")) if lengths_shape is None: lengths_shape = op.shape_of(lengths) lengths_limit = op.take(lengths_shape, const(0)) else: lengths_limit = 
const(lengths_shape[0]) body = tensor_array_split_helper_var( tensor_array, value, const(0), const(0), lengths_limit, lengths ) self.prelude.mod[split_var] = Function( [tensor_array, value, lengths], body, self.list(output_tensor_type_var()), [] ) def define_tensor_array_concat(self): """Defines a function to return the values in the tensor array as concatenated tensor_t. tensor_array_concat(ta) : list[tensor_t] -> tensor_t """ # We don't register concat for scalar tensor array. ndim = len(self.shape) if ndim == 0: return concat_name = self.get_name("tensor_array_concat") if self.is_cached(concat_name): return concat_var = GlobalVar(concat_name) output_shape = [Any()] + list(self.shape[1:]) tensor_type_var, _, output_ops = self._get_adt_by_shape(output_shape) # Register tensor concatenate and get tensor_nil var for output shape output_ops.define_tensor_concatenate() tensor_concat_var = output_ops.get_global_var("tensor_concatenate") tensor_nil_var = output_ops.get_ctor("tensor_nil") tensor_array = Var("tensor_array", self.list(tensor_type_var())) hd = Var("hd") tl = Var("tl") nil_case = Clause(PatternConstructor(self.nil), tensor_nil_var()) cons_case = Clause( PatternConstructor(self.cons, [PatternVar(hd), PatternVar(tl)]), Match( tl, [ Clause(PatternConstructor(self.nil), hd), Clause(PatternWildcard(), tensor_concat_var(hd, concat_var(tl))), ], False, ), ) self.prelude.mod[concat_var] = Function( [tensor_array], Match(tensor_array, [nil_case, cons_case], False), tensor_type_var(), [] ) def define_tensor_array_stack(self): """Defines a function to get the values in the tensor array as a stack tensor_t. 
tensor_array_stack(l) : list[tensor_t] -> tensor_t """ stack_name = self.get_name("tensor_array_stack") stack_var = self._create_global_var(stack_name) setattr(self.prelude, stack_name, stack_var) tensor_array = Var("tensor_array", self.list(self.tensor_type_var())) expand_dims_var = self.get_global_var("tensor_expand_dims") # Register tensor_concatenate for output_shape new_axis = Any() if not self.batch_dim or self.batch_dim != 1 else self.batch_dim output_shape = [new_axis] + list(self.shape) _, _, output_ops = self._get_adt_by_shape(output_shape) output_ops.define_tensor_concatenate() concat_var = output_ops.get_global_var("tensor_concatenate") tensor_array_expand_dims = self.prelude.map(expand_dims_var, tensor_array) if self.batch_dim is not None and self.batch_dim == 1: # only one element tensors = self.prelude.id(self.prelude.hd(tensor_array_expand_dims)) else: tensors = self.prelude.foldl( concat_var, self.prelude.hd(tensor_array_expand_dims), self.prelude.tl(tensor_array_expand_dims), ) output_tensor_type_var, _, _ = self._get_adt_by_shape(output_shape) self.prelude.mod[stack_var] = Function( [tensor_array], tensors, output_tensor_type_var(), [] ) def define_tensor_array_gather(self): """Defines a function to return the selected values in a tensor array as tensor_t. 
tensor_array_gather(ta, indices) : list[tensor_t] -> Tensor[(Any), int32] -> tensor_t """ helper_name = self.get_name("tensor_array_gather_helper") helper_var = self._create_global_var(helper_name) new_axis = Any() if self.batch_dim is None or self.batch_dim != 1 else self.batch_dim output_shape = [new_axis] + list(self.shape) output_tensor_type_var, _, _ = self._get_adt_by_shape(output_shape) stack_var = self.get_global_var("tensor_array_stack") read_var = self.get_global_var("tensor_array_read") ta = Var("ta", self.list(self.tensor_type_var())) accu = Var("accu", self.list(self.tensor_type_var())) current = Var("current", scalar_type("int32")) limit = Var("limit", scalar_type("int32")) indices_ = Var("indices_", TensorType([Any()], "int32")) helper_body = If( equal(current, const(0)), stack_var(accu), helper_var( ta, self.cons(read_var(ta, op.take(indices_, subtract(current, const(1)))), accu), subtract(current, const(1)), limit, indices_, ), ) self.prelude.mod[helper_var] = Function( [ta, accu, current, limit, indices_], helper_body, output_tensor_type_var(), [] ) gather_name = self.get_name("tensor_array_gather") gather_var = self._create_global_var(gather_name) tensor_array = Var("tensor_array", self.list(self.tensor_type_var())) indices = Var("indices", TensorType([Any()], "int32")) indices_shape = op.shape_of(indices) limit = op.take(indices_shape, const(0)) body = helper_var(tensor_array, self.nil(), limit, limit, indices) self.prelude.mod[gather_var] = Function( [tensor_array, indices], body, output_tensor_type_var(), [] ) def define_tensor_get_data(self): """Defines a function to get a Tensor from tensor_t with given shape.""" tensor_get_data_name = self.get_name("tensor_get_data") tensor_get_data_var = self._create_global_var(tensor_get_data_name) tensor_constructor = self.get_ctor("tensor_constructor") t = Var("tensor", self.tensor_type_var()) tvar = Var("t") case = Clause(PatternConstructor(tensor_constructor, [PatternVar(tvar)]), tvar) 
self.prelude.mod[tensor_get_data_var] = Function( [t], Match(t, [case], False), TensorType(self.shape, self.dtype), [] ) def register(self): """Register all tensor array ops in Prelude""" self.define_tensor_adt() self.define_tensor_take() self.define_tensor_concatenate() self.define_tensor_expand_dims() self.define_tensor_array() self.define_tensor_array_read() self.define_tensor_array_write() self.define_tensor_array_unstack() self.define_tensor_array_scatter() self.define_tensor_array_split() self.define_tensor_array_concat() self.define_tensor_array_stack() self.define_tensor_array_gather() self.define_tensor_get_data() def _get_adt_by_shape(self, shape): """Get ADT type and constructor with given shape.""" adt_ops = StaticTensorArrayOps(self.prelude, self.dtype, shape, self.batch_dim) adt_ops.define_tensor_adt() tensor_type_var = adt_ops.get_type("tensor_t") tensor_constructor = adt_ops.get_ctor("tensor_constructor") return tensor_type_var, tensor_constructor, adt_ops def _create_global_var(self, name): """Create a GlobalVar if doesn't exist in prelude.""" global_var_name_set = set() for g_var_name in self.prelude.mod.get_global_vars(): global_var_name_set.add(g_var_name.name_hint) if name not in global_var_name_set: gvar = GlobalVar(name) else: gvar = self.prelude.mod.get_global_var(name) return gvar class TensorArrayOps(object): """Contains tensor array related ops""" def __init__(self, prelude, dtype): """Create tensor array ops registry""" self.prelude = prelude self.dtype = dtype self.list, self.cons, self.nil = self.prelude.mod.get_type("List") def get_name(self, canonical): """Get name corresponding to the canonical name""" return self.prelude.get_name(canonical, self.dtype) def get_global_var(self, canonical): """Get global corresponding to the canonical name""" return self.prelude.get_global_var(canonical, self.dtype) def get_type(self, canonical): """Get type corresponding to the canonical name""" return self.prelude.get_type(canonical, self.dtype) 
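# The `get_name` accessors above (and `Prelude.get_name` further down in this
# file) resolve one canonical op name to a per-dtype global name. As a
# standalone illustration, `mangle` below is a hypothetical helper mirroring
# that mangling scheme; it is not part of the TVM API.

```python
def mangle(canonical: str, dtype: str) -> str:
    """Return the per-dtype global name for a canonical op name,
    mirroring Prelude.get_name: 'tensor_t' is special-cased so the
    dtype lands in the middle of the type name."""
    if canonical == "tensor_t":
        return f"tensor_{dtype}_t"
    return f"{canonical}_{dtype}"

print(mangle("tensor_t", "float32"))         # tensor_float32_t
print(mangle("tensor_array_read", "int64"))  # tensor_array_read_int64
```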
def get_ctor(self, canonical): """Get ctor corresponding to the canonical name""" return self.prelude.get_ctor(self.tensor_type_var.name_hint, canonical, self.dtype) def define_tensor_adt(self): """Defines the dynamic tensor ADT, which is the container for tensors with variable shapes.""" tensor_type_name = self.get_name("tensor_t") self.tensor_type_var = tensor_type_var = GlobalTypeVar(tensor_type_name) tensor0_type = TensorType([], self.dtype) tensor1_type = TensorType([Any()], self.dtype) tensor2_type = TensorType([Any(), Any()], self.dtype) tensor3_type = TensorType([Any(), Any(), Any()], self.dtype) tensor4_type = TensorType([Any(), Any(), Any(), Any()], self.dtype) tensor5_type = TensorType([Any(), Any(), Any(), Any(), Any()], self.dtype) tensor6_type = TensorType([Any(), Any(), Any(), Any(), Any(), Any()], self.dtype) tensor_nil_name = self.get_name("tensor_nil") tensor0_name = self.get_name("tensor0") tensor1_name = self.get_name("tensor1") tensor2_name = self.get_name("tensor2") tensor3_name = self.get_name("tensor3") tensor4_name = self.get_name("tensor4") tensor5_name = self.get_name("tensor5") tensor6_name = self.get_name("tensor6") tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var) tensor0_case = Constructor(tensor0_name, [tensor0_type], tensor_type_var) tensor1_case = Constructor(tensor1_name, [tensor1_type], tensor_type_var) tensor2_case = Constructor(tensor2_name, [tensor2_type], tensor_type_var) tensor3_case = Constructor(tensor3_name, [tensor3_type], tensor_type_var) tensor4_case = Constructor(tensor4_name, [tensor4_type], tensor_type_var) tensor5_case = Constructor(tensor5_name, [tensor5_type], tensor_type_var) tensor6_case = Constructor(tensor6_name, [tensor6_type], tensor_type_var) self.prelude.mod[tensor_type_var] = TypeData( tensor_type_var, [], [ tensor_nil_case, tensor0_case, tensor1_case, tensor2_case, tensor3_case, tensor4_case, tensor5_case, tensor6_case, ], ) def define_tensor_take(self): """Defines a function to return 
a range of tensor_t on axis 0. tensor_take(t, lower, upper) : tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t """ take_name = self.get_name("tensor_take") take_var = GlobalVar(take_name) tensor_t = self.tensor_type_var tensor1_var = self.get_ctor("tensor1") tensor2_var = self.get_ctor("tensor2") tensor3_var = self.get_ctor("tensor3") tensor4_var = self.get_ctor("tensor4") tensor5_var = self.get_ctor("tensor5") tensor6_var = self.get_ctor("tensor6") t = Var("tensor", tensor_t()) lower = Var("lower", scalar_type("int32")) upper = Var("upper", scalar_type("int32")) t1 = Var("t1") t2 = Var("t2") t3 = Var("t3") t4 = Var("t4") t5 = Var("t5") t6 = Var("t6") tensor1_case = Clause( PatternConstructor(tensor1_var, [PatternVar(t1)]), tensor1_var(op.take(t1, op.arange(lower, upper, dtype="int32"))), ) tensor2_case = Clause( PatternConstructor(tensor2_var, [PatternVar(t2)]), tensor2_var(op.take(t2, op.arange(lower, upper, dtype="int32"), axis=0)), ) tensor3_case = Clause( PatternConstructor(tensor3_var, [PatternVar(t3)]), tensor3_var(op.take(t3, op.arange(lower, upper, dtype="int32"), axis=0)), ) tensor4_case = Clause( PatternConstructor(tensor4_var, [PatternVar(t4)]), tensor4_var(op.take(t4, op.arange(lower, upper, dtype="int32"), axis=0)), ) tensor5_case = Clause( PatternConstructor(tensor5_var, [PatternVar(t5)]), tensor5_var(op.take(t5, op.arange(lower, upper, dtype="int32"), axis=0)), ) tensor6_case = Clause( PatternConstructor(tensor6_var, [PatternVar(t6)]), tensor6_var(op.take(t6, op.arange(lower, upper, dtype="int32"), axis=0)), ) self.prelude.mod[take_var] = Function( [t, lower, upper], Match( t, [ tensor1_case, tensor2_case, tensor3_case, tensor4_case, tensor5_case, tensor6_case, ], False, ), tensor_t(), [], ) def define_tensor_expand_dims(self): """Defines a function to grow a tensor_t's rank by adding one dimension in front of the original tensor_t. 
tensor_expand_dims(t) : tensor_t -> tensor_t """ expand_dims_name = self.get_name("tensor_expand_dims") expand_dims_var = GlobalVar(expand_dims_name) tensor_type_var = self.tensor_type_var x = Var("x", tensor_type_var()) t0 = Var("t0") t1 = Var("t1") t2 = Var("t2") t3 = Var("t3") t4 = Var("t4") t5 = Var("t5") tensor0_var = self.get_ctor("tensor0") tensor1_var = self.get_ctor("tensor1") tensor2_var = self.get_ctor("tensor2") tensor3_var = self.get_ctor("tensor3") tensor4_var = self.get_ctor("tensor4") tensor5_var = self.get_ctor("tensor5") tensor6_var = self.get_ctor("tensor6") tensor0_case = Clause( PatternConstructor(tensor0_var, [PatternVar(t0)]), tensor1_var(op.expand_dims(t0, 0, 1)) ) tensor1_case = Clause( PatternConstructor(tensor1_var, [PatternVar(t1)]), tensor2_var(op.expand_dims(t1, 0, 1)) ) tensor2_case = Clause( PatternConstructor(tensor2_var, [PatternVar(t2)]), tensor3_var(op.expand_dims(t2, 0, 1)) ) tensor3_case = Clause( PatternConstructor(tensor3_var, [PatternVar(t3)]), tensor4_var(op.expand_dims(t3, 0, 1)) ) tensor4_case = Clause( PatternConstructor(tensor4_var, [PatternVar(t4)]), tensor5_var(op.expand_dims(t4, 0, 1)) ) tensor5_case = Clause( PatternConstructor(tensor5_var, [PatternVar(t5)]), tensor6_var(op.expand_dims(t5, 0, 1)) ) self.prelude.mod[expand_dims_var] = Function( [x], Match( x, [ tensor0_case, tensor1_case, tensor2_case, tensor3_case, tensor4_case, tensor5_case, ], False, ), tensor_type_var(), ) def define_tensor_concat(self): """Defines a function to concatenate two tensor_t on the first axis tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t """ concat_name = self.get_name("tensor_concatenate") concat_var = GlobalVar(concat_name) tensor_type_var = self.tensor_type_var x = Var("x", tensor_type_var()) y = Var("y", tensor_type_var()) tensor1_var = self.get_ctor("tensor1") tensor2_var = self.get_ctor("tensor2") tensor3_var = self.get_ctor("tensor3") tensor4_var = self.get_ctor("tensor4") t11 = Var("t11") t12 = Var("t12") t21 = 
Var("t21") t22 = Var("t22") t31 = Var("t31") t32 = Var("t32") t41 = Var("t41") t42 = Var("t42") tensor1_case = Clause( PatternConstructor(tensor1_var, [PatternVar(t11)]), Match( y, [ Clause( PatternConstructor(tensor1_var, [PatternVar(t12)]), tensor1_var(op.concatenate([t11, t12], axis=0)), ) ], False, ), ) tensor2_case = Clause( PatternConstructor(tensor2_var, [PatternVar(t21)]), Match( y, [ Clause( PatternConstructor(tensor2_var, [PatternVar(t22)]), tensor2_var(op.concatenate([t21, t22], axis=0)), ) ], False, ), ) tensor3_case = Clause( PatternConstructor(tensor3_var, [PatternVar(t31)]), Match( y, [ Clause( PatternConstructor(tensor3_var, [PatternVar(t32)]), tensor3_var(op.concatenate([t31, t32], axis=0)), ) ], False, ), ) tensor4_case = Clause( PatternConstructor(tensor4_var, [PatternVar(t41)]), Match( y, [ Clause( PatternConstructor(tensor4_var, [PatternVar(t42)]), tensor4_var(op.concatenate([t41, t42], axis=0)), ) ], False, ), ) # op.concatenate does not support tensor with rank higher than 4 self.prelude.mod[concat_var] = Function( [x, y], Match(x, [tensor1_case, tensor2_case, tensor3_case, tensor4_case], False), tensor_type_var(), ) def define_tensor_array(self): """Defines a function to create a tensor array with size n. tensor_array(n) : Tensor[(), int32] -> list[tensor_t] """ tensor_array_constructor_name = self.get_name("tensor_array") tensor_array_constructor_var = GlobalVar(tensor_array_constructor_name) setattr(self.prelude, tensor_array_constructor_name, tensor_array_constructor_var) tensor_nil_var = self.get_ctor("tensor_nil") tensor_type_var = self.get_type("tensor_t") n = Var("x", scalar_type("int32")) body = If( equal(n, const(0)), self.nil(), self.cons(tensor_nil_var(), tensor_array_constructor_var(subtract(n, const(1)))), ) self.prelude.mod[tensor_array_constructor_var] = Function( [n], body, self.list(tensor_type_var()), [] ) def define_tensor_array_read(self): """Defines a function to get the nth element of a list.
Assume the list has at least one element. tensor_array_read(ta, n) : list[tensor_t] -> Tensor[(), int32] -> tensor_t """ read_name = self.get_name("tensor_array_read") read_var = GlobalVar(read_name) setattr(self.prelude, read_name, read_var) tensor_type_var = self.tensor_type_var tensor_array = Var("tensor_array", self.list(tensor_type_var())) n = Var("x", scalar_type("int32")) self.prelude.mod[read_var] = Function( [tensor_array, n], self.prelude.nth(tensor_array, n), tensor_type_var(), [] ) def define_tensor_array_write(self): """Defines a function to update a tensor array at index n with value v. tensor_array_write(ta, n, v) : list[tensor_t] -> Tensor[(), int32] -> tensor_t -> list[tensor_t] """ write_name = self.get_name("tensor_array_write") write_var = GlobalVar(write_name) tensor_type_var = self.tensor_type_var tensor_array = Var("tensor_array", self.list(tensor_type_var())) n = Var("x", scalar_type("int32")) v = Var("v", tensor_type_var()) self.prelude.mod[write_var] = Function( [tensor_array, n, v], self.prelude.update(tensor_array, n, v), self.list(tensor_type_var()), [], ) def define_tensor_array_unstack_tensor1(self): """Defines a function to unstack the values of a tensor_t with rank 1 in a tensor array. 
tensor_array_unstack_tensor1(t) : tensor_t -> list[tensor_t] """ helper_name = self.get_name("tensor_array_unstack_tensor1_helper") helper_var = GlobalVar(helper_name) tensor = Var("t", TensorType([Any()], self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) tensor_type_var = self.tensor_type_var tensor0_var = self.get_ctor("tensor0") helper_body = If( equal(i, up), self.nil(), self.cons(tensor0_var(op.take(tensor, i)), helper_var(add(i, const(1)), up, tensor)), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(tensor_type_var()), [] ) unstack_name = self.get_name("tensor_array_unstack_tensor1") unstack_var = GlobalVar(unstack_name) tensor1 = Var("tensor", TensorType([Any()], self.dtype)) shape = op.shape_of(tensor1) ndim = op.take(shape, const(0)) self.prelude.mod[unstack_var] = Function( [tensor1], helper_var(const(0), ndim, tensor1), self.list(tensor_type_var()), [] ) def define_tensor_array_unstack_tensor2(self): """Defines a function to unstack the values of a tensor_t with rank 2 in a tensor array. 
tensor_array_unstack_tensor2(t) : tensor_t -> list[tensor_t] """ helper_name = self.get_name("tensor_array_unstack_tensor2_helper") helper_var = GlobalVar(helper_name) setattr(self.prelude, helper_name, helper_var) tensor = Var("t", TensorType([Any(), Any()], self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) helper_body = If( equal(i, up), self.nil(), self.cons( self.get_ctor("tensor1")(op.take(tensor, i, axis=0)), helper_var(add(i, const(1)), up, tensor), ), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(self.tensor_type_var()), [] ) tensor_array_unstack_tensor2_name = self.get_name("tensor_array_unstack_tensor2") tensor_array_unstack_tensor2_var = GlobalVar(tensor_array_unstack_tensor2_name) setattr(self.prelude, tensor_array_unstack_tensor2_name, tensor_array_unstack_tensor2_var) tensor2 = Var("tensor", TensorType([Any(), Any()], self.dtype)) shape = op.shape_of(tensor2) ndim = op.take(shape, const(0)) self.prelude.mod[tensor_array_unstack_tensor2_var] = Function( [tensor2], helper_var(const(0), ndim, tensor2), self.list(self.tensor_type_var()), [] ) def define_tensor_array_unstack_tensor3(self): """Defines a function to unstack the values of a tensor_t with rank 3 in a tensor array. 
tensor_array_unstack_tensor3(t) : tensor_t -> list[tensor_t] """ helper_name = self.get_name("tensor_array_unstack_tensor3_helper") helper_var = GlobalVar(helper_name) setattr(self.prelude, helper_name, helper_var) tensor = Var("t", TensorType([Any(), Any(), Any()], self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) helper_body = If( equal(i, up), self.nil(), self.cons( self.get_ctor("tensor2")(op.take(tensor, i, axis=0)), helper_var(add(i, const(1)), up, tensor), ), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(self.tensor_type_var()), [] ) tensor_array_unstack_tensor3_name = self.get_name("tensor_array_unstack_tensor3") tensor_array_unstack_tensor3_var = GlobalVar(tensor_array_unstack_tensor3_name) setattr(self.prelude, tensor_array_unstack_tensor3_name, tensor_array_unstack_tensor3_var) tensor3 = Var("tensor", TensorType([Any(), Any(), Any()], self.dtype)) shape = op.shape_of(tensor3) ndim = op.take(shape, const(0)) self.prelude.mod[tensor_array_unstack_tensor3_var] = Function( [tensor3], helper_var(const(0), ndim, tensor3), self.list(self.tensor_type_var()), [] ) def define_tensor_array_unstack_tensor4(self): """Defines a function to unstack the values of a tensor_t with rank 4 in a tensor array. 
tensor_array_unstack_tensor4(t) : tensor_t -> list[tensor_t] """ helper_name = self.get_name("tensor_array_unstack_tensor4_helper") helper_var = GlobalVar(helper_name) setattr(self.prelude, helper_name, helper_var) tensor = Var("t", TensorType([Any(), Any(), Any(), Any()], self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) helper_body = If( equal(i, up), self.nil(), self.cons( self.get_ctor("tensor3")(op.take(tensor, i, axis=0)), helper_var(add(i, const(1)), up, tensor), ), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(self.tensor_type_var()), [] ) tensor_array_unstack_tensor4_name = self.get_name("tensor_array_unstack_tensor4") tensor_array_unstack_tensor4_var = GlobalVar(tensor_array_unstack_tensor4_name) setattr(self.prelude, tensor_array_unstack_tensor4_name, tensor_array_unstack_tensor4_var) tensor4 = Var("tensor", TensorType([Any(), Any(), Any(), Any()], self.dtype)) shape = op.shape_of(tensor4) ndim = op.take(shape, const(0)) self.prelude.mod[tensor_array_unstack_tensor4_var] = Function( [tensor4], helper_var(const(0), ndim, tensor4), self.list(self.tensor_type_var()), [] ) def define_tensor_array_unstack_tensor5(self): """Defines a function to unstack the values of a tensor_t with rank 5 in a tensor array. 
tensor_array_unstack_tensor5(t) : tensor_t -> list[tensor_t] """ helper_name = self.get_name("tensor_array_unstack_tensor5_helper") helper_var = GlobalVar(helper_name) setattr(self.prelude, helper_name, helper_var) tensor = Var("t", TensorType([Any(), Any(), Any(), Any(), Any()], self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) helper_body = If( equal(i, up), self.nil(), self.cons( self.get_ctor("tensor4")(op.take(tensor, i, axis=0)), helper_var(add(i, const(1)), up, tensor), ), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(self.tensor_type_var()), [] ) tensor_array_unstack_tensor5_name = self.get_name("tensor_array_unstack_tensor5") tensor_array_unstack_tensor5_var = GlobalVar(tensor_array_unstack_tensor5_name) setattr(self.prelude, tensor_array_unstack_tensor5_name, tensor_array_unstack_tensor5_var) tensor5 = Var("tensor", TensorType([Any(), Any(), Any(), Any(), Any()], self.dtype)) shape = op.shape_of(tensor5) ndim = op.take(shape, const(0)) self.prelude.mod[tensor_array_unstack_tensor5_var] = Function( [tensor5], helper_var(const(0), ndim, tensor5), self.list(self.tensor_type_var()), [] ) def define_tensor_array_unstack_tensor6(self): """Defines a function to unstack the values of a tensor_t with rank 6 in a tensor array. 
tensor_array_unstack_tensor6(t) : tensor_t -> list[tensor_t] """ helper_name = self.get_name("tensor_array_unstack_tensor6_helper") helper_var = GlobalVar(helper_name) setattr(self.prelude, helper_name, helper_var) tensor = Var("t", TensorType([Any(), Any(), Any(), Any(), Any(), Any()], self.dtype)) up = Var("up", scalar_type("int32")) i = Var("i", scalar_type("int32")) helper_body = If( equal(i, up), self.nil(), self.cons( self.get_ctor("tensor5")(op.take(tensor, i, axis=0)), helper_var(add(i, const(1)), up, tensor), ), ) self.prelude.mod[helper_var] = Function( [i, up, tensor], helper_body, self.list(self.tensor_type_var()), [] ) tensor_array_unstack_tensor6_name = self.get_name("tensor_array_unstack_tensor6") tensor_array_unstack_tensor6_var = GlobalVar(tensor_array_unstack_tensor6_name) setattr(self.prelude, tensor_array_unstack_tensor6_name, tensor_array_unstack_tensor6_var) tensor6 = Var("tensor", TensorType([Any(), Any(), Any(), Any(), Any(), Any()], self.dtype)) shape = op.shape_of(tensor6) ndim = op.take(shape, const(0)) self.prelude.mod[tensor_array_unstack_tensor6_var] = Function( [tensor6], helper_var(const(0), ndim, tensor6), self.list(self.tensor_type_var()), [] ) def define_tensor_array_scatter(self): """Defines a function to scatter the values of a tensor_t in indices of a tensor array. 
tensor_array_scatter(ta, indices, value) : list[tensor_t] -> Tensor[(Any), int32] -> tensor_t -> list[tensor_t] """ tensor_array_scatter_helper_name = self.get_name("tensor_array_scatter_helper") tensor_array_scatter_helper_var = GlobalVar(tensor_array_scatter_helper_name) tensor_t = self.tensor_type_var ta = Var("ta", self.list(tensor_t())) current = Var("current", scalar_type("int32")) limit = Var("limit", scalar_type("int32")) indices_ = Var("indices_", TensorType([Any()], "int32")) values_ = Var("values_", self.list(tensor_t())) write_var = self.get_global_var("tensor_array_write") read_var = self.get_global_var("tensor_array_read") helper_body = If( equal(current, limit), ta, tensor_array_scatter_helper_var( write_var(ta, op.take(indices_, current), read_var(values_, current)), add(current, const(1)), limit, indices_, values_, ), ) self.prelude.mod[tensor_array_scatter_helper_var] = Function( [ta, current, limit, indices_, values_], helper_body, self.list(tensor_t()), [] ) tensor_array_scatter_name = self.get_name("tensor_array_scatter") tensor_array_scatter_var = GlobalVar(tensor_array_scatter_name) setattr(self.prelude, tensor_array_scatter_name, tensor_array_scatter_var) tensor_array = Var("tensor_array", self.list(tensor_t())) indices = Var("indices", TensorType([Any()], "int32")) values = Var("values", self.list(tensor_t())) indices_shape = op.shape_of(indices) limit = op.take(indices_shape, const(0)) body = tensor_array_scatter_helper_var(tensor_array, const(0), limit, indices, values) self.prelude.mod[tensor_array_scatter_var] = Function( [tensor_array, indices, values], body, self.list(tensor_t()), [] ) def define_tensor_array_split(self): """Defines a function to split the values of a tensor_t into a tensor array. 
tensor_array_split(ta, value, lengths) : list[tensor_t] -> tensor_t -> Tensor[(Any), int32] -> list[tensor_t] """ tensor_t = self.tensor_type_var tensor_array_split_helper_name = self.get_name("ta_split_helper") tensor_array_split_helper_var = GlobalVar(tensor_array_split_helper_name) setattr(self.prelude, tensor_array_split_helper_name, tensor_array_split_helper_var) ta1 = Var("tensor_array", self.list(tensor_t())) value1 = Var("value1", tensor_t()) offset1 = Var("offset1", scalar_type("int32")) current1 = Var("current1", scalar_type("int32")) limit1 = Var("limit1", scalar_type("int32")) lengths1 = Var("lengths", TensorType([Any()], "int32")) write_var = self.get_global_var("tensor_array_write") take_var = self.get_global_var("tensor_take") helper1_body = If( equal(current1, limit1), ta1, write_var( tensor_array_split_helper_var( ta1, value1, add(offset1, op.take(lengths1, current1)), add(current1, const(1)), limit1, lengths1, ), current1, take_var(value1, offset1, add(op.take(lengths1, current1), offset1)), ), ) self.prelude.mod[tensor_array_split_helper_var] = Function( [ta1, value1, offset1, current1, limit1, lengths1], helper1_body, self.list(tensor_t()), [], ) split_name = self.get_name("tensor_array_split") split_var = GlobalVar(split_name) setattr(self.prelude, split_name, split_var) tensor_array = Var("tensor_array", self.list(tensor_t())) value = Var("value", tensor_t()) lengths = Var("lengths", TensorType([Any()], "int32")) lengths_shape = op.shape_of(lengths) lengths_limit = op.take(lengths_shape, const(0)) body = tensor_array_split_helper_var( tensor_array, value, const(0), const(0), lengths_limit, lengths ) self.prelude.mod[split_var] = Function( [tensor_array, value, lengths], body, self.list(tensor_t()), [] ) def define_tensor_array_concat(self): """Defines a function to return the values in the tensor array as concatenated tensor_t. 
tensor_array_concat(ta) : list[tensor_t] -> tensor_t """ concat_name = self.get_name("tensor_array_concat") concat_var = GlobalVar(concat_name) setattr(self.prelude, concat_name, concat_var) tensor_concat_var = self.get_global_var("tensor_concatenate") tensor_t = self.tensor_type_var tensor_nil_var = self.get_ctor("tensor_nil") tensor_array = Var("tensor_array", self.list(tensor_t())) hd = Var("hd") tl = Var("tl") nil_case = Clause(PatternConstructor(self.nil), tensor_nil_var()) cons_case = Clause( PatternConstructor(self.cons, [PatternVar(hd), PatternVar(tl)]), Match( tl, [ Clause(PatternConstructor(self.nil), hd), Clause(PatternWildcard(), tensor_concat_var(hd, concat_var(tl))), ], False, ), ) self.prelude.mod[concat_var] = Function( [tensor_array], Match(tensor_array, [nil_case, cons_case], False), tensor_t(), [] ) def define_tensor_array_gather(self): """Defines a function to return the selected values in a tensor array as tensor_t. tensor_array_gather(ta, indices) : list[tensor_t] -> Tensor[(Any), int32] -> tensor_t """ helper_name = self.get_name("tensor_array_gather_helper") helper_var = GlobalVar(helper_name) setattr(self.prelude, helper_name, helper_var) tensor_type_var = self.tensor_type_var stack_var = self.get_global_var("tensor_array_stack") read_var = self.get_global_var("tensor_array_read") ta = Var("ta", self.list(tensor_type_var())) accu = Var("accu", self.list(tensor_type_var())) current = Var("current", scalar_type("int32")) limit = Var("limit", scalar_type("int32")) indices_ = Var("indices_", TensorType([Any()], "int32")) helper_body = If( equal(current, const(0)), stack_var(accu), helper_var( ta, self.cons(read_var(ta, op.take(indices_, subtract(current, const(1)))), accu), subtract(current, const(1)), limit, indices_, ), ) self.prelude.mod[helper_var] = Function( [ta, accu, current, limit, indices_], helper_body, tensor_type_var(), [] ) gather_name = self.get_name("tensor_array_gather") gather_var = GlobalVar(gather_name) setattr(self.prelude, gather_name,
gather_var) tensor_array = Var("tensor_array", self.list(tensor_type_var())) indices = Var("indices", TensorType([Any()], "int32")) indices_shape = op.shape_of(indices) limit = op.take(indices_shape, const(0)) body = helper_var(tensor_array, self.nil(), limit, limit, indices) self.prelude.mod[gather_var] = Function( [tensor_array, indices], body, tensor_type_var(), [] ) def define_tensor_array_stack(self): """Defines a function to get the values in the tensor array as a stack tensor_t. tensor_array_stack(l) : list[tensor_t] -> tensor_t """ stack_name = self.get_name("tensor_array_stack") stack_var = GlobalVar(stack_name) setattr(self.prelude, stack_name, stack_var) tensor_type_var = self.tensor_type_var tensor_array = Var("tensor_array", self.list(tensor_type_var())) expand_dims_var = self.get_global_var("tensor_expand_dims") concat_var = self.get_global_var("tensor_concatenate") tensor_array_expand_dims = self.prelude.map(expand_dims_var, tensor_array) tensors = self.prelude.foldl( concat_var, self.prelude.hd(tensor_array_expand_dims), self.prelude.tl(tensor_array_expand_dims), ) self.prelude.mod[stack_var] = Function( [tensor_array], ToANormalFormExpr(tensors), tensor_type_var(), [] ) def register(self): """Register all tensor array ops in Prelude""" self.define_tensor_adt() self.define_tensor_take() self.define_tensor_expand_dims() self.define_tensor_concat() self.define_tensor_array() self.define_tensor_array_read() self.define_tensor_array_write() self.define_tensor_array_unstack_tensor1() self.define_tensor_array_unstack_tensor2() self.define_tensor_array_unstack_tensor3() self.define_tensor_array_unstack_tensor4() self.define_tensor_array_unstack_tensor5() self.define_tensor_array_unstack_tensor6() self.define_tensor_array_scatter() self.define_tensor_array_split() self.define_tensor_array_concat() self.define_tensor_array_stack() # TODO(wweic): Gather fails in PartialEvaluate # self.define_tensor_array_gather() class Prelude: """Contains standard 
definitions.""" def __init__(self, mod=None): if mod is None: mod = IRModule() self.mod = mod self.load_prelude() def get_name(self, canonical, dtype): """Get name corresponding to the canonical name""" if canonical == "tensor_t": return f"tensor_{dtype}_t" return f"{canonical}_{dtype}" def get_global_var(self, canonical, dtype): """Get global var corresponding to the canonical name""" name = self.get_name(canonical, dtype) return self.mod.get_global_var(name) def get_type(self, canonical, dtype): """Get type corresponding to the canonical name""" name = self.get_name(canonical, dtype) return self.mod.get_global_type_var(name) def get_ctor(self, ty_name, canonical, dtype): """Get constructor corresponding to the canonical name""" name = self.get_name(canonical, dtype) ctors = self.mod.get_type(ty_name) for ctor in ctors: if ctor.name_hint == name: return ctor raise Exception(f"could not find {name}") def get_tensor_ctor(self, canonical, dtype): ty = self.get_type("tensor_t", dtype) return self.get_ctor(ty.name_hint, canonical, dtype) def get_name_static(self, canonical, dtype, shape, batch_dim=None): """Get name corresponding to the canonical name""" return _get_name_static(canonical, dtype, shape, batch_dim) def get_global_var_static(self, canonical, dtype, shape, batch_dim=None): """Get var corresponding to the canonical name""" name = self.get_name_static(canonical, dtype, shape, batch_dim) return self.mod.get_global_var(name) def get_type_static(self, canonical, dtype, shape): """Get type corresponding to the canonical name""" name = self.get_name_static(canonical, dtype, shape) return self.mod.get_global_type_var(name) def get_ctor_static(self, ty_name, name, dtype, shape): """Get constructor corresponding to the canonical name""" ty_name = self.get_name_static(ty_name, dtype, shape) name = self.get_name_static(name, dtype, shape) ctors = self.mod.get_type(ty_name) for ctor in ctors: if ctor.name_hint == name: return ctor raise Exception(f"could not find 
{name}") def get_tensor_ctor_static(self, name, dtype, shape): """Get constructor corresponding to the canonical name""" return self.get_ctor_static("tensor_t", name, dtype, shape) def load_prelude(self): """Parses the Prelude from Relay's text format into a module.""" # TODO(@jroesch): we should remove this helper when we port over prelude self.mod.import_from_std("prelude.rly") GLOBAL_DEFS = [ "id", "compose", "flip", "hd", "tl", "nth", "update", "map", "foldl", "foldr", "foldr1", "concat", "filter", "zip", "rev", "map_accuml", "map_accumr", "unfoldl", "unfoldr", "sum", "length", "tmap", "size", "iterate", ] for global_def in GLOBAL_DEFS: setattr(self, global_def, self.mod.get_global_var(global_def)) for dtype in [ "float32", "float16", "float64", "int32", "uint8", "int8", "int16", "uint16", "int64", ]: tensor_array_ops = TensorArrayOps(self, dtype) tensor_array_ops.register() # Renamer doesn't properly deal with constructors, etc # self.mod = AnnotateSpans()(self.mod)
tvm
tvm-main/python/tvm/relay/_make.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
The constructors for all Relay AST nodes exposed from C++.

This module includes MyPy type signatures for all of the
exposed modules.
"""
import tvm._ffi

tvm._ffi._init_api("relay._make", __name__)
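`_init_api` works by scanning a global table of functions exported from C++ and attaching each one, minus its namespace prefix, as an attribute of this module. A rough sketch of that mechanism, with a hypothetical `_REGISTRY` dict standing in for the real C++ function table:

```python
import types

# Hypothetical registry standing in for the C++ packed-function table
# that tvm._ffi._init_api scans by prefix.
_REGISTRY = {
    "relay._make.Var": lambda name: ("Var", name),
    "relay._make.Call": lambda op, args: ("Call", op, args),
}

def init_api(prefix, mod):
    # Attach every registered function under `prefix` to the module,
    # stripping the prefix from the attribute name.
    for full_name, fn in _REGISTRY.items():
        if full_name.startswith(prefix + "."):
            setattr(mod, full_name[len(prefix) + 1:], fn)

m = types.ModuleType("fake_relay_make")
init_api("relay._make", m)
assert m.Var("x") == ("Var", "x")
```

The real implementation resolves each name to a C++ packed function at import time; the sketch only mirrors the attribute-injection step.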
tvm
tvm-main/python/tvm/relay/data_dep_optimization/simplify_fc_transpose.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-argument, not-context-manager
"""Automatically optimize fc transpose"""
import numpy as np

import tvm
from tvm import relay
from tvm.relay.analysis import search_fc_transpose

from .utils import _run_opt_pass


def convert(func, params):
    """Convert all ```y = nn.dense(x, transpose(w, [1, 0]))``` to
    ```y = nn.dense(x, wt)```

    Parameters
    ----------
    func : relay.Expr
        Expr will be optimized
    params : Dict[String, tvm.nd.array]
        Parameters of Expr

    Returns
    -------
    new_func : relay.Expr
        Mutated Expr from ```y = nn.dense(x, transpose(w, [1, 0]))``` to
        ```y = nn.dense(x, wt)```
    params : Dict[String, tvm.nd.array]
        Parameters of mutated Expr, with weights pre-transposed
    """
    weight_info = search_fc_transpose(func)
    for item in weight_info:
        name = str(item)
        w_np = params[name].numpy()
        new_w = np.transpose(w_np, axes=[1, 0])
        params[name + ".T"] = tvm.nd.array(new_w)
        del params[name]
    new_func = _run_opt_pass(
        func,
        relay.transform.SimplifyFCTranspose(
            weight_info,
        ),
    )
    return new_func, params
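The pass is sound because pre-transposing the weight offline yields the same result as transposing it at run time: `nn.dense` contracts over the last axis of both operands. A small pure-Python check of that equivalence (`transpose` and `dense` here are toy stand-ins for the Relay ops):

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def dense(x, w):
    # y[i][j] = sum_k x[i][k] * w[j][k] -- like Relay's nn.dense, this
    # contracts over the last axis of both operands.
    return [[sum(a * b for a, b in zip(row, wj)) for wj in w] for row in x]

x = [[1, 2, 3]]
w = [[1, 0], [0, 1], [1, 1]]          # shape (3, 2)

# Before the pass: the transpose happens at run time.
before = dense(x, transpose(w))
# After the pass: the transpose is folded into the parameter dict.
params = {"w.T": transpose(w)}
after = dense(x, params["w.T"])
assert before == after
```

Because the transposed weight is stored once under the `".T"` key, the runtime `transpose` op disappears from the graph entirely.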
tvm
tvm-main/python/tvm/relay/data_dep_optimization/bsr_conv2d.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-argument, not-context-manager
"""Automatically convert a model from dense to block sparse"""

from tvm import relay
from tvm.relay.analysis.sparse_conv2d import process_params

from .utils import _run_opt_pass


def convert(func, params, blocksize, sparsity_threshold, layout="NHWC", kernel_size=1):
    """Convert a conv2d func and corresponding parameters to block sparse

    Parameters
    ----------
    func : relay.Expr
        Expr will be optimized to sparse operation
    params : Dict[String, tvm.nd.array]
        Parameters of the Expr
    blocksize : Tuple(int, int)
        Blocksize for BSR matrix
    sparsity_threshold : float
        Minimal sparsity requirement for converting.
        If weight sparsity is lower than this threshold,
        the dense operation will be kept.
    layout : str
        layout of network
    kernel_size : int
        kernel size of the conv2d, for filtering

    Returns
    -------
    new_func : relay.Expr
        Mutated Expr with sparse operations
    params : Dict[String, tvm.nd.array]
        New params with BSR matrix for mutated Expr
    """
    weight_info = process_params(func, params, blocksize, sparsity_threshold, layout, kernel_size)
    new_func = _run_opt_pass(
        func,
        relay.transform.Conv2dToSparse(
            weight_info.weight_name, weight_info.weight_shape, layout, kernel_size
        ),
    )
    return new_func, params


def convert2(func, params, blocksize, sparsity_threshold, layout, kernel_size):
    """Convert a frozen conv2d func to block sparse

    Parameters
    ----------
    func : relay.Expr
        Expr will be optimized to sparse operation, with params frozen
    params : Dict[String, tvm.nd.array]
        Parameters of the Expr (not used in this pass)
    blocksize : Tuple(int, int)
        Blocksize for BSR matrix
    sparsity_threshold : float
        Minimal sparsity requirement for converting.
        If weight sparsity is lower than this threshold,
        the dense operation will be kept.
    layout : str
        layout of network
    kernel_size : int
        kernel size of the conv2d, for filtering

    Returns
    -------
    new_func : relay.Expr
        Mutated Expr with sparse operations
    params : Dict[String, tvm.nd.array]
        New params with BSR matrix for mutated Expr (not modified)
    """
    new_func = _run_opt_pass(
        func, relay.transform.Conv2dToSparse2(layout, kernel_size, blocksize, sparsity_threshold)
    )
    return new_func, params
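Whether a weight is converted hinges on its measured block sparsity compared against `sparsity_threshold`. A minimal sketch of that measurement on a plain list-of-lists matrix (`block_sparsity` is an illustrative helper, not the TVM implementation):

```python
def block_sparsity(weight, blocksize):
    # Fraction of (bs_r x bs_c) blocks that are entirely zero.
    bs_r, bs_c = blocksize
    rows, cols = len(weight), len(weight[0])
    total = zero = 0
    for r in range(0, rows, bs_r):
        for c in range(0, cols, bs_c):
            total += 1
            block = [weight[r + i][c + j] for i in range(bs_r) for j in range(bs_c)]
            if all(v == 0 for v in block):
                zero += 1
    return zero / total

w = [
    [0, 0, 1, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
sparsity = block_sparsity(w, (2, 2))   # 3 of the 4 blocks are all-zero
assert sparsity == 0.75

sparsity_threshold = 0.6
assert sparsity >= sparsity_threshold  # convert; below threshold, keep dense
```

Note that block sparsity can be much lower than element sparsity: a matrix that is 75% zeros element-wise may have no all-zero blocks at all, in which case BSR storage buys nothing.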
tvm
tvm-main/python/tvm/relay/data_dep_optimization/bsr_dense.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-argument, not-context-manager
"""Automatically convert a model from dense to block sparse"""

from tvm import relay
from tvm.relay.analysis.sparse_dense import process_params

from .utils import _run_opt_pass


def convert(func, params, blocksize, sparsity_threshold):
    """Convert a dense func and corresponding parameters to block sparse

    Parameters
    ----------
    func : relay.Expr
        Expr will be optimized to sparse operation
    params : Dict[String, tvm.nd.array]
        Parameters of the Expr
    blocksize : Tuple(int, int)
        Blocksize for BSR matrix
    sparsity_threshold : float
        Minimal sparsity requirement for converting.
        If weight sparsity is lower than this threshold,
        the dense operation will be kept.

    Returns
    -------
    new_func : relay.Expr
        Mutated Expr with sparse operations
    params : Dict[String, tvm.nd.array]
        New params with BSR matrix for mutated Expr
    """
    weight_info = process_params(func, params, blocksize, sparsity_threshold)
    new_func = _run_opt_pass(
        func, relay.transform.DenseToSparse(weight_info.weight_name, weight_info.weight_shape)
    )
    return new_func, params
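`process_params` stores qualifying weights in BSR form: the non-zero blocks plus `indices`/`indptr` arrays describing where they sit. A stdlib-only sketch of the dense-to-BSR conversion (`dense_to_bsr` is a hypothetical illustration, not TVM's converter):

```python
def dense_to_bsr(matrix, bs_r, bs_c):
    # Convert a dense matrix (list of lists) to BSR triplets:
    #   data    - the non-zero blocks, in block-row order
    #   indices - column-block index of each stored block
    #   indptr  - offsets into data delimiting each block row
    data, indices, indptr = [], [], [0]
    for r in range(0, len(matrix), bs_r):
        for c in range(0, len(matrix[0]), bs_c):
            block = [row[c:c + bs_c] for row in matrix[r:r + bs_r]]
            if any(v != 0 for row in block for v in row):
                data.append(block)
                indices.append(c // bs_c)
        indptr.append(len(data))
    return data, indices, indptr

m = [
    [1, 2, 0, 0],
    [3, 4, 0, 0],
    [0, 0, 0, 5],
    [0, 0, 0, 6],
]
data, indices, indptr = dense_to_bsr(m, 2, 2)
assert indices == [0, 1]
assert indptr == [0, 1, 2]
assert data[1] == [[0, 5], [0, 6]]
```

This is the same layout `scipy.sparse.bsr_matrix` uses, which is why the converted weights can be fed straight into block-sparse dense kernels.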
tvm
tvm-main/python/tvm/relay/data_dep_optimization/utils.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-argument, not-context-manager
"""Utility functions for optimizations"""

import tvm


def _run_opt_pass(expr, opt_pass):
    """Helper function to run a pass

    Parameters
    ----------
    expr : relay.Expr
        Expr will be optimized
    opt_pass : relay.Pass
        Optimization pass

    Returns
    -------
    ret : relay.Expr
        Optimized Expr produced by running opt_pass
    """
    assert isinstance(opt_pass, tvm.transform.Pass)
    mod = tvm.IRModule.from_expr(expr)
    mod = opt_pass(mod)
    return mod["main"]
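The helper's pattern, wrap a bare expression in a single-entry module, run the module-level pass, then extract `"main"`, can be mimicked without TVM. A toy sketch of the same shape (`fold_constants` is a made-up pass for illustration, and the module is just a dict):

```python
def run_opt_pass(expr, opt_pass):
    # Mirror of the helper: wrap the expression in a one-entry module,
    # run the module-level pass, and pull "main" back out.
    mod = {"main": expr}
    mod = opt_pass(mod)
    return mod["main"]

def fold_constants(mod):
    # A toy module-level pass: evaluate fully-constant expressions.
    return {name: eval(e) if isinstance(e, str) else e for name, e in mod.items()}

assert run_opt_pass("2 + 3", fold_constants) == 5
```

The indirection exists because TVM passes are defined over modules, not bare expressions, so any expression-level caller has to round-trip through `IRModule`.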
tvm
tvm-main/python/tvm/relay/data_dep_optimization/__init__.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=unused-argument, not-context-manager
"""Optimizations that involve changing model parameters"""

from . import bsr_dense
from . import simplify_fc_transpose
from . import bsr_conv2d
tvm
tvm-main/python/tvm/relay/testing/lstm.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """ Implementation of a Long Short-Term Memory (LSTM) cell. Adapted from: https://gist.github.com/merrymercy/5eb24e3b019f84200645bd001e9caae9 """ from tvm import relay from . import layers from .init import create_workload def lstm_cell(num_hidden, batch_size=1, dtype="float32", name=""): """Long-Short Term Memory (LSTM) network cell. Parameters ---------- num_hidden : int Number of units in output symbol. batch_size : int Batch size (length of states). Returns ------- result : tvm.relay.Function A Relay function that evaluates an LSTM cell. The function takes in a tensor of input data, a tuple of two states, and weights and biases for dense operations on the inputs and on the state. It returns a tuple with two members, an output tensor and a tuple of two new states. 
""" builder = relay.ScopeBuilder() input_type = relay.TensorType((batch_size, num_hidden), dtype) weight_type = relay.TensorType((4 * num_hidden, num_hidden), dtype) bias_type = relay.TensorType((4 * num_hidden,), dtype) dense_type = relay.TensorType((batch_size, 4 * num_hidden), dtype) slice_type = relay.TupleType([input_type, input_type, input_type, input_type]) ret_type = relay.TupleType([input_type, relay.TupleType([input_type, input_type])]) inputs = relay.Var("inputs", input_type) states = relay.Var("states", relay.TupleType([input_type, input_type])) i2h_weight = relay.Var("i2h_weight", weight_type) i2h_bias = relay.Var("i2h_bias", bias_type) h2h_weight = relay.Var("h2h_weight", weight_type) h2h_bias = relay.Var("h2h_bias", bias_type) i2h = builder.let( ("i2h", dense_type), layers.dense_add_bias( data=inputs, units=num_hidden * 4, weight=i2h_weight, bias=i2h_bias, name=f"{name}i2h" ), ) h2h = builder.let( ("h2h", dense_type), layers.dense_add_bias( data=relay.TupleGetItem(states, 0), units=num_hidden * 4, weight=h2h_weight, bias=h2h_bias, name=f"{name}h2h", ), ) gates = builder.let(("gates", dense_type), relay.add(i2h, h2h)) slice_gates = builder.let( ("slice_gates", slice_type), relay.split(gates, indices_or_sections=4, axis=1).astuple() ) in_gate = builder.let( ("in_gate", input_type), relay.sigmoid(relay.TupleGetItem(slice_gates, 0)) ) forget_gate = builder.let( ("forget_gate", input_type), relay.sigmoid(relay.TupleGetItem(slice_gates, 1)) ) in_transform = builder.let( ("in_transform", input_type), relay.tanh(relay.TupleGetItem(slice_gates, 2)) ) out_gate = builder.let( ("out_gate", input_type), relay.sigmoid(relay.TupleGetItem(slice_gates, 3)) ) next_c = builder.let( ("next_c", input_type), relay.add( relay.multiply(forget_gate, relay.TupleGetItem(states, 1)), relay.multiply(in_gate, in_transform), ), ) next_h = builder.let(("next_h", input_type), relay.multiply(out_gate, relay.tanh(next_c))) ret = builder.let(("ret", ret_type), relay.Tuple([next_h, 
relay.Tuple([next_h, next_c])])) builder.ret(ret) body = builder.get() return relay.Function( [inputs, states, i2h_weight, i2h_bias, h2h_weight, h2h_bias], body, ret_type ) def get_net(iterations, num_hidden, batch_size=1, dtype="float32"): """Constructs an unrolled RNN with LSTM cells""" input_type = relay.TensorType((batch_size, num_hidden), dtype) weight_type = relay.TensorType((4 * num_hidden, num_hidden), dtype) bias_type = relay.TensorType((4 * num_hidden,), dtype) state_type = relay.TupleType([input_type, input_type]) cell_type = relay.TupleType([input_type, state_type]) builder = relay.ScopeBuilder() zeros = builder.let(("zeros", input_type), relay.zeros((batch_size, num_hidden), dtype)) init_states = builder.let(("init_states", state_type), relay.Tuple([zeros, zeros])) states = init_states out = None for i in range(iterations): inputs = relay.Var("data", input_type) i2h_weight = relay.Var(f"i2h_{i}_weight", weight_type) i2h_bias = relay.Var(f"i2h_{i}_bias", bias_type) h2h_weight = relay.Var(f"h2h_{i}_weight", weight_type) h2h_bias = relay.Var(f"h2h_{i}_bias", bias_type) cell_fn = lstm_cell(num_hidden, batch_size, dtype, f"lstm_{i}") call = builder.let( (f"call_{i}", cell_type), relay.Call(cell_fn, [inputs, states, i2h_weight, i2h_bias, h2h_weight, h2h_bias]), ) new_out = builder.let((f"out_{i}", input_type), relay.TupleGetItem(call, 0)) new_states = builder.let((f"states_{i}", state_type), relay.TupleGetItem(call, 1)) states = new_states out = new_out builder.ret(out) body = builder.get() args = relay.analysis.free_vars(body) return relay.Function(args, body, input_type) def get_workload(iterations, num_hidden, batch_size=1, dtype="float32"): """Get benchmark workload for an LSTM RNN. Parameters ---------- iterations : int The number of iterations in the desired LSTM RNN. 
num_hidden : int The size of the hidden state batch_size : int, optional (default 1) The batch size used in the model dtype : str, optional (default "float32") The data type Returns ------- mod : tvm.IRModule The relay module that contains an LSTM network. params : dict of str to NDArray The parameters. """ net = get_net(iterations, num_hidden, batch_size, dtype) return create_workload(net)
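For reference, the cell assembled above computes the standard LSTM equations: the four gate pre-activations come from splitting `i2h + h2h` along axis 1, then `next_c = forget * c + in * in_transform` and `next_h = out * tanh(next_c)`. A scalar (`num_hidden = 1`) sketch of one step, with illustrative weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h, c, i2h_w, i2h_b, h2h_w, h2h_b):
    # Scalar version of the cell built above. The four gate
    # pre-activations mirror the axis-1 split of `gates`.
    gates = [x * wi + bi + h * wh + bh
             for wi, bi, wh, bh in zip(i2h_w, i2h_b, h2h_w, h2h_b)]
    in_gate = sigmoid(gates[0])
    forget_gate = sigmoid(gates[1])
    in_transform = math.tanh(gates[2])
    out_gate = sigmoid(gates[3])
    next_c = forget_gate * c + in_gate * in_transform
    next_h = out_gate * math.tanh(next_c)
    return next_h, next_c

h, c = lstm_cell_step(1.0, 0.0, 0.0, [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0])
assert 0.0 < h < 1.0 and 0.0 < c < 1.0
```

With zero initial state and all gate pre-activations equal to 1, `next_c` is `sigmoid(1) * tanh(1)`, which matches what the Relay construction produces symbolically.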
tvm
tvm-main/python/tvm/relay/testing/byoc.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Defines test utilities useful for testing BYOC flows."""

from tvm import relay
from tvm.relay.expr_functor import ExprMutator
from tvm.relay.op.annotation import compiler_begin, compiler_end


class CcompilerAnnotator(ExprMutator):
    """
    This is used to create external functions for ccompiler.

    A simple annotator that creates the following program:
           |
      -- begin --
           |
          add
           |
       subtract
           |
       multiply
           |
       -- end --
           |
    """

    def __init__(self):
        super(CcompilerAnnotator, self).__init__()
        self.in_compiler = 0

    def visit_call(self, call):
        if call.op.name == "add":  # Annotate begin at args
            if self.in_compiler == 1:
                lhs = compiler_begin(super().visit(call.args[0]), "ccompiler")
                rhs = compiler_begin(super().visit(call.args[1]), "ccompiler")
                op = relay.add(lhs, rhs)
                self.in_compiler = 2
                return op
        elif call.op.name == "subtract":
            if self.in_compiler == 1:
                lhs = super().visit(call.args[0])
                rhs = super().visit(call.args[1])
                if isinstance(lhs, relay.expr.Var):
                    lhs = compiler_begin(lhs, "ccompiler")
                if isinstance(rhs, relay.expr.Var):
                    rhs = compiler_begin(rhs, "ccompiler")
                return relay.subtract(lhs, rhs)
        elif call.op.name == "multiply":  # Annotate end at output
            self.in_compiler = 1
            lhs = super().visit(call.args[0])
            rhs = super().visit(call.args[1])
            if isinstance(lhs, relay.expr.Var):
                lhs = compiler_begin(lhs, "ccompiler")
            if isinstance(rhs, relay.expr.Var):
                rhs = compiler_begin(rhs, "ccompiler")
            op = relay.multiply(lhs, rhs)
            if self.in_compiler == 2:
                op = compiler_end(op, "ccompiler")
            self.in_compiler = 0
            return op
        return super().visit_call(call)
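The annotator's net effect is to wrap the region's free-variable inputs in `compiler_begin` and the region's final output in `compiler_end`. A simplified sketch of that idea on a toy tuple-based expression tree (here `annotate` marks every leaf and the root directly, rather than tracking the add/subtract/multiply state machine):

```python
def annotate(expr, target="ccompiler"):
    # expr is either a variable name (str) or a tuple (op, lhs, rhs).
    def visit(e):
        if isinstance(e, str):                  # a free variable: region entry
            return ("begin", target, e)
        op, lhs, rhs = e
        return (op, visit(lhs), visit(rhs))
    return ("end", target, visit(expr))         # region exit at the output

prog = ("multiply", ("subtract", ("add", "a", "b"), "c"), "d")
out = annotate(prog)
assert out[0] == "end"
assert out[2][2] == ("begin", "ccompiler", "d")
```

A downstream partitioning step can then cut the graph at the begin/end markers and hand everything between them to the external `ccompiler` codegen.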
tvm
tvm-main/python/tvm/relay/testing/nat.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name """Defines a unary natural number (Peano natural number) abstract data type for Relay and provides some utility functions for it. Nats are useful for testing purposes, as they make it easy to write test cases for recursion and pattern matching.""" from tvm.relay.backend.interpreter import ConstructorValue def get_type(prelude, name): ty_var = prelude.mod.get_global_type_var(name) ty_data = prelude.mod.type_definitions[ty_var] return tuple([ty_var] + list(ty_data.constructors)) def count(prelude, n): """Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer. This is an example of using an ADT value in Python. """ assert isinstance(n, ConstructorValue) _, z, s = prelude.mod.get_type("nat") if n.tag == z.tag: return 0 assert n.tag == s.tag return 1 + count(prelude, n.fields[0]) def make_nat_value(prelude, n): """The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat. 
""" _, z, s = prelude.mod.get_type("nat") if n == 0: return ConstructorValue(z.tag, [], z) return ConstructorValue(s.tag, [make_nat_value(prelude, n - 1)], s) def make_nat_expr(prelude, n): """Given a non-negative Python integer, constructs a Python expression representing that integer's value as a nat. """ assert n >= 0 _, z, s = prelude.mod.get_type("nat") ret = z() while n > 0: ret = s(ret) n = n - 1 return ret
tvm
tvm-main/python/tvm/relay/testing/tf.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # pylint: disable=invalid-name, unused-variable, unused-argument, no-init, import-outside-toplevel """ Tensorflow Model Helpers ======================== Some helper definitions for tensorflow models. """ import re import os.path import collections import numpy as np # Tensorflow imports import tensorflow as tf from tensorflow.core.framework import graph_pb2 import tvm from tvm.contrib.download import download_testdata try: tf_compat_v1 = tf.compat.v1 except (ImportError, AttributeError): tf_compat_v1 = tf ###################################################################### # Some helper functions # --------------------- def ProcessGraphDefParam(graph_def): """Type-checks and possibly canonicalizes `graph_def`. Parameters ---------- graph_def : Obj tensorflow graph definition. 
Returns ------- graph_def : Obj tensorflow graph definition """ if not isinstance(graph_def, graph_pb2.GraphDef): # `graph_def` could be a dynamically-created message, so try a duck-typed # approach try: old_graph_def = graph_def graph_def = graph_pb2.GraphDef() graph_def.MergeFrom(old_graph_def) except TypeError: raise TypeError("graph_def must be a GraphDef proto.") return graph_def def convert_to_list(x): if not isinstance(x, list): x = [x] return x def vmobj_to_list(o): """Converts TVM objects returned by VM execution to Python List. Parameters ---------- o : Obj VM Object as output from VM runtime executor. Returns ------- result : list Numpy objects as list with equivalent values to the input object. """ if isinstance(o, tvm.nd.NDArray): result = [o.numpy()] elif isinstance(o, tvm.runtime.container.ADT): result = [] for f in o: result.extend(vmobj_to_list(f)) elif isinstance(o, tvm.relay.backend.interpreter.ConstructorValue): if o.constructor.name_hint == "Cons": tl = vmobj_to_list(o.fields[1]) hd = vmobj_to_list(o.fields[0]) hd.extend(tl) result = hd elif o.constructor.name_hint == "Nil": result = [] elif "tensor_nil" in o.constructor.name_hint: result = [0] elif "tensor" in o.constructor.name_hint: result = [o.fields[0].numpy()] else: raise RuntimeError(f"Unknown object type: {o.constructor.name_hint}") else: raise RuntimeError(f"Unknown object type: {type(o)}") return result def AddShapesToGraphDef(session, out_node): """Add shapes attribute to nodes of the graph. Input graph here is the default graph in context. Parameters ---------- session : tf.Session Tensorflow session out_node : String or List Final output node of the graph. Returns ------- graph_def : Obj tensorflow graph definition with shapes attribute added to nodes. 
""" graph_def = tf_compat_v1.graph_util.convert_variables_to_constants( session, session.graph.as_graph_def(add_shapes=True), convert_to_list(out_node) ) return graph_def class NodeLookup(object): """Converts integer node ID's to human readable labels.""" def __init__(self, label_lookup_path=None, uid_lookup_path=None): self.node_lookup = self.load(label_lookup_path, uid_lookup_path) def load(self, label_lookup_path, uid_lookup_path): """Loads a human readable English name for each softmax node. Parameters ---------- label_lookup_path: String File containing String UID to integer node ID mapping . uid_lookup_path: String File containing String UID to human-readable string mapping. Returns ------- node_id_to_name: dict dict from integer node ID to human-readable string. """ if not tf_compat_v1.gfile.Exists(uid_lookup_path): tf.logging.fatal("File does not exist %s", uid_lookup_path) if not tf_compat_v1.gfile.Exists(label_lookup_path): tf.logging.fatal("File does not exist %s", label_lookup_path) # Loads mapping from string UID to human-readable string proto_as_ascii_lines = tf_compat_v1.gfile.GFile(uid_lookup_path).readlines() uid_to_human = {} p = re.compile(r"[n\d]*[ \S,]*") for line in proto_as_ascii_lines: parsed_items = p.findall(line) uid = parsed_items[0] human_string = parsed_items[2] uid_to_human[uid] = human_string # Loads mapping from string UID to integer node ID. 
node_id_to_uid = {} proto_as_ascii = tf_compat_v1.gfile.GFile(label_lookup_path).readlines() for line in proto_as_ascii: if line.startswith(" target_class:"): target_class = int(line.split(": ")[1]) if line.startswith(" target_class_string:"): target_class_string = line.split(": ")[1] node_id_to_uid[target_class] = target_class_string[1:-2] # Loads the final mapping of integer node ID to human-readable string node_id_to_name = {} for key, val in node_id_to_uid.items(): if val not in uid_to_human: tf.logging.fatal("Failed to locate: %s", val) name = uid_to_human[val] node_id_to_name[key] = name return node_id_to_name def id_to_string(self, node_id): if node_id not in self.node_lookup: return "" return self.node_lookup[node_id] def get_workload_official(model_url, model_sub_path, retries=5): """Import workload from tensorflow official Parameters ---------- model_url: str URL from where it will be downloaded. model_sub_path: Sub path in extracted tar for the frozen protobuf file. retries: int The number of retries to attempt downloading and uncompressing the model in the CI, due to possible network and CI node issues. 
Returns ------- model_path: str Full path to saved model file """ attempts = retries + 1 error = None for current_attempt_idx in range(attempts): try: model_tar_name = os.path.basename(model_url) model_path = download_testdata(model_url, model_tar_name, module=["tf", "official"]) dir_path = os.path.dirname(model_path) if model_path.endswith("tgz") or model_path.endswith("gz"): import tarfile tar = tarfile.open(model_path) tar.extractall(path=dir_path) tar.close() elif model_path.endswith("zip"): import zipfile zip_object = zipfile.ZipFile(model_path) zip_object.extractall(path=dir_path) zip_object.close() else: raise RuntimeError("Could not decompress the file: " + model_path) return os.path.join(dir_path, model_sub_path) except (EOFError, RuntimeError) as err: error = err print(f"Raised : {str(error)}, current attempt : {current_attempt_idx} ...") raise error def get_workload(model_path, model_sub_path=None, inputs_dict=None, output=None): """Import workload from frozen protobuf Parameters ---------- model_path: str model_path on remote repository to download from. model_sub_path: str Model path in the compressed archive. Returns ------- graph_def: graphdef graph_def is the tensorflow workload. """ if model_sub_path: path_model = get_workload_official(model_path, model_sub_path) else: repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/" model_url = os.path.join(repo_base, model_path) path_model = download_testdata(model_url, model_path, module="tf") # Creates graph from saved graph_def.pb. 
with tf_compat_v1.gfile.FastGFile(path_model, "rb") as f: graph_def = tf_compat_v1.GraphDef() graph_def.ParseFromString(f.read()) graph = tf_compat_v1.import_graph_def(graph_def, name="", input_map=inputs_dict) if inputs_dict is not None: # graph is changed so generate graph_def again with tf_compat_v1.Session(graph=graph) as sess: graph_def = AddShapesToGraphDef(sess, output) return graph_def ####################################################################### # PTB LSTMBlockCell Model # ----------------------- class PTBSmallConfig(object): """Small config. These configurations are used when training the model """ num_layers = 2 num_steps = 1 hidden_size = 200 batch_size = 1 vocab_size = 10000 init_scale = 0.1 def get_config(): """Configuration used for training the model""" return PTBSmallConfig() def pick_from_weight(weight, pows=1.0): """Identify token from Softmax output. This token will be mapped to word in the vocabulary. """ weight = weight**pows t = np.cumsum(weight) s = np.sum(weight) return int(np.searchsorted(t, 0.5 * s)) def do_tf_sample(session, data, in_states, num_samples): """Sampled from the model""" samples = [] sample = None # Cell inputs c and h should be passed for each layer explicitly. state_input_name = [ "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState/zeros:0", "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState/zeros_1:0", "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState_1/zeros:0", "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState_1/zeros_1:0", ] state = in_states # Graph nodes to be fetched as run output. Tensorflow LSTMBlockCell creates internal # nodes for intermediate operations (gates) in the cell during run. # Cell state (c) is ':1' and cell output (h) is ':6' for each layer. 
fetches = [ [ "Model/RNN/RNN/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell:1", "Model/RNN/RNN/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell:6", "Model/RNN/RNN/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_1:1", "Model/RNN/RNN/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_1:6", ], "Model/Softmax:0", ] def _get_feed_dict(input_name, input_data): """Create feed dict""" feed_dict = {} if isinstance(input_data, list): for i, e in enumerate(input_name): feed_dict[e] = input_data[i] else: feed_dict[input_name] = input_data return feed_dict for x in data: feed_dict = _get_feed_dict(state_input_name, state) feed_dict["Model/Placeholder:0"] = [[x]] state, probs = session.run(fetches, feed_dict) sample = pick_from_weight(probs[0]) if sample is not None: samples.append(sample) else: samples.append(0) k = 1 while k < num_samples: feed_dict = _get_feed_dict(state_input_name, state) feed_dict["Model/Placeholder:0"] = [[samples[-1]]] state, probs = session.run(fetches, feed_dict) sample = pick_from_weight(probs[0]) samples.append(sample) k += 1 return samples, state def _create_ptb_vocabulary(data_dir): """Read the PTB sample data input to create vocabulary""" data_path = os.path.join(data_dir, "simple-examples/data/") file_name = "ptb.train.txt" def _read_words(filename): """Read the data for creating vocabulary""" with tf_compat_v1.gfile.GFile(filename, "r") as f: return f.read().encode("utf-8").decode("utf-8").replace("\n", "<eos>").split() def _build_vocab(filename): """Create vocabulary""" data = _read_words(filename) counter = collections.Counter(data) count_pairs = sorted(counter.items(), key=lambda x: (-x[1], x[0])) words, _ = list(zip(*count_pairs)) word_to_id = dict(zip(words, range(len(words)))) # for python 3.x id_to_word = dict((v, k) for k, v in word_to_id.items()) return word_to_id, id_to_word def ptb_raw_data(data_path, file_name): """Read the sample data and create vocabulary""" train_path = os.path.join(data_path, file_name) word_to_id, id_2_word = 
_build_vocab(train_path) return word_to_id, id_2_word return ptb_raw_data(data_path, file_name) def get_workload_ptb(): """Import ptb workload from frozen protobuf Parameters ---------- Nothing. Returns ------- graph_def: graphdef graph_def is the tensorflow workload for ptb. word_to_id : dict English word to integer id mapping id_to_word : dict Integer id to English word mapping """ sample_repo = "http://www.fit.vutbr.cz/~imikolov/rnnlm/" sample_data_file = "simple-examples.tgz" sample_url = sample_repo + sample_data_file ptb_model_file = "RNN/ptb/ptb_model_with_lstmblockcell.pb" # pylint: disable=import-outside-toplevel import tarfile file_path = download_testdata(sample_url, sample_data_file, module=["data", "ptb_data"]) dir_path = os.path.dirname(file_path) t = tarfile.open(file_path, "r") t.extractall(dir_path) word_to_id, id_to_word = _create_ptb_vocabulary(dir_path) dtype = "float32" shape = (1, 200) # Convert states of LSTMBlockCell to placeholder, so TVM can feed data state_name = [ "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState/zeros:0", "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState/zeros_1:0", "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState_1/zeros:0", "Model/MultiRNNCellZeroState/LSTMBlockCellZeroState_1/zeros_1:0", ] inputs_dict = { state_name[0]: tf_compat_v1.placeholder(dtype, shape, state_name[0].split(":")[0]), state_name[1]: tf_compat_v1.placeholder(dtype, shape, state_name[1].split(":")[0]), state_name[2]: tf_compat_v1.placeholder(dtype, shape, state_name[2].split(":")[0]), state_name[3]: tf_compat_v1.placeholder(dtype, shape, state_name[3].split(":")[0]), } return ( word_to_id, id_to_word, get_workload(ptb_model_file, inputs_dict=inputs_dict, output="Model/Softmax"), )
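The `pick_from_weight` helper above is pure NumPy, so its behavior can be checked in isolation: it selects the index at which the cumulative probability mass first reaches half of the total. A minimal sketch (a standalone copy of the helper, not an import from TVM):

```python
import numpy as np


def pick_from_weight(weight, pows=1.0):
    # Standalone copy of the helper defined above: returns the index where
    # the cumulative mass first reaches half of the total mass.
    weight = weight**pows
    t = np.cumsum(weight)
    s = np.sum(weight)
    return int(np.searchsorted(t, 0.5 * s))


# A distribution heavily skewed toward index 2: cumsum is
# [0.05, 0.10, 0.90, 1.00], and 0.5 is first reached at index 2.
probs = np.array([0.05, 0.05, 0.8, 0.1])
print(pick_from_weight(probs))  # -> 2
```

Raising `pows` above 1.0 sharpens the distribution before sampling, which biases the pick further toward the dominant token.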
tvm
tvm-main/python/tvm/relay/testing/resnet.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Adapted from https://github.com/tornadomeet/ResNet/blob/master/symbol_resnet.py
Original author Wei Wu

Implemented the following paper:

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Identity Mappings in Deep Residual Networks"
"""
# pylint: disable=unused-argument
from tvm import relay

from .init import create_workload
from . import layers


def residual_unit(
    data,
    num_filter,
    stride,
    dim_match,
    name,
    bottle_neck=True,
    data_layout="NCHW",
    kernel_layout="IOHW",
):
    """Return ResNet Unit symbol for building ResNet

    Parameters
    ----------
    data : str
        Input data

    num_filter : int
        Number of output channels

    stride : tuple
        Stride used in convolution

    dim_match : bool
        True means channel number between input and output is the same,
        otherwise means differ

    name : str
        Base name of the operators

    bottle_neck : bool
        Whether to use the bottleneck (1x1-3x3-1x1) transformation
    """
    bn_axis = data_layout.index("C")
    if bottle_neck:
        bn1 = layers.batch_norm_infer(data=data, epsilon=2e-5, axis=bn_axis, name=name + "_bn1")
        act1 = relay.nn.relu(data=bn1)
        conv1 = layers.conv2d(
            data=act1,
            channels=int(num_filter * 0.25),
            kernel_size=(1, 1),
            strides=stride,
            padding=(0, 0),
            name=name + "_conv1",
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
        bn2 = layers.batch_norm_infer(data=conv1, epsilon=2e-5, axis=bn_axis, name=name + "_bn2")
        act2 = relay.nn.relu(data=bn2)
        conv2 = layers.conv2d(
            data=act2,
            channels=int(num_filter * 0.25),
            kernel_size=(3, 3),
            strides=(1, 1),
            padding=(1, 1),
            name=name + "_conv2",
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
        bn3 = layers.batch_norm_infer(data=conv2, epsilon=2e-5, axis=bn_axis, name=name + "_bn3")
        act3 = relay.nn.relu(data=bn3)
        conv3 = layers.conv2d(
            data=act3,
            channels=num_filter,
            kernel_size=(1, 1),
            strides=(1, 1),
            padding=(0, 0),
            name=name + "_conv3",
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
        if dim_match:
            shortcut = data
        else:
            shortcut = layers.conv2d(
                data=act1,
                channels=num_filter,
                kernel_size=(1, 1),
                strides=stride,
                name=name + "_sc",
                data_layout=data_layout,
                kernel_layout=kernel_layout,
            )
        return relay.add(conv3, shortcut)

    bn1 = layers.batch_norm_infer(data=data, epsilon=2e-5, axis=bn_axis, name=name + "_bn1")
    act1 = relay.nn.relu(data=bn1)
    conv1 = layers.conv2d(
        data=act1,
        channels=num_filter,
        kernel_size=(3, 3),
        strides=stride,
        padding=(1, 1),
        name=name + "_conv1",
        data_layout=data_layout,
        kernel_layout=kernel_layout,
    )
    bn2 = layers.batch_norm_infer(data=conv1, epsilon=2e-5, axis=bn_axis, name=name + "_bn2")
    act2 = relay.nn.relu(data=bn2)
    conv2 = layers.conv2d(
        data=act2,
        channels=num_filter,
        kernel_size=(3, 3),
        strides=(1, 1),
        padding=(1, 1),
        name=name + "_conv2",
        data_layout=data_layout,
        kernel_layout=kernel_layout,
    )
    if dim_match:
        shortcut = data
    else:
        shortcut = layers.conv2d(
            data=act1,
            channels=num_filter,
            kernel_size=(1, 1),
            strides=stride,
            name=name + "_sc",
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
    return relay.add(conv2, shortcut)


def resnet(
    units,
    num_stages,
    filter_list,
    num_classes,
    data_shape,
    bottle_neck=True,
    layout="NCHW",
    dtype="float32",
):
    """Return ResNet Program.

    Parameters
    ----------
    units : list
        Number of units in each stage

    num_stages : int
        Number of stages

    filter_list : list
        Channel size of each stage

    num_classes : int
        Output size of symbol

    data_shape : tuple of int.
        The shape of input data.

    bottle_neck : bool
        Whether apply bottleneck transformation.

    layout: str
        The data layout for conv2d

    dtype : str
        The global data type.
    """
    data_layout = layout
    kernel_layout = "OIHW" if layout == "NCHW" else "HWIO"
    bn_axis = data_layout.index("C")

    num_unit = len(units)
    assert num_unit == num_stages
    data = relay.var("data", shape=data_shape, dtype=dtype)
    data = layers.batch_norm_infer(
        data=data, epsilon=2e-5, axis=bn_axis, scale=False, name="bn_data"
    )
    (_, _, height, _) = data_shape
    if layout == "NHWC":
        (_, height, _, _) = data_shape
    if height <= 32:  # such as cifar10
        body = layers.conv2d(
            data=data,
            channels=filter_list[0],
            kernel_size=(3, 3),
            strides=(1, 1),
            padding=(1, 1),
            name="conv0",
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
    else:  # often expected to be 224 such as imagenet
        body = layers.conv2d(
            data=data,
            channels=filter_list[0],
            kernel_size=(7, 7),
            strides=(2, 2),
            padding=(3, 3),
            name="conv0",
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
        body = layers.batch_norm_infer(data=body, epsilon=2e-5, axis=bn_axis, name="bn0")
        body = relay.nn.relu(data=body)
        body = relay.nn.max_pool2d(
            data=body, pool_size=(3, 3), strides=(2, 2), padding=(1, 1), layout=data_layout
        )

    for i in range(num_stages):
        body = residual_unit(
            body,
            filter_list[i + 1],
            (1 if i == 0 else 2, 1 if i == 0 else 2),
            False,
            name=f"stage{i + 1}_unit1",
            bottle_neck=bottle_neck,
            data_layout=data_layout,
            kernel_layout=kernel_layout,
        )
        for j in range(units[i] - 1):
            body = residual_unit(
                body,
                filter_list[i + 1],
                (1, 1),
                True,
                name=f"stage{i + 1}_unit{j + 2}",
                bottle_neck=bottle_neck,
                data_layout=data_layout,
                kernel_layout=kernel_layout,
            )
    bn1 = layers.batch_norm_infer(data=body, epsilon=2e-5, axis=bn_axis, name="bn1")
    relu1 = relay.nn.relu(data=bn1)
    # Although kernel is not used here when global_pool=True, we should put one
    pool1 = relay.nn.global_avg_pool2d(data=relu1, layout=data_layout)
    flat = relay.nn.batch_flatten(data=pool1)
    fc1 = layers.dense_add_bias(data=flat, units=num_classes, name="fc1")
    net = relay.nn.softmax(data=fc1)
    return relay.Function(relay.analysis.free_vars(net), net)


def get_net(
    batch_size,
    num_classes,
    num_layers=50,
    image_shape=(3, 224, 224),
    layout="NCHW",
    dtype="float32",
    **kwargs,
):
    """
    Adapted from https://github.com/tornadomeet/ResNet/blob/master/train_resnet.py
    Original author Wei Wu
    """
    (_, height, _) = image_shape
    if layout == "NHWC":
        (height, _, _) = image_shape
    data_shape = (batch_size,) + image_shape
    if height <= 28:
        num_stages = 3
        if (num_layers - 2) % 9 == 0 and num_layers >= 164:
            per_unit = [(num_layers - 2) // 9]
            filter_list = [16, 64, 128, 256]
            bottle_neck = True
        elif (num_layers - 2) % 6 == 0 and num_layers < 164:
            per_unit = [(num_layers - 2) // 6]
            filter_list = [16, 16, 32, 64]
            bottle_neck = False
        else:
            raise ValueError(f"no experiments done on num_layers {num_layers}")
        units = per_unit * num_stages
    else:
        if num_layers >= 50:
            filter_list = [64, 256, 512, 1024, 2048]
            bottle_neck = True
        else:
            filter_list = [64, 64, 128, 256, 512]
            bottle_neck = False
        num_stages = 4
        if num_layers == 18:
            units = [2, 2, 2, 2]
        elif num_layers == 34:
            units = [3, 4, 6, 3]
        elif num_layers == 50:
            units = [3, 4, 6, 3]
        elif num_layers == 101:
            units = [3, 4, 23, 3]
        elif num_layers == 152:
            units = [3, 8, 36, 3]
        elif num_layers == 200:
            units = [3, 24, 36, 3]
        elif num_layers == 269:
            units = [3, 30, 48, 8]
        else:
            raise ValueError(f"no experiments done on num_layers {num_layers}")

    return resnet(
        units=units,
        num_stages=num_stages,
        filter_list=filter_list,
        num_classes=num_classes,
        data_shape=data_shape,
        bottle_neck=bottle_neck,
        layout=layout,
        dtype=dtype,
    )


def get_workload(
    batch_size=1,
    num_classes=1000,
    num_layers=18,
    image_shape=(3, 224, 224),
    layout="NCHW",
    dtype="float32",
    **kwargs,
):
    """Get benchmark workload for resnet

    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    num_classes : int, optional
        Number of classes

    num_layers : int, optional
        Number of layers

    image_shape : tuple, optional
        The input image shape

    layout: str
        The data layout for conv2d

    dtype : str, optional
        The data type

    kwargs : dict
        Extra arguments

    Returns
    -------
    mod : tvm.IRModule
        The relay module that contains a ResNet network.

    params : dict of str to NDArray
        The parameters.
    """
    net = get_net(
        batch_size=batch_size,
        num_classes=num_classes,
        num_layers=num_layers,
        image_shape=image_shape,
        dtype=dtype,
        layout=layout,
        **kwargs,
    )
    return create_workload(net)
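The depth names in `get_net` above (ResNet-18, -50, -152, ...) count weighted layers: for the ImageNet branch, each stage unit contributes 3 convs (bottleneck, depth >= 50) or 2 convs (basic unit, depth < 50), plus the stem conv and the final fully connected layer. A standalone sketch of that arithmetic (hypothetical helper names, using the same table as the source):

```python
# Subset of the depth -> stage-units table used by get_net above
# (ImageNet-style branch, height > 28).
RESNET_UNITS = {
    18: [2, 2, 2, 2],
    34: [3, 4, 6, 3],
    50: [3, 4, 6, 3],
    101: [3, 4, 23, 3],
    152: [3, 8, 36, 3],
    200: [3, 24, 36, 3],
    269: [3, 30, 48, 8],
}


def weighted_layers(num_layers):
    # Recompute the variant's depth from its per-stage unit counts:
    # 3 convs per bottleneck unit (depth >= 50) or 2 per basic unit,
    # plus the stem conv and the final fc.
    units = RESNET_UNITS[num_layers]
    convs_per_unit = 3 if num_layers >= 50 else 2
    return convs_per_unit * sum(units) + 2


print(weighted_layers(50))  # -> 50
```

Note that 18 and 34 share no table rows with 50 even though `units` for 34 and 50 coincide: the 34-layer variant uses basic units while the 50-layer one uses bottlenecks, which is exactly what the `bottle_neck` flag in `get_net` switches.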
tvm
tvm-main/python/tvm/relay/testing/squeezenet.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# coding: utf-8
# pylint: disable=unused-argument
"""
Symbol of SqueezeNet

Reference:
Iandola, Forrest N., et al. "Squeezenet: Alexnet-level accuracy with 50x fewer
parameters and <0.5 MB model size." (2016).
"""
from tvm import relay

from .init import create_workload
from . import layers


# Helpers
def _make_fire(net, squeeze_channels, expand1x1_channels, expand3x3_channels, prefix):
    net = _make_fire_conv(net, squeeze_channels, 1, 0, f"{prefix}_input")

    left = _make_fire_conv(net, expand1x1_channels, 1, 0, f"{prefix}_left")
    right = _make_fire_conv(net, expand3x3_channels, 3, 1, f"{prefix}_right")
    # NOTE : Assume NCHW layout here
    net = relay.concatenate((left, right), axis=1)
    return net


def _make_fire_conv(net, channels, kernel_size, padding=0, prefix=""):
    net = layers.conv2d(
        net,
        channels=channels,
        kernel_size=(kernel_size, kernel_size),
        padding=(padding, padding),
        name=f"{prefix}_conv",
    )
    net = relay.nn.bias_add(net, relay.var(f"{prefix}_conv_bias"))
    net = relay.nn.relu(net)
    return net


# Net
def get_net(batch_size, image_shape, num_classes, version, dtype):
    """Get symbol of SqueezeNet

    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    image_shape : tuple, optional
        The input image shape

    num_classes: int
        The number of classification results

    version : str, optional
        "1.0" or "1.1" of SqueezeNet
    """
    assert version in ["1.0", "1.1"], (
        f"Unsupported SqueezeNet version {version}: 1.0 or 1.1 expected"
    )
    data_shape = (batch_size,) + image_shape
    net = relay.var("data", shape=data_shape, dtype=dtype)
    if version == "1.0":
        net = layers.conv2d(
            net, channels=96, kernel_size=(7, 7), strides=(2, 2), padding=(3, 3), name="conv1"
        )
        net = relay.nn.bias_add(net, relay.var("conv1_bias"))
        net = relay.nn.relu(net)
        net = relay.nn.max_pool2d(net, pool_size=(3, 3), strides=(2, 2))
        net = _make_fire(net, 16, 64, 64, "fire1")
        net = _make_fire(net, 16, 64, 64, "fire2")
        net = _make_fire(net, 32, 128, 128, "fire3")
        net = relay.nn.max_pool2d(net, pool_size=(3, 3), strides=(2, 2))
        net = _make_fire(net, 32, 128, 128, "fire4")
        net = _make_fire(net, 48, 192, 192, "fire5")
        net = _make_fire(net, 48, 192, 192, "fire6")
        net = _make_fire(net, 64, 256, 256, "fire7")
        net = relay.nn.max_pool2d(net, pool_size=(3, 3), strides=(2, 2))
        net = _make_fire(net, 64, 256, 256, "fire8")
    else:
        net = layers.conv2d(
            net, channels=64, kernel_size=(3, 3), strides=(2, 2), padding=(1, 1), name="conv1"
        )
        net = relay.nn.bias_add(net, relay.var("conv1_bias"))
        net = relay.nn.relu(net)
        net = relay.nn.max_pool2d(net, pool_size=(3, 3), strides=(2, 2))
        net = _make_fire(net, 16, 64, 64, "fire1")
        net = _make_fire(net, 16, 64, 64, "fire2")
        net = relay.nn.max_pool2d(net, pool_size=(3, 3), strides=(2, 2))
        net = _make_fire(net, 32, 128, 128, "fire3")
        net = _make_fire(net, 32, 128, 128, "fire4")
        net = relay.nn.max_pool2d(net, pool_size=(3, 3), strides=(2, 2))
        net = _make_fire(net, 48, 192, 192, "fire5")
        net = _make_fire(net, 48, 192, 192, "fire6")
        net = _make_fire(net, 64, 256, 256, "fire7")
        net = _make_fire(net, 64, 256, 256, "fire8")
    net = relay.nn.dropout(net, rate=0.5)
    net = layers.conv2d(net, channels=num_classes, kernel_size=(1, 1), name="conv_final")
    net = relay.nn.bias_add(net, relay.var("conv_final_bias"))
    net = relay.nn.relu(net)
    net = relay.nn.global_avg_pool2d(net)
    net = relay.nn.batch_flatten(net)
    net = relay.nn.softmax(net)
    args = relay.analysis.free_vars(net)
    return relay.Function(args, net)


def get_workload(
    batch_size=1, num_classes=1000, version="1.0", image_shape=(3, 224, 224), dtype="float32"
):
    """Get benchmark workload for SqueezeNet

    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    num_classes : int, optional
        Number of classes

    version : str, optional
        "1.0" or "1.1" of SqueezeNet

    image_shape : tuple, optional
        The input image shape

    dtype : str, optional
        The data type

    Returns
    -------
    mod : tvm.IRModule
        The relay module that contains a SqueezeNet network.

    params : dict of str to NDArray
        The parameters.
    """
    net = get_net(batch_size, image_shape, num_classes, version, dtype)
    return create_workload(net)
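A fire module as wired by `_make_fire` above first squeezes the input with a 1x1 conv, then concatenates a 1x1 and a 3x3 expand branch along the channel axis (`axis=1` under the assumed NCHW layout), so its output channel count is just the sum of the two expand widths. A standalone sketch of that arithmetic (hypothetical helper name):

```python
def fire_out_channels(expand1x1_channels, expand3x3_channels):
    # Output channels of a fire module: the two expand branches are
    # concatenated along the channel axis, so their widths add up.
    # The squeeze width only affects the intermediate tensor.
    return expand1x1_channels + expand3x3_channels


# fire1 in the "1.0" network above: _make_fire(net, 16, 64, 64, "fire1")
print(fire_out_channels(64, 64))  # -> 128
```

The 3x3 branch uses padding 1 and the 1x1 branch padding 0, so both branches keep the spatial size of the squeezed tensor and the concatenation is shape-compatible.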
tvm
tvm-main/python/tvm/relay/testing/vgg.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""References:

Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for
large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
"""
from tvm import relay

from .init import create_workload
from . import layers as wrapper


def get_feature(internal_layer, layers, filters, batch_norm=False):
    """Get VGG feature body as stacks of convolutions."""
    for i, num in enumerate(layers):
        for j in range(num):
            internal_layer = wrapper.conv2d(
                data=internal_layer,
                kernel_size=(3, 3),
                padding=(1, 1),
                channels=filters[i],
                name=f"conv{i + 1}_{j + 1}",
            )
            internal_layer = relay.nn.bias_add(
                internal_layer, relay.var(f"conv{i + 1}_{j + 1}_bias")
            )
            if batch_norm:
                internal_layer = wrapper.batch_norm_infer(
                    data=internal_layer, name=f"bn{i + 1}_{j + 1}"
                )
            internal_layer = relay.nn.relu(data=internal_layer)
        internal_layer = relay.nn.max_pool2d(data=internal_layer, pool_size=(2, 2), strides=(2, 2))
    return internal_layer


def get_classifier(input_data, num_classes):
    """Get VGG classifier layers as fc layers."""
    flatten = relay.nn.batch_flatten(data=input_data)
    fc6 = wrapper.dense_add_bias(data=flatten, units=4096, name="fc6")
    relu6 = relay.nn.relu(data=fc6)
    drop6 = relay.nn.dropout(data=relu6, rate=0.5)
    fc7 = wrapper.dense_add_bias(data=drop6, units=4096, name="fc7")
    relu7 = relay.nn.relu(data=fc7)
    drop7 = relay.nn.dropout(data=relu7, rate=0.5)
    fc8 = wrapper.dense_add_bias(data=drop7, units=num_classes, name="fc8")
    return fc8


def get_net(batch_size, image_shape, num_classes, dtype, num_layers=11, batch_norm=False):
    """
    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    image_shape : tuple, optional
        The input image shape

    num_classes : int, optional
        Number of classes

    dtype : str, optional
        The data type

    num_layers : int
        Number of layers for the variant of vgg. Options are 11, 13, 16, 19.

    batch_norm : bool, default False
        Use batch normalization.
    """
    vgg_spec = {
        11: ([1, 1, 2, 2, 2], [64, 128, 256, 512, 512]),
        13: ([2, 2, 2, 2, 2], [64, 128, 256, 512, 512]),
        16: ([2, 2, 3, 3, 3], [64, 128, 256, 512, 512]),
        19: ([2, 2, 4, 4, 4], [64, 128, 256, 512, 512]),
    }
    if num_layers not in vgg_spec:
        raise ValueError(f"Invalid num_layers {num_layers}. Choices are 11, 13, 16, 19.")
    layers, filters = vgg_spec[num_layers]
    data_shape = (batch_size,) + image_shape
    data = relay.var("data", shape=data_shape, dtype=dtype)
    feature = get_feature(data, layers, filters, batch_norm)
    classifier = get_classifier(feature, num_classes)
    symbol = relay.nn.softmax(data=classifier)
    args = relay.analysis.free_vars(symbol)
    return relay.Function(args, symbol)


def get_workload(
    batch_size,
    num_classes=1000,
    image_shape=(3, 224, 224),
    dtype="float32",
    num_layers=11,
    batch_norm=False,
):
    """Get benchmark workload for VGG nets.

    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    num_classes : int, optional
        Number of classes

    image_shape : tuple, optional
        The input image shape

    dtype : str, optional
        The data type

    num_layers : int
        Number of layers for the variant of vgg. Options are 11, 13, 16, 19.

    batch_norm : bool
        Use batch normalization.

    Returns
    -------
    mod : tvm.IRModule
        The relay module that contains a VGG network.

    params : dict of str to NDArray
        The parameters.
    """
    net = get_net(batch_size, image_shape, num_classes, dtype, num_layers, batch_norm)
    return create_workload(net)
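The `vgg_spec` table in `get_net` above encodes each variant's depth: the per-stage conv counts plus the three fully connected layers (`fc6`, `fc7`, `fc8`) add up to the number in the variant's name. A standalone copy of the table demonstrating that invariant:

```python
# Standalone copy of the vgg_spec table from get_net above:
# depth -> (convs per stage, channels per stage).
vgg_spec = {
    11: ([1, 1, 2, 2, 2], [64, 128, 256, 512, 512]),
    13: ([2, 2, 2, 2, 2], [64, 128, 256, 512, 512]),
    16: ([2, 2, 3, 3, 3], [64, 128, 256, 512, 512]),
    19: ([2, 2, 4, 4, 4], [64, 128, 256, 512, 512]),
}

for depth, (conv_layers, _) in vgg_spec.items():
    # Conv layers plus the fc6/fc7/fc8 classifier equal the variant depth.
    assert sum(conv_layers) + 3 == depth

print(sum(vgg_spec[16][0]) + 3)  # -> 16
```

Only the number of convs per stage varies between variants; the per-stage channel widths are identical for all four depths.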
tvm
tvm-main/python/tvm/relay/testing/mlp.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""
a simple multilayer perceptron
"""
from __future__ import absolute_import

from tvm import relay

from .init import create_workload


def get_net(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype="float32"):
    """Get the network of a simple multilayer perceptron.

    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    num_classes : int, optional
        Number of classes

    image_shape : tuple, optional
        The input image shape

    dtype : str, optional
        The data type

    Returns
    -------
    net : relay.Function
        The dataflow.
    """
    data_shape = (batch_size,) + image_shape
    data = relay.var("data", shape=data_shape, dtype=dtype)
    data = relay.nn.batch_flatten(data)
    fc1 = relay.nn.dense(data, relay.var("fc1_weight"), units=128)
    fc1 = relay.nn.bias_add(fc1, relay.var("fc1_bias"), axis=-1)
    act1 = relay.nn.relu(fc1)
    fc2 = relay.nn.dense(act1, relay.var("fc2_weight"), units=64)
    fc2 = relay.nn.bias_add(fc2, relay.var("fc2_bias"), axis=-1)
    act2 = relay.nn.relu(fc2)
    fc3 = relay.nn.dense(act2, relay.var("fc3_weight"), units=num_classes)
    fc3 = relay.nn.bias_add(fc3, relay.var("fc3_bias"), axis=-1)
    mlp = relay.nn.softmax(data=fc3)
    args = relay.analysis.free_vars(mlp)
    return relay.Function(args, mlp)


def get_workload(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype="float32"):
    """Get benchmark workload for a simple multilayer perceptron.

    Parameters
    ----------
    batch_size : int
        The batch size used in the model

    num_classes : int, optional
        Number of classes

    image_shape : tuple, optional
        The input image shape

    dtype : str, optional
        The data type

    Returns
    -------
    mod : tvm.IRModule
        The relay module that contains a mlp network.

    params : dict of str to NDArray
        The parameters.
    """
    net = get_net(batch_size, num_classes, image_shape, dtype)
    return create_workload(net)