defmodule Exhort do @moduledoc ~S""" Exhort is an idiomatic Elixir interface to the [Google OR Tools](https://developers.google.com/optimization/). Exhort is currently focused on the "SAT" portion of the tooling: > A constraint programming solver that uses SAT (satisfiability) methods. The primary API for Exhort is through a few modules: * `Exhort.SAT.Builder` - The module and struct for building an `Exhort.SAT.Model`. The builder provides functions for defining variables, expressions and building the model. * `Exhort.SAT.Expr` - A factory for expressions, constraints and variables. This module may be used as the primary interface for defining the parts of a model which are then added to a `%Exhort.SAT.Builder{}` struct before building the model. * `Exhort.SAT.Model` - The result of building a model through `Exhort.SAT.Builder.build/1`. Solving the model is done with `Exhort.SAT.Model.solve/1` or `Exhort.SAT.Model.solve/2`. The latter accepts a function that receives intermediate solutions from the solver. * `Exhort.SAT.SolverResponse` - A model solution. The `%Exhort.SAT.SolverResponse{}` struct contains meta-level information about the solution. The module has functions for retrieving the values of variables defined in the model. ## Livebook See the included sample Livebook notebooks for examples of using Exhort. ## Setup See the Exhort README for information on using Exhort in a Livebook or adding it as a dependency to a project. Exhort uses native code, so the host system must have a C/C++ compiler and the `make` utility. ## API Exhort is in the early stages of development. As such, we are investigating a variety of API approaches. We may end up with more than one (a la Ecto), but in the short term will likely focus on a single approach. The API is centered around the `Builder` and `Expr` modules. Those modules leverage Elixir macros to provide a DSL "expression language" for Exhort. ### Builder Building a model is done using `Exhort.SAT.Builder`. `Exhort.SAT.Builder` has functions for defining variables, specifying constraints and creating a `%Exhort.SAT.Model{}` using the `build` function. By specifying `use Exhort.SAT.Builder`, all of the relevant modules will be aliased and the Exhort macros will be expanded. ```elixir use Exhort.SAT.Builder ... builder = Builder.new() |> Builder.def_int_var("x", {0, 10}) |> Builder.def_int_var("y", {0, 10}) |> Builder.def_bool_var("b") |> Builder.constrain("x" >= 5, if: "b") |> Builder.constrain("x" < 5, unless: "b") |> Builder.constrain("x" + "y" == 10, if: "b") |> Builder.constrain("y" == 0, unless: "b") {response, acc} = builder |> Builder.build() |> Model.solve(fn _response, nil -> 1 _response, acc -> acc + 1 end) # 2 responses acc |> IO.inspect(label: "acc: ") response |> IO.inspect(label: "response: ") # :optimal response.status |> IO.inspect(label: "status: ") # 10, 0, true SolverResponse.int_val(response, "x") |> IO.inspect(label: "x: ") SolverResponse.int_val(response, "y") |> IO.inspect(label: "y: ") SolverResponse.bool_val(response, "b") |> IO.inspect(label: "b: ") ``` See below for more about the expression language used in Exhort. ### Expr Sometimes it may be more convenient to build up expressions separately and then add them to a `%Builder{}` all at once. This is often the case when more complex data sets are involved in generating many variables and constraints for the model.
Instead of having to maintain the builder through an `Enum.reduce/3` construct like this: ```elixir builder = Enum.reduce(all_days, builder, fn day, builder -> Enum.reduce(all_shifts, builder, fn shift, builder -> shift_option_vars = shifts |> Enum.filter(fn {_n, d, s} -> d == day and s == shift end) |> Enum.map(fn {n, d, s} -> "shift_#{n}_#{d}_#{s}" end) Builder.constrain(builder, sum(shift_option_vars) == 1) end) end) ``` Exhort allows the generation of lists of variables or constraints, for example using `Enum.map/2`: ```elixir shift_nurses_per_period = Enum.map(all_days, fn day -> Enum.map(all_shifts, fn shift -> shift_options = Enum.filter(shifts, fn {_n, d, s} -> d == day and s == shift end) shift_option_vars = Enum.map(shift_options, fn {n, d, s} -> "shift_#{n}_#{d}_#{s}" end) Expr.new(sum(shift_option_vars) == 1) end) end) |> List.flatten() ``` These may then be added to the builder as a list: ```elixir builder |> Builder.add(shift_nurses_per_period) ... ``` ### Variables Model variables in the expression language are symbolic, represented as strings or atoms, and so don't interfere with the surrounding Elixir context. This allows the variables to be consistently referenced through a builder pipeline, for example, without having to capture an intermediate result. Elixir variables may be used "as is" in expressions, allowing variables to be generated from enumerable collections. In the following expression, `"x"` is a model variable, while `y` is an Elixir variable: ```elixir "x" < y + 3 ``` Variables may be defined in a few ways. It's often convenient to just focus on the `Exhort.SAT.Expr` and `Exhort.SAT.Builder` modules, which each have functions like `def_int_var` and `def_bool_var`. ```elixir all_bins |> Enum.map(fn bin -> Expr.def_bool_var("slack_#{bin}") end) ``` However, `BoolVar.new/1` and `IntVar.new/1` may also be used: ```elixir all_bins |> Enum.map(fn bin -> BoolVar.new("slack_#{bin}") end) ``` Of course, such names are still usable in expressions: ```elixir Expr.new("slack_#{bin}" <= bin_total) ``` Note that any variables or expressions created outside of the `Exhort.SAT.Builder` still need to be added to a `%Exhort.SAT.Builder{}` struct for them to be part of the model resulting from `build/1`. There's no magic here, these are still Elixir immutable data structures. ```elixir variables = ... expressions = ... Builder.new() |> Builder.add(variables) |> Builder.add(expressions) |> Builder.build() ``` ## Expressions Exhort supports a limited set of expressions. Expressions may use the binary operators `+`, `-` and `*`, with their traditional mathematical meaning. They may also use comparison operators `<`, `<=`, `==`, `>=`, `>`, the `sum` function and even the `for` comprehension. ```elixir all_bins |> Enum.map(fn bin -> vars = Enum.map(items, &{elem(&1, 0), "x_#{elem(&1, 0)}_#{bin}"}) load_bin = "load_#{bin}" Expr.constrain(sum(for {item, x} <- vars, do: item * x) == load_bin) end) ``` ## Model The model is the result of finalizing the builder, created through the `Exhort.SAT.Builder.build/1` function. The model may then be solved with `Exhort.SAT.Model.solve/1` or `Exhort.SAT.Model.solve/2`. The latter function allows for a function to be passed to receive intermediate solutions from the solver. ## SolverResponse The result of `Exhort.SAT.Model.solve/1` is a `%Exhort.SAT.SolverResponse{}`. The response contains meta-level information about the solution. `Exhort.SAT.SolverResponse` has functions for retrieving the values of variables defined in the model.
```elixir response = Builder.new() |> Builder.def_int_var("r", {0, 100}) |> Builder.def_int_var("p", {0, 100}) |> Builder.constrain("r" + "p" == 20) |> Builder.constrain(4 * "r" + 2 * "p" == 56) |> Builder.build() |> Model.solve() assert :optimal = response.status assert 8 == SolverResponse.int_val(response, "r") assert 12 == SolverResponse.int_val(response, "p") ``` ## Implementation Exhort relies on the underlying native C++ implementation of the Google OR Tools. Exhort interacts with the Google OR Tools library when the model is built using `Builder.build/1` and when solved using `Model.solve/1` or `Model.solve/2`. References to the native objects are returned via NIF resources to the Elixir runtime as `%Reference{}` values. These are often stored in corresponding Exhort structs under the `res` key. """ end
# source: lib/Exhort.ex
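The moduledoc above walks through `Builder`, `Model`, and `SolverResponse` in pieces; here is a minimal end-to-end sketch that uses only the documented API. The variable names and bounds are illustrative, not taken from the Exhort test suite.

```elixir
use Exhort.SAT.Builder

# Two integer variables and two linear constraints; the solver
# should find apples = 8 and oranges = 4.
response =
  Builder.new()
  |> Builder.def_int_var("apples", {0, 20})
  |> Builder.def_int_var("oranges", {0, 20})
  |> Builder.constrain("apples" + "oranges" == 12)
  |> Builder.constrain("apples" - "oranges" == 4)
  |> Builder.build()
  |> Model.solve()

SolverResponse.int_val(response, "apples")
# => 8
```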
defmodule ReactSurface.SSR do @moduledoc """ Macro to transform a module into a server rendered react component with some default props. ```elixir defmodule ReactComponent do use ReactSurface.SSR, [default_props: %{name: "Doug"}] end ``` This assumes there is a react component that is the default export of a js module with the same name. The expected location is `assets/js/components/` The above example would import and generate static markup based on the react component located: `assets/js/components/ReactComponent.js` with the props `{"name": "Doug"}` ```js export default ({name}) => <h1> Hi {name}</h1> ``` Which can now be used in any surface component ```elixir ~F""" <ReactComponent id="a_unique_id" props={@dynamic_props}/> \"\"\" ``` """ defmacro __using__(opts) do quote bind_quoted: [opts: opts] do Module.register_attribute(__MODULE__, :rendered_content, accumulate: false) use Surface.Component alias ReactSurface.React @doc "React ID - used to generate a unique DOM ID used for container elements, uses component name if not supplied" prop rid, :string @doc "Props passed to the react component" prop props, :map, default: %{} @doc "Class for container div" prop container_class, :css_class, default: [] @doc "Passed to container div :attrs" prop opts, :keyword, default: [] @impl true def render(var!(assigns)) do ~F""" <React rid={@rid || nil} ssr={true} opts={@opts} container_class={@container_class} component={component_name()} props={@props}>{{:safe, get_ssr()}}</React> """ end @component_name Module.split(__MODULE__) |> List.last() Module.put_attribute( __MODULE__, :rendered_content, ReactSurface.ssr(@component_name, opts[:default_props] || %{}) ) def get_ssr() do @rendered_content end def component_name() do Module.split(__MODULE__) |> List.last() end end end end
# source: lib/react_surface/ssr.ex
defmodule IBMSpeechToText.Response do @moduledoc """ Elixir representation of the response body from the Speech to Text API, described [here](https://cloud.ibm.com/apidocs/speech-to-text#recognize-audio) in the "Response" section. """ alias IBMSpeechToText.{RecognitionResult, SpeakerLabelsResult} @type t() :: %__MODULE__{ results: [RecognitionResult.t()], result_index: non_neg_integer(), speaker_labels: [SpeakerLabelsResult.t()], warnings: [String.t()] } @struct_keys [:results, :result_index, :speaker_labels, :warnings] defstruct @struct_keys @doc """ Parses a JSON response from the API into a `#{inspect(__MODULE__)}` struct. """ @spec from_json(String.t()) :: {:ok, __MODULE__.t()} | {:ok, :listening} | {:error, String.t()} | {:error, Jason.DecodeError.t()} def from_json(input) do with {:ok, map} <- Jason.decode(input) do from_map(map) end end @doc false @spec from_map(%{required(String.t()) => String.t()}) :: {:ok, t()} | {:ok, :listening} | {:error, String.t()} def from_map(%{"state" => "listening"}) do {:ok, :listening} end def from_map(%{"error" => error}) do {:error, error} end def from_map(map) when is_map(map) do parsed_keyword = Enum.map(@struct_keys, fn key_atom -> key_string = Atom.to_string(key_atom) parse_entry(key_atom, map[key_string]) end) {:ok, struct!(__MODULE__, parsed_keyword)} end defp parse_entry(:results, value) when value != nil do {:results, Enum.map(value, &RecognitionResult.from_map(&1))} end defp parse_entry(:speaker_labels, value) when value != nil do {:speaker_labels, Enum.map(value, &SpeakerLabelsResult.from_map(&1))} end defp parse_entry(key_atom, value) do {key_atom, value} end end defimpl Jason.Encoder, for: IBMSpeechToText.Response do def encode(value, opts) do value |> Map.from_struct() |> Enum.filter(fn {_key, val} -> val != nil end) |> Map.new() |> Jason.Encode.map(opts) end end
# source: lib/ibm_speech_to_text/response.ex
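A short sketch of the three result shapes `from_json/1` can produce, following the `from_map/1` clauses above; the JSON payloads are illustrative.

```elixir
alias IBMSpeechToText.Response

# A "listening" state message is surfaced as an atom.
{:ok, :listening} = Response.from_json(~s({"state": "listening"}))

# An error payload becomes an error tuple.
{:error, "Session timed out."} = Response.from_json(~s({"error": "Session timed out."}))

# A regular recognition payload is parsed into the struct.
{:ok, %Response{result_index: 0, results: []}} =
  Response.from_json(~s({"result_index": 0, "results": []}))
```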
defmodule Luger.Message do @moduledoc """ This module contains functions to create and format log messages using Luger. This is separated out as it makes testing your logging easier: you can use the `split/1` function to convert a log line back into a struct. """ defstruct [ :method, :path, :status, :duration, :ip_address ] # add the opaque type @opaque t :: %__MODULE__{} @doc """ Creates a Message struct from a connection and duration. """ @spec create(Plug.Conn.t, integer, Luger.t) :: t def create(%Plug.Conn{} = conn, duration, %Luger{ include_ip: include }) when is_integer(duration) do %{ method: method, request_path: path, status: status } = conn %__MODULE__{ method: method, path: path, status: status, duration: duration, ip_address: include && conn.remote_ip || nil } end @doc """ Joins a Message struct into a binary log message. """ @spec join(t) :: binary def join(%__MODULE__{ method: m, path: p, status: s, duration: d } = msg) do "#{m} #{p} - #{join_status(s)} - #{join_duration(d)}" <> case msg.ip_address do nil -> "" val -> " - #{join_ip(val)}" end end # Converts a duration to a human readable format. defp join_duration(diff) when diff > 1000, do: "#{round(diff / 1000)}ms" defp join_duration(diff), do: "#{diff}µs" # Converts an IP structure to a binary. defp join_ip({ a, b, c, d }), do: "#{a}.#{b}.#{c}.#{d}" defp join_ip(ip) when is_binary(ip), do: ip # Converts a status to a binary. defp join_status(nil), do: "unset" defp join_status(val), do: to_string(val) @doc """ Splits a log message back into a Message struct. """ @spec split(binary) :: t def split(message) when is_binary(message) do [ route, status, duration | ip_address ] = String.split(message, " - ") [ path, method | _ ] = route |> String.split(" ") |> Enum.reverse %__MODULE__{ method: method, path: path, status: split_status(status), duration: split_duration(duration), ip_address: split_ip(ip_address) } end # Converts a binary duration to an integer. defp split_duration(duration) do duration |> String.replace_suffix("ms", "") |> String.replace_suffix("µs", "") |> String.to_integer end # Converts a binary IP split to a Tuple. defp split_ip([]), do: nil defp split_ip([ ip ]) do ip |> String.split(".") |> Enum.map(&String.to_integer/1) |> List.to_tuple end # Converts a status split to an integer (if there is one). defp split_status(<< char, _rest :: binary >> = status) when char in ?0..?9, do: String.to_integer(status) defp split_status(_status), do: nil end
# source: lib/luger/message.ex
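A round-trip sketch of `join/1` and `split/1` on a hand-built struct; the field values are illustrative. Note the asymmetry in the helpers above: `join/1` scales durations over 1000µs down to `ms`, while `split/1` strips either suffix and returns the bare integer.

```elixir
alias Luger.Message

msg = %Message{method: "GET", path: "/users", status: 200, duration: 250, ip_address: {127, 0, 0, 1}}

line = Message.join(msg)
# => "GET /users - 200 - 250µs - 127.0.0.1"

Message.split(line)
# => %Luger.Message{method: "GET", path: "/users", status: 200,
#      duration: 250, ip_address: {127, 0, 0, 1}}
```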
defmodule Saucexages do @moduledoc """ Saucexages is a library that provides functionality for reading, writing, interrogating, and fixing [SAUCE](http://www.acid.org/info/sauce/sauce.htm) – Standard Architecture for Universal Comment Extensions. The primary use of SAUCE is to add or augment metadata for files. This metadata has historically been focused on supporting art scene formats such as ANSi art (ANSI), but also supports many other formats. SAUCE support includes popular formats often found in various underground and computing communities, including, but not limited to: * `Character` - ANSI, ASCII, RIP, etc. * `Bitmap` - GIF, JPEG, PNG, etc. * `Audio` - S3M, IT, MOD, XM, etc. * `Binary Text` - BIN * `Extended Binary Text` - XBIN * `Archive` - ZIP, LZH, TAR, RAR, etc. * `Executable` - EXE, COM, BAT, etc. Saucexages was created to help make handling SAUCE data contained in such files easier and available in the BEAM directly. As such, a number of features are included, such as: * Reading and writing all SAUCE fields, including comments * Cleaning SAUCE entries, including removal of both SAUCE records and comments * Encoding/Decoding of all SAUCE fields, including type dependent fields * Encoding/Decoding fallbacks based on real-world data * Proper padding and truncation handling per the SAUCE spec * Full support of all file and data types in the SAUCE spec * Compiled and optimized binary handling that respects output from `ERL_COMPILER_OPTIONS=bin_opt_info` * Choice of pure binary or positional file-based interface * Complete metadata related to all types, footnotes, and fields in the SAUCE spec * Sugar functions for dealing with interface building, dynamic behaviors, and media-specific handling * Macros for compile time guarantees, efficiency, and ease-of-use and to allow further use of metadata in guards, matches, and binaries as consumers see fit. ## Overview The Saucexages codebase is divided into 3 major namespaces: * `Saucexages` - Core namespace with a wrapper API and various types and functions useful for working with SAUCE. This includes facilities for decoding font information, ANSi flags, data type, file information, and general media info. * `Saucexages.Codec` - Contains functionality for encoding and decoding SAUCE data on a per-field and per-binary basis. An effort has been made to ensure that these functions attempt to work with binary as efficiently as possible. * `Saucexages.IO` - Modules for handling IO tasks including reading and writing both binaries and files. ## SAUCE Structure SAUCE is constructed via 2 major binary components, a SAUCE record and a comment block. The sum of these two parts can be thought of as a SAUCE block, or simply SAUCE. The SAUCE record is primarily what other libraries are referring to when they mention SAUCE or use the term. For clarity, Saucexages makes an effort to differentiate between the SAUCE block, SAUCE record, and SAUCE comment block. ## Location The SAUCE block itself is always written after an EOF character, and 128 bytes from the actual end of file. It is important to note that there are possibly two EOF markers used in practice. The first is the modern notion of an EOF, which means the end of the file data, as commonly recognized by modern OSs. The second marker is an EOF character, represented in hex by `0x1a`.
The key to SAUCE co-existing with formats is the EOF character, as its presence often signaled to viewers, readers, and other programs to stop reading data past the character in the file, or to stop interpreting this data as part of the format. Writing a SAUCE without an EOF character or before an EOF character will interfere with reading many formats. It is important to note that this requirement means that the EOF character must be *before* a SAUCE record; however, in practice this does *not* always mean it will be *adjacent* to the EOF character. The reasons for this can be many, but common ones are flawed SAUCE writers, co-existing with other EOF-based formats, and misc. data or even garbage that may have been written in the file by other programs. ## SAUCE Record The SAUCE record is a 128-byte structure used to hold the majority of the metadata that describes files using SAUCE. The SAUCE layout is extensively described in the [SAUCE specification](http://www.acid.org/info/sauce/sauce.htm). Saucexages follows the same layout and types, but in an Elixir-driven way. Please see the specification for more detailed descriptions including size, encoding/decoding requirements, and limitations. All fields are fixed-size and are padded if shorter when written. As such, any integer fields are subject to wrap when encoding and decoding. Likewise, any string fields are subject to truncation and padding when encoding/decoding accordingly. The fields contained within the SAUCE record are as follows: * `ID` - SAUCE identifier. Should always be "SAUCE". This field is defining for the format, used extensively to pattern match, and to find SAUCE records within binaries and files. * `Version` - The SAUCE version. Should generally be "00". Programs should set this value to "00" and avoid changing it arbitrarily. * `Title` - Title of the file. * `Author` - Author of the file. * `Group` - The group associated with the file, for example the name of an art scene group. * `Date` - File creation date in "CCYYMMDD" format. * `File Size` - The original file size, excluding the SAUCE data. Generally, this is the size of all the data before the SAUCE block, and typically this equates to all data before an EOF character. * `Data Type` - An integer corresponding to a data type identifier. * `File Type` - An integer corresponding to a file type identifier. Interpretation of this identifier also depends on the data type. * `TInfo 1` - File type identifier dependent field 1. * `TInfo 2` - File type identifier dependent field 2. * `TInfo 3` - File type identifier dependent field 3. * `TInfo 4` - File type identifier dependent field 4. * `Comments` - The number of comment lines in the SAUCE block. This field serves as a crucial pointer and is required to properly read a comments block. * `TFlags` - File type identifier dependent flags. * `TInfoS` - File type identifier dependent string. ## Comment Block The SAUCE comment block is an optional, variable sized binary structure that holds up to 255 lines of 64 bytes of information. The SAUCE comment block fields are as follows: * ID - Identifier for the comment block that should always be equal to "COMNT". This field is defining for the format, used extensively to pattern match, and to find SAUCE comments. * Comment Line - Fixed size (64 bytes) field of text. Each comment block consists of 1 or more lines. It is vital to note that the SAUCE comment block is often broken in practice in many files. Saucexages provides many functions for identifying, repairing, and dealing with such cases.
When reading and writing SAUCE, however, by default the approach described in the SAUCE specification is used. That is, the comment block location and size are always read and written according to the SAUCE record comments (comment lines) field. ## Format The general format of a SAUCE binary is as follows: ``` [contents][eof character][sauce block [comment block][sauce record]] ``` Conceptually, the parts of a file with SAUCE data are as follows: * `Contents` - The file contents. Generally anything but the SAUCE data. * `EOF Character` - The EOF character, or `0x1a` in hex. Occupies a single byte. * `SAUCE Block` - The SAUCE record + optional comment block. * `SAUCE Record` - The 128-byte collection of fields outlined in this module, as defined by the SAUCE spec. * `Comment Block` - Optional comment block of variable size, consisting of a minimum of 69 characters, and a maximum of `5 + 64 * 255` bytes. Dependent on the comment field in the SAUCE record which determines the size and location of this block. In Elixir binary format, this takes the pseudo-form of: ```elixir <<contents::binary, eof_character::binary-size(1), comment_block::binary-size(comment_lines * 64 + 5), sauce_record::binary-size(128)>> ``` The SAUCE comment block is optional and as such, the following format is also valid: ```elixir <<contents::binary, eof_character::binary-size(1), sauce_record::binary-size(128)>> ``` Additionally, since the SAUCE block itself is defined relative to the end of file and appears after the EOF character, the following form is also valid and includes combinations of the above forms: ```elixir <<contents::binary, eof_character::binary-size(1), other_content::binary, comment_block::binary-size(comment_lines * 64 + 5), sauce_record::binary-size(128)>> ``` """ require Saucexages.IO.BinaryReader alias Saucexages.IO.{BinaryWriter, BinaryReader} alias Saucexages.SauceBlock # TODO: in the future we may decide to make file or binary readers/writers pluggable via using/protocols @doc """ Reads a binary containing a SAUCE record and returns decoded SAUCE information as `{:ok, sauce_block}`. If the binary does not contain a SAUCE record, `{:error, :no_sauce}` is returned. """ @spec sauce(binary()) :: {:ok, SauceBlock.t} | {:error, :no_sauce} | {:error, :invalid_sauce} | {:error, term()} def sauce(bin) do BinaryReader.sauce(bin) end @doc """ Reads a binary containing a SAUCE record and returns the raw binary in the form `{:ok, {sauce_bin, comments_bin}}`. If the binary does not contain a SAUCE record, `{:error, :no_sauce}` is returned. """ @spec raw(binary()) :: {:ok, {binary(), binary()}} | {:error, :no_sauce} | {:error, term()} def raw(bin) do BinaryReader.raw(bin) end @doc """ Reads a binary containing a SAUCE record and returns the decoded SAUCE comments. """ @spec comments(binary()) :: {:ok, [String.t]} | {:error, :no_sauce} | {:error, :no_comments} | {:error, term()} def comments(bin) do BinaryReader.comments(bin) end @doc """ Reads a binary and returns the contents without the SAUCE block. """ @spec contents(binary()) :: {:ok, binary()} | {:error, term()} def contents(bin) do BinaryReader.contents(bin) end @doc """ Reads a binary and returns whether or not a SAUCE record exists. Will match both binary that is a SAUCE record and binary that contains a SAUCE record. """ @spec sauce?(binary()) :: boolean() def sauce?(bin) do BinaryReader.sauce?(bin) end @doc """ Reads a binary and returns whether or not a SAUCE comments block exists within the SAUCE block. Will match a comments block only if a SAUCE record exists.
Comment fragments are not considered to be valid without the presence of a SAUCE record. """ @spec comments?(binary()) :: boolean() def comments?(bin) when is_binary(bin) do BinaryReader.comments?(bin) end @doc """ Writes the given SAUCE block to the provided binary. """ @spec write(binary(), SauceBlock.t) :: {:ok, binary()} | {:error, term()} def write(bin, sauce_block) do BinaryWriter.write(bin, sauce_block) end @doc """ Removes a SAUCE block from a binary. Both the SAUCE record and comments block will be removed. """ @spec remove_sauce(binary()) :: {:ok, binary()} | {:error, term()} def remove_sauce(bin) when is_binary(bin) do BinaryWriter.remove_sauce(bin) end @doc """ Removes any comments, if present, from a SAUCE and rewrites the SAUCE accordingly. Can be used to remove a SAUCE comments block or to clean erroneous comment information such as mismatched comment lines or double comment blocks. """ @spec remove_comments(binary()) :: {:ok, binary()} | {:error, :no_sauce} | {:error, term()} def remove_comments(bin) when is_binary(bin) do BinaryWriter.remove_comments(bin) end @doc """ Returns a detailed map of all SAUCE block data. """ @spec details(binary()) :: {:ok, map()} | {:error, term()} def details(bin) when is_binary(bin) do with {:ok, sauce_block} <- sauce(bin) do {:ok, SauceBlock.details(sauce_block)} else err -> err end end end
# source: lib/saucexages.ex
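A sketch of the top-level reading API described in the moduledoc; the file path is hypothetical, and any binary with a SAUCE block behaves the same way.

```elixir
# Hypothetical ANSI art file; any SAUCE-bearing binary works.
bin = File.read!("art/example.ans")

if Saucexages.sauce?(bin) do
  {:ok, sauce_block} = Saucexages.sauce(bin)

  # Comments are optional, so handle the :no_comments case explicitly.
  comments =
    case Saucexages.comments(bin) do
      {:ok, comments} -> comments
      {:error, :no_comments} -> []
    end

  {sauce_block, comments}
end
```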
defmodule PatternMetonyms do @moduledoc """ Documentation for `PatternMetonyms`. """ @doc """ implicit bidirectional target : pattern just2(a, b) = just({a, b}) currently work as is for that kind of complexity unidirectional target : pattern head(x) <- [x | _] but doesn't work as is "pattern(head(x) <- [x | _])" explicit bidirectional target : pattern polar(r, a) <- (pointPolar -> {r, a}) when polar(r, a) = polarPoint(r, a) but doesn't work as is "pattern (polar(r, a) <- (pointPolar -> {r, a})) when polar(r, a) = polarPoint(r, a) " """ # implicit bidirectional # lhs = {:just2, [], [{:a, [], Elixir}, {:b, [], Elixir}]} # just2(a, b) # pat = {:just, [], [{{:a, [], Elixir}, {:b, [], Elixir}}]} # just({a, b}) defmacro pattern(_syn = {:=, _, [lhs, pat]}) do {name, meta, args} = lhs quote do defmacro unquote({:"$pattern_metonyms_viewing_#{name}", meta, args}) do ast_args = unquote(Macro.escape(args)) args = unquote(args) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = Map.new(Enum.zip(ast_args, args), relate) ast_pat = unquote(Macro.escape(pat)) Macro.prewalk(ast_pat, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) #|> case do x -> _ = IO.puts("#{unquote(name)} [implicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end defmacro unquote(lhs) do ast_args = unquote(Macro.escape(args)) args = unquote(args) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = Map.new(Enum.zip(ast_args, args), relate) ast_pat = unquote(Macro.escape(pat)) Macro.prewalk(ast_pat, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) #|> case do x -> _ = IO.puts("#{unquote(name)} [implicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end end #|> case do x -> _ = IO.puts("pattern [implicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end # unidirectional / with view defmacro pattern({:<-, _, [lhs, view = [{:->, _, [[_], pat]}]]}) do {name, meta, args} = lhs quote do defmacro unquote({:"$pattern_metonyms_viewing_#{name}", meta, args}) do ast_args = unquote(Macro.escape(args)) args = unquote(args) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = Map.new(Enum.zip(ast_args, args), relate) ast_pat = unquote(Macro.escape(pat)) ast_pat_updated = Macro.prewalk(ast_pat, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) ast_view = unquote(Macro.escape(view)) import Access updated_view = put_in(ast_view, [at(0), elem(2), at(1)], ast_pat_updated) #updated_view = [{:->, meta, [[fun], ast_pat_updated]}] #|> case do x -> _ = IO.puts("#{unquote(name)} [unidirectional]:\n#{Macro.to_string(x)}") ; x end end end #|> case do x -> _ = IO.puts("pattern [unidirectional]:\n#{Macro.to_string(x)}") ; x end end # unidirectional # lhs = {:head, [], [{:x, [], Elixir}]} # head(x) # pat = [{:|, [], [{:x, [], Elixir}, {:_, [], Elixir}]}] # [x | _] defmacro pattern({:<-, _, [lhs, pat]}) do {name, meta, args} = lhs quote do defmacro unquote({:"$pattern_metonyms_viewing_#{name}", meta, args}) do ast_args = unquote(Macro.escape(args)) args = unquote(args) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = 
Map.new(Enum.zip(ast_args, args), relate) ast_pat = unquote(Macro.escape(pat)) Macro.prewalk(ast_pat, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) #|> case do x -> _ = IO.puts("#{unquote(name)} [implicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end defmacro unquote(lhs) do ast_args = unquote(Macro.escape(args)) args = unquote(args) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = Map.new(Enum.zip(ast_args, args), relate) ast_pat = unquote(Macro.escape(pat)) Macro.prewalk(ast_pat, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) #|> case do x -> _ = IO.puts("#{unquote(name)} [implicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end end #|> case do x -> _ = IO.puts("pattern [implicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end # explicit bidirectional / with view # lhs = {:polar, [], [{:r, [], Elixir}, {:a, [], Elixir}]} # polar(r, a) # fun = {:pointPolar, [], Elixir} # pointPolar # pat = {{:r, [], Elixir}, {:a, [], Elixir}} # {r, a} # lhs2 = {:polar, [], [{:r, [], Elixir}, {:a, [], Elixir}]} # polar(r, a) # expr = {:polarPoint, [], [{:r, [], Elixir}, {:a, [], Elixir}]} # polarPoint(r, a) defmacro pattern({:when, _, [{:<-, _, [lhs, view = [{:->, _, [[_], pat]}]]}, {:=, _, [lhs2, expr]}]}) do {name, meta, args} = lhs {^name, _meta2, args2} = lhs2 quote do defmacro unquote({:"$pattern_metonyms_viewing_#{name}", meta, args}) do ast_args = unquote(Macro.escape(args)) args = unquote(args) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = Map.new(Enum.zip(ast_args, args), relate) ast_pat = unquote(Macro.escape(pat)) ast_pat_updated = Macro.prewalk(ast_pat, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) ast_view = unquote(Macro.escape(view)) import Access updated_view = put_in(ast_view, [at(0), elem(2), at(1)], ast_pat_updated) #updated_view = [{:->, meta, [[fun], ast_pat_updated]}] #|> case do x -> _ = IO.puts("#{unquote(name)} [expr bidirectional]:\n#{Macro.to_string(x)}") ; x end end defmacro unquote(lhs2) do ast_args = unquote(Macro.escape(args2)) args = unquote(args2) relate = fn {{name, _, con}, substitute} -> {{name, con}, substitute} end args_relation = Map.new(Enum.zip(ast_args, args), relate) ast_expr = unquote(Macro.escape(expr)) Macro.prewalk(ast_expr, fn x -> case {ast_var?(x), x} do {false, x} -> x {true, {name, _, con}} -> case Map.fetch(args_relation, {name, con}) do :error -> x {:ok, substitute} -> substitute end end end) #|> case do x -> _ = IO.puts("#{unquote(name)} [explicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end end #|> case do x -> _ = IO.puts("pattern [explicit bidirectional]:\n#{Macro.to_string(x)}") ; x end end defmacro pattern(ast) do raise("pattern not recognized: #{Macro.to_string(ast)}") end # view defmacro view(data, do: clauses) when is_list(clauses) do [last | rev_clauses] = Enum.reverse(clauses) rev_tail = case view_folder(last, nil, data, __CALLER__) do # presumably a catch all pattern case_ast = {:case, [], [_, [do: [{:->, _, [[_lhs = {name, meta, con}], _rhs]}, _]]]} when is_atom(name) and is_list(meta) and is_atom(con) -> import Access case_ast = 
update_in(case_ast, [elem(2), at(1), at(0), elem(1)], &Enum.take(&1, 1)) [case_ast] _ -> fail_ast = quote do raise(CaseClauseError, term: unquote(data)) end [fail_ast, last] end ast = Enum.reduce(rev_tail ++ rev_clauses, fn x, acc -> view_folder(x, acc, data, __CALLER__) end) ast #|> case do x -> _ = IO.puts("view:\n#{Macro.to_string(x)}") ; x end end def view_folder({:->, _, [[[{:->, _, [[{name, meta, nil}], pat]}]], rhs]}, acc, data, _caller_env) do call = {name, meta, [data]} quote do case unquote(call) do unquote(pat) -> unquote(rhs) _ -> unquote(acc) end end end def view_folder({:->, meta_clause, [[{name, meta, con} = call], rhs]}, acc, data, caller_env) when is_atom(name) and is_list(meta) and is_list(con) do augmented_call = {:"$pattern_metonyms_viewing_#{name}", meta, con} case Macro.expand(augmented_call, caller_env) do # didn't expand because didn't exist, so we let other macros do their stuff later ^augmented_call -> quote do case unquote(data) do unquote(call) -> unquote(rhs) _ -> unquote(acc) end end # can this recurse indefinitely ? new_call -> new_clause = {:->, meta_clause, [[new_call], rhs]} view_folder(new_clause, acc, data, caller_env) end end def view_folder({:->, _, [[lhs = {name, meta, con}], rhs]}, acc, data, _caller_env) when is_atom(name) and is_list(meta) and is_atom(con) do quote do case unquote(data) do unquote(lhs) -> unquote(rhs) _ -> unquote(acc) end end end def view_folder({:->, _, [[lhs], rhs]}, acc, data, _caller_env) do quote do case unquote(data) do unquote(lhs) -> unquote(rhs) _ -> unquote(acc) end end end # Utils def ast_var?({name, meta, con}) when is_atom(name) and is_list(meta) and is_atom(con), do: true def ast_var?(_), do: false end
# source: pattern_metonyms/lib/pattern_metonyms.ex
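A sketch of the implicit bidirectional form, which the `pattern` macro's notes say currently works as is; `pair/2` is a hypothetical metonym over a plain 2-tuple.

```elixir
defmodule Pairs do
  import PatternMetonyms

  # Defines pair/2 as a macro that expands to {a, b}, usable as an
  # expression and, via the generated viewing macro, in view clauses.
  pattern pair(a, b) = {a, b}
end

require Pairs
Pairs.pair(1, 2)
# => {1, 2}
```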
defmodule Pigeon.FCM do @moduledoc """ `Pigeon.Adapter` for Firebase Cloud Messaging (FCM) push notifications. ## Getting Started 1. Create a `FCM` dispatcher. ``` # lib/fcm.ex defmodule YourApp.FCM do use Pigeon.Dispatcher, otp_app: :your_app end ``` 2. (Optional) Add configuration to your `config.exs`. ``` # config.exs config :your_app, YourApp.FCM, adapter: Pigeon.FCM, project_id: "example-project-123", service_account_json: File.read!("service-account.json") ``` 3. Start your dispatcher on application boot. ``` defmodule YourApp.Application do @moduledoc false use Application @doc false def start(_type, _args) do children = [ YourApp.FCM ] opts = [strategy: :one_for_one, name: YourApp.Supervisor] Supervisor.start_link(children, opts) end end ``` If you skipped step two, include your configuration. ``` defmodule YourApp.Application do @moduledoc false use Application @doc false def start(_type, _args) do children = [ {YourApp.FCM, fcm_opts()} ] opts = [strategy: :one_for_one, name: YourApp.Supervisor] Supervisor.start_link(children, opts) end defp fcm_opts do [ adapter: Pigeon.FCM, project_id: "example-project-123", service_account_json: File.read!("service-account.json") ] end end ``` 4. Create a notification. ``` n = Pigeon.FCM.Notification.new({:token, "reg ID"}, %{"body" => "test message"}) ``` 5. Send the notification. On successful response, `:name` will be set to the name returned from the FCM API and `:response` will be `:success`. If there was an error, `:error` will contain a JSON map of the response and `:response` will be an atomized version of the error type. ``` YourApp.FCM.push(n) ``` """ @max_retries 3 defstruct config: nil, queue: Pigeon.NotificationQueue.new(), refresh_before: 5 * 60, retries: @max_retries, socket: nil, stream_id: 1, token: nil @behaviour Pigeon.Adapter alias Pigeon.{Configurable, NotificationQueue} alias Pigeon.Http2.{Client, Stream} @refresh :"$refresh" @retry_after 1000 @scopes [ "https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/firebase.messaging" ] @impl true def init(opts) do config = Pigeon.FCM.Config.new(opts) Configurable.validate!(config) state = %__MODULE__{config: config} with {:ok, socket} <- connect_socket(config), {:ok, token} <- fetch_token(config) do Configurable.schedule_ping(config) schedule_refresh(state, token) {:ok, %{state | socket: socket, token: token}} else {:error, reason} -> {:stop, reason} end end @impl true def handle_push(notification, state) do %{config: config, queue: queue, token: token} = state headers = Configurable.push_headers(config, notification, token: token) payload = Configurable.push_payload(config, notification, []) Client.default().send_request(state.socket, headers, payload) new_q = NotificationQueue.add(queue, state.stream_id, notification) state = state |> inc_stream_id() |> Map.put(:queue, new_q) {:noreply, state} end @impl true def handle_info(:ping, state) do Client.default().send_ping(state.socket) Configurable.schedule_ping(state.config) {:noreply, state} end def handle_info({:closed, _}, %{config: config} = state) do case connect_socket(config) do {:ok, socket} -> Configurable.schedule_ping(config) {:noreply, %{state | socket: socket}} {:error, reason} -> {:stop, reason} end end def handle_info(@refresh, %{config: config} = state) do case fetch_token(config) do {:ok, token} -> schedule_refresh(state, token) {:noreply, %{state | retries: @max_retries, token: token}} {:error, exception} -> if state.retries > 0 do Process.send_after(self(), @refresh, @retry_after) 
{:noreply, %{state | retries: state.retries - 1}} else raise "too many failed attempts to refresh, last error: #{ inspect(exception) }" end end end def handle_info(msg, state) do case Client.default().handle_end_stream(msg, state) do {:ok, %Stream{} = stream} -> process_end_stream(stream, state) _else -> {:noreply, state} end end defp connect_socket(config), do: connect_socket(config, @max_retries) defp connect_socket(config, tries) do case Configurable.connect(config) do {:ok, socket} -> {:ok, socket} {:error, reason} -> if tries > 0 do connect_socket(config, tries - 1) else {:error, reason} end end end defp fetch_token(config) do source = {:service_account, config.service_account_json, [scopes: @scopes]} Goth.Token.fetch(%{source: source}) end defp schedule_refresh(state, token) do time_in_seconds = max(token.expires - System.system_time(:second) - state.refresh_before, 0) Process.send_after(self(), @refresh, time_in_seconds * 1000) end @doc false def process_end_stream(%Stream{id: stream_id} = stream, state) do %{queue: queue, config: config} = state case NotificationQueue.pop(queue, stream_id) do {nil, new_queue} -> # Do nothing if no queued item for stream {:noreply, %{state | queue: new_queue}} {notif, new_queue} -> Configurable.handle_end_stream(config, stream, notif) {:noreply, %{state | queue: new_queue}} end end @doc false def inc_stream_id(%{stream_id: stream_id} = state) do %{state | stream_id: stream_id + 2} end end
# source: lib/pigeon/fcm.ex
defmodule AWS.SMS do @moduledoc """ AWS Server Migration Service This is the *AWS Server Migration Service API Reference*. It provides descriptions, syntax, and usage examples for each of the actions and data types for the AWS Server Migration Service (AWS SMS). The topic for each action shows the Query API request parameters and the XML response. You can also view the XML request elements in the WSDL. Alternatively, you can use one of the AWS SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see [AWS SDKs](http://aws.amazon.com/tools/#SDKs). To learn more about the Server Migration Service, see the following resources: * [AWS Server Migration Service product page](https://aws.amazon.com/server-migration-service/) * [AWS Server Migration Service User Guide](https://docs.aws.amazon.com/server-migration-service/latest/userguide/server-migration.html) """ @doc """ Creates an application. An application consists of one or more server groups. Each server group contains one or more servers. """ def create_app(client, input, options \\ []) do request(client, "CreateApp", input, options) end @doc """ Creates a replication job. The replication job schedules periodic replication runs to replicate your server to AWS. Each replication run creates an Amazon Machine Image (AMI). """ def create_replication_job(client, input, options \\ []) do request(client, "CreateReplicationJob", input, options) end @doc """ Deletes an existing application. Optionally deletes the launched stack associated with the application and all AWS SMS replication jobs for servers in the application. """ def delete_app(client, input, options \\ []) do request(client, "DeleteApp", input, options) end @doc """ Deletes an existing launch configuration for an application. """ def delete_app_launch_configuration(client, input, options \\ []) do request(client, "DeleteAppLaunchConfiguration", input, options) end @doc """ Deletes an existing replication configuration for an application. """ def delete_app_replication_configuration(client, input, options \\ []) do request(client, "DeleteAppReplicationConfiguration", input, options) end @doc """ Deletes the specified replication job. After you delete a replication job, there are no further replication runs. AWS deletes the contents of the Amazon S3 bucket used to store AWS SMS artifacts. The AMIs created by the replication runs are not deleted. """ def delete_replication_job(client, input, options \\ []) do request(client, "DeleteReplicationJob", input, options) end @doc """ Deletes all servers from your server catalog. """ def delete_server_catalog(client, input, options \\ []) do request(client, "DeleteServerCatalog", input, options) end @doc """ Disassociates the specified connector from AWS SMS. After you disassociate a connector, it is no longer available to support replication jobs. """ def disassociate_connector(client, input, options \\ []) do request(client, "DisassociateConnector", input, options) end @doc """ Generates a target change set for a currently launched stack and writes it to an Amazon S3 object in the customer’s Amazon S3 bucket. """ def generate_change_set(client, input, options \\ []) do request(client, "GenerateChangeSet", input, options) end @doc """ Generates an Amazon CloudFormation template based on the current launch configuration and writes it to an Amazon S3 object in the customer’s Amazon S3 bucket.
""" def generate_template(client, input, options \\ []) do request(client, "GenerateTemplate", input, options) end @doc """ Retrieve information about an application. """ def get_app(client, input, options \\ []) do request(client, "GetApp", input, options) end @doc """ Retrieves the application launch configuration associated with an application. """ def get_app_launch_configuration(client, input, options \\ []) do request(client, "GetAppLaunchConfiguration", input, options) end @doc """ Retrieves an application replication configuration associatd with an application. """ def get_app_replication_configuration(client, input, options \\ []) do request(client, "GetAppReplicationConfiguration", input, options) end @doc """ Describes the connectors registered with the AWS SMS. """ def get_connectors(client, input, options \\ []) do request(client, "GetConnectors", input, options) end @doc """ Describes the specified replication job or all of your replication jobs. """ def get_replication_jobs(client, input, options \\ []) do request(client, "GetReplicationJobs", input, options) end @doc """ Describes the replication runs for the specified replication job. """ def get_replication_runs(client, input, options \\ []) do request(client, "GetReplicationRuns", input, options) end @doc """ Describes the servers in your server catalog. Before you can describe your servers, you must import them using `ImportServerCatalog`. """ def get_servers(client, input, options \\ []) do request(client, "GetServers", input, options) end @doc """ Gathers a complete list of on-premises servers. Connectors must be installed and monitoring all servers that you want to import. This call returns immediately, but might take additional time to retrieve all the servers. """ def import_server_catalog(client, input, options \\ []) do request(client, "ImportServerCatalog", input, options) end @doc """ Launches an application stack. """ def launch_app(client, input, options \\ []) do request(client, "LaunchApp", input, options) end @doc """ Returns a list of summaries for all applications. """ def list_apps(client, input, options \\ []) do request(client, "ListApps", input, options) end @doc """ Creates a launch configuration for an application. """ def put_app_launch_configuration(client, input, options \\ []) do request(client, "PutAppLaunchConfiguration", input, options) end @doc """ Creates or updates a replication configuration for an application. """ def put_app_replication_configuration(client, input, options \\ []) do request(client, "PutAppReplicationConfiguration", input, options) end @doc """ Starts replicating an application. """ def start_app_replication(client, input, options \\ []) do request(client, "StartAppReplication", input, options) end @doc """ Starts an on-demand replication run for the specified replication job. This replication run starts immediately. This replication run is in addition to the ones already scheduled. There is a limit on the number of on-demand replications runs you can request in a 24-hour period. """ def start_on_demand_replication_run(client, input, options \\ []) do request(client, "StartOnDemandReplicationRun", input, options) end @doc """ Stops replicating an application. """ def stop_app_replication(client, input, options \\ []) do request(client, "StopAppReplication", input, options) end @doc """ Terminates the stack for an application. """ def terminate_app(client, input, options \\ []) do request(client, "TerminateApp", input, options) end @doc """ Updates an application. 
""" def update_app(client, input, options \\ []) do request(client, "UpdateApp", input, options) end @doc """ Updates the specified settings for the specified replication job. """ def update_replication_job(client, input, options \\ []) do request(client, "UpdateReplicationJob", input, options) end @spec request(map(), binary(), map(), list()) :: {:ok, Poison.Parser.t | nil, Poison.Response.t} | {:error, Poison.Parser.t} | {:error, HTTPoison.Error.t} defp request(client, action, input, options) do client = %{client | service: "sms"} host = get_host("sms", client) url = get_url(host, client) headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}, {"X-Amz-Target", "AWSServerMigrationService_V2016_10_24.#{action}"}] payload = Poison.Encoder.encode(input, []) headers = AWS.Request.sign_v4(client, "POST", url, headers, payload) case HTTPoison.post(url, payload, headers, options) do {:ok, response=%HTTPoison.Response{status_code: 200, body: ""}} -> {:ok, nil, response} {:ok, response=%HTTPoison.Response{status_code: 200, body: body}} -> {:ok, Poison.Parser.parse!(body), response} {:ok, _response=%HTTPoison.Response{body: body}} -> error = Poison.Parser.parse!(body) exception = error["__type"] message = error["message"] {:error, {exception, message}} {:error, %HTTPoison.Error{reason: reason}} -> {:error, %HTTPoison.Error{reason: reason}} end end defp get_host(endpoint_prefix, client) do if client.region == "local" do "localhost" else "#{endpoint_prefix}.#{client.region}.#{client.endpoint}" end end defp get_url(host, %{:proto => proto, :port => port}) do "#{proto}://#{host}:#{port}/" end end
# source: lib/aws/sms.ex
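A hedged sketch of calling one of the wrappers above. The map keys `:region`, `:endpoint`, `:proto`, `:port`, and `:service` mirror what `get_host/2`, `get_url/2`, and `request/4` read; the credential fields are an assumption about what `AWS.Request.sign_v4/5` expects.

```elixir
# Hypothetical client map; see the lead-in for which keys are assumed.
client = %{
  access_key_id: "AKIA...",
  secret_access_key: "...",
  region: "us-east-1",
  endpoint: "amazonaws.com",
  proto: "https",
  port: 443,
  service: nil
}

{:ok, apps, _http_response} = AWS.SMS.list_apps(client, %{})
```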
defmodule Ecto.Query.Builder do @moduledoc false alias Ecto.Query @expand_fragments [:sigil_f, :sigil_F] @expand_sigils [:sigil_c, :sigil_C, :sigil_s, :sigil_S, :sigil_w, :sigil_W] @doc """ Smart escapes a query expression and extracts interpolated values in a map. Everything that is a query expression will be escaped; interpolated expressions (`^foo`) will be moved to a map unescaped and replaced with `^index` in the query, where index is a number indexing into the map. """ @spec escape(Macro.t, map, Keyword.t) :: {Macro.t, map} def escape(expr, external \\ %{}, vars) # var.x - where var is bound def escape({{:., _, [{var, _, context}, right]}, _, []}, external, vars) when is_atom(var) and is_atom(context) and is_atom(right) do left_escaped = escape_var(var, vars) dot_escaped = {:{}, [], [:., [], [left_escaped, right]]} expr = {:{}, [], [dot_escaped, [], []]} {expr, external} end # interpolation def escape({:^, _, [arg]}, external, _vars) do index = Map.size(external) external = Map.put(external, index, arg) expr = {:{}, [], [:^, [], [index]]} {expr, external} end # ecto types def escape({:binary, _, [arg]}, external, vars) do {arg_escaped, external} = escape(arg, external, vars) expr = {:%, [], [Ecto.Tagged, {:%{}, [], [value: arg_escaped, type: :binary]}]} {expr, external} end def escape({:uuid, _, [arg]}, external, vars) do {arg_escaped, external} = escape(arg, external, vars) expr = {:%, [], [Ecto.Tagged, {:%{}, [], [value: arg_escaped, type: :uuid]}]} {expr, external} end def escape({:array, _, [arg, type]}, external, vars) do {arg, external} = escape(arg, external, vars) type = atom(type) expr = {:%, [], [Ecto.Tagged, {:%{}, [], [value: arg, type: {:array, type}]}]} {expr, external} # TODO: Check that arg is a list and that type is an atom end # field macro def escape({:field, _, [{var, _, context}, field]}, external, vars) when is_atom(var) and is_atom(context) do var = escape_var(var, vars) field = atom(field) dot = {:{}, [], [:., [], [var, field]]} expr = {:{}, [], [dot, [], []]} {expr, external} end # binary literal def escape({:<<>>, _, _} = bin, external, _vars), do: {bin, external} # fragments def escape({sigil, _, [{:<<>>, _, frags}, []]}, external, vars) when sigil in @expand_fragments do {frags, external} = Enum.map_reduce frags, external, fn frag, external when is_binary(frag) -> {frag, external} {:::, _, [{{:., _, [Kernel, :to_string]}, _, [frag]}, _]}, external -> escape(frag, external, vars) end {{:%, [], [Ecto.Query.Fragment, {:%{}, [], [parts: frags]}]}, external} end # sigils def escape({name, _, _} = sigil, external, _vars) when name in @expand_sigils do {sigil, external} end # ops & functions def escape({name, meta, args}, external, vars) when is_atom(name) and is_list(args) do {args, external} = Enum.map_reduce(args, external, &escape(&1, &2, vars)) expr = {:{}, [], [name, meta, args]} {expr, external} end # list def escape(list, external, vars) when is_list(list) do Enum.map_reduce(list, external, &escape(&1, &2, vars)) end # literals def escape(literal, external, _vars) when is_binary(literal), do: {literal, external} def escape(literal, external, _vars) when is_boolean(literal), do: {literal, external} def escape(literal, external, _vars) when is_number(literal), do: {literal, external} def escape(nil, external, _vars), do: {nil, external} # everything else is not allowed def escape(other, _external, _vars) do raise Ecto.QueryError, reason: "`#{Macro.to_string(other)}` is not a valid query expression" end def escape_external(map) do {:%{}, [], Map.to_list(map)} end @doc """
Escapes a variable according to the given binds. An escaped variable is represented internally as `&0`, `&1` and so on. """ @spec escape_var(atom, Keyword.t) :: Macro.t | no_return def escape_var(var, vars) def escape_var(var, vars) do ix = vars[var] if var != :_ and ix do {:{}, [], [:&, [], [ix]]} else raise Ecto.QueryError, reason: "unbound variable `#{var}` in query" end end @doc """ Escapes dot calls in query expressions. A dot may be in three formats, all shown in the examples below. Returns :error if it isn't a dot expression. ## Examples iex> escape_dot(quote(do: x.y), [x: 0]) {{:{}, [], [:&, [], [0]]}, :y} iex> escape_dot(quote(do: x.y()), [x: 0]) {{:{}, [], [:&, [], [0]]}, :y} iex> escape_dot(quote(do: field(x, :y)), [x: 0]) {{:{}, [], [:&, [], [0]]}, :y} iex> escape_dot(quote(do: x), [x: 0]) :error """ @spec escape_dot(Macro.t, Keyword.t) :: {Macro.t, Macro.t} | :error def escape_dot({:field, _, [{var, _, context}, field]}, vars) when is_atom(var) and is_atom(context) do var = escape_var(var, vars) field = atom(field) {var, field} end def escape_dot({{:., _, [{var, _, context}, field]}, _, []}, vars) when is_atom(var) and is_atom(context) and is_atom(field) do {escape_var(var, vars), field} end def escape_dot(_, _vars) do :error end @doc """ Escapes a list of bindings as a list of atoms. ## Examples iex> escape_binding(quote do: [x, y, z]) [x: 0, y: 1, z: 2] iex> escape_binding(quote do: [x, y, x]) ** (Ecto.QueryError) variable `x` is bound twice """ def escape_binding(binding) when is_list(binding) do vars = binding |> Stream.with_index |> Enum.map(&escape_bind(&1)) bound_vars = vars |> Keyword.keys |> Enum.filter(&(&1 != :_)) dup_vars = bound_vars -- Enum.uniq(bound_vars) unless dup_vars == [] do raise Ecto.QueryError, reason: "variable `#{hd dup_vars}` is bound twice" end vars end def escape_binding(bind) do raise Ecto.QueryError, reason: "binding should be list of variables, got: #{Macro.to_string(bind)}" end defp escape_bind({{var, _} = tuple, _}) when is_atom(var), do: tuple defp escape_bind({{var, _, context}, ix}) when is_atom(var) and is_atom(context), do: {var, ix} defp escape_bind({bind, _ix}), do: raise(Ecto.QueryError, reason: "binding list should contain only variables, got: #{Macro.to_string(bind)}") @doc """ Escapes simple expressions. An expression may be a single variable `x`, representing all fields in that model, a field `x.y`, or a list of fields and variables. ## Examples iex> escape_fields_and_vars(quote(do: [x.x, y.y]), [x: 0, y: 1]) [{{:{}, [], [:&, [], [0]]}, :x}, {{:{}, [], [:&, [], [1]]}, :y}] iex> escape_fields_and_vars(quote(do: x), [x: 0, y: 1]) [{:{}, [], [:&, [], [0]]}] """ @spec escape_fields_and_vars(Macro.t, Keyword.t) :: Macro.t | no_return def escape_fields_and_vars(ast, vars) do Enum.map(List.wrap(ast), &do_escape_expr(&1, vars)) end defp do_escape_expr({var, _, context}, vars) when is_atom(var) and is_atom(context) do escape_var(var, vars) end defp do_escape_expr(dot, vars) do case escape_dot(dot, vars) do {_, _} = var_field -> var_field :error -> raise Ecto.QueryError, reason: "malformed query expression" end end @doc """ Counts the bindings in a query expression. ## Examples iex> count_binds(%Ecto.Query{joins: [1,2,3]}) 3 iex> count_binds(%Ecto.Query{from: 0, joins: [1,2]}) 3 """ def count_binds(%Query{from: from, joins: joins}) do count = if from, do: 1, else: 0 count + length(joins) end @doc """ Applies a query at compilation time or at runtime.
This function is responsible for checking whether a given query is an `Ecto.Query` struct at compile time and acting accordingly. If a query is available, it invokes the `apply` function in the given `module`, otherwise, it delegates the call to runtime. It is important to keep in mind the complexities introduced by this function. In particular, a %Query{} is a mixture of escaped and unescaped expressions, which makes it impossible for this function to properly escape or unescape it at compile/runtime. For this reason, the apply function should be ready to handle arguments in both escaped and unescaped form. For example, take into account the `Builder.Select`: select = %Ecto.Query.QueryExpr{expr: expr, file: env.file, line: env.line} Builder.apply_query(query, __MODULE__, [select], env) `expr` is already an escaped expression and we must not escape it again. However, it is wrapped in an Ecto.Query.QueryExpr, which must be escaped! Furthermore, the `apply/2` function in `Builder.Select` very likely will inject the QueryExpr inside Query, which again, is a mixture of escaped and unescaped expressions. That said, you need to obey the following rules: 1. In order to call this function, the arguments must be escapable values supported by the `escape/1` function below; 2. The apply function may not manipulate the given arguments, with the exception of the query. In particular, when invoked at compilation time, all arguments (except the query) will be escaped, so they can be injected into the query properly, but they will be in their runtime form when invoked at runtime. """ def apply_query(query, module, args, env) do query = Macro.expand(query, env) args = for i <- args, do: escape_query(i) case unescape_query(query) do %Query{} = unescaped -> apply(module, :apply, [unescaped|args]) |> escape_query _ -> quote do: unquote(module).apply(unquote_splicing([query|args])) end end # Unescapes an `Ecto.Query` struct. defp unescape_query({:%, _, [Query, {:%{}, _, list}]}) do struct(Query, list) end defp unescape_query({:%{}, _, list} = ast) do if List.keyfind(list, :__struct__, 0) == {:__struct__, Query} do Enum.into(list, %{}) else ast end end defp unescape_query(other) do other end # Escapes an `Ecto.Query` and associated structs. defp escape_query(%Query{} = query), do: {:%{}, [], Map.to_list(query)} defp escape_query(other), do: other # Removes the interpolation hat from an expression, deferring the atom check to runtime, or, if there is no hat, returns the literal atom defp atom({:^, _, [expr]}), do: quote(do: :"Elixir.Ecto.Query.Builder".check_atom(unquote(expr))) defp atom(atom) when is_atom(atom), do: atom defp atom(other), do: raise(Ecto.QueryError, reason: "expected literal atom or interpolated value, got: `#{inspect other}`") @doc """ Called by escaper at runtime to verify that value is an atom. """ def check_atom(atom) when is_atom(atom), do: atom def check_atom(other), do: raise(Ecto.QueryError, reason: "expected atom, got: `#{inspect other}`") end
lib/ecto/query/builder.ex
0.683102
0.551815
builder.ex
starcoder
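To make the `apply_query/4` contract above concrete, here is a minimal sketch loosely modeled on the `Builder.Select` example from the docstring. The `MyBuilder.Select` module name and the direct struct update are illustrative assumptions, not part of Ecto's public API.

```elixir
defmodule MyBuilder.Select do
  alias Ecto.Query.Builder

  # Called at macro-expansion time. `expr` is already an escaped expression;
  # wrapping it in a QueryExpr (which itself still needs escaping) is exactly
  # the escaped/unescaped mixture the docstring above warns about.
  def build(query, expr, env) do
    select = %Ecto.Query.QueryExpr{expr: expr, file: env.file, line: env.line}
    Builder.apply_query(query, __MODULE__, [select], env)
  end

  # Invoked by `apply_query/4` with the query first, then the args — either
  # escaped (at compile time) or in their runtime form (at runtime).
  def apply(query, select) do
    %{query | select: select}
  end
end
```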
defmodule WHATWG.PercentEncoding do
  @moduledoc """
  Functions to work with percent-encoding.
  """

  @doc """
  Percent-encodes a byte, given as an integer, into a string.

  This function will raise `FunctionClauseError` if the given `char` is not
  an integer representing a byte.

  See also:

  - [Percent-Encoding in RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax](https://tools.ietf.org/html/rfc3986#section-2.1)
  - [Percent-encoded bytes in URL Standard](https://url.spec.whatwg.org/#percent-encoded-bytes)

  ### Examples

      iex> encode_byte(0x23)
      "%23"

      iex> encode_byte(0x7F)
      "%7F"

      iex> encode_byte(0x20)
      "%20"

      iex> encode_byte(256)
      ** (FunctionClauseError) no function clause matching in WHATWG.PercentEncoding.encode_byte/1

  """
  def encode_byte(char) when is_integer(char) and char >= 0x00 and char < 256 do
    <<"%", hex(Bitwise.bsr(char, 4)), hex(Bitwise.band(char, 15))>>
  end

  defp hex(n) when n <= 9, do: n + ?0
  defp hex(n), do: n + ?A - 10

  @doc """
  Percent-encodes bytes in a binary into a string.

  ### Examples

      iex> encode_bytes(<<0x23>>)
      "%23"

      iex> encode_bytes(<<0x7F>>)
      "%7F"

      iex> encode_bytes(" ")
      "%20"

  """
  def encode_bytes(binary) when is_binary(binary) do
    for <<byte <- binary>>, into: "", do: encode_byte(byte)
  end

  @doc """
  Percent-encodes bytes using the given `encode_set_func` function and the
  `space_as_plus` flag.

  If the given function returns `true` for a byte, the byte will be
  percent-encoded. Otherwise, the byte will not be percent-encoded.

  If `space_as_plus` is `true`, `0x20` will be encoded to `+` instead of
  `"%20"`.

  ### Examples

      iex> encode_bytes("a1", fn char -> char in ?0..?9 end)
      "a%31"

      iex> encode_bytes("a1 ", fn char -> char not in ?0..?9 and char not in ?a..?z end)
      "a1%20"

      iex> encode_bytes("a1 ", fn char -> char not in ?0..?9 and char not in ?a..?z end, true)
      "a1+"

  """
  def encode_bytes(binary, encode_set_func, space_as_plus \\ false)
      when is_binary(binary) and is_function(encode_set_func, 1) do
    for <<byte <- binary>>, into: "", do: encode_byte(byte, encode_set_func, space_as_plus)
  end

  defp encode_byte(0x20, _, true), do: "+"
  defp encode_byte(0x20, _, false), do: "%20"

  defp encode_byte(char, encode_set_func, _) do
    if encode_set_func.(char) do
      encode_byte(char)
    else
      <<char>>
    end
  end

  @doc """
  Percent-decodes a string into bytes.

  If `space_as_plus` is true, `+` will be decoded to `" "`.

  This function will raise `FunctionClauseError` if the given `string` is
  not a binary.
### Examples

      iex> decode_bytes("a%23b%25")
      "a#b%"

      iex> decode_bytes("%6a%6A")
      "jj"

      iex> decode_bytes("‽%25%2E")
      <<0xE2, 0x80, 0xBD, 0x25, 0x2E>>

      iex> decode_bytes("a%20+b")
      "a +b"

      iex> decode_bytes("a%20+b", true)
      "a b"

      iex> decode_bytes("%GG")
      ** (ArgumentError) malformed percent encoding "%GG"

      iex> decode_bytes("a%0")
      ** (ArgumentError) malformed percent encoding "a%0"

      iex> decode_bytes("a%")
      ** (ArgumentError) malformed percent encoding "a%"

      iex> decode_bytes('a')
      ** (FunctionClauseError) no function clause matching in WHATWG.PercentEncoding.decode_bytes/2

      iex> decode_bytes(<<1::4>>)
      ** (FunctionClauseError) no function clause matching in WHATWG.PercentEncoding.decode_bytes/2

  """
  def decode_bytes(string, space_as_plus \\ false)
      when is_binary(string) and is_boolean(space_as_plus) do
    decode_recursive(string, "", space_as_plus)
  catch
    :malformed_percent_encoding ->
      raise ArgumentError, "malformed percent encoding #{inspect(string)}"
  end

  defp decode_recursive(<<?%, hex1, hex2, tail::binary>>, acc, space_as_plus) do
    decode_recursive(
      tail,
      <<acc::binary, Bitwise.bsl(hex_to_dec(hex1), 4) + hex_to_dec(hex2)>>,
      space_as_plus
    )
  end

  defp decode_recursive(<<?%, _::binary>>, _acc, _space_as_plus),
    do: throw(:malformed_percent_encoding)

  defp decode_recursive(<<?+, tail::binary>>, acc, true),
    do: decode_recursive(tail, <<acc::binary, ?\s>>, true)

  defp decode_recursive(<<head, tail::binary>>, acc, space_as_plus),
    do: decode_recursive(tail, <<acc::binary, head>>, space_as_plus)

  defp decode_recursive(<<>>, acc, _space_as_plus), do: acc

  defp hex_to_dec(n) when n in ?A..?F, do: n - ?A + 10
  defp hex_to_dec(n) when n in ?a..?f, do: n - ?a + 10
  defp hex_to_dec(n) when n in ?0..?9, do: n - ?0
  defp hex_to_dec(_n), do: throw(:malformed_percent_encoding)
end
lib/whatwg/percent_encoding.ex
0.839142
0.568416
percent_encoding.ex
starcoder
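To make the `encode_set_func`/`space_as_plus` combination concrete, here is a small usage sketch (assuming the module above is compiled) that form-encodes everything outside the ASCII letters and digits:

```elixir
# Encode every byte that is not an ASCII letter or digit; spaces become "+".
form_encode = fn string ->
  WHATWG.PercentEncoding.encode_bytes(
    string,
    fn char -> char not in ?a..?z and char not in ?A..?Z and char not in ?0..?9 end,
    true
  )
end

form_encode.("hello world")
#=> "hello+world"

# Decoding with space_as_plus: true reverses the "+" back into a space.
WHATWG.PercentEncoding.decode_bytes("hello+world", true)
#=> "hello world"
```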
defmodule Individual do
  @moduledoc """
  Process adapter to handle singleton processes in Elixir applications.

  ### The problem

  Sometimes, when you start your program on a cluster with a
  *MASTER<->MASTER* strategy, some of your modules should be started only
  on one node at a time. They should be registered within the `:global`
  module, but `:global` doesn't handle name conflicts and restarts. This is
  what `Individual` is for.

  ### Usage

  Wrap your worker or supervisor specification inside any of your
  supervisors with an `Individual` call, passing the child specification as
  the argument for `Individual`.

  Your worker or supervisor should be registered within the `:global`
  module.

  ### Examples

      # Simple call:
      def start(_type, _args) do
        Supervisor.start_link([
          {Individual, MyModule}
        ], strategy: :one_for_one, name: Individual.Supervisor)
      end

      # Call with args:
      def start(_type, _args) do
        Supervisor.start_link([
          {Individual, {MyModule, %{foo: :bar}}}
        ], strategy: :one_for_one, name: Individual.Supervisor)
      end

      # To start multiple processes with same name:
      def start(_type, _args) do
        Supervisor.start_link([
          {Individual, Supervisor.child_spec({MyModule, []}, id: Test1)},
          {Individual, Supervisor.child_spec({MyModule, []}, id: Test2)}
        ], strategy: :one_for_one, name: Individual.Supervisor)
      end
  """

  use GenServer
  require Logger

  @type child_spec :: :supervisor.child_spec() | {module, term} | module

  @doc false
  @spec child_spec(child_spec :: child_spec) :: :supervisor.child_spec()
  def child_spec(child_spec) do
    son_child_spec = child_spec |> convert_child_spec()

    Map.merge(
      son_child_spec,
      %{
        type: :supervisor,
        shutdown: :infinity,
        start: {__MODULE__, :start_link, [son_child_spec]}
      }
    )
  end

  @doc """
  This function will start your module, monitored with `Individual`.
  It requires your module's specification, the same you pass into any of
  your supervisors.

  ### Examples

      Individual.start_link(MyModule)
      Individual.start_link({MyModule, [1,2,3]})
      Individual.start_link(MyModule.child_spec(:foobar))
  """
  @spec start_link(son_childspec :: child_spec) :: GenServer.on_start
  def start_link(son_childspec) do
    GenServer.start_link(__MODULE__, son_childspec, name: :"#Individual<#{son_childspec.id}>")
  end

  @doc false
  def init(son_childspec) do
    {:ok, start_wrapper(son_childspec)}
  end

  ### DEATH
  # If the process is dying - `Individual` dies also.
  # If the process is exiting - `Individual` is forced to exit.
  # Everything depends on supervision and workers strategies.

  @doc false
  def handle_info({:DOWN, _, :process, _pid, reason}, son_childspec) do
    # Managed process exited. We need to die with the same reason.
    {:stop, reason, son_childspec}
  end

  defp start_wrapper(%{id: id} = worker_child_spec) do
    case Individual.Wrapper.start_link(worker_child_spec) do
      {:ok, pid} ->
        Logger.debug("Individual: Starting wrapper for worker #{id}")
        pid

      {:error, {:already_started, pid}} ->
        Logger.debug("Individual: Worker #{id} already started. Subscribing...")
        pid
    end
    |> Process.monitor()

    worker_child_spec
  end

  defp convert_child_spec(module) when is_atom(module) do
    module.child_spec([]) |> convert_child_spec()
  end

  defp convert_child_spec({module, arg}) when is_atom(module) do
    module.child_spec(arg) |> convert_child_spec()
  end

  defp convert_child_spec(spec) when is_map(spec) do
    case Map.get(spec, :type) do
      :supervisor -> Map.merge(%{restart: :permanent, shutdown: :infinity}, spec)
      :worker -> Map.merge(%{restart: :permanent, shutdown: 5000}, spec)
      nil -> Map.merge(%{restart: :permanent, shutdown: 5000, type: :worker}, spec)
    end
  end
end
lib/individual.ex
0.731346
0.418459
individual.ex
starcoder
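For completeness, here is a minimal sketch of a worker satisfying the `:global` registration requirement described above; the `MyModule` name mirrors the examples in the moduledoc:

```elixir
defmodule MyModule do
  use GenServer

  # Registering under `:global` is what lets Individual receive
  # `{:error, {:already_started, pid}}` when the process already
  # runs on another node.
  def start_link(arg) do
    GenServer.start_link(__MODULE__, arg, name: {:global, __MODULE__})
  end

  @impl true
  def init(arg), do: {:ok, arg}
end
```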
defmodule Cachex.Services.Locksmith do
  @moduledoc """
  Locking service in charge of table transactions.

  This module acts as a global lock table against all caches. This is due
  to the fact that ETS tables are fairly expensive to construct if they're
  only going to store a few keys.

  Due to this we have a single global table in charge of locks, and we tag
  just the key in the table with the name of the cache it's associated
  with. This keyspace will typically be very small, so there should be
  almost no impact to operating in this way (except that we only have a
  single ETS table rather than a potentially large N).

  It should be noted that the behaviour in this module could easily live as
  a GenServer if it weren't for the speedup gained when using ETS. When
  using an ETS table, checking for a lock is typically 0.3-0.5µs/op whereas
  a call to a server process is roughly 10x this (due to the process
  interactions).
  """
  alias Cachex.Services.Locksmith.Queue

  # we need records
  import Cachex.Spec

  # our global lock table name
  @table_name :cachex_locksmith

  @doc """
  Starts the backing services required by the Locksmith.

  At this point this will start the backing ETS table required by the
  locking logic inside the Locksmith. This is started with concurrency
  enabled and logging disabled to avoid spamming log output.

  This may become configurable in the future, but this table will likely
  never cause issues in the first place (as it only handles very basic
  operations).
  """
  @spec start_link :: GenServer.on_start
  def start_link do
    Eternal.start_link(
      @table_name,
      [ read_concurrency: true, write_concurrency: true ],
      [ quiet: true ]
    )
  end

  @doc """
  Locks a number of keys for a cache.

  This function can handle multiple keys to lock together atomically. The
  returned boolean will signal if the lock was successful. A lock can fail
  if one of the provided keys is already locked.
  """
  @spec lock(Spec.cache, [ any ]) :: boolean
  def lock(cache(name: name), keys) do
    t_proc = self()

    writes =
      keys
      |> List.wrap
      |> Enum.map(&({ { name, &1 }, t_proc }))

    :ets.insert_new(@table_name, writes)
  end

  @doc """
  Retrieves a list of locked keys for a cache.

  This uses some ETS matching voodoo to pull back the locked keys. They
  won't be returned in any specific order, so don't rely on it.
  """
  @spec locked(Spec.cache) :: [ any ]
  def locked(cache(name: name)),
    do: :ets.select(@table_name, [
      { { { name, :"$1" }, :_ }, [], [ :"$1" ] }
    ])

  @doc """
  Determines if a key is able to be written to by the current process.

  For a key to be writeable, it must either have no lock or be locked by
  the calling process.
  """
  @spec locked?(Spec.cache, [ any ]) :: true | false
  def locked?(cache(name: name), keys) when is_list(keys) do
    Enum.any?(keys, fn(key) ->
      case :ets.lookup(@table_name, { name, key }) do
        [{ _key, proc }] -> proc != self()
        _else -> false
      end
    end)
  end

  @doc """
  Executes a transaction against a cache table.

  If the process is already in a transactional context, the provided
  function will be executed immediately. Otherwise the required keys will
  be locked until the provided function has finished executing.

  This is mainly shorthand to avoid having to handle row locking
  explicitly.
  """
  @spec transaction(Spec.cache, [ any ], ( -> any)) :: any
  def transaction(cache() = cache, keys, fun) when is_list(keys) do
    case transaction?() do
      true  -> fun.()
      false -> Queue.transaction(cache, keys, fun)
    end
  end

  @doc """
  Determines if the current process is in transactional context.
  """
  @spec transaction?
:: boolean def transaction?, do: Process.get(:cachex_transaction, false) @doc """ Flags this process as running in a transaction. """ @spec start_transaction :: no_return def start_transaction, do: Process.put(:cachex_transaction, true) @doc """ Flags this process as not running in a transaction. """ @spec stop_transaction :: no_return def stop_transaction, do: Process.put(:cachex_transaction, false) @doc """ Unlocks a number of keys for a cache. There's currently no way to batch delete items in ETS beyond a select_delete, so we have to simply iterate over the locks and remove them sequentially. This is a little less desirable, but needs must. """ # TODO: figure out how to remove atomically @spec unlock(Spec.cache, [ any ]) :: true def unlock(cache(name: name), keys) do keys |> List.wrap |> Enum.all?(&:ets.delete(@table_name, { name, &1 })) end @doc """ Performs a write against the given key inside the table. If the key is locked, the write is queued inside the lock server to ensure that we execute consistently. This is a little hard to explain, but if the cache has not had any transactions executed against it we skip the lock check as any of our ETS writes are atomic and so do not require a lock. """ @spec write(Spec.cache, any, (() -> any)) :: any def write(cache(transactional: false), _keys, fun), do: fun.() def write(cache() = cache, keys, fun) do case transaction?() or !locked?(cache, keys) do true -> fun.() false -> Queue.execute(cache, fun) end end end
lib/cachex/services/locksmith.ex
0.671578
0.653569
locksmith.ex
starcoder
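As a rough illustration of the lock/transaction flow described above, here is a sketch using Cachex's public transaction API (which funnels into this service). The cache name, keys, and Cachex 3-style `transaction/3` call are assumptions:

```elixir
# Start a cache and run a multi-key transaction; while the function runs,
# the keys "alice" and "bob" are locked against other writers.
{:ok, _pid} = Cachex.start_link(:accounts)

Cachex.transaction(:accounts, ["alice", "bob"], fn cache ->
  {:ok, balance} = Cachex.get(cache, "alice")
  Cachex.put(cache, "alice", (balance || 0) - 10)

  {:ok, other} = Cachex.get(cache, "bob")
  Cachex.put(cache, "bob", (other || 0) + 10)
end)
```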
defmodule Asteroid.ObjectStore.AuthenticatedSession.Mnesia do
  @moduledoc """
  Mnesia implementation of the `Asteroid.ObjectStore.AuthenticatedSession` behaviour

  ## Options
  The options (`Asteroid.ObjectStore.AuthenticatedSession.opts()`) are:
  - `:table_name`: an `atom()` for the table name. Defaults to
  `:asteroid_authenticated_session`
  - `:tab_def`: Mnesia's table definitions, as passed to the
  `:mnesia.create_table/2` function. Defaults to the options below. User-defined
  `:tab_def` will be merged on a key basis, i.e. defaults will not be erased. One can use
  it to add additional indexes for clients or devices, e.g.:
  `tab_def: [index: [:refresh_token, :subject_id, :client_id]]`
  - `:purge_interval`: the `integer()` interval in seconds the purge process will be triggered,
  or `:no_purge` to disable purge. Defaults to `1800` (30 minutes)

  ## Default Mnesia table definition
  ```elixir
  [
    attributes: [:id, :subject_id, :data],
    index: [:subject_id]
  ]
  ```

  ## Purge process
  The purge process uses the `Singleton` library. Therefore the purge process will be
  unique per cluster (and that's probably what you want if you use Mnesia).
  """

  require Logger

  alias Asteroid.OIDC.AuthenticatedSession

  @behaviour Asteroid.ObjectStore.AuthenticatedSession

  @impl true
  def install(opts) do
    :mnesia.stop()

    :mnesia.create_schema([node()])

    :mnesia.start()

    table_name = opts[:table_name] || :asteroid_authenticated_session

    tab_def =
      [
        attributes: [:id, :subject_id, :data],
        index: [:subject_id]
      ]
      |> Keyword.merge(opts[:tab_def] || [])

    case :mnesia.create_table(table_name, tab_def) do
      {:atomic, :ok} ->
        Logger.info("#{__MODULE__}: created authenticated session store #{table_name}")

        :ok

      {:aborted, {:already_exists, _}} ->
        Logger.info("#{__MODULE__}: authenticated session store #{table_name} already exists")

        :ok

      {:aborted, reason} ->
        Logger.error(
          "#{__MODULE__}: failed to create authenticated session store #{table_name} " <>
            "(reason: #{inspect(reason)})"
        )

        {:error, reason}
    end
  end

  @impl true
  def start_link(opts) do
    case :mnesia.start() do
      :ok ->
        opts = Keyword.merge([purge_interval: 1800], opts)

        # we launch the process anyway because we need to return a process
        # but the singleton will do nothing if the value is `:no_purge`
        Singleton.start_child(__MODULE__.Purge, opts, __MODULE__)

      {:error, _} = error ->
        error
    end
  end

  @impl true
  def get(authenticated_session_id, opts) do
    table_name = opts[:table_name] || :asteroid_authenticated_session

    case :mnesia.dirty_read(table_name, authenticated_session_id) do
      [] ->
        Logger.debug(
          "#{__MODULE__}: getting authenticated session `#{authenticated_session_id}`, " <>
            "value: `nil`"
        )

        {:ok, nil}

      [{^table_name, ^authenticated_session_id, subject_id, data}] ->
        authenticated_session = %AuthenticatedSession{
          id: authenticated_session_id,
          subject_id: subject_id,
          data: data
        }

        Logger.debug(
          "#{__MODULE__}: getting authenticated session `#{authenticated_session_id}`, " <>
            "value: `#{inspect(authenticated_session)}`"
        )

        {:ok, authenticated_session}

      _ ->
        {:error, "Multiple results from Mnesia"}
    end
  catch
    :exit, reason ->
      {:error, reason}
  end

  @impl true
  def get_from_subject_id(subject_id, opts) do
    table_name = opts[:table_name] || :asteroid_authenticated_session

    {:ok,
     for {_table_name, authenticated_session_id, _subject_id, _data} <-
           :mnesia.dirty_match_object({table_name, :_, subject_id, :_}) do
       authenticated_session_id
     end}
  catch
    :exit, reason ->
      {:error, reason}
  end

  @impl true
  def put(authenticated_session, opts) do
    table_name = opts[:table_name] || :asteroid_authenticated_session

    record = {
      table_name,
authenticated_session.id, authenticated_session.subject_id, authenticated_session.data } :mnesia.dirty_write(table_name, record) Logger.debug( "#{__MODULE__}: stored authenticated session `#{authenticated_session.id}`, " <> "value: `#{inspect(authenticated_session)}`" ) :ok catch :exit, reason -> {:error, reason} end @impl true def delete(authenticated_session_id, opts) do table_name = opts[:table_name] || :asteroid_authenticated_session :mnesia.dirty_delete(table_name, authenticated_session_id) Logger.debug("#{__MODULE__}: deleted authenticated session `#{authenticated_session_id}`") :ok catch :exit, reason -> {:error, reason} end end
lib/asteroid/object_store/authenticated_session/mnesia.ex
0.913474
0.831622
mnesia.ex
starcoder
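A short usage sketch of this store, based only on the callbacks shown above; the table name, session values, and the `{:ok, pid}` return from `start_link/1` are assumptions:

```elixir
alias Asteroid.ObjectStore.AuthenticatedSession.Mnesia, as: SessionStore
alias Asteroid.OIDC.AuthenticatedSession

opts = [table_name: :my_sessions, purge_interval: :no_purge]

# One-time table creation, then start the (no-op here) purge process.
:ok = SessionStore.install(opts)
{:ok, _pid} = SessionStore.start_link(opts)

session = %AuthenticatedSession{id: "sess1", subject_id: "user1", data: %{}}
:ok = SessionStore.put(session, opts)

{:ok, ^session} = SessionStore.get("sess1", opts)
{:ok, ["sess1"]} = SessionStore.get_from_subject_id("user1", opts)
```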
defmodule AdventOfCode.Solutions.Day01 do @moduledoc """ Solution for day 1 exercise. ### Exercise As the submarine drops below the surface of the ocean, it automatically performs a sonar sweep of the nearby sea floor. On a small screen, the sonar sweep report (your puzzle input) appears: each line is a measurement of the sea floor depth as the sweep looks further and further away from the submarine. For example, suppose you had the following report: ``` 199 200 208 210 200 207 240 269 260 263 ``` This report indicates that, scanning outward from the submarine, the sonar sweep found depths of 199, 200, 208, 210, and so on. The first order of business is to figure out how quickly the depth increases, just so you know what you're dealing with - you never know if the keys will get carried into deeper water by an ocean current or a fish or something. To do this, count the number of times a depth measurement increases from the previous measurement. (There is no measurement before the first measurement.) In the example above, the changes are as follows: ``` 199 (N/A - no previous measurement) 200 (increased) 208 (increased) 210 (increased) 200 (decreased) 207 (increased) 240 (increased) 269 (increased) 260 (decreased) 263 (increased) ``` In this example, there are 7 measurements that are larger than the previous measurement. How many measurements are larger than the previous measurement? """ require Logger def first_part(filename) do result = filename |> File.read!() |> parse_file() |> calculate_increases() Logger.info("Detected #{result} increases") end def second_part(filename) do result = filename |> File.read!() |> parse_file() |> prepare_groups() |> calculate_increases() Logger.info("Detected #{result} increases using grouping") end defp parse_file(file_contents) do file_contents |> String.replace("\r\n", "\n") |> String.split("\n") |> Enum.reject(&(&1 == "")) |> Enum.map(&String.to_integer/1) end defp calculate_increases(measurements) do Logger.info("Received #{length(measurements)} data points...") {increases, _last} = Enum.reduce(measurements, {0, nil}, fn measure, {0, nil} -> {0, measure} measure, {increases, last} when measure > last -> {increases + 1, measure} measure, {increases, _last} -> {increases, measure} end) increases end defp prepare_groups(measurements) do num_groups = length(measurements) - 3 Enum.reduce(0..num_groups, [], fn idx, acc -> new_datapoint = Enum.at(measurements, idx) + Enum.at(measurements, idx + 1) + Enum.at(measurements, idx + 2) [new_datapoint | acc] end) |> Enum.reverse() end end
lib/advent_of_code/solutions/day01.ex
0.85738
0.891622
day01.ex
starcoder
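The hand-rolled `prepare_groups/1` above repeatedly calls `Enum.at/2`, which is O(n) per lookup; an equivalent, more idiomatic sliding-window sum can be written with `Enum.chunk_every/4`, as in this sketch using the report from the moduledoc:

```elixir
measurements = [199, 200, 208, 210, 200, 207, 240, 269, 260, 263]

# Overlapping windows of 3, stepping by 1; :discard drops incomplete tails,
# matching the 0..(length - 3) bound used in prepare_groups/1.
measurements
|> Enum.chunk_every(3, 1, :discard)
|> Enum.map(&Enum.sum/1)
#=> [607, 618, 618, 617, 647, 716, 769, 792]
```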
defmodule AshPolicyAuthorizer.FilterCheck do
  @moduledoc """
  A type of check that is represented by a filter statement

  That filter statement can be templated, currently only supporting `{:_actor, field}`
  which will replace that portion of the filter with the appropriate field value from the actor
  and `{:_actor, :_primary_key}` which will replace the value with a keyword list of the primary
  key fields of an actor to their values, like `[id: 1]`. If the actor is not present
  `{:_actor, field}` becomes `nil`, and `{:_actor, :_primary_key}` becomes `false`.
  """
  @type options :: Keyword.t()
  @callback filter(options()) :: Keyword.t()
  @optional_callbacks [filter: 1]

  defmacro __using__(_) do
    quote do
      @behaviour AshPolicyAuthorizer.FilterCheck
      @behaviour AshPolicyAuthorizer.Check

      def type, do: :filter

      def strict_check_context(opts) do
        [:query]
      end

      def strict_check(actor, %{query: %{filter: candidate}, resource: resource, api: api}, opts) do
        configured_filter = filter(opts)

        if is_nil(actor) and AshPolicyAuthorizer.FilterCheck.references_actor?(configured_filter) do
          {:ok, false}
        else
          filter = AshPolicyAuthorizer.FilterCheck.build_filter(configured_filter, actor)

          case Ash.Filter.parse(resource, filter) do
            {:ok, parsed_filter} ->
              if Ash.Filter.strict_subset_of?(parsed_filter, candidate) do
                {:ok, true}
              else
                case Ash.Filter.parse(resource, not: filter) do
                  {:ok, negated_filter} ->
                    if Ash.Filter.strict_subset_of?(negated_filter, candidate) do
                      {:ok, false}
                    else
                      {:ok, :unknown}
                    end

                  {:error, error} ->
                    {:error, error}
                end
              end

            {:error, error} ->
              {:error, error}
          end
        end
      end

      def strict_check(_, _, _), do: {:ok, :unknown}

      def auto_filter(actor, _authorizer, opts) do
        AshPolicyAuthorizer.FilterCheck.build_filter(filter(opts), actor)
      end

      def check(actor, data, authorizer, opts) do
        pkey = Ash.Resource.primary_key(authorizer.resource)

        filter =
          case data do
            [record] -> Map.take(record, pkey)
            records -> [or: Enum.map(data, &Map.take(&1, pkey))]
          end

        authorizer.resource
        |> authorizer.api.query()
        |> Ash.Query.filter(filter)
        |> Ash.Query.filter(auto_filter(authorizer.actor, authorizer, opts))
        |> authorizer.api.read()
        |> case do
          {:ok, authorized_data} ->
            authorized_pkeys = Enum.map(authorized_data, &Map.take(&1, pkey))

            Enum.filter(data, fn record ->
              Map.take(record, pkey) in authorized_pkeys
            end)

          {:error, error} ->
            {:error, error}
        end
      end
    end
  end

  def is_filter_check?(module) do
    :erlang.function_exported(module, :filter, 1)
  end

  def build_filter(filter, actor) do
    walk_filter(filter, fn
      {:_actor, :_primary_key} ->
        if actor do
          Map.take(actor, Ash.Resource.primary_key(actor.__struct__))
        else
          false
        end

      {:_actor, field} ->
        Map.get(actor || %{}, field)

      other ->
        other
    end)
  end

  def references_actor?({:_actor, _}), do: true

  def references_actor?(filter) when is_list(filter) do
    Enum.any?(filter, &references_actor?/1)
  end

  def references_actor?(filter) when is_map(filter) do
    Enum.any?(filter, fn {key, value} ->
      references_actor?(key) || references_actor?(value)
    end)
  end

  def references_actor?(tuple) when is_tuple(tuple) do
    Enum.any?(Tuple.to_list(tuple), &references_actor?/1)
  end

  def references_actor?(_), do: false

  defp walk_filter(filter, mapper) when is_list(filter) do
    case mapper.(filter) do
      ^filter -> Enum.map(filter, &walk_filter(&1, mapper))
      other -> walk_filter(other, mapper)
    end
  end

  defp walk_filter(filter, mapper) when is_map(filter) do
    case mapper.(filter) do
      ^filter -> Enum.into(filter, %{}, &walk_filter(&1, mapper))
      other -> walk_filter(other, mapper)
    end
  end

  defp walk_filter(tuple, mapper) when is_tuple(tuple) do
    case mapper.(tuple) do
^tuple -> tuple |> Tuple.to_list() |> Enum.map(&walk_filter(&1, mapper)) |> List.to_tuple() other -> walk_filter(other, mapper) end end defp walk_filter(value, mapper), do: mapper.(value) end
lib/ash_policy_authorizer/filter_check.ex
0.831588
0.481515
filter_check.ex
starcoder
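To illustrate the actor templating described in the moduledoc, here is a minimal sketch of a check module built on this behaviour; the module and field names are assumptions:

```elixir
defmodule MyApp.Checks.OwnedByActor do
  use AshPolicyAuthorizer.FilterCheck

  # `{:_actor, :id}` is replaced with the actor's id when the filter is
  # built; with no actor present it becomes `nil`, so the check cannot pass.
  @impl true
  def filter(_opts) do
    [owner_id: {:_actor, :id}]
  end
end
```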
defmodule Geometry.MultiLineStringM do
  @moduledoc """
  A set of line-strings of type `Geometry.LineStringM`

  `MultiLineStringM` implements the protocols `Enumerable` and `Collectable`.

  ## Examples

      iex> Enum.map(
      ...>   MultiLineStringM.new([
      ...>     LineStringM.new([
      ...>       PointM.new(1, 2, 4),
      ...>       PointM.new(3, 4, 6)
      ...>     ]),
      ...>     LineStringM.new([
      ...>       PointM.new(1, 2, 4),
      ...>       PointM.new(11, 12, 14),
      ...>       PointM.new(13, 14, 16)
      ...>     ])
      ...>   ]),
      ...>   fn line_string -> length line_string end
      ...> )
      [2, 3]

      iex> Enum.into(
      ...>   [LineStringM.new([PointM.new(1, 2, 4), PointM.new(5, 6, 8)])],
      ...>   MultiLineStringM.new())
      %MultiLineStringM{
        line_strings:
          MapSet.new([
            [[1, 2, 4], [5, 6, 8]]
          ])
      }
  """

  alias Geometry.{GeoJson, LineStringM, MultiLineStringM, PointM, WKB, WKT}

  defstruct line_strings: MapSet.new()

  @type t :: %MultiLineStringM{line_strings: MapSet.t(Geometry.coordinates())}

  @doc """
  Creates an empty `MultiLineStringM`.

  ## Examples

      iex> MultiLineStringM.new()
      %MultiLineStringM{line_strings: MapSet.new()}
  """
  @spec new :: t()
  def new, do: %MultiLineStringM{}

  @doc """
  Creates a `MultiLineStringM` from the given `Geometry.MultiLineStringM`s.

  ## Examples

      iex> MultiLineStringM.new([
      ...>   LineStringM.new([
      ...>     PointM.new(1, 2, 4),
      ...>     PointM.new(2, 3, 5),
      ...>     PointM.new(3, 4, 6)
      ...>   ]),
      ...>   LineStringM.new([
      ...>     PointM.new(10, 20, 40),
      ...>     PointM.new(30, 40, 60)
      ...>   ]),
      ...>   LineStringM.new([
      ...>     PointM.new(10, 20, 40),
      ...>     PointM.new(30, 40, 60)
      ...>   ])
      ...> ])
      %Geometry.MultiLineStringM{
        line_strings:
          MapSet.new([
            [[1, 2, 4], [2, 3, 5], [3, 4, 6]],
            [[10, 20, 40], [30, 40, 60]]
          ])
      }

      iex> MultiLineStringM.new([])
      %MultiLineStringM{line_strings: MapSet.new()}
  """
  @spec new([LineStringM.t()]) :: t()
  def new([]), do: %MultiLineStringM{}

  def new(line_strings) do
    %MultiLineStringM{
      line_strings:
        Enum.into(line_strings, MapSet.new(), fn line_string -> line_string.points end)
    }
  end

  @doc """
  Returns `true` if the given `MultiLineStringM` is empty.

  ## Examples

      iex> MultiLineStringM.empty?(MultiLineStringM.new())
      true

      iex> MultiLineStringM.empty?(
      ...>   MultiLineStringM.new([
      ...>     LineStringM.new([PointM.new(1, 2, 4), PointM.new(3, 4, 6)])
      ...>   ])
      ...> )
      false
  """
  @spec empty?(t()) :: boolean
  def empty?(%MultiLineStringM{} = multi_line_string),
    do: Enum.empty?(multi_line_string.line_strings)

  @doc """
  Creates a `MultiLineStringM` from the given coordinates.

  ## Examples

      iex> MultiLineStringM.from_coordinates([
      ...>   [[-1, 1, 1], [2, 2, 2], [-3, 3, 3]],
      ...>   [[-10, 10, 10], [-20, 20, 20]]
      ...> ])
      %MultiLineStringM{
        line_strings:
          MapSet.new([
            [[-1, 1, 1], [2, 2, 2], [-3, 3, 3]],
            [[-10, 10, 10], [-20, 20, 20]]
          ])
      }
  """
  @spec from_coordinates([Geometry.coordinate()]) :: t()
  def from_coordinates(coordinates) do
    %MultiLineStringM{line_strings: MapSet.new(coordinates)}
  end

  @doc """
  Returns an `:ok` tuple with the `MultiLineStringM` from the given GeoJSON
  term. Otherwise returns an `:error` tuple.
## Examples iex> ~s( ...> { ...> "type": "MultiLineString", ...> "coordinates": [ ...> [[-1, 1, 1], [2, 2, 2], [-3, 3, 3]], ...> [[-10, 10, 10], [-20, 20, 20]] ...> ] ...> } ...> ) iex> |> Jason.decode!() iex> |> MultiLineStringM.from_geo_json() {:ok, %Geometry.MultiLineStringM{ line_strings: MapSet.new([ [[-10, 10, 10], [-20, 20, 20]], [[-1, 1, 1], [2, 2, 2], [-3, 3, 3]] ]) }} """ @spec from_geo_json(Geometry.geo_json_term()) :: {:ok, t()} | Geometry.geo_json_error() def from_geo_json(json), do: GeoJson.to_multi_line_string(json, MultiLineStringM) @doc """ The same as `from_geo_json/1`, but raises a `Geometry.Error` exception if it fails. """ @spec from_geo_json!(Geometry.geo_json_term()) :: t() def from_geo_json!(json) do case GeoJson.to_multi_line_string(json, MultiLineStringM) do {:ok, geometry} -> geometry error -> raise Geometry.Error, error end end @doc """ Returns the GeoJSON term of a `MultiLineStringM`. There are no guarantees about the order of line-strings in the returned `coordinates`. ## Examples ```elixir [ [[-1, 1, 1], [2, 2, 2], [-3, 3, 3]], [[-10, 10, 10], [-20, 20, 20]] ] |> MultiLineStringM.from_coordinates() MultiLineStringM.to_geo_json( MultiLineStringM.new([ LineStringM.new([ PointM.new(-1, 1, 1), PointM.new(2, 2, 2), PointM.new(-3, 3, 3) ]), LineStringM.new([ PointM.new(-10, 10, 10), PointM.new(-20, 20, 20) ]) ]) ) # => # %{ # "type" => "MultiLineString", # "coordinates" => [ # [[-1, 1, 1], [2, 2, 2], [-3, 3, 3]], # [[-10, 10, 10], [-20, 20, 20]] # ] # } ``` """ @spec to_geo_json(t()) :: Geometry.geo_json_term() def to_geo_json(%MultiLineStringM{line_strings: line_strings}) do %{ "type" => "MultiLineString", "coordinates" => MapSet.to_list(line_strings) } end @doc """ Returns an `:ok` tuple with the `MultiLineStringM` from the given WKT string. Otherwise returns an `:error` tuple. If the geometry contains a SRID the id is added to the tuple. ## Examples iex> MultiLineStringM.from_wkt(" ...> SRID=1234;MultiLineString M ( ...> (10 20 45, 20 10 15, 20 40 15), ...> (40 30 20, 30 30 30) ...> ) ...> ") {:ok, { %MultiLineStringM{ line_strings: MapSet.new([ [[10, 20, 45], [20, 10, 15], [20, 40, 15]], [[40, 30, 20], [30, 30, 30]] ]) }, 1234 }} iex> MultiLineStringM.from_wkt("MultiLineString M EMPTY") {:ok, %MultiLineStringM{}} """ @spec from_wkt(Geometry.wkt()) :: {:ok, t() | {t(), Geometry.srid()}} | Geometry.wkt_error() def from_wkt(wkt), do: WKT.to_geometry(wkt, MultiLineStringM) @doc """ The same as `from_wkt/1`, but raises a `Geometry.Error` exception if it fails. """ @spec from_wkt!(Geometry.wkt()) :: t() | {t(), Geometry.srid()} def from_wkt!(wkt) do case WKT.to_geometry(wkt, MultiLineStringM) do {:ok, geometry} -> geometry error -> raise Geometry.Error, error end end @doc """ Returns the WKT representation for a `MultiLineStringM`. With option `:srid` an EWKT representation with the SRID is returned. There are no guarantees about the order of line-strings in the returned WKT-string. 
## Examples ```elixir MultiLineStringM.to_wkt(MultiLineStringM.new()) # => "MultiLineString M EMPTY" MultiLineStringM.to_wkt( MultiLineStringM.new([ LineStringM( [PointM.new(7.1, 8.1, 1), PointM.new(9.2, 5.2, 2)] ), LineStringM( [PointM.new(5.5, 9.2, 1), PointM.new(1.2, 3.2, 2)] ) ]) ) # Returns a string without any \\n or extra spaces (formatted just for readability): # MultiLineString M ( # (5.5 9.2 1, 1.2 3.2 2), # (7.1 8.1 1, 9.2 5.2 2) # ) MultiLineStringM.to_wkt( MultiLineStringM.new([ LineStringM( [PointM.new(7.1, 8.1, 1), PointM.new(9.2, 5.2, 2)] ), LineStringM( [PointM.new(5.5, 9.2, 1), PointM.new(1.2, 3.2, 2)] ) ]), srid: 555 ) # Returns a string without any \\n or extra spaces (formatted just for readability): # SRID=555;MultiLineString M ( # (5.5 9.2 1, 1.2 3.2 2), # (7.1 8.1 1, 9.2 5.2 2) # ) ``` """ @spec to_wkt(t(), opts) :: Geometry.wkt() when opts: [srid: Geometry.srid()] def to_wkt(%MultiLineStringM{line_strings: line_strings}, opts \\ []) do WKT.to_ewkt( << "MultiLineString M ", line_strings |> MapSet.to_list() |> to_wkt_line_strings()::binary() >>, opts ) end @doc """ Returns the WKB representation for a `MultiLineStringM`. With option `:srid` an EWKB representation with the SRID is returned. The option `endian` indicates whether `:xdr` big endian or `:ndr` little endian is returned. The default is `:xdr`. The `:mode` determines whether a hex-string or binary is returned. The default is `:binary`. An example of a simpler geometry can be found in the description for the `Geometry.PointM.to_wkb/1` function. """ @spec to_wkb(t(), opts) :: Geometry.wkb() when opts: [endian: Geometry.endian(), srid: Geometry.srid(), mode: Geometry.mode()] def to_wkb(%MultiLineStringM{} = multi_line_string, opts \\ []) do endian = Keyword.get(opts, :endian, Geometry.default_endian()) mode = Keyword.get(opts, :mode, Geometry.default_mode()) srid = Keyword.get(opts, :srid) to_wkb(multi_line_string, srid, endian, mode) end @doc """ Returns an `:ok` tuple with the `MultiLineStringM` from the given WKB string. Otherwise returns an `:error` tuple. If the geometry contains a SRID the id is added to the tuple. An example of a simpler geometry can be found in the description for the `Geometry.PointM.from_wkb/2` function. """ @spec from_wkb(Geometry.wkb(), Geometry.mode()) :: {:ok, t() | {t(), Geometry.srid()}} | Geometry.wkb_error() def from_wkb(wkb, mode \\ :binary), do: WKB.to_geometry(wkb, mode, MultiLineStringM) @doc """ The same as `from_wkb/2`, but raises a `Geometry.Error` exception if it fails. """ @spec from_wkb!(Geometry.wkb(), Geometry.mode()) :: t() | {t(), Geometry.srid()} def from_wkb!(wkb, mode \\ :binary) do case WKB.to_geometry(wkb, mode, MultiLineStringM) do {:ok, geometry} -> geometry error -> raise Geometry.Error, error end end @doc """ Returns the number of elements in `MultiLineStringM`. ## Examples iex> MultiLineStringM.size( ...> MultiLineStringM.new([ ...> LineStringM.new([ ...> PointM.new(11, 12, 14), ...> PointM.new(21, 22, 24) ...> ]), ...> LineStringM.new([ ...> PointM.new(31, 32, 34), ...> PointM.new(41, 42, 44) ...> ]) ...> ]) ...> ) 2 """ @spec size(t()) :: non_neg_integer() def size(%MultiLineStringM{line_strings: line_strings}), do: MapSet.size(line_strings) @doc """ Checks if `MultiLineStringM` contains `line_string`. 
## Examples iex> MultiLineStringM.member?( ...> MultiLineStringM.new([ ...> LineStringM.new([ ...> PointM.new(11, 12, 14), ...> PointM.new(21, 22, 24) ...> ]), ...> LineStringM.new([ ...> PointM.new(31, 32, 34), ...> PointM.new(41, 42, 44) ...> ]) ...> ]), ...> LineStringM.new([ ...> PointM.new(31, 32, 34), ...> PointM.new(41, 42, 44) ...> ]) ...> ) true iex> MultiLineStringM.member?( ...> MultiLineStringM.new([ ...> LineStringM.new([ ...> PointM.new(11, 12, 14), ...> PointM.new(21, 22, 24) ...> ]), ...> LineStringM.new([ ...> PointM.new(31, 32, 34), ...> PointM.new(41, 42, 44) ...> ]) ...> ]), ...> LineStringM.new([ ...> PointM.new(11, 12, 14), ...> PointM.new(41, 42, 44) ...> ]) ...> ) false """ @spec member?(t(), LineStringM.t()) :: boolean() def member?(%MultiLineStringM{line_strings: line_strings}, %LineStringM{points: points}) do MapSet.member?(line_strings, points) end @doc """ Converts `MultiLineStringM` to a list. """ @spec to_list(t()) :: [PointM.t()] def to_list(%MultiLineStringM{line_strings: line_strings}), do: MapSet.to_list(line_strings) @compile {:inline, to_wkt_line_strings: 1} defp to_wkt_line_strings([]), do: "EMPTY" defp to_wkt_line_strings([line_string | line_strings]) do <<"(", Enum.reduce(line_strings, LineStringM.to_wkt_points(line_string), fn line_string, acc -> <<acc::binary(), ", ", LineStringM.to_wkt_points(line_string)::binary()>> end)::binary(), ")">> end @doc false @compile {:inline, to_wkb: 4} @spec to_wkb(t(), srid, endian, mode) :: wkb when srid: Geometry.srid() | nil, endian: Geometry.endian(), mode: Geometry.mode(), wkb: Geometry.wkb() def to_wkb(%MultiLineStringM{line_strings: line_strings}, srid, endian, mode) do << WKB.byte_order(endian, mode)::binary(), wkb_code(endian, not is_nil(srid), mode)::binary(), WKB.srid(srid, endian, mode)::binary(), to_wkb_line_strings(line_strings, endian, mode)::binary() >> end @compile {:inline, to_wkb_line_strings: 3} defp to_wkb_line_strings(line_strings, endian, mode) do Enum.reduce(line_strings, WKB.length(line_strings, endian, mode), fn line_string, acc -> <<acc::binary(), LineStringM.to_wkb(line_string, nil, endian, mode)::binary()>> end) end @compile {:inline, wkb_code: 3} defp wkb_code(endian, srid?, :hex) do case {endian, srid?} do {:xdr, false} -> "40000005" {:ndr, false} -> "05000040" {:xdr, true} -> "60000005" {:ndr, true} -> "05000060" end end defp wkb_code(endian, srid?, :binary) do case {endian, srid?} do {:xdr, false} -> <<0x40000005::big-integer-size(32)>> {:ndr, false} -> <<0x40000005::little-integer-size(32)>> {:xdr, true} -> <<0x60000005::big-integer-size(32)>> {:ndr, true} -> <<0x60000005::little-integer-size(32)>> end end defimpl Enumerable do # credo:disable-for-next-line Credo.Check.Readability.Specs def count(multi_line_string) do {:ok, MultiLineStringM.size(multi_line_string)} end # credo:disable-for-next-line Credo.Check.Readability.Specs def member?(multi_line_string, val) do {:ok, MultiLineStringM.member?(multi_line_string, val)} end # credo:disable-for-next-line Credo.Check.Readability.Specs def slice(multi_line_string) do size = MultiLineStringM.size(multi_line_string) {:ok, size, &Enumerable.List.slice(MultiLineStringM.to_list(multi_line_string), &1, &2, size)} end # credo:disable-for-next-line Credo.Check.Readability.Specs def reduce(multi_line_string, acc, fun) do Enumerable.List.reduce(MultiLineStringM.to_list(multi_line_string), acc, fun) end end defimpl Collectable do # credo:disable-for-next-line Credo.Check.Readability.Specs def into(%MultiLineStringM{line_strings: line_strings}) 
do fun = fn list, {:cont, x} -> [{x, []} | list] list, :done -> map = Map.merge( line_strings.map, Enum.into(list, %{}, fn {line_string, []} -> {line_string.points, []} end) ) %MultiLineStringM{line_strings: %{line_strings | map: map}} _list, :halt -> :ok end {[], fun} end end end
lib/geometry/multi_line_string_m.ex
0.929983
0.498352
multi_line_string_m.ex
starcoder
defmodule Zaryn.Governance.Code.CICD do
  @moduledoc ~S"""
  Provides a CICD pipeline for `Zaryn.Governance.Code.Proposal`

  The evolution of zaryn-node could be represented using the following stages:

  * Init - when source code is compiled into zaryn-node (not covered here)
  * CI - zaryn-node is verifying a proposal and generating a release upgrade
  * CD - zaryn-node is forking a testnet to verify release upgrade

  In each stage a transition from a source to a result could happen

      | Stage | Source           | Transition   | Result         |
      |-------+------------------+--------------+----------------|
      | Init  | Code             | compile      | Release        |
      | CI    | Code, Proposal   | run CI tests | CiLog, Upgrade |
      | CD    | Release, Upgrade | run testnet  | TnLog, Release |

  where

  * Code - a source code of zaryn-node
  * Proposal - a code proposal transaction
  * Release - a release of zaryn-node
  * Upgrade - an upgrade to a release of zaryn-node
  * CiLog - unit tests and type checker logs
  * TnLog - logs retrieved from running testnet fork

  ## CI
  Given a `Code.Proposal` the `CICD.run_ci!/1` should generate: a log of
  application of the `Proposal` to the `Code`, a release upgrade which is a
  delta between the previous release and the new release, and a new version
  of the `zaryn-proposal-validator` escript.

  ## CD
  Given a `Code.Proposal` the `CICD.run_testnet!/1` should start a testnet
  with a few `zaryn-node`s and one `zaryn-validator`. The `zaryn-validator`
  runs the `zaryn-proposal-validator` escript and gathers metrics from the
  `zaryn-node`s. The `zaryn-proposal-validator` escript runs benchmarks and
  playbooks before and after the upgrade.

  """

  alias Zaryn.Governance.Code.Proposal

  use Knigge, otp_app: :zaryn, default: __MODULE__.Docker

  @doc """
  Start CICD
  """
  @callback child_spec(any()) :: Supervisor.child_spec()

  @doc """
  Execute the continuous integration of the code proposal
  """
  @callback run_ci!(Proposal.t()) :: :ok

  @doc """
  Return CI log from the proposal address
  """
  @callback get_log(binary()) :: {:ok, binary()} | {:error, term}

  @doc """
  Execute the continuous delivery of the code proposal to a testnet
  """
  @callback run_testnet!(Proposal.t()) :: :ok

  @doc """
  Remove all artifacts generated during `run_ci!/1` and `run_testnet!/1`
  """
  @callback clean(address :: binary()) :: :ok
end
lib/zaryn/governance/code/CICD.ex
0.73431
0.6622
CICD.ex
starcoder
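Since this module is a Knigge-backed behaviour, a stub implementation only has to cover the five callbacks; a minimal sketch (module name and stub return values assumed) could look like:

```elixir
defmodule MyCICD do
  @behaviour Zaryn.Governance.Code.CICD

  # A no-op task as the supervised child; real implementations would
  # start whatever the CI/CD backend needs.
  @impl true
  def child_spec(_arg),
    do: %{id: __MODULE__, start: {Task, :start_link, [fn -> :ok end]}}

  @impl true
  def run_ci!(_proposal), do: :ok

  @impl true
  def get_log(_address), do: {:ok, ""}

  @impl true
  def run_testnet!(_proposal), do: :ok

  @impl true
  def clean(_address), do: :ok
end
```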
defmodule Nebulex.RPC do @moduledoc """ RPC utilities for distributed task execution. This module uses supervised tasks underneath `Task.Supervisor`. > **NOTE:** The approach by using distributed tasks will be deprecated in the future in favor of `:erpc`. """ @typedoc "Task supervisor" @type task_sup :: Supervisor.supervisor() @typedoc "Task callback" @type callback :: {module, atom, [term]} @typedoc "Group entry: node -> callback" @type node_callback :: {node, callback} @typedoc "Node group" @type node_group :: %{optional(node) => callback} | [node_callback] @typedoc "Reducer function spec" @type reducer_fun :: ({:ok, term} | {:error, term}, node_callback | node, term -> term) @typedoc "Reducer spec" @type reducer :: {acc :: term, reducer_fun} ## API @doc """ Evaluates `apply(mod, fun, args)` on node `node` and returns the corresponding evaluation result, or `{:badrpc, reason}` if the call fails. A timeout, in milliseconds or `:infinity`, can be given with a default value of `5000`. It uses `Task.await/2` internally. ## Example iex> Nebulex.RPC.call(:my_task_sup, :node1, Kernel, :to_string, [1]) "1" """ @spec call(task_sup, node, module, atom, [term], timeout) :: term | {:badrpc, term} def call(supervisor, node, mod, fun, args, timeout \\ 5000) do rpc_call(supervisor, node, mod, fun, args, timeout) end @doc """ In contrast to a regular single-node RPC, a multicall is an RPC that is sent concurrently from one client to multiple servers. The function evaluates `apply(mod, fun, args)` on each `node_group` entry and collects the answers. Then, evaluates the `reducer` function (set in the `opts`) on each answer. This function is similar to `:rpc.multicall/5`. ## Options * `:timeout` - A timeout, in milliseconds or `:infinity`, can be given with a default value of `5000`. It uses `Task.yield_many/2` internally. * `:reducer` - Reducer function to be executed on each collected result. (check out `reducer` type). ## Example iex> Nebulex.RPC.multi_call( ...> :my_task_sup, ...> %{ ...> node1: {Kernel, :to_string, [1]}, ...> node2: {Kernel, :to_string, [2]} ...> }, ...> timeout: 10_000, ...> reducer: { ...> [], ...> fn ...> {:ok, res}, _node_callback, acc -> ...> [res | acc] ...> ...> {:error, _}, _node_callback, acc -> ...> acc ...> end ...> } ...> ) ["1", "2"] """ @spec multi_call(task_sup, node_group, Keyword.t()) :: term def multi_call(supervisor, node_group, opts \\ []) do rpc_multi_call(supervisor, node_group, opts) end @doc """ Similar to `multi_call/3` but the same `node_callback` (given by `module`, `fun`, `args`) is executed on all `nodes`; Internally it creates a `node_group` with the same `node_callback` for each node. ## Options Same options as `multi_call/3`. 
## Example iex> Nebulex.RPC.multi_call( ...> :my_task_sup, ...> [:node1, :node2], ...> Kernel, ...> :to_string, ...> [1], ...> timeout: 5000, ...> reducer: { ...> [], ...> fn ...> {:ok, res}, _node_callback, acc -> ...> [res | acc] ...> ...> {:error, _}, _node_callback, acc -> ...> acc ...> end ...> } ...> ) ["1", "1"] """ @spec multi_call(task_sup, [node], module, atom, [term], Keyword.t()) :: term def multi_call(supervisor, nodes, mod, fun, args, opts \\ []) do rpc_multi_call(supervisor, nodes, mod, fun, args, opts) end ## Helpers if Code.ensure_loaded?(:erpc) do defp rpc_call(_supervisor, node, mod, fun, args, _timeout) when node == node() do apply(mod, fun, args) end defp rpc_call(_supervisor, node, mod, fun, args, timeout) do :erpc.call(node, mod, fun, args, timeout) rescue e in ErlangError -> case e.original do {:exception, original, _} when is_struct(original) -> reraise original, __STACKTRACE__ {:exception, original, _} -> :erlang.raise(:error, original, __STACKTRACE__) other -> reraise %Nebulex.RPCError{reason: other, node: node}, __STACKTRACE__ end end def rpc_multi_call(_supervisor, node_group, opts) do {reducer_acc, reducer_fun} = opts[:reducer] || default_reducer() timeout = opts[:timeout] || 5000 node_group |> Enum.map(fn {node, {mod, fun, args}} = group -> {:erpc.send_request(node, mod, fun, args), group} end) |> Enum.reduce(reducer_acc, fn {req_id, group}, acc -> try do res = :erpc.receive_response(req_id, timeout) reducer_fun.({:ok, res}, group, acc) rescue exception -> reducer_fun.({:error, exception}, group, acc) catch :exit, reason -> reducer_fun.({:error, {:exit, reason}}, group, acc) end end) end def rpc_multi_call(_supervisor, nodes, mod, fun, args, opts) do {reducer_acc, reducer_fun} = opts[:reducer] || default_reducer() nodes |> :erpc.multicall(mod, fun, args, opts[:timeout] || 5000) |> :lists.zip(nodes) |> Enum.reduce(reducer_acc, fn {res, node}, acc -> reducer_fun.(res, node, acc) end) end else # TODO: This approach by using distributed tasks will be deprecated in the # future in favor of `:erpc` which is proven to improve performance # almost by 3x. 
defp rpc_call(_supervisor, node, mod, fun, args, _timeout) when node == node() do apply(mod, fun, args) rescue # FIXME: this is because coveralls does not check this as covered # coveralls-ignore-start exception -> {:badrpc, exception} # coveralls-ignore-stop end defp rpc_call(supervisor, node, mod, fun, args, timeout) do {supervisor, node} |> Task.Supervisor.async_nolink( __MODULE__, :call, [supervisor, node, mod, fun, args, timeout] ) |> Task.await(timeout) end defp rpc_multi_call(supervisor, node_group, opts) do node_group |> Enum.map(fn {node, {mod, fun, args}} -> Task.Supervisor.async_nolink({supervisor, node}, mod, fun, args) end) |> handle_multi_call(node_group, opts) end defp rpc_multi_call(supervisor, nodes, mod, fun, args, opts) do rpc_multi_call(supervisor, Enum.map(nodes, &{&1, {mod, fun, args}}), opts) end defp handle_multi_call(tasks, node_group, opts) do {reducer_acc, reducer_fun} = Keyword.get(opts, :reducer, default_reducer()) tasks |> Task.yield_many(opts[:timeout] || 5000) |> :lists.zip(node_group) |> Enum.reduce(reducer_acc, fn {{_task, {:ok, res}}, group}, acc -> reducer_fun.({:ok, res}, group, acc) {{_task, {:exit, reason}}, group}, acc -> reducer_fun.({:error, {:exit, reason}}, group, acc) {{task, nil}, group}, acc -> _ = Task.shutdown(task, :brutal_kill) reducer_fun.({:error, :timeout}, group, acc) end) end end defp default_reducer do { {[], []}, fn {:ok, res}, _node_callback, {ok, err} -> {[res | ok], err} {kind, _} = error, node_callback, {ok, err} when kind in [:error, :exit, :throw] -> {ok, [{error, node_callback} | err]} end } end end
lib/nebulex/rpc.ex
0.857365
0.492127
rpc.ex
starcoder
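When no `:reducer` is given, the private `default_reducer/0` above accumulates into an `{oks, errors}` tuple, so a bare `multi_call` usage looks roughly like this sketch (the supervisor name is an assumption):

```elixir
{oks, errors} =
  Nebulex.RPC.multi_call(
    :my_task_sup,
    [node() | Node.list()],
    Kernel,
    :+,
    [1, 2]
  )

# `oks` holds one `3` per node that answered successfully; `errors` pairs
# each failure with the node (or node callback) that produced it.
```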
defmodule ExLimiter.Plug do
  @moduledoc """
  Plug for enforcing rate limits.

  The usage should be something like

  ```
  plug ExLimiter.Plug, scale: 1000, limit: 5
  ```

  Additionally, you can pass the following options:

  - `:bucket`, a 1-arity function of a `Plug.Conn.t` which determines the
    bucket for the rate limit. Defaults to the phoenix controller, action
    and remote_ip.
  - `:consumes`, a 1-arity function of a `Plug.Conn.t` which determines the
    amount to consume. Defaults to 1.
  - `:decorate`, a 2-arity function which can return an updated conn based
    on the outcome of the limiter call. The first argument is the
    `Plug.Conn.t`, and the second can be:
      - `{:ok, Bucket.t}`
      - `{:rate_limited, binary}`

    Where the second element is the bucket name that triggered the rate
    limit.

  Additionally, you can configure a custom limiter with

  ```
  config :ex_limiter, ExLimiter.Plug, limiter: MyLimiter
  ```

  and you can also configure the rate limited response with

  ```
  config :ex_limiter, ExLimiter.Plug, fallback: MyFallback
  ```

  `MyFallback` needs to implement a function `render_error(conn, :rate_limited)`
  """
  import Plug.Conn

  @limiter Application.get_env(:ex_limiter, __MODULE__)[:limiter]

  defmodule Config do
    @limit Application.get_env(:ex_limiter, ExLimiter.Plug)[:limit]
    @scale Application.get_env(:ex_limiter, ExLimiter.Plug)[:scale]
    @fallback Application.get_env(:ex_limiter, ExLimiter.Plug)[:fallback]

    defstruct [
      scale: @scale,
      limit: @limit,
      bucket: &ExLimiter.Plug.get_bucket/1,
      consumes: nil,
      decorate: nil,
      fallback: @fallback,
    ]

    def new(opts) do
      contents =
        Enum.into(opts, %{})
        |> Map.put_new(:consumes, fn _ -> 1 end)
        |> Map.put_new(:decorate, &ExLimiter.Plug.decorate/2)

      struct(__MODULE__, contents)
    end
  end

  def get_bucket(%{private: %{phoenix_controller: contr, phoenix_action: ac}} = conn) do
    "#{contr}.#{ac}.#{ip(conn)}"
  end

  def render_error(conn, :rate_limited) do
    conn
    |> resp(429, "Rate Limit Exceeded")
    |> halt()
  end

  @spec decorate(Plug.Conn.t, {:ok, Bucket.t} | {:rate_limited, bucket_name :: binary}) :: Plug.Conn.t
  def decorate(conn, _), do: conn

  def init(opts), do: Config.new(opts)

  def call(conn, %Config{
        bucket: bucket_fun,
        scale: scale,
        limit: limit,
        consumes: consume_fun,
        decorate: decorate_fun,
        fallback: fallback
      }) do
    bucket_name = bucket_fun.(conn)

    bucket_name
    |> @limiter.consume(consume_fun.(conn), scale: scale, limit: limit)
    |> case do
      {:ok, bucket} = response ->
        remaining = @limiter.remaining(bucket, scale: scale, limit: limit)

        conn
        |> put_resp_header("x-ratelimit-limit", to_string(limit))
        |> put_resp_header("x-ratelimit-window", to_string(scale))
        |> put_resp_header("x-ratelimit-remaining", to_string(remaining))
        |> decorate_fun.(response)

      {:error, :rate_limited} ->
        conn
        |> decorate_fun.({:rate_limited, bucket_name})
        |> fallback.render_error(:rate_limited)
    end
  end

  defp ip(conn), do: conn.remote_ip |> Tuple.to_list() |> Enum.join(".")
end
lib/ex_limiter/plug.ex
0.880938
0.921428
plug.ex
starcoder
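Putting the options together, a per-user limit in a Phoenix controller might look like this sketch; the controller module and the `:current_user` assign are assumptions:

```elixir
defmodule MyAppWeb.ApiController do
  use MyAppWeb, :controller

  # 100 requests per minute, bucketed per authenticated user. A named
  # function capture is used so the option survives compile-time plug setup.
  plug ExLimiter.Plug,
    scale: 60_000,
    limit: 100,
    bucket: &__MODULE__.user_bucket/1

  def user_bucket(conn), do: "api.#{conn.assigns.current_user.id}"

  def index(conn, _params), do: json(conn, %{ok: true})
end
```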
defmodule Harald.HCI.InformationalParameters do
  @moduledoc """
  > The Informational Parameters are fixed by the
  > manufacturer of the Bluetooth hardware.
  > These parameters provide information about the BR/EDR
  > Controller and the capabilities of the Link Manager and
  > Baseband in the BR/EDR Controller and PAL in the AMP Controller.
  > The host device cannot modify any of these parameters.

  See Section 7.4 of the Bluetooth spec
  """

  alias Harald.HCI

  @ogf 0x04

  @doc """
  > This command reads the values for the version information for the local
  > Controller.
  """
  def read_local_version(), do: @ogf |> HCI.opcode(0x0001) |> HCI.command()

  @doc """
  > This command reads the list of HCI commands supported for
  > the local Controller. This command shall return the Supported_Commands
  > configuration parameter. It is implied that if a command is
  > listed as supported, the feature underlying that command is also supported.

      iex> read_local_supported_commands()
      <<0x02, 0x10, 0x00>>
  """
  def read_local_supported_commands(), do: @ogf |> HCI.opcode(0x0002) |> HCI.command()

  @doc """
  > This command requests a list of the supported features for the local
  > BR/EDR Controller.

      iex> read_local_supported_features()
      <<0x03, 0x10, 0x00>>
  """
  def read_local_supported_features(), do: @ogf |> HCI.opcode(0x0003) |> HCI.command()

  @doc """
  > On a BR/EDR Controller, this command reads the Bluetooth Controller address
  > On an LE Controller, this command shall read the Public Device Address as defined

      iex> read_bd_addr()
      <<0x09, 0x10, 0x00>>
  """
  def read_bd_addr(), do: @ogf |> HCI.opcode(0x0009) |> HCI.command()

  @doc """
  > The Read_Buffer_Size command is used to read the maximum size of
  > the data portion of HCI ACL and synchronous Data Packets sent
  > from the Host to the Controller. The Host will segment the data
  > to be transmitted from the Host to the Controller according to
  > these sizes, so that the HCI Data Packets will contain data with
  > up to these sizes. The Read_Buffer_Size command also returns the
  > total number of HCI ACL and synchronous Data Packets that can be
  > stored in the data buffers of the Controller. The Read_Buffer_Size
  > command must be issued by the Host before it sends any data to the
  > Controller.

      iex> read_buffer_size()
      <<0x05, 0x10, 0x00>>
  """
  def read_buffer_size(), do: @ogf |> HCI.opcode(0x0005) |> HCI.command()
end
lib/harald/hci/informational_parameters.ex
0.701406
0.74934
informational_parameters.ex
starcoder
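The three-byte binaries in the doctests above follow the standard HCI command packet layout: a 16-bit little-endian opcode (OGF shifted into the top 6 bits, OCF in the lower 10) followed by a one-byte parameter length. A sketch of that packing for `read_bd_addr/0`, assuming `HCI.opcode/2` and `HCI.command/1` implement this layout:

```elixir
ogf = 0x04
ocf = 0x0009

# Opcode = (OGF <<< 10) ||| OCF, serialized little-endian, followed by a
# 0x00 parameter-length byte — matching the doctest value <<0x09, 0x10, 0x00>>.
opcode = Bitwise.bor(Bitwise.bsl(ogf, 10), ocf)
<<opcode::little-size(16), 0x00>>
#=> <<0x09, 0x10, 0x00>>
```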
defmodule HLDSLogs do
  @moduledoc """
  A library for connecting to Half-Life Dedicated Servers (a.k.a "HLDS") and using GenStage
  to produce structured log entries sent from the connected HLDS server.

  Uses a `DynamicSupervisor` for creating producers. If you want to manage the producer
  supervision yourself you can use the `HLDSLogs.LogProducer` module directly, however it
  will still call `HLDSRcon.connect/2` which makes use of another `DynamicSupervisor`.

  ## Quickstart

  If you are running an HLDS server and want to consume log entries from the game server,
  you could connect and consume by calling `HLDSLogs.produce_logs/3`;

  ```
  HLDSLogs.produce_logs(
    %HLDSRcon.ServerInfo{
      host: "127.0.0.1",
      port: 27015
    },
    %HLDSLogs.ListenInfo{
      host: "127.0.0.1"
    },
    consumer_pid
  )
  ```

  Your consumer would then begin receiving `%HLDSLogs.LogEntry` structs as events, for you
  to carry out processing as you wish.
  """

  alias HLDSRcon.ServerInfo
  alias HLDSLogs.ListenInfo

  @doc """
  Similar to `produce_logs/3`, except no consumers will be subscribed to the producer
  immediately after starting
  """
  @spec produce_logs(%HLDSRcon.ServerInfo{}, %HLDSLogs.ListenInfo{}) :: {:ok, pid()}
  def produce_logs(
        %ServerInfo{} = from,
        %ListenInfo{} = to
      ) do
    produce_logs(from, to, [])
  end

  @doc """
  Creates a producer that will connect to the server specified by the `HLDSRcon.ServerInfo`
  struct in `from`, instructing HLDS to connect to the `HLDSLogs.ListenInfo` struct in `to`,
  where the information in `to` will be used for setting up a local UDP socket that must be
  reachable by the HLDS server.

  After the producer is set up, each consumer in `consumers` will be subscribed to the
  producer.

  Returns the producer pid.
  """
  @spec produce_logs(%HLDSRcon.ServerInfo{}, %HLDSLogs.ListenInfo{}, list()) :: {:ok, pid()}
  def produce_logs(
        %ServerInfo{} = from,
        %ListenInfo{} = to,
        consumers
      ) when is_list(consumers) do
    {:ok, pid} =
      DynamicSupervisor.start_child(HLDSLogs.ProducerSupervisor, {HLDSLogs.LogProducer, {from, to}})

    consumers
    |> Enum.map(&GenStage.sync_subscribe(&1, to: pid))

    {:ok, pid}
  end

  def produce_logs(
        %ServerInfo{} = from,
        %ListenInfo{} = to,
        consumer
      ) do
    produce_logs(from, to, [consumer])
  end
end
lib/hlds_logs.ex
0.783823
0.712507
hlds_logs.ex
starcoder
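A minimal consumer sketch to pair with the quickstart above; it simply inspects each `HLDSLogs.LogEntry` event (the module name is an assumption):

```elixir
defmodule MyLogConsumer do
  use GenStage

  def start_link(_opts), do: GenStage.start_link(__MODULE__, :ok)

  @impl true
  def init(:ok), do: {:consumer, :ok}

  @impl true
  def handle_events(events, _from, state) do
    # Each event is a structured log entry produced by HLDSLogs.LogProducer.
    Enum.each(events, &IO.inspect/1)
    {:noreply, [], state}
  end
end
```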
defmodule WxStatusBar do
  import WxUtilities
  import WinInfo
  require Logger

  @moduledoc """
  A status bar is a narrow window that can be placed along the bottom of a
  frame to show small amounts of status information. A status bar can
  contain one or more fields, one or more of which can be variable length
  according to the size of the window.

  The status bar also maintains an independent stack of status texts for
  each field (see pushStatusText() and popStatusText()).
  """

  @doc """
  Create a new status bar and attach it to the main frame. If a :text
  option is supplied, set the text.
  """
  def new(parent, attributes) do
    new_id = :wx_misc.newId()

    defaults = [id: :status_bar, number: nil, style: nil]
    {id, options, restOpts} = getOptions(attributes, defaults)

    Logger.debug("  :wxFrame.createStatusBar(#{inspect(parent)}, #{inspect(options)}")

    sb = :wxFrame.createStatusBar(parent, options)
    put_table({id, new_id, sb})

    defaults = [text: nil]
    {_, options, _restOpts} = getOptions(restOpts, defaults)

    case options[:text] do
      nil -> :ok
      "" -> :ok
      other -> setText(other)
    end

    {id, new_id, sb}
  end

  @doc """
  Set the status bar text. If the supplied text is a string, then set it in
  the first field. If it is a list of strings, then set all fields, setting
  the number of fields to the length of the supplied list.
  """
  def setText(text) when is_binary(text) do
    {_, _, sb} = WinInfo.get_by_name(:status_bar)
    :wxStatusBar.setStatusText(sb, text)
  end

  def setText(textList) when is_list(textList) do
    {_, _, sb} = WinInfo.get_by_name(:status_bar)
    setFieldCount(sb, length(textList))
    setStatusText(sb, textList, 0)
  end

  def setText(text, index) when is_binary(text) do
    {_, _, sb} = WinInfo.get_by_name(:status_bar)
    setFieldCount(sb, index + 1)
    :wxStatusBar.setStatusText(sb, text, [{:number, index}])
  end

  defp setFieldCount(sb, count) do
    n = :wxStatusBar.getFieldsCount(sb)

    cond do
      n < count ->
        :wxStatusBar.setFieldsCount(sb, count)
        n

      n >= count ->
        n
    end
  end

  defp setStatusText(_, [], _), do: :ok

  defp setStatusText(sb, [h | t], n) do
    :wxStatusBar.setStatusText(sb, h, [{:number, n}])
    setStatusText(sb, t, n + 1)
  end
end
lib/ElixirWx/WxStatusBar.ex
0.629888
0.44065
WxStatusBar.ex
starcoder
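Tying the pieces together, a small usage sketch; the `frame` value is assumed to come from an earlier wxFrame setup:

```elixir
# Create a status bar on an existing frame, then drive all three fields;
# setText/1 with a list grows the field count to match.
WxStatusBar.new(frame, id: :status_bar, text: "Ready")
WxStatusBar.setText(["Ready", "Ln 1", "Col 1"])

# Update a single field by index (zero-based).
WxStatusBar.setText("Saved", 0)
```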
defmodule Ello.Auth.JWT do @moduledoc """ Responsible for generating and verifying JWTs. """ @issuer "Ello, PBC" @doc """ Verifies an Ello JWT is correctly signed and has appropriate data. Checks `exp` and `iss` claims. Does not require user info. Returns {:ok, payload} or {:error, reason} """ @spec verify(token :: String.t) :: {:ok, payload :: Map.t} | {:error, reason: String.t} def verify(""), do: verify(nil) def verify(nil), do: {:error, "No token found"} def verify(jwt) do jwt |> Joken.token |> Joken.with_validation("exp", &(&1 > Joken.current_time)) |> Joken.with_validation("iss", &(&1 == @issuer)) |> Joken.with_signer(jwt_signer()) |> Joken.verify! end @doc """ Generate a public JWT token (with no user info) See Ello.Auth.JWT.generate/1 to generate a user token. """ @spec generate() :: String.t def generate do sign_token(%{ exp: Joken.current_time + jwt_exp_duration(), iss: @issuer, }) end @doc """ Generate a user specific JWT token. Expects a map with id: user_id. (User struct is fine). See Ello.Auth.JWT.generate/0 to generate a public token. """ @spec generate(user :: %{id: number}) :: String.t def generate(%{id: id}) do sign_token(%{ exp: Joken.current_time + jwt_exp_duration(), iss: @issuer, data: %{ id: id } }) end defp sign_token(payload) do payload |> Joken.token |> Joken.with_signer(jwt_signer()) |> Joken.sign |> Joken.get_compact end # Let the per environment config set what algorithm we are using for signing # JWT tokens. Defaults to dev/staging/production's RSA Private/Public keys. defp jwt_signer do case Application.get_env(:ello_auth, :jwt_alg, :rs512) do :rs512 -> rs512_signer() :hs256 -> hs256_signer() end end # In Dev/Production we use a RSA Private/Public Key pair to sign tokens. # Steps: # 1. Grab key from config (set in config.ex) # 2. Convert PEM style private key to a JWK (JSON WEB KEY) # 3. Convert to the proper RS512 Joken.Signer defp rs512_signer do pem = Application.get_env(:ello_auth, :jwt_private_key) pem |> JOSE.JWK.from_pem |> Joken.rs512 end # In test we just use a simple string to sign tokens so this service does not # need the private key. def hs256_signer do Joken.hs256(Application.get_env(:ello_auth, :jwt_secret)) end # How long should generated tokens be valid for? defp jwt_exp_duration do Application.get_env(:ello_auth, :jwt_exp_duration, 2700) end end
apps/ello_auth/lib/ello_auth/jwt.ex
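A round-trip sketch for the module above, assuming the `:ello_auth` application is configured with `jwt_alg: :hs256` and a `jwt_secret` (as the test-signer comment suggests). Per the module's own contract, `verify/1` returns `{:ok, payload}` or `{:error, reason}`; claim keys are strings after the JSON round-trip:

```elixir
# Assumed config: config :ello_auth, jwt_alg: :hs256, jwt_secret: "test-secret"
token = Ello.Auth.JWT.generate(%{id: 42})

case Ello.Auth.JWT.verify(token) do
  {:ok, %{"data" => %{"id" => user_id}}} -> {:authenticated, user_id}
  {:ok, _public_claims} -> :authenticated_public
  {:error, reason} -> {:unauthenticated, reason}
end
```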
defmodule Mix.Tasks.Potato.Full do @moduledoc """ Prepare a full release. ## Command line options None ## Notes This task produces a full tar file from a previously run release task, but adds a shell script, `preboot.sh` to the releases folder. The task itself expects that `mix release` has already been run, e.g. ``` MIX_ENV=prod mix do release, potato.full ``` `preboot.sh` enables the system to be downgraded to its original installed state, and should be run _before_ the system is fully booted for the first time, e.g ``` tar xzf myrel-1.0.0.tar.gz sh myrel/release/1.0.0/preboot.sh ``` """ use Mix.Task alias Mix.Project @shortdoc "Prepare a full (upgradeable) release." @impl Mix.Task def run(_args) do app = Keyword.fetch!(Project.config(), :app) ver = Keyword.fetch!(Project.config(), :version) build_path = Project.build_path() root_path = Path.join([build_path, "rel", to_string(app)]) rel_path = Path.join([root_path, "releases"]) ver_path = Path.join([rel_path, to_string(ver)]) for path <- [build_path, root_path, rel_path, ver_path], do: Potato.check_exists(path) # rel file checking Potato.check_releases(root_path, app, [ver], ver) # Pretty much all we need to do for the initial release is copy and rename the rel file. rel_file_src = Path.join([ver_path, to_string(app) <> ".rel"]) rel_file_dst = Path.join([rel_path, to_string(app) <> "-" <> to_string(ver) <> ".rel"]) File.copy!(rel_file_src, rel_file_dst) # And write a script to help create the initial releases file File.write!(Path.join(ver_path, "preboot.sh"), preboot_script(app, ver)) # Tar the release up tarfile = to_charlist(Path.join([build_path, "rel", "#{app}-#{ver}.tar.gz"])) tar_full_release(build_path, app, ver, tarfile) Mix.shell().info("Generated full release in #{tarfile}.") end defp tar_full_release(build_path, rel_name, rel_ver, tarfile) do rel_path = Path.join([build_path, "rel"]) root_path = Path.join([rel_path, to_string(rel_name)]) rel_file = Path.join([root_path, "releases", rel_ver, "#{rel_name}.rel"]) rel_files = case :file.consult(rel_file) do {:ok, [{:release, _, {erts, erts_ver}, app_vers}]} -> [ "bin", "#{erts}-#{erts_ver}", Path.join("releases", "#{rel_ver}"), Path.join("releases", "COOKIE"), Path.join("releases", "start_erl.data"), Path.join("releases", "#{rel_name}-#{rel_ver}.rel") | for( {app_name, app_ver, _} <- app_vers, do: Path.join(["lib", "#{app_name}-#{app_ver}"]) ) ] {:error, reason} -> Mix.raise("Could not read #{rel_file}. #{reason}") end abs_files = Enum.map(rel_files, fn f -> Path.join(root_path, f) end) tar_files = Enum.map(abs_files, fn f -> {to_charlist(Path.relative_to(f, rel_path)), to_charlist(f)} end) case :erl_tar.create(tarfile, tar_files, [:compressed, :dereference]) do :ok -> :ok {:error, reason} -> Mix.raise("Failed to create #{tarfile}. #{reason}") end end defp preboot_script(app, ver) do """ #!/bin/sh SELF=$(readlink "$0" || true) if [ -z "$SELF" ]; then SELF="$0"; fi RR="$(cd "$(dirname "$SELF")/../.." && pwd -P)" $RR/erts-10.7/bin/erl \\ -boot_var RELEASE_LIB "$RR/lib" \\ -boot "$(dirname "$SELF")/start_clean" \\ -noshell \\ -eval "ok = application:start(sasl)." \\ -eval "ok = release_handler:create_RELEASES(\\"$RR\\", \\"$RR/releases\\", \\"$RR/releases/#{ app }-#{ver}.rel\\", [])." \\ -eval "init:stop()." case $? in 0) echo "Release initialized ok." ;; *) echo "Release initialization failed." ;; esac """ end end
lib/mix/tasks/potato/full.ex
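For reference, `tar_full_release/4` above consults the release's `.rel` file and pattern-matches `{:release, _, {erts, erts_ver}, app_vers}` with three-element application tuples. A sketch of the term shape it expects; every name and version below is a placeholder:

```elixir
# Placeholder contents of _build/prod/rel/myrel/releases/1.0.0/myrel.rel
# as read by :file.consult/1.
{:release, {'myrel', '1.0.0'}, {:erts, '10.7'},
 [
   {:kernel, '6.5.2', :permanent},
   {:stdlib, '3.11.2', :permanent},
   {:myrel, '1.0.0', :permanent}
 ]}
```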
defmodule Crux.Structs.BitField do
  @moduledoc """
  Custom behaviour (not part of the Discord API) to help with bitfields of all kinds.
  """
  @moduledoc since: "0.2.3"

  @typedoc """
  The name of a bit flag.
  """
  @typedoc since: "0.2.3"
  @type name :: atom()

  @doc """
  Get a map of `t:name/0`s and their corresponding bit values.
  """
  @doc since: "0.2.3"
  @callback flags() :: %{required(name()) => t()}

  @doc """
  Get a list of all available `t:name/0`s.
  """
  @doc since: "0.2.3"
  @callback names() :: [name()]

  @doc """
  Get a bitfield representing all available bits set.
  """
  @doc since: "0.2.3"
  @callback all() :: t()

  @typedoc """
  All valid types that can be directly resolved into a bitfield.
  """
  @typedoc since: "0.2.3"
  @type resolvable() :: t() | non_neg_integer() | String.t() | name() | [resolvable()]

  @typedoc """
  Represents a bitfield of a module implementing the `Crux.Structs.BitField` behaviour.
  """
  @typedoc since: "0.2.3"
  @type t :: non_neg_integer()

  @doc """
  Resolve a `t:resolvable/0` into a bitfield.
  """
  @doc since: "0.2.3"
  @callback resolve(resolvable()) :: t()

  @doc """
  Serialize a `t:resolvable/0` into a map, mapping each `t:name/0` to whether it is set.
  """
  @doc since: "0.2.3"
  @callback to_map(resolvable()) :: %{required(name()) => boolean()}

  @doc """
  Serialize a `t:resolvable/0` into a list of set bit flag names.
  """
  @doc since: "0.2.3"
  @callback to_list(resolvable()) :: [name()]

  @doc """
  Add all set bits of `to_add` to `base`.
  """
  @doc since: "0.2.3"
  @callback add(base :: resolvable(), to_add :: resolvable()) :: t()

  @doc """
  Remove all set bits of `to_remove` from `base`.
  """
  @doc since: "0.2.3"
  @callback remove(base :: resolvable(), to_remove :: resolvable()) :: t()

  @doc """
  Check whether the `t:resolvable/0` you `have` has everything set you `want`.
  """
  @doc since: "0.2.3"
  @callback has(have :: resolvable(), want :: resolvable()) :: boolean()

  @doc """
  Return a `t:t/0` of all bits you `want` but not `have`.
  """
  @doc since: "0.2.3"
  @callback missing(have :: resolvable(), want :: resolvable()) :: t()

  defmacro __using__(flags) do
    quote location: :keep do
      @behaviour Crux.Structs.BitField

      use Bitwise

      @flags unquote(flags)

      @doc """
      Get a map of `t:name/0`s and their corresponding bit values.
      """
      @spec flags() :: %{required(name()) => t()}
      def flags(), do: @flags

      @names Map.keys(@flags)

      @doc """
      Get a list of all available `t:name/0`s.
      """
      @spec names() :: [name()]
      def names(), do: @names

      @all Enum.reduce(@flags, 0, fn {_name, bit}, acc -> bit ||| acc end)

      @doc """
      Get an integer representing all available bits set.
      """
      @spec all() :: t()
      def all(), do: @all

      @typedoc """
      All valid types that can be directly resolved into a bitfield.
      """
      @type resolvable() :: t() | raw() | name() | [resolvable()]

      @type t :: non_neg_integer()

      @typedoc """
      Raw bitfield that can be used as a `t:resolvable/0`.
      """
      @type raw :: non_neg_integer() | String.t()

      @doc """
      Resolve a `t:resolvable/0` into a bitfield.
      """
      @spec resolve(resolvable()) :: t()
      def resolve(resolvable)

      def resolve(resolvable) when is_integer(resolvable) and resolvable >= 0 do
        resolvable
      end

      def resolve(resolvable) when resolvable in @names do
        Map.get(@flags, resolvable)
      end

      def resolve(resolvable) when is_list(resolvable) do
        Enum.reduce(resolvable, 0, &(resolve(&1) ||| &2))
      end

      def resolve(resolvable) when is_binary(resolvable) do
        case Integer.parse(resolvable) do
          {bitfield, ""} -> resolve(bitfield)
          _ -> raise_resolve(resolvable)
        end
      end

      def resolve(resolvable) do
        raise_resolve(resolvable)
      end

      defp raise_resolve(resolvable) do
        raise ArgumentError, """
        Expected a name atom, a non-negative integer, or a list of them.
        Received: #{inspect(resolvable)}
        """
      end

      @doc """
      Serialize a `t:resolvable/0` into a map, mapping each bit flag name to whether it is set.
      """
      @spec to_map(resolvable()) :: %{required(name()) => boolean()}
      def to_map(resolvable) do
        bitfield = resolve(resolvable)

        Map.new(@names, &{&1, has(bitfield, &1)})
      end

      @doc """
      Serialize a `t:resolvable/0` into a list of set bit flag names.
      """
      @spec to_list(resolvable()) :: [name()]
      def to_list(resolvable) do
        resolvable = resolve(resolvable)

        Enum.reduce(@flags, [], fn {name, val}, acc ->
          if has(resolvable, val), do: [name | acc], else: acc
        end)
      end

      @doc """
      Add all set bits of `to_add` to `base`.
      """
      @spec add(base :: resolvable(), to_add :: resolvable()) :: t()
      def add(base, to_add) do
        to_add = resolve(to_add)

        base
        |> resolve()
        |> bor(to_add)
      end

      @doc """
      Remove all set bits of `to_remove` from `base`.
      """
      @spec remove(base :: resolvable(), to_remove :: resolvable()) :: t()
      def remove(base, to_remove) do
        to_remove =
          to_remove
          |> resolve()
          |> bnot()

        base
        |> resolve()
        |> band(to_remove)
      end

      @doc """
      Check whether the `t:resolvable/0` you `have` has everything set you `want`.
      """
      @spec has(
              have :: resolvable(),
              want :: resolvable()
            ) :: boolean()
      def has(have, want) do
        have = resolve(have)
        want = resolve(want)

        (have &&& want) == want
      end

      @doc """
      Return a `t:t/0` of all bits you `want` but not `have`.
      """
      @spec missing(resolvable(), resolvable()) :: t()
      def missing(have, want) do
        have = resolve(have)
        want = resolve(want)

        want
        |> band(~~~have)
      end
    end
  end
end
lib/structs/bit_field.ex
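A hypothetical bitfield built on this behaviour, to show the helpers the `__using__/1` macro injects. The flags are passed as a map, since the macro calls `Map.keys/1` on them; the module name and flag names below are made up:

```elixir
defmodule MyPermissions do
  use Crux.Structs.BitField, %{read: 0b001, write: 0b010, admin: 0b100}
end

MyPermissions.resolve([:read, :write])
# => 3

MyPermissions.add(:read, :admin)
# => 5

MyPermissions.has(0b011, :write)
# => true

MyPermissions.missing(:read, [:read, :admin])
# => 4

MyPermissions.to_list(0b101)
# => a list containing :read and :admin (order unspecified)
```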
defmodule Ecto.OLAP.GroupingSets do
  @moduledoc """
  Helpers for advanced grouping functions in SQL.

  **WARNING**: Currently only PostgreSQL is supported

  # Example data

  All examples assume we have table `grouping` with content:

  | foo | bar | baz |
  | --- | --- | --- |
  | a   | 1   | c   |
  | a   | 1   | d   |
  | a   | 2   | c   |
  | b   | 2   | d   |
  | b   | 3   | c   |
  """

  @type column :: any()
  @type columns :: tuple() | list(column())
  @opaque query :: tuple()

  @doc """
  Group by each set of columns in `groups`.

  ## Params

  - `groups` - list of tuples or lists of columns to group by; an empty
    tuple/list means aggregating over all rows (the grand total)

  ## Example

      iex> import Ecto.Query
      iex> import Ecto.OLAP.GroupingSets
      iex>
      iex> alias Ecto.Integration.TestRepo
      iex>
      iex> TestRepo.all from entry in "grouping",
      ...>   group_by: grouping_sets([{entry.foo, entry.bar}, {entry.foo}]),
      ...>   order_by: [entry.foo, entry.bar],
      ...>   select: %{foo: entry.foo, bar: entry.bar, count: count(entry.foo)}
      [%{foo: "a", bar: 1, count: 2},
       %{foo: "a", bar: 2, count: 1},
       %{foo: "a", bar: nil, count: 3},
       %{foo: "b", bar: 2, count: 1},
       %{foo: "b", bar: 3, count: 1},
       %{foo: "b", bar: nil, count: 2}]
  """
  @spec grouping_sets([columns]) :: query
  defmacro grouping_sets(groups) when is_list(groups) do
    groups
    |> Enum.map(&to_sql/1)
    |> query("GROUPING SETS")
  end

  @doc """
  Create prefix list of given columns.

  This is shorthand notation for all prefixes of the given column list.

      from e in "grouping",
        group_by: rollup([e.foo, e.bar]),
        # …

  Will be equivalent to:

      from e in "grouping",
        group_by: grouping_sets([{e.foo, e.bar}, {e.foo}, {}]),
        # …

  See `grouping_sets/1` for details.

  ## Example

      iex> import Ecto.Query
      iex> import Ecto.OLAP.GroupingSets
      iex> alias Ecto.Integration.TestRepo
      iex>
      iex> TestRepo.all from entry in "grouping",
      ...>   group_by: rollup([entry.foo, entry.bar]),
      ...>   order_by: [entry.foo, entry.bar],
      ...>   select: %{foo: entry.foo, bar: entry.bar, count: count(entry.foo)}
      [%{foo: "a", bar: 1, count: 2},
       %{foo: "a", bar: 2, count: 1},
       %{foo: "a", bar: nil, count: 3},
       %{foo: "b", bar: 2, count: 1},
       %{foo: "b", bar: 3, count: 1},
       %{foo: "b", bar: nil, count: 2},
       %{foo: nil, bar: nil, count: 5}]
  """
  @spec rollup([column]) :: query
  defmacro rollup(columns), do: query(columns, "ROLLUP")

  @doc """
  Create cube of given columns.

  This is shorthand notation for all combinations of the given columns.

      from e in "grouping",
        group_by: cube([e.foo, e.bar, e.baz]),
        # …

  Will be equivalent to:

      from e in "grouping",
        group_by: grouping_sets([{e.foo, e.bar, e.baz},
                                 {e.foo, e.bar       },
                                 {e.foo,        e.baz},
                                 {e.foo,             },
                                 {       e.bar, e.baz},
                                 {       e.bar       },
                                 {              e.baz},
                                 {                   }]),
        # …

  See `grouping_sets/1` for details.
  ## Example

      iex> import Ecto.Query
      iex> import Ecto.OLAP.GroupingSets
      iex> alias Ecto.Integration.TestRepo
      iex>
      iex> TestRepo.all from entry in "grouping",
      ...>   group_by: cube([entry.foo, entry.bar, entry.baz]),
      ...>   order_by: [entry.foo, entry.bar, entry.baz],
      ...>   select: %{foo: entry.foo, bar: entry.bar, baz: entry.baz, count: count(entry.foo)}
      [%{foo: "a", bar: 1, baz: "c", count: 1},
       %{foo: "a", bar: 1, baz: "d", count: 1},
       %{foo: "a", bar: 1, baz: nil, count: 2},
       %{foo: "a", bar: 2, baz: "c", count: 1},
       %{foo: "a", bar: 2, baz: nil, count: 1},
       %{foo: "a", bar: nil, baz: "c", count: 2},
       %{foo: "a", bar: nil, baz: "d", count: 1},
       %{foo: "a", bar: nil, baz: nil, count: 3},
       %{foo: "b", bar: 2, baz: "d", count: 1},
       %{foo: "b", bar: 2, baz: nil, count: 1},
       %{foo: "b", bar: 3, baz: "c", count: 1},
       %{foo: "b", bar: 3, baz: nil, count: 1},
       %{foo: "b", bar: nil, baz: "c", count: 1},
       %{foo: "b", bar: nil, baz: "d", count: 1},
       %{foo: "b", bar: nil, baz: nil, count: 2},
       %{foo: nil, bar: 1, baz: "c", count: 1},
       %{foo: nil, bar: 1, baz: "d", count: 1},
       %{foo: nil, bar: 1, baz: nil, count: 2},
       %{foo: nil, bar: 2, baz: "c", count: 1},
       %{foo: nil, bar: 2, baz: "d", count: 1},
       %{foo: nil, bar: 2, baz: nil, count: 2},
       %{foo: nil, bar: 3, baz: "c", count: 1},
       %{foo: nil, bar: 3, baz: nil, count: 1},
       %{foo: nil, bar: nil, baz: "c", count: 3},
       %{foo: nil, bar: nil, baz: "d", count: 2},
       %{foo: nil, bar: nil, baz: nil, count: 5}]
  """
  @spec cube([column]) :: query
  defmacro cube(columns), do: query(columns, "CUBE")

  @doc """
  Select operator that provides a bitmask for a given grouping set.

  The argument needs to be exactly the same as the list given to the grouping
  set command. Bits are assigned with the rightmost argument being the
  least-significant bit. Each bit is `0` if the corresponding expression is in
  the grouping criteria, and `1` if it is not.

  ## Example

      iex> import Ecto.Query
      iex> import Ecto.OLAP.GroupingSets
      iex> alias Ecto.Integration.TestRepo
      iex>
      iex> TestRepo.all from entry in "grouping",
      ...>   group_by: cube([entry.foo, entry.bar]),
      ...>   order_by: [entry.foo, entry.bar],
      ...>   select: %{cols: grouping([entry.foo, entry.bar]), count: count(entry.foo)}
      [%{cols: 0b00, count: 2},
       %{cols: 0b00, count: 1},
       %{cols: 0b01, count: 3},
       %{cols: 0b00, count: 1},
       %{cols: 0b00, count: 1},
       %{cols: 0b01, count: 2},
       %{cols: 0b10, count: 2},
       %{cols: 0b10, count: 2},
       %{cols: 0b10, count: 1},
       %{cols: 0b11, count: 5}]

  See also `cube/1`.
  """
  @spec grouping([column]) :: query
  defmacro grouping(columns), do: query(columns, "GROUPING")

  defp query(data, name) do
    quote do: fragment(unquote(name <> " ?"), unquote(fragment_list(data)))
  end

  defp fragment_list(list) when is_list(list) do
    query =
      "?"
      |> List.duplicate(Enum.count(list))
      |> Enum.join(",")

    quote do: fragment(unquote("(" <> query <> ")"), unquote_splicing(list))
  end

  defp to_sql({:{}, _, data}), do: to_sql(data)
  defp to_sql(tuple) when is_tuple(tuple), do: to_sql(Tuple.to_list(tuple))
  defp to_sql(list) when is_list(list), do: fragment_list(list)
end
lib/grouping_sets.ex
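Per `to_sql/1` above, each grouping set may also be written as a list instead of a tuple; a sketch of the list form under that reading, with semantics identical to the tuple examples in the docs:

```elixir
import Ecto.Query
import Ecto.OLAP.GroupingSets

# [] (like {}) denotes the grand-total grouping set.
query =
  from entry in "grouping",
    group_by: grouping_sets([[entry.foo, entry.bar], [entry.foo], []]),
    select: %{foo: entry.foo, bar: entry.bar, count: count(entry.foo)}
```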
defmodule Toml.Decoder do
  @moduledoc false

  alias Toml.Document
  alias Toml.Builder
  alias Toml.Lexer

  @compile :inline_list_funcs
  @compile inline: [
             pop_skip: 2,
             peek_skip: 2,
             iodata_to_str: 1,
             iodata_to_integer: 1,
             iodata_to_float: 1
           ]

  @doc """
  Decodes a raw binary into a map. `Toml.Error` is raised if decoding fails.
  """
  @spec decode!(binary, Toml.opts()) :: map | no_return
  def decode!(bin, opts) when is_binary(bin) and is_list(opts) do
    # Raise if the filename is invalid
    filename =
      case Keyword.get(opts, :filename, "nofile") do
        name when is_binary(name) ->
          name

        n ->
          raise ArgumentError, "invalid :filename option '#{inspect(n)}', must be a binary!"
      end

    # Get our lexer outside the try so that we can stop it if things go south
    # This can only fail if the lexer process itself crashes during init
    {:ok, lexer} = Lexer.new(bin)

    try do
      case do_decode(lexer, bin, Builder.new(opts)) do
        {:ok, doc} ->
          case Document.to_map(doc) do
            {:ok, result} ->
              result

            {:error, reason} ->
              raise Toml.Error, reason
          end

        {:error, reason, skip, lines} ->
          raise Toml.Error, format_error(reason, bin, filename, skip, lines)
      end
    catch
      :throw, {:error, {:invalid_toml, reason}} ->
        raise Toml.Error, reason

      :throw, {:badarg, {option, value, valid}} ->
        raise Toml.Error, {:badarg, option, value, valid}
    after
      Lexer.stop(lexer)
    end
  end

  @doc """
  Decodes a raw binary safely, returns `{:ok, map}` or `{:error, reason}`
  """
  @spec decode(binary, Toml.opts()) :: {:ok, map} | Toml.error()
  def decode(bin, opts) when is_binary(bin) and is_list(opts) do
    {:ok, decode!(bin, opts)}
  rescue
    err in [Toml.Error] ->
      {:error, {:invalid_toml, Exception.message(err)}}

    err in [ArgumentError] ->
      {:error, err.message}
  catch
    :error, reason ->
      {:error, Toml.Error.format_reason(reason)}
  end

  @doc """
  Decodes a stream. Raises `Toml.Error` if decoding fails.
  """
  @spec decode_stream!(Enumerable.t(), Toml.opts()) :: map | no_return
  def decode_stream!(stream, opts) do
    decode!(Enum.into(stream, <<>>), opts)
  end

  @doc """
  Decodes a stream safely. Returns same type as `decode/2`
  """
  @spec decode_stream(Enumerable.t(), Toml.opts()) :: {:ok, map} | Toml.error()
  def decode_stream(stream, opts) do
    decode(Enum.into(stream, <<>>), opts)
  end

  @doc """
  Decodes a file. Raises `Toml.Error` if decoding fails.
  """
  @spec decode_file!(String.t(), Toml.opts()) :: map | no_return
  def decode_file!(path, opts) when is_binary(path) do
    with {:ok, opts} <- set_filename_opt(opts, path),
         bin = File.read!(path) do
      decode!(bin, opts)
    end
  end

  @doc """
  Decodes a file safely.
Returns same type as `decode/2` """ @spec decode_file(String.t(), Toml.opts()) :: {:ok, map} | Toml.error() def decode_file(path, opts) when is_binary(path) do with {:ok, opts} <- set_filename_opt(opts, path), {:ok, bin} <- File.read(path) do decode(bin, opts) else {:error, reason} -> {:error, "unable to open file '#{Path.relative_to_cwd(path)}': #{inspect(reason)}"} end end defp set_filename_opt(opts, default) do case Keyword.get(opts, :filename) do nil -> {:ok, Keyword.put(opts, :filename, default)} _name -> {:ok, opts} end end ## Decoder Implementation @spec do_decode(Lexer.t(), binary, Document.t()) :: {:ok, Document.t()} | Lexer.lexer_err() defp do_decode(lexer, original, %Document{} = doc) do with {:ok, {type, skip, data, lines}} <- Lexer.pop(lexer) do handle_token(lexer, original, doc, type, skip, data, lines) end end # Converts an error into a friendly, printable representation defp format_error(reason, original, filename, skip, lines) do msg = "#{Toml.Error.format_reason(reason)} in #{Path.relative_to_cwd(filename)} on line #{lines}" {ctx, pos} = seek_line(original, skip - 1, lines) """ #{msg}: #{ctx} #{String.duplicate(" ", pos)}^ at column #{pos} """ end # Finds the line of context for display in a formatted error defp seek_line(original, skip, lines) do seek_line(original, original, 0, 0, skip, lines - 1) end defp seek_line(original, rest, lastnl, from, len, 0) do case seek_to_eol(rest, 0) do 0 -> {binary_part(original, lastnl, from - lastnl), from - lastnl} len_to_eol when len_to_eol > 0 -> {binary_part(original, from, len_to_eol), len} end end defp seek_line(original, <<?\r, ?\n, rest::binary>>, lastnl, from, len, 1) when len <= 0 do # Content occurred on the last line right before the newline seek_line(original, rest, lastnl, from + 2, 0, 0) end defp seek_line(original, <<?\r, ?\n, rest::binary>>, _, from, len, lines) do seek_line(original, rest, from + 2, from + 2, len - 2, lines - 1) end defp seek_line(original, <<?\n, rest::binary>>, lastnl, from, len, 1) when len <= 0 do # Content occurred on the last line right before the newline seek_line(original, rest, lastnl, from + 1, 0, 0) end defp seek_line(original, <<?\n, rest::binary>>, _, from, len, lines) do seek_line(original, rest, from + 1, from + 1, len - 1, lines - 1) end defp seek_line(original, <<_::utf8, rest::binary>>, lastnl, from, len, lines) do seek_line(original, rest, lastnl, from + 1, len - 1, lines) end # Find the number of bytes to the end of the current line in the input defp seek_to_eol(<<>>, len), do: len defp seek_to_eol(<<?\r, ?\n, _::binary>>, len), do: len defp seek_to_eol(<<?\n, _::binary>>, len), do: len defp seek_to_eol(<<_::utf8, rest::binary>>, len) do seek_to_eol(rest, len + 1) end # Handles invalid byte sequences (i.e. 
invalid unicode) # Considers the invalid byte as EOL so context can still be shown defp seek_to_eol(<<_, _rest::binary>>, len), do: len # Skip top-level whitespace and newlines @spec handle_token( Lexer.t(), binary, Document.t(), Lexer.type(), Lexer.skip(), binary | non_neg_integer, Lexer.lines() ) :: {:ok, Document.t()} | Lexer.lexer_err() defp handle_token(lexer, original, doc, :whitespace, _skip, _data, _lines), do: do_decode(lexer, original, doc) defp handle_token(lexer, original, doc, :newline, _skip, _data, _lines), do: do_decode(lexer, original, doc) # Push comments on the comment stack defp handle_token(lexer, original, doc, :comment, skip, size, _lines) do comment = binary_part(original, skip - size, size) do_decode(lexer, original, Builder.push_comment(doc, comment)) end # Handle valid top-level entities # - array of tables # - table # - key/value defp handle_token(lexer, original, doc, ?\[, skip, data, lines) do case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {?\[, _, _, _}} -> # Push opening bracket, second was peeked, so no need Lexer.push(lexer, {?\[, skip, data, lines}) handle_table_array(lexer, original, doc) {:ok, {_, _, _, _}} -> Lexer.push(lexer, {?\[, skip, data, lines}) handle_table(lexer, original, doc) end end defp handle_token(lexer, original, doc, type, skip, data, lines) when type in [:digits, :alpha, :string] do Lexer.push(lexer, {type, skip, data, lines}) with {:ok, key, _skip, _lines} <- key(lexer) do handle_key(lexer, original, doc, key) end end defp handle_token(lexer, original, doc, type, skip, _data, lines) when type in '-_' do handle_token(lexer, original, doc, :string, skip, <<type::utf8>>, lines) end # We're done defp handle_token(_lexer, _original, doc, :eof, _skip, _data, _lines) do {:ok, doc} end # Anything else at top-level is invalid defp handle_token(_lexer, _original, _doc, type, skip, data, lines) do {:error, {:invalid_token, {type, data}}, skip, lines} end defp handle_key(lexer, original, doc, key) do with {:ok, {?=, _, _, _}} <- pop_skip(lexer, [:whitespace]), {:ok, value} <- value(lexer), {:ok, doc} <- Builder.push_key(doc, key, value) do # Make sure key/value pairs are separated by a newline case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {:comment, _, _, _}} -> # Implies newline do_decode(lexer, original, doc) {:ok, {type, _, _, _}} when type in [:newline, :eof] -> do_decode(lexer, original, doc) {:ok, {type, skip, data, lines}} -> {:error, {:expected, :newline, {type, data}}, skip, lines} end else {:error, _, _, _} = err -> err {:ok, {type, skip, data, lines}} -> {:error, {:expected, ?=, {type, data}}, skip, lines} end end defp handle_table(lexer, original, doc) do # Guaranteed to have an open bracket with {:ok, {?\[, _, _, _}} <- pop_skip(lexer, [:whitespace]), {:ok, key, _line, _col} <- key(lexer), {:ok, {?\], _, _, _}} <- pop_skip(lexer, [:whitespace]), {:ok, doc} <- Builder.push_table(doc, key) do # Make sure table opening is followed by newline case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {:comment, _, _, _}} -> do_decode(lexer, original, doc) {:ok, {type, _, _, _}} when type in [:newline, :eof] -> do_decode(lexer, original, doc) {:ok, {type, skip, data, lines}} -> {:error, {:expected, :newline, {type, data}}, skip, lines} end else {:error, _, _, _} = err -> err {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end end defp handle_table_array(lexer, original, doc) do # Guaranteed to have two open brackets 
with {:ok, {?\[, _, _, _}} <- pop_skip(lexer, [:whitespace]), {:ok, {?\[, _, _, _}} <- pop_skip(lexer, [:whitespace]), {:ok, key, _, _} <- key(lexer), {_, {:ok, {?\], _, _, _}}} <- {:close, pop_skip(lexer, [:whitespace])}, {_, {:ok, {?\], _, _, _}}} <- {:close, pop_skip(lexer, [:whitespace])}, {:ok, doc} <- Builder.push_table_array(doc, key) do # Make sure table opening is followed by newline case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {:comment, _, _, _}} -> do_decode(lexer, original, doc) {:ok, {type, _, _, _}} when type in [:newline, :eof] -> do_decode(lexer, original, doc) {:ok, {type, skip, data, lines}} -> {:error, {:expected, :newline, {type, data}}, skip, lines} end else {:error, _, _, _} = err -> err {_, {:error, _, _, _} = err} -> err {:close, {:ok, {type, skip, data, lines}}} -> {:error, {:unclosed_table_array_name, {type, data}}, skip, lines} end end defp maybe_integer(lexer) do case pop_skip(lexer, [:whitespace]) do {:ok, {type, _skip, _data, _lines}} when type in '-+' -> # Can be integer, float case Lexer.peek(lexer) do {:error, _, _, _} = err -> err # Handle infinity/nan with leading sign {:ok, {:alpha, _, "inf", _}} -> Lexer.advance(lexer) if type == ?\+ do {:ok, :infinity} else {:ok, :negative_infinity} end {:ok, {:alpha, _, "nan", _}} -> Lexer.advance(lexer) if type == ?\+ do {:ok, :nan} else {:ok, :negative_nan} end # Must be a signed integer or float {:ok, {:digits, _, d, _}} -> Lexer.advance(lexer) maybe_integer(lexer, [d, type]) # Invalid {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end {:ok, {:digits, skip, <<leader::utf8, _::utf8, _::utf8, _::utf8>> = d, lines} = token} -> # Could be a datetime case Lexer.peek(lexer) do {:ok, {?-, _, _, _}} -> # This is a date or datetime Lexer.push(lexer, token) maybe_datetime(lexer) {:ok, {?., _, _, _}} -> # Float Lexer.advance(lexer) float(lexer, ?., [?., d]) {:ok, {:alpha, _, <<c::utf8>>, _}} when c in 'eE' -> # Float Lexer.advance(lexer) float(lexer, ?e, [?e, ?0, ?., d]) {:ok, {?_, _, _, _}} -> # Integer maybe_integer(lexer, [d]) _ -> # Just an integer if leader == ?0 do # Leading zeroes not allowed {:error, {:invalid_integer, :leading_zero}, skip, lines} else {:ok, String.to_integer(d)} end end {:ok, {:digits, skip, <<leader::utf8, _::utf8>> = d, lines} = token} -> # Could be a time case Lexer.peek(lexer) do {:ok, {?:, _, _, _}} -> # This is a local time Lexer.push(lexer, token) time(lexer) _ -> # It's just an integer if leader == ?0 do # Leading zeros not allowed {:error, {:invalid_integer, :leading_zero}, skip, lines} else maybe_integer(lexer, [d]) end end {:ok, {:digits, _, d, _}} -> # Just a integer or float maybe_integer(lexer, [d]) {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end end defp maybe_integer(lexer, parts) do case Lexer.pop(lexer) do {:error, _, _, _} = err -> err {:ok, {?., _, _, _}} -> # Float float(lexer, ?., [?. | parts]) {:ok, {:alpha, _, <<c::utf8>>, _}} when c in 'eE' -> # Float, need to add .0 before e, or String.to_float fails float(lexer, ?e, [?e, ?0, ?. 
| parts]) {:ok, {?_, _, _, _}} -> case Lexer.peek(lexer) do {:ok, {:digits, _, d, _}} -> # Allowed, just skip the underscore Lexer.advance(lexer) maybe_integer(lexer, [d | parts]) {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} {:error, _, _, _} = err -> err end {:ok, {_, skip, _, lines} = token} -> # Just an integer Lexer.push(lexer, token) with {:ok, _} = result <- iodata_to_integer(parts) do result else {:error, reason} -> {:error, reason, skip, lines} end end end defp float(lexer, signal, [last | _] = parts) do case Lexer.pop(lexer) do {:error, _, _, _} = err -> err {:ok, {?., skip, _, lines}} -> # Always an error at this point, as either duplicate or after E {:error, {:invalid_float, {?., 0}}, skip, lines} {:ok, {sign, _, _, _}} when sign in '-+' and last == ?e -> # +/- are allowed after e/E float(lexer, signal, [sign | parts]) {:ok, {:alpha, _, <<c::utf8>>, _}} when c in 'eE' and signal == ?. -> # Valid if after a dot float(lexer, ?e, [?e | parts]) {:ok, {?_, skip, _, lines}} when last not in '_e.' -> # Valid only when surrounded by digits with {:ok, {:digits, _, d, _}} <- Lexer.peek(lexer), _ = Lexer.advance(lexer) do float(lexer, signal, [d | parts]) else {:error, _, _, _} = err -> err {:ok, {_, _, _, _}} -> {:error, {:invalid_float, {?_, 0}}, skip, lines} end {:ok, {:digits, _, d, _}} -> float(lexer, signal, [d | parts]) {:ok, {type, skip, data, lines}} when last in 'e.' -> # Incomplete float {:error, {:invalid_float, {type, data}}, skip, lines} {:ok, {_type, skip, _data, lines} = token} when last not in '_e.' -> # Done Lexer.push(lexer, token) with {:ok, _} = result <- iodata_to_float(parts) do result else {:error, reason} -> {:error, reason, skip, lines} end end end defp time(lexer) do # At this point we know we have at least HH: with {:ok, {:digits, skip, <<_::utf8, _::utf8>> = hh, lines}} <- Lexer.pop(lexer), {:ok, {?:, _, _, _}} <- Lexer.pop(lexer), {:ok, {:digits, _, <<_::utf8, _::utf8>> = mm, _}} <- Lexer.pop(lexer), {:ok, {?:, _, _, _}} <- Lexer.pop(lexer), {:ok, {:digits, _, <<_::utf8, _::utf8>> = ss, _}} <- Lexer.pop(lexer) do # Check for fractional parts = [ss, ?:, mm, ?:, hh] parts = case Lexer.peek(lexer) do {:ok, {?., _, _, _}} -> Lexer.advance(lexer) case Lexer.pop(lexer) do {:ok, {:digits, _, d, _}} -> [d, ?. 
| parts] {:ok, {type, skip, data, lines}} -> # Invalid throw({:error, {:invalid_token, {type, data}}, skip, lines}) {:error, reason, skip, lines} -> throw({:error, {:invalid_fractional_seconds, reason}, skip, lines}) end {:ok, _} -> parts end case Time.from_iso8601(iodata_to_str(parts)) do {:ok, _} = result -> result {:error, :invalid_time} -> {:error, :invalid_time, skip, lines} {:error, reason} -> {:error, {:invalid_time, reason}, skip, lines} end else {:error, _, _, _} = err -> err {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end catch :throw, {:error, _, _, _} = err -> err end defp maybe_datetime(lexer) do # At this point we have at least YYYY- with {:ok, {:digits, _, <<_::utf8, _::utf8, _::utf8, _::utf8>> = yy, _}} <- Lexer.pop(lexer), {:ok, {?-, _, _, _}} <- Lexer.pop(lexer), {:ok, {:digits, _, <<_::utf8, _::utf8>> = mm, _}} <- Lexer.pop(lexer), {:ok, {?-, _, _, _}} <- Lexer.pop(lexer), {:ok, {:digits, skip, <<_::utf8, _::utf8>> = dd, lines}} <- Lexer.pop(lexer) do # At this point we have a full date, check for time case Lexer.pop(lexer) do {:ok, {:alpha, _, "T", _}} -> # Expecting a time with {:ok, time} <- time(lexer) do datetime(lexer, [dd, ?-, mm, ?-, yy], time) end {:ok, {:whitespace, _, _, _}} -> case Lexer.peek(lexer) do {:ok, {:digits, _, <<_::utf8, _::utf8>>, _}} -> # Expecting a time with {:ok, time} <- time(lexer) do datetime(lexer, [dd, ?-, mm, ?-, yy], time) end _ -> # Just a date case Date.from_iso8601(iodata_to_str([dd, ?-, mm, ?-, yy])) do {:ok, _} = result -> result {:error, :invalid_date} -> {:error, :invalid_date, skip, lines} {:error, reason} -> {:error, {:invalid_date, reason}, skip, lines} end end {:ok, {_type, skip, _data, lines} = token} -> # Just a date Lexer.push(lexer, token) case Date.from_iso8601(iodata_to_str([dd, ?-, mm, ?-, yy])) do {:ok, _} = result -> result {:error, :invalid_date} -> {:error, :invalid_date, skip, lines} {:error, reason} -> {:error, {:invalid_date, reason}, skip, lines} end end else {:error, _, _, _} = err -> err {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end end defp datetime(lexer, parts, time) do # Track the current lexer position for errors {:ok, skip, lines} = Lexer.pos(lexer) # Convert parts to string datestr = iodata_to_str(parts) # At this point we have at least YYYY-mm-dd and a fully decoded time with {_, {:ok, date}} <- {:date, Date.from_iso8601(datestr)}, {_, {:ok, naive}} <- {:datetime, NaiveDateTime.new(date, time)} do # We just need to check for Z or UTC offset case Lexer.pop(lexer) do {:ok, {:alpha, _, "Z", _}} -> DateTime.from_naive(naive, "Etc/UTC") {:ok, {sign, _, _, _}} when sign in '-+' -> # We have an offset with {:ok, {:digits, _, <<_::utf8, _::utf8>> = hh, _}} <- Lexer.pop(lexer), {:ok, {?:, _, _, _}} <- Lexer.pop(lexer), {:ok, {:digits, _, <<_::utf8, _::utf8>> = mm, _}} <- Lexer.pop(lexer) do # Shift naive to account for offset hours = String.to_integer(hh) mins = String.to_integer(mm) offset = hours * 60 * 60 + mins * 60 naive = case sign do ?- -> NaiveDateTime.add(naive, offset * -1, :second) ?+ -> NaiveDateTime.add(naive, offset, :second) end DateTime.from_naive(naive, "Etc/UTC") else {:error, _, _, _} = err -> err {:ok, {type, skip, data, lines}} -> {:error, {:invalid_datetime_offset, {type, data}}, skip, lines} end {:ok, {type, _, _, _} = token} when type in [:eof, :whitespace, :newline] -> # Just a local date/time Lexer.push(lexer, token) {:ok, naive} end else {:date, {:error, :invalid_date}} -> {:error, 
{:invalid_date, datestr}, skip, lines} {:date, {:error, reason}} -> {:error, {:invalid_date, reason, datestr}, skip, lines} {:datetime, {:error, :invalid_date}} -> {:error, {:invalid_datetime, {datestr, time}}, skip, lines} {:datetime, {:error, reason}} -> {:error, {:invalid_date, reason, {datestr, time}}, skip, lines} {:error, _, _, _} = err -> err {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end end # Allowed values # - Array # - Inline table # - Integer (in all forms) # - Float # - String # - DateTime defp value(lexer) do case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {:comment, _, _, _}} -> Lexer.advance(lexer) value(lexer) {:ok, {?\[, skip, _, lines}} -> # Need to embellish some errors with line/col with {:ok, _} = ok <- array(lexer) do ok else {:error, _, _, _} = err -> err {:error, {:invalid_array, _} = reason} -> {:error, reason, skip, lines} end {:ok, {?\{, _, _, _}} -> inline_table(lexer) {:ok, {:hex, _, v, _}} -> Lexer.advance(lexer) {:ok, String.to_integer(v, 16)} {:ok, {:octal, _, v, _}} -> Lexer.advance(lexer) {:ok, String.to_integer(v, 8)} {:ok, {:binary, _, v, _}} -> Lexer.advance(lexer) {:ok, String.to_integer(v, 2)} {:ok, {true, _, _, _}} -> Lexer.advance(lexer) {:ok, true} {:ok, {false, _, _, _}} -> Lexer.advance(lexer) {:ok, false} {:ok, {:alpha, _, "inf", _}} -> Lexer.advance(lexer) {:ok, :infinity} {:ok, {:alpha, _, "nan", _}} -> Lexer.advance(lexer) {:ok, :nan} {:ok, {type, _, v, _}} when type in [:string, :multiline_string] -> Lexer.advance(lexer) {:ok, v} {:ok, {sign, _, _, _}} when sign in '-+' -> maybe_integer(lexer) {:ok, {:digits, _, _, _}} -> maybe_integer(lexer) {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end end defp array(lexer) do with {:ok, {?\[, skip, _, lines}} <- pop_skip(lexer, [:whitespace]), {:ok, elements} <- accumulate_array_elements(lexer), {:valid?, true} <- {:valid?, valid_array?(elements)}, {_, _, {:ok, {?\], _, _, _}}} <- {:close, {skip, lines}, pop_skip(lexer, [:whitespace, :newline, :comment])} do {:ok, elements} else {:error, _, _, _} = err -> err {:close, {:error, _, _, _} = err} -> err {:close, {_oline, _ocol} = opened, {:ok, {_, eskip, _, elines}}} -> {:error, {:unclosed_array, opened}, eskip, elines} {:valid?, err} -> err end end defp valid_array?([]), do: true defp valid_array?([h | t]), do: valid_array?(t, typeof(h)) defp valid_array?([], _type), do: true defp valid_array?([h | t], type) do if typeof(h) == type do valid_array?(t, type) else {:error, {:invalid_array, {:expected_type, t, h}}} end end defp typeof(v) when is_integer(v), do: :integer defp typeof(v) when is_float(v), do: :float defp typeof(v) when is_binary(v), do: :string defp typeof(%Time{}), do: :time defp typeof(%Date{}), do: :date defp typeof(%DateTime{}), do: :datetime defp typeof(%NaiveDateTime{}), do: :datetime defp typeof(v) when is_list(v), do: :list defp typeof(v) when is_map(v), do: :map defp typeof(v) when is_boolean(v), do: :boolean defp inline_table(lexer) do with {:ok, {?\{, skip, _, lines}} <- pop_skip(lexer, [:whitespace]), {:ok, elements} <- accumulate_table_elements(lexer), {_, _, {:ok, {?\}, _, _, _}}} <- {:close, {skip, lines}, pop_skip(lexer, [:whitespace])} do {:ok, elements} else {:error, _, _, _} = err -> err {:close, {:error, _, _, _} = err} -> err {:close, {_oskip, _olines} = opened, {:ok, {_, eskip, _, elines}}} -> {:error, {:unclosed_inline_table, opened}, eskip, elines} end end defp accumulate_array_elements(lexer) do 
accumulate_array_elements(lexer, []) end defp accumulate_array_elements(lexer, acc) do with {:ok, {type, _, _, _}} <- peek_skip(lexer, [:whitespace, :newline, :comment]), {_, false} <- {:trailing_comma, type == ?\]}, {:ok, value} <- value(lexer), {:ok, {next, _, _, _}} <- peek_skip(lexer, [:whitespace]) do if next == ?, do Lexer.advance(lexer) accumulate_array_elements(lexer, [value | acc]) else {:ok, Enum.reverse([value | acc])} end else {:error, _, _, _} = err -> err {:trailing_comma, true} -> {:ok, Enum.reverse(acc)} end end defp accumulate_table_elements(lexer) do accumulate_table_elements(lexer, %{}) end defp accumulate_table_elements(lexer, acc) do with {:ok, {type, _, _, _}} <- peek_skip(lexer, [:whitespace, :newline, :comment]), {_, false} <- {:trailing_comma, type == ?\}}, {:ok, key, skip, lines} <- key(lexer), {_, _, false, _, _} <- {:key_exists, key, Map.has_key?(acc, key), skip, lines}, {:ok, {?=, _, _, _}} <- pop_skip(lexer, [:whitespace, :comments]), {:ok, value} <- value(lexer), {_, {:ok, acc2}} <- {key, Builder.push_key_into_table(acc, key, value)}, {:ok, {next, _, _, _}} <- peek_skip(lexer, [:whitespace, :comments]) do if next == ?, do Lexer.advance(lexer) accumulate_table_elements(lexer, acc2) else {:ok, acc2} end else {:error, _, _, _} = err -> err {:key_exists, key, true, line, col} -> {:error, {:key_exists, key}, line, col} {table, {:error, :key_exists}} -> {:error, {:key_exists_in_table, table}, -1, -1} {:trailing_comma, true} -> {:ok, acc} {:ok, {type, skip, data, lines} = token} -> Lexer.push(lexer, token) {:error, {:invalid_key_value, {type, data}}, skip, lines} end end defp key(lexer) do result = case pop_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {type, skip, s, lines}} when type in [:digits, :alpha, :string] -> {key(lexer, s, []), skip, lines} {:ok, {type, skip, _, lines}} when type in '-_' -> {key(lexer, <<type::utf8>>, []), skip, lines} {:ok, {type, skip, data, lines} = token} -> Lexer.push(lexer, token) {:error, {:invalid_token, {type, data}}, skip, lines} end case result do {:error, _, _, _} = err -> err {{:ok, key}, skip, lines} -> {:ok, key, skip, lines} {{:error, _, _, _} = err, _, _} -> err end end defp key(lexer, word, acc) do case Lexer.peek(lexer) do {:error, _, _, _} = err -> err {:ok, {:whitespace, _, _, _}} -> # The only allowed continuation now is a . 
followed by key char case peek_skip(lexer, [:whitespace]) do {:ok, {?., _, _, _}} -> Lexer.advance(lexer) case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {type, _, _, _}} when type in [:digits, :alpha, :string] -> key(lexer, "", [word | acc]) {:ok, {type, _, _, _}} when type in '-_' -> key(lexer, "", [word | acc]) {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end {:ok, _} -> {:ok, Enum.reverse([word | acc])} end {:ok, {type, _, s, _}} when type in [:digits, :alpha, :string] -> Lexer.advance(lexer) key(lexer, word <> s, acc) {:ok, {type, _, _, _}} when type in '-_' -> Lexer.advance(lexer) key(lexer, word <> iodata_to_str([type]), acc) {:ok, {?., _, _, _}} -> Lexer.advance(lexer) case peek_skip(lexer, [:whitespace]) do {:error, _, _, _} = err -> err {:ok, {type, _, _, _}} when type in [:digits, :alpha, :string] -> key(lexer, "", [word | acc]) {:ok, {type, _, _, _}} when type in '-_' -> key(lexer, "", [word | acc]) {:ok, {type, skip, data, lines}} -> {:error, {:invalid_token, {type, data}}, skip, lines} end {:ok, _} -> {:ok, Enum.reverse([word | acc])} end end defp iodata_to_integer(data) do case iodata_to_str(data) do <<?0, rest::binary>> when byte_size(rest) > 0 -> {:error, {:invalid_integer, :leading_zero}} <<sign::utf8, ?0, rest::binary>> when (sign == ?- or sign == ?+) and byte_size(rest) > 0 -> {:error, {:invalid_integer, :leading_zero}} s -> {:ok, String.to_integer(s)} end end defp iodata_to_float(data) do case iodata_to_str(data) do <<?0, next::utf8, _::binary>> when next != ?. -> {:error, {:invalid_float, :leading_zero}} <<sign::utf8, ?0, next::utf8, _::binary>> when (sign == ?- or sign == ?+) and next != ?. -> {:error, {:invalid_float, :leading_zero}} s -> {:ok, String.to_float(s)} end end defp iodata_to_str(parts) do parts |> Enum.reverse() |> IO.chardata_to_string() end defp pop_skip(lexer, skip) do case Lexer.pop(lexer) do {:error, _, _, _} = err -> err {:ok, {type, _, _, _}} = result -> if :lists.member(type, skip) do pop_skip(lexer, skip) else result end end end defp peek_skip(lexer, skip) do case Lexer.peek(lexer) do {:error, _, _, _} = err -> err {:ok, {type, _, _, _}} = result -> if :lists.member(type, skip) do Lexer.advance(lexer) peek_skip(lexer, skip) else result end end end end
lib/decoder.ex
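Although the module above is `@moduledoc false`, its public functions can be exercised directly; a sketch with an illustrative document and default options (string keys):

```elixir
toml = """
title = "example"

[owner]
name = "Tom"
"""

{:ok, map} = Toml.Decoder.decode(toml, filename: "example.toml")
map["owner"]["name"]
# => "Tom"

# Errors come back as {:error, {:invalid_toml, message}}, where the message
# carries the line/column context built by format_error/5; a heterogeneous
# array, for example, trips valid_array?/1 above.
{:error, {:invalid_toml, _message}} = Toml.Decoder.decode("a = [1, \"two\"]", [])
```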
defmodule StateMachine.Transition do @moduledoc """ Transition module gathers together all of the actions that happen around transition from the old state to the new state in response to an event. """ alias StateMachine.{Transition, Event, State, Context, Callback, Guard} @type t(model) :: %__MODULE__{ from: atom, to: atom, before: list(Callback.t(model)), after: list(Callback.t(model)), guards: list(Guard.t(model)) } @type callback_pos() :: :before | :after @enforce_keys [:from, :to] defstruct [ :from, :to, before: [], after: [], guards: [] ] @doc """ Checks if the transition is allowed in the current context. Returns boolean. """ @spec is_allowed?(Context.t(model), t(model)) :: boolean when model: var def is_allowed?(ctx, transition) do Guard.check(ctx, transition) end @doc """ Given populated context and Transition structure, sequentially runs all callbacks along with actual state update: * before(event) * before(transition) * before_leave(state) * before_enter(state) * *** (state update) *** * after_leave(state) * after_enter(state) * after(transition) * after(event) If any of the callbacks fails, all sequential ops are cancelled. """ @spec run(Context.t(model)) :: Context.t(model) when model: var def run(ctx) do ctx |> Event.callback(:before) |> Transition.callback(:before) |> State.callback(:before_leave) |> State.callback(:before_enter) |> Transition.update_state() |> State.callback(:after_leave) |> State.callback(:after_enter) |> Transition.callback(:after) |> Event.callback(:after) |> Transition.finalize() end @doc """ Private function for running Transition callbacks. """ @spec callback(Context.t(model), callback_pos()) :: Context.t(model) when model: var def callback(ctx, pos) do callbacks = Map.get(ctx.transition, pos) Callback.apply_chain(ctx, callbacks, :"#{pos}_transition") end @doc """ Private function for updating state. """ @spec update_state(Context.t(model)) :: Context.t(model) when model: var def update_state(%{status: :init} = ctx) do ctx.definition.state_setter.(ctx, ctx.transition.to) end def update_state(ctx), do: ctx @doc """ Private function sets status to :done, unless it has failed before. """ @spec finalize(Context.t(model)) :: Context.t(model) when model: var def finalize(%{status: :init} = ctx) do %{ctx | status: :done} end def finalize(ctx), do: ctx @doc """ True if transition is a loop, i.e. doesn't change state. """ @spec loop?(t(any)) :: boolean def loop?(%{from: s, to: s}), do: true def loop?(_), do: false end
lib/state_machine/transition.ex
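Two small direct uses of the struct above; `run/1` needs a fully populated `Context`, but `loop?/1` works on the bare struct:

```elixir
alias StateMachine.Transition

publish = %Transition{from: :draft, to: :published}

Transition.loop?(publish)
# => false

Transition.loop?(%Transition{from: :review, to: :review})
# => true
```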
defmodule Bacen.CCS.ACCS004 do
  @moduledoc """
  The ACCS004 message.

  This message reports the persons currently registered in Bacen's system
  for a given CNPJ company.

  It has the following XML example:

  ```xml
  <CCSArqPosCad>
    <Repet_ACCS004_Congl>
      <CNPJBasePart>12345678</CNPJBasePart>
    </Repet_ACCS004_Congl>
    <Repet_ACCS004_Pessoa>
      <Grupo_ACCS004_Pessoa>
        <TpPessoa>F</TpPessoa>
        <CNPJ_CPFPessoa>12345678901</CNPJ_CPFPessoa>
        <DtIni>2002-01-01</DtIni>
        <DtFim>2002-01-03</DtFim>
      </Grupo_ACCS004_Pessoa>
    </Repet_ACCS004_Pessoa>
    <DtMovto>2004-10-10</DtMovto>
  </CCSArqPosCad>
  ```
  """
  use Ecto.Schema

  import Brcpfcnpj.Changeset
  import Ecto.Changeset

  @typedoc """
  The ACCS004 message type
  """
  @type t :: %__MODULE__{}

  @registration_position_opts [source: :CCSArqPosCad, primary_key: false]
  @registration_position_fields ~w(movement_date)a
  @registration_position_fields_source_sequence ~w(Repet_ACCS004_Congl Repet_ACCS004_Pessoa QtdOpCCS DtHrBC DtMovto)a

  @participant_fields ~w(cnpj)a
  @participant_fields_source_sequence ~w(CNPJBasePart)a

  @persons_fields ~w(cnpj)a
  @persons_fields_source_sequence ~w(CNPJBasePart Grupo_ACCS004_Pessoa)a

  @person_fields ~w(type cpf_cnpj start_date end_date)a
  @person_required_fields ~w(type cpf_cnpj start_date)a
  @person_fields_source_sequence ~w(TpPessoa CNPJ_CPFPessoa DtIni DtFim)a

  @allowed_person_types ~w(F J)

  @primary_key false
  embedded_schema do
    embeds_one :registration_position, RegistrationPosition, @registration_position_opts do
      embeds_one :conglomerate, Conglomerate, source: :Repet_ACCS004_Congl, primary_key: false do
        embeds_many :participant, Participant, primary_key: false do
          field :cnpj, :string, source: :CNPJBasePart
        end
      end

      embeds_one :persons, Persons, source: :Repet_ACCS004_Pessoa, primary_key: false do
        embeds_many :person, Person, source: :Grupo_ACCS004_Pessoa, primary_key: false do
          field :type, :string, source: :TpPessoa
          field :cpf_cnpj, :string, source: :CNPJ_CPFPessoa
          field :start_date, :date, source: :DtIni
          field :end_date, :date, source: :DtFim
        end

        field :cnpj, :string, source: :CNPJBasePart
      end

      field :movement_date, :date, source: :DtMovto
    end
  end

  @doc """
  Creates a new ACCS004 message from given attributes.
""" @spec new(map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()} def new(attrs) when is_map(attrs) do attrs |> changeset() |> apply_action(:insert) end @doc false def changeset(schema \\ %__MODULE__{}, attrs) when is_map(attrs) do schema |> cast(attrs, []) |> cast_embed(:registration_position, with: &registration_position_changeset/2, required: true) end @doc false def registration_position_changeset(registration_position, attrs) when is_map(attrs) do registration_position |> cast(attrs, @registration_position_fields) |> validate_required(@registration_position_fields) |> cast_embed(:conglomerate, with: &conglomerate_changeset/2, required: true) |> cast_embed(:persons, with: &persons_changeset/2, required: true) end @doc false def conglomerate_changeset(conglomerate, attrs) when is_map(attrs) do conglomerate |> cast(attrs, []) |> cast_embed(:participant, with: &participant_changeset/2, required: true) end @doc false def participant_changeset(participant, attrs) when is_map(attrs) do participant |> cast(attrs, @participant_fields) |> validate_required(@participant_fields) |> validate_length(:cnpj, is: 8) |> validate_format(:cnpj, ~r/[0-9]{8}/) end @doc false def persons_changeset(persons, attrs) when is_map(attrs) do persons |> cast(attrs, @persons_fields) |> validate_required(@persons_fields) |> validate_length(:cnpj, is: 8) |> validate_format(:cnpj, ~r/[0-9]{8}/) |> cast_embed(:person, with: &person_changeset/2, required: true) end @doc false def person_changeset(person, attrs) when is_map(attrs) do person |> cast(attrs, @person_fields) |> validate_required(@person_required_fields) |> validate_inclusion(:type, @allowed_person_types) |> validate_length(:type, is: 1) |> validate_by_type() end defp validate_by_type(changeset) do case get_field(changeset, :type) do "F" -> validate_cpf(changeset, :cpf_cnpj, message: "invalid CPF format") "J" -> validate_cnpj(changeset, :cpf_cnpj, message: "invalid CNPJ format") _ -> changeset end end @doc """ Returns the field sequence for given root xml element ## Examples iex> Bacen.CCS.ACCS004.sequence(:CCSArqPosCad) [:Repet_ACCS004_Congl, :Repet_ACCS004_Pessoa, :QtdOpCCS, :DtHrBC, :DtMovto] iex> Bacen.CCS.ACCS004.sequence(:Repet_ACCS004_Congl) [:CNPJBasePart] iex> Bacen.CCS.ACCS004.sequence(:Repet_ACCS004_Pessoa) [:CNPJBasePart, :Grupo_ACCS004_Pessoa] iex> Bacen.CCS.ACCS004.sequence(:Grupo_ACCS004_Pessoa) [:TpPessoa, :CNPJ_CPFPessoa, :DtIni, :DtFim] """ @spec sequence( :CCSArqPosCad | :Repet_ACCS004_Congl | :Repet_ACCS004_Pessoa | :Grupo_ACCS004_Pessoa ) :: list(atom()) def sequence(element) def sequence(:CCSArqPosCad), do: @registration_position_fields_source_sequence def sequence(:Repet_ACCS004_Congl), do: @participant_fields_source_sequence def sequence(:Repet_ACCS004_Pessoa), do: @persons_fields_source_sequence def sequence(:Grupo_ACCS004_Pessoa), do: @person_fields_source_sequence end
lib/bacen/ccs/accs004.ex
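A sketch of building the message from attributes mirroring the XML example in the moduledoc. Note that `person_changeset/2` runs `validate_cpf` for type `"F"`, so the placeholder digits from the XML would actually come back as a changeset error; a checksum-valid CPF is required to get `{:ok, _}`:

```elixir
Bacen.CCS.ACCS004.new(%{
  registration_position: %{
    movement_date: ~D[2004-10-10],
    conglomerate: %{participant: [%{cnpj: "12345678"}]},
    persons: %{
      cnpj: "12345678",
      person: [
        # cpf_cnpj must pass Brcpfcnpj's CPF checksum for type "F"
        %{type: "F", cpf_cnpj: "12345678901", start_date: ~D[2002-01-01]}
      ]
    }
  }
})
# => {:ok, %Bacen.CCS.ACCS004{}} with valid data,
#    {:error, %Ecto.Changeset{}} otherwise (e.g. "invalid CPF format" here)
```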
defmodule Soundcloud.HashConversions do
  import Soundcloud.Utils, only: [list_of_maps_to_map: 1, list_of_maps_to_map: 2]

  @doc """
  Returns a map with every key-value pair in `map` mapped with `normalize_param`.

  ## Examples

      iex> Soundcloud.HashConversions.to_params(%{"foo" => %{"bar" => %{"a" => 5}, "tar" => [1, 2]}})
      %{"foo[tar][]" => [1, 2]}

  """
  @spec to_params(%{}) :: %{}
  def to_params(map) do
    normalized = for {k, v} <- map, do: normalize_param(k, v)
    list_of_maps_to_map(normalized)
  end

  @doc """
  Convert a set of key, value parameters into a map suitable for passing
  into requests. This will convert lists into the syntax required by
  SoundCloud. Heavily lifted from HTTParty.

  ## Examples

      iex> Soundcloud.HashConversions.normalize_param("oauth_token", "foo")
      %{"oauth_token" => "foo"}

      iex> Soundcloud.HashConversions.normalize_param("playlist[tracks]", [1234, 4567])
      %{"playlist[tracks][]" => [1234, 4567]}

  """
  def normalize_param(key, value) do
    {params, stack} = do_normalize_param(key, value)

    stack = Enum.map(stack, &List.to_tuple/1)

    ps =
      for {parent, hash} <- stack, {key, value} <- hash do
        if not is_map(value) do
          normalize_param("#{parent}[#{key}]", value)
        end
      end

    list_of_maps_to_map(ps, params)
  end

  defp do_normalize_param(key, value, params \\ %{}, stack \\ [])

  defp do_normalize_param(key, value, params, stack) when is_list(value) do
    normalized = Enum.map(value, &normalize_param("#{key}[]", &1))
    keys = Enum.flat_map(normalized, &Map.keys/1)

    lists = duplicates(keys, normalized)

    params =
      params
      |> Map.merge(list_of_maps_to_map(normalized))
      |> Map.merge(lists)

    {params, stack}
  end

  defp do_normalize_param(key, value, params, stack) when is_map(value) do
    {params, stack ++ [[key, value]]}
  end

  defp do_normalize_param(key, value, params, stack) do
    {Map.put(params, key, value), stack}
  end

  defp duplicates(keys, normalized) do
    if length(keys) != length(Enum.uniq(keys)) do
      duplicates = keys -- Enum.uniq(keys)

      for dup <- duplicates,
          into: %{},
          do: {dup, for(m <- normalized, do: Map.fetch!(m, dup))}
    else
      %{}
    end
  end
end
lib/soundcloud/hash_conversions.ex
defmodule Grizzly.FirmwareUpdates do
  @moduledoc """
  Module for upgrading firmware on target devices.

  Required options:

  * `manufacturer_id` - The unique id identifying the manufacturer of the target device
  * `firmware_id` - The id of the current firmware

  Other options:

  * `device_id` - Node id of the device to be updated. Defaults to 1 (controller)
  * `firmware_target` - The firmware target id. Defaults to 0 (the ZWave chip)
  * `max_fragment_size` - The maximum number of bytes that will be transmitted at a time. Defaults to 2048.
  * `hardware_version` - The current hardware version of the device to be updated. Defaults to 0.
  * `activation_may_be_delayed?` - Whether the device is permitted to delay the actual firmware update. Defaults to false.
  * `handler` - The process that will receive callbacks. Defaults to the caller.

  The firmware update process is as follows:

  1. Grizzly sends a `firmware_md_get` command to the target device to get the
     manufacturer_id, hardware_id, max fragment size, among other info needed to
     specify a firmware update request. The info is returned via a
     `firmware_md_report` command.
  2. Grizzly uses this info to send a `firmware_update_md_request` command to
     the target device, telling it to initiate the image uploading process. The
     checksum of the entire firmware image is added to the request. The target
     device agrees or refuses via a `firmware_update_md_request_report` command.
  3. If the target device agrees to have its firmware updated, it next sends a
     first `firmware_update_md_get` command to Grizzly asking for a
     number_of_reports (a batch of firmware image fragment uploads) starting at
     fragment `report_number`.
  4. Grizzly responds by sending the requested series of
     `firmware_update_md_report` commands to the target device, each one
     containing a firmware image fragment, with a checksum for the command.
  5. Once a series of uploads is completed, the target device either asks for
     more fragments via another `firmware_update_md_get` command, or it sends a
     `firmware_update_md_status_report` command, either to cancel the
     still-incomplete upload (bad command checksums!), or to announce that the
     update has completed, either successfully (with some info about what
     happens next) or in failure (invalid overall image checksum!).
  6. As part of a successful `firmware_update_md_status_report` command, the
     target device tells Grizzly whether the new firmware needs to be activated.
     If it does, Grizzly would then be expected to send a
     `firmware_update_activation_set` command, whose success is reported by the
     target device via a `firmware_update_activation_report` command.
  """

  alias Grizzly.FirmwareUpdates.FirmwareUpdateRunnerSupervisor
  alias Grizzly.FirmwareUpdates.FirmwareUpdateRunner

  @type opt ::
          {:manufacturer_id, non_neg_integer}
          | {:firmware_id, non_neg_integer}
          | {:device_id, Grizzly.node_id()}
          | {:hardware_version, byte}
          | {:handler, pid() | module() | {module, keyword()}}
          | {:firmware_target, byte}
          | {:max_fragment_size, non_neg_integer}
          | {:activation_may_be_delayed?, boolean}

  @type image_path :: String.t()

  require Logger

  @doc """
  Starts the firmware update process
  """
  @spec start_firmware_update(image_path(), [opt()]) ::
          :ok | {:error, :image_not_found} | {:error, :busy}
  def start_firmware_update(firmware_image_path, opts) do
    with {:ok, runner} <- FirmwareUpdateRunnerSupervisor.start_runner(opts) do
      FirmwareUpdateRunner.start_firmware_update(runner, firmware_image_path)
    else
      {:error, :image_not_found} ->
        Logger.warn("[Grizzly] Firmware image file not found")
        {:error, :image_not_found}

      other ->
        Logger.warn("[Grizzly] Failed to start firmware update: #{inspect(other)}")
        {:error, :busy}
    end
  end

  @spec firmware_update_running?() :: boolean()
  def firmware_update_running?() do
    child_count = DynamicSupervisor.count_children(FirmwareUpdateRunnerSupervisor)
    child_count.active == 1
  end

  @spec firmware_image_fragment_count :: {:ok, non_neg_integer} | {:error, :not_updating}
  def firmware_image_fragment_count() do
    if firmware_update_running?() do
      {:ok, FirmwareUpdateRunner.firmware_image_fragment_count()}
    else
      {:error, :not_updating}
    end
  end
end
lib/grizzly/firmware_updates.ex
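A sketch of kicking off an update with the module above; all ids and the firmware path are placeholders, and `manufacturer_id`/`firmware_id` must match what the target device reports:

```elixir
opts = [
  manufacturer_id: 0x010F,  # placeholder
  firmware_id: 0x1000,      # placeholder
  device_id: 12,
  hardware_version: 1
]

case Grizzly.FirmwareUpdates.start_firmware_update("/data/firmware/node12.gbl", opts) do
  :ok -> :update_started
  {:error, :image_not_found} -> :no_such_file
  {:error, :busy} -> :update_already_running
end

Grizzly.FirmwareUpdates.firmware_update_running?()
# => true while a runner process is active
```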
defmodule Collision.Vector do
  @moduledoc """
  Wrapper around vector creation functions.
  """
  @type vector_tuple :: {float, float} | {float, float, float}
  @type vector :: Collision.Vector.Vector2.t | Collision.Vector.Vector3.t

  @spec from_tuple(vector_tuple) :: vector
  def from_tuple({_x, _y} = t), do: Collision.Vector.Vector2.from_tuple(t)
  def from_tuple({_x, _y, _z} = t), do: Collision.Vector.Vector3.from_tuple(t)
end

defprotocol Vector do
  @type vector :: Collision.Vector.Vector2.t | Collision.Vector.Vector3.t
  @type scalar :: float

  @doc """
  Convert a vector to a tuple.

  ## Examples

      iex> Vector.to_tuple(%Collision.Vector2{x: 1.0, y: 1.5})
      {1.0, 1.5}

  """
  def to_tuple(vector)

  @doc """
  Round all the vector components to n decimal places.

  ## Examples

      iex> Vector.round_components(%Vector2{x: 1.32342, y: 4.23231}, 2)
      %Vector2{x: 1.32, y: 4.23}

  """
  def round_components(vector, integer)

  @doc """
  Multiply a vector by a scalar value.

  ## Examples

      iex> Vector.scalar_mult(%Collision.Vector2{x: 5.0, y: 2.0}, -1)
      %Collision.Vector2{x: -5.0, y: -2.0}

  """
  def scalar_mult(vector, scalar)

  @doc """
  Add two vectors together.

  ## Examples

      iex> Vector.add(%Collision.Vector2{x: 1.0, y: 1.0}, %Collision.Vector2{x: 2.0, y: 2.0})
      %Collision.Vector2{x: 3.0, y: 3.0}

  """
  def add(vector, vector)

  @doc """
  Subtract two vectors.

  ## Examples

      iex> Vector.subtract(%Collision.Vector2{x: 4.0, y: 1.0}, %Collision.Vector2{x: 1.0, y: 4.0})
      %Collision.Vector2{x: 3.0, y: -3.0}

  """
  def subtract(vector, vector)

  @doc """
  Calculate the magnitude of a vector.

  ## Examples

      iex> Vector.magnitude(%Collision.Vector2{x: 3.0, y: 4.0})
      5.0

  """
  def magnitude(vector)

  @doc """
  Calculate the squared magnitude of a vector.

  ## Examples

      iex> Vector.magnitude_squared(%Collision.Vector2{x: 3.0, y: 4.0})
      25.0

  """
  def magnitude_squared(vector)

  @doc """
  Normalize a vector.

  ## Examples

      iex> Vector.normalize(%Collision.Vector2{x: 3.0, y: 4.0})
      %Collision.Vector2{x: 0.6, y: 0.8}

  """
  def normalize(vector)

  @doc """
  Calculate the dot product of two vectors.

  A negative value indicates they are moving away from each other,
  positive towards.

  ## Examples

      iex> Vector.dot_product(%Collision.Vector2{x: 3.0, y: 4.0}, %Collision.Vector2{x: -1.0, y: 2.0})
      5.0

  """
  def dot_product(vector, vector)

  @doc """
  Project a vector, v1, onto another, v2.

  ## Examples

      iex> Vector.projection(%Collision.Vector2{x: 3.0, y: 4.0}, %Collision.Vector2{x: -1.0, y: 2.0})
      %Collision.Vector2{x: -2.23606797749979, y: 4.47213595499958}

  """
  def projection(vector, vector)
end
lib/collision/vector.ex
0.947998
0.968171
vector.ex
starcoder
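Assuming `Collision.Vector.Vector2` implements the `Vector` protocol (the implementations are not shown in this file), the doctests above compose like this; the expected results are taken from the doctests themselves:

```elixir
# from_tuple/1 dispatches on tuple size: 2-tuples build a Vector2
v1 = Collision.Vector.from_tuple({3.0, 4.0})
v2 = Collision.Vector.from_tuple({-1.0, 2.0})

Vector.magnitude(v1)          # => 5.0
Vector.magnitude_squared(v1)  # => 25.0
Vector.dot_product(v1, v2)    # => 5.0 (positive: moving towards each other)
Vector.normalize(v1)          # => components x: 0.6, y: 0.8
Vector.projection(v1, v2)     # => v1 projected onto v2
```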
defmodule Tesla.StatsD do
  @moduledoc """
  This middleware sends histogram stats to Datadog for every outgoing request.

  The sent value is response time in milliseconds. Metric name is configurable
  and defaults to "http.request".

  The middleware also sends tags:

  * `http_status` - HTTP status code.
  * `http_status_family` (2xx, 4xx, 5xx) - HTTP status family
  * `http_host` - The host request has been sent to

  Tags have "http" in their names to avoid collisions with default tags sent
  by Datadog StatsD agent.

  ## Configuration

  * `:backend` - StatsD backend module. Defaults to `Tesla.StatsD.Backend.ExStatsD`.
    A backend must implement `Tesla.StatsD.Backend` behaviour. `Statix` backends
    are supported by default, just provide a module name that uses `Statix`
    (`use Statix`).
  * `:metric` - Metric name. Can be either a string or a function
    `(Tesla.Env.t -> String.t)`.
  * `:metric_type` - Metric type. Can be `:histogram` (default) or `:gauge`. See
    [Datadog documentation](https://docs.datadoghq.com/guides/dogstatsd/#data-types).
  * `:tags` - List of additional tags. Can be either a list or a function
    `(Tesla.Env.t -> [String.t])`.
  * `:sample_rate` - Limit how often the metric is collected (default: 1)

  ## Usage with Tesla

      defmodule AccountsClient do
        use Tesla

        plug Tesla.StatsD,
          metric: "external.request",
          tags: ["service:accounts"],
          backend: MyApp.Statix
      end
  """

  @behaviour Tesla.Middleware

  @default_options [
    metric: "http.request",
    metric_type: :histogram,
    backend: Tesla.StatsD.Backend.ExStatsD,
    sample_rate: 1.0,
    tags: []
  ]

  # `reraise` macro in `call/3` expands into `case` statement
  # which triggers warnings "guard test is_binary/is_atom(exception) can never succeed"
  @dialyzer {:no_match, call: 3}

  def call(env, next, opts) do
    opts = opts || []
    start = System.monotonic_time()

    result = Tesla.run(env, next)

    case result do
      {:ok, env} -> send_stats(env, elapsed_from(start), opts)
      {:error, _reason} -> send_stats(%{env | status: 0}, elapsed_from(start), opts)
    end

    result
  end

  defp send_stats(env, elapsed, opts) do
    opts = Keyword.merge(@default_options, opts)
    backend = Keyword.fetch!(opts, :backend)
    rate = Keyword.fetch!(opts, :sample_rate)
    tags = Keyword.fetch!(opts, :tags)
    metric = opts |> Keyword.fetch!(:metric) |> normalize_metric(env)
    metric_type = Keyword.fetch!(opts, :metric_type)

    apply(backend, metric_type, [
      metric,
      elapsed,
      [sample_rate: rate, tags: build_tags(env, tags)]
    ])
  end

  defp build_tags(env, tags) do
    default_tags(env) ++ custom_tags(tags, env)
  end

  defp default_tags(%{status: status} = env) do
    [
      "http_status:#{status}",
      "http_host:#{extract_host(env)}",
      "http_status_family:#{http_status_family(status)}"
    ]
  end

  defp custom_tags(tags, env) when is_function(tags) do
    tags.(env)
  end

  defp custom_tags(tags, _env) do
    tags
  end

  defp extract_host(%{url: url} = _env) do
    %URI{host: host} = URI.parse(url)
    host
  end

  defp http_status_family(status) do
    "#{div(status, 100)}xx"
  end

  defp elapsed_from(start) do
    System.convert_time_unit(System.monotonic_time() - start, :native, :millisecond)
  end

  defp normalize_metric(metric, env) when is_function(metric) do
    metric.(env)
  end

  defp normalize_metric(metric, _env) do
    metric
  end
end
lib/tesla_statsd.ex
0.888008
0.519887
tesla_statsd.ex
starcoder
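Because `:metric` and `:tags` also accept functions of `Tesla.Env.t`, metric names and tags can vary per request. A sketch, under the assumption that a Statix-based backend module `MyApp.Statix` exists; the client module and tag values are illustrative:

```elixir
defmodule PaymentsClient do
  use Tesla

  # Hypothetical client; `MyApp.Statix` is assumed to `use Statix`.
  plug Tesla.StatsD,
    backend: MyApp.Statix,
    metric: fn env -> "external.request.#{env.method}" end,
    tags: fn env -> ["service:payments", "path:#{URI.parse(env.url).path}"] end,
    sample_rate: 0.5
end
```

Note that failed requests are reported with `http_status:0`, since `call/3` substitutes a zero status before calling `send_stats/3` on `{:error, _reason}`.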
defmodule Zstream do
  @moduledoc """
  Module for creating a ZIP file stream.

  ## Example

  ```
  Zstream.zip([
    Zstream.entry("report.csv", Stream.map(records, &CSV.dump/1)),
    Zstream.entry("catfilm.mp4", File.stream!("/catfilm.mp4"), coder: Zstream.Coder.Stored)
  ])
  |> Stream.into(File.stream!("/archive.zip"))
  |> Stream.run
  ```
  """

  alias Zstream.Protocol

  defmodule State do
    @moduledoc false

    @entry_initial_state %{
      local_file_header_offset: nil,
      crc: nil,
      c_size: 0,
      size: 0,
      options: []
    }

    defstruct zlib_handle: nil,
              entries: [],
              offset: 0,
              current: @entry_initial_state,
              coder: nil,
              coder_state: nil

    def entry_initial_state do
      @entry_initial_state
    end
  end

  @opaque entry :: map

  @default [coder: {Zstream.Coder.Deflate, []}]

  @doc """
  Creates a ZIP file entry with the given `name`

  The `enum` could be either a lazy `Stream` or a `List`. The elements in
  `enum` should be of type `iodata`

  ## Options

  * `:coder` (module | {module, list}) - The compressor that should be used
    to encode the data. Available options are

    `Zstream.Coder.Deflate` - use deflate compression

    `Zstream.Coder.Stored` - store without any compression

    Defaults to `Zstream.Coder.Deflate`

  * `:mtime` (DateTime) - File last modification time. Defaults to system
    local time.
  """
  @spec entry(String.t(), Enumerable.t(), Keyword.t()) :: entry
  def entry(name, enum, options \\ []) do
    local_time = :calendar.local_time() |> NaiveDateTime.from_erl!()

    options =
      Keyword.merge(@default, mtime: local_time)
      |> Keyword.merge(options)
      |> update_in([:coder], &normalize_coder/1)

    %{name: name, stream: enum, options: options}
  end

  @doc """
  Creates a ZIP file stream. Entries are consumed one by one in the given
  order.
  """
  @spec zip([entry]) :: Enumerable.t()
  def zip(entries) do
    Stream.concat([
      [{:start}],
      Stream.flat_map(entries, fn %{stream: stream, name: name, options: options} ->
        Stream.concat(
          [{:head, %{name: name, options: options}}],
          stream
        )
      end),
      [{:end}]
    ])
    |> Stream.transform(fn -> %State{} end, &construct/2, &free_resource/1)
  end

  defp normalize_coder(module) when is_atom(module), do: {module, []}
  defp normalize_coder({module, args}), do: {module, args}

  defp construct({:start}, state) do
    state = put_in(state.zlib_handle, :zlib.open())
    {[], state}
  end

  defp construct({:end}, state) do
    {compressed, state} = close_entry(state)
    :ok = :zlib.close(state.zlib_handle)
    state = put_in(state.zlib_handle, nil)
    central_directory_headers = Enum.map(state.entries, &Protocol.central_directory_header/1)

    central_directory_end =
      Protocol.end_of_central_directory(
        state.offset,
        IO.iodata_length(central_directory_headers),
        length(state.entries)
      )

    {[compressed, central_directory_headers, central_directory_end], state}
  end

  defp construct({:head, header}, state) do
    {compressed, state} = close_entry(state)
    {coder, coder_opts} = Keyword.fetch!(header.options, :coder)
    state = put_in(state.coder, coder)
    state = put_in(state.coder_state, state.coder.init(coder_opts))
    state = update_in(state.current, &Map.merge(&1, header))
    state = put_in(state.current.options, header.options)
    state = put_in(state.current.crc, :zlib.crc32(state.zlib_handle, <<>>))
    state = put_in(state.current.local_file_header_offset, state.offset)
    local_file_header = Protocol.local_file_header(state.current)
    state = update_in(state.offset, &(&1 + IO.iodata_length(local_file_header)))
    {[[compressed, local_file_header]], state}
  end

  defp construct(chunk, state) do
    {compressed, coder_state} = state.coder.encode(chunk, state.coder_state)
    c_size = IO.iodata_length(compressed)
    state = put_in(state.coder_state, coder_state)
    state =
update_in(state.current.c_size, &(&1 + c_size)) state = update_in(state.current.crc, &:zlib.crc32(state.zlib_handle, &1, chunk)) state = update_in(state.current.size, &(&1 + IO.iodata_length(chunk))) state = update_in(state.offset, &(&1 + c_size)) case compressed do [] -> {[], state} _ -> {[compressed], state} end end defp close_entry(state) do if state.coder do compressed = state.coder.close(state.coder_state) c_size = IO.iodata_length(compressed) state = put_in(state.coder, nil) state = put_in(state.coder_state, nil) state = update_in(state.offset, &(&1 + c_size)) state = update_in(state.current.c_size, &(&1 + c_size)) data_descriptor = Protocol.data_descriptor(state.current.crc, state.current.c_size, state.current.size) state = update_in(state.offset, &(&1 + IO.iodata_length(data_descriptor))) state = update_in(state.entries, &[state.current | &1]) state = put_in(state.current, State.entry_initial_state()) {[compressed, data_descriptor], state} else {[], state} end end defp free_resource(state) do state = if state.coder do _compressed = state.coder.close(state.coder_state) state = put_in(state.coder, nil) put_in(state.coder_state, nil) else state end if state.zlib_handle do :ok = :zlib.close(state.zlib_handle) put_in(state.zlib_handle, nil) else state end end end
lib/zstream.ex
0.799677
0.727104
zstream.ex
starcoder
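Building on the moduledoc example above, entries may mix coders and plain `List` sources; the file paths and timestamp here are placeholders:

```elixir
[
  # A list of iodata is a valid entry source
  Zstream.entry("notes.txt", ["hello ", "world\n"]),
  # Store without compression and pin the modification time
  Zstream.entry(
    "raw.bin",
    File.stream!("/tmp/raw.bin", [], 2048),
    coder: Zstream.Coder.Stored,
    mtime: ~N[2021-01-01 00:00:00]
  )
]
|> Zstream.zip()
|> Stream.into(File.stream!("/tmp/out.zip"))
|> Stream.run()
```

Since `zip/1` returns a lazy `Stream`, nothing is read or compressed until the stream is actually run.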
defmodule Legion.RegistryDirectory do
  @moduledoc """
  Provides metaprogramming tools to register singleton directories.

  ### Motivation

  Global settings directories can be used to gather runtime configuration.
  One can create a configuration to change the behavior of some modules, or
  enable/disable it through utility functions.

  Suppose you have a function, `enable_some_feature/0`, to enable a feature
  at the runtime of the application. To define a backend for this feature in
  the persistence layer, you can leverage registry directories.

  ## Registries

  To create a registry directory with a module named `SomeSettings`, one may
  use the `defregdir/2` macro provided by this module.

  ```elixir
  import Legion.RegistryDirectory

  defregdir SomeSettings, "messaging_settings"
  ```

  Upon that call, the macro defines three modules, namely

  1. `SomeSettings.Register`,
  2. `SomeSettings.RegistryEntry` and
  3. `SomeSettings`.

  The first two modules are valid Ecto schemas; you can query them directly.

  ```elixir
  Repo.all(SomeSettings.Register) # returns all of the register keys
  ```

  Although it is strictly discouraged, you are also able to query the
  `SomeSettings.RegistryEntry` schema.

  ### Retrieving/manipulating settings

  The main module, `SomeSettings` (which was provided to the macro), exports
  functions for retrieving and manipulating the settings. The functionality
  expects an authenticated user, or their identifier, to perform a settings
  manipulation (e.g. creating a new entry). The settings do not override each
  other, but are bucketed as an event source, hence you may provide
  additional features (e.g. deferral, duration), which will likely need more
  than the last entry created.

  ## Adding registers

  Registers can be added at build time, and can be constantly referenced by
  the other tables. The macro does not itself perform a database migration,
  but you can prepare migrations for both key registration and table
  creation. For synchronization of the keys, see
  `Legion.RegistryDirectory.Synchronization`.
  """

  @doc """
  Defines a registration directory, resolving to several modules.

  - `{namespace}.Register`: An Ecto schema using string keys to refer to the
    registers.
  - `{namespace}.RegistryEntry`: An Ecto schema holding JSON data for a
    specific key.
  - `{namespace}`: Module providing utility functions for manipulating the
    registry.

  The macro also makes use of a base table name, provided by the second
  parameter, which will resolve to two table names in the database,
  `"{namespace}_registers"` and `"{namespace}_registry_entries"`, for the two
  Ecto schemas, respectively.
  """
  defmacro defregdir(namespace, dirname) do
    quote do
      defmodule :"#{unquote(namespace)}.Register" do
        @moduledoc """
        Defines a settings register.
        """
        use Ecto.Schema
        import Ecto
        import Ecto.Changeset
        import Ecto.Query

        alias Legion.Repo

        @primary_key false
        schema unquote("#{dirname}_registers") do
          field :key, :string, primary_key: true, source: "key"
        end

        def changeset(struct, _params) do
          struct
          |> cast(%{}, [])
          |> add_error(:key, "cannot add register at runtime")
        end
      end

      defmodule :"#{unquote(namespace)}.RegistryEntry" do
        @moduledoc """
        Configures runtime configurable settings.
""" use Ecto.Schema import Ecto import Ecto.Changeset import Ecto.Query alias Legion.Repo alias Legion.Identity.Information.Registration, as: User schema unquote("#{dirname}_registry_entries") do belongs_to :register, :"#{unquote(namespace)}.Register", primary_key: true, foreign_key: :key, type: :string, references: :key field :value, :map belongs_to :authority, User field :inserted_at, :naive_datetime_usec, read_after_writes: true end def changeset(struct, params \\ %{}) do struct |> cast(params, [:key, :value, :authority_id]) |> validate_required([:key, :value, :authority_id]) |> foreign_key_constraint(:authority_id) |> foreign_key_constraint(:key) end end defmodule unquote(namespace) do @moduledoc """ Manages global settings for registry modules. ## Caveats Instead of using functions of this module directly, to retrieve or alter the settings at runtime, use delegating functions supplied by relevant modules. """ import Ecto import Ecto.Changeset import Ecto.Query alias Legion.Repo alias Legion.Identity.Information.Registration, as: User @registry_entry_schema :"#{unquote(namespace)}.RegistryEntry" @registry_entry_table_name "#{unquote(dirname)}_registry_entries" @doc """ Changes the value of the setting identified by given `key`, to the new value `value`, on behalf of `user` authority. """ @spec put(User.id() | User, String.t(), map()) :: :ok | :error def put(user = %User{}, key, value), do: put(user.id, key, value) def put(user_id, key, value) when is_binary(key) do changeset = @registry_entry_schema.changeset( %@registry_entry_schema{}, %{key: key, authority_id: user_id, value: value} ) case Repo.insert(changeset) do {:ok, _setting} -> :ok {:error, _changeset} -> {:error, :unavailable} end end @doc """ Retrieves the value of the setting identified by given `key`, or returns `default` if there was no value registered (yet). """ @spec get(String.t(), term()) :: term() def get(key, default \\ nil) when is_binary(key) do query = from re1 in @registry_entry_table_name, left_join: re2 in @registry_entry_table_name, on: re1.key == re2.key and re1.id < re2.id, where: is_nil(re2.id) and re1.key == ^key, select: re1.value if value = Repo.one(query), do: value, else: default end @doc """ Takes the last `quantity` entries for the given `key`. """ @spec take(String.t(), pos_integer()) :: term() def take(key, quantity) when is_binary(key) do query = from re in @registry_entry_table_name, where: re.key == ^key, limit: ^quantity, order_by: [desc: re.id], select: {re.value, re.inserted_at} Repo.all(query) end end end end end
apps/legion/lib/registry_directory/registry_directory.ex
0.869535
0.768342
registry_directory.ex
starcoder
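A sketch of how the generated modules might be used, assuming migrations for the `messaging_registers` and `messaging_registry_entries` tables already exist; the `MessagingSettings` module name and the key are illustrative:

```elixir
import Legion.RegistryDirectory

defregdir MessagingSettings, "messaging"

# `user` is a %Legion.Identity.Information.Registration{} struct (or its id),
# recorded as the authority for the new entry.
:ok = MessagingSettings.put(user, "push.enabled", %{"enabled" => true})

MessagingSettings.get("push.enabled", %{"enabled" => false})
# => %{"enabled" => true}

MessagingSettings.take("push.enabled", 5)
# => up to five {value, inserted_at} tuples, newest first
```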
defmodule AWS.SageMaker do
  @moduledoc """
  Provides APIs for creating and managing Amazon SageMaker resources.

  Other Resources:

  * [Amazon SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html#first-time-user)
  * [Amazon Augmented AI Runtime API Reference](https://docs.aws.amazon.com/augmented-ai/2019-11-07/APIReference/Welcome.html)
  """

  @doc """
  Adds or overwrites one or more tags for the specified Amazon SageMaker
  resource.

  You can add tags to notebook instances, training jobs, hyperparameter tuning
  jobs, batch transform jobs, models, labeling jobs, work teams, endpoint
  configurations, and endpoints.

  Each tag consists of a key and an optional value. Tag keys must be unique per
  resource. For more information, see [AWS Tagging Strategies](https://aws.amazon.com/answers/account-management/aws-tagging-strategies/).

  Tags that you add to a hyperparameter tuning job by calling this API are also
  added to any training jobs that the hyperparameter tuning job launches after
  you call this API, but not to training jobs that the hyperparameter tuning
  job launched before you called this API. To make sure that the tags
  associated with a hyperparameter tuning job are also added to all training
  jobs that the hyperparameter tuning job launches, add the tags when you first
  create the tuning job by specifying them in the `Tags` parameter of
  `CreateHyperParameterTuningJob`
  """
  def add_tags(client, input, options \\ []) do
    request(client, "AddTags", input, options)
  end

  @doc """
  Associates a trial component with a trial.

  A trial component can be associated with multiple trials. To disassociate a
  trial component from a trial, call the `DisassociateTrialComponent` API.
  """
  def associate_trial_component(client, input, options \\ []) do
    request(client, "AssociateTrialComponent", input, options)
  end

  @doc """
  Create a machine learning algorithm that you can use in Amazon SageMaker and
  list in the AWS Marketplace.
  """
  def create_algorithm(client, input, options \\ []) do
    request(client, "CreateAlgorithm", input, options)
  end

  @doc """
  Creates a running App for the specified UserProfile.

  Supported Apps are JupyterServer and KernelGateway. This operation is
  automatically invoked by Amazon SageMaker Studio upon access to the
  associated Domain, and when new kernel configurations are selected by the
  user. A user may have multiple Apps active simultaneously.
  """
  def create_app(client, input, options \\ []) do
    request(client, "CreateApp", input, options)
  end

  @doc """
  Creates an Autopilot job.

  Find the best performing model after you run an Autopilot job by calling .
  Deploy that model by following the steps described in [Step 6.1: Deploy the Model to Amazon SageMaker Hosting Services](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-deploy-model.html).

  For information about how to use Autopilot, see [ Automate Model Development with Amazon SageMaker Autopilot](https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html).
  """
  def create_auto_m_l_job(client, input, options \\ []) do
    request(client, "CreateAutoMLJob", input, options)
  end

  @doc """
  Creates a Git repository as a resource in your Amazon SageMaker account.

  You can associate the repository with notebook instances so that you can use
  Git source control for the notebooks you create.
The Git repository is a resource in your Amazon SageMaker account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with. The repository can be hosted either in [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) or in any other Git repository. """ def create_code_repository(client, input, options \\ []) do request(client, "CreateCodeRepository", input, options) end @doc """ Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify. If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with AWS IoT Greengrass. In that case, deploy them as an ML resource. In the request body, you provide the following: * A name for the compilation job * Information about the input model artifacts * The output location for the compiled model and the device (target) that the model runs on * The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job. You can also provide a `Tag` to track the model compilation job's resource use and costs. The response body contains the `CompilationJobArn` for the compiled job. To stop a model compilation job, use `StopCompilationJob`. To get information about a particular model compilation job, use `DescribeCompilationJob`. To get information about multiple model compilation jobs, use `ListCompilationJobs`. """ def create_compilation_job(client, input, options \\ []) do request(client, "CreateCompilationJob", input, options) end @doc """ Creates a `Domain` used by Amazon SageMaker Studio. A domain consists of an associated Amazon Elastic File System (EFS) volume, a list of authorized users, and a variety of security, application, policy, and Amazon Virtual Private Cloud (VPC) configurations. An AWS account is limited to one domain per region. Users within a domain can share notebook files and other artifacts with each other. When a domain is created, an EFS volume is created for use by all of the users within the domain. Each user receives a private home directory within the EFS volume for notebooks, Git repositories, and data files. ## VPC configuration All SageMaker Studio traffic between the domain and the EFS volume is through the specified VPC and subnets. For other Studio traffic, you can specify the `AppNetworkAccessType` parameter. `AppNetworkAccessType` corresponds to the network access type that you choose when you onboard to Studio. The following options are available: * `PublicInternetOnly` - Non-EFS traffic goes through a VPC managed by Amazon SageMaker, which allows internet access. This is the default value. * `VpcOnly` - All Studio traffic is through the specified VPC and subnets. Internet access is disabled by default. To allow internet access, you must specify a NAT gateway. When internet access is disabled, you won't be able to train or host models unless your VPC has an interface endpoint (PrivateLink) or a NAT gateway and your security groups allow outbound connections. 
## `VpcOnly` network access type When you choose `VpcOnly`, you must specify the following: * Security group inbound and outbound rules to allow NFS traffic over TCP on port 2049 between the domain and the EFS volume * Security group inbound and outbound rules to allow traffic between the JupyterServer app and the KernelGateway apps * Interface endpoints to access the SageMaker API and SageMaker runtime For more information, see: * [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) * [VPC with public and private subnets (NAT)](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html) * [Connect to SageMaker through a VPC interface endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/interface-vpc-endpoint.html) """ def create_domain(client, input, options \\ []) do request(client, "CreateDomain", input, options) end @doc """ Creates an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the `CreateEndpointConfig` API. Use this API to deploy models using Amazon SageMaker hosting services. For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see [Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-deploy-model.html#ex1-deploy-model-boto) You must not delete an `EndpointConfig` that is in use by an endpoint that is live or while the `UpdateEndpoint` or `CreateEndpoint` operations are being performed on the endpoint. To update an endpoint, you must create a new `EndpointConfig`. The endpoint name must be unique within an AWS Region in your AWS account. When it receives the request, Amazon SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them. When you call `CreateEndpoint`, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting [ `Eventually Consistent Reads` ](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html), the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call `DescribeEndpointConfig` before calling `CreateEndpoint` to minimize the potential impact of a DynamoDB eventually consistent read. When Amazon SageMaker receives the request, it sets the endpoint status to `Creating`. After it creates the endpoint, it sets the status to `InService`. Amazon SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the `DescribeEndpoint` API. If any of the models hosted at this endpoint get model data from an Amazon S3 location, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provided. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. 
  For more information, see [Activating and Deactivating AWS STS in an AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html)
  in the *AWS Identity and Access Management User Guide*.
  """
  def create_endpoint(client, input, options \\ []) do
    request(client, "CreateEndpoint", input, options)
  end

  @doc """
  Creates an endpoint configuration that Amazon SageMaker hosting services uses
  to deploy models.

  In the configuration, you identify one or more models, created using the
  `CreateModel` API, to deploy and the resources that you want Amazon SageMaker
  to provision. Then you call the `CreateEndpoint` API.

  Use this API if you want to use Amazon SageMaker hosting services to deploy
  models into production.

  In the request, you define a `ProductionVariant`, for each model that you
  want to deploy. Each `ProductionVariant` parameter also describes the
  resources that you want Amazon SageMaker to provision. This includes the
  number and type of ML compute instances to deploy.

  If you are hosting multiple models, you also assign a `VariantWeight` to
  specify how much traffic you want to allocate to each model. For example,
  suppose that you want to host two models, A and B, and you assign traffic
  weight 2 for model A and 1 for model B. Amazon SageMaker distributes
  two-thirds of the traffic to Model A, and one-third to model B.

  For an example that calls this method when deploying a model to Amazon
  SageMaker hosting services, see [Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-deploy-model.html#ex1-deploy-model-boto)

  When you call `CreateEndpoint`, a load call is made to DynamoDB to verify
  that your endpoint configuration exists. When you read data from a DynamoDB
  table supporting [ `Eventually Consistent Reads` ](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html),
  the response might not reflect the results of a recently completed write
  operation. The response might include some stale data. If the dependent
  entities are not yet in DynamoDB, this causes a validation error. If you
  repeat your read request after a short time, the response should return the
  latest data. So retry logic is recommended to handle these possible issues.
  We also recommend that customers call `DescribeEndpointConfig` before calling
  `CreateEndpoint` to minimize the potential impact of a DynamoDB eventually
  consistent read.
  """
  def create_endpoint_config(client, input, options \\ []) do
    request(client, "CreateEndpointConfig", input, options)
  end

  @doc """
  Creates an Amazon SageMaker *experiment*.

  An experiment is a collection of *trials* that are observed, compared and
  evaluated as a group. A trial is a set of steps, called *trial components*,
  that produce a machine learning model.

  The goal of an experiment is to determine the components that produce the
  best model. Multiple trials are performed, each one isolating and measuring
  the impact of a change to one or more inputs, while keeping the remaining
  inputs constant.

  When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all
  experiments, trials, and trial components are automatically tracked, logged,
  and indexed. When you use the AWS SDK for Python (Boto), you must use the
  logging APIs provided by the SDK.

  You can add tags to experiments, trials, trial components and then use the
  `Search` API to search for the tags.

  To add a description to an experiment, specify the optional `Description`
  parameter.
  To add a description later, or to change the description, call the
  `UpdateExperiment` API.

  To get a list of all your experiments, call the `ListExperiments` API. To
  view an experiment's properties, call the `DescribeExperiment` API. To get a
  list of all the trials associated with an experiment, call the `ListTrials`
  API. To create a trial, call the `CreateTrial` API.
  """
  def create_experiment(client, input, options \\ []) do
    request(client, "CreateExperiment", input, options)
  end

  @doc """
  Creates a flow definition.
  """
  def create_flow_definition(client, input, options \\ []) do
    request(client, "CreateFlowDefinition", input, options)
  end

  @doc """
  Defines the settings you will use for the human review workflow user
  interface.

  Reviewers will see a three-panel interface with an instruction area, the item
  to review, and an input area.
  """
  def create_human_task_ui(client, input, options \\ []) do
    request(client, "CreateHumanTaskUi", input, options)
  end

  @doc """
  Starts a hyperparameter tuning job.

  A hyperparameter tuning job finds the best version of a model by running many
  training jobs on your dataset using the algorithm you choose and values for
  hyperparameters within ranges that you specify. It then chooses the
  hyperparameter values that result in a model that performs the best, as
  measured by an objective metric that you choose.
  """
  def create_hyper_parameter_tuning_job(client, input, options \\ []) do
    request(client, "CreateHyperParameterTuningJob", input, options)
  end

  @doc """
  Creates a job that uses workers to label the data objects in your input
  dataset.

  You can use the labeled data to train machine learning models.

  You can select your workforce from one of three providers:

  * A private workforce that you create. It can include employees,
    contractors, and outside experts. Use a private workforce when you want the
    data to stay within your organization or when a specific set of skills is
    required.

  * One or more vendors that you select from the AWS Marketplace.
    Vendors provide expertise in specific areas.

  * The Amazon Mechanical Turk workforce. This is the largest
    workforce, but it should only be used for public data or data that has been
    stripped of any personally identifiable information.

  You can also use *automated data labeling* to reduce the number of data
  objects that need to be labeled by a human. Automated data labeling uses
  *active learning* to determine if a data object can be labeled by machine or
  if it needs to be sent to a human worker. For more information, see [Using Automated Data Labeling](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-automated-labeling.html).

  The data objects to be labeled are contained in an Amazon S3 bucket. You
  create a *manifest file* that describes the location of each object. For more
  information, see [Using Input and Output Data](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-data.html).

  The output can be used as the manifest file for another labeling job or as
  training data for your machine learning models.
  """
  def create_labeling_job(client, input, options \\ []) do
    request(client, "CreateLabelingJob", input, options)
  end

  @doc """
  Creates a model in Amazon SageMaker.

  In the request, you name the model and describe a primary container. For the
  primary container, you specify the Docker image that contains inference code,
  artifacts (from prior training), and a custom environment map that the
  inference code uses when you deploy the model for predictions.
  Use this API to create a model if you want to use Amazon SageMaker hosting
  services or run a batch transform job.

  To host your model, you create an endpoint configuration with the
  `CreateEndpointConfig` API, and then create an endpoint with the
  `CreateEndpoint` API. Amazon SageMaker then deploys all of the containers
  that you defined for the model in the hosting environment.

  For an example that calls this method when deploying a model to Amazon
  SageMaker hosting services, see [Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-deploy-model.html#ex1-deploy-model-boto)

  To run a batch transform using your model, you start a job with the
  `CreateTransformJob` API. Amazon SageMaker uses your model and your dataset
  to get inferences which are then saved to a specified S3 location.

  In the `CreateModel` request, you must define a container with the
  `PrimaryContainer` parameter.

  In the request, you also provide an IAM role that Amazon SageMaker can assume
  to access model artifacts and docker image for deployment on ML compute
  hosting instances or for batch transform jobs. In addition, you also use the
  IAM role to manage permissions the inference code needs. For example, if the
  inference code accesses any other AWS resources, you grant necessary
  permissions via this role.
  """
  def create_model(client, input, options \\ []) do
    request(client, "CreateModel", input, options)
  end

  @doc """
  Creates a model package that you can use to create Amazon SageMaker models or
  list on AWS Marketplace.

  Buyers can subscribe to model packages listed on AWS Marketplace to create
  models in Amazon SageMaker.

  To create a model package by specifying a Docker container that contains your
  inference code and the Amazon S3 location of your model artifacts, provide
  values for `InferenceSpecification`. To create a model from an algorithm
  resource that you created or subscribed to in AWS Marketplace, provide a
  value for `SourceAlgorithmSpecification`.
  """
  def create_model_package(client, input, options \\ []) do
    request(client, "CreateModelPackage", input, options)
  end

  @doc """
  Creates a schedule that regularly starts Amazon SageMaker Processing Jobs to
  monitor the data captured for an Amazon SageMaker Endpoint.
  """
  def create_monitoring_schedule(client, input, options \\ []) do
    request(client, "CreateMonitoringSchedule", input, options)
  end

  @doc """
  Creates an Amazon SageMaker notebook instance.

  A notebook instance is a machine learning (ML) compute instance running on a
  Jupyter notebook.

  In a `CreateNotebookInstance` request, specify the type of ML compute
  instance that you want to run. Amazon SageMaker launches the instance,
  installs common libraries that you can use to explore datasets for model
  training, and attaches an ML storage volume to the notebook instance.

  Amazon SageMaker also provides a set of example notebooks. Each notebook
  demonstrates how to use Amazon SageMaker with a specific algorithm or with a
  machine learning framework.

  After receiving the request, Amazon SageMaker does the following:

  1. Creates a network interface in the Amazon SageMaker VPC.

  2. (Option) If you specified `SubnetId`, Amazon SageMaker creates
     a network interface in your own VPC, which is inferred from the subnet ID
     that you provide in the input. When creating this network interface, Amazon
     SageMaker attaches the security group that you specified in the request to
     the network interface that it creates in your VPC.

  3.
  Launches an EC2 instance of the type specified in the request in the Amazon
  SageMaker VPC. If you specified `SubnetId` of your VPC, Amazon SageMaker
  specifies both network interfaces when launching this instance. This enables
  inbound traffic from your own VPC to the notebook instance, assuming that the
  security groups allow it.

  After creating the notebook instance, Amazon SageMaker returns its Amazon
  Resource Name (ARN). You can't change the name of a notebook instance after
  you create it.

  After Amazon SageMaker creates the notebook instance, you can connect to the
  Jupyter server and work in Jupyter notebooks. For example, you can write code
  to explore a dataset that you can use for model training, train a model, host
  models by creating Amazon SageMaker endpoints, and validate hosted models.

  For more information, see [How It Works](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html).
  """
  def create_notebook_instance(client, input, options \\ []) do
    request(client, "CreateNotebookInstance", input, options)
  end

  @doc """
  Creates a lifecycle configuration that you can associate with a notebook
  instance.

  A *lifecycle configuration* is a collection of shell scripts that run when
  you create or start a notebook instance.

  Each lifecycle configuration script has a limit of 16384 characters.

  The value of the `$PATH` environment variable that is available to both
  scripts is `/sbin:/bin:/usr/sbin:/usr/bin`.

  View CloudWatch Logs for notebook instance lifecycle configurations in log
  group `/aws/sagemaker/NotebookInstances` in log stream
  `[notebook-instance-name]/[LifecycleConfigHook]`.

  Lifecycle configuration scripts cannot run for longer than 5 minutes. If a
  script runs for longer than 5 minutes, it fails and the notebook instance is
  not created or started.

  For information about notebook instance lifecycle configurations, see [Step 2.1: (Optional) Customize a Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html).
  """
  def create_notebook_instance_lifecycle_config(client, input, options \\ []) do
    request(client, "CreateNotebookInstanceLifecycleConfig", input, options)
  end

  @doc """
  Creates a URL for a specified UserProfile in a Domain.

  When accessed in a web browser, the user will be automatically signed in to
  Amazon SageMaker Studio, and granted access to all of the Apps and files
  associated with the Domain's Amazon Elastic File System (EFS) volume. This
  operation can only be called when the authentication mode equals IAM.

  The URL that you get from a call to `CreatePresignedDomainUrl` is valid only
  for 5 minutes. If you try to use the URL after the 5-minute limit expires,
  you are directed to the AWS console sign-in page.
  """
  def create_presigned_domain_url(client, input, options \\ []) do
    request(client, "CreatePresignedDomainUrl", input, options)
  end

  @doc """
  Returns a URL that you can use to connect to the Jupyter server from a
  notebook instance.

  In the Amazon SageMaker console, when you choose `Open` next to a notebook
  instance, Amazon SageMaker opens a new tab showing the Jupyter server home
  page from the notebook instance. The console uses this API to get the URL and
  show the page.

  The IAM role or user used to call this API defines the permissions to access
  the notebook instance. Once the presigned URL is created, no additional
  permission is required to access this URL. IAM authorization policies for
  this API are also enforced for every HTTP request and WebSocket frame that
  attempts to connect to the notebook instance.
  You can restrict access to this API and to the URL that it returns to a list
  of IP addresses that you specify. Use the `NotIpAddress` condition operator
  and the `aws:SourceIP` condition context key to specify the list of IP
  addresses that you want to have access to the notebook instance. For more
  information, see [Limit Access to a Notebook Instance by IP Address](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_id-based-policy-examples.html#nbi-ip-filter).

  The URL that you get from a call to `CreatePresignedNotebookInstanceUrl` is
  valid only for 5 minutes. If you try to use the URL after the 5-minute limit
  expires, you are directed to the AWS console sign-in page.
  """
  def create_presigned_notebook_instance_url(client, input, options \\ []) do
    request(client, "CreatePresignedNotebookInstanceUrl", input, options)
  end

  @doc """
  Creates a processing job.
  """
  def create_processing_job(client, input, options \\ []) do
    request(client, "CreateProcessingJob", input, options)
  end

  @doc """
  Starts a model training job.

  After training completes, Amazon SageMaker saves the resulting model
  artifacts to an Amazon S3 location that you specify.

  If you choose to host your model using Amazon SageMaker hosting services, you
  can use the resulting model artifacts as part of the model. You can also use
  the artifacts in a machine learning service other than Amazon SageMaker,
  provided that you know how to use them for inferences.

  In the request body, you provide the following:

  * `AlgorithmSpecification` - Identifies the training algorithm to
    use.

  * `HyperParameters` - Specify these algorithm-specific parameters
    to enable the estimation of model parameters during training. Hyperparameters
    can be tuned to optimize this learning process. For a list of hyperparameters
    for each training algorithm provided by Amazon SageMaker, see
    [Algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html).

  * `InputDataConfig` - Describes the training dataset and the
    Amazon S3, EFS, or FSx location where it is stored.

  * `OutputDataConfig` - Identifies the Amazon S3 bucket where you
    want Amazon SageMaker to save the results of model training.

  * `ResourceConfig` - Identifies the resources, ML compute
    instances, and ML storage volumes to deploy for model training. In distributed
    training, you specify more than one instance.

  * `EnableManagedSpotTraining` - Optimize the cost of training
    machine learning models by up to 80% by using Amazon EC2 Spot instances. For
    more information, see [Managed Spot Training](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html).

  * `RoleARN` - The Amazon Resource Name (ARN) that Amazon SageMaker
    assumes to perform tasks on your behalf during model training. You must grant
    this role the necessary permissions so that Amazon SageMaker can successfully
    complete model training.

  * `StoppingCondition` - To help cap training costs, use
    `MaxRuntimeInSeconds` to set a time limit for training. Use
    `MaxWaitTimeInSeconds` to specify how long you are willing to wait for a
    managed spot training job to complete.

  For more information about Amazon SageMaker, see [How It Works](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html).
  """
  def create_training_job(client, input, options \\ []) do
    request(client, "CreateTrainingJob", input, options)
  end

  @doc """
  Starts a transform job.

  A transform job uses a trained model to get inferences on a dataset and saves
  these results to an Amazon S3 location that you specify.
To perform batch transformations, you create a transform job and use the data that you have readily available. In the request body, you provide the following: * `TransformJobName` - Identifies the transform job. The name must be unique within an AWS Region in an AWS account. * `ModelName` - Identifies the model to use. `ModelName` must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see `CreateModel`. * `TransformInput` - Describes the dataset to be transformed and the Amazon S3 location where it is stored. * `TransformOutput` - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job. * `TransformResources` - Identifies the ML compute instances for the transform job. For more information about how batch transformation works, see [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html). """ def create_transform_job(client, input, options \\ []) do request(client, "CreateTransformJob", input, options) end @doc """ Creates an Amazon SageMaker *trial*. A trial is a set of steps called *trial components* that produce a machine learning model. A trial is part of a single Amazon SageMaker *experiment*. When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK. You can add tags to a trial and then use the `Search` API to search for the tags. To get a list of all your trials, call the `ListTrials` API. To view a trial's properties, call the `DescribeTrial` API. To create a trial component, call the `CreateTrialComponent` API. """ def create_trial(client, input, options \\ []) do request(client, "CreateTrial", input, options) end @doc """ Creates a *trial component*, which is a stage of a machine learning *trial*. A trial is composed of one or more trial components. A trial component can be used in multiple trials. Trial components include pre-processing jobs, training jobs, and batch transform jobs. When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK. You can add tags to a trial component and then use the `Search` API to search for the tags. `CreateTrialComponent` can only be invoked from within an Amazon SageMaker managed environment. This includes Amazon SageMaker training jobs, processing jobs, transform jobs, and Amazon SageMaker notebooks. A call to `CreateTrialComponent` from outside one of these environments results in an error. """ def create_trial_component(client, input, options \\ []) do request(client, "CreateTrialComponent", input, options) end @doc """ Creates a user profile. A user profile represents a single user within a domain, and is the main way to reference a "person" for the purposes of sharing, reporting, and other user-oriented features. This entity is created when a user onboards to Amazon SageMaker Studio. If an administrator invites a person by email or imports them from SSO, a user profile is automatically created. A user profile is the primary holder of settings for an individual user and has a reference to the user's private Amazon Elastic File System (EFS) home directory. 
""" def create_user_profile(client, input, options \\ []) do request(client, "CreateUserProfile", input, options) end @doc """ Use this operation to create a workforce. This operation will return an error if a workforce already exists in the AWS Region that you specify. You can only create one workforce in each AWS Region per AWS account. If you want to create a new workforce in an AWS Region where a workforce already exists, use the API operation to delete the existing workforce and then use `CreateWorkforce` to create a new workforce. To create a private workforce using Amazon Cognito, you must specify a Cognito user pool in `CognitoConfig`. You can also create an Amazon Cognito workforce using the Amazon SageMaker console. For more information, see [ Create a Private Workforce (Amazon Cognito)](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-create-private.html). To create a private workforce using your own OIDC Identity Provider (IdP), specify your IdP configuration in `OidcConfig`. Your OIDC IdP must support *groups* because groups are used by Ground Truth and Amazon A2I to create work teams. For more information, see [ Create a Private Workforce (OIDC IdP)](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-create-private-oidc.html). """ def create_workforce(client, input, options \\ []) do request(client, "CreateWorkforce", input, options) end @doc """ Creates a new work team for labeling your data. A work team is defined by one or more Amazon Cognito user pools. You must first create the user pools before you can create a work team. You cannot create more than 25 work teams in an account and region. """ def create_workteam(client, input, options \\ []) do request(client, "CreateWorkteam", input, options) end @doc """ Removes the specified algorithm from your account. """ def delete_algorithm(client, input, options \\ []) do request(client, "DeleteAlgorithm", input, options) end @doc """ Used to stop and delete an app. """ def delete_app(client, input, options \\ []) do request(client, "DeleteApp", input, options) end @doc """ Deletes the specified Git repository from your account. """ def delete_code_repository(client, input, options \\ []) do request(client, "DeleteCodeRepository", input, options) end @doc """ Used to delete a domain. If you onboarded with IAM mode, you will need to delete your domain to onboard again using SSO. Use with caution. All of the members of the domain will lose access to their EFS volume, including data, notebooks, and other artifacts. """ def delete_domain(client, input, options \\ []) do request(client, "DeleteDomain", input, options) end @doc """ Deletes an endpoint. Amazon SageMaker frees up all of the resources that were deployed when the endpoint was created. Amazon SageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't need to use the [RevokeGrant](http://docs.aws.amazon.com/kms/latest/APIReference/API_RevokeGrant.html) API call. """ def delete_endpoint(client, input, options \\ []) do request(client, "DeleteEndpoint", input, options) end @doc """ Deletes an endpoint configuration. The `DeleteEndpointConfig` API deletes only the specified configuration. It does not delete endpoints created using the configuration. You must not delete an `EndpointConfig` in use by an endpoint that is live or while the `UpdateEndpoint` or `CreateEndpoint` operations are being performed on the endpoint. 
  If you delete the `EndpointConfig` of an endpoint that is active or being
  created or updated you may lose visibility into the instance type the
  endpoint is using. The endpoint must be deleted in order to stop incurring
  charges.
  """
  def delete_endpoint_config(client, input, options \\ []) do
    request(client, "DeleteEndpointConfig", input, options)
  end

  @doc """
  Deletes an Amazon SageMaker experiment.

  All trials associated with the experiment must be deleted first. Use the
  `ListTrials` API to get a list of the trials associated with the experiment.
  """
  def delete_experiment(client, input, options \\ []) do
    request(client, "DeleteExperiment", input, options)
  end

  @doc """
  Deletes the specified flow definition.
  """
  def delete_flow_definition(client, input, options \\ []) do
    request(client, "DeleteFlowDefinition", input, options)
  end

  @doc """
  Use this operation to delete a human task user interface (worker task
  template).

  To see a list of human task user interfaces (work task templates) in your
  account, use . When you delete a worker task template, it no longer appears
  when you call `ListHumanTaskUis`.
  """
  def delete_human_task_ui(client, input, options \\ []) do
    request(client, "DeleteHumanTaskUi", input, options)
  end

  @doc """
  Deletes a model.

  The `DeleteModel` API deletes only the model entry that was created in Amazon
  SageMaker when you called the `CreateModel` API. It does not delete model
  artifacts, inference code, or the IAM role that you specified when creating
  the model.
  """
  def delete_model(client, input, options \\ []) do
    request(client, "DeleteModel", input, options)
  end

  @doc """
  Deletes a model package.

  A model package is used to create Amazon SageMaker models or list on AWS
  Marketplace. Buyers can subscribe to model packages listed on AWS Marketplace
  to create models in Amazon SageMaker.
  """
  def delete_model_package(client, input, options \\ []) do
    request(client, "DeleteModelPackage", input, options)
  end

  @doc """
  Deletes a monitoring schedule.

  Also stops the schedule if it had not already been stopped. This does not
  delete the job execution history of the monitoring schedule.
  """
  def delete_monitoring_schedule(client, input, options \\ []) do
    request(client, "DeleteMonitoringSchedule", input, options)
  end

  @doc """
  Deletes an Amazon SageMaker notebook instance.

  Before you can delete a notebook instance, you must call the
  `StopNotebookInstance` API.

  When you delete a notebook instance, you lose all of your data. Amazon
  SageMaker removes the ML compute instance, and deletes the ML storage volume
  and the network interface associated with the notebook instance.
  """
  def delete_notebook_instance(client, input, options \\ []) do
    request(client, "DeleteNotebookInstance", input, options)
  end

  @doc """
  Deletes a notebook instance lifecycle configuration.
  """
  def delete_notebook_instance_lifecycle_config(client, input, options \\ []) do
    request(client, "DeleteNotebookInstanceLifecycleConfig", input, options)
  end

  @doc """
  Deletes the specified tags from an Amazon SageMaker resource.

  To list a resource's tags, use the `ListTags` API.

  When you call this API to delete tags from a hyperparameter tuning job, the
  deleted tags are not removed from training jobs that the hyperparameter
  tuning job launched before you called this API.
  """
  def delete_tags(client, input, options \\ []) do
    request(client, "DeleteTags", input, options)
  end

  @doc """
  Deletes the specified trial.

  All trial components that make up the trial must be deleted first. Use the
  `DescribeTrialComponent` API to get the list of trial components.
""" def delete_trial(client, input, options \\ []) do request(client, "DeleteTrial", input, options) end @doc """ Deletes the specified trial component. A trial component must be disassociated from all trials before the trial component can be deleted. To disassociate a trial component from a trial, call the `DisassociateTrialComponent` API. """ def delete_trial_component(client, input, options \\ []) do request(client, "DeleteTrialComponent", input, options) end @doc """ Deletes a user profile. When a user profile is deleted, the user loses access to their EFS volume, including data, notebooks, and other artifacts. """ def delete_user_profile(client, input, options \\ []) do request(client, "DeleteUserProfile", input, options) end @doc """ Use this operation to delete a workforce. If you want to create a new workforce in an AWS Region where a workforce already exists, use this operation to delete the existing workforce and then use to create a new workforce. If a private workforce contains one or more work teams, you must use the operation to delete all work teams before you delete the workforce. If you try to delete a workforce that contains one or more work teams, you will recieve a `ResourceInUse` error. """ def delete_workforce(client, input, options \\ []) do request(client, "DeleteWorkforce", input, options) end @doc """ Deletes an existing work team. This operation can't be undone. """ def delete_workteam(client, input, options \\ []) do request(client, "DeleteWorkteam", input, options) end @doc """ Returns a description of the specified algorithm that is in your account. """ def describe_algorithm(client, input, options \\ []) do request(client, "DescribeAlgorithm", input, options) end @doc """ Describes the app. """ def describe_app(client, input, options \\ []) do request(client, "DescribeApp", input, options) end @doc """ Returns information about an Amazon SageMaker job. """ def describe_auto_m_l_job(client, input, options \\ []) do request(client, "DescribeAutoMLJob", input, options) end @doc """ Gets details about the specified Git repository. """ def describe_code_repository(client, input, options \\ []) do request(client, "DescribeCodeRepository", input, options) end @doc """ Returns information about a model compilation job. To create a model compilation job, use `CreateCompilationJob`. To get information about multiple model compilation jobs, use `ListCompilationJobs`. """ def describe_compilation_job(client, input, options \\ []) do request(client, "DescribeCompilationJob", input, options) end @doc """ The description of the domain. """ def describe_domain(client, input, options \\ []) do request(client, "DescribeDomain", input, options) end @doc """ Returns the description of an endpoint. """ def describe_endpoint(client, input, options \\ []) do request(client, "DescribeEndpoint", input, options) end @doc """ Returns the description of an endpoint configuration created using the `CreateEndpointConfig` API. """ def describe_endpoint_config(client, input, options \\ []) do request(client, "DescribeEndpointConfig", input, options) end @doc """ Provides a list of an experiment's properties. """ def describe_experiment(client, input, options \\ []) do request(client, "DescribeExperiment", input, options) end @doc """ Returns information about the specified flow definition. 
""" def describe_flow_definition(client, input, options \\ []) do request(client, "DescribeFlowDefinition", input, options) end @doc """ Returns information about the requested human task user interface (worker task template). """ def describe_human_task_ui(client, input, options \\ []) do request(client, "DescribeHumanTaskUi", input, options) end @doc """ Gets a description of a hyperparameter tuning job. """ def describe_hyper_parameter_tuning_job(client, input, options \\ []) do request(client, "DescribeHyperParameterTuningJob", input, options) end @doc """ Gets information about a labeling job. """ def describe_labeling_job(client, input, options \\ []) do request(client, "DescribeLabelingJob", input, options) end @doc """ Describes a model that you created using the `CreateModel` API. """ def describe_model(client, input, options \\ []) do request(client, "DescribeModel", input, options) end @doc """ Returns a description of the specified model package, which is used to create Amazon SageMaker models or list them on AWS Marketplace. To create models in Amazon SageMaker, buyers can subscribe to model packages listed on AWS Marketplace. """ def describe_model_package(client, input, options \\ []) do request(client, "DescribeModelPackage", input, options) end @doc """ Describes the schedule for a monitoring job. """ def describe_monitoring_schedule(client, input, options \\ []) do request(client, "DescribeMonitoringSchedule", input, options) end @doc """ Returns information about a notebook instance. """ def describe_notebook_instance(client, input, options \\ []) do request(client, "DescribeNotebookInstance", input, options) end @doc """ Returns a description of a notebook instance lifecycle configuration. For information about notebook instance lifestyle configurations, see [Step 2.1: (Optional) Customize a Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). """ def describe_notebook_instance_lifecycle_config(client, input, options \\ []) do request(client, "DescribeNotebookInstanceLifecycleConfig", input, options) end @doc """ Returns a description of a processing job. """ def describe_processing_job(client, input, options \\ []) do request(client, "DescribeProcessingJob", input, options) end @doc """ Gets information about a work team provided by a vendor. It returns details about the subscription with a vendor in the AWS Marketplace. """ def describe_subscribed_workteam(client, input, options \\ []) do request(client, "DescribeSubscribedWorkteam", input, options) end @doc """ Returns information about a training job. """ def describe_training_job(client, input, options \\ []) do request(client, "DescribeTrainingJob", input, options) end @doc """ Returns information about a transform job. """ def describe_transform_job(client, input, options \\ []) do request(client, "DescribeTransformJob", input, options) end @doc """ Provides a list of a trial's properties. """ def describe_trial(client, input, options \\ []) do request(client, "DescribeTrial", input, options) end @doc """ Provides a list of a trials component's properties. """ def describe_trial_component(client, input, options \\ []) do request(client, "DescribeTrialComponent", input, options) end @doc """ Describes a user profile. For more information, see `CreateUserProfile`. 
""" def describe_user_profile(client, input, options \\ []) do request(client, "DescribeUserProfile", input, options) end @doc """ Lists private workforce information, including workforce name, Amazon Resource Name (ARN), and, if applicable, allowed IP address ranges ([CIDRs](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)). Allowable IP address ranges are the IP addresses that workers can use to access tasks. This operation applies only to private workforces. """ def describe_workforce(client, input, options \\ []) do request(client, "DescribeWorkforce", input, options) end @doc """ Gets information about a specific work team. You can see information such as the create date, the last updated date, membership information, and the work team's Amazon Resource Name (ARN). """ def describe_workteam(client, input, options \\ []) do request(client, "DescribeWorkteam", input, options) end @doc """ Disassociates a trial component from a trial. This doesn't effect other trials the component is associated with. Before you can delete a component, you must disassociate the component from all trials it is associated with. To associate a trial component with a trial, call the `AssociateTrialComponent` API. To get a list of the trials a component is associated with, use the `Search` API. Specify `ExperimentTrialComponent` for the `Resource` parameter. The list appears in the response under `Results.TrialComponent.Parents`. """ def disassociate_trial_component(client, input, options \\ []) do request(client, "DisassociateTrialComponent", input, options) end @doc """ An auto-complete API for the search functionality in the Amazon SageMaker console. It returns suggestions of possible matches for the property name to use in `Search` queries. Provides suggestions for `HyperParameters`, `Tags`, and `Metrics`. """ def get_search_suggestions(client, input, options \\ []) do request(client, "GetSearchSuggestions", input, options) end @doc """ Lists the machine learning algorithms that have been created. """ def list_algorithms(client, input, options \\ []) do request(client, "ListAlgorithms", input, options) end @doc """ Lists apps. """ def list_apps(client, input, options \\ []) do request(client, "ListApps", input, options) end @doc """ Request a list of jobs. """ def list_auto_m_l_jobs(client, input, options \\ []) do request(client, "ListAutoMLJobs", input, options) end @doc """ List the Candidates created for the job. """ def list_candidates_for_auto_m_l_job(client, input, options \\ []) do request(client, "ListCandidatesForAutoMLJob", input, options) end @doc """ Gets a list of the Git repositories in your account. """ def list_code_repositories(client, input, options \\ []) do request(client, "ListCodeRepositories", input, options) end @doc """ Lists model compilation jobs that satisfy various filters. To create a model compilation job, use `CreateCompilationJob`. To get information about a particular model compilation job you have created, use `DescribeCompilationJob`. """ def list_compilation_jobs(client, input, options \\ []) do request(client, "ListCompilationJobs", input, options) end @doc """ Lists the domains. """ def list_domains(client, input, options \\ []) do request(client, "ListDomains", input, options) end @doc """ Lists endpoint configurations. """ def list_endpoint_configs(client, input, options \\ []) do request(client, "ListEndpointConfigs", input, options) end @doc """ Lists endpoints. 
""" def list_endpoints(client, input, options \\ []) do request(client, "ListEndpoints", input, options) end @doc """ Lists all the experiments in your account. The list can be filtered to show only experiments that were created in a specific time range. The list can be sorted by experiment name or creation time. """ def list_experiments(client, input, options \\ []) do request(client, "ListExperiments", input, options) end @doc """ Returns information about the flow definitions in your account. """ def list_flow_definitions(client, input, options \\ []) do request(client, "ListFlowDefinitions", input, options) end @doc """ Returns information about the human task user interfaces in your account. """ def list_human_task_uis(client, input, options \\ []) do request(client, "ListHumanTaskUis", input, options) end @doc """ Gets a list of `HyperParameterTuningJobSummary` objects that describe the hyperparameter tuning jobs launched in your account. """ def list_hyper_parameter_tuning_jobs(client, input, options \\ []) do request(client, "ListHyperParameterTuningJobs", input, options) end @doc """ Gets a list of labeling jobs. """ def list_labeling_jobs(client, input, options \\ []) do request(client, "ListLabelingJobs", input, options) end @doc """ Gets a list of labeling jobs assigned to a specified work team. """ def list_labeling_jobs_for_workteam(client, input, options \\ []) do request(client, "ListLabelingJobsForWorkteam", input, options) end @doc """ Lists the model packages that have been created. """ def list_model_packages(client, input, options \\ []) do request(client, "ListModelPackages", input, options) end @doc """ Lists models created with the `CreateModel` API. """ def list_models(client, input, options \\ []) do request(client, "ListModels", input, options) end @doc """ Returns list of all monitoring job executions. """ def list_monitoring_executions(client, input, options \\ []) do request(client, "ListMonitoringExecutions", input, options) end @doc """ Returns list of all monitoring schedules. """ def list_monitoring_schedules(client, input, options \\ []) do request(client, "ListMonitoringSchedules", input, options) end @doc """ Lists notebook instance lifestyle configurations created with the `CreateNotebookInstanceLifecycleConfig` API. """ def list_notebook_instance_lifecycle_configs(client, input, options \\ []) do request(client, "ListNotebookInstanceLifecycleConfigs", input, options) end @doc """ Returns a list of the Amazon SageMaker notebook instances in the requester's account in an AWS Region. """ def list_notebook_instances(client, input, options \\ []) do request(client, "ListNotebookInstances", input, options) end @doc """ Lists processing jobs that satisfy various filters. """ def list_processing_jobs(client, input, options \\ []) do request(client, "ListProcessingJobs", input, options) end @doc """ Gets a list of the work teams that you are subscribed to in the AWS Marketplace. The list may be empty if no work team satisfies the filter specified in the `NameContains` parameter. """ def list_subscribed_workteams(client, input, options \\ []) do request(client, "ListSubscribedWorkteams", input, options) end @doc """ Returns the tags for the specified Amazon SageMaker resource. """ def list_tags(client, input, options \\ []) do request(client, "ListTags", input, options) end @doc """ Lists training jobs. 
""" def list_training_jobs(client, input, options \\ []) do request(client, "ListTrainingJobs", input, options) end @doc """ Gets a list of `TrainingJobSummary` objects that describe the training jobs that a hyperparameter tuning job launched. """ def list_training_jobs_for_hyper_parameter_tuning_job(client, input, options \\ []) do request(client, "ListTrainingJobsForHyperParameterTuningJob", input, options) end @doc """ Lists transform jobs. """ def list_transform_jobs(client, input, options \\ []) do request(client, "ListTransformJobs", input, options) end @doc """ Lists the trial components in your account. You can sort the list by trial component name or creation time. You can filter the list to show only components that were created in a specific time range. You can also filter on one of the following: * `ExperimentName` * `SourceArn` * `TrialName` """ def list_trial_components(client, input, options \\ []) do request(client, "ListTrialComponents", input, options) end @doc """ Lists the trials in your account. Specify an experiment name to limit the list to the trials that are part of that experiment. Specify a trial component name to limit the list to the trials that associated with that trial component. The list can be filtered to show only trials that were created in a specific time range. The list can be sorted by trial name or creation time. """ def list_trials(client, input, options \\ []) do request(client, "ListTrials", input, options) end @doc """ Lists user profiles. """ def list_user_profiles(client, input, options \\ []) do request(client, "ListUserProfiles", input, options) end @doc """ Use this operation to list all private and vendor workforces in an AWS Region. Note that you can only have one private workforce per AWS Region. """ def list_workforces(client, input, options \\ []) do request(client, "ListWorkforces", input, options) end @doc """ Gets a list of private work teams that you have defined in a region. The list may be empty if no work team satisfies the filter specified in the `NameContains` parameter. """ def list_workteams(client, input, options \\ []) do request(client, "ListWorkteams", input, options) end @doc """ Renders the UI template so that you can preview the worker's experience. """ def render_ui_template(client, input, options \\ []) do request(client, "RenderUiTemplate", input, options) end @doc """ Finds Amazon SageMaker resources that match a search query. Matching resources are returned as a list of `SearchRecord` objects in the response. You can sort the search results by any resource property in a ascending or descending order. You can query against the following value types: numeric, text, Boolean, and timestamp. """ def search(client, input, options \\ []) do request(client, "Search", input, options) end @doc """ Starts a previously stopped monitoring schedule. New monitoring schedules are immediately started after creation. """ def start_monitoring_schedule(client, input, options \\ []) do request(client, "StartMonitoringSchedule", input, options) end @doc """ Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume. After configuring the notebook instance, Amazon SageMaker sets the notebook instance status to `InService`. A notebook instance's status must be `InService` before you can connect to your Jupyter notebook. 
""" def start_notebook_instance(client, input, options \\ []) do request(client, "StartNotebookInstance", input, options) end @doc """ A method for forcing the termination of a running job. """ def stop_auto_m_l_job(client, input, options \\ []) do request(client, "StopAutoMLJob", input, options) end @doc """ Stops a model compilation job. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal. This gracefully shuts the job down. If the job hasn't stopped, it sends the SIGKILL signal. When it receives a `StopCompilationJob` request, Amazon SageMaker changes the `CompilationJobSummary$CompilationJobStatus` of the job to `Stopping`. After Amazon SageMaker stops the job, it sets the `CompilationJobSummary$CompilationJobStatus` to `Stopped`. """ def stop_compilation_job(client, input, options \\ []) do request(client, "StopCompilationJob", input, options) end @doc """ Stops a running hyperparameter tuning job and all running training jobs that the tuning job launched. All model artifacts output from the training jobs are stored in Amazon Simple Storage Service (Amazon S3). All data that the training jobs write to Amazon CloudWatch Logs are still available in CloudWatch. After the tuning job moves to the `Stopped` state, it releases all reserved resources for the tuning job. """ def stop_hyper_parameter_tuning_job(client, input, options \\ []) do request(client, "StopHyperParameterTuningJob", input, options) end @doc """ Stops a running labeling job. A job that is stopped cannot be restarted. Any results obtained before the job is stopped are placed in the Amazon S3 output bucket. """ def stop_labeling_job(client, input, options \\ []) do request(client, "StopLabelingJob", input, options) end @doc """ Stops a previously started monitoring schedule. """ def stop_monitoring_schedule(client, input, options \\ []) do request(client, "StopMonitoringSchedule", input, options) end @doc """ Terminates the ML compute instance. Before terminating the instance, Amazon SageMaker disconnects the ML storage volume from it. Amazon SageMaker preserves the ML storage volume. Amazon SageMaker stops charging you for the ML compute instance when you call `StopNotebookInstance`. To access data on the ML storage volume for a notebook instance that has been terminated, call the `StartNotebookInstance` API. `StartNotebookInstance` launches another ML compute instance, configures it, and attaches the preserved ML storage volume so you can continue your work. """ def stop_notebook_instance(client, input, options \\ []) do request(client, "StopNotebookInstance", input, options) end @doc """ Stops a processing job. """ def stop_processing_job(client, input, options \\ []) do request(client, "StopProcessingJob", input, options) end @doc """ Stops a training job. To stop a job, Amazon SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms might use this 120-second window to save the model artifacts, so the results of the training is not lost. When it receives a `StopTrainingJob` request, Amazon SageMaker changes the status of the job to `Stopping`. After Amazon SageMaker stops the job, it sets the status to `Stopped`. """ def stop_training_job(client, input, options \\ []) do request(client, "StopTrainingJob", input, options) end @doc """ Stops a transform job. When Amazon SageMaker receives a `StopTransformJob` request, the status of the job changes to `Stopping`. After Amazon SageMaker stops the job, the status is set to `Stopped`. 
  When you stop a transform job before it is completed, Amazon SageMaker
  doesn't store the job's output in Amazon S3.
  """
  def stop_transform_job(client, input, options \\ []) do
    request(client, "StopTransformJob", input, options)
  end

  @doc """
  Updates the specified Git repository with the specified values.
  """
  def update_code_repository(client, input, options \\ []) do
    request(client, "UpdateCodeRepository", input, options)
  end

  @doc """
  Updates the default settings for new user profiles in the domain.
  """
  def update_domain(client, input, options \\ []) do
    request(client, "UpdateDomain", input, options)
  end

  @doc """
  Deploys the new `EndpointConfig` specified in the request, switches to
  using the newly created endpoint, and then deletes resources provisioned
  for the endpoint using the previous `EndpointConfig` (there is no
  availability loss).

  When Amazon SageMaker receives the request, it sets the endpoint status to
  `Updating`. After updating the endpoint, it sets the status to
  `InService`. To check the status of an endpoint, use the
  `DescribeEndpoint` API.

  You must not delete an `EndpointConfig` in use by an endpoint that is live
  or while the `UpdateEndpoint` or `CreateEndpoint` operations are being
  performed on the endpoint. To update an endpoint, you must create a new
  `EndpointConfig`.

  If you delete the `EndpointConfig` of an endpoint that is active or being
  created or updated, you may lose visibility into the instance type the
  endpoint is using. The endpoint must be deleted in order to stop incurring
  charges.
  """
  def update_endpoint(client, input, options \\ []) do
    request(client, "UpdateEndpoint", input, options)
  end

  @doc """
  Updates the variant weight of one or more variants associated with an
  existing endpoint, or the capacity of one variant associated with an
  existing endpoint.

  When it receives the request, Amazon SageMaker sets the endpoint status to
  `Updating`. After updating the endpoint, it sets the status to
  `InService`. To check the status of an endpoint, use the
  `DescribeEndpoint` API.
  """
  def update_endpoint_weights_and_capacities(client, input, options \\ []) do
    request(client, "UpdateEndpointWeightsAndCapacities", input, options)
  end

  @doc """
  Adds, updates, or removes the description of an experiment.

  Updates the display name of an experiment.
  """
  def update_experiment(client, input, options \\ []) do
    request(client, "UpdateExperiment", input, options)
  end

  @doc """
  Updates a previously created monitoring schedule.
  """
  def update_monitoring_schedule(client, input, options \\ []) do
    request(client, "UpdateMonitoringSchedule", input, options)
  end

  @doc """
  Updates a notebook instance.

  Notebook instance updates include upgrading or downgrading the ML compute
  instance used for your notebook instance to accommodate changes in your
  workload requirements.
  """
  def update_notebook_instance(client, input, options \\ []) do
    request(client, "UpdateNotebookInstance", input, options)
  end

  @doc """
  Updates a notebook instance lifecycle configuration created with the
  `CreateNotebookInstanceLifecycleConfig` API.
  """
  def update_notebook_instance_lifecycle_config(client, input, options \\ []) do
    request(client, "UpdateNotebookInstanceLifecycleConfig", input, options)
  end

  @doc """
  Updates the display name of a trial.
  """
  def update_trial(client, input, options \\ []) do
    request(client, "UpdateTrial", input, options)
  end

  @doc """
  Updates one or more properties of a trial component.
""" def update_trial_component(client, input, options \\ []) do request(client, "UpdateTrialComponent", input, options) end @doc """ Updates a user profile. """ def update_user_profile(client, input, options \\ []) do request(client, "UpdateUserProfile", input, options) end @doc """ Use this operation to update your workforce. You can use this operation to require that workers use specific IP addresses to work on tasks and to update your OpenID Connect (OIDC) Identity Provider (IdP) workforce configuration. Use `SourceIpConfig` to restrict worker access to tasks to a specific range of IP addresses. You specify allowed IP addresses by creating a list of up to ten [CIDRs](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html). By default, a workforce isn't restricted to specific IP addresses. If you specify a range of IP addresses, workers who attempt to access tasks using any IP address outside the specified range are denied and get a `Not Found` error message on the worker portal. Use `OidcConfig` to update the configuration of a workforce created using your own OIDC IdP. You can only update your OIDC IdP configuration when there are no work teams associated with your workforce. You can delete work teams using the operation. After restricting access to a range of IP addresses or updating your OIDC IdP configuration with this operation, you can view details about your update workforce using the operation. This operation only applies to private workforces. """ def update_workforce(client, input, options \\ []) do request(client, "UpdateWorkforce", input, options) end @doc """ Updates an existing work team with new member definitions or description. """ def update_workteam(client, input, options \\ []) do request(client, "UpdateWorkteam", input, options) end @spec request(AWS.Client.t(), binary(), map(), list()) :: {:ok, map() | nil, map()} | {:error, term()} defp request(client, action, input, options) do client = %{client | service: "sagemaker"} host = build_host("api.sagemaker", client) url = build_url(host, client) headers = [ {"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}, {"X-Amz-Target", "SageMaker.#{action}"} ] payload = encode!(client, input) headers = AWS.Request.sign_v4(client, "POST", url, headers, payload) post(client, url, payload, headers, options) end defp post(client, url, payload, headers, options) do case AWS.Client.request(client, :post, url, payload, headers, options) do {:ok, %{status_code: 200, body: body} = response} -> body = if body != "", do: decode!(client, body) {:ok, body, response} {:ok, response} -> {:error, {:unexpected_response, response}} error = {:error, _reason} -> error end end defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do endpoint end defp build_host(_endpoint_prefix, %{region: "local"}) do "localhost" end defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do "#{endpoint_prefix}.#{region}.#{endpoint}" end defp build_url(host, %{:proto => proto, :port => port}) do "#{proto}://#{host}:#{port}/" end defp encode!(client, payload) do AWS.Client.encode!(client, payload, :json) end defp decode!(client, payload) do AWS.Client.decode!(client, payload, :json) end end
lib/aws/generated/sage_maker.ex
0.885347
0.652241
sage_maker.ex
starcoder
defmodule Support.Generators do import ExUnitProperties import StreamData alias Hoplon.Data require Hoplon.Data require Record def input_package() do gen all ecosystem <- frequency([{5, proper_string()}, {1, constant(:asn1_DEFAULT)}]), name <- proper_string(), version <- proper_string(), hash <- proper_string() do Data.package(ecosystem: ecosystem, name: name, version: version, hash: hash) end end def input_audit() do gen all package <- input_package(), v <- optional(verdict()), comment <- optional(proper_string()), public_key_fingerprint <- proper_string(), created_at <- integer(), audited_by_author <- boolean() do Data.audit( package: package, verdict: v, comment: comment, publicKeyFingerprint: public_key_fingerprint, createdAt: created_at, auditedByAuthor: audited_by_author ) end end def input_signed_audit() do gen all audit <- input_audit(), signature <- binary() do Data.signed_audit(audit: audit, signature: signature) end end def verdict do one_of(~w{dangerous suspicious lgtm safe}a) end def optional(gen) do frequency([{4, gen}, {1, constant(:asn1_NOVALUE)}]) end def has_default_values?(record) when is_tuple(record) do record |> Tuple.to_list() |> Enum.any?(&(&1 == :asn1_DEFAULT)) end def proper_string() do string(:printable) end def fill_in_defaults(list) when is_list(list) do Enum.map(list, &fill_in_defaults/1) end def fill_in_defaults(package) when Record.is_record(package, :Package) do case Data.package(package, :ecosystem) do :asn1_DEFAULT -> Data.package(package, ecosystem: "hexpm") _ -> package end end def fill_in_defaults(audit) when Record.is_record(audit, :Audit) do package = Data.audit(audit, :package) package = fill_in_defaults(package) Data.audit(audit, package: package) end def fill_in_defaults(signed_audit) when Record.is_record(signed_audit, :SignedAudit) do audit = Data.signed_audit(signed_audit, :audit) audit = fill_in_defaults(audit) Data.signed_audit(signed_audit, audit: audit) end def fill_in_defaults(other) do other end end
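# A sketch of how these generators could be used in a property-based test
# (the property name and surrounding test module are hypothetical):
#
#     use ExUnitProperties
#     import Support.Generators
#     require Hoplon.Data
#
#     property "fill_in_defaults/1 replaces :asn1_DEFAULT ecosystems" do
#       check all audit <- input_audit() do
#         filled = fill_in_defaults(audit)
#         refute has_default_values?(Hoplon.Data.audit(filled, :package))
#       end
#     end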
test/support/generators.ex
0.522446
0.456713
generators.ex
starcoder
defmodule Bargad.LogClient do @moduledoc """ Client APIs for `Bargad.Log`. This module is automatically started on application start. The request in each API has to be of the form `t:request/0`. Look into the corresponding handler of each request for the exact arguments to be supplied. """ use GenServer ## Client API @type response :: Bargad.Types.tree | Bargad.Types.audit_proof | Bargad.Types.consistency_proof | boolean @type request :: tuple @doc """ Starts the `Bargad.LogClient`. Provides an API layer for operations on `Bargad.Log`. """ def start_link(opts) do GenServer.start_link(__MODULE__, :ok, opts) end @doc """ Creates a new `Log`. The event handler for this request calls `Bargad.Log.new/3` with the specified arguments. """ @spec new(request) :: Bargad.Types.tree def new(args) do GenServer.call(Bargad.LogClient, {:new, args}) end @doc """ Builds a new `Log` with the provided list of values. The event handler for this request calls `Bargad.Log.build/4` with the specified arguments. """ @spec build(request) :: Bargad.Types.tree def build(args) do GenServer.call(Bargad.LogClient, {:build, args}) end @doc """ Appends a new value into the `Log`. The event handler for this request calls `Bargad.Log.insert/2` with the specified arguments. """ @spec append(request) :: Bargad.Types.tree def append(args) do GenServer.call(Bargad.LogClient, {:insert, args}) end @doc """ Generates an audit proof from the `Log` for the specified value. The event handler for this request calls `Bargad.Log.audit_proof/2` with the specified arguments. """ @spec generate_audit_proof(request) :: Bargad.Types.audit_proof def generate_audit_proof(args) do GenServer.call(Bargad.LogClient, {:audit_proof, args}) end @doc """ Generates a consistency proof for the first M leaves appended in the `Log`. The event handler for this request calls `Bargad.Log.consistency_proof/2` with the specified arguments. """ @spec generate_consistency_proof(request) :: Bargad.Types.consistency_proof def generate_consistency_proof(args) do GenServer.call(Bargad.LogClient, {:consistency_proof, args}) end @doc """ Verifies the generated consistency proof. The event handler for this request calls `Bargad.Log.verify_consistency_proof/3` with the specified arguments. """ @spec verify_consistency_proof(request) :: boolean def verify_consistency_proof(args) do GenServer.call(Bargad.LogClient, {:verify_consistency_proof, args}) end @doc """ Verifies the generated audit proof. The event handler for this request calls `Bargad.Log.verify_audit_proof/2` with the specified arguments. """ @spec verify_audit_proof(request) :: boolean def verify_audit_proof(args) do GenServer.call(Bargad.LogClient, {:verify_audit_proof, args}) end ## Server Callbacks @doc false def init(:ok) do {:ok, %{}} end @doc false def handle_call({operation, args}, _from, state) do args = Tuple.to_list(args) result = apply(Bargad.Log, operation, args) {:reply, result, state} end end
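# A usage sketch. As documented above, each client call wraps the target
# `Bargad.Log` function's arguments in a tuple; the argument names here are
# illustrative, since the exact parameters are defined in `Bargad.Log`:
#
#     tree  = Bargad.LogClient.new({log_name, backend, hash_algorithm})
#     tree  = Bargad.LogClient.append({tree, "some value"})
#     proof = Bargad.LogClient.generate_audit_proof({tree, "some value"})
#     true  = Bargad.LogClient.verify_audit_proof({tree, proof})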
lib/bargad_log_client.ex
0.852844
0.415373
bargad_log_client.ex
starcoder
defmodule AutoApi.Capability do @moduledoc """ Capability behaviour """ alias AutoApi.{CapabilityHelper, UniversalProperties} defmacro __using__(spec_file: spec_file) do spec_path = Path.join(["specs", "capabilities", spec_file]) raw_spec = Jason.decode!(File.read!(spec_path)) universal_properties = UniversalProperties.raw_spec()["universal_properties"] properties = (raw_spec["properties"] || []) ++ universal_properties base_functions = quote location: :keep do @external_resource unquote(spec_path) @raw_spec unquote(Macro.escape(raw_spec)) @identifier <<@raw_spec["identifier"]["msb"], @raw_spec["identifier"]["lsb"]>> @name String.to_atom(@raw_spec["name"]) if @raw_spec["name_pretty"] do @desc @raw_spec["name_pretty"] else @desc @raw_spec["name"] |> String.split("_") |> Enum.map(&String.capitalize/1) |> Enum.join(" ") end @properties unquote(Macro.escape(properties)) |> Enum.map(fn prop -> {prop["id"], String.to_atom(prop["name"])} end) @setters CapabilityHelper.extract_setters_data(@raw_spec) @state_properties CapabilityHelper.extract_state_properties(@raw_spec) @first_property unquote(Macro.escape(properties)) |> Enum.map(fn prop -> {prop["id"], String.to_atom(prop["name"]), prop["multiple"] || false} end) |> List.first() @doc false @spec raw_spec() :: map() def raw_spec, do: @raw_spec @doc """ Returns the command module related to this capability """ @spec command :: atom def command, do: @command_module @doc """ Returns the state module related to this capability """ # @spec state() :: atom def state, do: @state_module @doc """ Returns which properties are included in the State specification. Universal properties are always included ## Examples iex> AutoApi.SeatsCapability.state_properties() [:persons_detected, :seatbelts_state, :nonce, :vehicle_signature, :timestamp, :vin, :brand] iex> AutoApi.WakeUpCapability.state_properties() [:nonce, :vehicle_signature, :timestamp, :vin, :brand] """ @spec state_properties() :: list(atom) def state_properties(), do: @state_properties @doc """ Retunrs capability's identifier: #{inspect @identifier, base: :hex} """ @spec identifier :: binary def identifier, do: @identifier @doc """ Returns capability's unique name: #{@name} """ @spec name :: atom def name, do: @name @doc """ Returns capability's description: #{@desc} """ @spec description :: String.t() def description, do: @desc @doc """ Returns properties under #{@desc}: ``` #{inspect @properties, base: :hex} ``` """ @spec properties :: list(tuple()) def properties, do: @properties @doc """ Returns the list of setters defined for the capability. The list is a `Keyword` with the setter name as a key and as value a tuple with three elements: 1. _mandatory_ properties 2. _optional_ properties 3. _constants_ ## Example iex> #{inspect __MODULE__}.setters() #{inspect @setters} """ @spec setters() :: keyword({list(atom), list(atom), keyword(binary)}) def setters(), do: @setters @doc """ Returns the ID of a property given its name. ## Example iex> #{inspect __MODULE__}.property_id(#{inspect elem(@first_property, 1)}) #{inspect elem(@first_property, 0)} """ @spec property_id(atom()) :: integer() def property_id(name) @doc """ Returns the name of a property given its ID. 
## Example iex> #{inspect __MODULE__}.property_name(#{inspect elem(@first_property, 0)}) #{inspect elem(@first_property, 1)} """ @spec property_name(integer()) :: atom() def property_name(id) @doc false @spec property_spec(atom()) :: map() def property_spec(name) @doc """ Returns whether the property is multiple, that is if it can contain multiple values. ## Example iex> #{inspect __MODULE__}.multiple?(#{inspect elem(@first_property, 1)}) #{inspect elem(@first_property, 2)} """ @spec multiple?(atom()) :: boolean() def multiple?(name) end property_functions = for prop <- properties do prop_id = prop["id"] prop_name = String.to_atom(prop["name"]) multiple? = prop["multiple"] || false quote location: :keep do def property_id(unquote(prop_name)), do: unquote(prop_id) def property_name(unquote(prop_id)), do: unquote(prop_name) def property_spec(unquote(prop_name)), do: unquote(Macro.escape(prop)) def multiple?(unquote(prop_name)), do: unquote(multiple?) end end [base_functions, property_functions] end @doc """ Returns full capabilities with all of them marked as disabled ie> <<cap_len, first_cap :: binary-size(3), _::binary>> = AutoApi.Capability.blank_capabilities ie> cap_len 8 ie> first_cap <<0, 0x20, 0>> """ def blank_capabilities do caps_len = length(all()) for cap_module <- all() do iden = apply(cap_module, :identifier, []) cap_module |> apply(:capabilities, []) |> Enum.map(fn _ -> <<0>> end) |> Enum.reduce(iden, fn i, x -> x <> i end) end |> Enum.reduce(<<caps_len>>, fn i, x -> x <> i end) end @doc """ Returns a list of all capability modules. ## Examples iex> capabilities = AutoApi.Capability.all() iex> length(capabilities) 55 iex> List.first(capabilities) AutoApi.BrowserCapability """ @spec all() :: list(module) defdelegate all(), to: AutoApi.Capability.Delegate @doc """ Returns a capability module by its binary id. Returns `nil` if there is no capability with the given id. ## Examples iex> AutoApi.Capability.get_by_id(<<0x00, 0x35>>) AutoApi.IgnitionCapability iex> AutoApi.Capability.get_by_id(<<0xDE, 0xAD>>) nil """ @spec get_by_id(binary) :: module | nil defdelegate get_by_id(id), to: AutoApi.Capability.Delegate @doc """ Returns a capability module by its name. The name can be specified either as an atom or a string. Returns `nil` if there is no capability with the given name. ## Examples iex> AutoApi.Capability.get_by_name("doors") AutoApi.DoorsCapability iex> AutoApi.Capability.get_by_name(:wake_up) AutoApi.WakeUpCapability iex> AutoApi.Capability.get_by_name("Nobody") nil """ @spec get_by_name(binary | atom) :: module | nil defdelegate get_by_name(name), to: AutoApi.Capability.Delegate end
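# A sketch of how a concrete capability module would use this behaviour. The
# spec file name and return values are illustrative, and `@command_module` /
# `@state_module` are expected to be set elsewhere in the real project:
#
#     defmodule AutoApi.DoorsCapability do
#       use AutoApi.Capability, spec_file: "doors.json"
#     end
#
#     AutoApi.DoorsCapability.name()        #=> :doors
#     AutoApi.DoorsCapability.identifier()  #=> <<0x00, 0x20>>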
lib/auto_api/capability.ex
0.871816
0.655253
capability.ex
starcoder
defmodule Plot.Subscriber do use GenServer, restart: :temporary require Decimal import Gnuplot require Logger alias Decimal, as: D D.Context.set(%D.Context{D.Context.get() | precision: 9}) defmodule State do defstruct symbol: None, trades: Deque.new(1_000_000), ma_short: Deque.new(1_000_000), ma_long: Deque.new(1_000_000), ma_trend: Deque.new(1_000_000), trades_bucket: %{ts: 0, price: 0, count: 1}, short_bucket: %{ts: 0, price: 0, count: 1}, long_bucket: %{ts: 0, price: 0, count: 1}, trend_bucket: %{ts: 0, price: 0, count: 1} end defp set_tick() do _timer = Process.send_after(self(), :plot, 5_000) end def start_link(symbol) do Logger.notice("Starting link: #{__MODULE__}-#{symbol}") GenServer.start_link( __MODULE__, %State{symbol: symbol}, name: :"#{__MODULE__}-#{symbol}" ) end def init(%State{symbol: symbol, trades: _trades} = state) do symbol = String.downcase(symbol) Streamer.start_streaming(symbol) Phoenix.PubSub.subscribe( Streamer.PubSub, "trade_events:#{symbol}" ) Phoenix.PubSub.subscribe( Streamer.PubSub, "ma_events:#{symbol}" ) set_tick() {:ok, state} end def handle_info(:plot, state) do dt = DateTime.add(DateTime.now!("Etc/UTC"), -(12 * 3600), :second) plots_data = [ [ [ "-", :with, :lines, :title, "Binance trade since #{DateTime.to_iso8601(dt)}" ], state.trades ], [ ["-", :with, :lines, :title, "MA short"], state.ma_short ], [ ["-", :with, :lines, :title, "MA long"], state.ma_long ], [ ["-", :with, :lines, :title, "MA trend"], state.ma_trend ] ] available = Enum.filter(plots_data, fn [_title, data] -> data.size > 1 end) |> Enum.reduce(%{titles: [], data: []}, fn [title, points], m -> %{m | titles: m.titles ++ [title], data: m.data ++ [Enum.to_list(points)]} end) if state.trades.size > 0 do try do _stat = plot( [ [:set, :term, :pngcairo], [:set, :output, "/tmp/#{state.symbol}.png"], [:set, :title, "#{state.symbol}"], [:set, :key, :left, :top], plots(available.titles) ], available.data ) rescue e in MatchError -> "Data: #{inspect(e)}" end end _timer = set_tick() {:noreply, state} end def handle_info( %{short_ma: short_ma, long_ma: long_ma, trend_ma: trend_ma, ts: ts}, state ) do ts = ts * 1000 {short, short_bucket} = event_append(state.ma_short, state.short_bucket, ts, D.to_float(short_ma)) {long, long_bucket} = event_append(state.ma_long, state.long_bucket, ts, D.to_float(long_ma)) {trend, trend_bucket} = event_append(state.ma_trend, state.trend_bucket, ts, D.to_float(trend_ma)) dt = DateTime.add(DateTime.now!("Etc/UTC"), -(12 * 3600), :second) ts = DateTime.to_unix(dt, :second) ma_short = drop_while(short, fn [time, _price] -> time < ts end) ma_long = drop_while(long, fn [time, _price] -> time < ts end) ma_trend = drop_while(trend, fn [time, _price] -> time < ts end) new_state = %{ state | ma_short: ma_short, ma_long: ma_long, ma_trend: ma_trend, short_bucket: short_bucket, long_bucket: long_bucket, trend_bucket: trend_bucket } {:noreply, new_state} end def handle_info(%Streamer.Binance.TradeEvent{trade_time: t_time, price: price}, state) do {price, _} = Float.parse(price) {t, bucket} = event_append(state.trades, state.trades_bucket, t_time, price) dt = DateTime.add(DateTime.now!("Etc/UTC"), -(12 * 3600), :second) ts = DateTime.to_unix(dt, :second) trades = drop_while(t, fn [time, _price] -> time < ts end) new_state = %{state | trades: trades, trades_bucket: bucket} {:noreply, new_state} end def handle_info(msg, state) do case msg do {_port, {:exit_status, _}} -> None msg_umatch -> Logger.warn("#{inspect(state.symbol)} - #{inspect(msg_umatch)}") end {:noreply, state} end defp 
event_append(coll, bucket, ts, price) do
    # convert from milli to seconds
    current_ts = div(ts, 1000)

    {new_coll, bucket} =
      cond do
        current_ts != bucket.ts ->
          {Deque.append(coll, [bucket.ts, bucket.price / bucket.count]),
           %{ts: current_ts, price: price, count: 1}}

        current_ts == bucket.ts ->
          {coll, %{bucket | price: price + bucket.price, count: bucket.count + 1}}
      end

    {new_coll, bucket}
  end

  # Drops elements from the front of the deque while `fun` returns true;
  # returns the deque unchanged as soon as the predicate fails or the deque
  # is exhausted.
  def drop_while(deque, fun) do
    case Deque.popleft(deque) do
      {nil, _rest} -> deque
      {x, rest} -> if fun.(x), do: drop_while(rest, fun), else: deque
    end
  end
end
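# A usage sketch (the symbol is illustrative): starting a subscriber begins
# streaming trades for that symbol and re-renders /tmp/<symbol>.png via
# gnuplot every five seconds.
#
#     {:ok, _pid} = Plot.Subscriber.start_link("XRPUSDT")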
apps/plot/lib/plot/subscriber.ex
0.579757
0.530845
subscriber.ex
starcoder
defmodule Cpfcnpj do
  @moduledoc """
  Module responsible for performing all the validation calculations.

  ## Examples

      iex>Cpfcnpj.valid?({:cnpj,"69.103.604/0001-60"})
      true

      iex>Cpfcnpj.valid?({:cpf,"111.444.777-35"})
      true

  Values are validated with or without their special characters.
  """

  @division 11

  @cpf_length 11
  @cpf_algs_1 [10, 9, 8, 7, 6, 5, 4, 3, 2, 0, 0]
  @cpf_algs_2 [11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 0]
  @cpf_regex ~r/(\d{3})?(\d{3})?(\d{3})?(\d{2})/

  @cnpj_length 14
  @cnpj_algs_1 [5, 4, 3, 2, 9, 8, 7, 6, 5, 4, 3, 2, 0, 0]
  @cnpj_algs_2 [6, 5, 4, 3, 2, 9, 8, 7, 6, 5, 4, 3, 2, 0]
  @cnpj_regex ~r/(\d{2})?(\d{3})?(\d{3})?(\d{4})?(\d{2})/

  @doc """
  Validates a CPF/CNPJ; special characters are not taken into account.

  ## Examples

      iex>Cpfcnpj.valid?({:cnpj,"69.103.604/0001-60"})
      true
  """
  @spec valid?({:cpf | :cnpj, String.t()}) :: boolean()
  def valid?(number_in) do
    check_number(number_in) and type_checker(number_in) and special_checker(number_in)
  end

  defp check_number({_, nil}) do
    false
  end

  # Checks the length and whether all digits are equal
  defp check_number(tp_cpfcnpj) do
    cpfcnpj = String.replace(elem(tp_cpfcnpj, 1), ~r/[\.\/-]/, "")

    all_equal? =
      cpfcnpj
      |> String.replace(String.at(cpfcnpj, 0), "")
      |> String.length()
      |> Kernel.==(0)

    correct_length? =
      case tp_cpfcnpj do
        {:cpf, _} -> String.length(cpfcnpj) == @cpf_length
        {:cnpj, _} -> String.length(cpfcnpj) == @cnpj_length
      end

    correct_length? and not all_equal?
  end

  # Checks the validation digits
  defp type_checker(tp_cpfcnpj) do
    cpfcnpj = String.replace(elem(tp_cpfcnpj, 1), ~r/[^0-9]/, "")

    first_char_valid = character_valid(cpfcnpj, {elem(tp_cpfcnpj, 0), :first})
    second_char_valid = character_valid(cpfcnpj, {elem(tp_cpfcnpj, 0), :second})

    verif = first_char_valid <> second_char_valid
    verif == String.slice(cpfcnpj, -2, 2)
  end

  # Checks special cases
  defp special_checker({:cpf, _}) do
    true
  end

  defp special_checker(tp_cpfcnpj = {:cnpj, _}) do
    cnpj = String.replace(elem(tp_cpfcnpj, 1), ~r/[\.\/-]/, "")

    order = String.slice(cnpj, 8..11)
    first_three_digits = String.slice(cnpj, 0..2)
    basic = String.slice(cnpj, 0..7)

    cond do
      order == "0000" ->
        false

      String.to_integer(order) > 300 and first_three_digits == "000" and basic != "00000000" ->
        false

      true ->
        true
    end
  end

  defp mult_sum(algs, cpfcnpj) do
    mult =
      cpfcnpj
      |> String.codepoints()
      |> Enum.with_index()
      |> Enum.map(fn {k, v} -> String.to_integer(k) * Enum.at(algs, v) end)

    Enum.reduce(mult, 0, &+/2)
  end

  defp character_calc(remainder) do
    if remainder < 2, do: 0, else: @division - remainder
  end

  defp character_valid(cpfcnpj, valid_type) do
    array =
      case valid_type do
        {:cpf, :first} -> @cpf_algs_1
        {:cnpj, :first} -> @cnpj_algs_1
        {:cpf, :second} -> @cpf_algs_2
        {:cnpj, :second} -> @cnpj_algs_2
      end

    mult_sum(array, cpfcnpj)
    |> rem(@division)
    |> character_calc
    |> Integer.to_string()
  end

  @doc """
  Validates the CPF/CNPJ and returns it as a formatted string.
  Returns `nil` if it is invalid.

  ## Examples

      iex> Cpfcnpj.format_number({:cnpj,"69.103.604/0001-60"})
      "69.103.604/0001-60"
  """
  @spec format_number({:cpf | :cnpj, String.t()}) :: String.t() | nil
  def format_number(number_in) do
    if valid?(number_in) do
      tp_cpfcnpj = {elem(number_in, 0), String.replace(elem(number_in, 1), ~r/[^0-9]/, "")}

      case tp_cpfcnpj do
        {:cpf, cpf} -> Regex.replace(@cpf_regex, cpf, "\\1.\\2.\\3-\\4")
        {:cnpj, cnpj} -> Regex.replace(@cnpj_regex, cnpj, "\\1.\\2.\\3/\\4-\\5")
      end
    else
      nil
    end
  end

  @doc """
  Generates a CPF/CNPJ, with the check digits appended.
""" @spec generate(:cpf | :cnpj) :: String.t() def generate(tp_cpfcnpj) do numbers = random_numbers(tp_cpfcnpj) first_valid_char = character_valid(numbers, {tp_cpfcnpj, :first}) second_valid_char = character_valid(numbers <> first_valid_char, {tp_cpfcnpj, :second}) result = numbers <> first_valid_char <> second_valid_char # Chance de gerar um inválido seguindo esse algoritmo é baixa o suficiente que # vale a pena simplesmente retentar caso o resultado for inválido if valid?({tp_cpfcnpj, result}) do result else generate(tp_cpfcnpj) end end defp random_numbers(tp_cpfcnpj) do random_digit_generator = fn -> Enum.random(0..9) end random_digit_generator |> Stream.repeatedly() |> Enum.take(if(tp_cpfcnpj == :cpf, do: @cpf_length, else: @cnpj_length) - 2) |> Enum.join() end end
lib/cpfcnpj.ex
0.708213
0.441974
cpfcnpj.ex
starcoder
defmodule TypedEctoSchema do
  @moduledoc """
  TypedEctoSchema provides a DSL on top of `Ecto.Schema` to define schemas
  with typespecs without all the boilerplate code.

  ## Rationale

  Normally, when defining an `Ecto.Schema` you probably want to define:

  * the schema itself
  * the list of enforced keys (which helps reducing problems)
  * its associated type (`Ecto.Schema` doesn't define it for you)

  It ends up in something like this:

      defmodule Person do
        use Ecto.Schema

        @enforce_keys [:name]

        schema "people" do
          field(:name, :string)
          field(:age, :integer)
          field(:happy, :boolean, default: true)
          field(:phone, :string)
          belongs_to(:company, Company)
          timestamps(type: :naive_datetime_usec)
        end

        @type t() :: %__MODULE__{
                __meta__: Ecto.Schema.Metadata.t(),
                id: integer() | nil,
                name: String.t(),
                age: non_neg_integer() | nil,
                happy: boolean(),
                phone: String.t() | nil,
                company_id: integer() | nil,
                company: Company.t() | Ecto.Association.NotLoaded.t() | nil,
                inserted_at: NaiveDateTime.t(),
                updated_at: NaiveDateTime.t()
              }
      end

  This is problematic for a lot of reasons. Summing up:

  - A lot of repetition. Field names appear in 3 different places, so in
  order to understand one field, a reader needs to go up and down the code
  to get that.
  - Ecto has some "hidden" fields that are added behind the scenes to the
  struct, such as the primary key `id`, the foreign key `company_id`, the
  timestamps and the `__meta__` field for schemas. Knowing all those rules
  can be hard to remember and would probably be easily forgotten when
  changing the schema. Also, Ecto has strange types for associations and
  metadata that need to be remembered.

  All of this makes this process extremely repetitive and error prone.

  Sometimes, when you want to enforce factory functions to control defaults
  in a better way, you would probably add all fields to `@enforce_keys`.
  This would make the `@enforce_keys` big and repetitive, once again.

  This module aims to help with that, by providing some syntax sugar that
  allows you to define this in a more compact way.

      defmodule Person do
        use TypedEctoSchema

        typed_schema "people" do
          field(:name, :string, enforce: true, null: false)
          field(:age, :integer) :: non_neg_integer() | nil
          field(:happy, :boolean, default: true, null: false)
          field(:phone, :string)
          belongs_to(:company, Company)
          timestamps(type: :naive_datetime_usec)
        end
      end

  This is way simpler and less error prone. There is a lot going on under
  the hood here.

  ## Extra Options

  All Ecto macros are called under the hood with the options you pass, with
  the exception of a few added options:

  - `:null` - when `true`, adds a `| nil` to the typespec. Default is
  `true`. Has no effect on `has_one/3` because it can always be `nil`. On
  `belongs_to/3` it only adds `| nil` to the underlying foreign key.
  - `:enforce` - when `true`, adds the field to the `@enforce_keys`. Default
  is `false`.

  ## Schema Options

  When calling `typed_schema/3` or `typed_embedded_schema/2` you can pass
  some options, as defined:

  - `:null` - Sets the default for the `:null` field option, which normally
  is `true`. Note that it can still be overridden by passing `:null` to the
  field itself. Also, `embeds_many` and `has_many` can never be null,
  because they are always initialized to an empty list, so they never
  receive the `| nil` on the typespec. In addition to that, `has_one/3` and
  `belongs_to/3` always receive `| nil` because the related schema may be
  deleted from the repo, so it is safe to always assume they can be `nil`.
  - `:enforce` - When `true`, enforces all fields unless they explicitly set
  `enforce: false` or define a default (`default: value`), since it makes no
  sense to have a default value for an enforced field.
  - `:opaque` - When `true`, makes the generated type `t` an opaque type.

  ## Type Inference

  TypedEctoSchema does its best to guess the typespec for the field. It does
  so by following the Elixir types as defined in
  [`Ecto.Schema`](https://hexdocs.pm/ecto/Ecto.Schema.html#module-primitive-types).

  For a custom `Ecto.Type` and related schemas (embedded and associations),
  which are always a module, it assumes the schema has a type `t/0` defined,
  so for a schema called `MySchema`, it will assume the type is
  `MySchema.t/0`, which is also the default type generated by this library.

  ## Overriding the Typespec for a field

  If for some reason you want to narrow the type, or the automatic type
  inference is incorrect, the `::` operator allows the typespec to be
  overridden. This is done as you would when defining typespecs.

  So, for example, instead of

  ```elixir
  field(:my_int, :integer)
  ```

  Which would generate an `integer() | nil` typespec, you can:

  ```elixir
  field(:my_int, :integer) :: non_neg_integer() | nil
  ```

  And then have a `non_neg_integer() | nil` type for it.

  ## Non-explicitly generated fields

  Ecto generates some fields for you in a lot of cases. They are:

  - For primary keys
  - When using a `belongs_to/3`
  - When calling `timestamps/1`

  The `__meta__` typespec is automatically generated and cannot be
  overridden. That is because there is no point in overriding it.

  ### Primary Keys

  Primary keys are generated by default and can be customized by the
  `@primary_key` module attribute, just as defined by Ecto. We handle
  `@primary_key` the same way we handle `field/3`, so you can pass the same
  field options to it. However, if you want to customize the type, you need
  to set `@primary_key false` and define a field with `primary_key: true`.

  ### Belongs To

  `belongs_to` generates an underlying foreign key that is dependent on a
  few Ecto options, as defined on
  [`Ecto.Schema`](https://hexdocs.pm/ecto/Ecto.Schema.html#belongs_to/3-options).
  The options we are interested in are `:foreign_key`, `:define_field` and
  `:type`.

  When `:null` is passed, it will add `| nil` to the generated
  `foreign_key`'s typespec.

  The `:enforce` option enforces the association field instead. If you want
  to `:enforce` the foreign key to be set, you should probably pass
  `define_field: false` and define the foreign key by hand, setting another
  `field/3`, the same way as described by Ecto's doc.

  ### Timestamps

  In the case of the timestamps, we currently don't allow overriding the
  type by using the `::` operator.
That being said, however, we define the type of the fields using the `:type` option ([as defined by Ecto doc](https://hexdocs.pm/ecto/Ecto.Schema.html#timestamps/1-options)) """ alias TypedEctoSchema.SyntaxSugar alias TypedEctoSchema.TypeBuilder @doc false defmacro __using__(_) do quote do import TypedEctoSchema, only: [ typed_embedded_schema: 1, typed_embedded_schema: 2, typed_schema: 2, typed_schema: 3 ] use Ecto.Schema end end @doc """ Replaces `Ecto.Schema.embedded_schema/1` """ defmacro typed_embedded_schema(opts \\ [], do: block) do quote do unquote(prelude(opts)) Ecto.Schema.embedded_schema do unquote(inner(block, __CALLER__)) end unquote(postlude(opts)) end end @doc """ Replaces `Ecto.Schema.schema/2` """ defmacro typed_schema(table_name, opts \\ [], do: block) do quote do unquote(prelude(opts)) unquote(TypeBuilder).add_meta(__MODULE__) Ecto.Schema.schema unquote(table_name) do unquote(inner(block, __CALLER__)) end unquote(postlude(opts)) end end defp prelude(opts) do quote do require unquote(TypeBuilder) unquote(TypeBuilder).init(unquote(opts)) end end defp inner(block, env) do quote do unquote(TypeBuilder).add_primary_key(__MODULE__) unquote(SyntaxSugar.apply_to_block(block, env)) unquote(TypeBuilder).enforce_keys() end end defp postlude(opts) do quote do unquote(TypeBuilder).define_type(unquote(opts)) end end end
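# A sketch of the custom primary key case described in the moduledoc above:
# disable the default key and declare one as a regular field (the schema and
# field names here are illustrative):
#
#     defmodule Item do
#       use TypedEctoSchema
#
#       @primary_key false
#       typed_schema "items" do
#         field(:uuid, Ecto.UUID, primary_key: true, null: false)
#         field(:label, :string)
#       end
#     end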
lib/typed_ecto_schema.ex
0.886445
0.835886
typed_ecto_schema.ex
starcoder
defmodule Numerix.Optimization do
  @moduledoc """
  Optimization algorithms to select the best element from a set of possible
  solutions.
  """

  @default_opts [population_size: 50, mutation_prob: 0.2, elite_fraction: 0.2, iterations: 100]

  @doc """
  Genetic algorithm to find the solution with the lowest cost where `domain`
  is a set of all possible values (i.e. ranges) in the solution and
  `cost_fun` determines how optimal each solution is.

  ## Example

      iex> domain = [0..9] |> Stream.cycle |> Enum.take(10)
      iex> cost_fun = fn(x) -> Enum.sum(x) end
      iex> Numerix.Optimization.genetic(domain, cost_fun)
      [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

  ## Options

  * `:population_size` - the size of population to draw the solutions from
  * `:mutation_prob` - the minimum probability that decides if mutation should occur
  * `:elite_fraction` - the percentage of population that will form the elite group in each generation
  * `:iterations` - the maximum number of generations to evolve the solutions
  """
  @spec genetic([Range.t()], ([integer] -> number), Keyword.t()) :: [integer]
  @lint [{Credo.Check.Refactor.Nesting, false}]
  def genetic(domain, cost_fun, opts \\ []) do
    merged_opts = Keyword.merge(@default_opts, opts)
    top_elite = round(merged_opts[:elite_fraction] * merged_opts[:population_size])
    population = init_population(domain, merged_opts[:population_size])

    evolve = fn
      [best | _rest], 0, _fun ->
        best

      pop, iteration, fun ->
        Stream.repeatedly(fn ->
          if :rand.uniform() < merged_opts[:mutation_prob] do
            pop |> mutate(domain, top_elite)
          else
            pop |> crossover(top_elite)
          end
        end)
        |> Stream.take(merged_opts[:population_size])
        |> Stream.concat(pop |> Enum.take(top_elite))
        |> Stream.map(&{cost_fun.(&1), &1})
        |> Enum.sort()
        |> Enum.map(fn {_cost, solution} -> solution end)
        |> fun.(iteration - 1, fun)
    end

    evolve.(population, merged_opts[:iterations], evolve)
  end

  defp init_population(domain, size) do
    Stream.repeatedly(fn -> Enum.map(domain, &Enum.random/1) end)
    |> Enum.take(size)
  end

  defp crossover(population, top_elite) do
    vector1 = population |> Enum.at(Enum.random(0..top_elite))
    vector2 = population |> Enum.at(Enum.random(0..top_elite))
    idx = random_index(vector1)

    vector1
    |> Enum.take(idx)
    |> Enum.concat(vector2 |> Enum.drop(idx))
  end

  defp mutate(population, domain, top_elite) do
    elite_idx = Enum.random(0..top_elite)
    target_vector = population |> Enum.at(elite_idx)
    target_idx = random_index(target_vector)

    target_vector
    |> Stream.with_index()
    |> Stream.map(&do_mutate(&1, target_idx, domain))
    |> Enum.to_list()
  end

  defp do_mutate({x, i}, target_idx, domain) when i == target_idx do
    {min, max} = domain |> Enum.at(i) |> Enum.min_max()

    cond do
      :rand.uniform() < 0.5 and x > min -> x - 1
      x < max -> x + 1
      true -> x
    end
  end

  defp do_mutate({x, _i}, _target_idx, _domain), do: x

  defp random_index(vector) do
    max = Enum.count(vector) - 1
    Enum.random(0..max)
  end
end
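# A usage sketch passing custom options (values are illustrative): a larger
# population and more iterations trade runtime for solution quality.
#
#     domain = List.duplicate(0..9, 10)
#
#     Numerix.Optimization.genetic(domain, &Enum.sum/1,
#       population_size: 100,
#       iterations: 200,
#       mutation_prob: 0.3
#     )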
lib/optimization.ex
0.927536
0.746278
optimization.ex
starcoder
defmodule Storage.Wallet.Eth do
  @moduledoc ~S"""
  NOTE:

      defmodule Storage.Repo.Migrations.WalletEth do
        use Ecto.Migration

        def change do
          create table(:eth) do
            add(:address, :eth)
            add(:privatekey, :eth)
            add(:publickey, :eth)
          end

          create(unique_index(:eth, [:address]))
        end
      end
  """
  require Logger
  use Ecto.Schema

  # ----------------------------------------------------------------------------
  # Public API
  # ----------------------------------------------------------------------------

  @doc """
  Ethereum wallet definition. This is the record type that will be
  written/read from the database.
  """
  @primary_key false
  schema "eth" do
    field(:address, :string, primary_key: true)
    field(:privatekey, :string)
    field(:publickey, :string)
    field(:meta, :map)
    timestamps(type: :naive_datetime, autogenerate: {Storage.Repo, :timestamps, []})
  end

  # ----------------------------------------------------------------------------
  # Storage.Wallet.Eth.t struct definition, accessors and setters
  # ----------------------------------------------------------------------------

  # The basic struct returned from the table. For now I am just going
  # to use this struct directly, however I did add accessors so that
  # it can be used without having to know the actual key names
  # in case I change them in the future.
  @type t :: %Storage.Wallet.Eth{
          address: String.t(),
          privatekey: String.t(),
          publickey: String.t(),
          meta: map,
          inserted_at: NaiveDateTime.t(),
          updated_at: NaiveDateTime.t()
        }

  @doc """
  Storage.Wallet.Eth.t accessor to address
  """
  @spec address(Storage.Wallet.Eth.t()) :: String.t()
  def address(ethT), do: ethT.address

  @doc """
  Storage.Wallet.Eth.t accessor to privateKey
  """
  @spec privateKey(Storage.Wallet.Eth.t()) :: String.t()
  def privateKey(ethT), do: ethT.privatekey

  @doc """
  Storage.Wallet.Eth.t accessor to publicKey
  """
  @spec publicKey(Storage.Wallet.Eth.t()) :: String.t()
  def publicKey(ethT), do: ethT.publickey

  @doc """
  Storage.Wallet.Eth.t accessor to meta data
  """
  @spec meta(Storage.Wallet.Eth.t()) :: map
  def meta(ethT), do: ethT.meta

  @doc """
  Create a new wallet structure with the data passed in via the map.
  It is assumed that the map structure looks like so:

  ```
  %{
    "address" => "String value",
    "privateKey" => "String value",
    "publicKey" => "String value",
    "meta" => %{}
  }
  ```
  """
  @spec new(map) :: Storage.Wallet.Eth.t()
  def new(map) do
    %Storage.Wallet.Eth{
      address: map["address"],
      privatekey: map["privateKey"],
      publickey: map["publicKey"],
      meta: %{}
    }
  end

  # ----------------------------------------------------------------------------
  # Insertion Commands
  # ----------------------------------------------------------------------------

  @doc """
  Write the record to the database
  """
  @spec write(Storage.Wallet.Eth.t()) :: {:ok, Storage.Wallet.Eth.t()} | {:error, any()}
  def write(eth) do
    encrypt(eth)
    |> Storage.Repo.insert()
  end

  @doc """
  Update the meta value assigned to this address
  """
  @spec updateMeta(String.t(), map) :: {:ok, Storage.Wallet.Eth.t()} | {:error, any()}
  def updateMeta(address, meta) do
    changes = Storage.Repo.changeSetField(%{}, :meta, meta)
    post = Ecto.Changeset.change(%Storage.Wallet.Eth{address: address}, changes)

    case Storage.Repo.update(post) do
      {:error, changeset} = err ->
        Logger.error("[Storage.Wallet.Eth.updateMeta] Failed #{inspect(changeset)}")
        err

      results ->
        results
    end
  end

  ## ----------------------------------------------------------------------------
  ## Query Operations
  ## ----------------------------------------------------------------------------

  @doc """
  Pulls all the wallets from the system. The cost of this call will grow
  with the total number of wallets in the system. It will require a DB read.
  """
  @spec query :: [Storage.Wallet.Eth.t()]
  def query() do
    Storage.Repo.all(Storage.Wallet.Eth)
    |> decrypt()
  end

  @doc """
  Pulls an address's info out of the DB.
  """
  @spec query(String.t()) :: nil | Storage.Wallet.Eth.t()
  def query(address) do
    Storage.Repo.get_by(Storage.Wallet.Eth, address: address)
    |> decrypt()
  end

  @doc """
  Pulls all the wallets from the system. The cost of this call will grow
  with the total number of wallets in the system. It will require a DB read.

  NOTE: This will not decrypt the info from the DB, thus making it a bit
  faster call.
  """
  @spec queryRaw :: [Storage.Wallet.Eth.t()]
  def queryRaw() do
    Storage.Repo.all(Storage.Wallet.Eth)
  end

  @doc """
  Pulls an address's info out of the DB.

  NOTE: This will not decrypt the info from the DB, thus making it a bit
  faster call.
  """
  @spec queryRaw(String.t()) :: nil | Storage.Wallet.Eth.t()
  def queryRaw(address) do
    Storage.Repo.get_by(Storage.Wallet.Eth, address: address)
  end

  # ----------------------------------------------------------------------------
  # Private API
  # ----------------------------------------------------------------------------

  # Encrypt some data.
  defp encrypt(nil), do: nil

  defp encrypt(entries) when is_list(entries) do
    Enum.map(entries, fn eth -> encrypt(eth) end)
  end

  defp encrypt(eth) do
    %{
      eth
      | privatekey: Utils.Crypto.encrypt(eth.privatekey) |> Base.encode64(),
        publickey: Utils.Crypto.encrypt(eth.publickey) |> Base.encode64()
    }
  end

  # Decrypt some data
  defp decrypt(nil), do: nil

  defp decrypt(entries) when is_list(entries) do
    Enum.map(entries, fn eth -> decrypt(eth) end)
  end

  defp decrypt(eth) do
    %{
      eth
      | privatekey: Base.decode64!(eth.privatekey) |> Utils.Crypto.decrypt(),
        publickey: Base.decode64!(eth.publickey) |> Utils.Crypto.decrypt()
    }
  end
end
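# A usage sketch of the write/query round trip (the key material is
# illustrative; `write/1` stores the keys encrypted and `query/1` decrypts
# them again):
#
#     {:ok, _row} =
#       %{
#         "address" => "0xabc123...",
#         "privateKey" => "...",
#         "publicKey" => "..."
#       }
#       |> Storage.Wallet.Eth.new()
#       |> Storage.Wallet.Eth.write()
#
#     wallet = Storage.Wallet.Eth.query("0xabc123...")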
src/apps/storage/lib/storage/wallet/eth.ex
0.771628
0.692746
eth.ex
starcoder
defmodule Pavlov.Matchers do
  @moduledoc """
  Provides several matcher functions.

  Matchers accept up to two values, `actual` and `expected`, and return a
  Boolean.

  Using "Expects" syntax, all matchers have positive and negative forms. For
  a matcher `eq`, there is a positive `to_eq` and a negative `not_to_eq`
  method.
  """
  import ExUnit.Assertions, only: [flunk: 1]

  @type t :: list | map

  @doc """
  Performs an equality test between two values using ==.

  Example:
      eq(1, 2)      # => false
      eq("a", "a")  # => true
  """
  @spec eq(any, any) :: boolean
  def eq(actual, expected) do
    actual == expected
  end

  @doc """
  Performs an equality test between a given expression and 'true'.

  Example:
      be_true(1 == 1)    # => true
      be_true("a" == "b") # => false
  """
  @spec be_true(any) :: boolean
  def be_true(exp) do
    exp == true
  end

  @doc """
  Performs a truthy check with a given expression.

  Example:
      be_truthy(1)     # => true
      be_truthy("a")   # => true
      be_truthy(nil)   # => false
      be_truthy(false) # => false
  """
  @spec be_truthy(any) :: boolean
  def be_truthy(exp) do
    exp
  end

  @doc """
  Performs a falsey check with a given expression.

  Example:
      be_falsey(1)     # => false
      be_falsey("a")   # => false
      be_falsey(nil)   # => true
      be_falsey(false) # => true
  """
  @spec be_falsey(any) :: boolean
  def be_falsey(exp) do
    !exp
  end

  @doc """
  Performs a nil check with a given expression.

  Example:
      be_nil(nil) # => true
      be_nil("a") # => false
  """
  @spec be_nil(any) :: boolean
  def be_nil(exp) do
    is_nil exp
  end

  @doc """
  Performs a has_key? operation on a Dict. Note that the key comes first,
  matching the function's signature.

  Example:
      have_key(:a, %{:a => 1}) # => true
      have_key(:b, %{:a => 1}) # => false
  """
  @spec have_key(any, t) :: boolean
  def have_key(key, dict) do
    Dict.has_key? dict, key
  end

  @doc """
  Checks if a Dict is empty.

  Example:
      be_empty(%{})        # => true
      be_empty(%{:a => 1}) # => false
  """
  @spec be_empty(t | char_list) :: boolean
  def be_empty(list) do
    cond do
      is_bitstring(list) -> String.length(list) == 0
      is_list(list) || is_map(list) -> Enum.empty? list
      true -> false
    end
  end

  @doc """
  Tests whether a given value is part of a list or string. Note that the
  member comes first, matching the function's signature.

  Example:
      include(1, [1, 2, 3]) # => true
      include(2, [1])       # => false
  """
  @spec include(any, list | char_list) :: boolean
  def include(member, list) do
    cond do
      is_bitstring(list) -> String.contains? list, member
      is_list(list) || is_map(list) -> Enum.member? list, member
      true -> false
    end
  end

  @doc """
  Tests whether a given exception was raised.

  Example:
      have_raised(ArithmeticError, fn -> 1 + "test" end) # => true
      have_raised(ArithmeticError, fn -> 1 + 2 end)      # => false
  """
  @spec have_raised(any, function) :: boolean
  def have_raised(exception, fun) do
    try do
      fun.()
    rescue
      error ->
        stacktrace = System.stacktrace
        name = error.__struct__

        cond do
          name == exception ->
            error

          name == ExUnit.AssertionError ->
            reraise(error, stacktrace)

          true ->
            flunk "Expected exception #{inspect exception} but got #{inspect name} (#{Exception.message(error)})"
        end
    else
      _ -> false
    end
  end

  @doc """
  Tests whether a given value was thrown.

  Example:
      have_thrown("x", fn -> throw "x" end) # => true
      have_thrown("y", fn -> throw "x" end) # => false
  """
  @spec have_thrown(any, function) :: boolean
  def have_thrown(expected, fun) do
    value =
      try do
        fun.()
      catch
        x -> x
      end

    value == expected
  end

  @doc """
  Tests whether the process has exited.

  Example:
      have_exited(fn -> exit "x" end) # => true
      have_exited(fn -> :ok end)      # => false
  """
  @spec have_exited(function) :: boolean
  def have_exited(fun) do
    exited =
      try do
        fun.()
      catch
        :exit, _ -> true
      end

    case exited do
      true -> true
      _ -> false
    end
  end
end
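# A few direct calls to the matchers above (the "expects" syntax mentioned
# in the moduledoc wraps these same functions):
#
#     Pavlov.Matchers.eq(1, 1)                               #=> true
#     Pavlov.Matchers.include(2, [1, 2, 3])                  #=> true
#     Pavlov.Matchers.have_key(:a, %{a: 1})                  #=> true
#     Pavlov.Matchers.have_thrown("x", fn -> throw "x" end)  #=> true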
lib/matchers.ex
0.825976
0.706532
matchers.ex
starcoder
defmodule Dicon.SecureShell do
  @moduledoc """
  A `Dicon.Executor` based on SSH.

  ## Configuration

  The configuration for this executor must be specified under the configuration
  for the `:dicon` application:

      config :dicon, Dicon.SecureShell,
        dir: "..."

  The available configuration options for this executor are:

    * `:dir` - a binary that specifies the directory where the SSH keys are (in
      the local machine). Defaults to `"~/.ssh"`.
    * `:connect_timeout` - an integer that specifies the timeout (in milliseconds)
      when connecting to the host.
    * `:write_timeout` - an integer that specifies the timeout (in milliseconds)
      when writing data to the host.
    * `:exec_timeout` - an integer that specifies the timeout (in milliseconds)
      when executing commands on the host.

  The username and password used to connect to the server will be picked up from
  the URL that identifies that server (in `:dicon`'s configuration); read more
  about this in the documentation for the `Dicon` module.
  """

  @behaviour Dicon.Executor

  # Size in bytes.
  @file_chunk_size 100_000

  defstruct [
    :conn,
    :connect_timeout,
    :write_timeout,
    :exec_timeout
  ]

  def connect(authority) do
    config = Application.get_env(:dicon, __MODULE__, [])
    connect_timeout = Keyword.get(config, :connect_timeout, 5_000)
    write_timeout = Keyword.get(config, :write_timeout, 5_000)
    exec_timeout = Keyword.get(config, :exec_timeout, 5_000)
    user_dir = Keyword.get(config, :dir, "~/.ssh") |> Path.expand()
    {user, passwd, host, port} = parse_elements(authority)

    opts =
      put_option([], :user, user)
      |> put_option(:password, passwd)
      |> put_option(:user_dir, user_dir)

    host = String.to_charlist(host)

    result =
      with :ok <- ensure_started(),
           {:ok, conn} <- :ssh.connect(host, port, opts, connect_timeout) do
        state = %__MODULE__{
          conn: conn,
          connect_timeout: connect_timeout,
          write_timeout: write_timeout,
          exec_timeout: exec_timeout
        }

        {:ok, state}
      end

    format_if_error(result)
  end

  defp put_option(opts, _key, nil), do: opts

  defp put_option(opts, key, value) do
    [{key, String.to_charlist(value)} | opts]
  end

  defp ensure_started() do
    case :ssh.start() do
      :ok ->
        :ok

      {:error, {:already_started, :ssh}} ->
        :ok

      {:error, reason} ->
        {:error, "could not start ssh application: " <> Application.format_error(reason)}
    end
  end

  defp parse_elements(authority) do
    parts = String.split(authority, "@", parts: 2)

    [user_info, host_info] =
      case parts do
        [host_info] -> ["", host_info]
        result -> result
      end

    parts = String.split(user_info, ":", parts: 2, trim: true)
    destructure([user, passwd], parts)

    parts = String.split(host_info, ":", parts: 2, trim: true)

    {host, port} =
      case parts do
        [host, port] -> {host, String.to_integer(port)}
        [host] -> {host, 22}
      end

    {user, passwd, host, port}
  end

  def exec(%__MODULE__{} = state, command, device) do
    %{conn: conn, connect_timeout: connect_timeout, exec_timeout: exec_timeout} = state

    result =
      with {:ok, channel} <- :ssh_connection.session_channel(conn, connect_timeout),
           :success <- :ssh_connection.exec(conn, channel, command, exec_timeout) do
        handle_reply(conn, channel, device, exec_timeout, _acc = [])
      end

    format_if_error(result)
  end

  defp handle_reply(conn, channel, device, exec_timeout, acc) do
    receive do
      {:ssh_cm, ^conn, {:data, ^channel, _code, data}} ->
        handle_reply(conn, channel, device, exec_timeout, [acc | data])

      {:ssh_cm, ^conn, {:eof, ^channel}} ->
        handle_reply(conn, channel, device, exec_timeout, acc)

      {:ssh_cm, ^conn, {:exit_status, ^channel, _status}} ->
        handle_reply(conn, channel, device, exec_timeout, acc)

      {:ssh_cm, ^conn, {:closed, ^channel}} ->
        IO.write(device, acc)
    after
      exec_timeout -> {:error, :timeout}
    end
  end

  def write_file(%__MODULE__{} = state, target, content, :append) do
    write_file(state, ["cat >> ", target], content)
  end

  def write_file(%__MODULE__{} = state, target, content, :write) do
    write_file(state, ["cat > ", target], content)
  end

  defp write_file(state, command, content) do
    %{conn: conn, connect_timeout: connect_timeout, exec_timeout: exec_timeout} = state

    result =
      with {:ok, channel} <- :ssh_connection.session_channel(conn, connect_timeout),
           :success <- :ssh_connection.exec(conn, channel, command, exec_timeout),
           :ok <- :ssh_connection.send(conn, channel, content, exec_timeout),
           :ok <- :ssh_connection.send_eof(conn, channel) do
        handle_reply(conn, channel, Process.group_leader(), exec_timeout, _acc = [])
      end

    format_if_error(result)
  end

  def copy(%__MODULE__{} = state, source, target) do
    %{conn: conn, connect_timeout: connect_timeout, exec_timeout: exec_timeout} = state

    result =
      with {:ok, %File.Stat{size: size}} <- File.stat(source),
           chunk_count = round(Float.ceil(size / @file_chunk_size)),
           stream = File.stream!(source, [], @file_chunk_size) |> Stream.with_index(1),
           {:ok, channel} <- :ssh_connection.session_channel(conn, connect_timeout),
           :success <- :ssh_connection.exec(conn, channel, ["cat > ", target], exec_timeout),
           Enum.each(stream, fn {chunk, chunk_index} ->
             # TODO: we need to remove this assertion here as well, once we have a
             # better "streaming" API.
             :ok = :ssh_connection.send(conn, channel, chunk, exec_timeout)
             write_spinner(chunk_index, chunk_count)
           end),
           IO.write(IO.ANSI.format([:clear_line, ?\r])),
           :ok <- :ssh_connection.send_eof(conn, channel) do
        handle_reply(conn, channel, Process.group_leader(), exec_timeout, _acc = [])
      end

    format_if_error(result)
  end

  @spinner_chars {?|, ?/, ?-, ?\\}

  defp write_spinner(index, count) do
    percent = round(100 * index / count)
    spinner = elem(@spinner_chars, rem(index, tuple_size(@spinner_chars)))

    [:clear_line, ?\r, spinner, ?\s, Integer.to_string(percent), ?%]
    |> IO.ANSI.format()
    |> IO.write()
  end

  defp format_if_error(:failure) do
    {:error, "failure on the SSH connection"}
  end

  defp format_if_error({:error, reason} = error) when is_binary(reason) do
    error
  end

  defp format_if_error({:error, reason}) do
    case :inet.format_error(reason) do
      'unknown POSIX error' -> {:error, inspect(reason)}
      message -> {:error, List.to_string(message)}
    end
  end

  defp format_if_error(other), do: other
end
lib/dicon/secure_shell.ex
0.764232
0.405037
secure_shell.ex
starcoder
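A minimal configuration sketch for the `Dicon.SecureShell` executor above. The timeout values here are illustrative assumptions, not library defaults beyond the `5_000` fallbacks in `connect/1`; the four keys shown are the only ones this executor reads.

```elixir
# config/config.exs -- hypothetical values
config :dicon, Dicon.SecureShell,
  dir: "~/.ssh",
  connect_timeout: 10_000,
  write_timeout: 5_000,
  exec_timeout: 30_000
```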
defmodule Zxcvbn.TimeEstimates do
  @moduledoc false

  @delta 5
  @second 1
  @minute 60
  @hour 3600
  @day 86_400
  @month 2_678_400
  @year 31_536_000
  @century 3_153_600_000

  def estimate_attack_times(guesses) do
    crack_times_seconds = %{
      online_throttling_100_per_hour: guesses / 100 / 3600,
      online_no_throttling_10_per_second: guesses / 10,
      offline_slow_hashing_1e4_per_second: guesses / 1.0e4,
      offline_fast_hashing_1e10_per_second: guesses / 1.0e10
    }

    crack_times_display =
      crack_times_seconds
      |> Map.keys()
      |> Enum.reduce(%{}, fn key, acc ->
        Map.update(acc, key, display_time(crack_times_seconds[key]), & &1)
      end)

    %{
      crack_times_seconds: crack_times_seconds,
      crack_times_display: crack_times_display,
      score: guesses_to_score(guesses)
    }
  end

  def guesses_to_score(guesses) do
    cond do
      # risky password: "too guessable"
      guesses < 1.0e3 + @delta -> 0
      # modest protection from throttled online attacks: "very guessable"
      guesses < 1.0e6 + @delta -> 1
      # modest protection from unthrottled online attacks: "somewhat guessable"
      guesses < 1.0e8 + @delta -> 2
      # modest protection from offline attacks: "safely unguessable"
      # assuming a salted, slow hash function like bcrypt, scrypt, PBKDF2, argon, etc
      guesses < 1.0e10 + @delta -> 3
      # strong protection from offline attacks under same scenario: "very unguessable"
      true -> 4
    end
  end

  def display_time(seconds) when is_number(seconds) and seconds < @second,
    do: "less than a second"

  def display_time(seconds) when is_number(seconds) and seconds > @century,
    do: "centuries"

  def display_time(@century), do: "1 century"
  def display_time(3_153_600_000.0), do: "1 century"

  def display_time(seconds) when seconds < @minute,
    do: {trunc(seconds), "second"} |> tuple_to_desc

  def display_time(seconds) when seconds < @hour, do: seconds |> to_desc(@minute, "minute")
  def display_time(seconds) when seconds < @day, do: seconds |> to_desc(@hour, "hour")
  def display_time(seconds) when seconds < @month, do: seconds |> to_desc(@day, "day")
  def display_time(seconds) when seconds < @year, do: seconds |> to_desc(@month, "month")
  def display_time(seconds) when seconds < @century, do: seconds |> to_desc(@year, "year")

  defp to_desc(seconds, divider, desc) do
    base = seconds / divider
    {trunc(base), desc} |> tuple_to_desc
  end

  defp tuple_to_desc({1, desc}), do: "1 #{desc}"
  defp tuple_to_desc({base, desc}), do: "#{base} #{desc}s"
end
lib/zxcvbn/time_estimates.ex
0.655997
0.402245
time_estimates.ex
starcoder
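Two quick sanity checks against the thresholds in `Zxcvbn.TimeEstimates` above; the expected values follow directly from the guard clauses (`1.0e5` guesses falls below the `1.0e6 + @delta` bound, and `10_000` seconds lies between one hour and one day).

```elixir
iex> Zxcvbn.TimeEstimates.guesses_to_score(1.0e5)
1
iex> Zxcvbn.TimeEstimates.display_time(10_000)
"2 hours"
```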
defmodule Hangman.Dictionary.Cache do
  @moduledoc """
  Module implements a GenServer process providing access to a
  dictionary word cache. Handles lookup routines to access
  `words`, `tallies`, and `random` words.

  Serves as a wrapper around dictionary specific implementation
  """

  use GenServer

  alias Hangman.Dictionary
  require Logger

  # External API

  @doc "Check whether ets is setup"
  @spec setup? :: atom | no_return
  def setup? do
    Dictionary.ETS.setup?()
  end

  @doc """
  GenServer start link wrapper function
  """
  @spec start_link(Keyword.t()) :: {:ok, pid}
  def start_link(args) do
    options = [name: :hangman_dictionary_cache_server]
    GenServer.start_link(__MODULE__, args, options)
  end

  @doc """
  Cache lookup routines

  The allowed modes:
    * `:random` - extracts count number of random hangman words.
    * `:tally` - retrieve letter tally associated with word length key
    * `:words` - retrieve the word data lists associated with the word length key
  """
  @spec lookup(pid, atom, pos_integer) :: [String.t()] | Counter.t() | Words.t() | no_return
  def lookup(pid, :random, count) do
    GenServer.call(pid, {:lookup_random, count})
  end

  def lookup(pid, :tally, length_key) when is_number(length_key) and length_key > 0 do
    GenServer.call(pid, {:lookup_tally, length_key})
  end

  def lookup(pid, :words, length_key) when is_number(length_key) and length_key > 0 do
    GenServer.call(pid, {:lookup_words, length_key})
  end

  @doc """
  Routine to stop server normally
  """
  @spec stop(none | pid) :: {}
  def stop(pid) when is_pid(pid) do
    GenServer.call(pid, :stop)
  end

  @doc """
  GenServer callback to initialize server process

  Kicks off ingestion process to load dictionary words
  """
  @callback init(Keyword.t()) :: tuple
  def init(args) do
    _ = Logger.debug("Starting Hangman Dictionary Cache Server, args #{inspect(args)}")

    # Run ingestion workflow, store the results into ETS
    Dictionary.Ingestion.run(args)

    {:ok, {}}
  end

  # GenServer callback to retrieve random hangman word
  # @callback handle_call({:atom, pos_integer}, {}, {}) :: {}
  def handle_call({:lookup_random, count}, _from, {}) do
    data = Dictionary.ETS.get(:random, count)
    {:reply, data, {}}
  end

  # GenServer callback to retrieve tally given word length key
  # @callback handle_call({:atom, pos_integer}, {}, {}) :: {}
  def handle_call({:lookup_tally, length_key}, _from, {}) when is_integer(length_key) do
    data = Dictionary.ETS.get(:counter, length_key)
    {:reply, data, {}}
  end

  # GenServer callback to retrieve word lists given word length key
  # @callback handle_call({:atom, pos_integer}, {}, {}) :: {}
  def handle_call({:lookup_words, length_key}, _from, {}) do
    data = Dictionary.ETS.get(:words, length_key)
    {:reply, data, {}}
  end

  # GenServer callback to stop server normally
  # @callback handle_call(:atom, pid, {}) :: {}
  def handle_call(:stop, _from, {}) do
    {:stop, :normal, :ok, {}}
  end

  # GenServer callback to cleanup server state
  # @callback terminate(reason :: term, {}) :: term | no_return
  def terminate(reason, _state) do
    _ = Logger.debug("Dictionary Cache Server terminating, reason #{reason}")
    :ok
  end
end
lib/hangman/dictionary_cache.ex
0.861727
0.482063
dictionary_cache.ex
starcoder
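A hypothetical client session for the dictionary cache above. The ingestion arguments depend on `Dictionary.Ingestion.run/1`, which is not shown here, so `args` is a placeholder.

```elixir
# `args` is whatever Dictionary.Ingestion.run/1 expects (defined elsewhere).
{:ok, pid} = Hangman.Dictionary.Cache.start_link(args)

random_words = Hangman.Dictionary.Cache.lookup(pid, :random, 5)
tally = Hangman.Dictionary.Cache.lookup(pid, :tally, 8)

:ok = Hangman.Dictionary.Cache.stop(pid)
```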
defmodule Geo.JSON.Encoder do
  @moduledoc false

  alias Geo.{
    Point,
    PointZ,
    LineString,
    LineStringZ,
    Polygon,
    PolygonZ,
    MultiPoint,
    MultiPointZ,
    MultiLineString,
    MultiLineStringZ,
    MultiPolygon,
    MultiPolygonZ,
    GeometryCollection
  }

  defmodule EncodeError do
    @type t :: %__MODULE__{message: String.t(), value: any}

    defexception [:message, :value]

    def message(%{message: nil, value: value}) do
      "unable to encode value: #{inspect(value)}"
    end

    def message(%{message: message}) do
      message
    end
  end

  @doc """
  Takes a Geometry and returns a map representing the GeoJSON
  """
  @spec encode!(Geo.geometry()) :: map()
  def encode!(geom) do
    case geom do
      %GeometryCollection{geometries: geometries, srid: srid, properties: properties} ->
        %{"type" => "GeometryCollection", "geometries" => Enum.map(geometries, &encode!(&1))}
        |> add_crs(srid)
        |> add_properties(properties)

      _ ->
        geom
        |> do_encode()
        |> add_crs(geom.srid)
        |> add_properties(geom.properties)
    end
  end

  @doc """
  Takes a Geometry and returns a map representing the GeoJSON
  """
  @spec encode(Geo.geometry()) :: {:ok, map()} | {:error, EncodeError.t()}
  def encode(geom) do
    {:ok, encode!(geom)}
  rescue
    exception in [EncodeError] ->
      {:error, exception}
  end

  defp do_encode(%Point{coordinates: {x, y}}) do
    %{"type" => "Point", "coordinates" => [x, y]}
  end

  defp do_encode(%PointZ{coordinates: {x, y, z}}) do
    %{"type" => "Point", "coordinates" => [x, y, z]}
  end

  defp do_encode(%LineString{coordinates: coordinates}) do
    coordinates = Enum.map(coordinates, &Tuple.to_list(&1))
    %{"type" => "LineString", "coordinates" => coordinates}
  end

  defp do_encode(%LineStringZ{coordinates: coordinates}) do
    coordinates = Enum.map(coordinates, &Tuple.to_list(&1))
    %{"type" => "LineStringZ", "coordinates" => coordinates}
  end

  defp do_encode(%Polygon{coordinates: coordinates}) do
    coordinates =
      Enum.map(coordinates, fn sub_coordinates ->
        Enum.map(sub_coordinates, &Tuple.to_list(&1))
      end)

    %{"type" => "Polygon", "coordinates" => coordinates}
  end

  defp do_encode(%PolygonZ{coordinates: coordinates}) do
    coordinates =
      Enum.map(coordinates, fn sub_coordinates ->
        Enum.map(sub_coordinates, &Tuple.to_list(&1))
      end)

    %{"type" => "PolygonZ", "coordinates" => coordinates}
  end

  defp do_encode(%MultiPoint{coordinates: coordinates}) do
    coordinates = Enum.map(coordinates, &Tuple.to_list(&1))
    %{"type" => "MultiPoint", "coordinates" => coordinates}
  end

  defp do_encode(%MultiPointZ{coordinates: coordinates}) do
    coordinates = Enum.map(coordinates, &Tuple.to_list(&1))
    %{"type" => "MultiPointZ", "coordinates" => coordinates}
  end

  defp do_encode(%MultiLineString{coordinates: coordinates}) do
    coordinates =
      Enum.map(coordinates, fn sub_coordinates ->
        Enum.map(sub_coordinates, &Tuple.to_list(&1))
      end)

    %{"type" => "MultiLineString", "coordinates" => coordinates}
  end

  defp do_encode(%MultiLineStringZ{coordinates: coordinates}) do
    coordinates =
      Enum.map(coordinates, fn sub_coordinates ->
        Enum.map(sub_coordinates, &Tuple.to_list(&1))
      end)

    %{"type" => "MultiLineStringZ", "coordinates" => coordinates}
  end

  defp do_encode(%MultiPolygon{coordinates: coordinates}) do
    coordinates =
      Enum.map(coordinates, fn sub_coordinates ->
        Enum.map(sub_coordinates, fn third_sub_coordinates ->
          Enum.map(third_sub_coordinates, &Tuple.to_list(&1))
        end)
      end)

    %{"type" => "MultiPolygon", "coordinates" => coordinates}
  end

  defp do_encode(%MultiPolygonZ{coordinates: coordinates}) do
    coordinates =
      Enum.map(coordinates, fn sub_coordinates ->
        Enum.map(sub_coordinates, fn third_sub_coordinates ->
          Enum.map(third_sub_coordinates, &Tuple.to_list(&1))
        end)
      end)

    %{"type" => "MultiPolygon", "coordinates" => coordinates}
  end

  defp do_encode(data) do
    raise EncodeError, message: "Unable to encode given value: #{inspect(data)}"
  end

  defp add_crs(map, nil) do
    map
  end

  defp add_crs(map, srid) do
    Map.put(map, "crs", %{"type" => "name", "properties" => %{"name" => "EPSG:#{srid}"}})
  end

  def add_properties(map, props) do
    if Enum.empty?(props) do
      map
    else
      Map.put(map, "properties", props)
    end
  end
end
lib/geo/json/encoder.ex
0.828523
0.69272
encoder.ex
starcoder
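A small sketch of the encoder above, assuming `%Geo.Point{}` carries the default empty `properties` map so `add_properties/2` leaves the result untouched:

```elixir
iex> Geo.JSON.Encoder.encode!(%Geo.Point{coordinates: {30.0, -90.0}, srid: 4326})
%{
  "type" => "Point",
  "coordinates" => [30.0, -90.0],
  "crs" => %{"type" => "name", "properties" => %{"name" => "EPSG:4326"}}
}
```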
defmodule TrainLoc.Vehicles.Vehicle do
  @moduledoc """
  Functions for working with individual vehicles.
  """

  alias TrainLoc.Utilities.Time, as: TrainLocTime
  alias TrainLoc.Vehicles.Vehicle

  require Logger

  @enforce_keys [:vehicle_id]

  defstruct [
    :vehicle_id,
    timestamp: DateTime.from_naive!(~N[1970-01-01T00:00:00], "Etc/UTC"),
    block: "",
    trip: "",
    latitude: 0.0,
    longitude: 0.0,
    heading: 0,
    speed: 0
  ]

  @typedoc """
  Vehicle data throughout the app is represented by vehicle structs. A vehicle struct includes:

  * `vehicle_id`: unique vehicle identifier
  * `timestamp`: datetime when data was received
  * `block`: represents a series of trips made by a single vehicle in a day
  * `trip`: represents a scheduled commuter rail trip
  * `latitude`: geographic coordinate that specifies the north–south position of the vehicle
  * `longitude`: geographic coordinate that specifies the east–west position of the vehicle
  * `heading`: compass direction to which the "nose" of the vehicle is pointing, its orientation
  * `speed`: the vehicle's speed (miles per hour)
  """
  @type t :: %__MODULE__{
          vehicle_id: non_neg_integer,
          timestamp: DateTime.t(),
          block: String.t(),
          trip: String.t(),
          latitude: float | nil,
          longitude: float | nil,
          heading: 0..359,
          speed: non_neg_integer
        }

  def from_json_object(obj) do
    from_json_elem({nil, obj})
  end

  @spec from_json_map(map) :: [t]
  def from_json_map(map) do
    Enum.flat_map(map, &from_json_elem/1)
  end

  @spec from_json_elem({any, map}) :: [%Vehicle{}]
  defp from_json_elem({_, veh_data = %{"VehicleID" => _vehicle_id}}) do
    [from_json(veh_data)]
  end

  defp from_json_elem({_, _}), do: []

  def from_json(veh_data) when is_map(veh_data) do
    %__MODULE__{
      vehicle_id: veh_data["VehicleID"],
      timestamp: TrainLocTime.parse_improper_iso(veh_data["Update Time"]),
      block: process_trip_block(veh_data["WorkID"]),
      trip: process_trip_block(veh_data["TripID"]),
      latitude: process_lat_long(veh_data["Latitude"]),
      longitude: process_lat_long(veh_data["Longitude"]),
      heading: veh_data["Heading"],
      speed: veh_data["Speed"]
    }
  end

  defp process_lat_long(0), do: nil
  defp process_lat_long(lat_long), do: lat_long

  defp process_trip_block(trip_or_block) when is_integer(trip_or_block) do
    trip_or_block
    |> Integer.to_string()
    |> String.pad_leading(3, ["0"])
  end

  defp process_trip_block(_), do: nil

  def active_vehicle?(%__MODULE__{block: "000"}), do: false
  def active_vehicle?(%__MODULE__{trip: "000"}), do: false
  def active_vehicle?(%__MODULE__{}), do: true

  @doc """
  Logs all available vehicle data for a single vehicle and returns it
  without modifying it.
  """
  @spec log_vehicle(Vehicle.t()) :: Vehicle.t()
  def log_vehicle(vehicle) do
    _ =
      Logger.debug(fn ->
        Enum.reduce(Map.from_struct(vehicle), "Vehicle - ", fn {key, value}, acc ->
          acc <> format_key_value_pair(key, value)
        end)
      end)

    vehicle
  end

  defp format_key_value_pair(key, %DateTime{} = value) do
    format_key_value_pair(key, DateTime.to_iso8601(value))
  end

  defp format_key_value_pair(key, value) do
    "#{key}=#{value} "
  end
end
apps/train_loc/lib/train_loc/vehicles/vehicle.ex
0.878868
0.765111
vehicle.ex
starcoder
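A quick check of `active_vehicle?/1` above; the vehicle id, block, and trip values are made up:

```elixir
iex> vehicle = %TrainLoc.Vehicles.Vehicle{vehicle_id: 1712, block: "802", trip: "317"}
iex> TrainLoc.Vehicles.Vehicle.active_vehicle?(vehicle)
true
iex> TrainLoc.Vehicles.Vehicle.active_vehicle?(%{vehicle | trip: "000"})
false
```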
defmodule Ptolemy.Engines.PKI do
  @moduledoc """
  `Ptolemy.Engines.PKI` provides a public facing API for CRUD operations for the Vault PKI engine.

  Some functions in this module have additional options that can be provided to vault; you can
  get the option values from: https://www.vaultproject.io/api/secret/pki/index.html
  """

  alias Ptolemy.Engines.PKI.Engine
  alias Ptolemy.Server

  @doc """
  Create a role from the specification provided.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#create-update-role for options.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.create(:production, :pki_engine1, :test_role1, %{allow_any_name: true})
  {:ok, "PKI role created"}
  ```
  """
  @spec create(atom(), atom(), atom(), map()) :: {:ok, String.t()} | {:error, String.t()}
  def create(server_name, engine_name, role, params \\ %{}) do
    path = get_pki_path!(server_name, engine_name, role, "roles")
    path_create(server_name, path, params)
  end

  @doc """
  Create a role from the specification provided; raises if an error occurs.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#create-update-role for options.
  """
  @spec create!(atom(), atom(), atom(), map()) :: :ok | no_return()
  def create!(server_name, engine_name, role, params \\ %{}) do
    case create(server_name, engine_name, role, params) do
      {:error, msg} -> raise RuntimeError, message: msg
      _resp -> :ok
    end
  end

  @doc """
  Create a role from the specification provided via a specific path.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.path_create(:production, "/pki/data/", %{allow_any_name: true})
  {:ok, "PKI role created"}
  ```
  """
  @spec path_create(atom(), String.t(), map()) :: {:ok, String.t()} | {:error, String.t()}
  def path_create(server_name, path, params \\ %{}) do
    client = create_client(server_name)
    Engine.create_role(client, path, params)
  end

  @doc """
  Reads a brand new generated certificate from a role.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#generate-certificate for options.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.read(:production, :pki_engine1, :test_role1, "www.example.com")
  {:ok,
    %{
      "auth" => nil,
      "data" => %{
        "certificate" => "-----BEGIN CERTIFICATE-----generated-cert-----END CERTIFICATE-----",
        "expiration" => 1555610944,
        "issuing_ca" => "-----BEGIN CERTIFICATE-----ca-cert-goes-here-----END CERTIFICATE-----",
        "private_key" => "-----BEGIN RSA PRIVATE KEY-----some-rsa-key-here-----END RSA PRIVATE KEY-----",
        "private_key_type" => "rsa",
        "serial_number" => "fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b:70:af:c9:64:55:11:95:84:44:22:6f:e5"
      },
      "lease_duration" => 0,
      "lease_id" => "",
      "renewable" => false,
      "request_id" => "f53c85d0-46ef-df35-349f-dfe4e43ac6d8",
      "warnings" => nil,
      "wrap_info" => nil
    }}
  ```
  """
  @spec read(atom(), atom(), atom(), String.t(), map()) :: {:ok, map()} | {:error, String.t()}
  def read(server_name, engine_name, role, common_name, payload \\ %{}) do
    path = get_pki_path!(server_name, engine_name, role, "issue")
    path_read(server_name, path, common_name, payload)
  end

  @doc """
  Reads a brand new generated certificate from a role; raises if an error occurs.
  """
  @spec read!(atom(), atom(), atom(), String.t(), map()) :: map() | no_return()
  def read!(server_name, engine_name, role, common_name, payload \\ %{}) do
    case read(server_name, engine_name, role, common_name, payload) do
      {:error, msg} -> raise RuntimeError, message: msg
      {:ok, resp} -> resp
    end
  end

  @doc """
  Reads a brand new generated certificate from a role via a specific path.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#generate-certificate for options.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.path_read(:production, "/pki/test", "www.example.com")
  {:ok,
    %{
      "auth" => nil,
      "data" => %{
        "certificate" => "-----BEGIN CERTIFICATE-----generated-cert-----END CERTIFICATE-----",
        "expiration" => 1555610944,
        "issuing_ca" => "-----BEGIN CERTIFICATE-----ca-cert-goes-here-----END CERTIFICATE-----",
        "private_key" => "-----BEGIN RSA PRIVATE KEY-----some-rsa-key-here-----END RSA PRIVATE KEY-----",
        "private_key_type" => "rsa",
        "serial_number" => "fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b:70:af:c9:64:55:11:95:84:44:22:6f:e5"
      },
      "lease_duration" => 0,
      "lease_id" => "",
      "renewable" => false,
      "request_id" => "f53c85d0-46ef-df35-349f-dfe4e43ac6d8",
      "warnings" => nil,
      "wrap_info" => nil
    }}
  ```
  """
  @spec path_read(atom(), String.t(), String.t(), map()) :: {:ok, map()} | {:error, String.t()}
  def path_read(server_name, path, common_name, payload \\ %{}) do
    client = create_client(server_name)
    Engine.generate_secret(client, path, common_name, payload)
  end

  @doc """
  Update a pki role in vault.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#create-update-role for options.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.update(:production, :pki_engine1, :test_role1, %{allow_any_name: false})
  {:ok, "PKI role updated"}
  ```
  """
  @spec update(atom(), atom(), atom(), map()) :: {:ok, String.t()} | {:error, String.t()}
  def update(server_name, engine_name, role, payload \\ %{}) do
    path = get_pki_path!(server_name, engine_name, role, "roles")
    path_update(server_name, path, payload)
  end

  @doc """
  Update a pki role in vault; raises if an error occurs.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#create-update-role for options.
  """
  @spec update!(atom(), atom(), atom(), map()) :: :ok | no_return()
  def update!(server_name, engine_name, secret, payload \\ %{}) do
    case update(server_name, engine_name, secret, payload) do
      {:error, msg} -> raise RuntimeError, message: msg
      _resp -> :ok
    end
  end

  @doc """
  Update a pki role in vault via a specified path.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#create-update-role for options.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.path_update(:production, "pki/test", %{allow_any_name: false})
  {:ok, "PKI role updated"}
  ```
  """
  @spec path_update(atom(), String.t(), map()) :: {:ok, String.t()} | {:error, String.t()}
  def path_update(server_name, path, payload \\ %{}) do
    client = create_client(server_name)

    case Engine.create_role(client, path, payload) do
      {:ok, _} -> {:ok, "PKI role updated"}
      err -> err
    end
  end

  @doc """
  Revoke either a certificate or a role from the pki engine in vault.

  An optional payload can be provided if there is a need to override other options.

  See:
    - For role deletion options: https://www.vaultproject.io/api/secret/pki/index.html#delete-role
    - For cert deletion options:

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.delete(:production, :pki_engine1, :certificate, "17:84:7f:5b:bd:90:da:21:16")
  {:ok, "PKI certificate revoked"}
  iex(3)> Ptolemy.Engines.PKI.delete(:production, :pki_engine1, :role, :test_role1)
  {:ok, "PKI role revoked"}
  ```
  """
  @spec delete(atom(), atom(), atom(), any()) :: {:ok, String.t()} | {:error, String.t()}
  def delete(server_name, engine_name, delete_type, arg1) do
    case delete_type do
      :certificate -> delete_cert(server_name, engine_name, arg1)
      :role -> delete_role(server_name, engine_name, arg1)
    end
  end

  @doc """
  Revoke either a certificate or a role from the pki engine in vault; raises if an
  error occurs.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#delete-role for options.
  """
  @spec delete!(atom(), atom(), atom(), any()) :: :ok | no_return()
  def delete!(server_name, engine_name, delete_type, arg1) do
    case delete(server_name, engine_name, delete_type, arg1) do
      {:ok, _} -> :ok
      _ -> raise "Failed to delete from PKI engine"
    end
  end

  @doc """
  Revoke a certificate in vault.

  An optional payload can be provided if there is a need to override other options. See
  https://www.vaultproject.io/api/secret/pki/index.html#delete-role for options.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.delete_cert(:production, :pki_engine1, serial_number)
  {:ok, "PKI certificate revoked"}
  ```
  """
  @spec delete_cert(atom(), atom(), String.t()) :: {:ok, String.t()} | {:error, String.t()}
  def delete_cert(server_name, engine_name, serial_number) do
    path = get_pki_path!(server_name, engine_name, "revoke")
    path_delete_cert(server_name, path, serial_number)
  end

  @doc """
  Revoke a certificate in vault.
  """
  @spec path_delete_cert(atom(), String.t(), String.t()) ::
          {:ok, String.t()} | {:error, String.t()}
  def path_delete_cert(server_name, path, serial_number) do
    client = create_client(server_name)
    Engine.revoke_cert(client, path, serial_number)
  end

  @doc """
  Revoke a role in vault.

  ## Example
  ```elixir
  iex(2)> Ptolemy.Engines.PKI.delete_role(:production, :pki_engine1, :test_role1)
  {:ok, "PKI role revoked"}
  ```
  """
  @spec delete_role(atom(), atom(), atom()) :: {:ok, String.t()} | {:error, String.t()}
  def delete_role(server_name, engine_name, role) do
    path = get_pki_path!(server_name, engine_name, role, "roles")
    path_delete_role(server_name, path)
  end

  @doc """
  Revoke a role in vault.
  """
  @spec path_delete_role(atom(), String.t()) :: {:ok, String.t()} | {:error, String.t()}
  def path_delete_role(server_name, path) do
    client = create_client(server_name)
    Engine.revoke_role(client, path)
  end

  # Tesla client function
  defp create_client(server_name) do
    creds = Server.fetch_credentials(server_name)
    {:ok, http_opts} = Server.get_data(server_name, :http_opts)
    {:ok, url} = Server.get_data(server_name, :vault_url)

    Tesla.client([
      {Tesla.Middleware.BaseUrl, "#{url}/v1"},
      {Tesla.Middleware.Headers, creds},
      {Tesla.Middleware.Opts, http_opts},
      {Tesla.Middleware.JSON, []}
    ])
  end

  # Helper functions to make paths
  defp get_pki_path!(server_name, engine_name, role, operation) when is_atom(role) do
    with {:ok, conf} <- Server.get_data(server_name, :engines),
         {:ok, pki_conf} <- Keyword.fetch(conf, engine_name),
         %{engine_path: path, roles: roles} <- pki_conf do
      {:ok, role} = Map.fetch(roles, role)
      make_pki_path!(path, role, operation)
    else
      {:error, _msg} -> throw("#{server_name} does not have a pki_engine config")
      :error -> throw("Could not find engine_name in specified config")
    end
  end

  defp get_pki_path!(server_name, engine_name, role, operation) when is_bitstring(role) do
    with {:ok, conf} <- Server.get_data(server_name, :engines),
         {:ok, pki_conf} <- Keyword.fetch(conf, engine_name),
         %{engine_path: path, roles: roles} <- pki_conf do
      {:ok, role} = Map.fetch(roles, role)
      make_pki_path!(path, role, operation)
    else
      {:error, _msg} -> raise "#{server_name} does not have a pki_engine config"
      :error -> raise "Could not find engine_name in specified config"
    end
  end

  defp get_pki_path!(server_name, engine_name, operation) do
    with {:ok, conf} <- Server.get_data(server_name, :engines),
         {:ok, pki_conf} <- Keyword.fetch(conf, engine_name),
         %{engine_path: path} <- pki_conf do
      "/#{path}#{operation}"
    else
      {:error, _msg} -> raise "#{server_name} does not have a pki_engine config"
      :error -> raise "Could not find engine_name in specified config"
    end
  end

  defp make_pki_path!(engine_path, role_path, operation) do
    "/#{engine_path}#{operation}#{role_path}"
  end
end
lib/engines/pki/pki.ex
0.873795
0.642348
pki.ex
starcoder
defmodule Chunkr.Pagination do
  @moduledoc """
  Pagination functions.

  This module provides the high-level pagination logic. Under the hood, it delegates
  to whatever "planner" module is configured in the call to
  `use Chunkr, planner: YourApp.PaginationPlanner`.

  Note that you'll generally want to call the `paginate/4` or `paginate!/4` convenience
  functions on your Repo module and not the ones directly provided by this module. That
  way, you'll inherit any configuration previously set on your call to `use Chunkr`.
  """

  alias Chunkr.{Cursor, Opts, Page}

  @doc """
  Paginates a query in `sort_dir` using your predefined `strategy`.

  The `sort_dir` you specify aligns with the primary sort direction of your pagination
  strategy. However, you can also provide the inverse sort direction from what your
  pagination strategy specifies, and the entire sort strategy will automatically be
  inverted.

  The query _must not_ be ordered before calling `paginate/4` as the proper ordering
  will be automatically applied per the registered strategy.

  ## Options

    * `:first` — Retrieve the first _n_ results; must be between `0` and `:max_limit`.
    * `:last` — Retrieve the last _n_ results; must be between `0` and `:max_limit`.
    * `:after` — Return results starting after the provided cursor; optionally pairs
      with `:first`.
    * `:before` — Return results ending at the provided cursor; optionally pairs with
      `:last`.
    * `:max_limit` — Maximum number of results the user can request for this query.
      Default is #{Chunkr.default_max_limit()}.
    * `:cursor_mod` — Specifies the cursor module to use for encoding values as a
      cursor. Defaults to `Chunkr.Cursor.Base64`.
    * `:repo` — Repo to use for querying (automatically passed when calling either of
      the paginate convenience functions on your Repo).
    * `:planner` — The module implementing your pagination strategy (automatically
      passed when calling either of the paginate convenience functions on your Repo).
  """
  @spec paginate(any, atom(), Opts.sort_dir(), keyword) ::
          {:error, String.t()} | {:ok, Page.t()}
  def paginate(queryable, strategy, sort_dir, options) do
    with {:ok, opts} <- Opts.new(queryable, strategy, sort_dir, options),
         {:ok, queryable} <- validate_queryable(queryable) do
      extended_rows =
        queryable
        |> apply_where(opts)
        |> apply_order(opts)
        |> apply_select(opts)
        |> apply_limit(opts.limit + 1, opts)
        |> opts.repo.all()

      requested_rows = Enum.take(extended_rows, opts.limit)

      rows_to_return =
        case opts.paging_dir do
          :forward -> requested_rows
          :backward -> Enum.reverse(requested_rows)
        end

      {:ok,
       %Page{
         raw_results: rows_to_return,
         has_previous_page: has_previous_page?(opts, extended_rows, requested_rows),
         has_next_page: has_next_page?(opts, extended_rows, requested_rows),
         start_cursor: List.first(rows_to_return) |> row_to_cursor(opts),
         end_cursor: List.last(rows_to_return) |> row_to_cursor(opts),
         opts: opts
       }}
    else
      {:invalid_opts, message} ->
        {:error, message}

      {:invalid_query, :already_ordered} ->
        {:error, "Query must not be ordered prior to paginating with Chunkr"}
    end
  end

  defp validate_queryable(%Ecto.Query{order_bys: [_ | _]}),
    do: {:invalid_query, :already_ordered}

  defp validate_queryable(query), do: {:ok, query}

  @doc """
  Same as `paginate/4`, but raises an error for invalid input.
  """
  @spec paginate!(any, atom(), Opts.sort_dir(), keyword) :: Page.t()
  def paginate!(queryable, strategy, sort_dir, opts) do
    case paginate(queryable, strategy, sort_dir, opts) do
      {:ok, page} -> page
      {:error, message} -> raise ArgumentError, message
    end
  end

  defp has_previous_page?(%{paging_dir: :forward} = opts, _, _), do: !!opts.cursor

  defp has_previous_page?(%{paging_dir: :backward}, rows, requested_rows),
    do: rows != requested_rows

  defp has_next_page?(%{paging_dir: :forward}, rows, requested_rows),
    do: rows != requested_rows

  defp has_next_page?(%{paging_dir: :backward} = opts, _, _), do: !!opts.cursor

  defp row_to_cursor(nil, _opts), do: nil
  defp row_to_cursor({cursor_values, _}, opts), do: Cursor.encode!(cursor_values, opts.cursor_mod)

  defp apply_where(query, %{cursor: nil}), do: query

  defp apply_where(query, opts) do
    cursor_values = Cursor.decode!(opts.cursor, opts.cursor_mod)

    opts.planner.beyond_cursor(
      query,
      opts.strategy,
      opts.sort_dir,
      opts.paging_dir,
      cursor_values
    )
  end

  defp apply_order(query, opts) do
    opts.planner.apply_order(query, opts.strategy, opts.sort_dir, opts.paging_dir)
  end

  defp apply_select(query, opts) do
    opts.planner.apply_select(query, opts.strategy)
  end

  defp apply_limit(query, limit, opts) do
    opts.planner.apply_limit(query, limit)
  end
end
lib/chunkr/pagination.ex
0.863089
0.736377
pagination.ex
starcoder
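A hypothetical call through a repo that did `use Chunkr`; the `MyApp` modules and the `:user_id` strategy name are illustrative only.

```elixir
# Assumes MyApp.Repo called `use Chunkr, planner: MyApp.PaginationPlanner`
# and that a :user_id strategy is registered in that planner.
{:ok, page} = MyApp.Repo.paginate(MyApp.User, :user_id, :asc, first: 25)

page.raw_results    # up to 25 rows
page.has_next_page  # true when another page exists
page.end_cursor     # opaque cursor to pass as `after:` on the next call
```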
defprotocol Realm.Functor do
  @moduledoc ~S"""
  Functors are datatypes that allow the application of functions to their interior values.
  Always returns data in the same structure (same size, tree layout, and so on).

  Please note that bitstrings are not functors, as they fail the functor composition
  constraint. They change the structure of the underlying data, and thus composed lifting
  does not equal lifting a composed function. If you need to map over a bitstring, convert
  it to and from a charlist.

  ## Type Class

  An instance of `Realm.Functor` must define `Realm.Functor.map/2`.

      Functor
      [map/2]
  """

  @doc ~S"""
  `map` a function into one layer of a data wrapper.
  There is an autocurrying variant: `lift/2`.

  ## Examples

      iex> Realm.Functor.map([1, 2, 3], fn x -> x + 1 end)
      [2, 3, 4]

      iex> %{a: 1, b: 2} ~> fn x -> x * 10 end
      %{a: 10, b: 20}

      iex> Realm.Functor.map(%{a: 2, b: [1, 2, 3]}, fn
      ...>   int when is_integer(int) -> int * 100
      ...>   value -> inspect(value)
      ...> end)
      %{a: 200, b: "[1, 2, 3]"}
  """
  @spec map(t(), (any() -> any())) :: t()
  def map(wrapped, fun)
end

defmodule Realm.Functor.Algebra do
  use Quark

  alias Realm.Functor

  @doc ~S"""
  Replace all inner elements with a constant value

  ## Examples

      iex> import Realm.Functor.Algebra
      ...> replace([1, 2, 3], "hi")
      ["hi", "hi", "hi"]
  """
  @spec replace(Functor.t(), any()) :: Functor.t()
  def replace(functor, value), do: Functor.map(functor, curry(&constant(value, &1)))
end

defimpl Realm.Functor, for: Function do
  use Quark

  @doc """
  Compose functions

  ## Example

      iex> ex = Realm.Functor.lift(fn x -> x * 10 end, fn x -> x + 2 end)
      ...> ex.(2)
      22
  """
  def map(f, g), do: Quark.compose(g, f)
end

defimpl Realm.Functor, for: List do
  def map(list, fun), do: Enum.map(list, fun)
end

defimpl Realm.Functor, for: Tuple do
  def map(tuple, fun) do
    case tuple do
      {} ->
        {}

      {first} ->
        {fun.(first)}

      {first, second} ->
        {first, fun.(second)}

      {first, second, third} ->
        {first, second, fun.(third)}

      {first, second, third, fourth} ->
        {first, second, third, fun.(fourth)}

      {first, second, third, fourth, fifth} ->
        {first, second, third, fourth, fun.(fifth)}

      big_tuple ->
        last_index = tuple_size(big_tuple) - 1

        mapped =
          big_tuple
          |> elem(last_index)
          |> fun.()

        put_elem(big_tuple, last_index, mapped)
    end
  end
end

defimpl Realm.Functor, for: Map do
  def map(hashmap, fun) do
    hashmap
    |> Map.to_list()
    |> Realm.Functor.map(fn {key, value} -> {key, fun.(value)} end)
    |> Enum.into(%{})
  end
end
lib/realm/functor.ex
0.829285
0.668578
functor.ex
starcoder
defmodule Geocoder.Providers.OpenStreetMaps do
  use HTTPoison.Base
  use Towel

  @endpoint "https://nominatim.openstreetmap.org/"
  @endpath_reverse "/reverse"
  @endpath_search "/search"

  @defaults [format: "json", "accept-language": "en", addressdetails: 1]

  def geocode(opts) do
    request(@endpath_search, extract_opts(opts))
    |> fmap(&parse_geocode/1)
  end

  def geocode_list(opts) do
    request_all(@endpath_search, extract_opts(opts))
    |> fmap(fn
      %{} = result -> [parse_geocode(result)]
      r when is_list(r) -> Enum.map(r, &parse_geocode/1)
    end)
  end

  def reverse_geocode(opts) do
    request(@endpath_reverse, extract_opts(opts))
    |> fmap(&parse_reverse_geocode/1)
  end

  def reverse_geocode_list(opts) do
    request_all(@endpath_search, extract_opts(opts))
    |> fmap(fn
      %{} = result -> [parse_reverse_geocode(result)]
      r when is_list(r) -> Enum.map(r, &parse_reverse_geocode/1)
    end)
  end

  defp extract_opts(opts) do
    @defaults
    |> Keyword.merge(opts)
    |> Keyword.update!(:"accept-language", fn default -> opts[:language] || default end)
    |> Keyword.put(
      :q,
      case opts |> Keyword.take([:address, :latlng]) |> Keyword.values() do
        [{lat, lon}] -> "#{lat},#{lon}"
        [query] -> query
        _ -> nil
      end
    )
    |> Keyword.take(
      [
        :q,
        :key,
        :address,
        :components,
        :bounds,
        :region,
        :latlon,
        :lat,
        :lon,
        :placeid,
        :result_type,
        :location_type
      ] ++ Keyword.keys(@defaults)
    )
  end

  defp parse_geocode([]), do: :error

  defp parse_geocode(response) do
    coords = geocode_coords(response)
    bounds = geocode_bounds(response)
    location = geocode_location(response)
    %{coords | bounds: bounds, location: location}
  end

  defp parse_reverse_geocode([]), do: :error

  defp parse_reverse_geocode(response) do
    coords = geocode_coords(response)
    bounds = geocode_bounds(response)
    location = geocode_location(response)
    %{coords | bounds: bounds, location: location}
  end

  defp geocode_coords(%{"lat" => lat, "lon" => lon}) do
    [lat, lon] = [lat, lon] |> Enum.map(&elem(Float.parse(&1), 0))
    %Geocoder.Coords{lat: lat, lon: lon}
  end

  defp geocode_coords(_), do: %Geocoder.Coords{}

  defp geocode_bounds(%{"boundingbox" => bbox}) do
    [north, south, west, east] = bbox |> Enum.map(&elem(Float.parse(&1), 0))
    %Geocoder.Bounds{top: north, right: east, bottom: south, left: west}
  end

  defp geocode_bounds(_), do: %Geocoder.Bounds{}

  # %{"address" =>
  #     %{"city" => "Ghent", "city_district" => "Wondelgem", "country" => "Belgium",
  #       "country_code" => "be", "county" => "Gent", "postcode" => "9032",
  #       "road" => "Dikkelindestraat", "state" => "Flanders"},
  #   "boundingbox" => ["51.075731", "51.0786674", "3.7063849", "3.7083991"],
  #   "display_name" => "Dikkelindestraat, Wondelgem, Ghent, Gent, East Flanders, Flanders, 9032, Belgium",
  #   "lat" => "51.0772661",
  #   "licence" => "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright",
  #   "lon" => "3.7074267",
  #   "osm_id" => "45352282", "osm_type" => "way", "place_id" => "70350383"}
  @map %{
    "house_number" => :street_number,
    # Australia suburbs are used instead of counties: https://github.com/knrz/geocoder/pull/71
    "suburb" => :county,
    "county" => :county,
    "city" => :city,
    "road" => :street,
    "state" => :state,
    "postcode" => :postal_code,
    "country" => :country
  }

  defp geocode_location(%{"address" => address} = response) do
    reduce = fn {type, name}, location ->
      struct(location, [{@map[type], name}])
    end

    location = %Geocoder.Location{
      country_code: address["country_code"],
      formatted_address: response["display_name"]
    }

    address
    |> Enum.reduce(location, reduce)
  end

  defp request_all(path, params) do
    httpoison_options = Application.get_env(:geocoder, Geocoder.Worker)[:httpoison_options] || []

    case get(path, [], Keyword.merge(httpoison_options, params: Enum.into(params, %{}))) do
      {:ok, %{status_code: 200, body: results}} -> {:ok, List.wrap(results)}
      {_, response} -> {:error, response}
    end
  end

  def request(path, params) do
    request_all(path, params)
    |> fmap(&List.first/1)
  end

  def process_url(url) do
    @endpoint <> url
  end

  def process_response_body(body) do
    body
    |> Jason.decode!()
  end
end
lib/geocoder/providers/open_street_maps.ex
0.623721
0.499451
open_street_maps.ex
starcoder
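A sketch of a forward-geocoding call against the public Nominatim endpoint (network access required; the address is illustrative):

```elixir
{:ok, coords} =
  Geocoder.Providers.OpenStreetMaps.geocode(address: "Dikkelindestraat, Ghent")

coords.lat      # latitude as a float
coords.location # %Geocoder.Location{} built by geocode_location/1 above
```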
defmodule Scenic.Scrollable.PositionCap do
  alias __MODULE__

  @moduledoc """
  Module for applying limits to a position.
  """

  @typedoc """
  A vector 2 in the form of {x, y}
  """
  @type v2 :: Scenic.Scrollable.v2()

  @typedoc """
  Data structure representing a minimum, or maximum cap which values will be compared
  against. The cap can be either a `t:v2/0` or a `t:Scenic.Scrollable.Direction.t/0`.
  By using a `t:Scenic.Scrollable.Direction.t/0` it is possible to cap a position only
  for either its x, or its y value.
  """
  @type cap :: v2 | {:horizontal, number} | {:vertical, number}

  @typedoc """
  The settings with which to initialize a `t:Scenic.Scrollable.PositionCap.t/0`.
  Both min and max caps are optional, and can be further limited to only the x, or y
  axes by passing in a `t:Scenic.Scrollable.Direction.t/0` rather than a `t:v2/0`.
  """
  @type settings :: %{
          optional(:max) => cap,
          optional(:min) => cap
        }

  @typedoc """
  A struct representing a position cap.
  Positions in the form of a `t:v2/0` can be compared against, and increased or reduced
  to the capped values by using the `cap/2` function.
  """
  @type t :: %PositionCap{
          max: {:some, cap} | :none,
          min: {:some, cap} | :none
        }

  defstruct max: :none,
            min: :none

  @doc """
  Initializes a `t:Scenic.Scrollable.PositionCap.t/0` according to the provided
  `t:Scenic.Scrollable.PositionCap.settings/0`.
  """
  @spec init(settings) :: t
  def init(settings) do
    # TODO add validation in order to prevent a max value that is smaller than the min value
    # In the current code, the max value will take precedence in such case
    %PositionCap{
      max: OptionEx.return(settings[:max]),
      min: OptionEx.return(settings[:min])
    }
  end

  @doc """
  Compare the upper and lower limits set in the `t:Scenic.Scrollable.PositionCap.t/0`
  against the `t:v2/0` provided, and adjusts the `t:v2/0` according to those limits.
  """
  @spec cap(t, v2) :: v2
  def cap(%{min: min, max: max}, coordinate) do
    coordinate
    |> floor(min)
    |> ceil(max)
  end

  @spec floor(v2, {:some, cap} | :none) :: v2
  defp floor(coordinate, :none), do: coordinate
  defp floor({x, y}, {:some, {:horizontal, min_x}}), do: {max(x, min_x), y}
  defp floor({x, y}, {:some, {:vertical, min_y}}), do: {x, max(y, min_y)}
  defp floor({x, y}, {:some, {min_x, min_y}}), do: {max(x, min_x), max(y, min_y)}

  @spec ceil(v2, {:some, cap} | :none) :: v2
  defp ceil(coordinate, :none), do: coordinate
  defp ceil({x, y}, {:some, {:horizontal, max_x}}), do: {min(x, max_x), y}
  defp ceil({x, y}, {:some, {:vertical, max_y}}), do: {x, min(y, max_y)}
  defp ceil({x, y}, {:some, {max_x, max_y}}), do: {min(x, max_x), min(y, max_y)}
end
lib/utility/position_cap.ex
0.86148
0.748651
position_cap.ex
starcoder
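A worked example of the capping rules above (requires the OptionEx dependency used by `init/1`):

```elixir
iex> alias Scenic.Scrollable.PositionCap
iex> cap = PositionCap.init(%{min: {0, 0}, max: {:horizontal, 100}})
iex> PositionCap.cap(cap, {150, -20})
{100, 0}
iex> PositionCap.cap(cap, {50, 75})
{50, 75}
```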
defmodule Neuron do
  @moduledoc """
  Neurons have the format {{:neuron, .37374628}, {weights}}
  """

  defstruct cortex_id: nil,
            cortex_pid: nil,
            id: nil,
            pid: nil,
            af: :tanh,
            input_neurons: [],
            output_neurons: [],
            output_pids: nil,
            index: nil,
            weights: nil

  @doc """
  Creates neurons corresponding to the size of nn desired.
  """
  def generate(size) do
    hld =
      case is_atom(size) do
        true ->
          HLD.generate(size)

        _ ->
          IO.puts "HLD selected randomly"
          [bottom] = Enum.take_random(1..7, 1)
          HLD.generate(bottom, bottom + 2)
      end

    layers(hld, [], 0)
  end

  @doc """
  Generate layers of neurons based on HLD.
  Assigns index corresponding to layer depth according to HLD.
  """
  def layers(hld, acc, index) do
    if length(hld) >= 1 do
      [layer | rem] = hld
      neurons = Neuron.create(layer, index + 1, [])
      layers(rem, [neurons | acc], index + 1)
    else
      List.flatten(acc)
    end
  end

  @doc """
  Create neurons, assign weight and index, along with random ID.
  Default create() creates a single neuron, calling create(1, 1, [])
  """
  def create do
    create(1, 1, [])
  end

  def create(density, index, acc) do
    neuron = [%Neuron{id: {:neuron, Generate.id}, weights: {:weights}, index: index}]

    case density do
      0 -> acc
      _ -> create(density - 1, index, [neuron | acc])
    end
  end

  @doc """
  Receives the list of neurons and interactors. Reads the index values and
  assigns input and output neurons, such that each neuron feeds forward.
  """
  def assign_inputs_outputs_and_weights(neurons, sensors, actuators) do
    Enum.map(neurons, fn x ->
      %{
        x
        | input_neurons: input_neurons(neurons, sensors, x.index),
          output_neurons: output_neurons(neurons, actuators, x.index)
      }
    end)
    |> Enum.map(fn x -> %{x | weights: Weights.generate(length(x.input_neurons) + 1, [])} end)
  end

  @doc """
  Grabs the neurons with corresponding index value, creates list with their ids.
  For the first layer, it grabs the sensors.
  Neuron.input_neurons(neurons, sensors, index)
  """
  def input_neurons(neurons, sensors, index) do
    case index == 1 do
      true ->
        Enum.map(sensors, fn x -> x.id end)

      false ->
        neurons
        |> Enum.filter(fn x -> x.index == index - 1 end)
        |> Enum.map(fn x -> x.id end)
    end
  end

  @doc """
  Same as input_neurons(), creates a list of output ids.
  For the final layer, actuators are used.
  """
  def output_neurons(neurons, actuators, index) do
    max = Enum.max(Enum.map(neurons, fn x -> x.index end))

    case index == max do
      true ->
        Enum.map(actuators, fn x -> x.id end)

      false ->
        neurons
        |> Enum.filter(fn x -> x.index == index + 1 end)
        |> Enum.map(fn x -> x.id end)
    end
  end

  def run(neuron, acc) do
    receive do
      {:update_pids, output_pids, cortex_pid} ->
        %{neuron | output_pids: output_pids, cortex_pid: cortex_pid}
        |> run(acc)

      {:fire, input_vector} ->
        Transmit.neurons(
          neuron.output_pids,
          {:input_vector, neuron.id, af(input_vector, neuron.weights)}
        )

        run(neuron, acc)

      {:input_vector, incoming_neuron, input} ->
        input_list = [{incoming_neuron, input} | acc]

        case length(input_list) == length(neuron.input_neurons) do
          true ->
            input_vector =
              Enum.map(neuron.input_neurons, fn x ->
                Enum.find(input_list, fn {incoming_neuron, _input} -> x == incoming_neuron end)
              end)
              |> Enum.map(fn {_incoming_neuron, input} -> input end)

            input_vector_with_bias = List.flatten([input_vector, 1])

            Transmit.neurons(
              neuron.output_pids,
              {:input_vector, neuron.id, af(input_vector_with_bias, neuron.weights)}
            )

            run(neuron, [])

          false ->
            run(neuron, input_list)
        end

      {:test, _} ->
        Transmit.neurons(neuron.output_pids, {:test, :neuron})
        run(neuron, acc)

      {:terminate} ->
        IO.puts "exiting neuron"
        Process.exit(self(), :normal)
    end
  end

  def af(input_vector, weights) do
    dot = dot(input_vector, weights, [], 0)
    :math.tanh(dot)
  end

  def dot(a, b) do
    dot(a, b, [], 0)
  end

  def dot(matrix1, matrix2, acc, counter) do
    if counter < length(matrix1) do
      sum = Enum.at(matrix1, counter) * Enum.at(matrix2, counter)
      dot(matrix1, matrix2, [sum | acc], counter + 1)
    else
      Enum.sum(acc)
    end
  end
end
lib/neuron.ex
0.810179
0.790328
neuron.ex
starcoder
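The arithmetic helpers at the bottom of the `Neuron` module are pure, so they are easy to sanity-check:

```elixir
iex> Neuron.dot([1, 2, 3], [4, 5, 6])
32
iex> Neuron.af([1.0, 1.0], [0.5, -0.5])
0.0
```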
defmodule GrovePi.Digital do
  alias GrovePi.Board

  @moduledoc """
  Write to and read digital I/O on the GrovePi.

  This module provides a low level API to digital sensors.

  Example usage:
  ```
  iex> pin = 3
  iex> GrovePi.Digital.set_pin_mode(pin, :input)
  :ok
  iex> GrovePi.Digital.read(pin)
  1
  iex> GrovePi.Digital.set_pin_mode(pin, :output)
  :ok
  iex> GrovePi.Digital.write(pin, 1)
  :ok
  iex> GrovePi.Digital.write(pin, 0)
  :ok
  ```
  """

  @type pin_mode :: :input | :output
  @type level :: 0 | 1

  @spec set_pin_mode(atom, GrovePi.pin, pin_mode) :: :ok | {:error, term}
  def set_pin_mode(prefix, pin, pin_mode) do
    Board.send_request(prefix, <<5, pin, mode(pin_mode), 0>>)
  end

  @doc """
  Configure a digital I/O pin to be an `:input` or an `:output`.
  """
  @spec set_pin_mode(GrovePi.pin, pin_mode) :: :ok | {:error, term}
  def set_pin_mode(pin, pin_mode) do
    set_pin_mode(Default, pin, pin_mode)
  end

  @spec read(atom, GrovePi.pin) :: level | {:error, term}
  def read(prefix, pin) do
    with :ok <- Board.send_request(prefix, <<1, pin, 0, 0>>),
         <<value>> = Board.get_response(prefix, 1),
         do: value
  end

  @doc """
  Read the value on a digital I/O pin. Before this is called, the pin must be
  configured as an `:input` with `set_pin_mode/2` or `set_pin_mode/3`.
  """
  @spec read(GrovePi.pin) :: level | {:error, term}
  def read(pin) do
    read(Default, pin)
  end

  @spec write(atom, GrovePi.pin, level) :: :ok | {:error, term}
  def write(prefix, pin, value) when value == 0 or value == 1 do
    Board.send_request(prefix, <<2, pin, value, 0>>)
  end

  @doc """
  Write a value on a digital I/O pin. Before this is called, the pin must be
  configured as an `:output` with `set_pin_mode/2` or `set_pin_mode/3`. Valid
  values are `0` (low) and `1` (high).
  """
  @spec write(GrovePi.pin, level) :: :ok | {:error, term}
  def write(pin, value) do
    write(Default, pin, value)
  end

  defp mode(:input), do: 0
  defp mode(:output), do: 1
end
lib/grovepi/digital.ex
0.805938
0.840128
digital.ex
starcoder
defmodule Omise.Json do
  alias Omise.Json.{Decoder, Encoder}

  defdelegate encode(input), to: Encoder
  defdelegate decode(input, opts \\ []), to: Decoder
end

defmodule Omise.Json.Encoder do
  def encode(input) do
    input
    |> transform_data()
    |> Jason.encode()
    |> case do
      {:ok, output} -> {:ok, output}
      _ -> {:error, :invalid_input_data}
    end
  end

  defp transform_data(data) when is_list(data) or is_map(data) do
    data
    |> Enum.map(&do_transform_data/1)
    |> Enum.into(%{})
  end

  defp transform_data(data) do
    data
  end

  defp do_transform_data({key, [head | _] = value}) when is_tuple(head) do
    {key, transform_data(value)}
  end

  defp do_transform_data(kv) do
    kv
  end
end

defmodule Omise.Json.Decoder do
  alias Omise.Json.StructTransformer

  def decode(input, opts) do
    case Jason.decode(input) do
      {:ok, output} -> {:ok, transform_decoded_data(output, opts)}
      _ -> {:error, :invalid_input_data}
    end
  end

  def transform_decoded_data(decoded_data, opts) do
    case Keyword.fetch(opts, :as) do
      {:ok, struct} -> to_struct(decoded_data, struct)
      :error -> decoded_data
    end
  end

  defp to_struct(decoded_data, struct) do
    fields = extract_fields_from_struct(struct, decoded_data)

    struct.__struct__
    |> struct(fields)
    |> StructTransformer.transform()
  end

  defp extract_fields_from_struct(struct, decoded_data) do
    struct
    |> Map.from_struct()
    |> Enum.reduce(%{}, fn {key, default_value}, acc ->
      value =
        decoded_data
        |> Map.get(Atom.to_string(key))
        |> transform_value(default_value)

      Map.put(acc, key, value)
    end)
  end

  defp transform_value(nil, %{__struct__: _}), do: nil
  defp transform_value(nil, default_value), do: default_value
  defp transform_value(value, %{__struct__: _} = struct), do: to_struct(value, struct)
  defp transform_value(values, [struct]), do: Enum.map(values, &transform_value(&1, struct))
  defp transform_value(value, _), do: value
end

defprotocol Omise.Json.StructTransformer do
  @fallback_to_any true

  @spec transform(struct()) :: struct()
  def transform(struct)
end

defimpl Omise.Json.StructTransformer, for: Any do
  def transform(struct) do
    struct
  end
end
lib/omise/json.ex
0.705481
0.467149
json.ex
starcoder
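Round-tripping a plain map through the encoder and decoder above (Jason must be available):

```elixir
iex> {:ok, json} = Omise.Json.encode(%{amount: 10_000, currency: "thb"})
iex> Omise.Json.decode(json)
{:ok, %{"amount" => 10000, "currency" => "thb"}}
```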
defmodule Absinthe.Plug.GraphiQL do
  @moduledoc """
  Enables GraphiQL

  # Usage

  ```elixir
  if Absinthe.Plug.GraphiQL.serve? do
    plug Absinthe.Plug.GraphiQL
  end
  ```
  """

  require EEx

  @graphiql_version "0.7.8"
  EEx.function_from_file :defp, :graphiql_html,
    Path.join(__DIR__, "graphiql.html.eex"),
    [:graphiql_version, :query_string, :variables_string, :result_string]

  @graphql_toolbox_version "1.0.1"
  EEx.function_from_file :defp, :graphql_toolbox_html,
    Path.join(__DIR__, "graphql_toolbox.html.eex"),
    [:graphql_toolbox_version, :query_string, :variables_string]

  @behaviour Plug

  import Plug.Conn
  import Absinthe.Plug, only: [prepare: 3, setup_pipeline: 3, load_body_and_params: 1]

  @type opts :: [
          schema: atom,
          adapter: atom,
          path: binary,
          context: map,
          json_codec: atom | {atom, Keyword.t},
          interface: atom
        ]

  @doc """
  Sets up and validates the Absinthe schema
  """
  @spec init(opts :: opts) :: map
  def init(opts) do
    opts
    |> Absinthe.Plug.init
    |> Map.put(:interface, Keyword.get(opts, :interface, :advanced))
  end

  def call(conn, config) do
    case html?(conn) do
      true -> do_call(conn, config)
      _ -> Absinthe.Plug.call(conn, config)
    end
  end

  defp html?(conn) do
    Plug.Conn.get_req_header(conn, "accept")
    |> List.first
    |> case do
      string when is_binary(string) -> String.contains?(string, "text/html")
      _ -> false
    end
  end

  defp do_call(conn, %{json_codec: _, interface: interface} = config) do
    {conn, body} = load_body_and_params(conn)

    with {:ok, input, opts} <- prepare(conn, body, config),
         pipeline <- setup_pipeline(conn, config, opts),
         {:ok, result, _} <- Absinthe.Pipeline.run(input, pipeline) do
      {:ok, result, opts[:variables], input}
    end
    |> case do
      {:ok, result, variables, query} ->
        query = query |> js_escape
        var_string = variables |> Poison.encode!(pretty: true) |> js_escape
        result = result |> Poison.encode!(pretty: true) |> js_escape

        html =
          case interface do
            :advanced -> graphql_toolbox_html(@graphql_toolbox_version, query, var_string)
            :simple -> graphiql_html(@graphiql_version, query, var_string, result)
          end

        conn
        |> put_resp_content_type("text/html")
        |> send_resp(200, html)

      {:input_error, msg} ->
        conn
        |> send_resp(400, msg)

      {:error, {:http_method, text}, _} ->
        conn
        |> send_resp(405, text)

      {:error, error, _} when is_binary(error) ->
        conn
        |> send_resp(500, error)
    end
  end

  defp js_escape(string) do
    string
    |> String.replace(~r/\n/, "\\n")
    |> String.replace(~r/'/, "\\'")
  end
end
lib/absinthe/plug/graphiql.ex
0.673836
0.487307
graphiql.ex
starcoder
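A hypothetical mounting of the plug above in a Phoenix router; `MyApp.Schema` is a placeholder, and `:interface` selects between the two HTML shells this module renders (`:advanced` for GraphQL Toolbox, `:simple` for GraphiQL):

```elixir
# Illustrative only; any Plug pipeline works the same way.
forward "/graphiql", Absinthe.Plug.GraphiQL,
  schema: MyApp.Schema,
  interface: :simple
```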
defmodule Postgrex.Date do
  @moduledoc """
  Struct for Postgres date.

  ## Fields

    * `year`
    * `month`
    * `day`
  """

  @type t :: %__MODULE__{year: 0..10000, month: 1..12, day: 1..31}

  defstruct [
    year: 0,
    month: 1,
    day: 1]
end

defmodule Postgrex.Time do
  @moduledoc """
  Struct for Postgres time.

  ## Fields

    * `hour`
    * `min`
    * `sec`
    * `usec`
  """

  @type t :: %__MODULE__{hour: 0..23, min: 0..59, sec: 0..59, usec: 0..999_999}

  defstruct [
    hour: 0,
    min: 0,
    sec: 0,
    usec: 0]
end

defmodule Postgrex.Timestamp do
  @moduledoc """
  Struct for Postgres timestamp.

  ## Fields

    * `year`
    * `month`
    * `day`
    * `hour`
    * `min`
    * `sec`
    * `usec`
  """

  @type t :: %__MODULE__{year: 0..10000, month: 1..12, day: 1..31,
                         hour: 0..23, min: 0..59, sec: 0..59, usec: 0..999_999}

  defstruct [
    year: 0,
    month: 1,
    day: 1,
    hour: 0,
    min: 0,
    sec: 0,
    usec: 0]
end

defmodule Postgrex.Interval do
  @moduledoc """
  Struct for Postgres interval.

  ## Fields

    * `months`
    * `days`
    * `secs`
  """

  @type t :: %__MODULE__{months: integer, days: integer, secs: integer}

  defstruct [
    months: 0,
    days: 0,
    secs: 0]
end

defmodule Postgrex.Range do
  @moduledoc """
  Struct for Postgres range.

  ## Fields

    * `lower`
    * `upper`
    * `lower_inclusive`
    * `upper_inclusive`
  """

  @type t :: %__MODULE__{lower: term, upper: term, lower_inclusive: boolean,
                         upper_inclusive: boolean}

  defstruct [
    lower: nil,
    upper: nil,
    lower_inclusive: true,
    upper_inclusive: true]
end

defmodule Postgrex.INET do
  @moduledoc """
  Struct for Postgres inet.

  ## Fields

    * `address`
  """

  @type t :: %__MODULE__{address: :inet.ip_address}

  defstruct [address: nil]
end

defmodule Postgrex.CIDR do
  @moduledoc """
  Struct for Postgres cidr.

  ## Fields

    * `address`
    * `netmask`
  """

  @type t :: %__MODULE__{address: :inet.ip_address, netmask: 0..128}

  defstruct [
    address: nil,
    netmask: nil]
end

defmodule Postgrex.MACADDR do
  @moduledoc """
  Struct for Postgres macaddr.

  ## Fields

    * `address`
  """

  @type macaddr :: {0..255, 0..255, 0..255, 0..255, 0..255, 0..255}

  @type t :: %__MODULE__{address: macaddr}

  defstruct [address: nil]
end

defmodule Postgrex.Point do
  @moduledoc """
  Struct for Postgres point.

  ## Fields

    * `x`
    * `y`
  """

  @type t :: %__MODULE__{x: float, y: float}

  defstruct [
    x: nil,
    y: nil]
end
deps/postgrex/lib/postgrex/builtins.ex
0.904387
0.596051
builtins.ex
starcoder
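Constructing a few of the structs above directly; note that open range bounds are expressed through the inclusivity flags rather than special sentinel values:

```elixir
interval = %Postgrex.Interval{months: 1, days: 2, secs: 3}
range = %Postgrex.Range{lower: 1, upper: 10, upper_inclusive: false}
inet = %Postgrex.INET{address: {127, 0, 0, 1}}
```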
defmodule EctoShorts.CommonFilters do
  @moduledoc """
  This module's main purpose is to house a collection of common schema filters
  and functionality to be included in params -> filters

  Common filters available include:

    - `preload` - Preloads fields onto the query results
    - `start_date` - Query for items inserted after this date
    - `end_date` - Query for items inserted before this date
    - `before` - Get items with ID's before this value
    - `after` - Get items with ID's after this value
    - `ids` - Get items with a list of ids
    - `first` - Gets the first n items
    - `last` - Gets the last n items
    - `search` - ***Warning:*** This requires schemas using this to have a
      `&by_search(query, val)` function

  You are also able to filter on any natural field of a model, as well as use:

    - gte/gt
    - lte/lt
    - like/ilike
    - is_nil/not(is_nil)

  ```elixir
  CommonFilters.convert_params_to_filter(User, %{name: %{ilike: "steve"}})
  CommonFilters.convert_params_to_filter(User, %{age: %{gte: 18, lte: 30}})
  CommonFilters.convert_params_to_filter(User, %{is_banned: %{!=: nil}})
  CommonFilters.convert_params_to_filter(User, %{is_banned: %{==: nil}})
  CommonFilters.convert_params_to_filter(User, %{name: "Billy"})
  ```
  """

  import Ecto.Query, only: [order_by: 2]

  alias EctoShorts.QueryBuilder

  @common_filters QueryBuilder.Common.filters()

  @doc "Converts filter params into a query"
  @spec convert_params_to_filter(
          queryable :: Ecto.Query.t(),
          params :: Keyword.t() | map,
          order_by_prop :: atom | nil
        ) :: Ecto.Query.t()
  def convert_params_to_filter(query, params, order_by_prop \\ :id)

  def convert_params_to_filter(query, params, _order_by_prop) when params === %{}, do: query

  def convert_params_to_filter(query, params, order_by_prop) when is_map(params),
    do: convert_params_to_filter(query, Map.to_list(params), order_by_prop)

  def convert_params_to_filter(query, params, nil) do
    params
    |> ensure_last_is_final_filter
    |> Enum.reduce(query, &create_schema_filter/2)
  end

  def convert_params_to_filter(query, params, order_by_prop) do
    params
    |> ensure_last_is_final_filter
    |> Enum.reduce(order_by(query, ^order_by_prop), &create_schema_filter/2)
  end

  def create_schema_filter({filter, val}, query) when filter in @common_filters do
    QueryBuilder.create_schema_filter(QueryBuilder.Common, {filter, val}, query)
  end

  def create_schema_filter({filter, val}, query) do
    QueryBuilder.create_schema_filter(QueryBuilder.Schema, {filter, val}, query)
  end

  defp ensure_last_is_final_filter(params) do
    if Keyword.has_key?(params, :last) do
      params
      |> Keyword.delete(:last)
      |> Kernel.++([last: params[:last]])
    else
      params
    end
  end
end
lib/common_filters.ex
0.829803
0.785597
common_filters.ex
starcoder
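As a usage sketch, assuming a `User` Ecto schema and a `MyApp.Repo` (both hypothetical, as in the moduledoc examples), the returned query can be handed straight to the repo:

```elixir
import Ecto.Query, warn: false

# Combine a common filter (:first) with a natural field filter (:name).
query =
  EctoShorts.CommonFilters.convert_params_to_filter(
    User,
    %{first: 10, name: %{ilike: "%steve%"}}
  )

# `query` is an Ecto.Query (ordered by :id, the default order_by_prop),
# ready for execution.
MyApp.Repo.all(query)
```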
defmodule Xgit.Tree do @moduledoc ~S""" Represents a git `tree` object in memory. """ alias Xgit.ContentSource alias Xgit.FileMode alias Xgit.FilePath alias Xgit.Object alias Xgit.ObjectId import Xgit.Util.ForceCoverage @typedoc ~S""" This struct describes a single `tree` object so it can be manipulated in memory. ## Struct Members * `:entries`: list of `Tree.Entry` structs, which must be sorted by name """ @type t :: %__MODULE__{entries: [__MODULE__.Entry.t()]} @enforce_keys [:entries] defstruct [:entries] defmodule Entry do @moduledoc ~S""" A single file in a `tree` structure. """ use Xgit.FileMode alias Xgit.FileMode alias Xgit.FilePath alias Xgit.ObjectId alias Xgit.Util.Comparison import Xgit.Util.ForceCoverage @typedoc ~S""" A single file in a tree structure. ## Struct Members * `name`: (`FilePath.t`) entry path name, relative to top-level directory (without leading slash) * `object_id`: (`ObjectId.t`) SHA-1 for the represented object * `mode`: (`FileMode.t`) """ @type t :: %__MODULE__{ name: FilePath.t(), object_id: ObjectId.t(), mode: FileMode.t() } @enforce_keys [:name, :object_id, :mode] defstruct [:name, :object_id, :mode] @doc ~S""" Return `true` if this entry struct describes a valid tree entry. """ @spec valid?(entry :: any) :: boolean def valid?(entry) def valid?( %__MODULE__{ name: name, object_id: object_id, mode: mode } = _entry ) when is_list(name) and is_binary(object_id) and is_file_mode(mode) do FilePath.check_path_segment(name) == :ok && ObjectId.valid?(object_id) && object_id != ObjectId.zero() end def valid?(_), do: cover(false) @doc ~S""" Compare two entries according to git file name sorting rules. ## Return Value * `:lt` if `entry1` sorts before `entry2`. * `:eq` if they are the same. * `:gt` if `entry1` sorts after `entry2`. """ @spec compare(entry1 :: t | nil, entry2 :: t) :: Comparison.result() def compare(entry1, entry2) def compare(nil, _entry2), do: cover(:lt) def compare(%{name: name1} = _entry1, %{name: name2} = _entry2) do cond do name1 < name2 -> cover :lt name2 < name1 -> cover :gt true -> cover :eq end end end @doc ~S""" Return `true` if the value is a tree struct that is valid. All of the following must be true for this to occur: * The value is a `Tree` struct. * The entries are properly sorted. * All entries are valid, as determined by `Xgit.Tree.Entry.valid?/1`. """ @spec valid?(tree :: any) :: boolean def valid?(tree) def valid?(%__MODULE__{entries: entries}) when is_list(entries) do Enum.all?(entries, &Entry.valid?/1) && entries_sorted?([nil | entries]) end def valid?(_), do: cover(false) defp entries_sorted?([entry1, entry2 | tail]), do: Entry.compare(entry1, entry2) == :lt && entries_sorted?([entry2 | tail]) defp entries_sorted?([_]), do: cover(true) @typedoc ~S""" Error response codes returned by `from_object/1`. """ @type from_object_reason :: :not_a_tree | :invalid_format | :invalid_tree @doc ~S""" Renders a tree structure from an `Xgit.Object`. ## Return Values `{:ok, tree}` if the object contains a valid `tree` object. `{:error, :not_a_tree}` if the object contains an object of a different type. `{:error, :invalid_format}` if the object says that is of type `tree`, but can not be parsed as such. `{:error, :invalid_tree}` if the object can be parsed as a tree, but the entries are not well formed or not properly sorted. 
""" @spec from_object(object :: Object.t()) :: {:ok, tree :: t} | {:error, from_object_reason} def from_object(object) def from_object(%Object{type: :tree, content: content} = _object) do content |> ContentSource.stream() |> Enum.to_list() |> from_object_internal([]) end def from_object(%Object{} = _object), do: cover({:error, :not_a_tree}) defp from_object_internal(data, entries_acc) defp from_object_internal([], entries_acc) do tree = %__MODULE__{entries: Enum.reverse(entries_acc)} if valid?(tree) do cover {:ok, tree} else cover {:error, :invalid_tree} end end defp from_object_internal(data, entries_acc) do with {:ok, file_mode, data} <- parse_file_mode(data, 0), true <- FileMode.valid?(file_mode), {name, [0 | data]} <- path_and_object_id(data), :ok <- FilePath.check_path_segment(name), {raw_object_id, data} <- Enum.split(data, 20), 20 <- Enum.count(raw_object_id), false <- Enum.all?(raw_object_id, &(&1 == 0)) do this_entry = %__MODULE__.Entry{ name: name, mode: file_mode, object_id: ObjectId.from_binary_iodata(raw_object_id) } from_object_internal(data, [this_entry | entries_acc]) else _ -> cover {:error, :invalid_format} end end defp parse_file_mode([], _mode), do: cover({:error, :invalid_mode}) defp parse_file_mode([?\s | data], mode), do: cover({:ok, mode, data}) defp parse_file_mode([?0 | _data], 0), do: cover({:error, :invalid_mode}) defp parse_file_mode([c | data], mode) when c >= ?0 and c <= ?7, do: parse_file_mode(data, mode * 8 + (c - ?0)) defp parse_file_mode([_c | _data], _mode), do: cover({:error, :invalid_mode}) defp path_and_object_id(data), do: Enum.split_while(data, &(&1 != 0)) @doc ~S""" Renders this tree structure into a corresponding `Xgit.Object`. """ @spec to_object(tree :: t) :: Object.t() def to_object(tree) def to_object(%__MODULE__{entries: entries} = _tree) do rendered_entries = entries |> Enum.map(&entry_to_iodata/1) |> IO.iodata_to_binary() |> :binary.bin_to_list() %Object{ type: :tree, content: rendered_entries, size: Enum.count(rendered_entries), id: ObjectId.calculate_id(rendered_entries, :tree) } end defp entry_to_iodata(%__MODULE__.Entry{name: name, object_id: object_id, mode: mode}), do: cover([FileMode.to_short_octal(mode), ?\s, name, 0, ObjectId.to_binary_iodata(object_id)]) end
lib/xgit/tree.ex
0.90091
0.505554
tree.ex
starcoder
defmodule ExMath do
  @spec id(a) :: a when a: any
  def id(x), do: x

  @spec sign(number) :: -1 | 0 | 1
  def sign(0), do: 0
  def sign(x) when x < 0, do: -1
  def sign(_), do: 1

  @spec sum([number]) :: number
  def sum([]), do: 0
  def sum(xs), do: Enum.reduce(xs, &Kernel.+/2)

  @spec factors(pos_integer) :: [pos_integer]
  def factors(1), do: [1]
  # Relies on an external `Math.Primes.sieve/1` for the list of primes up to x.
  def factors(x), do: factors(x, Math.Primes.sieve(x), [])

  defp factors(x, [prime | _rest], fs) when x < prime, do: Enum.reverse(fs)

  defp factors(x, [prime | rest], fs) when rem(x, prime) == 0 do
    factors(div(x, prime), [prime | rest], [prime | fs])
  end

  defp factors(x, [_ | rest], fs), do: factors(x, rest, fs)

  @spec gcd(integer, integer) :: integer
  def gcd(a, 0), do: a
  def gcd(a, b), do: gcd(b, rem(a, b))

  @spec hypot(number, number) :: float
  def hypot(a, b) do
    a = abs(a)
    b = abs(b)

    # Rebind outside the `if`: an assignment inside `if ..., do:` does not
    # leak out of its scope, so the swap must return the pair. Without this,
    # hypot(0, 5) would incorrectly return 0.0.
    {a, b} = if a < b, do: {b, a}, else: {a, b}

    if a == 0 do
      0.0
    else
      ba = b / a
      a * :math.sqrt(1 + ba * ba)
    end
  end

  @spec copysign(float, float) :: float
  def copysign(a, b) do
    <<_::1, rest::bitstring>> = <<a::float>>
    <<sign::1, _::bitstring>> = <<b::float>>
    <<ret::float>> = <<sign::1, rest::bitstring>>
    ret
  end

  @spec signbit(float) :: boolean
  def signbit(x) do
    case <<x::float>> do
      <<1::1, _::bitstring>> -> true
      _ -> false
    end
  end

  @doc """
  Equality comparison for floating point numbers, based on
  [this blog post](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/)
  by <NAME>.
  """
  @spec close_enough?(number, number, number, non_neg_integer) :: boolean
  def close_enough?(a, b, epsilon, max_ulps) do
    a = :erlang.float(a)
    b = :erlang.float(b)

    cond do
      abs(a - b) <= epsilon -> true
      signbit(a) != signbit(b) -> false
      ulp_diff(a, b) <= max_ulps -> true
      true -> false
    end
  end

  @spec ulp_diff(float, float) :: integer
  defp ulp_diff(a, b), do: abs(as_int(a) - as_int(b))

  @spec as_int(float) :: non_neg_integer
  defp as_int(x) do
    <<int::64>> = <<x::float>>
    int
  end
end
lib/exmath.ex
0.737158
0.672053
exmath.ex
starcoder
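A few spot checks of the functions above (`factors/1` is omitted since it depends on the external `Math.Primes` module):

```elixir
ExMath.sign(-7)                                   # => -1
ExMath.gcd(54, 24)                                # => 6
ExMath.hypot(3, 4)                                # => 5.0
ExMath.copysign(3.0, -1.0)                        # => -3.0 (sign bit taken from the second argument)
ExMath.signbit(-0.0)                              # => true (negative zero has its sign bit set)
ExMath.close_enough?(0.1 + 0.2, 0.3, 1.0e-12, 4)  # => true (within epsilon despite float rounding)
```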
defmodule MelodyMatch.Matches do
  @moduledoc """
  The Matches context.
  """

  import Ecto.Query, warn: false
  alias MelodyMatch.Repo

  alias MelodyMatch.Matches.Match

  @doc """
  Returns the list of matches.

  ## Examples

      iex> list_matches()
      [%Match{}, ...]

  """
  def list_matches do
    Repo.all(Match)
  end

  @doc """
  Gets a single match.

  Raises `Ecto.NoResultsError` if the Match does not exist.

  ## Examples

      iex> get_match!(123)
      %Match{}

      iex> get_match!(456)
      ** (Ecto.NoResultsError)

  """
  def get_match!(id), do: Repo.get!(Match, id)

  @doc """
  Gets a list of matches by user id (either first or second user involved).
  """
  def get_matches_by_user_id(id) do
    query = from m in Match, where: m.first_user_id == ^id or m.second_user_id == ^id
    Repo.all(query)
  end

  @doc """
  Gets a list of matches by user id where the match occurred within the last
  `delta_hours` (default = 24) hours.
  """
  def get_user_recent_matches(user_id, delta_hours \\ 24) do
    cutoff =
      DateTime.utc_now()
      |> DateTime.add(-1 * delta_hours * 60 * 60)
      |> DateTime.to_naive()

    query =
      from m in Match,
        where:
          (m.first_user_id == ^user_id or m.second_user_id == ^user_id) and
            m.updated_at > ^cutoff

    Repo.all(query)
  end

  @doc """
  Creates a match.

  ## Examples

      iex> create_match(%{field: value})
      {:ok, %Match{}}

      iex> create_match(%{field: bad_value})
      {:error, %Ecto.Changeset{}}

  """
  def create_match(attrs \\ %{}) do
    %Match{}
    |> Match.changeset(attrs)
    |> Repo.insert()
  end

  @doc """
  Deletes a match.

  ## Examples

      iex> delete_match(match)
      {:ok, %Match{}}

      iex> delete_match(match)
      {:error, %Ecto.Changeset{}}

  """
  def delete_match(%Match{} = match) do
    Repo.delete(match)
  end

  @doc """
  Returns an `%Ecto.Changeset{}` for tracking match changes.

  ## Examples

      iex> change_match(match)
      %Ecto.Changeset{data: %Match{}}

  """
  def change_match(%Match{} = match, attrs \\ %{}) do
    Match.changeset(match, attrs)
  end
end
server/lib/melody_match/matches.ex
0.831725
0.434821
matches.ex
starcoder
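The only non-boilerplate logic here is the cutoff arithmetic in `get_user_recent_matches/2`: the hour window is converted to seconds, subtracted from the current UTC time, and truncated to a naive datetime so it can be compared against `updated_at`. In isolation:

```elixir
delta_hours = 24

cutoff =
  DateTime.utc_now()
  # 24 h * 60 min * 60 s = 86_400 seconds into the past
  |> DateTime.add(-1 * delta_hours * 60 * 60)
  |> DateTime.to_naive()

# A match counts as "recent" when m.updated_at > cutoff.
```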
if Appsignal.phoenix?() do defmodule Appsignal.Phoenix.Instrumenter do @moduledoc """ Phoenix instrumentation hooks This module can be used as a Phoenix instrumentation module. Adding this module to the list of Phoenix instrumenters will result in the `phoenix_controller_call` and `phoenix_controller_render` events to become part of your request timeline. Add this to your `config.exs`: ``` config :my_app, MyApp.Endpoint, instrumenters: [Appsignal.Phoenix.Instrumenter] ``` Note: Channels (`phoenix_channel_join` hook) are currently not supported. See the [Phoenix integration guide](http://docs.appsignal.com/elixir/integrations/phoenix.html) for information on how to instrument other aspects of Phoenix. """ @transaction Application.get_env(:appsignal, :appsignal_transaction, Appsignal.Transaction) @doc false def phoenix_controller_call(:start, _, args) do start_event(transaction(), args) end @doc false def phoenix_controller_call(:stop, _diff, {%Appsignal.Transaction{} = transaction, args}) do finish_event(transaction, "call.phoenix_controller", args) end def phoenix_controller_call(:stop, _, _), do: nil @doc false def phoenix_controller_render(:start, _, args) do start_event(transaction(), args) end @doc false def phoenix_controller_render(:stop, _diff, {%Appsignal.Transaction{} = transaction, args}) do finish_event(transaction, "render.phoenix_controller", args) end def phoenix_controller_render(:stop, _, _), do: nil defp transaction do Appsignal.TransactionRegistry.lookup(self()) end defp start_event(%Appsignal.Transaction{} = transaction, %{conn: conn} = args) do @transaction.set_action(Appsignal.Plug.extract_action(conn)) {@transaction.start_event(transaction), args} end defp start_event(%Appsignal.Transaction{} = transaction, args) do {@transaction.start_event(transaction), args} end defp start_event(_transaction, _args), do: nil defp finish_event(transaction, name, args) do @transaction.finish_event( transaction, name, name, Map.delete(args, :conn), 0 ) end end end
lib/appsignal/phoenix/instrumenter.ex
0.818193
0.778355
instrumenter.ex
starcoder
defmodule GeoTIFFFormatter do
  @doc ~S"""
  Formats the full set of TIFF headers, including every available IFD.

  ### Examples:

    iex> tag = %{:tag => "Spam", :type => "EGGS", :value => 42, :count => 1}
    iex> ifd = %{:offset => 42, :entries => 42, :next_ifd => 0, :tags => [tag]}
    iex> headers = %{:filename => 'spam', :endianess => :little, :first_ifd_offset => 42, :ifds => [ifd]}
    iex> GeoTIFFFormatter.format_headers headers
    "\n====================================================\nFilename: spam\nEndianess: little\nFirst IFD: 42\n\nAvailable IFDs\n----------------------------------------------------\n Offset: 42\n Entries: 42\n Next IFD: 0\n\n Spam [EGGS]: 42 {count: 1}\n----------------------------------------------------\n\n====================================================\n"
  """
  def format_headers(headers) do
    """

    ====================================================
    Filename: #{headers.filename}
    Endianess: #{headers.endianess}
    First IFD: #{headers.first_ifd_offset}

    Available IFDs
    #{Enum.map headers.ifds, &(format_ifd(&1))}
    ====================================================
    """
  end

  @doc ~S"""
  Formats a single IFD.

  ### Examples:

    iex> tag = %{:tag => "Spam", :type => "EGGS", :value => 42, :count => 1}
    iex> ifd = %{:offset => 42, :entries => 42, :next_ifd => 0, :tags => [tag]}
    iex> GeoTIFFFormatter.format_ifd ifd
    "----------------------------------------------------\n Offset: 42\n Entries: 42\n Next IFD: 0\n\n Spam [EGGS]: 42 {count: 1}\n----------------------------------------------------\n"
  """
  def format_ifd(ifd) do
    """
    ----------------------------------------------------
     Offset: #{ifd.offset}
     Entries: #{ifd.entries}
     Next IFD: #{ifd.next_ifd}

    #{Enum.map(ifd.tags, &(format_tag(&1))) |> Enum.join("\n")}
    ----------------------------------------------------
    """
  end

  @doc ~S"""
  Formats a single TIFF tag.

  ### Examples:

    iex> tag = %{:tag => "Spam", :type => "EGGS", :value => 42, :count => 1}
    iex> GeoTIFFFormatter.format_tag tag
    " Spam [EGGS]: 42 {count: 1}"

    iex> tag = %{:tag => "Spam", :type => "EGGS", :value => [4, 2, 42], :count => 12}
    iex> GeoTIFFFormatter.format_tag tag
    " Spam [EGGS]: [4, 2, 42] {count: 12}"
  """
  def format_tag(tag) do
    cond do
      is_list(tag.value) ->
        " #{tag.tag} [#{tag.type}]: #{"[" <> (Enum.join tag.value, ", ") <> "]"} {count: #{tag.count}}"

      true ->
        " #{tag.tag} [#{tag.type}]: #{tag.value} {count: #{tag.count}}"
    end
  end
end
lib/geotiff_formatter.ex
0.519034
0.433082
geotiff_formatter.ex
starcoder
defmodule NetworkUtils do
  @moduledoc """
  A bunch o' little titbits of code that may (or may not) make the elevator
  lab slightly more survivable.

  To test software on multiple computers, use secure shell (SSH) to access
  another user on the network:

  1. Get all possible computers: "$ nmap -sP 10.100.23.*"
  2. Connect to a computer: "$ ssh username@ip.of.that.comp"
  3. Download the code and start the application
  """

  @doc """
  Returns (hopefully) the ip address of your network interface.

  ## Examples

      iex> NetworkUtils.get_my_ip
      {10, 100, 23, 253}
  """
  def get_my_ip do
    {:ok, socket} = :gen_udp.open(6789, active: false, broadcast: true)
    :ok = :gen_udp.send(socket, {255, 255, 255, 255}, 6789, "test packet")

    case :gen_udp.recv(socket, 100, 1000) do
      {:ok, {ip, _port, _data}} ->
        :gen_udp.close(socket)
        ip

      {:error, reason} ->
        # Close the socket before retrying; otherwise the recursive call
        # would fail to reopen port 6789 with :eaddrinuse.
        :gen_udp.close(socket)
        IO.puts("-- Struggling to get my IP: #{inspect(reason)} --")
        Process.sleep(100)
        get_my_ip()
    end
  end

  @doc """
  Formats an ip address on tuple format to a bytestring.

  ## Examples

      iex> NetworkUtils.ip_to_string {10, 100, 23, 253}
      "10.100.23.253"
  """
  def ip_to_string(ip) do
    :inet.ntoa(ip) |> to_string()
  end

  @doc """
  Returns all nodes in the current cluster. Returns a list of nodes or an
  error tuple.

  ## Examples

      iex> NetworkUtils.all_nodes
      [:'heis@10.100.23.253', :'heis@10.100.23.226']

      iex> NetworkUtils.all_nodes
      {:error, :node_not_running}
  """
  def all_nodes do
    case [Node.self() | Node.list()] do
      [:nonode@nohost] -> {:error, :node_not_running}
      nodes -> nodes
    end
  end

  @doc """
  Boots a node with a specified tick time. `node_name` sets the node name
  before the @. The IP address is automatically imported.

  Returns the full name of the node.

      iex> NetworkUtils.boot_node "frank"
      "frank@10.100.23.253"
      iex(frank@10.100.23.253)> _
  """
  def boot_node(node_name, tick_time \\ 2000) do
    ip = get_my_ip() |> ip_to_string()
    full_name = node_name <> "@" <> ip
    Node.start(String.to_atom(full_name), :longnames, tick_time)
    full_name
  end
end
lib/network_utils.ex
0.68595
0.478468
network_utils.ex
starcoder
defmodule Yacht do @type category :: :ones | :twos | :threes | :fours | :fives | :sixes | :full_house | :four_of_a_kind | :little_straight | :big_straight | :choice | :yacht defp die_frequencies(dice) do Enum.reduce(dice, %{}, fn die, frequencies -> Map.update(frequencies, die, 1, &(&1 + 1)) end) end @doc """ Calculate the score of the list of 5 dice rolls using the given category. """ @spec score(category :: category(), dice :: [integer]) :: integer def score(category, dice) def score(:ones, dice), do: score_number(1, dice) def score(:twos, dice), do: score_number(2, dice) def score(:threes, dice), do: score_number(3, dice) def score(:fours, dice), do: score_number(4, dice) def score(:fives, dice), do: score_number(5, dice) def score(:sixes, dice), do: score_number(6, dice) def score(:full_house, dice) do full_house = die_frequencies(dice) |> Map.values() |> MapSet.new() == MapSet.new([3, 2]) if full_house do Enum.sum(dice) else 0 end end def score(:four_of_a_kind, dice) do frequencies = die_frequencies(dice) |> Enum.to_list() |> Enum.filter(fn {_, frequency} -> frequency >= 4 end) case frequencies do [{number, _frequencies}] -> number * 4 _ -> 0 end end def score(:little_straight, dice) do if MapSet.new(dice) == MapSet.new([1, 2, 3, 4, 5]) do 30 else 0 end end def score(:big_straight, dice) do if MapSet.new(dice) == MapSet.new([2, 3, 4, 5, 6]) do 30 else 0 end end def score(:choice, dice) do Enum.sum(dice) end def score(:yacht, dice) do unique_dice = MapSet.size(MapSet.new(dice)) case unique_dice do 1 -> 50 _ -> 0 end end defp score_number(number, dice) when is_integer(number) do Enum.count(dice, &(&1 == number)) * number end end
exercises/practice/yacht/.meta/example.ex
0.709019
0.458591
example.ex
starcoder
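Some spot checks against the scoring rules above:

```elixir
Yacht.score(:yacht, [5, 5, 5, 5, 5])           # => 50 (all five dice equal)
Yacht.score(:yacht, [5, 5, 5, 5, 4])           # => 0
Yacht.score(:full_house, [2, 2, 3, 3, 3])      # => 13 (sum of all dice)
Yacht.score(:four_of_a_kind, [6, 6, 6, 6, 2])  # => 24 (6 * 4; the fifth die is ignored)
Yacht.score(:little_straight, [1, 2, 3, 4, 5]) # => 30
Yacht.score(:fives, [5, 1, 5, 2, 5])           # => 15 (three fives)
```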
defmodule Optimus.PropertyParsers do def build_parser(_name, :integer) do {:ok, &integer_parser/1} end def build_parser(name, "integer"), do: build_parser(name, :integer) def build_parser(name, ":integer"), do: build_parser(name, :integer) def build_parser(_name, :float) do {:ok, &float_parser/1} end def build_parser(name, "float"), do: build_parser(name, :float) def build_parser(name, ":float"), do: build_parser(name, :float) def build_parser(_name, :string) do {:ok, &string_parser/1} end def build_parser(name, "string"), do: build_parser(name, :string) def build_parser(name, ":string"), do: build_parser(name, :string) def build_parser(_name, nil) do {:ok, &string_parser/1} end def build_parser(_name, fun) when is_function(fun, 1), do: {:ok, fun} def build_parser(name, _), do: {:error, "value of #{inspect(name)} property is expected to be a function of arity 1 or one of the following: :integer, :float, :string or nil"} defp integer_parser(value) when is_binary(value) do case Integer.parse(value) do {n, ""} -> {:ok, n} _ -> {:error, "should be integer"} end end defp float_parser(value) when is_binary(value) do try do case Float.parse(value) do {v, ""} -> {:ok, v} _ -> {:error, "should be float"} end rescue ArgumentError -> {:error, "should be valid float"} end end defp string_parser(value) when is_binary(value), do: {:ok, value} def build_command_name(name, value) when is_binary(value) do if !(value =~ ~r/\s/) do {:ok, value} else {:error, "value of #{inspect(name)} property is expected to be String without spaces"} end end def build_command_name(name, _), do: {:error, "value of #{inspect(name)} property is expected to be String"} def build_string(name, value, default \\ "") def build_string(_name, nil, default), do: {:ok, default} def build_string(_name, value, _default) when is_binary(value), do: {:ok, value} def build_string(name, _value, _default), do: {:error, "value of #{inspect(name)} property is expected to be String or nil"} def build_bool(_name, nil, default), do: {:ok, default} def build_bool(_name, value, _default) when is_boolean(value), do: {:ok, value} def build_bool(name, _value, _default), do: {:error, "value of #{inspect(name)} property is expected to be Boolean or nil"} def build_short(_name, nil), do: {:ok, nil} def build_short(name, value) when is_binary(value) do trimmed_value = String.replace(value, ~r{\A[\-]+}, "") if trimmed_value =~ ~r{\A[A-Za-z]\z} do {:ok, "-" <> trimmed_value} else {:error, "value of #{inspect(name)} property is expected to be \"-X\" or \"X\" where X is a single letter character"} end end def build_short(name, _), do: {:error, "value of #{inspect(name)} property is expected to be String or nil"} def build_long(_name, nil), do: {:ok, nil} def build_long(name, value) when is_binary(value) do trimmed_value = String.replace(value, ~r{\A[\-]+}, "") if trimmed_value =~ ~r{\A[^\s]+\z} do {:ok, "--" <> trimmed_value} else {:error, "value of #{inspect(name)} property is expected to be --XX...X or XX...X where XX...X is a sequence of characters whithout spaces"} end end def build_long(name, _), do: {:error, "value of #{inspect(name)} property is expected to be String or nil"} def build_default(_name, fun) when is_function(fun, 0), do: {:ok, fun.()} def build_default(_name, value), do: {:ok, value} end
lib/optimus/property_parsers.ex
0.584864
0.522689
property_parsers.ex
starcoder
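A quick tour of the parser builders: `build_parser/2` returns a one-argument parsing function, while the short/long builders normalize leading dashes:

```elixir
{:ok, parse} = Optimus.PropertyParsers.build_parser(:count, :integer)
parse.("42")   # => {:ok, 42}
parse.("4.2")  # => {:error, "should be integer"}

Optimus.PropertyParsers.build_short(:verbose, "v")    # => {:ok, "-v"}
Optimus.PropertyParsers.build_short(:verbose, "--v")  # => {:ok, "-v"} (extra dashes stripped)
Optimus.PropertyParsers.build_long(:verbose, "verbose")
# => {:ok, "--verbose"}
Optimus.PropertyParsers.build_command_name(:cmd, "run it")
# => {:error, "value of :cmd property is expected to be String without spaces"}
```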
defmodule Gradient.ElixirChecker do @moduledoc ~s""" Provide checks specific to Elixir that complement type checking delivered by Gradient. Options: - {`ex_check`, boolean()}: whether to use checks specific only to Elixir. """ @spec check([:erl_parse.abstract_form()], keyword()) :: [{:file.filename(), any()}] def check(forms, opts) do if Keyword.get(opts, :ex_check, true) do check_spec(forms) else [] end end @doc ~s""" Check if all specs are exactly before the function that they specify and if there is only one spec per function clause. Correct spec locations: ``` @spec convert(integer()) :: float() def convert(int) when is_integer(int), do: int / 1 @spec convert(atom()) :: binary() def convert(atom) when is_atom(atom), do: to_string(atom) ``` Incorrect spec locations: - More than one spec above function clause. ``` @spec convert(integer()) :: float() @spec convert(atom()) :: binary() def convert(int) when is_integer(int), do: int / 1 def convert(atom) when is_atom(atom), do: to_string(atom) ``` - Spec name doesn't match the function name. ``` @spec last_two(atom()) :: atom() def last_three(:ok) do :ok end ``` """ @spec check_spec([:erl_parse.abstract_form()]) :: [{:file.filename(), any()}] def check_spec([{:attribute, _, :file, {file, _}} | forms]) do forms |> Stream.filter(&is_fun_or_spec?/1) |> Stream.map(&simplify_form/1) |> Stream.concat() |> Stream.filter(&has_line/1) |> Enum.sort(&(elem(&1, 2) < elem(&2, 2))) |> Enum.reduce({nil, []}, fn {:fun, fna, _} = fun, {{:spec, {n, a} = sna, anno}, errors} when fna != sna -> # Spec name doesn't match the function name {fun, [{:spec_error, :wrong_spec_name, anno, n, a} | errors]} {:spec, {n, a}, anno} = s1, {{:spec, _, _}, errors} -> # Only one spec per function clause is allowed {s1, [{:spec_error, :spec_after_spec, anno, n, a} | errors]} x, {_, errors} -> {x, errors} end) |> elem(1) |> Enum.map(&{file, &1}) end # Filter out __info__ generated function def has_line(form), do: :erl_anno.line(elem(form, 2)) > 1 def is_fun_or_spec?({:attribute, _, :spec, _}), do: true def is_fun_or_spec?({:function, _, _, _, _}), do: true def is_fun_or_spec?(_), do: false @spec simplify_form(:erl_parse.abstract_form()) :: Enumerable.t({:spec | :fun, {atom(), integer()}, :erl_anno.anno()}) def simplify_form({:attribute, _, :spec, {{name, arity}, types}}) do Stream.map(types, &{:spec, {name, arity}, elem(&1, 1)}) end def simplify_form({:function, _, name, arity, clauses}) do Stream.map(clauses, &{:fun, {name, arity}, elem(&1, 1)}) end end
lib/gradient/elixir_checker.ex
0.829492
0.86988
elixir_checker.ex
starcoder
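To see the spec check fire, `check/2` can be fed hand-built Erlang abstract forms in which the spec name does not match the function that follows it. The forms and file name below are fabricated, minimal stand-ins for real compiler output:

```elixir
forms = [
  {:attribute, 1, :file, {'lib/m.ex', 1}},
  # @spec a(...) on line 2 ...
  {:attribute, 2, :spec, {{:a, 0}, [{:type, 2, :fun, []}]}},
  # ... immediately followed by def b/0 on line 3: the names disagree.
  {:function, 3, :b, 0, [{:clause, 3, [], [], []}]}
]

Gradient.ElixirChecker.check(forms, ex_check: true)
# => [{'lib/m.ex', {:spec_error, :wrong_spec_name, 2, :a, 0}}]
```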
defmodule D03.Challenge do @moduledoc false require Logger def run(1) do columns = Utils.read_input(3, &String.codepoints/1) |> transpose column_length = length(Enum.at(columns, 0)) result = columns |> Enum.map(fn column -> find_gamma_and_epsilon_bit(column, column_length / 2) end) |> Enum.zip_with(&Enum.join/1) |> Enum.map(&String.to_integer(&1, 2)) Logger.info( "Gamma is #{Enum.at(result, 0)}, epsilon is #{Enum.at(result, 1)}. Result is #{Enum.product(result)}" ) end def run(2) do rows = Utils.read_input(3, &String.codepoints/1) oxygen_bit_amount_func = &Enum.max/1 oxygen_bit_equal_func = fn counted_bits, size -> if Enum.at(counted_bits, 1) == size, do: "1", else: "0" end co2_bit_amount_func = &Enum.min/1 co2_bit_equal_func = fn counted_bits, size -> if Enum.at(counted_bits, 0) == size, do: "0", else: "1" end oxygen = find_life_support(rows, 0, oxygen_bit_amount_func, oxygen_bit_equal_func) co2 = find_life_support(rows, 0, co2_bit_amount_func, co2_bit_equal_func) Logger.info("Oxygen is #{oxygen}, CO2 is #{co2}. Result is #{oxygen * co2}") end defp find_gamma_and_epsilon_bit(bit_column, column_length) do # Check if Zeroes or the most common bit zeroes = Enum.count(bit_column, fn x -> x == "0" end) # Return 0 for gamma, 1 for epsilon if there are more zeroes than ones if zeroes > column_length do [0, 1] else # Otherwise [1, 0] end end # Final step in recursion defp find_life_support(rows, _, _, _) when length(rows) == 1 do rows |> Enum.join() |> String.to_integer(2) end defp find_life_support(rows, column_index, bit_amount_func, bit_equal_func) do columns = transpose(rows) # Count bits in the current column counted_bits = count_bits(Enum.at(columns, column_index), length(rows)) # Find the most / least common bit size = bit_amount_func.(counted_bits) # For what bit are we looking? 1 or 0? bit = bit_equal_func.(counted_bits, size) rows |> Enum.filter(fn row -> Enum.at(row, column_index) == bit end) |> Enum.take(size) |> find_life_support(column_index + 1, bit_amount_func, bit_equal_func) end defp count_bits(bit_column, rows_length) do zeroes = Enum.count(bit_column, fn x -> x == "0" end) [zeroes, rows_length - zeroes] end defp transpose(rows) do rows |> Enum.zip() |> Enum.map(&Tuple.to_list/1) end end
lib/d03/challenge.ex
0.737536
0.459986
challenge.ex
starcoder
defmodule ExQuickBooks do @moduledoc """ API client for QuickBooks Online. ## Configuration You can configure the application through `Mix.Config`: ``` config :exquickbooks, consumer_key: "key", consumer_secret: "secret", use_production_api: true ``` ### Accepted configuration keys #### `:consumer_key`, `:consumer_secret` Required. OAuth consumer credentials which you can get for your application at <https://developer.intuit.com/getstarted>. Please note that there are different credentials for the sandbox and production APIs. #### `:use_production_api` Optional, `false` by default. Set to `false` to use the QuickBooks Sandbox, `true` to connect to the production APIs. ### Reading environment variables If you store configuration in the system’s environment variables, you can have ExQuickBooks read them at runtime: ``` config :exquickbooks, consumer_key: {:system, "EXQUICKBOOKS_KEY"}, consumer_secret: {:system, "EXQUICKBOOKS_SECRET"} ``` This syntax works for binary and boolean values. Booleans are parsed from `"true"` and `"false"`, otherwise the binary is used as is. """ @backend_config :backend @credential_config [:consumer_key, :consumer_secret] @use_production_api_config :use_production_api @minorversion :minorversion @default_minorversion 12 # Returns the Intuit OAuth API URL. @doc false def oauth_api do "https://oauth.intuit.com/oauth/v1/" end # Returns the QuickBooks Accounting API URL. @doc false def accounting_api do if get_env(@use_production_api_config, false) do "https://quickbooks.api.intuit.com/v3/" else "https://sandbox-quickbooks.api.intuit.com/v3/" end end # Returns the configured HTTP backend. @doc false def backend do case get_env(@backend_config) do backend when is_atom(backend) -> backend nil -> raise_missing(@backend_config) other -> raise_invalid(@backend_config, other) end end # Returns the configured OAuth credentials. @doc false def credentials do for k <- @credential_config, into: %{} do case get_env(k) do v when is_binary(v) -> {k, v} nil -> raise_missing(k) v -> raise_invalid(k, v) end end end def minorversion do get_env(@minorversion, @default_minorversion) end # Returns a value from the application's environment. If the value matches # the {:system, ...} syntax, a system environment variable will be retrieved # instead and parsed. defp get_env(key, default \\ nil) do with {:system, var} <- Application.get_env(:exquickbooks, key, default) do case System.get_env(var) do "true" -> true "false" -> false value -> value end end end defp raise_missing(key) do raise ArgumentError, message: """ ExQuickBooks config #{inspect(key)} is required. """ end defp raise_invalid(key, value) do raise ArgumentError, message: """ ExQuickBooks config #{inspect(key)} is invalid, got: #{inspect(value)} """ end end
lib/exquickbooks.ex
0.806815
0.740808
exquickbooks.ex
starcoder
defmodule Cldr.Date do @moduledoc """ Provides localization and formatting of a `Date` struct or any map with the keys `:year`, `:month`, `:day` and `:calendar`. `Cldr.Date` provides support for the built-in calendar `Calendar.ISO` or any calendars defined with [ex_cldr_calendars](https://hex.pm/packages/ex_cldr_calendars) CLDR provides standard format strings for `Date` which are reresented by the names `:short`, `:medium`, `:long` and `:full`. This allows for locale-independent formatting since each locale may define the underlying format string as appropriate. """ alias Cldr.DateTime.Format alias Cldr.LanguageTag @format_types [:short, :medium, :long, :full] defmodule Formats do @moduledoc false defstruct Module.get_attribute(Cldr.Date, :format_types) end @doc """ Formats a date according to a format string as defined in CLDR and described in [TR35](http://unicode.org/reports/tr35/tr35-dates.html) ## Arguments * `date` is a `%Date{}` struct or any map that contains the keys `year`, `month`, `day` and `calendar` * `backend` is any module that includes `use Cldr` and therefore is a `Cldr` backend module. The default is `Cldr.default_backend/0`. * `options` is a keyword list of options for formatting. The valid options are: ## Options * `format:` `:short` | `:medium` | `:long` | `:full` or a format string. The default is `:medium` * `locale:` any locale returned by `Cldr.known_locale_names/1`. The default is `Cldr.get_locale()`. * `number_system:` a number system into which the formatted date digits should be transliterated ## Returns * `{:ok, formatted_string}` or * `{:error, reason}` ## Examples iex> Cldr.Date.to_string ~D[2017-07-10], MyApp.Cldr, format: :medium, locale: "en" {:ok, "Jul 10, 2017"} iex> Cldr.Date.to_string ~D[2017-07-10], MyApp.Cldr, locale: "en" {:ok, "Jul 10, 2017"} iex> Cldr.Date.to_string ~D[2017-07-10], MyApp.Cldr, format: :full, locale: "en" {:ok, "Monday, July 10, 2017"} iex> Cldr.Date.to_string ~D[2017-07-10], MyApp.Cldr, format: :short, locale: "en" {:ok, "7/10/17"} iex> Cldr.Date.to_string ~D[2017-07-10], MyApp.Cldr, format: :short, locale: "fr" {:ok, "10/07/2017"} iex> Cldr.Date.to_string ~D[2017-07-10], MyApp.Cldr, format: :long, locale: "af" {:ok, "10 Julie 2017"} """ @spec to_string(map, Cldr.backend() | Keyword.t(), Keyword.t()) :: {:ok, String.t()} | {:error, {module, String.t()}} def to_string(date, backend \\ Cldr.default_backend(), options \\ []) def to_string(%{calendar: Calendar.ISO} = date, backend, options) do %{date | calendar: Cldr.Calendar.Gregorian} |> to_string(backend, options) end def to_string(date, options, []) when is_list(options) do to_string(date, Cldr.default_backend(), options) end def to_string(%{calendar: calendar} = date, backend, options) do options = Keyword.merge(default_options(backend), options) format_backend = Module.concat(backend, DateTime.Formatter) with {:ok, locale} <- Cldr.validate_locale(options[:locale], backend), {:ok, cldr_calendar} <- Cldr.DateTime.type_from_calendar(calendar), {:ok, format_string} <- format_string(options[:format], locale, cldr_calendar, backend), {:ok, formatted} <- format_backend.format(date, format_string, locale, options) do {:ok, formatted} else {:error, reason} -> {:error, reason} end rescue e in [Cldr.DateTime.UnresolvedFormat] -> {:error, {e.__struct__, e.message}} end def to_string(date, _backend, _options) do error_return(date, [:year, :month, :day, :calendar]) end defp default_options(backend) do [format: :medium, locale: Cldr.get_locale(backend), number_system: :default] end @doc """ 
Formats a date according to a format string as defined in CLDR and described in [TR35](http://unicode.org/reports/tr35/tr35-dates.html) ## Arguments * `date` is a `%Date{}` struct or any map that contains the keys `year`, `month`, `day` and `calendar` * `backend` is any module that includes `use Cldr` and therefore is a `Cldr` backend module. The default is `Cldr.default_backend/0`. * `options` is a keyword list of options for formatting. ## Options * `format:` `:short` | `:medium` | `:long` | `:full` or a format string. The default is `:medium` * `locale` is any valid locale name returned by `Cldr.known_locale_names/0` or a `Cldr.LanguageTag` struct. The default is `Cldr.get_locale/0` * `number_system:` a number system into which the formatted date digits should be transliterated ## Returns * `formatted_date` or * raises an exception. ## Examples iex> Cldr.Date.to_string! ~D[2017-07-10], MyApp.Cldr, format: :medium, locale: "en" "Jul 10, 2017" iex> Cldr.Date.to_string! ~D[2017-07-10], MyApp.Cldr, locale: "en" "Jul 10, 2017" iex> Cldr.Date.to_string! ~D[2017-07-10], MyApp.Cldr, format: :full,locale: "en" "Monday, July 10, 2017" iex> Cldr.Date.to_string! ~D[2017-07-10], MyApp.Cldr, format: :short, locale: "en" "7/10/17" iex> Cldr.Date.to_string! ~D[2017-07-10], MyApp.Cldr, format: :short, locale: "fr" "10/07/2017" iex> Cldr.Date.to_string! ~D[2017-07-10], MyApp.Cldr, format: :long, locale: "af" "10 Julie 2017" """ @spec to_string!(map, Cldr.backend() | Keyword.t(), Keyword.t()) :: String.t() | no_return def to_string!(date, backend \\ Cldr.default_backend(), options \\ []) def to_string!(date, backend, options) do case to_string(date, backend, options) do {:ok, string} -> string {:error, {exception, message}} -> raise exception, message end end defp format_string(format, %LanguageTag{cldr_locale_name: locale_name}, calendar, backend) when format in @format_types do with {:ok, date_formats} <- Format.date_formats(locale_name, calendar, backend) do {:ok, Map.get(date_formats, format)} end end defp format_string(%{number_system: number_system, format: format}, locale, calendar, backend) do {:ok, format_string} = format_string(format, locale, calendar, backend) {:ok, %{number_system: number_system, format: format_string}} end defp format_string(format, _locale, _calendar, _backend) when is_atom(format) do {:error, {Cldr.InvalidDateFormatType, "Invalid date format type. " <> "The valid types are #{inspect(@format_types)}."}} end defp format_string(format_string, _locale, _calendar, _backend) when is_binary(format_string) do {:ok, format_string} end defp error_return(map, requirements) do requirements = requirements |> Enum.map(&inspect/1) |> Cldr.DateTime.Formatter.join_requirements() {:error, {ArgumentError, "Invalid date. Date is a map that contains at least #{requirements}. " <> "Found: #{inspect(map)}"}} end end
lib/cldr/date.ex
0.926703
0.704268
date.ex
starcoder
defmodule RandomColor do @moduledoc "README.md" |> File.read!() |> String.split("<!-- MDOC !-->") |> Enum.fetch!(1) alias RandomColor.Color ## Dictionary @color_dictionary [ monochrome: Color.monochrome(), red: Color.red(), orange: Color.orange(), yellow: Color.yellow(), green: Color.green(), blue: Color.blue(), purple: Color.purple(), pink: Color.pink() ] @doc false def color_dictionary do @color_dictionary end ## Interface @doc """ Generator a random color ## Options * `hue:` - `:monochrome`, `:red`, `:orange`, `:yellow`, `:green`, `:blue`, `:purple`, `:pink` * `luminosity:` - `:dark`, `:bright`, `:light`, `:random` * `format:` - `:string` (default), `:tuple` ## Output Format * `string` - `"rgb(221, 186, 95)"` * `tuple` - `{221, 186, 95}` ## Examples iex> RandomColor.rgb(hue: :red, luminosity: :light) """ def rgb(opts \\ []) do format = Keyword.get(opts, :format, :string) opts |> random_color() |> hsv_to_rgb() |> format(:rgb, format) end @doc """ Generator a random color ## Options * `hue:` - `:monochrome`, `:red`, `:orange`, `:yellow`, `:green`, `:blue`, `:purple`, `:pink` * `luminosity:` - `:dark`, `:bright`, `:light`, `:random` * `format:` - `:string` (default), `:tuple` `alpha` a value between `0.0` and `1.0` ## Output Format * `string` - `"rgba(221, 186, 95, 0.1)"` * `tuple` - `{221, 186, 95, 0.1}` ## Examples iex> RandomColor.rgba([hue: :purple], 0.8) """ def rgba(opts \\ [], alpha \\ nil) do format = Keyword.get(opts, :format, :string) hsv = random_color(opts) alpha = if alpha do alpha else Enum.random(0..10) / 10 end hsv |> hsv_to_rgb() |> Tuple.append(alpha) |> format(:rgba, format) end @doc """ Generator a random color ## Options * `hue:` - `:monochrome`, `:red`, `:orange`, `:yellow`, `:green`, `:blue`, `:purple`, `:pink` * `luminosity:` - `:dark`, `:bright`, `:light`, `:random` ## Output Format * `string` - `"#13B592"` ## Examples iex> RandomColor.hex(hue: :blue) """ def hex(opts \\ []) do opts |> random_color() |> hsv_to_hex() end @doc """ Generator a random color ## Options * `hue:` - `:monochrome`, `:red`, `:orange`, `:yellow`, `:green`, `:blue`, `:purple`, `:pink` * `luminosity:` - `:dark`, `:bright`, `:light`, `:random` * `format:` - `:string` (default), `:tuple` ## Output Format * `string` - `"hsl(139, 82.32%, 71.725%)"` * `tuple` - `{139, 82.32, 71.725}` ## Examples iex> RandomColor.hsl(hue: :yellow, luminosity: :light) """ def hsl(opts \\ []) do format = Keyword.get(opts, :format, :string) opts |> random_color() |> hsv_to_hsl() |> format(:hsl, format) end @doc """ Generator a random color ## Options * `hue:` - `:monochrome`, `:red`, `:orange`, `:yellow`, `:green`, `:blue`, `:purple`, `:pink` * `luminosity:` - `:dark`, `:bright`, `:light`, `:random` * `format:` - `:string` (default), `:tuple` `alpha` a value between `0.0` and `1.0` ## Output Format * `string` - `"hsl(139, 82.32%, 71.725%)"` * `tuple` - `{139, 82.32, 71.725}` ## Examples iex> RandomColor.hsl(hue: :yellow, luminosity: :light) """ def hsla(opts \\ [], alpha \\ nil) do format = Keyword.get(opts, :format, :string) hsv = random_color(opts) alpha = if alpha do alpha else Enum.random(0..10) / 10 end hsv |> hsv_to_hsl() |> Tuple.append(alpha) |> format(:hsla, format) end ## Implementation defp random_color(opts) do # FIXME(ts): handle seed opt h = pick_hue(opts) s = pick_saturation(h, opts) v = pick_brightness(h, s, opts) {h, s, v} end defp pick_hue(opts) do hue_opt = Keyword.get(opts, :hue) hue_range = Color.get_color_hue_range(hue_opt) hue = Enum.random(hue_range) if hue < 0 do 360 + hue else hue end end defp 
pick_saturation(h, opts) do cond do opts[:hue] === :monochrome -> 0 opts[:luminosity] === :random -> Enum.random(0..100) true -> color_info = Color.get_color_info(h) saturation_range = color_info.saturation_range s_min..s_max = saturation_range saturation_range = cond do opts[:luminosity] == :bright -> 55..s_max opts[:luminosity] == :dark -> (s_max - 10)..s_max opts[:luminosity] == :light -> s_min..55 true -> saturation_range end Enum.random(saturation_range) end end defp pick_brightness(h, s, opts) do b_min = floor(get_min_brightness(h, s)) b_max = 100 brightness_range = cond do opts[:luminosity] == :random -> 0..100 opts[:luminosity] == :dark -> b_min..(b_min + 20) opts[:luminosity] == :light -> round((b_max + b_min) / 2)..b_max true -> b_min..b_max end Enum.random(brightness_range) end defp hsv_to_hex(hsv) do hsv |> hsv_to_rgb() |> Tuple.to_list() |> Enum.reduce("#", fn v, acc -> acc <> (v |> Integer.to_string(16) |> String.pad_leading(2, "0")) end) end defp hsv_to_hsl({h, s, v}) do s = s / 100 v = v / 100 k = (2 - s) * v { h, round( s * v / if k < 1 do k else 2 - k end * 10000 ) / 100, k / 2 * 100 } end defp hsv_to_rgb({h, s, v}) do h = case h do 0 -> 1 360 -> 359 _ -> h end {h, s, v} = {h / 360, s / 100, v / 100} h_i = floor(h * 6) f = h * 6 - h_i p = v * (1 - s) q = v * (1 - f * s) t = v * (1 - (1 - f) * s) {r, g, b} = case h_i do 0 -> {v, t, p} 1 -> {q, t, p} 2 -> {p, v, t} 3 -> {p, q, v} 4 -> {t, p, v} 5 -> {v, p, q} end {floor(r * 255), floor(g * 255), floor(b * 255)} end defp get_min_brightness(h, s) do color_info = Color.get_color_info(h) lower_bounds = color_info.lower_bounds lower_bounds |> Enum.with_index() |> Enum.reduce_while(0, fn {lb, index}, acc -> next = Enum.at(lower_bounds, index + 1) if next do s1..v1 = lb s2..v2 = next if s >= s1 and s <= s2 do m = v2 / v1 / (s2 - s1) b = v1 - m * s1 {:halt, m * s + b} else {:cont, acc} end else {:halt, 0} end end) end ## Formatting defp format({r, g, b}, :rgb, :string), do: "rgb(#{r}, #{g}, #{b})" defp format(rgb, :rgb, :tuple), do: rgb defp format({r, g, b, a}, :rgba, :string), do: "rgba(#{r}, #{g}, #{b}, #{a})" defp format(rgba, :rgba, :tuple), do: rgba defp format({h, s, l}, :hsl, :string), do: "hsl(#{h}, #{s}%, #{l}%)" defp format(hsl, :hsl, :tuple), do: hsl defp format({h, s, l, a}, :hsla, :string), do: "hsla(#{h}, #{s}%, #{l}%, #{a})" defp format(hsla, :hsla, :tuple), do: hsla end
lib/random_color.ex
0.878679
0.537891
random_color.ex
starcoder
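Typical calls; outputs are random within the constrained hue/luminosity ranges, so the values below are only representative:

```elixir
RandomColor.hex(hue: :blue)                      # e.g. "#2C8DD6"
RandomColor.rgb(hue: :green, luminosity: :dark)  # e.g. "rgb(26, 84, 34)"
RandomColor.rgba([hue: :purple], 0.8)            # e.g. "rgba(130, 60, 180, 0.8)"
RandomColor.hsl(hue: :yellow, format: :tuple)    # e.g. {52, 88.2, 35.7}
```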
defmodule GoodTimes.Generate do @vsn GoodTimes.version @moduledoc """ Generate streams of datetimes. Generate a stream of datetimes, starting with the input datetime and stepping forward or backward by some time unit. All functions operate on an Erlang datetime, and returns a `Stream` of datetime elements. There are functions stepping a second, minute, hour, week, day, month or year at a time. Step forward with `all_<unit>_after/1`, or backward with `all_<unit>_before/1`. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_days_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 28}, {18, 30, 45}}, {{2015, 3, 1}, {18, 30, 45}}] """ @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one second at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_seconds_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 27}, {18, 30, 46}}, {{2015, 2, 27}, {18, 30, 47}}] """ @spec all_seconds_after(GoodTimes.datetime) :: Enumerable.t def all_seconds_after(datetime) do datetime |> Stream.iterate(&GoodTimes.a_second_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one second at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_seconds_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 27}, {18, 30, 44}}, {{2015, 2, 27}, {18, 30, 43}}] """ @spec all_seconds_before(GoodTimes.datetime) :: Enumerable.t def all_seconds_before(datetime) do datetime |> Stream.iterate(&GoodTimes.a_second_before/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one minute at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_minutes_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 27}, {18, 31, 45}}, {{2015, 2, 27}, {18, 32, 45}}] """ @spec all_minutes_after(GoodTimes.datetime) :: Enumerable.t def all_minutes_after(datetime) do datetime |> Stream.iterate(&GoodTimes.a_minute_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one minute at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_minutes_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 27}, {18, 29, 45}}, {{2015, 2, 27}, {18, 28, 45}}] """ @spec all_minutes_before(GoodTimes.datetime) :: Enumerable.t def all_minutes_before(datetime) do datetime |> Stream.iterate(&GoodTimes.a_minute_before/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one hour at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_hours_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 27}, {19, 30, 45}}, {{2015, 2, 27}, {20, 30, 45}}] """ @spec all_hours_after(GoodTimes.datetime) :: Enumerable.t def all_hours_after(datetime) do datetime |> Stream.iterate(&GoodTimes.an_hour_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one hour at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_hours_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 27}, {17, 30, 45}}, {{2015, 2, 27}, {16, 30, 45}}] """ @spec all_hours_before(GoodTimes.datetime) :: Enumerable.t def all_hours_before(datetime) do datetime |> Stream.iterate(&GoodTimes.an_hour_before/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one day at a time. 
## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_days_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 28}, {18, 30, 45}}, {{2015, 3, 1}, {18, 30, 45}}] """ @spec all_days_after(GoodTimes.datetime) :: Enumerable.t def all_days_after(datetime) do datetime |> Stream.iterate(&GoodTimes.a_day_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one day at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_days_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 26}, {18, 30, 45}}, {{2015, 2, 25}, {18, 30, 45}}] """ @spec all_days_before(GoodTimes.datetime) :: Enumerable.t def all_days_before(datetime) do datetime |> Stream.iterate(&GoodTimes.a_day_before/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one week at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_weeks_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 3, 6}, {18, 30, 45}}, {{2015, 3, 13}, {18, 30, 45}}] """ @spec all_weeks_after(GoodTimes.datetime) :: Enumerable.t def all_weeks_after(datetime) do datetime |> Stream.iterate(&GoodTimes.a_week_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one week at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_weeks_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 2, 20}, {18, 30, 45}}, {{2015, 2, 13}, {18, 30, 45}}] """ @spec all_weeks_before(GoodTimes.datetime) :: Enumerable.t def all_weeks_before(datetime) do datetime |> Stream.iterate(&GoodTimes.a_week_before/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one month at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_months_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 3, 27}, {18, 30, 45}}, {{2015, 4, 27}, {18, 30, 45}}] """ @spec all_months_after(GoodTimes.datetime) :: Enumerable.t def all_months_after(datetime) do datetime |> Stream.iterate(&GoodTimes.a_month_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one month at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_months_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 1, 27}, {18, 30, 45}}, {{2014, 12, 27}, {18, 30, 45}}] """ @spec all_months_before(GoodTimes.datetime) :: Enumerable.t def all_months_before(datetime) do datetime |> Stream.iterate(&GoodTimes.a_month_before/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping forward one year at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_years_after |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2016, 2, 27}, {18, 30, 45}}, {{2017, 2, 27}, {18, 30, 45}}] """ @spec all_years_after(GoodTimes.datetime) :: Enumerable.t def all_years_after(datetime) do datetime |> Stream.iterate(&GoodTimes.a_year_after/1) end @doc """ Returns a `Stream` of datetimes, starting with `datetime`, stepping backward one year at a time. ## Examples iex> {{2015, 2, 27}, {18, 30, 45}} |> all_years_before |> Enum.take(3) [{{2015, 2, 27}, {18, 30, 45}}, {{2014, 2, 27}, {18, 30, 45}}, {{2013, 2, 27}, {18, 30, 45}}] """ @spec all_years_before(GoodTimes.datetime) :: Enumerable.t def all_years_before(datetime) do datetime |> Stream.iterate(&GoodTimes.a_year_before/1) end end
lib/good_times/generate.ex
0.941601
0.742958
generate.ex
starcoder
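Because every generator returns a lazy `Stream`, the output composes with other `Stream` functions before anything is materialized. Two small examples (dates checked by hand; 2015-03-02 was a Monday):

```elixir
# Step a week at a time from a Monday: the next three Mondays.
{{2015, 3, 2}, {9, 0, 0}}
|> GoodTimes.Generate.all_weeks_after()
|> Enum.take(3)
# => [{{2015, 3, 2}, {9, 0, 0}}, {{2015, 3, 9}, {9, 0, 0}}, {{2015, 3, 16}, {9, 0, 0}}]

# Every other day, lazily: nothing is computed until Enum.take/2 runs.
{{2015, 2, 27}, {18, 30, 45}}
|> GoodTimes.Generate.all_days_after()
|> Stream.take_every(2)
|> Enum.take(2)
# => [{{2015, 2, 27}, {18, 30, 45}}, {{2015, 3, 1}, {18, 30, 45}}]
```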
defmodule Trans do @moduledoc ~S""" `Trans` provides a way to manage and query translations embedded into schemas and removes the necessity of maintaining extra tables only for translation storage. ## What does this package do? `Trans` allows you to store translations for a struct embedded into a field of that struct itself. `Trans` is split into two main components: - `Trans.Translator` - allows to easily access translated values from structs and automatically fallbacks to the default value when the translation does not exist in the required locale. - `Trans.QueryBuilder` - adds conditions to `Ecto.Query` for filtering values of translated fields. This module will be available only if `Ecto.SQL` is available. `Trans` shines when paired with an `Ecto.Schema`. It allows you to keep the translations into a field of the schema and avoids requiring extra tables for translation storage and complex _joins_ when retrieving translations from the database. ## What does this module do? This module provides the required metadata for `Trans.Translator` and `Trans.QueryBuilder` modules. You can use `Trans` in your module like in the following example (usage of `Ecto.Schema` and schema declaration are optional): defmodule Article do use Ecto.Schema use Trans, translates: [:title, :body] schema "articles" do field :title, :string field :body, :text field :translations, :map end end When used, `Trans` will define a `__trans__` function that can be used for runtime introspection of the translation metadata. - `__trans__(:fields)` - Returns the list of translatable fields. Fields declared as translatable must be present in the module's schema or struct declaration. - `__trans__(:container)` - Returns the name of the _translation container_ field. To learn more about the _translation container_ field see the following section. ## The translation container By default, `Trans` stores and looks for translations in a field named `translations`. This field is known as the translations container. If you need to use a different field name for storing translations, you can specify it when using `Trans` from your module. In the following example, `Trans` will store and look for translations in the field `locales`. defmodule Article do use Ecto.Schema use Trans, translates: [:title, :body], container: :locales schema "articles" do field :title, :string field :body, :text field :locales, :map end end """ defmacro __using__(opts) do quote do Module.put_attribute(__MODULE__, :trans_fields, unquote(translatable_fields(opts))) Module.put_attribute(__MODULE__, :trans_container, unquote(translation_container(opts))) @after_compile {Trans, :__validate_translatable_fields__} @after_compile {Trans, :__validate_translation_container__} @spec __trans__(:fields) :: list(atom) def __trans__(:fields), do: @trans_fields @spec __trans__(:container) :: atom def __trans__(:container), do: @trans_container end end @doc """ Checks whether the given field is translatable or not. **Important:** This function will raise an error if the given module does not use `Trans`. 
## Usage example

  Imagine that we have an _Article_ schema declared as follows:

      defmodule Article do
        use Ecto.Schema
        use Trans, translates: [:title, :body]

        schema "articles" do
          field :title, :string
          field :body, :string
          field :translations, :map
        end
      end

  If we want to know whether a certain field is translatable or not we can use
  this function as follows (we can also pass a struct instead of the module
  name itself):

      iex> Trans.translatable?(Article, :title)
      true

      iex> Trans.translatable?(%Article{}, :not_existing)
      false

  """
  @spec translatable?(module | struct, String.t() | atom) :: boolean
  def translatable?(%{__struct__: module}, field), do: translatable?(module, field)

  def translatable?(module_or_struct, field)
      when is_atom(module_or_struct) and is_binary(field) do
    translatable?(module_or_struct, String.to_atom(field))
  end

  def translatable?(module_or_struct, field)
      when is_atom(module_or_struct) and is_atom(field) do
    if Keyword.has_key?(module_or_struct.__info__(:functions), :__trans__) do
      Enum.member?(module_or_struct.__trans__(:fields), field)
    else
      raise "#{module_or_struct} must use `Trans` in order to be translated"
    end
  end

  @doc false
  def __validate_translatable_fields__(%{module: module}, _bytecode) do
    struct_fields =
      module.__struct__()
      |> Map.keys()
      |> MapSet.new()

    translatable_fields =
      :fields
      |> module.__trans__
      |> MapSet.new()

    invalid_fields = MapSet.difference(translatable_fields, struct_fields)

    case MapSet.size(invalid_fields) do
      0 ->
        nil

      1 ->
        raise ArgumentError,
          message:
            "#{module} declares '#{MapSet.to_list(invalid_fields)}' as translatable but it is not defined in the module's struct"

      _ ->
        raise ArgumentError,
          message:
            "#{module} declares '#{MapSet.to_list(invalid_fields)}' as translatable but they are not defined in the module's struct"
    end
  end

  @doc false
  def __validate_translation_container__(%{module: module}, _bytecode) do
    container = module.__trans__(:container)

    unless Enum.member?(Map.keys(module.__struct__()), container) do
      raise ArgumentError,
        message:
          "The field #{container} used as the translation container is not defined in #{module} struct"
    end
  end

  defp translatable_fields(opts) do
    case Keyword.fetch(opts, :translates) do
      {:ok, fields} when is_list(fields) ->
        fields

      _ ->
        raise ArgumentError,
          message:
            "Trans requires a 'translates' option that contains the list of translatable field names"
    end
  end

  defp translation_container(opts) do
    case Keyword.fetch(opts, :container) do
      :error -> :translations
      {:ok, container} -> container
    end
  end
end
lib/trans.ex
0.876859
0.677037
trans.ex
starcoder
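Given the `Article` module from the docs above, the metadata that `use Trans` generates can be inspected at runtime:

```elixir
Article.__trans__(:fields)     # => [:title, :body]
Article.__trans__(:container)  # => :translations

Trans.translatable?(Article, :title)     # => true
Trans.translatable?(%Article{}, "body")  # => true (string field names are accepted too)
```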
defmodule Membrane.H264.FFmpeg.Decoder do @moduledoc """ Membrane element that decodes video in H264 format. It is backed by decoder from FFmpeg. The element expects the data for each frame (Access Unit) to be received in a separate buffer, so the parser (`Membrane.H264.FFmpeg.Parser`) may be required in a pipeline before decoder (e.g. when input is read from `Membrane.File.Source`). """ use Membrane.Filter alias __MODULE__.Native alias Membrane.Buffer alias Membrane.Caps.Video.{H264, Raw} alias Membrane.H264.FFmpeg.Common require Membrane.Logger def_input_pad :input, demand_unit: :buffers, caps: {H264, stream_format: :byte_stream, alignment: :au} def_output_pad :output, caps: {Raw, format: one_of([:I420, :I422]), aligned: true} @impl true def handle_init(_opts) do state = %{decoder_ref: nil, caps_changed: false} {:ok, state} end @impl true def handle_stopped_to_prepared(_ctx, state) do with {:ok, decoder_ref} <- Native.create() do {:ok, %{state | decoder_ref: decoder_ref}} else {:error, reason} -> {{:error, reason}, state} end end @impl true def handle_demand(:output, size, :buffers, _ctx, state) do {{:ok, demand: {:input, size}}, state} end @impl true def handle_process(:input, %Buffer{metadata: metadata, payload: payload}, ctx, state) do %{decoder_ref: decoder_ref} = state dts = metadata[:dts] || 0 with {:ok, pts_list_h264_base, frames} <- Native.decode(payload, Common.to_h264_time_base(dts), decoder_ref), bufs = wrap_frames(pts_list_h264_base, frames), in_caps = ctx.pads.input.caps do {caps, state} = update_caps_if_needed(state, in_caps) # redemand actually makes sense only for the first call (because decoder keeps 2 frames buffered) # but it is noop otherwise, so there is no point in implementing special logic for that case actions = Enum.concat([caps, bufs, [redemand: :output]]) {{:ok, actions}, state} else {:error, reason} -> {{:error, reason}, state} end end @impl true def handle_caps(:input, _caps, _ctx, state) do # only redeclaring decoder - new caps will be generated in handle_process, after decoding key_frame with {:ok, decoder_ref} <- Native.create() do {{:ok, redemand: :output}, %{state | decoder_ref: decoder_ref, caps_changed: true}} else {:error, reason} -> {{:error, reason}, state} end end @impl true def handle_end_of_stream(:input, _ctx, state) do with {:ok, best_effort_pts_list, frames} <- Native.flush(state.decoder_ref), bufs <- wrap_frames(best_effort_pts_list, frames) do actions = bufs ++ [end_of_stream: :output, notify: {:end_of_stream, :input}] {{:ok, actions}, state} else {:error, reason} -> {{:error, reason}, state} end end @impl true def handle_prepared_to_stopped(_ctx, state) do {:ok, %{state | decoder_ref: nil}} end defp wrap_frames([], []), do: [] defp wrap_frames(pts_list, frames) do Enum.zip(pts_list, frames) |> Enum.map(fn {pts, frame} -> %Buffer{metadata: %{pts: Common.to_membrane_time_base(pts)}, payload: frame} end) |> then(&[buffer: {:output, &1}]) end defp update_caps_if_needed(%{caps_changed: true, decoder_ref: decoder_ref} = state, in_caps) do {[caps: {:output, generate_caps(in_caps, decoder_ref)}], %{state | caps_changed: false}} end defp update_caps_if_needed(%{caps_changed: false} = state, _in_caps) do {[], state} end defp generate_caps(input_caps, decoder_ref) do {:ok, width, height, pix_fmt} = Native.get_metadata(decoder_ref) framerate = case input_caps do nil -> {0, 1} %H264{framerate: in_framerate} -> in_framerate end %Raw{ aligned: true, format: pix_fmt, framerate: framerate, height: height, width: width } end end
lib/membrane_h264_ffmpeg/decoder.ex
0.852736
0.440048
decoder.ex
starcoder
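For context, the decoder above normally sits between the parser and a sink in a pipeline. A rough, untested sketch using the pre-1.0 Membrane `ParentSpec` API that matches the callbacks above (file locations are hypothetical):

```elixir
defmodule Example.DecodingPipeline do
  use Membrane.Pipeline

  import Membrane.ParentSpec

  @impl true
  def handle_init(_opts) do
    children = [
      # hypothetical input file containing an H264 byte stream
      source: %Membrane.File.Source{location: "input.h264"},
      # the parser splits the stream into one Access Unit per buffer,
      # which is what the decoder expects
      parser: Membrane.H264.FFmpeg.Parser,
      decoder: Membrane.H264.FFmpeg.Decoder,
      sink: %Membrane.File.Sink{location: "output.raw"}
    ]

    links = [link(:source) |> to(:parser) |> to(:decoder) |> to(:sink)]

    {{:ok, spec: %Membrane.ParentSpec{children: children, links: links}}, %{}}
  end
end
```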
defmodule Kafka.Topic do
  @moduledoc """
  Defines a Kafka topic. Kafka topics are used as message buses in Hindsight,
  allowing data to be passed from one service to another without directly
  coupling those services.

  ## Configuration

  * `name` - Required. Topic name.
  * `endpoints` - Required. Keyword list of Kafka brokers.
  * `partitions` - Number of partitions to create the topic with in Kafka. Defaults to 1.
  * `partitioner` - Method for partitioning messages as they're written to Kafka.
    Must be one of `[:default, :md5, :random]`.
  * `key_path` - String or list of strings used to parse message key from message content.
    Defaults to empty list, resulting in no message key (`""`).
  """
  use Definition, schema: Kafka.Topic.V1
  use JsonSerde, alias: "kafka_topic"

  @type t :: %__MODULE__{
          version: integer(),
          endpoints: [{atom, pos_integer}],
          name: String.t(),
          partitions: pos_integer,
          partitioner: :default | :md5 | :random,
          key_path: list
        }

  defstruct version: 1,
            endpoints: nil,
            name: nil,
            partitions: 1,
            partitioner: :default,
            key_path: []

  defimpl Source do
    defdelegate start_link(t, context), to: Kafka.Topic.Source
    defdelegate stop(t, server), to: Kafka.Topic.Source
    defdelegate delete(t), to: Kafka.Topic.Source
  end

  defimpl Destination do
    defdelegate start_link(t, context), to: Kafka.Topic.Destination
    defdelegate write(t, server, messages), to: Kafka.Topic.Destination
    defdelegate stop(t, server), to: Kafka.Topic.Destination
    defdelegate delete(t), to: Kafka.Topic.Destination
  end
end

defmodule Kafka.Topic.V1 do
  @moduledoc false
  use Definition.Schema

  def s do
    schema(%Kafka.Topic{
      version: version(1),
      endpoints: spec(is_list()),
      name: required_string(),
      partitions: spec(pos_integer?()),
      partitioner: spec(fn x -> x in [:default, :random, :md5] end),
      key_path: access_path()
    })
  end
end
apps/definition_kafka/lib/kafka/topic.ex
0.822937
0.553626
topic.ex
starcoder
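Since `use Definition, schema: Kafka.Topic.V1` wires the struct to a schema, a topic is presumably constructed through the `Definition`-injected constructor and validated against `Kafka.Topic.V1` on creation. A hypothetical sketch (assuming `Definition` provides a `new/1` that returns `{:ok, struct}` on valid input; the broker and topic names are made up):

```elixir
{:ok, topic} =
  Kafka.Topic.new(
    name: "event-stream",
    endpoints: [localhost: 9092],
    partitions: 2,
    partitioner: :md5,
    key_path: ["id"]
  )

# the Source/Destination protocol impls above then delegate to the
# Kafka.Topic.Source and Kafka.Topic.Destination servers
```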
defmodule SmartCity.Registry.Dataset do
  @moduledoc """
  Struct defining a dataset definition and functions for reading and writing
  dataset definitions to Redis.

  ```javascript
  const Dataset = {
    "id": "",            // UUID
    "business": {        // Project Open Data Metadata Schema v1.1
      "dataTitle": "",   // user friendly (dataTitle)
      "description": "",
      "keywords": [""],
      "modifiedDate": "",
      "orgTitle": "",    // user friendly (orgTitle)
      "contactName": "",
      "contactEmail": "",
      "license": "",
      "rights": "",
      "homepage": "",
      "spatial": "",
      "temporal": "",
      "publishFrequency": "",
      "conformsToUri": "",
      "describedByUrl": "",
      "describedByMimeType": "",
      "parentDataset": "",
      "issuedDate": "",
      "language": "",
      "referenceUrls": [""],
      "categories": [""]
    },
    "technical": {
      "dataName": "",    // ~r/[a-zA-Z_]+$/
      "orgId": "",
      "orgName": "",     // ~r/[a-zA-Z_]+$/
      "systemName": "",  // ${orgName}__${dataName},
      "schema": [
        {
          "name": "",
          "type": "",
          "description": ""
        }
      ],
      "sourceUrl": "",
      "protocol": "",    // List of protocols to use. Defaults to nil. Can be [http1, http2]
      "authUrl": "",
      "sourceFormat": "",
      "sourceType": "",  // remote|stream|ingest|host
      "cadence": "",
      "sourceQueryParams": {
        "key1": "",
        "key2": ""
      },
      "transformations": [], // ?
      "validations": [],     // ?
      "sourceHeaders": {
        "header1": "",
        "header2": ""
      },
      "authHeaders": {
        "header1": "",
        "header2": ""
      }
    },
    "_metadata": {
      "intendedUse": [],
      "expectedBenefit": []
    }
  }
  ```
  """

  alias SmartCity.Registry.Dataset.{Business, Technical, Metadata}
  alias SmartCity.Helpers
  alias SmartCity.Registry.Subscriber

  @typep id :: term()
  @type t :: %SmartCity.Registry.Dataset{
          version: String.t(),
          id: String.t(),
          business: SmartCity.Registry.Dataset.Business.t(),
          technical: SmartCity.Registry.Dataset.Technical.t(),
          _metadata: SmartCity.Registry.Dataset.Metadata.t()
        }

  @derive Jason.Encoder
  defstruct version: "0.3", id: nil, business: nil, technical: nil, _metadata: nil

  @conn SmartCity.Registry.Application.db_connection()

  defmodule NotFound do
    defexception [:message]
  end

  @doc """
  Returns a new `SmartCity.Registry.Dataset` struct. `SmartCity.Registry.Dataset.Business`,
  `SmartCity.Registry.Dataset.Technical`, and `SmartCity.Registry.Dataset.Metadata`
  structs will be created along the way.

  ## Parameters

  - msg : map defining values of the struct to be created.
    Can be initialized by
    - map with string keys
    - map with atom keys
    - JSON
  """
  @spec new(String.t() | map()) :: {:ok, SmartCity.Registry.Dataset.t()} | {:error, term()}
  def new(msg) when is_binary(msg) do
    with {:ok, decoded} <- Jason.decode(msg, keys: :atoms) do
      new(decoded)
    end
  end

  def new(%{"id" => _} = msg) do
    msg
    |> Helpers.to_atom_keys()
    |> new()
  end

  def new(%{id: id, business: biz, technical: tech, _metadata: meta}) do
    struct =
      struct(%__MODULE__{}, %{
        id: id,
        business: Business.new(biz),
        technical: Technical.new(tech),
        _metadata: Metadata.new(meta)
      })

    {:ok, struct}
  rescue
    e -> {:error, e}
  end

  def new(%{id: id, business: biz, technical: tech}) do
    new(%{id: id, business: biz, technical: tech, _metadata: %{}})
  end

  def new(msg) do
    {:error, "Invalid registry message: #{inspect(msg)}"}
  end

  @doc """
  Writes the dataset to history and sets the dataset as the latest definition
  for the given `id` field of the passed in dataset in Redis. Registry
  subscribers will be notified and have their `handle_dataset/1` callback triggered.

  Returns an `{:ok, id}` tuple where id is the dataset id.

  ## Parameters

  - dataset: SmartCity.Registry.Dataset struct to be written.
  """
  @spec write(SmartCity.Registry.Dataset.t()) :: {:ok, id()}
  def write(%__MODULE__{id: id} = dataset) do
    add_to_history(dataset)
    Redix.command!(@conn, ["SET", latest_key(id), Jason.encode!(dataset)])
    Subscriber.send_dataset_update(id)
    ok(id)
  end

  @doc """
  Returns `{:ok, dataset}` with the dataset for the given id, or an error with the reason.
  """
  @spec get(id()) :: {:ok, SmartCity.Registry.Dataset.t()} | {:error, term()}
  def get(id) do
    with {:ok, json} <- get_latest(id),
         {:ok, dataset} <- new(json) do
      {:ok, dataset}
    end
  end

  defp get_latest(id) do
    case Redix.command(@conn, ["GET", latest_key(id)]) do
      {:ok, nil} -> {:error, %NotFound{message: "no dataset with given id found -- ID: #{id}"}}
      result -> result
    end
  end

  @doc """
  Returns the dataset with the given id or raises an error.
  """
  @spec get!(id()) :: SmartCity.Registry.Dataset.t() | no_return()
  def get!(id) do
    handle_ok_error(fn -> get(id) end)
  end

  @doc """
  Returns `{:ok, dataset_versions}` with a history of all versions of the given dataset.
  """
  @spec get_history(id()) :: {:ok, [SmartCity.Registry.Dataset.t()]} | {:error, term()}
  def get_history(id) do
    with {:ok, list} <- Redix.command(@conn, ["LRANGE", history_key(id), "0", "-1"]) do
      list
      |> Enum.map(&Jason.decode!(&1, keys: :atoms))
      |> Enum.map(fn value -> %{value | dataset: to_dataset(value.dataset)} end)
      |> ok()
    end
  end

  @doc """
  See `get_history/1`. Raises on errors.
  """
  @spec get_history!(id()) :: [SmartCity.Registry.Dataset.t()] | no_return()
  def get_history!(id) do
    handle_ok_error(fn -> get_history(id) end)
  end

  @doc """
  Returns `{:ok, datasets}` with all dataset definitions in the system.
  """
  @spec get_all() :: {:ok, [SmartCity.Registry.Dataset.t()]} | {:error, term()}
  def get_all() do
    case keys_mget(latest_key("*")) do
      {:ok, list} -> {:ok, Enum.map(list, &to_dataset(&1))}
      error -> error
    end
  end

  @doc """
  See `get_all/0`. Raises on errors.
  """
  @spec get_all!() :: [SmartCity.Registry.Dataset.t()] | no_return()
  def get_all!() do
    handle_ok_error(fn -> get_all() end)
  end

  @doc """
  Returns true if the `sourceType` field of `SmartCity.Registry.Dataset.Technical` is "stream".
  """
  def is_stream?(%__MODULE__{technical: %{sourceType: sourceType}}) do
    "stream" == sourceType
  end

  @doc """
  Returns true if the `sourceType` field of `SmartCity.Registry.Dataset.Technical` is "remote".
  """
  def is_remote?(%__MODULE__{technical: %{sourceType: sourceType}}) do
    "remote" == sourceType
  end

  @doc """
  Returns true if the `sourceType` field of `SmartCity.Registry.Dataset.Technical` is "ingest".
  """
  def is_ingest?(%__MODULE__{technical: %{sourceType: sourceType}}) do
    "ingest" == sourceType
  end

  @doc """
  Returns true if the `sourceType` field of `SmartCity.Registry.Dataset.Technical` is "host".
  """
  def is_host?(%__MODULE__{technical: %{sourceType: sourceType}}) do
    "host" == sourceType
  end

  defp add_to_history(%__MODULE__{id: id} = dataset) do
    body = %{creation_ts: DateTime.utc_now() |> DateTime.to_iso8601(), dataset: dataset}
    Redix.command!(@conn, ["RPUSH", history_key(id), Jason.encode!(body)])
  end

  defp latest_key(id) do
    "smart_city:dataset:latest:#{id}"
  end

  defp history_key(id) do
    "smart_city:dataset:history:#{id}"
  end

  defp keys_mget(key) do
    case Redix.command(@conn, ["KEYS", key]) do
      {:ok, []} -> {:ok, []}
      {:ok, keys} -> Redix.command(@conn, ["MGET" | keys])
      result -> result
    end
  end

  defp handle_ok_error(function) when is_function(function) do
    case function.() do
      {:ok, value} -> value
      {:error, reason} -> raise reason
    end
  end

  defp to_dataset(%{} = map) do
    {:ok, dataset} = new(map)
    dataset
  end

  defp to_dataset(json) do
    json
    |> Jason.decode!()
    |> to_dataset()
  end

  defp ok(value), do: {:ok, value}
end
lib/smart_city/registry/dataset.ex
0.832747
0.705481
dataset.ex
starcoder
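A typical round trip with the functions above: build a dataset, persist it, and read it back. The business and technical maps are abbreviated; in practice `Business.new/1` and `Technical.new/1` require their full set of fields:

```elixir
alias SmartCity.Registry.Dataset

{:ok, dataset} =
  Dataset.new(%{
    id: "some-uuid",
    business: %{dataTitle: "Example", orgTitle: "Example Org"},
    technical: %{dataName: "example", orgName: "example_org", sourceType: "remote"},
    _metadata: %{}
  })

# SET the latest definition, RPUSH to history, notify subscribers
{:ok, id} = Dataset.write(dataset)

{:ok, fetched} = Dataset.get(id)
true = Dataset.is_remote?(fetched)
```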
defmodule KVX.Bucket.ExShards do
  @moduledoc """
  ExShards adapter. This is the default adapter supported by `KVX`.

  ExShards adapter only works with `set` and `ordered_set` table types.

  ExShards extra config options:

  * `:module` - internal ExShards module to use. By default, `ExShards` module
    is used, which is a wrapper on top of `ExShards.Local` and `ExShards.Dist`.
  * `:buckets` - this can be used to set bucket options in config, so it can be
    loaded when the bucket is created. See example below.

  Run-time options when calling the `new/2` function are the same as
  `ExShards.new/2`. For example:

      MyModule.new(:mybucket, [n_shards: 4])

  ## Example:

      config :kvx,
        adapter: KVX.Bucket.ExShards,
        ttl: 43200,
        module: ExShards,
        buckets: [
          mybucket1: [
            n_shards: 4
          ],
          mybucket2: [
            n_shards: 8
          ]
        ]

  For more information about `ExShards`:

  * [GitHub](https://github.com/cabol/ex_shards)
  * [GitHub](https://github.com/cabol/shards)
  * [Blog Post](http://cabol.github.io/posts/2016/04/14/sharding-support-for-ets.html)
  """

  @behaviour KVX.Bucket

  @mod Application.get_env(:kvx, :module, ExShards)
  @default_ttl Application.get_env(:kvx, :ttl, :infinity)

  require Ex2ms

  ## Setup Commands

  def new(bucket, opts \\ []) when is_atom(bucket) do
    case Process.whereis(bucket) do
      nil -> new_bucket(bucket, opts)
      _ -> bucket
    end
  end

  defp new_bucket(bucket, opts) do
    opts = maybe_get_bucket_opts(bucket, opts)
    @mod.new(bucket, opts)
  end

  defp maybe_get_bucket_opts(bucket, []) do
    :kvx
    |> Application.get_env(:buckets, [])
    |> Keyword.get(bucket, [])
  end

  defp maybe_get_bucket_opts(_, opts), do: opts

  ## Storage Commands

  def add(bucket, key, value, ttl \\ @default_ttl) do
    case get(bucket, key) do
      nil -> set(bucket, key, value, ttl)
      _ -> raise KVX.ConflictError, key: key, value: value
    end
  end

  def set(bucket, key, value, ttl \\ @default_ttl) do
    @mod.set(bucket, {key, value, seconds_since_epoch(ttl)})
  end

  def mset(bucket, entries, ttl \\ @default_ttl) when is_list(entries) do
    entries
    |> Enum.each(fn {key, value} ->
      ^bucket = set(bucket, key, value, ttl)
    end)

    bucket
  end

  ## Retrieval Commands

  def get(bucket, key) do
    case @mod.lookup(bucket, key) do
      [{^key, value, ttl}] ->
        if ttl > seconds_since_epoch(0) do
          value
        else
          true = @mod.delete(bucket, key)
          nil
        end

      _ ->
        nil
    end
  end

  def mget(bucket, keys) when is_list(keys) do
    for key <- keys do
      get(bucket, key)
    end
  end

  def find_all(bucket, query \\ nil) do
    do_find_all(bucket, query)
  end

  defp do_find_all(bucket, nil) do
    do_find_all(
      bucket,
      Ex2ms.fun do
        object -> object
      end
    )
  end

  defp do_find_all(bucket, query) do
    bucket
    |> @mod.select(query)
    |> Enum.reduce([], fn {k, v, ttl}, acc ->
      case ttl > seconds_since_epoch(0) do
        true ->
          [{k, v} | acc]

        _ ->
          true = @mod.delete(bucket, k)
          acc
      end
    end)
  end

  ## Cleanup functions

  def delete(bucket, key) do
    true = @mod.delete(bucket, key)
    bucket
  end

  def delete(bucket) do
    true = @mod.delete(bucket)
    bucket
  end

  def flush(bucket) do
    true = @mod.delete_all_objects(bucket)
    bucket
  end

  ## Extended functions

  def __ex_shards_mod__, do: @mod

  def __default_ttl__, do: @default_ttl

  ## Private functions

  defp seconds_since_epoch(diff) when is_integer(diff) do
    {mega, secs, _} = :os.timestamp()
    mega * 1000000 + secs + diff
  end

  defp seconds_since_epoch(:infinity), do: :infinity

  defp seconds_since_epoch(diff) do
    raise ArgumentError, "ttl #{inspect diff} is invalid."
  end
end
lib/kvx/adapters/ex_shards/bucket_shards.ex
0.835265
0.481027
bucket_shards.ex
starcoder
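Basic usage of the storage and retrieval commands above, with TTLs given in seconds (see `seconds_since_epoch/1`):

```elixir
alias KVX.Bucket.ExShards, as: Bucket

bucket = Bucket.new(:session)

# store a value with a 60-second TTL, then read it back
Bucket.set(bucket, :token, "abc123", 60)
"abc123" = Bucket.get(bucket, :token)

# `add/4` would raise KVX.ConflictError here, since :token already holds a live value
Bucket.mset(bucket, [a: 1, b: 2], 60)
[1, 2] = Bucket.mget(bucket, [:a, :b])
```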
defmodule ExOrient.DB do
  @moduledoc """
  MarcoPolo ExOrientDB wrapper that provides a clean syntax for queries. This
  module simply routes SQL commands to the correct submodule, providing the
  ability to call all commands through ExOrient.DB.<command>
  """

  alias ExOrient.DB.CRUD, as: CRUD
  alias ExOrient.DB.Graph, as: Graph
  alias ExOrient.DB.Indexes, as: Indexes
  alias ExOrient.DB.Schema, as: Schema

  @doc """
  Execute a raw query with MarcoPolo and return the response.

      ExOrient.DB.command("SELECT FROM ProgrammingLanguage")
      # [%MarcoPolo.Document{class: "ProgrammingLanguage", fields: _, rid: _, version: _} | _rest]

  Only use this function directly if you need the power of a raw query. Be
  sure to use the params argument if you need to bind variables:

      ExOrient.DB.command("SELECT FROM ProgrammingLanguage WHERE name = :name", params: %{name: "Elixir"})

  """
  def command(query, opts \\ []) do
    :poolboy.transaction(:marco_polo, fn worker ->
      try do
        case MarcoPolo.command(worker, query, opts) do
          {:ok, %{response: response}} -> {:ok, response}
          {:error, error} -> {:error, error}
        end
      catch
        :exit, {:timeout, _} ->
          GenServer.stop(worker, :timeout, 5_000)
          raise "Database query timeout"
      end
    end)
  end

  @doc """
  An alias for command/1
  """
  def exec({query, params}, opts \\ []), do: command(query, [params: params] ++ opts)

  @doc """
  Execute a server-side script. `type` can be either `"SQL"` or `"Javascript"`.
  `str` is the string of the script you want to run. Example:

      DB.script("SQL", "begin; let v = create vertex V set name = 'test'; commit; return $v")
      {:ok, %MarcoPolo.Document{fields: %{"name" => "test"}}}

  """
  def script(type, str, opts \\ []) do
    :poolboy.transaction(:marco_polo, fn worker ->
      try do
        case MarcoPolo.script(worker, type, str, opts) do
          {:ok, %{response: response}} -> {:ok, response}
          {:error, error} -> {:error, error}
        end
      catch
        :exit, {:timeout, _} ->
          GenServer.stop(worker, :timeout, 5_000)
          raise "Database script timeout"
      end
    end)
  end

  defdelegate select(field), to: CRUD
  defdelegate select(field, opts), to: CRUD
  defdelegate rid(rid), to: CRUD
  defdelegate insert(opts), to: CRUD
  defdelegate update(obj, opts), to: CRUD
  defdelegate truncate(opts), to: CRUD

  def delete(opts \\ []) do
    cond do
      Keyword.get(opts, :vertex) -> Graph.delete(opts)
      Keyword.get(opts, :edge) -> Graph.delete(opts)
      true -> CRUD.delete(opts)
    end
  end

  def create(opts \\ []) do
    cond do
      Keyword.get(opts, :vertex) -> Graph.create_vertex(opts)
      Keyword.get(opts, :edge) -> Graph.create_edge(opts)
      Keyword.get(opts, :class) -> Schema.create_class(opts)
      Keyword.get(opts, :property) -> Schema.create_property(opts)
      Keyword.get(opts, :index) -> Indexes.create(opts)
      true -> {:error, "Invalid command"}
    end
  end

  def alter(opts \\ []) do
    cond do
      Keyword.get(opts, :class) -> Schema.alter_class(opts)
      Keyword.get(opts, :property) -> Schema.alter_property(opts)
      true -> {:error, "Invalid command"}
    end
  end

  def drop(opts \\ []) do
    cond do
      Keyword.get(opts, :class) -> Schema.drop_class(opts)
      Keyword.get(opts, :property) -> Schema.drop_property(opts)
      Keyword.get(opts, :index) -> Indexes.drop(opts)
      true -> {:error, "Invalid command"}
    end
  end

  defdelegate rebuild(opts), to: Indexes
end
lib/ex_orient/db.ex
0.739986
0.420272
db.ex
starcoder
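Since `exec/2` above simply unpacks a `{query, params}` tuple into `command/2`, the two calls below are equivalent:

```elixir
alias ExOrient.DB

# explicit query and bound params
{:ok, docs} = DB.command("SELECT FROM Person WHERE age > :age", params: %{age: 30})

# same thing, packed as a tuple - the shape exec/2 expects
{:ok, docs} = DB.exec({"SELECT FROM Person WHERE age > :age", %{age: 30}})
```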
defmodule OddJob.Scheduler do
  @moduledoc """
  The `OddJob.Scheduler` is responsible for execution of scheduled jobs.

  Each scheduler is a dynamically supervised process that is created to manage
  a single timer and a job or collection of jobs to send to a pool when the
  timer expires. After the jobs are delivered to the pool the scheduler shuts
  itself down.

  The scheduler process will also automatically shutdown if a timer is
  cancelled with `OddJob.cancel_timer/1`. If a timer is cancelled with
  `Process.cancel_timer/1` then the scheduler will eventually timeout and
  shutdown one second after the timer would have expired.
  """
  @moduledoc since: "0.2.0"

  @doc false
  use GenServer, restart: :temporary

  import OddJob.Guards

  alias OddJob.Scheduler.Supervisor, as: SchedulerSup

  @name __MODULE__
  @registry OddJob.Registry

  @typedoc false
  @type timer :: non_neg_integer
  @type pool :: atom

  # <---- Client API ---->

  @doc false
  @spec perform(timer, pool, function) :: reference
  def perform(timer, pool, function) when is_timer(timer) do
    pool
    |> SchedulerSup.start_child()
    |> GenServer.call({:schedule_perform, timer, {pool, function}})
  end

  @spec perform_many(timer, pool, list | map, function) :: reference
  def perform_many(timer, pool, collection, function) do
    pool
    |> SchedulerSup.start_child()
    |> GenServer.call({:schedule_perform, timer, {pool, collection, function}})
  end

  @doc false
  @spec cancel_timer(reference) :: non_neg_integer | false
  def cancel_timer(timer_ref) when is_reference(timer_ref) do
    result = Process.cancel_timer(timer_ref)

    case lookup(timer_ref) do
      [{pid, :timer}] -> GenServer.cast(pid, :abort)
      [] -> :noop
    end

    result
  end

  defp lookup(timer_ref), do: Registry.lookup(@registry, timer_ref)

  @doc false
  @spec start_link([]) :: :ignore | {:error, any} | {:ok, pid}
  def start_link([]) do
    GenServer.start_link(@name, [])
  end

  # <---- Callbacks ---->

  @impl GenServer
  @spec init([]) :: {:ok, []}
  def init([]) do
    {:ok, []}
  end

  @impl GenServer
  def handle_call({:schedule_perform, timer, dispatch}, _, state) do
    timer_ref =
      timer
      |> set_timer(dispatch)
      |> register()

    timeout = Process.read_timer(timer_ref) + 1
    {:reply, timer_ref, state, timeout}
  end

  defp set_timer(timer, dispatch) do
    Process.send_after(self(), {:perform, dispatch}, timer)
  end

  defp register(timer_ref) do
    Registry.register(@registry, timer_ref, :timer)
    timer_ref
  end

  @impl GenServer
  def handle_cast(:abort, state) do
    {:stop, :normal, state}
  end

  @impl GenServer
  def handle_info({:perform, {pool, fun}}, state) do
    OddJob.perform(pool, fun)
    {:stop, :normal, state}
  end

  def handle_info({:perform, {pool, collection, function}}, state) do
    OddJob.perform_many(pool, collection, function)
    {:stop, :normal, state}
  end

  def handle_info(:timeout, state) do
    {:stop, :normal, state}
  end
end
lib/odd_job/scheduler.ex
0.817356
0.506103
scheduler.ex
starcoder
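The client API above can be exercised directly: schedule a function against a pool, then cancel it before the timer fires. The `:work` pool name is hypothetical; it must be an existing OddJob pool:

```elixir
alias OddJob.Scheduler

# deliver the job to the :work pool in 5 seconds
timer_ref = Scheduler.perform(5_000, :work, fn -> IO.puts("scheduled job ran") end)

# cancelling also casts :abort to the scheduler so it shuts down immediately,
# and returns the milliseconds remaining on the timer (or false)
ms_remaining = Scheduler.cancel_timer(timer_ref)
```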
defmodule Numbers do
  @moduledoc """
  Allows you to use arithmetical operations on _any_ type that implements the
  proper protocols.

  For each arithmetical operation, a different protocol is used so that types
  for which only a subset of these operations makes sense can still work with
  those.

  ## Basic Usage

  Usually, `Numbers` is used in another module by using `alias Numbers, as: N`,
  followed by calling the functions using this aliased but still descriptive
  syntax:

      total_price = N.mult(N.add(price, fee), vat)

  Because `Numbers` dispatches based on protocol definitions, you only need to
  swap what kind of arguments are used to change the type of the result.

  ## Overloaded Operators

  As explicit opt-in functionality, `Numbers` will add overloaded operators to
  your module, if you write

      use Numbers, overload_operators: true

  This will alter `a + b`, `a - b`, `a * b`, `a / b`, `-a` and `abs(a)` to
  dispatch to the corresponding functions in the `Numbers` module.

  Do note that these overloaded operator versions are _not_ allowed in guard
  tests, which is why this functionality is only provided as an opt-in option
  to use when an algorithm would be too unreadable without it.

  ## Examples:

  Using built-in numbers:

      iex> alias Numbers, as: N
      iex> N.add(1, 2)
      3
      iex> N.mult(3,5)
      15
      iex> N.mult(1.5, 100)
      150.0

  Using Decimals: (requires the [Decimal](https://hex.pm/packages/decimal) library.)

      iex> alias Numbers, as: N
      iex> d = Decimal.new(2)
      iex> N.div(d, 10)
      #Decimal<0.2>
      iex> small_number = N.div(d, 1234)
      #Decimal<0.001620745542949756888168557536>
      iex> N.pow(small_number, 100)
      #Decimal<9.364478495445313580679473524E-280>

  ## Defining your own Numbers implementations

  See `Numbers.Protocols` for a full explanation on how to do this.
  """

  import Kernel, except: [div: 2]

  @type t :: any

  @doc """
  Adds the two Numerics `a` and `b` together.

  Depends on an implementation existing of `Numbers.Protocols.Addition`
  """
  @spec add(t, t) :: t
  def add(a, b) do
    {a, b} = Coerce.coerce(a, b)
    Numbers.Protocols.Addition.add(a, b)
  end

  defdelegate add_id(num), to: Numbers.Protocols.Addition

  @doc """
  Subtracts the Numeric `b` from the Numeric `a`.

  Depends on an implementation existing of `Numbers.Protocols.Subtraction`
  """
  @spec sub(t, t) :: t
  def sub(a, b) do
    {a, b} = Coerce.coerce(a, b)
    Numbers.Protocols.Subtraction.sub(a, b)
  end

  @doc """
  Multiplies the Numeric `a` with the Numeric `b`.

  Depends on an implementation existing of `Numbers.Protocols.Multiplication`
  """
  @spec mult(t, t) :: t
  def mult(a, b) do
    {a, b} = Coerce.coerce(a, b)
    Numbers.Protocols.Multiplication.mult(a, b)
  end

  defdelegate mult_id(num), to: Numbers.Protocols.Multiplication

  @doc """
  Divides the Numeric `a` by `b`.

  Note that this is supposed to be a full (non-truncated) division;
  no rounding or truncation is supposed to happen, even when calculating
  with integers.

  Depends on an implementation existing of `Numbers.Protocols.Division`
  """
  @spec div(t, t) :: t
  def div(a, b) do
    {a, b} = Coerce.coerce(a, b)
    Numbers.Protocols.Division.div(a, b)
  end

  @doc """
  Power function: computes `base^exponent`, where `base` is Numeric, and
  `exponent` _has_ to be an integer.

  _(This means that it is impossible to calculate roots by using this function.)_

  Depends on an implementation existing of `Numbers.Protocols.Exponentiation`
  """
  @spec pow(t, non_neg_integer) :: t
  def pow(num, power) do
    Numbers.Protocols.Exponentiation.pow(num, power)
  end

  @doc """
  Unary minus. Returns the negation of the number.

  Depends on an implementation existing of `Numbers.Protocols.Minus`
  """
  @spec minus(t) :: t
  defdelegate minus(num), to: Numbers.Protocols.Minus

  @doc """
  The absolute value of a number.

  Depends on an implementation existing of `Numbers.Protocols.Absolute`
  """
  @spec abs(t) :: t
  defdelegate abs(num), to: Numbers.Protocols.Absolute

  @doc """
  Convert the custom Numeric struct to the built-in float datatype.

  This operation might be lossy, losing precision in the process.
  """
  @spec to_float(t) :: {:ok, t_as_float :: float} | :error
  defdelegate to_float(num), to: Numbers.Protocols.ToFloat

  defmacro __using__(opts) do
    if opts[:overload_operators] != true do
      raise """
      `use Numbers` called without `overload_operators: true` option.

      Either make the exporting of operators explicit by writing
      `use Numbers, overload_operators: true`

      or if you do not want the overridden operators, simply use Numbers directly,
      and optionally alias it, using: `alias Numbers, as: N`.
      """
    else
      quote do
        import Kernel, except: [abs: 1, *: 2, /: 2, -: 2, -: 1, +: 2]
        import Numbers.Operators
      end
    end
  end
end
lib/numbers.ex
0.898586
0.648369
numbers.ex
starcoder
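The `__using__/1` macro at the end of the module above is what drives operator overloading: it shadows the Kernel operators and imports `Numbers.Operators`. A small sketch of a module opting in (assuming the Decimal library, as in the moduledoc examples):

```elixir
defmodule Invoice do
  use Numbers, overload_operators: true

  # `+` and `*` here dispatch through Numbers' protocols,
  # so Decimal structs work just like built-in numbers
  def total(price, fee, vat), do: (price + fee) * vat
end

Invoice.total(Decimal.new(100), Decimal.new(5), Decimal.new("1.2"))
```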
defmodule Opus.Graph do
  @available_filetypes [:svg, :png, :pdf]

  @defaults %{
    filetype: :png,
    docs_base_url: "",
    theme: %{
      penwidth: 2,
      stage_shape: "box",
      colors: %{
        border: %{
          pipeline: "#222222",
          conditional: "#F9E79F",
          normal: "#222222"
        },
        background: %{
          pipeline: "#DDDDDD",
          step: "#73C6B6",
          link: "#C39BD3",
          check: "#7FB3D5",
          tee: "#A6ACAF",
          skip: "#8B80FF"
        },
        edges: %{
          link: "purple",
          normal: "black"
        }
      },
      style: %{
        normal: "filled",
        conditional: "filled, dashed"
      }
    }
  }

  @moduledoc """
  Generates a Graph with all the pipeline modules, their stages and their
  relationships when there are links.

  Make sure to have Graphviz installed before using this.

  Usage:

      Opus.Graph.generate(:awesome_app, %{filetype: :png, filename: "my_graph"})

  The above will create a `my_graph.png` graph image at the current working directory.

  Configuration:

  * `filetype`: The output format. Must be one of: `#{inspect(@available_filetypes)}`.
    Defaults to: #{inspect(@defaults[:filetype])}
  * `docs_base_url`: The prefix part of the documentation URLs set as hrefs
    for graph nodes. Defaults to: `#{inspect(@defaults[:docs_base_url])}`
  * `theme`: A map of options on how the graph should be styled. Defaults to:

  ```
  #{inspect(@defaults[:theme], pretty: true)}
  ```
  """

  alias Graphvix.{Graph, Node, Edge}

  use Opus.Pipeline

  step :assign_config,
    with: fn %{app: app, config: config} = assigns ->
      put_in(
        assigns[:config],
        config || Application.get_env(app, :opus, %{})[:graph] || @defaults
      )
    end

  step :assign_modules,
    error_message: "Could not fetch Opus pipeline modules for the given app",
    with: fn %{app: app} = assigns ->
      {:ok, app_modules} = :application.get_key(app, :modules)
      app_modules |> Enum.each(&Code.ensure_loaded/1)
      modules = for mod <- app_modules, function_exported?(mod, :pipeline?, 0), do: mod
      put_in(assigns[:modules], modules)
    end

  check :pipelines_found?,
    error_message: "Could not find any Opus pipeline modules for the given application",
    with: &match?(%{modules: [_ | _]}, &1)

  check :filename_valid?,
    error_message: "Invalid filename for the compiled graph",
    with: &match?(%{config: %{filename: name}} when is_atom(name) or is_bitstring(name), &1),
    if: &match?(%{config: %{filename: _}}, &1)

  check :filetype_valid?,
    error_message: "Invalid output format",
    with: &match?(%{config: %{filetype: filetype}} when filetype in [:png, :svg, :pdf], &1),
    if: &match?(%{config: %{filetype: _}}, &1)

  step :normalize_filename,
    with: fn
      %{config: %{filename: filename}} = assigns ->
        put_in(assigns, [:config, :filename], :"#{filename}")

      %{app: app} = assigns ->
        put_in(assigns, [:config, :filename], :"#{app}_opus_graph")
    end

  step :initialize_graph,
    with: fn %{config: %{filename: filename}} = assigns ->
      Graph.new(filename)
      assigns
    end

  step :build_pipeline_nodes
  step :build_stage_nodes

  step :build_output,
    with: fn
      %{config: %{filetype: filetype}} = assigns ->
        Graph.compile(filetype)
        assigns

      assigns ->
        Graph.compile(:png)
        assigns
    end

  step :format_output,
    with: fn %{config: %{filename: filename, filetype: filetype}} ->
      "Graph file has been written to #{filename}.#{filetype}"
    end

  def generate(app, config \\ nil), do: call(%{app: app, config: config})

  @doc false
  def build_pipeline_nodes(%{modules: modules, config: config} = assigns) do
    nodes =
      for module <- modules do
        moduledoc = Code.get_docs(module, :moduledoc)

        {module,
         Node.new(
           label: inspect(module),
           penwidth: 2,
           href: module_href(config, module),
           class: "opus-pipeline",
           tooltip: tooltip(moduledoc),
           color: color(config, [:border, :pipeline]),
           fillcolor: color(config, [:background, :pipeline]),
           style: style(config, :normal)
         )}
      end

    put_in(assigns[:pipeline_nodes], nodes)
  end

  @doc false
  def build_stage_nodes(%{pipeline_nodes: pipeline_nodes, config: config} = assigns) do
    _ =
      for {module, root} <- pipeline_nodes do
        module.stages
        |> Enum.reduce(root, fn {type, name, opts}, {prev_id, _} ->
          {id, _node} =
            new_node =
            Node.new(
              label: "#{type}: #{inspect(name)}",
              penwidth: 2,
              class: "opus-stage",
              color: color(config, [:border, opts]),
              style: style(config, opts),
              fillcolor: color(config, [:background, type]),
              shape: from_config(config, [:theme, :stage_shape])
            )

          Edge.new(prev_id, id)

          case type do
            :link ->
              Edge.new(id, pipeline_nodes[name] |> elem(0), color: color(config, [:edges, :link]))

            _ ->
              :ok
          end

          new_node
        end)
      end

    assigns
  end

  defp color(config, [attr, %{if: _}]), do: color(config, [attr, :conditional])
  defp color(config, [attr, %{}]), do: color(config, [attr, :normal])
  defp color(config, list), do: from_config(config, [:theme, :colors | list])

  defp style(config, %{if: _}), do: from_config(config, [:theme, :style, :conditional])
  defp style(config, _type), do: from_config(config, [:theme, :style, :normal])

  defp from_config(config, attr), do: get_in(config, attr) || get_in(@defaults, attr)

  defp tooltip({_line, doc}) when is_bitstring(doc), do: html_entities(doc)
  defp tooltip(_), do: ""

  defp module_href(config, module), do: "#{Access.get(config, :base_url, "")}/#{module}.html"

  defp html_entities(string) do
    string
    |> String.graphemes()
    |> Enum.map(fn
      "'" -> "&apos;"
      "\"" -> "&quot;"
      "&" -> "&amp;"
      "<" -> "&lt;"
      ">" -> "&gt;"
      other -> other
    end)
    |> Enum.join()
  end
end
lib/opus/graph.ex
0.71123
0.752513
graph.ex
starcoder
defmodule NiceMaps do
  @moduledoc """
  NiceMaps provides a single function `parse` to convert maps into the desired
  format. It can build camelcase/snake_case keys, convert string keys to atom
  keys and vice versa, or convert structs to maps.
  """

  @doc """
  The main interface - this is where the magic happens.

  ## Options

  * `:keys` one of `:camelcase` or `:snake_case`
  * `:convert_structs` one of `true` or `false`, default: `false`
  * `:key_type`, one of `:string`, `:existing_atom`, or `:unsave_atom`
    (please use `:existing_atom` whenever possible)

  ## Examples

  ### Without Options:

      iex> NiceMaps.parse(%MyStruct{id: 1, my_key: "bar"})
      %{id: 1, my_key: "bar"}

      iex> NiceMaps.parse([%MyStruct{id: 1, my_key: "bar"}, %{value: "a"}])
      [%{id: 1, my_key: "bar"}, %{value: "a"}]

      iex> NiceMaps.parse([%MyStruct{id: 1, my_key: "bar"}, "String"])
      [%{id: 1, my_key: "bar"}, "String"]

      iex> NiceMaps.parse(%{0 => "0", 1 => "1"})
      %{0 => "0", 1 => "1"}

  ### Keys to camelcase:

      iex> NiceMaps.parse([%MyStruct{id: 1, my_key: "bar"}, %{value: "a"}], keys: :camelcase)
      [%{id: 1, myKey: "bar"}, %{value: "a"}]

      iex> NiceMaps.parse(%MyStruct{id: 1, my_key: "foo"}, keys: :camelcase)
      %{id: 1, myKey: "foo"}

      iex> NiceMaps.parse(%{"string" => "value", "another_string" => "value"}, keys: :camelcase)
      %{"string" => "value", "anotherString" => "value"}

  ### Keys to snake case:

      iex> NiceMaps.parse(%MyCamelStruct{id: 1, myKey: "foo"}, keys: :snake_case)
      %{id: 1, my_key: "foo"}

      iex> NiceMaps.parse(%MyCamelStruct{id: 1, myKey: "foo"}, keys: :snake_case)
      %{id: 1, my_key: "foo"}

      iex> NiceMaps.parse(%{"string" => "value", "another_string" => "value"}, keys: :camelcase)
      %{"string" => "value", "anotherString" => "value"}

  ### Convert all structs into maps

      iex> map = %{
      ...>   list: [
      ...>     %MyStruct{id: 1, my_key: "foo"}
      ...>   ],
      ...>   struct: %MyStruct{id: 2, my_key: "bar"},
      ...>   other_struct: %MyStruct{id: 3, my_key: %MyStruct{id: 4, my_key: nil}}
      ...> }
      ...> NiceMaps.parse(map, convert_structs: true)
      %{
        list: [
          %{id: 1, my_key: "foo"}
        ],
        struct: %{id: 2, my_key: "bar"},
        other_struct: %{id: 3, my_key: %{id: 4, my_key: nil}}
      }

  ### Convert string keys to existing atom

      iex> map = %{
      ...>   "key1" => "value 1",
      ...>   "nested" => %{"key2" => "value 2"},
      ...>   "list" => [%{"key3" => "value 3", "key4" => "value 4"}],
      ...>   1 => "an integer key",
      ...>   %MyStruct{} => "a struct key"
      ...> }
      iex> [:key1, :key2, :key3, :key4, :nested, :list] # Make sure atoms exist
      iex> NiceMaps.parse(map, key_type: :existing_atom)
      %{
        :key1 => "value 1",
        :nested => %{key2: "value 2"},
        :list => [%{key3: "value 3", key4: "value 4"}],
        1 => "an integer key",
        %MyStruct{} => "a struct key"
      }

  ### Mix it all together

      iex> map = %{
      ...>   "hello_there" => [%{"aA" => "asdf"}, %{"a_a" => "bhjk"}, "a string", 1],
      ...>   thingA: "thing A",
      ...>   thing_b: "thing B"
      ...> }
      iex> NiceMaps.parse(map, keys: :camelcase, key_type: :string)
      %{"helloThere" => [%{"aA" => "asdf"}, %{"aA" => "bhjk"}, "a string", 1], "thingA" => "thing A", "thingB" => "thing B"}

      iex> map = %{
      ...>   "helloThere" => [%{"aA" => "asdf"}, %{"a_a" => "bhjk"}, "a string", 1],
      ...>   thingA: "thing A",
      ...>   thing_b: "thing B"
      ...> }
      iex> [:hello_there, :thing_a, :thing_b] # make sure atoms exist
      iex> NiceMaps.parse(map, keys: :snake_case, key_type: :existing_atom)
      %{:hello_there => [%{:a_a => "asdf"}, %{:a_a => "bhjk"}, "a string", 1], :thing_a => "thing A", :thing_b => "thing B"}
  """
  def parse(map_or_struct, opts \\ [])

  def parse(map_or_struct, opts) when is_list(map_or_struct),
    do: parse_list(map_or_struct, opts)

  def parse(%{__struct__: _} = struct, opts),
    do: struct |> Map.from_struct() |> parse_keys(opts)

  def parse(map_or_struct, opts) when is_map(map_or_struct),
    do: parse_keys(map_or_struct, opts)

  def parse(obj, _opts), do: obj

  @doc false
  def parse_list(list, opts, result \\ [])
  def parse_list([], _opts, result), do: Enum.reverse(result)

  def parse_list([obj | rest], opts, result),
    do: parse_list(rest, opts, [parse(obj, opts) | result])

  @doc false
  def parse_keys(map, opts) do
    case Keyword.get(opts, :keys) do
      :camelcase ->
        parse_camelcase_keys(map, opts)

      :snake_case ->
        parse_snake_case(map, opts)

      nil ->
        key_type = Keyword.get(opts, :key_type)
        convert_structs = Keyword.get(opts, :convert_structs)

        if key_type || convert_structs do
          for {key, val} <- map, do: {parse_key_type(key, key_type), parse(val, opts)}, into: %{}
        else
          map
        end
    end
  end

  defp parse_key_type(key, :existing_atom) when is_atom(key), do: key

  defp parse_key_type(key, :existing_atom) when is_bitstring(key),
    do: String.to_existing_atom(key)

  defp parse_key_type(key, :unsave_atom) when is_atom(key), do: key
  defp parse_key_type(key, :unsave_atom) when is_bitstring(key), do: String.to_atom(key)
  defp parse_key_type(key, :string) when is_bitstring(key), do: key
  defp parse_key_type(key, :string), do: to_string(key)
  defp parse_key_type(key, _), do: key

  defp parse_snake_case(map, opts) do
    Enum.map(map, fn {key, val} ->
      {convert_to_snake_case(key, opts), parse(val, opts)}
    end)
    |> Enum.into(%{})
  end

  defp convert_to_snake_case(key, opts) when is_bitstring(key) do
    key_type = Keyword.get(opts, :key_type, :string)

    key
    |> Macro.underscore()
    |> parse_key_type(key_type)
  end

  defp convert_to_snake_case(key, opts) when is_atom(key) do
    key_type = Keyword.get(opts, :key_type)

    new_key =
      key
      |> to_string()
      |> Macro.underscore()

    if key_type do
      parse_key_type(new_key, key_type)
    else
      String.to_existing_atom(new_key)
    end
  end

  defp convert_to_snake_case(key, opts) do
    key_type = Keyword.get(opts, :key_type)
    key |> parse_key_type(key_type)
  end

  defp parse_camelcase_keys(map, opts) do
    Enum.map(map, fn {key, val} ->
      {convert_to_camelcase(key, opts), parse(val, opts)}
    end)
    |> Enum.into(%{})
  end

  defp convert_to_camelcase(key, opts) when is_bitstring(key) do
    first_char = String.first(key)
    key_type = Keyword.get(opts, :key_type, :string)

    key
    |> Macro.camelize()
    |> String.replace_prefix(String.upcase(first_char), first_char)
    |> parse_key_type(key_type)
  end

  defp convert_to_camelcase(key, opts) when is_atom(key) do
    key_type = Keyword.get(opts, :key_type)

    new_key =
      key
      |> to_string()
      |> convert_to_camelcase(opts)

    if key_type do
      parse_key_type(new_key, key_type)
    else
      String.to_existing_atom(new_key)
    end
  end

  defp convert_to_camelcase(key, opts) do
    key_type = Keyword.get(opts, :key_type)
    parse_key_type(key, key_type)
  end
end
lib/nice_maps.ex
0.817429
0.585397
nice_maps.ex
starcoder
defmodule XDR.String do
  @moduledoc """
  This module manages the `String` type based on the RFC4506 XDR Standard.
  """

  @behaviour XDR.Declaration

  alias XDR.VariableOpaque
  alias XDR.Error.String, as: StringError

  defstruct [:string, :max_length]

  @typedoc """
  `XDR.String` structure type specification.
  """
  @type t :: %XDR.String{string: binary(), max_length: integer}

  @doc """
  Create a new `XDR.String` structure with the `string` and `max_length` passed.
  """
  @spec new(string :: bitstring(), max_length :: integer()) :: t
  def new(string, max_length \\ 4_294_967_295)
  def new(string, max_length), do: %XDR.String{string: string, max_length: max_length}

  @impl XDR.Declaration
  @doc """
  Encode a `XDR.String` structure into an XDR format.
  """
  @spec encode_xdr(string :: t) :: {:ok, binary} | {:error, :not_bitstring | :invalid_length}
  def encode_xdr(%{string: string}) when not is_bitstring(string),
    do: {:error, :not_bitstring}

  def encode_xdr(%{string: string, max_length: max_length}) when byte_size(string) > max_length,
    do: {:error, :invalid_length}

  def encode_xdr(%{string: string, max_length: max_length}) do
    variable_opaque =
      string
      |> VariableOpaque.new(max_length)
      |> VariableOpaque.encode_xdr!()

    {:ok, variable_opaque}
  end

  @impl XDR.Declaration
  @doc """
  Encode a `XDR.String` structure into an XDR format.
  If the `string` is not valid, an exception is raised.
  """
  @spec encode_xdr!(string :: t) :: binary
  def encode_xdr!(string) do
    case encode_xdr(string) do
      {:ok, binary} -> binary
      {:error, reason} -> raise(StringError, reason)
    end
  end

  @impl XDR.Declaration
  @doc """
  Decode the String in XDR format to a `XDR.String` structure.
  """
  @spec decode_xdr(bytes :: binary, string :: t | map()) ::
          {:ok, {t, binary()}} | {:error, :not_binary}
  def decode_xdr(bytes, string \\ %{max_length: 4_294_967_295})
  def decode_xdr(bytes, _string) when not is_binary(bytes), do: {:error, :not_binary}

  def decode_xdr(bytes, %{max_length: max_length}) do
    variable_struct = VariableOpaque.new(nil, max_length)
    {binary, rest} = VariableOpaque.decode_xdr!(bytes, variable_struct)

    decoded_string =
      binary
      |> Map.get(:opaque)
      |> String.graphemes()
      |> Enum.join("")
      |> new(max_length)

    {:ok, {decoded_string, rest}}
  end

  @impl XDR.Declaration
  @doc """
  Decode the String in XDR format to a `XDR.String` structure.
  If the binaries are not valid, an exception is raised.
  """
  @spec decode_xdr!(bytes :: binary, string :: t | map()) :: {t, binary()}
  def decode_xdr!(bytes, string \\ %{max_length: 4_294_967_295})

  def decode_xdr!(bytes, string) do
    case decode_xdr(bytes, string) do
      {:ok, result} -> result
      {:error, reason} -> raise(StringError, reason)
    end
  end
end
lib/xdr/string.ex
0.923996
0.511168
string.ex
starcoder
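An encode/decode round trip through the functions above; `XDR.VariableOpaque` handles the length prefix and 4-byte padding on the wire:

```elixir
encoded =
  "kate"
  |> XDR.String.new()
  |> XDR.String.encode_xdr!()

# decode_xdr!/2 returns the decoded struct plus any trailing bytes
{%XDR.String{string: "kate"}, ""} = XDR.String.decode_xdr!(encoded)
```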
defmodule Gate do
  @moduledoc """
  Validates a map of params against a schema map, returning either the
  validated output or a map of errors.
  """

  alias Gate.Validator
  alias Gate.Locale

  defstruct params: %{}, schema: %{}, errors: %{}, output: %{}

  def valid?(params, schema) when is_map(schema) do
    %Gate{params: params, schema: schema}
    |> validate()
  end

  def valid?(attribute, schema), do: Validator.validate(attribute, schema)

  defp validate(%{schema: schema} = result) when schema == %{} do
    if result.errors == %{} do
      {:ok, result.output}
    else
      {:error, result.errors}
    end
  end

  defp validate(tracker) do
    key = tracker.schema |> Map.keys() |> List.first()

    if Map.has_key?(tracker.params, key) do
      if is_map(tracker.schema[key]) do
        case validate(%{
               tracker
               | schema: tracker.schema[key],
                 params: tracker.params[key],
                 errors: %{},
                 output: %{}
             }) do
          {:ok, nested_params} -> nested_valid(key, tracker, nested_params)
          {:error, nested_errors} -> nested_invalid(key, tracker, nested_errors)
        end
      else
        case Validator.validate(tracker.params[key], tracker.schema[key]) do
          true -> valid(key, tracker)
          error -> invalid(key, tracker, error)
        end
      end
    else
      if optional?(tracker.schema[key]) do
        validate(tracker |> delete(key))
      else
        missing(key, tracker)
      end
    end
  end

  defp nested_valid(key, tracker, nested_params) do
    validate(%{tracker |> delete(key) | output: tracker.output |> Map.put(key, nested_params)})
  end

  defp nested_invalid(key, tracker, nested_errors) do
    validate(%{tracker |> delete(key) | errors: tracker.errors |> Map.put(key, nested_errors)})
  end

  defp valid(key, tracker) do
    validate(%{tracker |> delete(key) | output: tracker.output |> Map.put(key, tracker.params[key])})
  end

  defp invalid(key, tracker, error) do
    validate(%{tracker |> delete(key) | errors: tracker.errors |> Map.put(key, error)})
  end

  defp missing(key, tracker) do
    validate(%{tracker |> delete(key) | errors: tracker.errors |> Map.put(key, Locale.get("missing"))})
  end

  defp optional?(validations) when is_list(validations), do: Enum.member?(validations, :optional)
  defp optional?(validation), do: validation == :optional

  defp delete(tracker, key), do: %{tracker | schema: tracker.schema |> Map.delete(key)}
end
lib/gate.ex
0.656108
0.463566
gate.ex
starcoder
defmodule Bpmn.Context do
  @moduledoc """
  Bpmn.Engine.Context
  ===================

  The context is an important part of executing a BPMN process. It allows you
  to keep track of any data changes in the execution of the process as well as
  monitor the execution state of your process.

  BPMN execution context for a process. It contains:

  - the list of active nodes for a process
  - the list of completed nodes
  - the initial data received when starting the process
  - the current data in the process
  - the current process
  - extra information about the execution context
  """

  @doc "Start the context process"
  def start_link(process, initData) do
    Agent.start_link(fn ->
      %{
        # initial data for the context passed down from an external system
        init: initData,
        # current data saved in the context
        data: %{},
        # the process definition to execute on
        process: process,
        # information about each node that is executed
        nodes: %{}
      }
    end)
  end

  @doc """
  Get a key from the current state of the context. Use this method to have
  access to the following:

  - init: the initial data with which the context was started
  - data: the current data saved in the context
  - process: a representation of the current process that is executing
  - nodes: metadata about each node
  """
  def get(context, key) do
    Agent.get(context, fn state -> state[key] end)
  end

  @doc """
  Persist a value under the given key in the data state of the context.
  """
  def put_data(context, key, value) do
    Agent.update(context, fn state ->
      update_in state.data, &Map.put(&1, key, value)
      # %{state | data: Map.put(state.data, key, value)}
    end)
  end

  @doc """
  Load some information from the current data of the context from the given key
  """
  def get_data(context, key) do
    Agent.get(context, fn state -> state.data[key] end)
  end

  @doc """
  Put metadata information for a node.
  """
  def put_meta(context, key, meta) do
    Agent.update(context, fn state ->
      update_in state.nodes, &Map.put(&1, key, meta)
    end)
  end

  @doc """
  Get meta data for a node
  """
  def get_meta(context, key) do
    Agent.get(context, fn state -> state.nodes[key] end)
  end

  @doc """
  Check if the node is active
  """
  def is_node_active(context, key) do
    Agent.get(context, fn state -> state.nodes[key].active end)
  end

  @doc """
  Check if the node is completed
  """
  def is_node_completed(context, key) do
    Agent.get(context, fn state -> state.nodes[key].completed end)
  end
end
lib/bpmn/context.ex
0.708213
0.733237
context.ex
starcoder
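Because the context above is a plain `Agent`, it can be driven directly; a quick walkthrough of the accessors (the process map is a stand-in for a real parsed process definition):

```elixir
{:ok, ctx} = Bpmn.Context.start_link(%{id: "order-flow"}, %{customer: "acme"})

# the initial data is kept separately from the mutable data
%{customer: "acme"} = Bpmn.Context.get(ctx, :init)

Bpmn.Context.put_data(ctx, :approved, true)
true = Bpmn.Context.get_data(ctx, :approved)

Bpmn.Context.put_meta(ctx, :review_task, %{active: true, completed: false})
true = Bpmn.Context.is_node_active(ctx, :review_task)
false = Bpmn.Context.is_node_completed(ctx, :review_task)
```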
defmodule Number.Human do
  @moduledoc """
  Provides functions for converting numbers into more human readable strings.
  """

  import Number.Delimit, only: [number_to_delimited: 2]
  import Decimal, only: [cmp: 2]

  @doc """
  Formats and labels a number with the appropriate English word.

  ## Examples

      iex> Number.Human.number_to_human(123)
      "123.00"

      iex> Number.Human.number_to_human(1234)
      "1.23 Thousand"

      iex> Number.Human.number_to_human(999001)
      "999.00 Thousand"

      iex> Number.Human.number_to_human(1234567)
      "1.23 Million"

      iex> Number.Human.number_to_human(1234567890)
      "1.23 Billion"

      iex> Number.Human.number_to_human(1234567890123)
      "1.23 Trillion"

      iex> Number.Human.number_to_human(1234567890123456)
      "1.23 Quadrillion"

      iex> Number.Human.number_to_human(1234567890123456789)
      "1,234.57 Quadrillion"

      iex> Number.Human.number_to_human(Decimal.new("5000.0"))
      "5.00 Thousand"

      iex> Number.Human.number_to_human('charlist')
      ** (ArgumentError) number must be a float, integer or implement `Number.Conversion` protocol, was 'charlist'

  """
  def number_to_human(number, options \\ [], locale \\ "en_US")

  def number_to_human(number, options, locale) when not is_map(number) do
    if Number.Conversion.impl_for(number) do
      number
      |> Number.Conversion.to_decimal()
      |> number_to_human(options, locale)
    else
      raise ArgumentError,
            "number must be a float, integer or implement `Number.Conversion` protocol, was #{inspect(number)}"
    end
  end

  def number_to_human(number, options, locale) do
    case locale do
      "zh_CN" -> number_to_human_cn(number, options)
      _ -> number_to_human_en(number, options)
    end
  end

  defp number_to_human_cn(number, options) do
    cond do
      cmp(number, ~d(-1_0000_0000)) in [:lt, :eq] ->
        delimit(number, ~d(-1_0000_0000), "亿", options)

      cmp(number, ~d(-1_0000)) in [:lt, :eq] && cmp(number, ~d(-1_0000_0000)) == :gt ->
        delimit(number, ~d(-1_0000), "万", options)

      cmp(number, ~d(9999)) == :gt && cmp(number, ~d(1_0000_0000)) == :lt ->
        delimit(number, ~d(1_0000), "万", options)

      cmp(number, ~d(1_0000_0000)) in [:gt, :eq] ->
        delimit(number, ~d(1_0000_0000), "亿", options)

      true ->
        number_to_delimited(number, options)
    end
  end

  defp number_to_human_en(number, options) do
    cond do
      cmp(number, ~d(-1_000_000_000_000_000)) == :gt &&
          cmp(number, ~d(-1_000_000_000_000)) in [:lt, :eq] ->
        delimit(number, ~d(-1_000_000_000_000), "t", options)

      cmp(number, ~d(-1_000_000_000_000)) == :gt &&
          cmp(number, ~d(-1_000_000_000)) in [:lt, :eq] ->
        delimit(number, ~d(-1_000_000_000), "b", options)

      cmp(number, ~d(-1_000_000_000)) == :gt && cmp(number, ~d(-1_000_000)) in [:lt, :eq] ->
        delimit(number, ~d(-1_000_000), "m", options)

      cmp(number, ~d(-1_000_000)) == :gt && cmp(number, ~d(-1_000)) in [:lt, :eq] ->
        delimit(number, ~d(-1_000), "k", options)

      cmp(number, ~d(999)) == :gt && cmp(number, ~d(1_000_000)) == :lt ->
        delimit(number, ~d(1_000), "k", options)

      cmp(number, ~d(1_000_000)) in [:gt, :eq] and cmp(number, ~d(1_000_000_000)) == :lt ->
        delimit(number, ~d(1_000_000), "m", options)

      cmp(number, ~d(1_000_000_000)) in [:gt, :eq] and
          cmp(number, ~d(1_000_000_000_000)) == :lt ->
        delimit(number, ~d(1_000_000_000), "b", options)

      cmp(number, ~d(1_000_000_000_000)) == :gt and
          cmp(number, ~d(1_000_000_000_000_000)) == :lt ->
        delimit(number, ~d(1_000_000_000_000), "t", options)

      true ->
        number_to_delimited(number, options)
    end
  end

  @doc """
  Adds ordinal suffix (st, nd, rd or th) for the number

  ## Examples

      iex> Number.Human.number_to_ordinal(3)
      "3rd"

      iex> Number.Human.number_to_ordinal(1)
      "1st"

      iex> Number.Human.number_to_ordinal(46)
      "46th"

      iex> Number.Human.number_to_ordinal(442)
      "442nd"

      iex> Number.Human.number_to_ordinal(4001)
      "4001st"

  """
  def number_to_ordinal(number) when is_integer(number) do
    sfx = ~w(th st nd rd th th th th th th)

    Integer.to_string(number) <>
      case rem(number, 100) do
        11 -> "th"
        12 -> "th"
        13 -> "th"
        _ -> Enum.at(sfx, rem(number, 10))
      end
  end

  defp sigil_d(number, _modifiers) do
    number
    |> String.replace("_", "")
    |> String.to_integer()
    |> Decimal.new()
  end

  defp delimit(number, divisor, label, options) do
    number =
      number
      |> Decimal.div(Decimal.abs(divisor))
      |> number_to_delimited(options)

    number <> "" <> label
  end
end
lib/number/human.ex
0.817064
0.507751
human.ex
starcoder
defmodule Waffle.Ecto.Schema do
  @moduledoc ~S"""
  Defines helpers to work with changeset.

  Add a using statement `use Waffle.Ecto.Schema` to the top of your ecto schema,
  and specify the type of the column in your schema as `MyApp.Avatar.Type`.

  Attachments can subsequently be passed to Waffle's storage through a Changeset
  `cast_attachments/3` function, following the syntax of `cast/3`.

  ## Example

      defmodule MyApp.User do
        use MyApp.Web, :model
        use Waffle.Ecto.Schema

        schema "users" do
          field :name, :string
          field :avatar, MyApp.Uploaders.AvatarUploader.Type
        end

        def changeset(user, params \\ :invalid) do
          user
          |> cast(params, [:name])
          |> cast_attachments(params, [:avatar])
          |> validate_required([:name, :avatar])
        end
      end

  """
  defmacro __using__(_) do
    quote do
      import Waffle.Ecto.Schema
    end
  end

  @doc ~S"""
  Extracts attachments from params and converts them to the accepted format.

  ## Options

  * `:allow_urls` — fetches remote file if the string matches `~r/^https?:\/\//`
  * `:allow_paths` — accepts any local path as file destination

  ## Examples

      cast_attachments(changeset, params, [:fetched_remote_file], allow_urls: true)

  """
  defmacro cast_attachments(changeset_or_data, params, allowed, options \\ []) do
    quote bind_quoted: [
            changeset_or_data: changeset_or_data,
            params: params,
            allowed: allowed,
            options: options
          ] do
      # If given a changeset, apply the changes to obtain the underlying data
      scope = do_apply_changes(changeset_or_data)

      # Cast supports both atom and string keys, ensure we're matching on both.
      allowed_param_keys =
        Enum.map(allowed, fn key ->
          case key do
            key when is_binary(key) -> key
            key when is_atom(key) -> Atom.to_string(key)
          end
        end)

      waffle_params =
        case params do
          :invalid ->
            :invalid

          %{} ->
            params
            |> convert_params_to_binary()
            |> Map.take(allowed_param_keys)
            |> check_and_apply_scope(scope, options)
            |> Enum.into(%{})
        end

      Ecto.Changeset.cast(changeset_or_data, waffle_params, allowed)
    end
  end

  def do_apply_changes(%Ecto.Changeset{} = changeset), do: Ecto.Changeset.apply_changes(changeset)
  def do_apply_changes(%{__meta__: _} = data), do: data

  def check_and_apply_scope(params, scope, options) do
    Enum.reduce(params, [], fn
      # Don't wrap nil casts in the scope object
      {field, nil}, fields ->
        [{field, nil} | fields]

      # Allow casting Plug.Uploads
      {field, upload = %{__struct__: Plug.Upload}}, fields ->
        [{field, {upload, scope}} | fields]

      # Allow casting binary data structs
      {field, upload = %{filename: filename, binary: binary}}, fields
      when is_binary(filename) and is_binary(binary) ->
        [{field, {upload, scope}} | fields]

      # If casting a binary (path), ensure we've explicitly allowed paths
      {field, path}, fields when is_binary(path) ->
        path = String.trim(path)

        cond do
          path == "" ->
            fields

          Keyword.get(options, :allow_urls, false) and Regex.match?(~r/^https?:\/\//, path) ->
            [{field, {path, scope}} | fields]

          Keyword.get(options, :allow_paths, false) ->
            [{field, {path, scope}} | fields]

          true ->
            fields
        end
    end)
  end

  def convert_params_to_binary(params) do
    Enum.reduce(params, nil, fn
      {key, _value}, nil when is_binary(key) ->
        nil

      {key, _value}, _ when is_binary(key) ->
        raise ArgumentError,
              "expected params to be a map with atoms or string keys, " <>
                "got a map with mixed keys: #{inspect(params)}"

      {key, value}, acc when is_atom(key) ->
        Map.put(acc || %{}, Atom.to_string(key), value)
    end) || params
  end
end
lib/waffle_ecto/schema.ex
0.876363
0.400515
schema.ex
starcoder
defmodule ExPng.Image.Filtering do
  @moduledoc """
  This module contains code for filtering and defiltering lines of bytestring
  image data.
  """
  use Bitwise
  use ExPng.Constants

  import ExPng.Utilities, only: [reduce_to_binary: 1]

  @type filtered_line :: {ExPng.filter(), binary}

  def none, do: @filter_none
  def sub, do: @filter_sub
  def up, do: @filter_up
  def average, do: @filter_average
  def paeth, do: @filter_paeth

  @doc """
  Passes a line of filtered pixel data through a filtering algorithm based on
  its filter type, and returns the unfiltered original data.
  """
  @spec unfilter(filtered_line, ExPng.bit_depth(), ExPng.maybe(filtered_line | binary)) :: binary
  def unfilter(line, pixel_size, prev_line \\ nil)
  def unfilter({@filter_none, line}, _, _), do: line

  def unfilter({@filter_sub, data}, pixel_size, _) do
    [base | chunks] = for <<pixel::bytes-size(pixel_size) <- data>>, do: pixel

    Enum.reduce(chunks, [base], fn chunk, [prev | _] = acc ->
      new =
        Enum.reduce(0..(pixel_size - 1), <<>>, fn i, acc ->
          <<_::bytes-size(i), bit, _::binary>> = chunk
          <<_::bytes-size(i), a, _::binary>> = prev
          acc <> <<bit + a &&& 0xFF>>
        end)

      [new | acc]
    end)
    # this handles the reversal...something something fold direction...
    |> Enum.reduce(&Kernel.<>/2)
  end

  def unfilter({@filter_up, data}, _, nil), do: data

  def unfilter({@filter_up, _} = line, pixel_size, {_, prev_data}) do
    unfilter(line, pixel_size, prev_data)
  end

  def unfilter({@filter_up, data}, _pixel_size, prev_data) do
    data = for <<pixel <- data>>, do: pixel
    prev = for <<pixel <- prev_data>>, do: pixel

    Enum.zip(data, prev)
    |> Enum.map(fn {byte, prev} -> <<byte + prev &&& 0xFF>> end)
    |> reduce_to_binary()
  end

  def unfilter({@filter_average, data} = line, pixel_size, nil) do
    prev_data = build_pad_for_filter(byte_size(data))
    unfilter(line, pixel_size, prev_data)
  end

  def unfilter({@filter_average, _} = line, pixel_size, {_, prev_data}) do
    unfilter(line, pixel_size, prev_data)
  end

  def unfilter({@filter_average, data}, pixel_size, prev_data) do
    pad = build_pad_for_filter(pixel_size)
    data = for <<pixel::bytes-size(pixel_size) <- data>>, do: pixel
    prev_line = for <<pixel::bytes-size(pixel_size) <- prev_data>>, do: pixel

    Enum.reduce(Enum.with_index(data), [pad], fn {byte, i}, [prev | _] = acc ->
      prev_line = Enum.at(prev_line, i)

      filtered_byte =
        Enum.reduce(0..(pixel_size - 1), <<>>, fn j, acc ->
          <<_::bytes-size(j), bit, _::binary>> = byte
          <<_::bytes-size(j), a, _::binary>> = prev
          <<_::bytes-size(j), b, _::binary>> = prev_line
          avg = (a + b) >>> 1
          acc <> <<avg + bit &&& 0xFF>>
        end)

      [filtered_byte | acc]
    end)
    |> Enum.reverse()
    |> Enum.drop(1)
    |> reduce_to_binary()
  end

  def unfilter({@filter_paeth, data} = line, pixel_size, nil) do
    prev_data = build_pad_for_filter(byte_size(data))
    unfilter(line, pixel_size, prev_data)
  end

  def unfilter({@filter_paeth, _} = line, pixel_size, {_, prev_data}) do
    unfilter(line, pixel_size, prev_data)
  end

  def unfilter({@filter_paeth, data}, pixel_size, prev_data) do
    pad = build_pad_for_filter(pixel_size)
    data = for <<pixel::bytes-size(pixel_size) <- data>>, do: pixel
    prev_line = for <<pixel::bytes-size(pixel_size) <- prev_data>>, do: pixel

    Enum.chunk_every([pad | prev_line], 2, 1, :discard)
    |> Enum.zip(data)
    |> Enum.reduce([pad], fn {[c_byte, b_byte], byte}, [a_byte | _] = acc ->
      filtered_byte =
        Enum.reduce(0..(pixel_size - 1), <<>>, fn j, acc ->
          <<_::bytes-size(j), x, _::binary>> = byte
          <<_::bytes-size(j), a, _::binary>> = a_byte
          <<_::bytes-size(j), b, _::binary>> = b_byte
          <<_::bytes-size(j), c, _::binary>> = c_byte
          delta = calculate_paeth_delta(a, b, c)
          acc <> <<delta + x &&& 0xFF>>
        end)

      [filtered_byte | acc]
    end)
    |> Enum.reverse()
    |> Enum.drop(1)
    |> reduce_to_binary()
  end

  def apply_filter(line, pixel_size, prev_line \\ nil)
  def apply_filter({@filter_none, data}, _, _), do: data

  def apply_filter({@filter_sub, data}, pixel_size, _) do
    [base | _] = pixels = for <<chunk::bytes-size(pixel_size) <- data>>, do: chunk

    tail =
      Enum.chunk_every(pixels, 2, 1, :discard)
      |> Enum.map(fn [prev, pixel] ->
        Enum.reduce(0..(pixel_size - 1), <<>>, fn i, acc ->
          <<_::bytes-size(i), bit, _::binary>> = pixel
          <<_::bytes-size(i), a, _::binary>> = prev
          acc <> <<bit - a &&& 0xFF>>
        end)
      end)

    [base | tail]
    |> reduce_to_binary()
  end

  def apply_filter({@filter_up, data}, _, nil), do: data

  def apply_filter({@filter_up, _} = line, pixel_size, {_, prev}) do
    apply_filter(line, pixel_size, prev)
  end

  def apply_filter({@filter_up, data}, _, prev) do
    data = for <<pixel <- data>>, do: pixel
    prev = for <<pixel <- prev>>, do: pixel

    Enum.zip(data, prev)
    |> Enum.map(fn {byte, prev} -> <<byte - prev &&& 0xFF>> end)
    |> reduce_to_binary()
  end

  def apply_filter({@filter_average, _} = line, pixel_size, {_, prev_data}) do
    apply_filter(line, pixel_size, prev_data)
  end

  def apply_filter({@filter_average, data}, pixel_size, prev) do
    pad = build_pad_for_filter(pixel_size)
    data = for <<pixel::bytes-size(pixel_size) <- data>>, do: pixel
    prev_line = for <<pixel::bytes-size(pixel_size) <- prev>>, do: pixel
    line = Enum.chunk_every([pad | data], 2, 1, :discard)

    Enum.zip(line, prev_line)
    |> Enum.map(fn {[a_byte, byte], b_byte} ->
      Enum.reduce(0..(pixel_size - 1), <<>>, fn j, acc ->
        <<_::bytes-size(j), byte, _::binary>> = byte
        <<_::bytes-size(j), a, _::binary>> = a_byte
        <<_::bytes-size(j), b, _::binary>> = b_byte
        avg = (a + b) >>> 1
        acc <> <<byte - avg &&& 0xFF>>
      end)
    end)
    |> reduce_to_binary()
  end

  def apply_filter({@filter_paeth, _} = line, pixel_size, {_, prev_data}) do
    apply_filter(line, pixel_size, prev_data)
  end

  def apply_filter({@filter_paeth, data}, pixel_size, prev_data) do
    pad = build_pad_for_filter(pixel_size)
    data = for <<pixel::bytes-size(pixel_size) <- data>>, do: pixel
    prev_line = for <<pixel::bytes-size(pixel_size) <- prev_data>>, do: pixel

    Enum.chunk_every([pad | prev_line], 2, 1, :discard)
    |> Enum.zip(Enum.chunk_every([pad | data], 2, 1, :discard))
    |> Enum.map(fn {[c_byte, b_byte], [a_byte, byte]} ->
      Enum.reduce(0..(pixel_size - 1), <<>>, fn j, acc ->
        <<_::bytes-size(j), x, _::binary>> = byte
        <<_::bytes-size(j), a, _::binary>> = a_byte
        <<_::bytes-size(j), b, _::binary>> = b_byte
        <<_::bytes-size(j), c, _::binary>> = c_byte
        delta = calculate_paeth_delta(a, b, c)
        acc <> <<x - delta &&& 0xFF>>
      end)
    end)
    |> reduce_to_binary()
  end

  defp calculate_paeth_delta(a, b, c) do
    p = a + b - c
    pa = abs(p - a)
    pb = abs(p - b)
    pc = abs(p - c)

    cond do
      pa <= pb && pa <= pc -> a
      pb <= pc -> b
      true -> c
    end
  end

  defp build_pad_for_filter(pixel_size) do
    Stream.cycle([<<0>>])
    |> Enum.take(pixel_size)
    |> Enum.reduce(&Kernel.<>/2)
  end
end
lib/ex_png/image/filtering.ex
0.733833
0.577793
filtering.ex
starcoder
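A tiny worked example of the `sub` filter path above: with a pixel size of 1 byte, each filtered byte stores the difference from its left neighbour, so unfiltering adds each byte back to the reconstructed pixel on its left:

```elixir
alias ExPng.Image.Filtering

filtered = {Filtering.sub(), <<10, 5>>}

# 10 stays as-is (no left neighbour); 5 becomes 10 + 5 = 15
<<10, 15>> = Filtering.unfilter(filtered, 1)

# applying the filter to the unfiltered bytes restores the original line
<<10, 5>> = Filtering.apply_filter({Filtering.sub(), <<10, 15>>}, 1)
```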
defmodule TuringMachine.Program do
  @moduledoc """
  Programs for a turing machine.

  You can define programs by directly constructing a list of 5-tuples, or by
  generating them from code with `from_string/1` or `from_file/1`.

  The 5-tuple `{0, 1, 2, :right, 3}` describes a command: when the machine
  state is `0` and the value of the tape at the current position is `1`,
  turn it into `2`, move `:right`, and set the machine state to `3`.
  """

  @type direction :: :right | :left | :stay | 1 | -1 | 0
  @type t :: [
          {
            TuringMachine.state,
            TuringMachine.value,
            TuringMachine.value,
            direction,
            TuringMachine.state
          }
        ]

  @doc """
  Transforms `direction` into diff of position.
  """
  @spec direction_to_diff(direction) :: 1 | -1 | 0
  def direction_to_diff(direction) do
    case direction do
      :right -> 1
      :left -> -1
      :stay -> 0
      diff when is_integer(diff) -> diff
    end
  end

  @doc """
  Interprets a string as a direction.

  Returns `:error` when it fails, `{:ok, direction}` otherwise.

  `R`, `L`, `S`, `right`, `left`, `stay`, `1`, `-1`, `0` are supported
  (case sensitive).
  """
  @spec direction_from_string(String.t) :: {:ok, direction} | :error
  %{
    "R" => :right,
    "L" => :left,
    "S" => :stay,
    "right" => :right,
    "left" => :left,
    "stay" => :stay,
    "1" => 1,
    "-1" => -1,
    "0" => 0,
  }
  |> Enum.each(fn {dir_str, dir} ->
    def direction_from_string(unquote(dir_str)), do: {:ok, unquote(dir)}
  end)

  def direction_from_string(_other), do: :error

  @doc """
  Generates a program from the given `code`.

  Each line of code is converted into a command.
  For example, `"0, 1, 2, R, 3"` becomes `{"0", "1", "2", :right, "3"}`.

  A command is described by a comma-separated 5-tuple.
  Spaces before and after each element are trimmed.

  Characters after `#` are ignored, so you can insert comments like:
  `"0, 1, 2, R, 3 # This is a comment"`

  The direction can be written as one of the following:
  `R`, `L`, `S`, `right`, `left`, `stay`, `1`, `-1`, `0`

  Lines that don't match the command pattern are simply ignored.
  """
  @spec from_string(String.t) :: t
  def from_string(code) do
    code
    |> String.replace(~r/#.*/, "")
    |> String.split("\n")
    |> Enum.map(&String.split(&1, ","))
    |> Enum.filter_map(
      fn list -> length(list) == 5 end,
      fn list -> Enum.map(list, &String.trim/1) end
    )
    |> Enum.filter_map(
      fn [_, _, _, dir_str, _] -> match?({:ok, _}, direction_from_string(dir_str)) end,
      fn [state0, value0, value1, dir_str, state1] ->
        {:ok, dir} = direction_from_string(dir_str)
        {state0, value0, value1, dir, state1}
      end
    )
  end

  @doc """
  Reads code from a file.

  Raises when it fails to read the file.
  """
  @spec from_file(String.t) :: t | none
  def from_file(path) do
    from_string(File.read!(path))
  end
end
lib/program.ex
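A short usage sketch of the parser above; the two-command program text is made up for illustration, and note that state/value elements stay strings:

```elixir
# Comments after `#` are stripped, commas separate the five fields,
# and "R"/"S" map to :right/:stay per direction_from_string/1.
code = """
0, 1, 1, R, 0   # keep moving right over 1s
0, _, _, S, halt
"""

TuringMachine.Program.from_string(code)
#=> [{"0", "1", "1", :right, "0"}, {"0", "_", "_", :stay, "halt"}]
```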
defmodule Litmus.Type.Number do @moduledoc """ This type validates that values are numbers, and converts them to numbers if possible. It converts "stringified" numerical values to numbers. ## Options * `:default` - Setting `:default` will populate a field with the provided value, assuming that it is not present already. If a field already has a value present, it will not be altered. * `:min` - Specifies the minimum value of the field. * `:max` - Specifies the maximum value of the field. * `:integer` - Specifies that the number must be an integer (no floating point). Allowed values are `true` and `false`. The default is `false`. * `:required` - Setting `:required` to `true` will cause a validation error when a field is not present or the value is `nil`. Allowed values for required are `true` and `false`. The default is `false`. ## Examples iex> schema = %{ ...> "id" => %Litmus.Type.Number{ ...> integer: true ...> }, ...> "gpa" => %Litmus.Type.Number{ ...> min: 0, ...> max: 4 ...> } ...> } iex> params = %{"id" => "123", "gpa" => 3.8} iex> Litmus.validate(params, schema) {:ok, %{"id" => 123, "gpa" => 3.8}} iex> params = %{"id" => "123.456", "gpa" => 3.8} iex> Litmus.validate(params, schema) {:error, "id must be an integer"} """ defstruct [ :min, :max, default: Litmus.Type.Any.NoDefault, integer: false, required: false ] @type t :: %__MODULE__{ default: any, min: number | nil, max: number | nil, integer: boolean, required: boolean } alias Litmus.{Default, Required} @spec validate_field(t, binary, map) :: {:ok, map} | {:error, binary} def validate_field(type, field, data) do with {:ok, data} <- Required.validate(type, field, data), {:ok, data} <- Default.validate(type, field, data), {:ok, data} <- convert(type, field, data), {:ok, data} <- min_validate(type, field, data), {:ok, data} <- max_validate(type, field, data), {:ok, data} <- integer_validate(type, field, data) do {:ok, data} else {:error, msg} -> {:error, msg} end end @spec convert(t, binary, map) :: {:ok, map} | {:error, binary} defp convert(%__MODULE__{}, field, params) do cond do !Map.has_key?(params, field) -> {:ok, params} params[field] == nil -> {:ok, params} is_number(params[field]) -> {:ok, params} is_binary(params[field]) && string_to_number(params[field]) -> modified_value = string_to_number(params[field]) {:ok, Map.put(params, field, modified_value)} true -> {:error, "#{field} must be a number"} end end @spec string_to_number(binary) :: number | nil defp string_to_number(str) do str = if String.starts_with?(str, "."), do: "0" <> str, else: str cond do int = string_to_integer(str) -> int float = string_to_float(str) -> float true -> nil end end @spec string_to_integer(binary) :: number | nil defp string_to_integer(str) do case Integer.parse(str) do {num, ""} -> num _ -> nil end end @spec string_to_float(binary) :: number | nil defp string_to_float(str) do case Float.parse(str) do {num, ""} -> num _ -> nil end end @spec integer_validate(t, binary, map) :: {:ok, map} | {:error, binary} defp integer_validate(%__MODULE__{integer: false}, _field, params) do {:ok, params} end defp integer_validate(%__MODULE__{integer: true}, field, params) do if Map.has_key?(params, field) && !is_integer(params[field]) do {:error, "#{field} must be an integer"} else {:ok, params} end end @spec min_validate(t, binary, map) :: {:ok, map} | {:error, binary} defp min_validate(%__MODULE__{min: nil}, _field, params) do {:ok, params} end defp min_validate(%__MODULE__{min: min}, field, params) when is_number(min) do if Map.has_key?(params, field) && 
params[field] < min do {:error, "#{field} must be greater than or equal to #{min}"} else {:ok, params} end end @spec max_validate(t, binary, map) :: {:ok, map} | {:error, binary} defp max_validate(%__MODULE__{max: nil}, _field, params) do {:ok, params} end defp max_validate(%__MODULE__{max: max}, field, params) when is_number(max) do if Map.has_key?(params, field) && params[field] > max do {:error, "#{field} must be less than or equal to #{max}"} else {:ok, params} end end defimpl Litmus.Type do alias Litmus.Type @spec validate(Type.t(), binary, map) :: {:ok, map} | {:error, binary} def validate(type, field, data), do: Type.Number.validate_field(type, field, data) end end
lib/litmus/type/number.ex
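A hedged usage sketch of the validator above, showing the string-to-number conversion running before the `min`/`integer` checks (the `"age"` schema is illustrative):

```elixir
schema = %{
  "age" => %Litmus.Type.Number{integer: true, min: 0, required: true}
}

# "42" is converted to 42 before the integer/min checks run.
Litmus.validate(%{"age" => "42"}, schema)
#=> {:ok, %{"age" => 42}}

# -1 converts fine but fails the :min check.
Litmus.validate(%{"age" => "-1"}, schema)
#=> {:error, "age must be greater than or equal to 0"}
```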
defmodule Adventofcode.Day03SpiralMemory do require Integer def steps_to_access_port(input) when is_binary(input) do steps_to_access_port(String.to_integer(input)) end def steps_to_access_port(1), do: 0 def steps_to_access_port(value) when is_number(value) do circle = get_inner_circle(value) smaller_area = :math.pow(circle, 2) |> round() bigger_area = :math.pow(circle + 2, 2) |> round() diff_area = (bigger_area - smaller_area) |> round() side_len = (diff_area / 4) |> Float.floor() |> round() len = rem(value - smaller_area - 1, side_len) mid = (side_len / 2 - 1) |> Float.floor() |> round() out_steps = ((circle + 1) / 2) |> Float.floor() |> round() cond do len == mid -> out_steps len < mid -> out_steps + mid - len len > mid -> out_steps + len - mid end end defp get_inner_circle(value) do rounded_sqrt = :math.pow(value - 1, 1 / 2) |> Float.floor() |> round() if Integer.is_even(rounded_sqrt), do: rounded_sqrt - 1, else: rounded_sqrt end def first_bigger_value(input) when is_binary(input) do first_bigger_value(String.to_integer(input)) end defstruct goal: nil, value: 1, direction: :east, coordinate: {0, 0}, visited: %{{0, 0} => 1} def first_bigger_value(value) do do_travel(%__MODULE__{goal: value}) end defp do_travel(%{goal: goal, value: value}) when value > goal, do: value defp do_travel(state) do {coordinate, rotation} = next_coordinate_and_direction(state) value = neighbour_sum(state.visited, coordinate) direction = if Map.has_key?(state.visited, next_coordinate(coordinate, rotation)) do state.direction else rotation end visited = Map.put(state.visited, coordinate, value) %{state | coordinate: coordinate, value: value, direction: direction, visited: visited} |> do_travel() end defp neighbour_sum(visited, current) do neighbour_circle = [:east, :north, :west, :west, :south, :south, :east, :east] {neighbours, _} = Enum.map_reduce(neighbour_circle, current, fn direction, coordinate -> next = next_coordinate(coordinate, direction) {next, next} end) neighbours |> Enum.map(&Map.get(visited, &1)) |> Enum.filter(& &1) |> Enum.sum() end defp next_coordinate_and_direction(state) do case state.direction do :east -> {next_coordinate(state), :north} :north -> {next_coordinate(state), :west} :west -> {next_coordinate(state), :south} :south -> {next_coordinate(state), :east} end end defp next_coordinate(%{coordinate: coordinate, direction: direction}) do next_coordinate(coordinate, direction) end defp next_coordinate({x, y}, direction) do case direction do :east -> {x + 1, y} :north -> {x, y - 1} :west -> {x - 1, y} :south -> {x, y + 1} end end end
lib/day_03_spiral_memory.ex
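A quick sanity check of both parts against the published Advent of Code 2017 day 3 examples (for part two, 806 is the first entry of the spiral-sum sequence above 747):

```elixir
alias Adventofcode.Day03SpiralMemory, as: Spiral

Spiral.steps_to_access_port(1)     #=> 0
Spiral.steps_to_access_port(12)    #=> 3
Spiral.steps_to_access_port(1024)  #=> 31

# Part two: first value written to the spiral that exceeds the input.
Spiral.first_bigger_value(747)     #=> 806
```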
defmodule Tensorflow.CostGraphDef.Node.InputInfo do @moduledoc false use Protobuf, syntax: :proto3 @type t :: %__MODULE__{ preceding_node: integer, preceding_port: integer } defstruct [:preceding_node, :preceding_port] field(:preceding_node, 1, type: :int32) field(:preceding_port, 2, type: :int32) end defmodule Tensorflow.CostGraphDef.Node.OutputInfo do @moduledoc false use Protobuf, syntax: :proto3 @type t :: %__MODULE__{ size: integer, alias_input_port: integer, shape: Tensorflow.TensorShapeProto.t() | nil, dtype: Tensorflow.DataType.t() } defstruct [:size, :alias_input_port, :shape, :dtype] field(:size, 1, type: :int64) field(:alias_input_port, 2, type: :int64) field(:shape, 3, type: Tensorflow.TensorShapeProto) field(:dtype, 4, type: Tensorflow.DataType, enum: true) end defmodule Tensorflow.CostGraphDef.Node do @moduledoc false use Protobuf, syntax: :proto3 @type t :: %__MODULE__{ name: String.t(), device: String.t(), id: integer, input_info: [Tensorflow.CostGraphDef.Node.InputInfo.t()], output_info: [Tensorflow.CostGraphDef.Node.OutputInfo.t()], temporary_memory_size: integer, persistent_memory_size: integer, host_temp_memory_size: integer, device_temp_memory_size: integer, device_persistent_memory_size: integer, compute_cost: integer, compute_time: integer, memory_time: integer, is_final: boolean, control_input: [integer], inaccurate: boolean } defstruct [ :name, :device, :id, :input_info, :output_info, :temporary_memory_size, :persistent_memory_size, :host_temp_memory_size, :device_temp_memory_size, :device_persistent_memory_size, :compute_cost, :compute_time, :memory_time, :is_final, :control_input, :inaccurate ] field(:name, 1, type: :string) field(:device, 2, type: :string) field(:id, 3, type: :int32) field(:input_info, 4, repeated: true, type: Tensorflow.CostGraphDef.Node.InputInfo ) field(:output_info, 5, repeated: true, type: Tensorflow.CostGraphDef.Node.OutputInfo ) field(:temporary_memory_size, 6, type: :int64) field(:persistent_memory_size, 12, type: :int64) field(:host_temp_memory_size, 10, type: :int64, deprecated: true) field(:device_temp_memory_size, 11, type: :int64, deprecated: true) field(:device_persistent_memory_size, 16, type: :int64, deprecated: true) field(:compute_cost, 9, type: :int64) field(:compute_time, 14, type: :int64) field(:memory_time, 15, type: :int64) field(:is_final, 7, type: :bool) field(:control_input, 8, repeated: true, type: :int32) field(:inaccurate, 17, type: :bool) end defmodule Tensorflow.CostGraphDef.AggregatedCost do @moduledoc false use Protobuf, syntax: :proto3 @type t :: %__MODULE__{ cost: float | :infinity | :negative_infinity | :nan, dimension: String.t() } defstruct [:cost, :dimension] field(:cost, 1, type: :float) field(:dimension, 2, type: :string) end defmodule Tensorflow.CostGraphDef do @moduledoc false use Protobuf, syntax: :proto3 @type t :: %__MODULE__{ node: [Tensorflow.CostGraphDef.Node.t()], cost: [Tensorflow.CostGraphDef.AggregatedCost.t()] } defstruct [:node, :cost] field(:node, 1, repeated: true, type: Tensorflow.CostGraphDef.Node) field(:cost, 2, repeated: true, type: Tensorflow.CostGraphDef.AggregatedCost) end
lib/tensorflow/core/framework/cost_graph.pb.ex
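A hedged round-trip sketch, assuming the `protobuf` (protobuf-elixir) library these modules were generated with, which injects `encode/1` and `decode/1` into every message module:

```elixir
# Build a one-node cost graph and round-trip it through the wire format.
node = %Tensorflow.CostGraphDef.Node{name: "MatMul", id: 1, compute_cost: 42}
graph = %Tensorflow.CostGraphDef{node: [node], cost: []}

binary = Tensorflow.CostGraphDef.encode(graph)
Tensorflow.CostGraphDef.decode(binary)
#=> %Tensorflow.CostGraphDef{node: [%Tensorflow.CostGraphDef.Node{name: "MatMul", ...}], ...}
```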
defmodule Dayron.Repo do
  @moduledoc """
  Defines a rest repository.

  A repository maps to an underlying http client, which sends requests to a
  remote server. Currently the only available client is HTTPoison with
  hackney.

  When used, the repository expects the `:otp_app` as option. The `:otp_app`
  should point to an OTP application that has the repository configuration.
  For example, the repository:

      defmodule MyApp.RestRepo do
        use Dayron.Repo, otp_app: :my_app
      end

  Could be configured with:

      config :my_app, MyApp.RestRepo,
        url: "https://api.example.com",
        headers: [access_token: "token"]

  The available configuration is:

    * `:url` - an URL that specifies the server api address
    * `:adapter` - a module implementing Dayron.Adapter behaviour, default is
      HTTPoisonAdapter
    * `:headers` - a keywords list with values to be sent on each request
      header

  URLs also support `{:system, "KEY"}` to be given, telling Dayron to load
  the configuration from the system environment instead:

      config :my_app, MyApp.RestRepo,
        url: {:system, "API_URL"}
  """
  @cannot_call_directly_error """
  Cannot call Dayron.Repo directly. Instead implement your own Repo module
  with: use Dayron.Repo, otp_app: :my_app
  """

  alias Dayron.Model
  alias Dayron.Config
  alias Dayron.Request
  alias Dayron.Response

  defmacro __using__(opts) do
    quote bind_quoted: [opts: opts] do
      alias Dayron.Repo

      {otp_app, adapter, logger} = Config.parse(__MODULE__, opts)
      @otp_app otp_app
      @adapter adapter
      @logger logger

      def get_config, do: Config.get(__MODULE__, @otp_app)

      def get(model, id, opts \\ []) do
        Repo.get(@adapter, model, [id: id, options: opts], @logger, get_config)
      end

      def get!(model, id, opts \\ []) do
        request_data = [id: id, options: opts]
        Repo.get!(@adapter, model, request_data, @logger, get_config)
      end

      def all(model, opts \\ []) do
        Repo.all(@adapter, model, [options: opts], @logger, get_config)
      end

      def insert(model, data, opts \\ []) do
        request_data = [body: data, options: opts]
        Repo.insert(@adapter, model, request_data, @logger, get_config)
      end

      def insert!(model, data, opts \\ []) do
        request_data = [body: data, options: opts]
        Repo.insert!(@adapter, model, request_data, @logger, get_config)
      end

      def update(model, id, data, opts \\ []) do
        request_data = [id: id, body: data, options: opts]
        Repo.update(@adapter, model, request_data, @logger, get_config)
      end

      def update!(model, id, data, opts \\ []) do
        request_data = [id: id, body: data, options: opts]
        Repo.update!(@adapter, model, request_data, @logger, get_config)
      end

      def delete(model, id, opts \\ []) do
        request_data = [id: id, options: opts]
        Repo.delete(@adapter, model, request_data, @logger, get_config)
      end

      def delete!(model, id, opts \\ []) do
        request_data = [id: id, options: opts]
        Repo.delete!(@adapter, model, request_data, @logger, get_config)
      end
    end
  end

  @doc """
  Fetches a single model from the external api, building the request url
  based on the given model and id.

  Returns `nil` if no result was found or the server responds with an error.
  Returns a model struct with response values if valid.

  Options are sent directly to the selected adapter. See
  `Dayron.Adapter.get/3` for available options.

  ## Possible Exceptions

    * `Dayron.ServerError` - if server responds with a 500 internal error.
    * `Dayron.ClientError` - for any error detected in client side, such as
      timeout or connection errors.
  """
  def get(_module, _id, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Similar to `get/3` but raises `Dayron.NoResultsError` if no resource is
  returned in the server response.
  """
  def get!(_module, _id, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Fetches a list of models from the external api, building the request url
  based on the given model.

  Returns an empty list if no result was found or the server responds with
  an error. Returns a list of model structs if the response is valid.

  Options are sent directly to the selected adapter. See
  `Dayron.Adapter.get/3` for available options.

  ## Possible Exceptions

    * `Dayron.ServerError` - if server responds with a 500 internal error.
    * `Dayron.ClientError` - for any error detected in client side, such as
      timeout or connection errors.
  """
  def all(_module, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Inserts a model given a map with resource attributes.

  Options are sent directly to the selected adapter. See
  `Dayron.Adapter.insert/3` for available options.

  ## Possible Exceptions

    * `Dayron.ServerError` - if server responds with a 500 internal error.
    * `Dayron.ClientError` - for any error detected in client side, such as
      timeout or connection errors.

  ## Example

      case RestRepo.insert(User, %{name: "Dayse"}) do
        {:ok, model}    -> # Inserted with success
        {:error, error} -> # Something went wrong
      end
  """
  def insert(_module, _data, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Similar to `insert/3` but raises a `Dayron.ValidationError` if server
  responds with a 422 unprocessable entity.
  """
  def insert!(_module, _data, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Updates a model given an id and a map with resource attributes.

  Options are sent directly to the selected adapter. See
  `Dayron.Adapter.insert/3` for available options.

  ## Possible Exceptions

    * `Dayron.ServerError` - if server responds with a 500 internal error.
    * `Dayron.ClientError` - for any error detected in client side, such as
      timeout or connection errors.

  ## Example

      case RestRepo.update(User, "user-id", %{name: "Dayse"}) do
        {:ok, model}    -> # Updated with success
        {:error, error} -> # Something went wrong
      end
  """
  def update(_module, _id, _data, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Similar to `update/4` but raises:

    * `Dayron.NoResultsError` - if server responds with 404 resource not found.
    * `Dayron.ValidationError` - if server responds with 422 unprocessable entity.
  """
  def update!(_module, _id, _data, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Deletes a resource given a model and id.

  It returns `{:ok, model}` if the resource has been successfully deleted or
  `{:error, error}` if there was a validation or a known constraint error.

  Options are sent directly to the selected adapter. See
  `Dayron.Adapter.delete/3` for available options.

  ## Possible Exceptions

    * `Dayron.ServerError` - if server responds with a 500 internal error.
    * `Dayron.ClientError` - for any error detected in client side, such as
      timeout or connection errors.
  """
  def delete(_module, _id, _opts \\ []) do
    raise @cannot_call_directly_error
  end

  @doc """
  Similar to `delete/3` but raises:

    * `Dayron.NoResultsError` - if server responds with 404 resource not found.
    * `Dayron.ValidationError` - if server responds with 422 unprocessable entity.
""" def delete!(_module, _id, _opts \\ []) do raise @cannot_call_directly_error end @doc false def get(adapter, model, request_data, logger, config) do {_request, response} = config |> Config.init_request_data(:get, model, request_data) |> execute!(adapter, logger) case response do %Response{status_code: 200, body: body} -> Model.from_json(model, body) %Response{status_code: code} when code >= 300 and code < 500 -> nil end end @doc false def get!(adapter, model, request_data, logger, config) do {request, response} = config |> Config.init_request_data(:get, model, request_data) |> execute!(adapter, logger) case response do %Response{status_code: 200, body: body} -> Model.from_json(model, body) %Response{status_code: code} when code >= 300 and code < 500 -> raise Dayron.NoResultsError, request: request end end @doc false def all(adapter, model, request_data, logger, config) do {_request, response} = config |> Config.init_request_data(:get, model, request_data) |> execute!(adapter, logger) case response do %Response{status_code: 200, body: body} -> Model.from_json_list(model, body) end end @doc false def insert(adapter, model, request_data, logger, config) do {request, response} = config |> Config.init_request_data(:post, model, request_data) |> execute!(adapter, logger) case response do %Response{status_code: 201, body: body} -> {:ok, Model.from_json(model, body)} %Response{status_code: 422} -> {:error, %{request: request, response: response}} end end @doc false def insert!(adapter, model, request_data, logger, config) do case insert(adapter, model, request_data, logger, config) do {:ok, model} -> {:ok, model} {:error, error} -> raise Dayron.ValidationError, Map.to_list(error) end end @doc false def update(adapter, model, request_data, logger, config) do {request, response} = config |> Config.init_request_data(:patch, model, request_data) |> execute!(adapter, logger) case response do %Response{status_code: 200, body: body} -> {:ok, Model.from_json(model, body)} %Response{status_code: code} when code >= 400 and code < 500 -> {:error, %{request: request, response: response, status_code: code}} end end @doc false def update!(adapter, model, request_data, logger, config) do case update(adapter, model, request_data, logger, config) do {:ok, model} -> {:ok, model} {:error, %{status_code: 404} = error} -> raise Dayron.NoResultsError, Map.to_list(error) {:error, %{status_code: 422} = error} -> raise Dayron.ValidationError, Map.to_list(error) end end @doc false def delete(adapter, model, request_data, logger, config) do {request, response} = config |> Config.init_request_data(:delete, model, request_data) |> execute!(adapter, logger) case response do %Response{status_code: 200, body: body} -> {:ok, Model.from_json(model, body)} %Response{status_code: 204} -> {:ok, nil} %Response{status_code: code} when code >= 400 -> {:error, %{request: request, response: response, status_code: code}} end end @doc false def delete!(adapter, model, request_data, logger, config) do case delete(adapter, model, request_data, logger, config) do {:ok, model} -> {:ok, model} {:error, %{status_code: 404} = error} -> raise Dayron.NoResultsError, Map.to_list(error) {:error, %{status_code: 422} = error} -> raise Dayron.ValidationError, Map.to_list(error) end end defp execute!(%Request{} = request, adapter, logger) do request |> Request.send(adapter) |> handle_errors |> log_request(logger) end defp handle_errors({request, response}) do case response do %Response{status_code: 500} -> raise Dayron.ServerError, request: request, 
response: response %Dayron.ClientError{reason: reason} -> raise Dayron.ClientError, request: request, reason: reason _ -> {request, response} end end defp log_request({request, response}, logger) do :ok = logger.log(request, response) {request, response} end end
lib/dayron/repo.ex
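A hedged usage sketch tying the pieces above together; `MyApp.RestRepo` and `User` are illustrative names, not part of Dayron itself:

```elixir
# A repo module backed by the macro above.
defmodule MyApp.RestRepo do
  use Dayron.Repo, otp_app: :my_app
end

# config/config.exs (per the moduledoc):
#   config :my_app, MyApp.RestRepo, url: "https://api.example.com"

MyApp.RestRepo.get(User, "user-id")           # %User{} or nil
MyApp.RestRepo.insert(User, %{name: "Dayse"}) # {:ok, %User{}} | {:error, error}
```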
defmodule GGity.Docs.Geom.Text do @moduledoc false @doc false @spec examples() :: iolist() def examples do [ """ Examples.mtcars() |> Plot.new(%{x: :wt, y: :mpg, label: :model}) |> Plot.geom_text() """, """ # Set the font size for the label Examples.mtcars() |> Plot.new(%{x: :wt, y: :mpg, label: :model}) |> Plot.geom_text(size: 10) """, """ # Shift positioning Examples.mtcars() |> Plot.new(%{x: :wt, y: :mpg, label: :model}) |> Plot.geom_point(size: 2) |> Plot.geom_text(size: 5, hjust: :left, nudge_x: 3) """, """ Examples.mtcars() |> Plot.new(%{x: :wt, y: :mpg, label: :model}) |> Plot.geom_point(size: 2) |> Plot.geom_text(size: 5, vjust: :top, nudge_y: 3) """, """ # Map other aesthetics Examples.mtcars() |> Plot.new(%{x: :wt, y: :mpg, label: :model}) |> Plot.geom_text(%{color: :cyl}, size: 5) """, """ # Add a text annotation Examples.mtcars() |> Plot.new(%{x: :wt, y: :mpg, label: :model}) |> Plot.geom_text(size: 5) |> Plot.annotate(:text, label: "plot mpg vs. wt", x: 1.5, y: 15, size: 8, color: "red") """, """ # Bar chart labelling [%{x: "1", y: 1, grp: "a"}, %{x: "1", y: 3, grp: "b"}, %{x: "2", y: 2, grp: "a"}, %{x: "2", y: 1, grp: "b"},] |> Plot.new(%{x: :x, y: :y, group: :grp}) |> Plot.geom_col(%{fill: :grp}, position: :dodge) |> Plot.geom_text(%{label: :y}, position: :dodge, size: 6) """, """ # Nudge the label up a bit [%{x: "1", y: 1, grp: "a"}, %{x: "1", y: 3, grp: "b"}, %{x: "2", y: 2, grp: "a"}, %{x: "2", y: 1, grp: "b"},] |> Plot.new(%{x: :x, y: :y, group: :grp}) |> Plot.geom_col(%{fill: :grp}, position: :dodge) |> Plot.geom_text(%{label: :y}, position: :dodge, size: 6, nudge_y: 4) """, """ # Position label in the middle of stacked bars [%{x: "1", y: 1, grp: "a"}, %{x: "1", y: 3, grp: "b"}, %{x: "2", y: 2, grp: "a"}, %{x: "2", y: 1, grp: "b"},] |> Plot.new(%{x: :x, y: :y, group: :grp}) |> Plot.geom_col(%{fill: :grp}) |> Plot.geom_text(%{label: :y}, position: :stack, position_vjust: 0.5, size: 6) """ ] end end
lib/mix/tasks/doc_examples/geom_text.ex
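Each string above is a self-contained plot recipe. A hedged sketch of evaluating the first one and writing the result to disk, assuming the `Examples` and `Plot` aliases these snippets already rely on, and that GGity's `Plot.plot/1` returns the SVG as iodata:

```elixir
Examples.mtcars()
|> Plot.new(%{x: :wt, y: :mpg, label: :model})
|> Plot.geom_text()
|> Plot.plot()
|> then(&File.write!("mtcars_text.svg", &1))
```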
defmodule ExSDP.Attribute.Extmap do @moduledoc """ This module represents extmap (RFC 8285). """ alias ExSDP.Utils @enforce_keys [:id, :uri] defstruct @enforce_keys ++ [direction: nil, attributes: []] @type extension_id :: 1..14 @type direction :: :sendonly | :recvonly | :sendrecv | :inactive | nil @type t :: %__MODULE__{ id: extension_id(), direction: direction(), uri: String.t(), attributes: [String.t()] } @typedoc """ Key that can be used for searching this attribute using `ExSDP.Media.get_attribute/2`. """ @type attr_key :: :extmap @typedoc """ Reason of parsing failure. """ @type reason :: :invalid_extmap | :invalid_id | :invalid_direction | :invalid_uri @valid_directions ["sendonly", "recvonly", "sendrecv", "inactive"] @spec parse(binary()) :: {:ok, t()} | {:error, reason()} def parse(extmap) do with [id_direction, uri_attributes] <- String.split(extmap, " ", parts: 2), {:ok, {id, direction}} <- parse_id_direction(id_direction), {:ok, {uri, attributes}} <- parse_uri_attributes(uri_attributes) do {:ok, %__MODULE__{id: id, direction: direction, uri: uri, attributes: attributes}} else {:error, reason} -> {:error, reason} _invalid_extmap -> {:error, :invalid_extmap} end end defp parse_id_direction(id_direction) do case String.split(id_direction, "/") do [id, direction] -> with {:ok, id} <- Utils.parse_numeric_string(id), {:ok, direction} <- parse_direction(direction) do {:ok, {id, direction}} else {:error, :string_nan} -> {:error, :invalid_id} {:error, reason} -> {:error, reason} end [id] -> case Utils.parse_numeric_string(id) do {:ok, id} -> {:ok, {id, nil}} _invalid_id -> {:error, :invalid_id} end _invalid_extmap -> {:error, :invalid_extmap} end end defp parse_uri_attributes(uri_attributes) do case String.split(uri_attributes, " ") do [uri | attributes] -> {:ok, {uri, attributes}} _invalid_uri -> {:error, :invalid_uri} end end defp parse_direction(direction) when direction in @valid_directions, do: {:ok, String.to_atom(direction)} defp parse_direction(_invalid_directio), do: {:error, :invalid_direction} end defimpl String.Chars, for: ExSDP.Attribute.Extmap do alias ExSDP.Attribute.Extmap @impl true def to_string(%Extmap{id: id, direction: direction, uri: uri, attributes: attributes}) do maybe_direction = if direction == nil, do: "", else: "/#{Atom.to_string(direction)}" attributes = Enum.join(attributes, " ") "extmap:#{id}#{maybe_direction} #{uri} #{attributes}" |> String.trim() end end
lib/ex_sdp/attribute/extmap.ex
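A round-trip sketch of the grammar handled above; note that `parse/1` takes the attribute value after the `"extmap:"` key, while the `String.Chars` implementation adds the key back:

```elixir
{:ok, extmap} =
  ExSDP.Attribute.Extmap.parse("1/sendonly urn:ietf:params:rtp-hdrext:ssrc-audio-level")

extmap.id         #=> 1
extmap.direction  #=> :sendonly

to_string(extmap)
#=> "extmap:1/sendonly urn:ietf:params:rtp-hdrext:ssrc-audio-level"
```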
defmodule Iptrie do
  @external_resource "README.md"

  @moduledoc File.read!("README.md")
             |> String.split("<!-- @MODULEDOC -->")
             |> Enum.fetch!(1)

  require Pfx
  alias Radix

  defstruct []

  @typedoc """
  An Iptrie struct that contains a `Radix` tree per type of `t:Pfx.t/0` used.

  A [prefix'](`Pfx`) _type_ is determined by its `maxlen` property: IPv4 has
  `maxlen: 32`, IPv6 has `maxlen: 128`, MAC addresses have `maxlen: 48` and
  so on.

  Although Iptrie facilitates (exact or longest prefix match) lookups of any
  type of prefix, it has a bias towards IP prefixes. So, any binaries
  (strings) are interpreted as IPv4 CIDR/IPv6 strings first and as EUI-48/64
  strings second, while tuples of address digits and/or
  {address-digits, length} are interpreted as IPv4 or IPv6 representations.
  """
  @type t :: %__MODULE__{}

  @typedoc """
  The type of a prefix is its maxlen property.
  """
  @type type :: non_neg_integer()

  @typedoc """
  A prefix represented as an `t:Pfx.t/0` struct, an `t:Pfx.ip_address/0`,
  `t:Pfx.ip_prefix/0` or a string in IPv4 CIDR, IPv6, EUI-48 or EUI-64 format.
  """
  @type prefix :: Pfx.prefix()

  # Guards

  defguardp is_type(type) when is_integer(type) and type >= 0

  # Helpers

  @spec match(keyword) :: function
  defp match(opts) do
    case Keyword.get(opts, :match) do
      :lpm -> &lookup/2
      _ -> &get/2
    end
  end

  @spec arg_err(atom, any) :: Exception.t()
  defp arg_err(:bad_keyvals, arg),
    do: ArgumentError.exception("expected a valid {key,value}-list, got #{inspect(arg)}")

  defp arg_err(:bad_trie, arg),
    do: ArgumentError.exception("expected an Iptrie, got #{inspect(arg)}")

  defp arg_err(:bad_pfxs, arg),
    do: ArgumentError.exception("expected a list of valid prefixes, got #{inspect(arg)}")

  defp arg_err(:bad_pfx, arg),
    do: ArgumentError.exception("expected a valid prefix, got #{inspect(arg)}")

  defp arg_err(:bad_fun, {fun, arity}),
    do: ArgumentError.exception("expected a function/#{arity}, got #{inspect(fun)}")

  defp arg_err(:bad_type, arg),
    do: ArgumentError.exception("expected a maxlen (non_neg_integer) value, got #{inspect(arg)}")

  # API

  @doc """
  Returns the number of prefix,value-pairs in given `trie`.

  Note that this requires traversal of the radix tree(s) present in `trie`.

  ## Example

      iex> t = new([{"1.1.1.1", 1}, {"acdc::", 2}])
      iex> count(t)
      2
  """
  @spec count(t) :: non_neg_integer()
  def count(%__MODULE__{} = trie) do
    types(trie)
    |> Enum.map(fn type -> count(trie, type) end)
    |> Enum.sum()
  end

  def count(trie), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns the number of prefix,value-pairs for given `type` in `trie`.

  If `trie` has no radix tree of given `type`, `0` is returned. Use
  `Iptrie.has_type?/2` to check if a trie holds a given type.

  ## Example

      iex> t = new([{"1.1.1.1", 1}, {"acdc::", 2}])
      iex> count(t, 32)
      1
      iex> count(t, 128)
      1
      iex> types(t)
      ...> |> Enum.map(fn type -> {type, count(t, type)} end)
      [{32, 1}, {128, 1}]
  """
  @spec count(t, type) :: non_neg_integer
  def count(%__MODULE__{} = trie, type) when is_type(type),
    do: radix(trie, type) |> Radix.count()

  def count(%__MODULE__{} = _trie, type), do: raise(arg_err(:bad_type, type))
  def count(trie, _type), do: raise(arg_err(:bad_trie, trie))

  @doc ~S"""
  Deletes a prefix,value-pair from `trie` using an exact match for `prefix`.

  If the `prefix` does not exist in the `trie`, the latter is returned
  unchanged.
## Examples iex> ipt = new() ...> |> put("1.1.1.0/24", "one") ...> |> put("2.2.2.0/24", "two") iex> iex> for pfx <- keys(ipt), do: "#{pfx}" ["1.1.1.0/24", "2.2.2.0/24"] iex> iex> ipt = delete(ipt, "1.1.1.0/24") iex> for pfx <- keys(ipt), do: "#{pfx}" ["2.2.2.0/24"] """ @spec delete(t, prefix) :: t def delete(%__MODULE__{} = trie, prefix) do pfx = Pfx.new(prefix) tree = radix(trie, pfx.maxlen) Map.put(trie, pfx.maxlen, Radix.delete(tree, pfx.bits)) rescue err -> raise err end def delete(trie, _prefix), do: raise(arg_err(:bad_trie, trie)) @doc ~S""" Drops given `prefixes` from `trie` using an exact match. If a given prefix does not exist in `trie` it is ignored. ## Example # drop 2 existing prefixes and ignore the third iex> t = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"11-22-33-00-00-00/24", 3}]) iex> t2 = drop(t, ["1.1.1.0/24", "11-22-33-00-00-00/24", "3.3.3.3"]) iex> for pfx <- keys(t2), do: "#{pfx}" ["2.2.2.0/24"] """ @spec drop(t, [prefix]) :: t def drop(%__MODULE__{} = trie, prefixes) when is_list(prefixes) do prefixes |> Enum.map(fn pfx -> Pfx.new(pfx) end) |> Enum.reduce(trie, fn pfx, acc -> delete(acc, pfx) end) rescue err -> raise err end def drop(%__MODULE__{} = _trie, prefixes), do: raise(arg_err(:bad_pfxs, prefixes)) def drop(trie, _prefixes), do: raise(arg_err(:bad_trie, trie)) @doc """ Returns true if the given `trie` is empty, false otherwise. ## Examples iex> t = new([{"1.1.1.1", 1}, {"11-22-33-44-55-66", 2}]) iex> empty?(t) false iex> new() |> empty?() true """ @spec empty?(t) :: boolean def empty?(%__MODULE__{} = trie) do types(trie) |> Enum.map(fn type -> empty?(trie, type) end) |> Enum.all?() end def empty?(trie), do: raise(arg_err(:bad_trie, trie)) @doc """ Returns true if the radix tree for given `type` in `trie` is empty, false otherwise. ## Example iex> t = new([{"1.1.1.1", 1}, {"11-22-33-44-55-66", 2}]) iex> empty?(t, 32) false iex> empty?(t, 48) false iex> empty?(t, 128) true """ @spec empty?(t, type) :: boolean def empty?(%__MODULE__{} = trie, type) when is_type(type), do: radix(trie, type) |> Radix.empty?() def empty?(%__MODULE__{} = _trie, type), do: raise(arg_err(:bad_type, type)) def empty?(trie, _type), do: raise(arg_err(:bad_trie, trie)) @doc """ Fetches the prefix,value-pair for given `prefix` from `trie`. Returns one of: - `{:ok, {prefix, value}}` in case of success - `{:error, :notfound}` if `prefix` is not present in `trie` - `{:error, :einval}` in case of an invalid `prefix`, and - `{:error, :bad_trie}` in case `trie` is not an `t:Iptrie.t/0` Optionally fetches based on a longest prefix match by specifying `match: :lpm`. ## Example iex> ipt = new() ...> |> put("1.1.1.0/24", "one") ...> |> put("2.2.2.0/24", "two") iex> iex> fetch(ipt, "1.1.1.0/24") {:ok, {"1.1.1.0/24", "one"}} iex> iex> fetch(ipt, "1.1.1.1") {:error, :notfound} iex> iex> fetch(ipt, "1.1.1.1", match: :lpm) {:ok, {"1.1.1.0/24", "one"}} iex> iex> fetch(ipt, "13.13.13.333") {:error, :einval} """ @spec fetch(t, prefix, keyword) :: {:ok, {prefix, any}} | {:error, atom} def fetch(trie, prefix, opts \\ []) def fetch(%__MODULE__{} = trie, prefix, opts) do pfx = Pfx.new(prefix) tree = radix(trie, pfx.maxlen) case Radix.fetch(tree, pfx.bits, opts) do :error -> {:error, :notfound} {:ok, {bits, value}} -> {:ok, {Pfx.marshall(%{pfx | bits: bits}, prefix), value}} end rescue _ -> {:error, :einval} end def fetch(_trie, _prefix, _opts), do: {:error, :bad_trie} # raise(arg_err(:bad_trie, trie)) @doc """ Fetches the prefix,value-pair for given `prefix` from `trie`. 
In case of success, returns `{prefix, value}`. If `prefix` is not present, raises a `KeyError`. If `prefix` could not be encoded, raises an `ArgumentError`. Optionally fetches based on a longest prefix match by specifying `match: :lpm`. ## Example iex> ipt = new() ...> |> put("10.10.10.0/24", "ten") ...> |> put("11.11.11.0/24", "eleven") iex> iex> fetch!(ipt, "10.10.10.0/24") {"10.10.10.0/24", "ten"} iex> iex> fetch!(ipt, "11.11.11.11", match: :lpm) {"11.11.11.0/24", "eleven"} iex> iex> fetch!(ipt, "12.12.12.12") ** (KeyError) prefix "12.12.12.12" not found iex> ipt = new() iex> fetch!(ipt, "13.13.13.333") ** (ArgumentError) expected a valid prefix, got "13.13.13.333" """ @spec fetch!(t, prefix, keyword) :: {prefix, any} | KeyError | ArgumentError def fetch!(trie, prefix, opts \\ []) def fetch!(trie, prefix, opts) do case fetch(trie, prefix, opts) do {:ok, result} -> result {:error, :notfound} -> raise KeyError, "prefix #{inspect(prefix)} not found" {:error, :einval} -> raise arg_err(:bad_pfx, prefix) {:error, :bad_trie} -> raise arg_err(:bad_trie, trie) end rescue err -> raise err end @doc """ Finds a prefix,value-pair for given `prefix` from `trie` using a longest prefix match. Convenience wrapper for `Iptrie.fetch/3` with `match: :lpm`. ## Example iex> ipt = new() ...> |> put("1.1.1.0/24", "one") ...> |> put("2.2.2.0/24", "two") iex> iex> find(ipt, "1.1.1.0/24") {:ok, {"1.1.1.0/24", "one"}} iex> iex> find(ipt, "12.12.12.12") {:error, :notfound} iex> iex> find(ipt, "13.13.13.333") {:error, :einval} """ @spec find(t, prefix) :: {:ok, {prefix, any}} | {:error, atom} def find(trie, prefix), # fetch returns error/ok tuple, never raises.. do: fetch(trie, prefix, match: :lpm) @doc """ Finds a prefix,value-pair for given `prefix` from `trie` using a longest prefix match. Convenience wrapper for `Iptrie.fetch!/3` with `match: :lpm`. ## Examples iex> ipt = new() ...> |> put("10.10.10.0/24", "ten") ...> |> put("11.11.11.0/24", "eleven") iex> iex> find!(ipt, "10.10.10.0/24") {"10.10.10.0/24", "ten"} iex> iex> find!(ipt, "10.10.10.10") {"10.10.10.0/24", "ten"} iex> iex> find!(ipt, "12.12.12.12") ** (KeyError) prefix "12.12.12.12" not found iex> ipt = new() iex> find!(ipt, "13.13.13.333") ** (ArgumentError) expected a valid prefix, got "13.13.13.333" """ @spec find!(t, prefix) :: {prefix, any} | KeyError | ArgumentError def find!(trie, prefix) do fetch!(trie, prefix, match: :lpm) rescue err -> raise err end @doc ~S""" Returns a new Iptrie, keeping only the prefix,value-pairs for which `fun` returns _truthy_. The signature for `fun` is (prefix, value -> boolean), where the value is stored under prefix in the trie. Radix trees that are empty, are removed from the new Iptrie. 
  ## Example

      iex> ipt = new()
      ...> |> put("acdc:1975::/32", "rock")
      ...> |> put("acdc:1976::/32", "rock")
      ...> |> put("abba:1975::/32", "pop")
      ...> |> put("abba:1976::/32", "pop")
      ...> |> put("1.1.1.0/24", "v4")
      iex>
      iex> filter(ipt, fn pfx, _value -> pfx.maxlen == 32 end)
      ...> |> to_list()
      [{%Pfx{bits: <<1, 1, 1>>, maxlen: 32}, "v4"}]
      iex>
      iex> filter(ipt, fn _pfx, value -> value == "rock" end)
      ...> |> to_list()
      ...> |> Enum.map(fn {pfx, value} -> {"#{pfx}", value} end)
      [
        {"acdc:1975:0:0:0:0:0:0/32", "rock"},
        {"acdc:1976:0:0:0:0:0:0/32", "rock"}
      ]
  """
  @spec filter(t, (prefix, any -> boolean)) :: t
  def filter(%__MODULE__{} = trie, fun) when is_function(fun, 2) do
    types(trie)
    |> Enum.map(fn type -> {type, filterp(radix(trie, type), type, fun)} end)
    |> Enum.filter(fn {_t, rdx} -> not Radix.empty?(rdx) end)
    |> Enum.reduce(Iptrie.new(), fn {type, rdx}, ipt -> Map.put(ipt, type, rdx) end)
  end

  def filter(%__MODULE__{} = _trie, fun), do: raise(arg_err(:bad_fun, {fun, 2}))
  def filter(trie, _fun), do: raise(arg_err(:bad_trie, trie))

  defp filterp(rdx, type, fun) do
    keep = fn key, val, acc ->
      if fun.(%Pfx{bits: key, maxlen: type}, val), do: Radix.put(acc, key, val), else: acc
    end

    Radix.reduce(rdx, Radix.new(), keep)
  end

  @doc """
  Returns the prefix,value-pair stored under given `prefix` in `trie`, using
  an exact match.

  If `prefix` is not found, `default` is returned. If `default` is not
  provided, `nil` is used.

  ## Example

      iex> ipt = new([{"1.1.1.0/30", "A"}, {"1.1.1.0/31", "B"}, {"1.1.1.0", "C"}])
      iex>
      iex> get(ipt, "1.1.1.0/31")
      {"1.1.1.0/31", "B"}
      iex>
      iex> get(ipt, "2.2.2.0/30")
      nil
      iex> get(ipt, "2.2.2.0/30", :notfound)
      :notfound
  """
  @spec get(t, prefix(), any) :: {prefix(), any} | any
  def get(trie, prefix, default \\ nil)

  def get(%__MODULE__{} = trie, prefix, default) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)

    case Radix.get(tree, pfx.bits) do
      nil -> default
      {bits, value} -> {Pfx.marshall(%{pfx | bits: bits}, prefix), value}
    end
  rescue
    err -> raise err
  end

  def get(trie, _prefix, _default), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns true if given `prefix` is present in `trie`, false otherwise.

  The check is done based on an exact match, unless the option `match: :lpm`
  is provided to match based on longest prefix match.

  ## Example

      iex> t = new([{"1.1.1.1", 1}, {"1.1.1.0/24", 2}, {"acdc::/16", 3}])
      iex> has_prefix?(t, "1.1.1.2")
      false
      iex> has_prefix?(t, "1.1.1.2", match: :lpm)
      true
      iex> has_prefix?(t, "1.1.1.1")
      true
      iex> has_prefix?(t, "acdc::/16")
      true
  """
  @spec has_prefix?(t, prefix, keyword) :: boolean
  def has_prefix?(trie, prefix, opts \\ [])

  def has_prefix?(%__MODULE__{} = trie, prefix, opts) do
    case match(opts).(trie, prefix) do
      nil -> false
      _ -> true
    end
  rescue
    err -> raise err
  end

  def has_prefix?(trie, _prefix, _opts), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns true if `trie` has given `type`, false otherwise.

  An Iptrie groups prefixes into radix trees by their maxlen property, also
  known as the type of prefix. Use `Iptrie.types/1` to get a list of all
  available types.

  ## Example

      iex> t = new([{"1.1.1.1", 1}])
      iex> has_type?(t, 32)
      true
      iex> has_type?(t, 128)
      false
  """
  @spec has_type?(t, type) :: boolean
  def has_type?(%__MODULE__{} = trie, type) when is_type(type), do: Map.has_key?(trie, type)
  def has_type?(%__MODULE__{} = _trie, type), do: raise(arg_err(:bad_type, type))
  def has_type?(trie, _type), do: raise(arg_err(:bad_trie, trie))

  @doc ~S"""
  Returns all prefixes stored in all available radix trees in `trie`.
The prefixes are reconstructed as `t:Pfx.t/0` by combining the stored bitstrings with the `Radix`-tree's type, that is the maxlen property associated with the radix tree whose keys are being retrieved. ## Example iex> ipt = new() ...> |> put("1.1.1.0/24", 1) ...> |> put("2.2.2.0/24", 2) ...> |> put("acdc:1975::/32", 3) ...> |> put("acdc:2021::/32", 4) iex> iex> keys(ipt) [ %Pfx{bits: <<1, 1, 1>>, maxlen: 32}, %Pfx{bits: <<2, 2, 2>>, maxlen: 32}, %Pfx{bits: <<0xacdc::16, 0x1975::16>>, maxlen: 128}, %Pfx{bits: <<0xacdc::16, 0x2021::16>>, maxlen: 128} ] """ @spec keys(t) :: list(prefix) def keys(%__MODULE__{} = trie) do types(trie) |> Enum.map(fn type -> keys(trie, type) end) |> List.flatten() rescue err -> raise err end def keys(trie), do: raise(arg_err(:bad_trie, trie)) @doc ~S""" Returns the prefixes stored in the radix tree in `trie` for given `type`. Note that the Iptrie keys are returned as `t:Pfx.t/0` structs. ## Example iex> ipt = new() ...> |> put("1.1.1.0/24", 1) ...> |> put("2.2.2.0/24", 2) ...> |> put("acdc:1975::/32", 3) ...> |> put("acdc:2021::/32", 4) iex> iex> keys(ipt, 32) [ %Pfx{bits: <<1, 1, 1>>, maxlen: 32}, %Pfx{bits: <<2, 2, 2>>, maxlen: 32} ] iex> iex> keys(ipt, 128) [ %Pfx{bits: <<0xacdc::16, 0x1975::16>>, maxlen: 128}, %Pfx{bits: <<0xacdc::16, 0x2021::16>>, maxlen: 128} ] iex> iex> keys(ipt, 48) [] """ @spec keys(t, type) :: list(prefix) def keys(%__MODULE__{} = trie, type) when is_type(type) do radix(trie, type) |> Radix.keys() |> Enum.map(fn bits -> Pfx.new(bits, type) end) end def keys(%__MODULE__{} = _trie, type), do: raise(arg_err(:bad_type, type)) def keys(trie, _type), do: raise(arg_err(:bad_trie, trie)) @doc """ Returns all the prefix,value-pairs whose prefix is a prefix for the given search `prefix`. This returns the less specific entries that enclose the given search `prefix`. Note that any bitstring is always a prefix of itself. So, if present, the search key will be included in the result. ## Example iex> ipt = new() ...> |> put("1.1.1.0/25", "A25-lower") ...> |> put("1.1.1.128/25", "A25-upper") ...> |> put("1.1.1.0/30", "A30") ...> |> put("1.1.2.0/24", "B24") iex> iex> less(ipt, "1.1.1.0/30") [ {"1.1.1.0/30", "A30"}, {"1.1.1.0/25", "A25-lower"}, ] iex> less(ipt, "2.2.2.2") [] """ @spec less(t(), prefix()) :: list({prefix(), any}) def less(%__MODULE__{} = trie, prefix) do pfx = Pfx.new(prefix) tree = radix(trie, pfx.maxlen) case Radix.less(tree, pfx.bits) do [] -> [] list -> Enum.map(list, fn {bits, value} -> {Pfx.marshall(%{pfx | bits: bits}, prefix), value} end) end rescue err -> raise err end def less(trie, _prefix), do: raise(arg_err(:bad_trie, trie)) @doc """ Returns the prefix,value-pair, whose prefix is the longest match for given search `prefix`. Returns `nil` if there is no match for search `prefix`. 
  ## Examples

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 1)
      ...> |> put("2.2.2.0/24", 2)
      ...> |> put("acdc:1975::/32", 3)
      ...> |> put("acdc:2021::/32", 4)
      iex>
      iex> lookup(ipt, "1.1.1.1")
      {"1.1.1.0/24", 1}
      iex> lookup(ipt, "acdc:1975:1::")
      {"acdc:1975:0:0:0:0:0:0/32", 3}
      iex>
      iex> lookup(ipt, "3.3.3.3")
      nil
      iex> lookup(ipt, "3.3.3.300")
      ** (ArgumentError) expected a ipv4/ipv6 CIDR or EUI-48/64 string, got "3.3.3.300"
  """
  @spec lookup(t(), prefix()) :: {prefix(), any} | nil
  def lookup(%__MODULE__{} = trie, prefix) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)

    case Radix.lookup(tree, pfx.bits) do
      nil -> nil
      {bits, value} -> {Pfx.marshall(%{pfx | bits: bits}, prefix), value}
    end
  rescue
    err -> raise err
  end

  def lookup(trie, _prefix), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Merges `trie1` and `trie2` into a new Iptrie.

  Adds all prefix,value-pairs of `trie2` to `trie1`, overwriting any existing
  entries when prefixes match (based on exact match).

  ## Example

      iex> t1 = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}])
      iex> t2 = new([{"2.2.2.0/24", 22}, {"3.3.3.0/24", 3}])
      iex> t = merge(t1, t2)
      iex> count(t)
      3
      iex> get(t, "1.1.1.0/24")
      {"1.1.1.0/24", 1}
      iex> get(t, "2.2.2.0/24")
      {"2.2.2.0/24", 22}
      iex> get(t, "3.3.3.0/24")
      {"3.3.3.0/24", 3}
  """
  @spec merge(t, t) :: t
  def merge(%__MODULE__{} = trie1, %__MODULE__{} = trie2) do
    reduce(trie2, trie1, fn pfx, val, acc -> put(acc, pfx, val) end)
  rescue
    err -> raise err
  end

  def merge(trie1, %__MODULE__{} = _trie2), do: raise(arg_err(:bad_trie, trie1))
  def merge(_trie1, trie2), do: raise(arg_err(:bad_trie, trie2))

  @doc ~S"""
  Merges `trie1` and `trie2` into a new Iptrie, resolving conflicts through
  `fun`.

  In cases where a prefix is present in both tries, the conflict is resolved
  by calling `fun` with the prefix (a `t:Pfx.t/0`), its value in `trie1` and
  its value in `trie2`. The function's return value will be stored under the
  prefix in the merged trie.

  ## Example

      iex> t1 = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"acdc:1975::/32", 3}])
      iex> t2 = new([{"3.3.3.0/24", 4}, {"2.2.2.0/24", 5}, {"acdc:2021::/32", 6}])
      iex> t = merge(t1, t2, fn _pfx, v1, v2 -> v1 + v2 end)
      iex> count(t)
      5
      iex> get(t, "2.2.2.0/24")
      {"2.2.2.0/24", 7}
      iex> for ip4 <- keys(t, 32), do: "#{ip4}"
      ["1.1.1.0/24", "2.2.2.0/24", "3.3.3.0/24"]
      iex> for ip6 <- keys(t, 128), do: "#{ip6}"
      ["acdc:1975:0:0:0:0:0:0/32", "acdc:2021:0:0:0:0:0:0/32"]
      iex> values(t) |> Enum.sum()
      1 + 7 + 3 + 4 + 6
  """
  @spec merge(t, t, (prefix, any, any -> any)) :: t
  def merge(%__MODULE__{} = trie1, %__MODULE__{} = trie2, fun) when is_function(fun, 3) do
    f = fn k2, v2, acc ->
      case get(trie1, k2) do
        nil -> put(acc, k2, v2)
        {^k2, v1} -> put(acc, k2, fun.(k2, v1, v2))
      end
    end

    reduce(trie2, trie1, f)
  rescue
    err -> raise err
  end

  def merge(%__MODULE__{} = _trie1, %__MODULE__{} = _trie2, fun),
    do: raise(arg_err(:bad_fun, {fun, 3}))

  def merge(trie1, %__MODULE__{} = _trie2, _fun), do: raise(arg_err(:bad_trie, trie1))
  def merge(_trie1, trie2, _fun), do: raise(arg_err(:bad_trie, trie2))

  @doc """
  Returns all the prefix,value-pairs where the search `prefix` is a prefix
  for the stored prefix.

  This returns the more specific entries that are enclosed by given search
  `prefix`. Note that any bitstring is always a prefix of itself. So, if
  present, the search `prefix` will be included in the result.
  ## Example

      iex> ipt = new()
      ...> |> put("1.1.1.0/25", "A25-lower")
      ...> |> put("1.1.1.128/25", "A25-upper")
      ...> |> put("1.1.1.0/30", "A30")
      ...> |> put("1.1.2.0/24", "B24")
      iex>
      iex> more(ipt, "1.1.1.0/24")
      [
        {"1.1.1.0/30", "A30"},
        {"1.1.1.0/25", "A25-lower"},
        {"1.1.1.128/25", "A25-upper"}
      ]
  """
  @spec more(t(), prefix()) :: list({prefix(), any})
  def more(%__MODULE__{} = trie, prefix) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)

    case Radix.more(tree, pfx.bits) do
      [] -> []
      list -> Enum.map(list, fn {bits, value} -> {Pfx.marshall(%{pfx | bits: bits}, prefix), value} end)
    end
  rescue
    err -> raise err
  end

  def more(trie, _prefix), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Creates a new, empty Iptrie.

  ## Example

      iex> Iptrie.new()
      %Iptrie{}
  """
  @spec new() :: t()
  def new(), do: %__MODULE__{}

  @doc """
  Creates a new Iptrie, populated via a list of prefix,value-pairs.

  ## Example

      iex> elements = [
      ...>   {"1.1.1.0/24", "net1"},
      ...>   {{{1, 1, 2, 0}, 24}, "net2"},
      ...>   {"acdc:1975::/32", "TNT"}
      ...> ]
      iex> ipt = Iptrie.new(elements)
      iex> radix(ipt, 32)
      {0, {22, [{<<1, 1, 1>>, "net1"}], [{<<1, 1, 2>>, "net2"}]}, nil}
      iex> radix(ipt, 128)
      {0, nil, [{<<172, 220, 25, 117>>, "TNT"}]}
  """
  @spec new(list({prefix(), any})) :: t
  def new(elements) when is_list(elements) do
    Enum.reduce(elements, new(), fn {prefix, value}, trie -> put(trie, prefix, value) end)
  rescue
    FunctionClauseError -> raise arg_err(:bad_keyvals, elements)
  end

  @doc """
  Removes the value associated with `prefix` and returns the matched
  prefix,value-pair and the new Iptrie.

  Options include:

  - `default: value` to return if `prefix` could not be matched (defaults to `nil`)
  - `match: :lpm` to match on longest prefix instead of an exact match

  ## Examples

      iex> t = new([{"1.1.1.0/24", 1}, {"1.1.1.99", 2}, {"acdc:1975::/32", 3}])
      iex> {{"1.1.1.99", 2}, t2} = pop(t, "1.1.1.99")
      iex> get(t2, "1.1.1.99")
      nil

      iex> t = new([{"1.1.1.0/24", 1}, {"1.1.1.99", 2}, {"acdc:1975::/32", 3}])
      iex> # t is unchanged
      iex> {{"1.1.1.33", :notfound}, ^t} = pop(t, "1.1.1.33", default: :notfound)

      iex> t = new([{"1.1.1.0/24", 1}, {"1.1.1.99", 2}, {"acdc:1975::/32", 3}])
      iex> # lpm match
      iex> {{"1.1.1.0/24", 1}, t2} = pop(t, "1.1.1.33", match: :lpm)
      iex> get(t2, "1.1.1.0/24")
      nil
  """
  @spec pop(t, prefix, keyword) :: {{prefix, any}, t}
  def pop(trie, prefix, opts \\ [])

  def pop(%__MODULE__{} = trie, prefix, opts) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)
    {{bits, val}, rdx} = Radix.pop(tree, pfx.bits, opts)

    {
      {Pfx.marshall(%{pfx | bits: bits}, prefix), val},
      Map.put(trie, pfx.maxlen, rdx)
    }
  rescue
    err -> raise err
  end

  def pop(trie, _prefix, _opts), do: raise(arg_err(:bad_trie, trie))

  @doc ~S"""
  Prunes given `trie` by calling `fun` on neighboring prefixes, possibly
  replacing them with their parent.

  The callback `fun` is invoked with either a 5- or a 6-tuple:

  - `{p0, p1, v1, p2, v2}` for neighboring `p1`, `p2`, whose parent `p0` is not in `trie`
  - `{p0, v0, p1, v1, p2, v2}` for the parent, its value and that of its two neighboring children.

  The callback decides what happens by returning either:

  - `{:ok, value}`, value will be stored under `p0` and the neighboring prefixes `p1, p2` are deleted
  - nil (or anything else really), in which case the tree is not changed.
  ## Examples

      iex> adder = fn
      ...>   {_p0, _p1, v1, _p2, v2} -> {:ok, v1 + v2}
      ...>   {_p0, v0, _p1, v1, _p2, v2} -> {:ok, v0 + v1 + v2}
      ...> end
      iex> ipt = new()
      ...> |> put("1.1.1.0/26", 0)
      ...> |> put("1.1.1.64/26", 1)
      ...> |> put("1.1.1.128/26", 2)
      ...> |> put("1.1.1.192/26", 3)
      iex> prune(ipt, adder)
      ...> |> to_list()
      ...> |> Enum.map(fn {p, v} -> {"#{p}", v} end)
      [
        {"1.1.1.0/25", 1},
        {"1.1.1.128/25", 5}
      ]
      iex>
      iex> prune(ipt, adder, recurse: true)
      ...> |> to_list()
      ...> |> Enum.map(fn {p, v} -> {"#{p}", v} end)
      [{"1.1.1.0/24", 6}]

      # summarize all /24's inside 10.10.0.0/16
      # -> only 10.10.40.0/24 is missing
      iex> slash24s = Pfx.partition("10.10.0.0/16", 24)
      ...> |> Enum.with_index()
      ...> |> new()
      ...> |> delete("10.10.40.0/24")
      iex>
      iex> prune(slash24s, fn _ -> {:ok, 0} end, recurse: true)
      ...> |> to_list()
      ...> |> Enum.map(fn {p, v} -> {"#{p}", v} end)
      [
        {"10.10.0.0/19", 0},
        {"10.10.32.0/21", 0},
        {"10.10.41.0/24", 41},
        {"10.10.42.0/23", 0},
        {"10.10.44.0/22", 0},
        {"10.10.48.0/20", 0},
        {"10.10.64.0/18", 0},
        {"10.10.128.0/17", 0}
      ]
  """
  @spec prune(t, (tuple -> nil | {:ok, any}), Keyword.t()) :: t
  def prune(trie, fun, opts \\ [])

  def prune(%__MODULE__{} = trie, fun, opts) when is_function(fun, 1) do
    types(trie)
    |> Enum.map(fn type -> {type, radix(trie, type)} end)
    |> Enum.map(fn {type, rdx} -> {type, prunep(rdx, type, fun, opts)} end)
    |> Enum.filter(fn {_type, rdx} -> not Radix.empty?(rdx) end)
    |> Enum.reduce(Iptrie.new(), fn {type, rdx}, ipt -> Map.put(ipt, type, rdx) end)
  end

  def prune(%__MODULE__{} = _trie, fun, _opts), do: raise(arg_err(:bad_fun, {fun, 1}))
  def prune(trie, _fun, _opts), do: raise(arg_err(:bad_trie, trie))

  defp prunep(rdx, type, fun, opts) do
    callback = fn
      {k0, k1, v1, k2, v2} ->
        fun.({Pfx.new(k0, type), Pfx.new(k1, type), v1, Pfx.new(k2, type), v2})

      {k0, v0, k1, v1, k2, v2} ->
        fun.({Pfx.new(k0, type), v0, Pfx.new(k1, type), v1, Pfx.new(k2, type), v2})
    end

    Radix.prune(rdx, callback, opts)
  end

  @doc """
  Puts the prefix,value-pairs in `elements` into `trie`.

  This always uses an exact match for prefix, updating its value if it
  exists.

  ## Example

      iex> ipt = new([{"1.1.1.0/24", 0}, {"1.1.1.1", 0}, {"1.1.1.1", "x"}])
      iex>
      iex> get(ipt, "1.1.1.1")
      {"1.1.1.1", "x"}
  """
  @spec put(t, list({prefix(), any})) :: t
  def put(%__MODULE__{} = trie, elements) when is_list(elements) do
    Enum.reduce(elements, trie, fn {k, v}, t -> put(t, k, v) end)
  rescue
    FunctionClauseError -> raise arg_err(:bad_keyvals, elements)
  end

  def put(%__MODULE__{} = _trie, elements), do: raise(arg_err(:bad_keyvals, elements))
  def put(trie, _elements), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Puts `value` under `prefix` in `trie`.

  This always uses an exact match for `prefix`, replacing its value if it
  exists.

  ## Example

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 0)
      ...> |> put("1.1.1.1", 1)
      ...> |> put("1.1.1.1", "x")
      iex>
      iex> get(ipt, "1.1.1.1")
      {"1.1.1.1", "x"}
  """
  @spec put(t, prefix(), any) :: t
  def put(%__MODULE__{} = trie, prefix, value) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)
    Map.put(trie, pfx.maxlen, Radix.put(tree, pfx.bits, value))
  rescue
    err -> raise err
  end

  def put(trie, _prefix, _value), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns the `Radix` tree for given `type` in `trie`.

  If `trie` has no radix tree for given `type` it will return a new empty
  radix tree.
## Example iex> ipt = new() ...> |> put("1.1.1.0/24", 1) ...> |> put("2.2.2.0/24", 2) ...> |> put("acdc:1975::/32", 3) ...> |> put("acdc:2021::/32", 4) iex> iex> radix(ipt, 32) {0, {6, [{<<1, 1, 1>>, 1}], [{<<2, 2, 2>>, 2}]}, nil} iex> iex> radix(ipt, 128) {0, nil, {18, [{<<172, 220, 25, 117>>, 3}], [{<<172, 220, 32, 33>>, 4}]}} iex> radix(ipt, 48) {0, nil, nil} iex> iex> has_type?(ipt, 48) false """ @spec radix(t, type) :: Radix.tree() def radix(%__MODULE__{} = trie, type) when is_type(type), do: Map.get(trie, type) || Radix.new() def radix(%__MODULE__{} = _trie, type), do: raise(arg_err(:bad_type, type)) def radix(trie, _type), do: raise(arg_err(:bad_trie, trie)) @doc ~S""" Invokes `fun` on all prefix,value-pairs in all radix trees in `trie`. The function `fun` is called with the prefix (a `t:Pfx.t/0` struct), value and `acc` accumulator and should return an updated accumulator. The result is the last accumulator returned. ## Example iex> ipt = new() ...> |> put("1.1.1.0/24", 1) ...> |> put("2.2.2.0/24", 2) ...> |> put("acdc:1975::/32", 3) ...> |> put("acdc:2021::/32", 4) iex> iex> reduce(ipt, 0, fn _pfx, value, acc -> acc + value end) 10 iex> iex> reduce(ipt, %{}, fn pfx, value, acc -> Map.put(acc, "#{pfx}", value) end) %{ "1.1.1.0/24" => 1, "2.2.2.0/24" => 2, "acdc:1975:0:0:0:0:0:0/32" => 3, "acdc:2021:0:0:0:0:0:0/32" => 4 } """ @spec reduce(t, any, (Pfx.t(), any, any -> any)) :: any def reduce(%__MODULE__{} = trie, acc, fun) when is_function(fun, 3) do types(trie) |> Enum.reduce(acc, fn type, acc -> reduce(trie, type, acc, fun) end) rescue err -> raise err end def reduce(trie, _acc, _fun), do: raise(arg_err(:bad_trie, trie)) @doc """ Invokes `fun` on each prefix,value-pair in the radix tree of given `type` in `trie`. The function `fun` is called with the prefix (a `t:Pfx.t/0` struct), value and `acc` accumulator and should return an updated accumulator. The result is the last accumulator returned. ## Example iex> ipt = new() ...> |> put("1.1.1.0/24", 1) ...> |> put("2.2.2.0/24", 2) ...> |> put("acdc:1975::/32", 3) ...> |> put("acdc:2021::/32", 4) iex> iex> reduce(ipt, 32, 0, fn _pfx, value, acc -> acc + value end) 3 iex> reduce(ipt, 48, 0, fn _pfx, value, acc -> acc + value end) 0 iex> reduce(ipt, 128, 0, fn _pfx, value, acc -> acc + value end) 7 """ @spec reduce(t, type, any, (Pfx.t(), any, any -> any)) :: any def reduce(%__MODULE__{} = trie, type, acc, fun) when is_type(type) and is_function(fun, 3) do reducer = fn bits, val, acc -> fun.(Pfx.new(bits, type), val, acc) end radix(trie, type) |> Radix.reduce(acc, reducer) end def reduce(%__MODULE__{} = _trie, type, _acc, fun) when is_function(fun, 3), do: raise(arg_err(:bad_type, type)) def reduce(%__MODULE__{} = _trie, _types, _acc, fun), do: raise(arg_err(:bad_fun, {fun, 3})) def reduce(trie, _types, _acc, _fun), do: raise(arg_err(:bad_trie, trie)) @doc """ Splits `trie` into two Iptries using given list of `prefixes`. Returns a new trie with prefix,value-pairs that were matched by given `prefixes` and the old trie with those pairs removed. If a prefix was not found in given `trie` it is ignored. By default an exact match is used, specify `match: :lpm` to use longest prefix match instead. 
  ## Examples

      iex> t = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"3.3.3.0/30", 3}])
      iex> {t2, t3} = split(t, ["2.2.2.0/24", "3.3.3.0/30"])
      iex> count(t2)
      2
      iex> get(t2, "2.2.2.0/24")
      {"2.2.2.0/24", 2}
      iex> get(t2, "3.3.3.0/30")
      {"3.3.3.0/30", 3}
      iex> count(t3)
      1
      iex> get(t3, "1.1.1.0/24")
      {"1.1.1.0/24", 1}

      # use longest prefix match
      iex> t = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"3.3.3.0/30", 3}])
      iex> {t4, t5} = split(t, ["2.2.2.2", "3.3.3.3"], match: :lpm)
      iex> count(t4)
      2
      iex> get(t4, "2.2.2.0/24")
      {"2.2.2.0/24", 2}
      iex> get(t4, "3.3.3.0/30")
      {"3.3.3.0/30", 3}
      iex> count(t5)
      1
      iex> get(t5, "1.1.1.0/24")
      {"1.1.1.0/24", 1}
  """
  @spec split(t, [prefix], keyword) :: {t, t}
  def split(trie, prefixes, opts \\ [])

  def split(%__MODULE__{} = trie, prefixes, opts) when is_list(prefixes) do
    t = take(trie, prefixes, opts)
    {t, drop(trie, keys(t))}
  rescue
    err -> raise err
  end

  def split(%__MODULE__{} = _trie, prefixes, _opts), do: raise(arg_err(:bad_pfxs, prefixes))
  def split(trie, _prefixes, _opts), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns a new Iptrie containing only given `prefixes` that were found in
  `trie`.

  If a given prefix does not exist, it is ignored. Optionally specify
  `match: :lpm` to use a longest prefix match instead of exact, which is the
  default.

  ## Examples

      iex> t = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"acdc::/16", 3}])
      iex> t2 = take(t, ["1.1.1.0/24", "acdc::/16"])
      iex> count(t2)
      2
      iex> get(t2, "1.1.1.0/24")
      {"1.1.1.0/24", 1}
      iex> get(t2, "acdc::/16")
      {"acdc:0:0:0:0:0:0:0/16", 3}

      # use longest match
      iex> t = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"acdc::/16", 3}])
      iex> t3 = take(t, ["1.1.1.1", "acdc:1975::1"], match: :lpm)
      iex> count(t3)
      2
      iex> get(t3, "1.1.1.0/24")
      {"1.1.1.0/24", 1}
      iex> get(t3, "acdc::/16")
      {"acdc:0:0:0:0:0:0:0/16", 3}

      # ignore missing prefixes
      iex> t = new([{"1.1.1.0/24", 1}, {"2.2.2.0/24", 2}, {"acdc::/16", 3}])
      iex> t4 = take(t, ["1.1.1.1", "3.3.3.3"], match: :lpm)
      iex> count(t4)
      1
      iex> get(t4, "1.1.1.0/24")
      {"1.1.1.0/24", 1}
  """
  @spec take(t, [prefix], keyword) :: t
  def take(trie, prefixes, opts \\ [])

  def take(%__MODULE__{} = trie, prefixes, opts) when is_list(prefixes) do
    fun = fn pfx, t ->
      case match(opts).(trie, pfx) do
        nil -> t
        {pfx, val} -> put(t, pfx, val)
      end
    end

    Enum.reduce(prefixes, new(), fun)
  rescue
    err -> raise err
  end

  def take(%__MODULE__{} = _trie, prefixes, _opts), do: raise(arg_err(:bad_pfxs, prefixes))
  def take(trie, _prefixes, _opts), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns all prefix,value-pairs from all available radix trees in `trie`.

  ## Examples

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 1)
      ...> |> put("2.2.2.0/24", 2)
      ...> |> put("acdc:1975::/32", 3)
      ...> |> put("acdc:2021::/32", 4)
      iex>
      iex> to_list(ipt)
      [
        {%Pfx{bits: <<1, 1, 1>>, maxlen: 32}, 1},
        {%Pfx{bits: <<2, 2, 2>>, maxlen: 32}, 2},
        {%Pfx{bits: <<0xacdc::16, 0x1975::16>>, maxlen: 128}, 3},
        {%Pfx{bits: <<0xacdc::16, 0x2021::16>>, maxlen: 128}, 4}
      ]
  """
  @spec to_list(t) :: list({prefix, any})
  def to_list(%__MODULE__{} = trie) do
    types(trie)
    |> Enum.map(fn type -> to_list(trie, type) end)
    |> List.flatten()
  rescue
    err -> raise err
  end

  def to_list(trie), do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns the prefix,value-pairs from the radix trees in `trie` for given
  `type`.

  If the radix tree for `type` does not exist, an empty list is returned.
  ## Examples

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 1)
      ...> |> put("2.2.2.0/24", 2)
      ...> |> put("acdc:1975::/32", 3)
      ...> |> put("acdc:2021::/32", 4)
      iex>
      iex> to_list(ipt, 32)
      [
        {%Pfx{bits: <<1, 1, 1>>, maxlen: 32}, 1},
        {%Pfx{bits: <<2, 2, 2>>, maxlen: 32}, 2}
      ]
      iex> to_list(ipt, 128)
      [
        {%Pfx{bits: <<0xacdc::16, 0x1975::16>>, maxlen: 128}, 3},
        {%Pfx{bits: <<0xacdc::16, 0x2021::16>>, maxlen: 128}, 4}
      ]
      iex> to_list(ipt, 48)
      []

  """
  @spec to_list(t, type) :: list({prefix, any})
  def to_list(%__MODULE__{} = trie, type) when is_type(type) do
    tree = radix(trie, type)

    Radix.to_list(tree)
    |> Enum.map(fn {bits, value} -> {Pfx.new(bits, type), value} end)
  end

  def to_list(%__MODULE__{} = _trie, type),
    do: raise(arg_err(:bad_type, type))

  def to_list(trie, _type),
    do: raise(arg_err(:bad_trie, trie))

  @doc """
  Returns a list of types available in the given `trie`.

  ## Example

      iex> t = new([{"1.1.1.1", 1}, {"2001:db8::", 2}])
      iex> types(t)
      [32, 128]

  """
  @spec types(t) :: [type]
  def types(%__MODULE__{} = trie),
    do: Map.keys(trie) |> Enum.filter(fn x -> is_type(x) end)

  def types(trie),
    do: raise(arg_err(:bad_trie, trie))

  @doc """
  Looks up `prefix` and updates the matched entry, but only if found.

  Uses longest prefix match, so the search `prefix` is usually matched by some
  less specific prefix.  If matched, `fun` is called on its value.  If
  `prefix` had no longest prefix match, the `trie` is returned unchanged.

  ## Examples

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 0)
      ...> |> update("1.1.1.0", fn x -> x + 1 end)
      ...> |> update("1.1.1.1", fn x -> x + 1 end)
      ...> |> update("2.2.2.2", fn x -> x + 1 end)
      iex> get(ipt, "1.1.1.0/24")
      {"1.1.1.0/24", 2}
      iex> lookup(ipt, "2.2.2.2")
      nil

  """
  @spec update(t, prefix, (any -> any)) :: t
  def update(%__MODULE__{} = trie, prefix, fun) when is_function(fun, 1) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)

    case Radix.lookup(tree, pfx.bits) do
      nil -> trie
      {bits, value} -> Map.put(trie, pfx.maxlen, Radix.put(tree, bits, fun.(value)))
    end
  rescue
    err -> raise err
  end

  def update(%__MODULE__{} = _trie, _prefix, fun),
    do: raise(arg_err(:bad_fun, {fun, 1}))

  def update(trie, _prefix, _fun),
    do: raise(arg_err(:bad_trie, trie))

  @doc """
  Looks up `prefix` and, if found, updates its value; otherwise inserts the
  `default` under `prefix`.

  Uses longest prefix match, so the search `prefix` is usually matched by some
  less specific prefix.  If matched, `fun` is called on the entry's value.  If
  `prefix` had no longest prefix match, the `default` is inserted under
  `prefix` and `fun` is not called.

  ## Examples

      iex> ipt = new()
      ...> |> update("1.1.1.0/24", 0, fn x -> x + 1 end)
      ...> |> update("1.1.1.0", 0, fn x -> x + 1 end)
      ...> |> update("1.1.1.1", 0, fn x -> x + 1 end)
      ...> |> update("2.2.2.2", 0, fn x -> x + 1 end)
      iex> lookup(ipt, "1.1.1.2")
      {"1.1.1.0/24", 2}
      iex>
      iex> # probably not what you wanted:
      iex> lookup(ipt, "2.2.2.2")
      {"2.2.2.2", 0}

  """
  @spec update(t, prefix, any, (any -> any)) :: t
  def update(%__MODULE__{} = trie, prefix, default, fun) when is_function(fun, 1) do
    pfx = Pfx.new(prefix)
    tree = radix(trie, pfx.maxlen)
    Map.put(trie, pfx.maxlen, Radix.update(tree, pfx.bits, default, fun))
  rescue
    err -> raise err
  end

  def update(%__MODULE__{} = _trie, _prefix, _default, fun),
    do: raise(arg_err(:bad_fun, {fun, 1}))

  def update(trie, _prefix, _default, _fun),
    do: raise(arg_err(:bad_trie, trie))

  @doc ~S"""
  Returns all the values stored in all radix trees in `trie`.
  ## Example

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 1)
      ...> |> put("2.2.2.0/24", 2)
      ...> |> put("acdc:1975::/32", 3)
      ...> |> put("acdc:2021::/32", 4)
      iex>
      iex> values(ipt)
      [1, 2, 3, 4]

  """
  @spec values(t) :: list(any)
  def values(%__MODULE__{} = trie) do
    types(trie)
    |> Enum.map(fn type -> values(trie, type) end)
    |> List.flatten()
  rescue
    err -> raise err
  end

  def values(trie),
    do: raise(arg_err(:bad_trie, trie))

  @doc ~S"""
  Returns the values stored in the radix tree in `trie` for the given `type`,
  where `type` is a single maxlen (e.g. 32 for IPv4, 128 for IPv6).

  ## Example

      iex> ipt = new()
      ...> |> put("1.1.1.0/24", 1)
      ...> |> put("2.2.2.0/24", 2)
      ...> |> put("acdc:1975::/32", 3)
      ...> |> put("acdc:2021::/32", 4)
      iex>
      iex> values(ipt, 32)
      [1, 2]
      iex>
      iex> values(ipt, 128)
      [3, 4]
      iex>
      iex> values(ipt, 48)
      []

  """
  @spec values(t, type) :: list(any)
  def values(%__MODULE__{} = trie, type) when is_type(type),
    do: radix(trie, type) |> Radix.values()

  def values(%__MODULE__{} = _trie, type),
    do: raise(arg_err(:bad_type, type))

  def values(trie, _type),
    do: raise(arg_err(:bad_trie, trie))
end
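# --- Usage sketch -----------------------------------------------------------
# Added for illustration; not part of the Iptrie source above. Meant to be run
# in iex (or a .exs script) with Iptrie and its Radix/Pfx dependencies
# compiled; it only uses functions shown in the docs above, and the result
# comments follow the doctests' conventions.

trie =
  Iptrie.new()
  |> Iptrie.put("1.1.1.0/24", :ipv4)
  |> Iptrie.put("acdc:1975::/32", :ipv6)

# lookup/2 uses longest prefix match; a string search key yields the covering
# prefix as a string
Iptrie.lookup(trie, "1.1.1.128")
#=> {"1.1.1.0/24", :ipv4}

# one radix tree per maxlen: 32 for IPv4, 128 for IPv6
Iptrie.types(trie)
#=> [32, 128]

Iptrie.values(trie, 128)
#=> [:ipv6]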
lib/iptrie.ex
defmodule WHATWG.URL.WwwFormUrlencoded do
  @moduledoc """
  Functions to work with `application/x-www-form-urlencoded`.

  The `application/x-www-form-urlencoded` percent-encode set is [defined](https://url.spec.whatwg.org/#application-x-www-form-urlencoded-percent-encode-set) as:

  > The `application/x-www-form-urlencoded` percent-encode set is the component percent-encode set and U+0021 (!), U+0027 (') to U+0029 RIGHT PARENTHESIS, inclusive, and U+007E (~).

  For performance, the implementation tests the complement of that set instead, using the note in the specification:

  > The `application/x-www-form-urlencoded` percent-encode set contains all code points, except the ASCII alphanumeric, U+002A (*), U+002D (-), U+002E (.), and U+005F (_).

  See also:

  - [IANA - application/x-www-form-urlencoded](https://www.iana.org/assignments/media-types/application/x-www-form-urlencoded)
  - [`application/x-www-form-urlencoded` in URL Standard](https://url.spec.whatwg.org/#application/x-www-form-urlencoded)
  """

  alias WHATWG.PercentEncoding

  @doc """
  Parses a string into a list of pairs.

  ### Examples

      iex> parse("")
      []

      iex> parse("&&")
      []

      iex> parse("a=b=c")
      [{"a", "b=c"}]

      iex> parse("==a")
      [{"", "=a"}]

      iex> parse("%20a+=")
      [{" a ", ""}]

      iex> parse("=+b%20")
      [{"", " b "}]

      iex> parse("a")
      [{"a", ""}]

  """
  def parse(string) when is_binary(string) do
    string
    |> :binary.split("&", [:global])
    |> Enum.reduce([], fn
      "", acc -> acc
      bytes, acc -> split_kv(bytes, acc)
    end)
    |> Enum.reverse()
  end

  defp split_kv(bytes, acc) do
    case :binary.split(bytes, "=") do
      [k, v] -> [{parse_decode(k), parse_decode(v)} | acc]
      [k] -> [{parse_decode(k), ""} | acc]
    end
  end

  defp parse_decode(bytes), do: PercentEncoding.decode_bytes(bytes, true)

  @doc """
  Serializes a list of pairs into a string.

  ### Examples

      iex> serialize([{"a ", ""}])
      "a+="

      iex> serialize([{"a", "1"}, {"a", "2"}])
      "a=1&a=2"

      iex> serialize([{:a, "1"}])
      ** (ArgumentError) expected a list of two-element tuples with binary elements, got an entry: {:a, "1"}

      iex> serialize([{"a", 1}])
      ** (ArgumentError) expected a list of two-element tuples with binary elements, got an entry: {"a", 1}

      iex> serialize([{:a, 1}], true)
      "a=1"

  """
  def serialize(list, to_str \\ false) when is_list(list) do
    list
    |> Enum.reduce([], &reduce_serialize_in_reverse(&1, &2, to_str))
    |> prune_head()
    |> Enum.reverse()
    |> IO.iodata_to_binary()
  end

  defp reduce_serialize_in_reverse({k, v}, acc, _to_str) when is_binary(k) and is_binary(v),
    do: [?&, [encode_bytes(k), "=", encode_bytes(v)] | acc]

  defp reduce_serialize_in_reverse({k, v}, acc, true),
    do: [?&, [encode_bytes(to_string(k)), "=", encode_bytes(to_string(v))] | acc]

  defp reduce_serialize_in_reverse(pair, _acc, _to_str) do
    raise(
      ArgumentError,
      "expected a list of two-element tuples with binary elements, got an entry: #{inspect(pair)}"
    )
  end

  # each serialized pair is prefixed with ?&; drop the leading one
  defp prune_head([?& | t]), do: t
  defp prune_head(list), do: list

  def encode_bytes(bytes),
    do: PercentEncoding.encode_bytes(bytes, &percent_encode_set?/1, true)

  @percent_encode_except '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz*-._'

  defp percent_encode_set?(byte) when is_integer(byte) and byte not in @percent_encode_except,
    do: true

  defp percent_encode_set?(byte) when is_integer(byte),
    do: false
end
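# --- Usage sketch -----------------------------------------------------------
# Added for illustration; not part of the module above. Meant to be run in iex
# (or a .exs script) with WHATWG.URL.WwwFormUrlencoded and its
# WHATWG.PercentEncoding dependency compiled; outputs follow the doctests.

alias WHATWG.URL.WwwFormUrlencoded, as: Form

# parse/1 splits on "&" and "=", percent-decodes and turns "+" into a space
Form.parse("a=1&b=hello+world")
#=> [{"a", "1"}, {"b", "hello world"}]

# serialize/2 round-trips: spaces are written back as "+"
Form.serialize([{"a", "1"}, {"b", "hello world"}])
#=> "a=1&b=hello+world"

# non-binary keys or values need to_str: true, otherwise an ArgumentError is raised
Form.serialize([{:a, 1}], true)
#=> "a=1"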
lib/whatwg/url/www_form_urlencoded.ex