| code (string, 114-1.05M chars) | path (string, 3-312 chars) | quality_prob (float64, 0.5-0.99) | learning_prob (float64, 0.2-1) | filename (string, 3-168 chars) | kind (1 class) |
|---|---|---|---|---|---|
defmodule Kernel.SpecialForms do
@moduledoc """
In this module we define Elixir special forms. Special forms
cannot be overridden by the developer and are the basic
building blocks of Elixir code.
Some of those forms are lexical (like `alias`, `import`, etc).
The macros `{}`, `[]` and `<<>>` are also special forms used
to define data structures, respectively tuples, lists and binaries.
This module also documents Elixir's pseudo variables (`__MODULE__`,
`__FILE__`, `__ENV__` and `__CALLER__`). Pseudo variables return
information about Elixir's compilation environment and can only
be read, never assigned to.
Finally, it also documents 3 special forms (`__block__`,
`__scope__` and `__aliases__`), which are not intended to be
called directly by the developer but which appear in quoted
contents since they are essential to Elixir's constructs.
"""
@doc """
Defines a new tuple.
## Examples
:{}.(1,2,3)
{ 1, 2, 3 }
"""
defmacro :{}.(args)
@doc """
Defines a new list.
## Examples
:[].(1,2,3)
[ 1, 2, 3 ]
"""
defmacro :[].(args)
@doc """
Defines a new bitstring.
## Examples
iex> << 1, 2, 3 >>
<< 1, 2, 3 >>
## Bitstring types
A bitstring may contain many parts and those may have
specific types. Most of the time, Elixir will figure out
the part's type and won't require any work from you:
iex> <<102, "oo">>
"foo"
Above we have two parts: the first is an integer and the
second is a binary. If we use any other Elixir expression,
Elixir can no longer guess the type:
iex> rest = "oo"
...> <<102, rest>>
** (ArgumentError) argument error
When a variable or expression is given as a binary part,
Elixir defaults the type of that part to an unsigned
big-endian integer. In the example above, since we haven't
specified a type, Elixir expected an integer but we passed a
binary, resulting in `ArgumentError`. We can solve this by
explicitly tagging it as a binary:
<<102, rest :: binary>>
The type can be integer, float, binary, bytes, bitstring,
bits, utf8, utf16 or utf32, e.g.:
<<102 :: float, rest :: binary>>
Integer can be any arbitrary precision integer. A float is an
IEEE 754 binary32 or binary64 floating point number. A bitstring
is an arbitrary series of bits. A binary is a special case of
bitstring that has a total size divisible by 8.
The utf8, utf16, and utf32 types are for UTF code points.
The bits type is an alias for bitstring. The bytes type is an
alias for binary.
The signedness can also be given as signed or unsigned. The
signedness only matters for matching. If unspecified, it
defaults to unsigned. Example:
iex> <<-100 :: signed, _rest :: binary>> = <<-100, "foo">>
<<156,102,111,111>>
This match would have failed if we did not specify that the
value -100 is signed. If we're matching into a variable instead
of a value, the signedness won't be checked; rather, the number
will simply be interpreted as having the given (or implied)
signedness, e.g.:
iex> <<val, _rest :: binary>> = <<-100, "foo">>
...> val
156
Here, `val` is interpreted as unsigned.
Signedness is only relevant on integers.
The endianness of a part can be big, little or native (the
latter meaning it will be resolved at VM load time). Passing
many options can be done by giving a list:
<<102 :: [integer, native], rest :: binary>>
Or:
<<102 :: [unsigned, big, integer], rest :: binary>>
And so on.
Endianness only makes sense for integers and some UTF code
point types (utf16 and utf32).
Finally, we can also specify size and unit for each part. The
unit is multiplied by the size to give the effective size of
the part:
iex> <<102, _rest :: [size(2), unit(8)]>> = "foo"
"foo"
iex> <<102, _rest :: size(16)>> = "foo"
"foo"
iex> <<102, _rest :: size(32)>> = "foo"
** (MatchError) no match of right hand side value: "foo"
In the example above, the first two expressions match
because the string "foo" takes 24 bits and we are matching
against a part of 24 bits as well, 8 of which are taken by
the integer 102 and the remaining 16 bits are specified on
the rest. In the last example, we expect a rest of size 32,
which won't match.
Size and unit are not applicable to utf8, utf16, and utf32.
The default size for integers is 8. For floats, it is 64. For
binaries, it is the size of the binary. Only the last binary
in a binary match can use the default size (all others must
have their size specified explicitly). Bitstrings do not have
a default size.
Size can also be specified using a syntax shortcut. Instead of
writing `size(8)`, one can write just `8` and it will be interpreted
as `size(8)`:
iex> << 1 :: 3 >> == << 1 :: size(3) >>
true
The default unit for integers, floats, and bitstrings is 1. For
binaries, it is 8.
For floats, unit * size must result in 32 or 64, corresponding
to binary32 and binary64, respectively.
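For example, since a binary32 float takes exactly 4 bytes, a small
illustrative check (using the list-option syntax shown above):
iex> byte_size(<< 1.0 :: [float, size(32)] >>)
4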
"""
defmacro :<<>>.(args)
@doc """
`alias` is used to set up atom aliases, often useful with module names.
## Examples
`alias` can be used to set up an alias for any module:
defmodule Math do
alias MyKeyword, as: Keyword
end
In the example above, we have set up `MyKeyword` to be aliased
as `Keyword`. So now, any reference to `Keyword` will be
automatically replaced by `MyKeyword`.
In case one wants to access the original `Keyword`, it can be done
by accessing Elixir:
Keyword.values #=> uses MyKeyword.values
Elixir.Keyword.values #=> uses Keyword.values
Notice that calling `alias` without the `as:` option automatically
sets an alias based on the last part of the module. For example:
alias Foo.Bar.Baz
Is the same as:
alias Foo.Bar.Baz, as: Baz
## Lexical scope
`import`, `require` and `alias` are called directives and all
have lexical scope. This means you can set up aliases inside
specific functions and it won't affect the overall scope.
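For example, a small sketch (module and function names are illustrative):
defmodule Math do
def plus(a, b) do
alias Math.Addition
Addition.add(a, b)
end
def minus(a, b) do
# the Addition alias is not available here
Math.Subtraction.subtract(a, b)
end
end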
"""
defmacro alias(module, opts)
@doc """
`require` is used to require the presence of external
modules so macros can be invoked.
## Examples
Notice that modules usually do not need to be required before usage;
the only exception is if you want to use the macros from a module.
In such cases, you need to explicitly require them.
Let's suppose you created your own `if` implementation in the module
`MyMacros`. If you want to invoke it, you first need to
explicitly require `MyMacros`:
defmodule Math do
require MyMacros
MyMacros.if do_something, it_works
end
An attempt to call a macro that was not loaded will raise an error.
## Alias shortcut
`require` also accepts `as:` as an option so it automatically sets
up an alias. Please check `alias` for more information.
"""
defmacro require(module, opts)
@doc """
`import` allows one to easily access functions or macros from
other modules without using the qualified name.
## Examples
If you are using several functions from a given module, you can
import those functions and reference them as local functions,
for example:
iex> import List
...> flatten([1,[2],3])
[1,2,3]
## Selector
By default, Elixir imports functions and macros from the given
module, except the ones starting with underscore (which are
usually callbacks):
import List
A developer can change this behavior to include all macros and
functions, regardless of whether they start with underscore, by
passing `:all` as the first argument:
import :all, List
It can also be customized to import only all functions or
all macros:
import :functions, List
import :macros, List
Alternatively, Elixir allows a developer to specify `:only`
or `:except` for fine-grained control over what to import (or
not):
import List, only: [flatten: 1]
## Lexical scope
It is important to notice that `import` is lexical. This means you
can import specific macros inside specific functions:
defmodule Math do
def some_function do
# 1) Disable `if/2` from Kernel
import Kernel, except: [if: 2]
# 2) Import the new `if` macro from MyMacros
import MyMacros
# 3) Use the new macro
if do_something, it_works
end
end
In the example above, we imported macros from `MyMacros`,
replacing the original `if/2` implementation with our own
within that specific function. All other functions in that
module will still be able to use the original one.
## Alias/Require shortcut
All imported modules are also required by default. `import`
also accepts `as:` as an option so it automatically sets up
an alias. Please check `alias` for more information.
## Warnings
If you import a module and you don't use any of the imported
functions or macros from this module, Elixir is going to issue
a warning implying the import is not being used.
In case the import is generated automatically by a macro,
Elixir won't emit any warnings though, since the import
was not explicitly defined.
Both warning behaviors can be changed by explicitly
setting the `:warn` option to `true` or `false`.
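For example (a sketch):
import List, only: [flatten: 1], warn: false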
## Ambiguous function/macro names
If two modules `A` and `B` are imported and they both contain
a `foo` function with an arity of `1`, an error is only emitted
if an ambiguous call to `foo/1` is actually made; that is, the
errors are emitted lazily, not eagerly.
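For example, a sketch with hypothetical modules `A` and `B`, each
exporting `foo/1`:
import A
import B
# compiles fine; an error is emitted only when an unqualified
# call to foo/1 is actually made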
"""
defmacro import(module, opts)
@doc """
Returns the current environment information as a `Macro.Env`
record. In the environment you can access the current filename,
line numbers, set up aliases, the current function, and more.
"""
defmacro __ENV__
@doc """
Returns the current module name as an atom, or `nil` when not inside a module.
Although the module can be accessed in the __ENV__, this macro
is a convenient shortcut.
"""
defmacro __MODULE__
@doc """
Returns the current file name as a binary.
Although the file can be accessed in the __ENV__, this macro
is a convenient shortcut.
"""
defmacro __FILE__
@doc """
Returns the current directory as a binary.
"""
defmacro __DIR__
@doc """
Allows you to get the representation of any expression.
## Examples
iex> quote do: sum(1, 2, 3)
{ :sum, [], [1, 2, 3] }
## Explanation
Any Elixir code can be represented using Elixir data structures.
The building block of Elixir macros is a tuple with three elements,
for example:
{ :sum, [], [1, 2, 3] }
The tuple above represents a function call to sum passing 1, 2 and
3 as arguments. The tuple elements are:
* The first element of the tuple is always an atom or
another tuple in the same representation;
* The second element of the tuple represents metadata;
* The third element of the tuple holds the arguments for the
function call. It may also be an atom, in which case the tuple
usually represents a variable (or a local call);
## Options
* `:unquote` - When false, disables unquoting. Useful when you have a quote
inside another quote and want to control which quote is
able to unquote;
* `:location` - When set to `:keep`, keeps the current line and file on quotes.
Read the Stacktrace information section below for more information;
* `:hygiene` - Allows a developer to disable hygiene selectively;
* `:context` - Sets the context in which resolution happens;
## Macro literals
Besides the tuple described above, Elixir has a few literals that
when quoted return themselves. They are:
:sum #=> Atoms
1 #=> Integers
2.0 #=> Floats
[1,2] #=> Lists
"binaries" #=> Binaries
{key, value} #=> Tuple with two elements
## Hygiene and context
Elixir macros are hygienic by means of deferred resolution.
This means variables, aliases and imports defined inside the
quote refer to the context that defined the macro and not
the context where the macro is expanded.
For this mechanism to work, every piece of quoted code is
attached to a context. Consider the following example:
defmodule ContextSample do
def hello do
quote do: world
end
end
ContextSample.hello
#=> {:world,[],ContextSample}
Notice how the third element of the returned tuple is the
module name. This means that the variable is associated with the
`ContextSample` module and only code generated by this module
will be able to access that particular `world` variable.
While this means macros from the same module could have
conflicting variables, it also allows different quotes from
the same module to access them.
The context can be disabled or changed by explicitly setting
the context option. All hygiene mechanisms are based on such
context and we are going to explore each of them in the following
subsections.
### Hygiene in variables
Consider the following example:
defmodule Hygiene do
defmacro no_interference do
quote do: a = 1
end
end
require Hygiene
a = 10
Hygiene.no_interference
a #=> 10
In the example above, `a` returns 10 even though the macro
apparently sets it to 1, because variables defined in the
macro do not affect the context in which the macro is executed.
If you want to set or get a variable in the user context, you
can do it with the help of the `var!` macro:
defmodule NoHygiene do
defmacro interference do
quote do: var!(a) = 1
end
end
require NoHygiene
a = 10
NoHygiene.interference
a #=> 1
It is important to understand that quoted variables are scoped
to the module in which they are defined. That said, even if two
modules define the same quoted variable `a`, their values will
be independent:
defmodule Hygiene1 do
defmacro var1 do
quote do: a = 1
end
end
defmodule Hygiene2 do
defmacro var2 do
quote do: a = 2
end
end
Calling the macros `var1` and `var2` will not affect each
other's values for `a`. This is useful because quoted
variables from different modules cannot conflict. If you want
to explicitly access a variable from another module, you can once
again use the `var!` macro, this time passing a second argument:
# Access the variable a from Hygiene1
quote do: var!(a, Hygiene1) = 2
Hygiene for variables can be disabled overall as:
quote hygiene: [vars: false], do: x
### Hygiene in aliases
Aliases inside quote are hygienic by default.
Consider the following example:
defmodule Hygiene do
alias HashDict, as: D
defmacro no_interference do
quote do: D.new
end
end
require Hygiene
Hygiene.no_interference #=> #HashDict<[]>
Notice that, even though the alias `D` is not available
in the context where the macro is expanded, the code above works
because `D` still expands to `HashDict`.
In some particular cases you may want to access an alias
or a module defined in the caller. In such scenarios, you
can access it by disabling hygiene with `hygiene: [aliases: false]`
or by using the `alias!` macro inside the quote:
defmodule Hygiene do
# This will expand to Elixir.Nested.hello
defmacro no_interference do
quote do: Nested.hello
end
# This will expand to Nested.hello for
# whatever Nested is in the caller
defmacro interference do
quote do: alias!(Nested).hello
end
end
defmodule Parent do
defmodule Nested do
def hello, do: "world"
end
require Hygiene
Hygiene.no_interference
#=> ** (UndefinedFunctionError) ...
Hygiene.interference
#=> "world"
end
## Hygiene in imports
Similar to aliases, imports in Elixir are hygienic. Consider the
following code:
defmodule Hygiene do
defmacrop get_size do
quote do
size("hello")
end
end
def return_size do
import Kernel, except: [size: 1]
get_size
end
end
Hygiene.return_size #=> 5
Notice how `return_size` returns 5 even though the `size/1`
function is not imported.
Elixir is smart enough to delay the resolution to the latest
moment possible. So, if you call `size("hello")` inside quote,
but no `size/1` function is available, it is then expanded in
the caller:
defmodule Lazy do
defmacrop get_size do
import Kernel, except: [size: 1]
quote do
size([a: 1, b: 2])
end
end
def return_size do
import Kernel, except: [size: 1]
import Dict, only: [size: 1]
get_size
end
end
Lazy.return_size #=> 2
As with aliases, import expansion can be explicitly disabled
via the `hygiene: [imports: false]` option.
## Stacktrace information
One of Elixir's goals is to provide a proper stacktrace whenever there is an
exception. In order to work properly with macros, the default behavior
in quote is to not set a line. When a macro is invoked and the quoted
expression is expanded, the call site line is inserted.
This is a good behavior for the majority of cases, except when the macro
is defining new functions. Consider this example:
defmodule MyServer do
use GenServer.Behaviour
end
`GenServer.Behaviour` defines new functions in our `MyServer` module.
However, if there is an exception in any of these functions, we want
the stacktrace to point to the `GenServer.Behaviour` and not the line
that calls `use GenServer.Behaviour`. For this reason, there is an
option called `:location` that when set to `:keep` keeps the original
line and file instead of setting them to 0:
quote location: :keep do
def handle_call(request, _from, state) do
{ :reply, :undef, state }
end
end
It is important to note, though, that `location: :keep` evaluates the
code as if it were defined inside the `GenServer.Behaviour` file. In
particular, the macro `__FILE__` and exceptions happening inside
the quote will always point to the `GenServer.Behaviour` file.
"""
defmacro quote(opts, block)
@doc """
When used inside quoting, marks that the variable should
not be hygienized. The argument can be either a variable
node (i.e. a tuple with three elements where the last
one is an atom) or an atom representing the variable name.
Check `quote/2` for more information.
"""
defmacro var!(var)
@doc """
Defines a variable in the given context.
Check `quote/2` for more information.
"""
defmacro var!(var, context)
@doc """
When used inside quoting, marks that the alias should not
be hygienized. This means the alias will be expanded when
the macro is expanded.
"""
defmacro alias!(alias)
@doc """
Unquotes the given expression from inside a macro.
## Examples
Imagine a situation where you have a variable `value` and
you want to inject it inside some quote. The first attempt
would be:
value = 13
quote do: sum(1, value, 3)
Which would then return:
{ :sum, [], [1, { :value, [], quoted }, 3] }
Which is not the expected result. For this, we use unquote:
value = 13
quote do: sum(1, unquote(value), 3)
#=> { :sum, [], [1, 13, 3] }
"""
name = :unquote
defmacro unquote(name)(expr)
@doc """
Unquotes the given list, expanding its arguments. Similar
to `unquote`.
## Examples
values = [2,3,4]
quote do: sum(1, unquote_splicing(values), 5)
#=> { :sum, [], [1, 2, 3, 4, 5] }
"""
name = :unquote_splicing
defmacro unquote(name)(expr)
@doc """
List comprehensions allow you to quickly build a list from another list:
iex> lc n inlist [1,2,3,4], do: n * 2
[2,4,6,8]
A comprehension accepts many generators and also filters. Generators
are defined using the `inlist` and `inbits` operators, allowing you
to loop over lists and bitstrings:
# A list generator:
iex> lc n inlist [1,2,3,4], do: n * 2
[2,4,6,8]
# A bit string generator:
iex> lc <<n>> inbits <<1,2,3,4>>, do: n * 2
[2,4,6,8]
# A generator from a variable:
iex> list = [1,2,3,4]
...> lc n inlist list, do: n * 2
[2,4,6,8]
# A comprehension with two generators
iex> lc x inlist [1,2], y inlist [2,3], do: x*y
[2,3,4,6]
Filters can also be given:
# A comprehension with a generator and a filter
iex> lc n inlist [1,2,3,4,5,6], rem(n, 2) == 0, do: n
[2,4,6]
Bit string generators are quite useful when you need to
organize bit string streams:
iex> pixels = <<213,45,132,64,76,32,76,0,0,234,32,15>>
iex> lc <<r::8,g::8,b::8>> inbits pixels, do: {r,g,b}
[{213,45,132},{64,76,32},{76,0,0},{234,32,15}]
"""
defmacro lc(args)
@doc """
Defines a bit comprehension. It follows the same syntax as
a list comprehension but expects each element returned to
be a bitstring. For example, here is how to remove all
spaces from a string:
iex> bc <<c>> inbits " hello world ", c != ?\s, do: <<c>>
"helloworld"
"""
defmacro bc(args)
@doc """
This is the special form used whenever we have a block
of expressions in Elixir. This special form is private
and should not be invoked directly:
iex> quote do: (1; 2; 3)
{ :__block__, [], [1,2,3] }
"""
defmacro __block__(args)
@doc """
This is the special form used whenever we have to temporarily
change the scope information of a block. Used when `quote` is
invoked with `location: :keep` to execute a given block as if
it belonged to another file.
quote location: :keep, do: 1
#=> { :__scope__, [line: 1], [[file: "iex"],[do: 1]] }
Check `quote/2` for more information.
"""
defmacro __scope__(opts, args)
@doc """
This is the special form used to hold aliases information.
It is usually compiled to an atom:
quote do: Foo.Bar #=>
{ :__aliases__, [], [:Foo,:Bar] }
Elixir represents `Foo.Bar` as `__aliases__` so calls can be
unambiguously identified by the operator `:.`. For example:
quote do: Foo.bar #=>
{{:.,[],[{:__aliases__,[],[:Foo]},:bar]},[],[]}
Whenever an expression iterator sees a `:.` as the tuple key,
it can be sure that it represents a call and that the second
argument in the list is an atom.
On the other hand, aliases hold some properties:
1) The head element of aliases can be any term;
2) The tail elements of aliases are guaranteed to always be atoms;
3) When the head element of aliases is the atom :Elixir, no expansion happens;
4) When the head element of aliases is not an atom, it is expanded at runtime:
quote do: some_var.Foo
{:__aliases__,[],[{:some_var,[],:quoted},:Foo]}
Since `some_var` is not available at compilation time, the compiler
expands such expression to:
Module.concat [some_var, Foo]
"""
defmacro __aliases__(args)
end
| lib/elixir/lib/kernel/special_forms.ex | 0.894706 | 0.533033 | special_forms.ex | starcoder |
defmodule Mix.Tasks.FixSlackMessageFormatting do
use Mix.Task
require Logger
import Ecto.Query, warn: false
alias ChatApi.{Messages, Repo, Slack, SlackAuthorizations}
alias ChatApi.Messages.Message
alias ChatApi.SlackAuthorizations.SlackAuthorization
@shortdoc "Fixes Slack message formatting for links and user IDs."
@moduledoc """
This task handles fixing Slack message formatting. For example, Slack has its own
markup for URLs and mailto links, which we want to convert to conventional markdown.
Slack also sends raw user IDs, which we want to convert to the user's display name.
Example:
```
$ mix fix_slack_message_formatting
$ mix fix_slack_message_formatting [ACCOUNT_TOKEN]
```
On Heroku:
```
$ heroku run "POOL_SIZE=2 mix fix_slack_message_formatting"
$ heroku run "POOL_SIZE=2 mix fix_slack_message_formatting [ACCOUNT_TOKEN]"
```
"""
@spec run([binary()]) :: :ok
def run(args) do
Application.ensure_all_started(:chat_api)
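# Select messages whose bodies contain raw Slack user mentions (the "<@U" prefix)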
Message
|> where([m], ilike(m.body, "%<@U%"))
|> filter_args(args)
|> Repo.all()
|> Enum.each(fn %Message{account_id: account_id, body: body} = message ->
case find_valid_slack_authorization(account_id) do
%SlackAuthorization{} = authorization ->
Messages.update_message(message, %{
body: Slack.Helpers.sanitize_slack_message(body, authorization),
metadata: Slack.Helpers.get_slack_message_metadata(body)
})
_ ->
nil
end
end)
end
@spec find_valid_slack_authorization(binary()) :: SlackAuthorization.t() | nil
def find_valid_slack_authorization(account_id) do
account_id
|> SlackAuthorizations.list_slack_authorizations_by_account()
|> Enum.find(fn auth ->
String.contains?(auth.scope, "users:read") &&
String.contains?(auth.scope, "users:read.email")
end)
end
@spec filter_args(Ecto.Query.t(), [binary()] | []) :: Ecto.Query.t()
def filter_args(query, []), do: query
def filter_args(query, [account_id]) do
query |> where(account_id: ^account_id)
end
def filter_args(query, _), do: query
end
| lib/mix/tasks/fix_slack_message_formatting.ex | 0.765155 | 0.557875 | fix_slack_message_formatting.ex | starcoder |
defmodule Surface.Compiler.Helpers do
alias Surface.AST
alias Surface.Compiler.CompileMeta
alias Surface.IOHelper
def interpolation_to_quoted!(text, meta) do
with {:ok, expr} <- Code.string_to_quoted(text, file: meta.file, line: meta.line),
:ok <- validate_interpolation(expr, meta) do
expr
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(error <> token, meta.file, line)
{:error, message} ->
IOHelper.compile_error(message, meta.file, meta.line)
_ ->
IOHelper.syntax_error(
"invalid interpolation '#{text}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, _attribute_name, :css_class, meta) do
with {:ok, expr} <-
Code.string_to_quoted("Surface.css_class([#{value}])", line: meta.line, file: meta.file) do
expr
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid css class expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, attribute_name, :map, meta) do
# Using :placeholder here because to_string(attribute_name) can screw with the representation
with {:ok, {event_value_func, meta, [:placeholder | opts]}} <-
Code.string_to_quoted("Surface.map_value(:placeholder, #{value})",
line: meta.line,
file: meta.file
) do
{event_value_func, meta, [attribute_name | opts]}
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid map expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, attribute_name, :keyword, meta) do
# Using :placeholder here because to_string(attribute_name) can screw with the representation
with {:ok, {event_value_func, meta, [:placeholder | opts]}} <-
Code.string_to_quoted("Surface.keyword_value(:placeholder, #{value})",
line: meta.line,
file: meta.file
) do
{event_value_func, meta, [attribute_name | opts]}
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid keyword expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, attribute_name, :event, meta) do
cid =
cond do
Module.open?(meta.caller.module) and
Module.get_attribute(meta.caller.module, :component_type) == Surface.LiveComponent ->
"@myself"
true ->
"nil"
end
# Using :placeholder here because to_string(attribute_name) can screw with the representation
with {:ok, {event_value_func, meta, [:placeholder | opts]}} <-
Code.string_to_quoted("Surface.event_value(:placeholder, [#{value}], #{cid})",
line: meta.line,
file: meta.file
) do
{event_value_func, meta, [attribute_name | opts]}
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid event expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, _attribute_name, :bindings, meta) do
with {:ok, {:identity, _, expr}} <-
Code.string_to_quoted("identity(#{value})", line: meta.line, file: meta.file) do
if Enum.count(expr) == 1 do
Enum.at(expr, 0)
else
expr
end
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid list expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, attribute_name, :list, meta) do
with {:ok, expr} <- Code.string_to_quoted(value, line: meta.line, file: meta.file) do
handle_list_expr(attribute_name, expr)
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid list expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, _attribute_name, :generator, meta) do
with {:ok, {:for, _, expr}} when is_list(expr) <-
Code.string_to_quoted("for #{value}", line: meta.line, file: meta.file) do
expr
else
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
_ ->
IOHelper.syntax_error(
"invalid generator expression '#{value}'",
meta.file,
meta.line
)
end
end
def attribute_expr_to_quoted!(value, _attribute_name, _type, meta) do
case Code.string_to_quoted(value, line: meta.line, file: meta.file) do
{:ok, expr} ->
expr
{:error, {line, error, token}} ->
IOHelper.syntax_error(
error <> token,
meta.file,
line
)
end
end
defp handle_list_expr(_name, {:<-, _, [binding, value]}) do
{binding, value}
end
defp handle_list_expr(_name, expr) when is_list(expr), do: expr
defp handle_list_expr(name, expr) do
quote generated: true do
case unquote(expr) do
value when is_list(value) ->
value
value ->
raise "invalid value for property \"#{unquote(name)}\". Expected a :list, got: #{
inspect(value)
}"
end
end
end
defp validate_interpolation({:@, _, [{:inner_content, _, args}]}, _meta) when is_list(args) do
{:error,
"""
the `inner_content` anonymous function should be called using \
the dot-notation. Use `@inner_content.([])` instead of `@inner_content([])`\
"""}
end
defp validate_interpolation(
{{:., _, [{{:., _, [_, :inner_content]}, _, []}]}, _, _},
_meta
) do
:ok
end
defp validate_interpolation({{:., _, dotted_args} = expr, metadata, args} = expression, meta) do
if List.last(dotted_args) == :inner_content and !Keyword.get(metadata, :no_parens, false) do
bad_str = Macro.to_string(expression)
args = if Enum.empty?(args), do: [args], else: args
# This constructs the syntax tree for dot-notation access to the inner_content function
replacement_str =
Macro.to_string(
{{:., [line: meta.line, file: meta.file],
[{expr, Keyword.put(metadata, :no_parens, true), []}]},
[line: meta.line, file: meta.file], args}
)
# to fix the lack of no_parens metadata on elixir < 1.10
|> String.replace("inner_content().(", "inner_content.(")
{:error,
"""
the `inner_content` anonymous function should be called using \
the dot-notation. Use `#{replacement_str}` instead of `#{bad_str}`\
"""}
else
[expr | args]
|> Enum.map(fn arg -> validate_interpolation(arg, meta) end)
|> Enum.find(:ok, &match?({:error, _}, &1))
end
end
defp validate_interpolation({func, _, args}, meta) when is_atom(func) and is_list(args) do
args
|> Enum.map(fn arg -> validate_interpolation(arg, meta) end)
|> Enum.find(:ok, &match?({:error, _}, &1))
end
defp validate_interpolation({func, _, args}, _meta) when is_atom(func) and is_atom(args),
do: :ok
defp validate_interpolation({func, _, args}, meta) when is_tuple(func) and is_list(args) do
[func | args]
|> Enum.map(fn arg -> validate_interpolation(arg, meta) end)
|> Enum.find(:ok, &match?({:error, _}, &1))
end
defp validate_interpolation({func, _, args}, meta) when is_tuple(func) and is_atom(args) do
validate_interpolation(func, meta)
end
defp validate_interpolation(expr, meta) when is_tuple(expr) do
expr
|> Tuple.to_list()
|> Enum.map(fn arg -> validate_interpolation(arg, meta) end)
|> Enum.find(:ok, &match?({:error, _}, &1))
end
defp validate_interpolation(expr, meta) when is_list(expr) do
expr
|> Enum.map(fn arg -> validate_interpolation(arg, meta) end)
|> Enum.find(:ok, &match?({:error, _}, &1))
end
defp validate_interpolation(_expr, _meta), do: :ok
def to_meta(%{line: line} = tree_meta, %CompileMeta{
line_offset: offset,
file: file,
caller: caller
}) do
AST.Meta
|> Kernel.struct(tree_meta)
# The rationale here is that offset is the offset from the start of the file to the first line in the
# surface expression.
|> Map.put(:line, line + offset - 1)
|> Map.put(:line_offset, offset)
|> Map.put(:file, file)
|> Map.put(:caller, caller)
end
def to_meta(%{line: line} = tree_meta, %AST.Meta{line_offset: offset} = parent_meta) do
parent_meta
|> Map.merge(tree_meta)
|> Map.put(:line, line + offset - 1)
end
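# Returns the {candidate, score} pair from `list` closest to `target`,
# scored by Jaro distance (1.0 means identical).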
def did_you_mean(target, list) do
Enum.reduce(list, {nil, 0}, &max_similar(&1, to_string(target), &2))
end
defp max_similar(source, target, {_, current} = best) do
score = source |> to_string() |> String.jaro_distance(target)
if score < current, do: best, else: {source, score}
end
def list_to_string(_singular, _plural, []) do
""
end
def list_to_string(singular, _plural, [item]) do
"#{singular} #{inspect(item)}"
end
def list_to_string(_singular, plural, items) do
[last | rest] = items |> Enum.map(&inspect/1) |> Enum.reverse()
"#{plural} #{rest |> Enum.reverse() |> Enum.join(", ")} and #{last}"
end
@blanks ' \n\r\t\v\b\f\e\d\a'
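# blank?/1 treats the characters in @blanks (space and common control
# characters) as blank, recursing into charlists and binaries.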
def blank?([]), do: true
def blank?([h | t]), do: blank?(h) && blank?(t)
def blank?(""), do: true
def blank?(char) when char in @blanks, do: true
def blank?(<<h, t::binary>>) when h in @blanks, do: blank?(t)
def blank?(_), do: false
def is_blank_or_empty(%AST.Text{value: value}),
do: blank?(value)
def is_blank_or_empty(%AST.Template{children: children}),
do: Enum.all?(children, &is_blank_or_empty/1)
def is_blank_or_empty(_node), do: false
def actual_module(mod_str, env) do
{:ok, ast} = Code.string_to_quoted(mod_str)
case Macro.expand(ast, env) do
mod when is_atom(mod) ->
{:ok, mod}
_ ->
{:error, "#{mod_str} is not a valid module name"}
end
end
def check_module_loaded(module, mod_str) do
case Code.ensure_compiled(module) do
{:module, mod} ->
{:ok, mod}
{:error, _reason} ->
{:error, "module #{mod_str} could not be loaded"}
end
end
def check_module_is_component(module, mod_str) do
if function_exported?(module, :component_type, 0) do
{:ok, module}
else
{:error, "module #{mod_str} is not a component"}
end
end
def module_name(name, caller) do
with {:ok, mod} <- actual_module(name, caller),
{:ok, mod} <- check_module_loaded(mod, name) do
check_module_is_component(mod, name)
end
end
end
| lib/surface/compiler/helpers.ex | 0.639849 | 0.401219 | helpers.ex | starcoder |
defmodule Jaxon.ParseError do
@type t :: %__MODULE__{
message: String.t() | nil,
unexpected: {:incomplete, String.t()} | {:error, String.t()} | nil,
expected: [atom()] | nil
}
defexception [:message, :unexpected, :expected]
defp event_to_pretty_name({:incomplete, {event, _}, _}) do
event_to_pretty_name(event)
end
defp event_to_pretty_name({:incomplete, str}) do
"incomplete string `#{String.slice(str, 0..15)}`"
end
defp event_to_pretty_name({:string, str}) do
"string \"#{str}\""
end
defp event_to_pretty_name({event, _}) do
event_to_pretty_name(event)
end
defp event_to_pretty_name(:integer) do
"number"
end
defp event_to_pretty_name(:value) do
"string, number, object, array"
end
defp event_to_pretty_name(:key) do
"key"
end
defp event_to_pretty_name(:end_object) do
"closing brace"
end
defp event_to_pretty_name(:end_array) do
"a closing bracket"
end
defp event_to_pretty_name(:comma) do
"comma"
end
defp event_to_pretty_name(:colon) do
"colon"
end
defp event_to_pretty_name(:end_stream) do
"end of stream"
end
defp event_to_pretty_name(event) do
to_string(event)
end
@spec message(t()) :: String.t()
def message(%{message: msg}) when is_binary(msg) do
msg
end
def message(%{unexpected: {:error, context}}) do
"Syntax error at `#{context}`"
end
def message(%{unexpected: unexpected, expected: []}) do
"Unexpected #{event_to_pretty_name(unexpected)}"
end
def message(%{unexpected: unexpected, expected: expected}) do
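# Render the expected events as "a, b or c": split off the last
# element and join the rest with commas.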
expected =
expected
|> Enum.map(&event_to_pretty_name/1)
|> Enum.split(-1)
|> case do
{[], [one]} ->
one
{h, [t]} ->
Enum.join(h, ", ") <> " or " <> t
end
"Unexpected #{event_to_pretty_name(unexpected)}, expected a #{expected} instead."
end
def unexpected_event(got, expected) do
%__MODULE__{
unexpected: got,
expected: expected
}
end
def syntax_error(context) do
%__MODULE__{
message: "Syntax error at `#{inspect(context)}`"
}
end
end
| lib/jaxon/parse_error.ex | 0.772144 | 0.462594 | parse_error.ex | starcoder |
defmodule Grizzly.ZWave.Commands.MultiChannelEndpointReport do
@moduledoc """
This command is used to advertise the number of End Points implemented by the sending node.
Params:
* `:dynamic` - whether the node implements a dynamic number of End Points (required)
* `:identical` - whether all end points have identical capabilities (required)
* `:endpoints` - the number of endpoints (required)
* `:aggregated_endpoints` - the number of Aggregated End Points implemented by this node (optional - v4)
"""
@behaviour Grizzly.ZWave.Command
alias Grizzly.ZWave.Command
alias Grizzly.ZWave.CommandClasses.MultiChannel
@type param ::
{:dynamic, boolean}
| {:identical, boolean}
| {:endpoints, byte}
| {:aggregated_endpoints, byte}
@impl true
@spec new([param()]) :: {:ok, Command.t()}
def new(params) do
command = %Command{
name: :multi_channel_endpoint_report,
command_byte: 0x08,
command_class: MultiChannel,
params: params,
impl: __MODULE__
}
{:ok, command}
end
@impl true
def encode_params(command) do
dynamic_bit = if Command.param!(command, :dynamic), do: 0x01, else: 0x00
identical_bit = if Command.param!(command, :identical), do: 0x01, else: 0x00
endpoints = Command.param!(command, :endpoints)
aggregated_endpoints = Command.param(command, :aggregated_endpoints)
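# Byte 1: dynamic (1 bit) | identical (1 bit) | reserved (6 bits)
# Byte 2: reserved (1 bit) | endpoints (7 bits)
# Byte 3 (v4 only): reserved (1 bit) | aggregated endpoints (7 bits)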
if aggregated_endpoints == nil do
<<dynamic_bit::size(1), identical_bit::size(1), 0x00::size(6), 0x00::size(1),
endpoints::size(7)>>
else
<<dynamic_bit::size(1), identical_bit::size(1), 0x00::size(6), 0x00::size(1),
endpoints::size(7), 0x00::size(1), aggregated_endpoints::size(7)>>
end
end
@impl true
# v4
def decode_params(
<<dynamic_bit::size(1), identical_bit::size(1), 0x00::size(6), 0x00::size(1),
endpoints::size(7), 0x00::size(1), aggregated_endpoints::size(7)>>
) do
dynamic? = dynamic_bit == 0x01
identical? = identical_bit == 0x01
{:ok,
[
dynamic: dynamic?,
identical: identical?,
endpoints: endpoints,
aggregated_endpoints: aggregated_endpoints
]}
end
def decode_params(
<<dynamic_bit::size(1), identical_bit::size(1), 0x00::size(6), 0x00::size(1),
endpoints::size(7)>>
) do
dynamic? = dynamic_bit == 0x01
identical? = identical_bit == 0x01
{:ok,
[
dynamic: dynamic?,
identical: identical?,
endpoints: endpoints,
aggregated_endpoints: 0
]}
end
end
| lib/grizzly/zwave/commands/multi_channel_endpoint_report.ex | 0.850918 | 0.467818 | multi_channel_endpoint_report.ex | starcoder |
defmodule Web.Skill do
@moduledoc """
Bounded context for the Phoenix app talking to the data layer
"""
alias Data.Effect
alias Data.Skill
alias Data.Repo
alias Game.Skills
alias Web.Filter
alias Web.Pagination
import Ecto.Query
@behaviour Filter
@doc """
Load all skills
"""
@spec all(Keyword.t()) :: [Skill.t()]
def all(opts \\ []) do
opts = Enum.into(opts, %{})
Skill
|> order_by([s], asc: s.level, asc: s.id)
|> preload([:classes])
|> Filter.filter(opts[:filter], __MODULE__)
|> Pagination.paginate(opts)
end
@impl Filter
def filter_on_attribute({"level_from", level}, query) do
query |> where([s], s.level >= ^level)
end
def filter_on_attribute({"level_to", level}, query) do
query |> where([s], s.level <= ^level)
end
def filter_on_attribute({"tag", value}, query) do
query
|> where([n], fragment("? @> ?::varchar[]", n.tags, [^value]))
end
def filter_on_attribute({:enabled, value}, query) do
query |> where([s], s.is_enabled == ^value)
end
def filter_on_attribute(_, query), do: query
@doc """
Get a skill
"""
@spec get(id :: integer) :: Skill.t()
def get(id) do
Skill |> Repo.get(id)
end
@doc """
Get a changeset for a new page
"""
@spec new() :: Ecto.Changeset.t()
def new(), do: %Skill{} |> Skill.changeset(%{})
@doc """
Get a changeset for an edit page
"""
@spec edit(skill :: Skill.t()) :: changeset :: map
def edit(skill), do: skill |> Skill.changeset(%{})
@doc """
Create a skill
"""
@spec create(map) :: {:ok, Skill.t()} | {:error, changeset :: map}
def create(params) do
changeset = %Skill{} |> Skill.changeset(cast_params(params))
case changeset |> Repo.insert() do
{:ok, skill} ->
Skills.insert(skill)
{:ok, skill}
{:error, changeset} ->
{:error, changeset}
end
end
@doc """
Update a skill
"""
@spec update(id :: integer, params :: map) :: {:ok, Skill.t()} | {:error, changeset :: map}
def update(id, params) do
skill = id |> get()
changeset = skill |> Skill.changeset(cast_params(params))
case changeset |> Repo.update() do
{:ok, skill} ->
Skills.reload(skill)
{:ok, skill}
{:error, changeset} ->
{:error, changeset}
end
end
@doc """
Cast params into what `Data.Skill` expects
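For example (illustrative):
iex> Web.Skill.cast_params(%{"tags" => "combat, fire"})
%{"tags" => ["combat", "fire"]}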
"""
@spec cast_params(params :: map) :: map
def cast_params(params) do
params
|> parse_effects()
|> parse_tags()
end
defp parse_effects(params = %{"effects" => effects}) do
case Poison.decode(effects) do
{:ok, effects} -> effects |> cast_effects(params)
_ -> params
end
end
defp parse_effects(params), do: params
defp cast_effects(effects, params) do
effects =
effects
|> Enum.map(fn effect ->
case Effect.load(effect) do
{:ok, effect} -> effect
_ -> nil
end
end)
|> Enum.reject(&is_nil/1)
Map.put(params, "effects", effects)
end
def parse_tags(params = %{"tags" => tags}) do
tags =
tags
|> String.split(",")
|> Enum.map(&String.trim/1)
params
|> Map.put("tags", tags)
end
def parse_tags(params), do: params
end
| lib/web/skill.ex | 0.826747 | 0.425426 | skill.ex | starcoder |
defmodule AWS.CodePipeline do
@moduledoc """
AWS CodePipeline
**Overview**
This is the AWS CodePipeline API Reference. This guide provides
descriptions of the actions and data types for AWS CodePipeline. Some
functionality for your pipeline is only configurable through the API. For
additional information, see the [AWS CodePipeline User
Guide](http://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html).
You can use the AWS CodePipeline API to work with pipelines, stages,
actions, gates, and transitions, as described below.
*Pipelines* are models of automated release processes. Each pipeline is
uniquely named, and consists of actions, gates, and stages.
You can work with pipelines by calling:
<ul> <li> `CreatePipeline`, which creates a uniquely-named pipeline.
</li> <li> `DeletePipeline`, which deletes the specified pipeline.
</li> <li> `GetPipeline`, which returns information about the pipeline
structure and pipeline metadata, including the pipeline Amazon Resource
Name (ARN).
</li> <li> `GetPipelineExecution`, which returns information about a
specific execution of a pipeline.
</li> <li> `GetPipelineState`, which returns information about the current
state of the stages and actions of a pipeline.
</li> <li> `ListPipelines`, which gets a summary of all of the pipelines
associated with your account.
</li> <li> `ListPipelineExecutions`, which gets a summary of the most
recent executions for a pipeline.
</li> <li> `StartPipelineExecution`, which runs the most recent
revision of an artifact through the pipeline.
</li> <li> `UpdatePipeline`, which updates a pipeline with edits or changes
to the structure of the pipeline.
</li> </ul> Pipelines include *stages*, which are logical groupings of
gates and actions. Each stage contains one or more actions that must
complete before the next stage begins. A stage will result in success or
failure. If a stage fails, then the pipeline stops at that stage and will
remain stopped until either a new version of an artifact appears in the
source location, or a user takes action to re-run the most recent artifact
through the pipeline. You can call `GetPipelineState`, which displays the
status of a pipeline, including the status of stages in the pipeline, or
`GetPipeline`, which returns the entire structure of the pipeline,
including the stages of that pipeline. For more information about the
structure of stages and actions, also refer to the [AWS CodePipeline
Pipeline Structure
Reference](http://docs.aws.amazon.com/codepipeline/latest/userguide/pipeline-structure.html).
Pipeline stages include *actions*, which are categorized into categories
such as source or build actions performed within a stage of a pipeline. For
example, you can use a source action to import artifacts into a pipeline
from a source such as Amazon S3. Like stages, you do not work with actions
directly in most cases, but you do define and interact with actions when
working with pipeline operations such as `CreatePipeline` and
`GetPipelineState`.
Pipelines also include *transitions*, which allow the transition of
artifacts from one stage to the next in a pipeline after the actions in one
stage complete.
You can work with transitions by calling:
<ul> <li> `DisableStageTransition`, which prevents artifacts from
transitioning to the next stage in a pipeline.
</li> <li> `EnableStageTransition`, which enables transition of artifacts
between stages in a pipeline.
</li> </ul> **Using the API to integrate with AWS CodePipeline**
For third-party integrators or developers who want to create their own
integrations with AWS CodePipeline, the expected sequence varies from the
standard API user. In order to integrate with AWS CodePipeline, developers
will need to work with the following items:
**Jobs**, which are instances of an action. For example, a job for a source
action might import a revision of an artifact from a source.
You can work with jobs by calling:
<ul> <li> `AcknowledgeJob`, which confirms whether a job worker has
received the specified job,
</li> <li> `GetJobDetails`, which returns the details of a job,
</li> <li> `PollForJobs`, which determines whether there are any jobs to
act upon,
</li> <li> `PutJobFailureResult`, which provides details of a job failure,
and
</li> <li> `PutJobSuccessResult`, which provides details of a job success.
</li> </ul> **Third party jobs**, which are instances of an action created
by a partner action and integrated into AWS CodePipeline. Partner actions
are created by members of the AWS Partner Network.
You can work with third party jobs by calling:
<ul> <li> `AcknowledgeThirdPartyJob`, which confirms whether a job worker
has received the specified job,
</li> <li> `GetThirdPartyJobDetails`, which requests the details of a job
for a partner action,
</li> <li> `PollForThirdPartyJobs`, which determines whether there are any
jobs to act upon,
</li> <li> `PutThirdPartyJobFailureResult`, which provides details of a job
failure, and
</li> <li> `PutThirdPartyJobSuccessResult`, which provides details of a job
success.
</li> </ul>
"""
@doc """
Returns information about a specified job and whether that job has been
received by the job worker. Only used for custom actions.
"""
def acknowledge_job(client, input, options \\ []) do
request(client, "AcknowledgeJob", input, options)
end
@doc """
Confirms a job worker has received the specified job. Only used for partner
actions.
"""
def acknowledge_third_party_job(client, input, options \\ []) do
request(client, "AcknowledgeThirdPartyJob", input, options)
end
@doc """
Creates a new custom action that can be used in all pipelines associated
with the AWS account. Only used for custom actions.
"""
def create_custom_action_type(client, input, options \\ []) do
request(client, "CreateCustomActionType", input, options)
end
@doc """
Creates a pipeline.
"""
def create_pipeline(client, input, options \\ []) do
request(client, "CreatePipeline", input, options)
end
@doc """
Marks a custom action as deleted. PollForJobs for the custom action will
fail after the action is marked for deletion. Only used for custom actions.
<important> You cannot recreate a custom action after it has been deleted
unless you increase the version number of the action.
</important>
"""
def delete_custom_action_type(client, input, options \\ []) do
request(client, "DeleteCustomActionType", input, options)
end
@doc """
Deletes the specified pipeline.
"""
def delete_pipeline(client, input, options \\ []) do
request(client, "DeletePipeline", input, options)
end
@doc """
Prevents artifacts in a pipeline from transitioning to the next stage in
the pipeline.
"""
def disable_stage_transition(client, input, options \\ []) do
request(client, "DisableStageTransition", input, options)
end
@doc """
Enables artifacts in a pipeline to transition to a stage in a pipeline.
"""
def enable_stage_transition(client, input, options \\ []) do
request(client, "EnableStageTransition", input, options)
end
@doc """
Returns information about a job. Only used for custom actions.
<important> When this API is called, AWS CodePipeline returns temporary
credentials for the Amazon S3 bucket used to store artifacts for the
pipeline, if the action requires access to that Amazon S3 bucket for input
or output artifacts. Additionally, this API returns any secret values
defined for the action.
</important>
"""
def get_job_details(client, input, options \\ []) do
request(client, "GetJobDetails", input, options)
end
@doc """
Returns the metadata, structure, stages, and actions of a pipeline. Can be
used to return the entire structure of a pipeline in JSON format, which can
then be modified and used to update the pipeline structure with
`UpdatePipeline`.
"""
def get_pipeline(client, input, options \\ []) do
request(client, "GetPipeline", input, options)
end
@doc """
Returns information about an execution of a pipeline, including details
about artifacts, the pipeline execution ID, and the name, version, and
status of the pipeline.
"""
def get_pipeline_execution(client, input, options \\ []) do
request(client, "GetPipelineExecution", input, options)
end
@doc """
Returns information about the state of a pipeline, including the stages and
actions.
"""
def get_pipeline_state(client, input, options \\ []) do
request(client, "GetPipelineState", input, options)
end
@doc """
Requests the details of a job for a third party action. Only used for
partner actions.
<important> When this API is called, AWS CodePipeline returns temporary
credentials for the Amazon S3 bucket used to store artifacts for the
pipeline, if the action requires access to that Amazon S3 bucket for input
or output artifacts. Additionally, this API returns any secret values
defined for the action.
</important>
"""
def get_third_party_job_details(client, input, options \\ []) do
request(client, "GetThirdPartyJobDetails", input, options)
end
@doc """
Gets a summary of all AWS CodePipeline action types associated with your
account.
"""
def list_action_types(client, input, options \\ []) do
request(client, "ListActionTypes", input, options)
end
@doc """
Gets a summary of the most recent executions for a pipeline.
"""
def list_pipeline_executions(client, input, options \\ []) do
request(client, "ListPipelineExecutions", input, options)
end
@doc """
Gets a summary of all of the pipelines associated with your account.
"""
def list_pipelines(client, input, options \\ []) do
request(client, "ListPipelines", input, options)
end
@doc """
Returns information about any jobs for AWS CodePipeline to act upon.
<important> When this API is called, AWS CodePipeline returns temporary
credentials for the Amazon S3 bucket used to store artifacts for the
pipeline, if the action requires access to that Amazon S3 bucket for input
or output artifacts. Additionally, this API returns any secret values
defined for the action.
</important>
"""
def poll_for_jobs(client, input, options \\ []) do
request(client, "PollForJobs", input, options)
end
@doc """
Determines whether there are any third party jobs for a job worker to act
on. Only used for partner actions.
<important> When this API is called, AWS CodePipeline returns temporary
credentials for the Amazon S3 bucket used to store artifacts for the
pipeline, if the action requires access to that Amazon S3 bucket for input
or output artifacts.
</important>
"""
def poll_for_third_party_jobs(client, input, options \\ []) do
request(client, "PollForThirdPartyJobs", input, options)
end
@doc """
Provides information to AWS CodePipeline about new revisions to a source.
"""
def put_action_revision(client, input, options \\ []) do
request(client, "PutActionRevision", input, options)
end
@doc """
Provides the response to a manual approval request to AWS CodePipeline.
Valid responses include Approved and Rejected.
"""
def put_approval_result(client, input, options \\ []) do
request(client, "PutApprovalResult", input, options)
end
@doc """
Represents the failure of a job as returned to the pipeline by a job
worker. Only used for custom actions.
"""
def put_job_failure_result(client, input, options \\ []) do
request(client, "PutJobFailureResult", input, options)
end
@doc """
Represents the success of a job as returned to the pipeline by a job
worker. Only used for custom actions.
"""
def put_job_success_result(client, input, options \\ []) do
request(client, "PutJobSuccessResult", input, options)
end
@doc """
Represents the failure of a third party job as returned to the pipeline by
a job worker. Only used for partner actions.
"""
def put_third_party_job_failure_result(client, input, options \\ []) do
request(client, "PutThirdPartyJobFailureResult", input, options)
end
@doc """
Represents the success of a third party job as returned to the pipeline by
a job worker. Only used for partner actions.
"""
def put_third_party_job_success_result(client, input, options \\ []) do
request(client, "PutThirdPartyJobSuccessResult", input, options)
end
@doc """
Resumes the pipeline execution by retrying the last failed actions in a
stage.
"""
def retry_stage_execution(client, input, options \\ []) do
request(client, "RetryStageExecution", input, options)
end
@doc """
Starts the specified pipeline. Specifically, it begins processing the
latest commit to the source location specified as part of the pipeline.
"""
def start_pipeline_execution(client, input, options \\ []) do
request(client, "StartPipelineExecution", input, options)
end
@doc """
Updates a specified pipeline with edits or changes to its structure. Use a
JSON file with the pipeline structure in conjunction with UpdatePipeline to
provide the full structure of the pipeline. Updating the pipeline increases
the version number of the pipeline by 1.
"""
def update_pipeline(client, input, options \\ []) do
request(client, "UpdatePipeline", input, options)
end
@spec request(map(), binary(), map(), list()) ::
{:ok, Poison.Parser.t | nil, Poison.Response.t} |
{:error, Poison.Parser.t} |
{:error, HTTPoison.Error.t}
defp request(client, action, input, options) do
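# Build a signed AWS JSON 1.1 request; the X-Amz-Target header selects
# which CodePipeline action to invoke.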
client = %{client | service: "codepipeline"}
host = get_host("codepipeline", client)
url = get_url(host, client)
headers = [{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "CodePipeline_20150709.#{action}"}]
payload = Poison.Encoder.encode(input, [])
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
case HTTPoison.post(url, payload, headers, options) do
{:ok, response=%HTTPoison.Response{status_code: 200, body: ""}} ->
{:ok, nil, response}
{:ok, response=%HTTPoison.Response{status_code: 200, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, _response=%HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body)
exception = error["__type"]
message = error["message"]
{:error, {exception, message}}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp get_host(endpoint_prefix, client) do
if client.region == "local" do
"localhost"
else
"#{endpoint_prefix}.#{client.region}.#{client.endpoint}"
end
end
defp get_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
end
| lib/aws/code_pipeline.ex | 0.927741 | 0.753648 | code_pipeline.ex | starcoder |
defmodule Zaryn.P2P.GeoPatch do
@moduledoc """
Provide functions for Geographical Patching from IP address
Each patch is represented by 3 digits in hexadecimal form (e.g. AAA, F3C)
"""
alias __MODULE__.GeoIP
@doc """
Get a patch from an IP address
"""
@spec from_ip(:inet.ip_address()) :: binary()
def from_ip({127, 0, 0, 1}), do: compute_random_patch()
def from_ip(ip) when is_tuple(ip) do
case GeoIP.get_coordinates(ip) do
{0.0, 0.0} ->
compute_random_patch()
{lat, lon} ->
compute_patch(lat, lon)
end
end
defp compute_random_patch do
list_char = Enum.concat([?0..?9, ?A..?F])
Enum.take_random(list_char, 3) |> List.to_string()
end
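# Derive three hex digits by subdividing the coordinate space three times:
# each step normalizes (lat, lon) into [-1, 1] ranges and index_patch/1
# maps the resulting cell to one hex character.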
defp compute_patch(lat, lon) do
lat_sign = sign(lat)
lon_sign = sign(lon)
fdc = [lat / 90, lon / 180]
sd =
[(lat - lat_sign * 45) / 2, (lon - lon_sign * 90) / 2]
|> resolve_with_sign([lat, lon])
sdc = [List.first(sd) / 22.5, List.last(sd) / 45]
td =
[
(List.first(sd) - lat_sign * 11.25) / 2,
(List.last(sd) - lon_sign * 22.5) / 2
]
|> resolve_with_sign(sd)
tdc = [List.first(td) / 5.625, List.last(td) / 11.25]
patch =
[index_patch(fdc), index_patch(sdc), index_patch(tdc)]
|> Enum.join("")
patch
end
defp index_patch([f_i, s_i]) when f_i > 0.5 and f_i <= 1 and s_i < -0.5 and s_i >= -1, do: '0'
defp index_patch([f_i, s_i]) when f_i > 0.5 and f_i <= 1 and s_i < 0 and s_i >= -0.5, do: '1'
defp index_patch([f_i, s_i]) when f_i > 0.5 and f_i <= 1 and s_i < 0.5 and s_i >= 0, do: '2'
defp index_patch([f_i, s_i]) when f_i > 0.5 and f_i <= 1 and s_i < 1 and s_i >= 0.5, do: '3'
defp index_patch([f_i, s_i]) when f_i > 0 and f_i <= 0.5 and s_i < -0.5 and s_i >= -1, do: '4'
defp index_patch([f_i, s_i]) when f_i > 0 and f_i <= 0.5 and s_i < 0 and s_i >= -0.5, do: '5'
defp index_patch([f_i, s_i]) when f_i > 0 and f_i <= 0.5 and s_i < 0.5 and s_i >= 0, do: '6'
defp index_patch([f_i, s_i]) when f_i > 0 and f_i <= 0.5 and s_i < 1 and s_i >= 0.5, do: '7'
defp index_patch([f_i, s_i]) when f_i > -0.5 and f_i <= 0 and s_i < -0.5 and s_i >= -1, do: '8'
defp index_patch([f_i, s_i]) when f_i > -0.5 and f_i <= 0 and s_i < 0 and s_i >= -0.5, do: '9'
defp index_patch([f_i, s_i]) when f_i > -0.5 and f_i <= 0 and s_i < 0.5 and s_i >= 0, do: 'A'
defp index_patch([f_i, s_i]) when f_i > -0.5 and f_i <= 0 and s_i < 1 and s_i >= 0.5, do: 'B'
defp index_patch([f_i, s_i]) when f_i > -1 and f_i <= -0.5 and s_i < -0.5 and s_i >= -1, do: 'C'
defp index_patch([f_i, s_i]) when f_i > -1 and f_i <= -0.5 and s_i < 0 and s_i >= -0.5, do: 'D'
defp index_patch([f_i, s_i]) when f_i > -1 and f_i <= -0.5 and s_i < 0.5 and s_i >= 0, do: 'E'
defp index_patch([f_i, s_i]) when f_i > -1 and f_i <= -0.5 and s_i < 1 and s_i >= 0.5, do: 'F'
defp sign(number) when number < 0, do: -1
defp sign(number) when number >= 0, do: 1
defp resolve_with_sign([first, second], [first2, second2]) do
[
do_resolve_with_sign(first, first2),
do_resolve_with_sign(second, second2)
]
end
defp do_resolve_with_sign(x1, x2) do
if sign(x1) == sign(x2) do
x1
else
x2 / 2
end
end
end
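# A minimal usage sketch (hedged: concrete patches for routable addresses
# depend on the GeoIP database behind GeoIP.get_coordinates/1, so the values
# below are illustrative only):
#
#     Zaryn.P2P.GeoPatch.from_ip({127, 0, 0, 1})
#     #=> "F3C"  (loopback gets a random patch, so any 3 hex digits may appear)
#
#     Zaryn.P2P.GeoPatch.from_ip({8, 8, 8, 8})
#     #=> "021"  (deterministic for whatever coordinates the lookup returns)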
# Source: lib/zaryn/p2p/geo_patch.ex
defmodule DBConnection.OwnershipError do
defexception [:message]
def exception(message), do: %DBConnection.OwnershipError{message: message}
end
defmodule DBConnection.Ownership do
@moduledoc """
A DBConnection pool that requires explicit checkout and checkin
as a mechanism to coordinate between processes.
## Options
* `:ownership_mode` - When mode is `:manual`, all connections must
be explicitly checked out before use via `ownership_checkout/2`.
Otherwise, mode is `:auto` and connections are checked out
implicitly. `{:shared, owner}` mode is also supported so
processes are allowed on demand. In all cases, checkins are
explicit via `ownership_checkin/2`. Defaults to `:auto`.
* `:ownership_timeout` - The maximum time that a process is allowed to own
a connection, default `60_000`. This timeout exists mostly for sanity
checking purposes and can be increased at will, since DBConnection
automatically checks in connections whenever there is a mode change.
* `:ownership_log` - The `Logger.level` to log ownership changes, or `nil`
not to log, default `nil`.
There are also two experimental options, `:post_checkout` and `:pre_checkin`
which allows a developer to configure what happens when a connection is
checked out and checked in. Those options are meant to be used during tests,
and have the following behaviour:
* `:post_checkout` - it must be an anonymous function that receives the
connection module, the connection state and it must return either
`{:ok, connection_module, connection_state}` or
`{:disconnect, err, connection_module, connection_state}`. This allows
the developer to change the connection module on post checkout. However,
in case of disconnects, the return `connection_module` must be the same
as the `connection_module` given. Defaults to simply returning the given
connection module and state.
* `:pre_checkin` - it must be an anonymous function that receives the
checkin reason (`:checkin`, `{:disconnect, err}` or `{:stop, err}`),
the connection module and the connection state returned by `post_checkout`.
It must return either `{:ok, connection_module, connection_state}` or
`{:disconnect, err, connection_module, connection_state}` where the connection
module is the module given to `:post_checkout`. Defaults to simply returning
the given connection module and state.
## Caller handling
If the `:caller` option is given on checkout with a pid and no pool is
assigned to the current process, a connection will be allowed from the
given pid and used on checkout. This is useful when multiple tasks need
to collaborate on the same connection (hence the `:infinity` timeout).
"""
alias DBConnection.Ownership.Manager
alias DBConnection.Holder
@doc false
def child_spec(args) do
Supervisor.Spec.worker(Manager, [args])
end
@doc """
Explicitly checks a connection out from the ownership manager.
It may return `:ok` if the connection is checked out, or
`{:already, :owner | :allowed}` if the caller process already
has a connection. It raises an error if the connection could
not be checked out.
"""
@spec ownership_checkout(GenServer.server, Keyword.t) ::
:ok | {:already, :owner | :allowed}
def ownership_checkout(manager, opts) do
with {:ok, pid} <- Manager.checkout(manager, opts) do
case Holder.checkout(pid, opts) do
{:ok, pool_ref, _module, _state} ->
Holder.checkin(pool_ref)
{:error, err} ->
raise err
end
end
end
@doc """
Changes the ownership mode.
`mode` may be `:auto`, `:manual` or `{:shared, owner}`.
The operation will always succeed when setting the mode to
`:auto` or `:manual`. It may fail with reason `:not_owner`
or `:not_found` when setting `{:shared, pid}` and the
given pid does not own any connection. May return
`:already_shared` if another process set the ownership
mode to `{:shared, _}` and is still alive.
"""
@spec ownership_mode(GenServer.server, :auto | :manual | {:shared, pid}, Keyword.t) ::
:ok | :already_shared | :not_owner | :not_found
defdelegate ownership_mode(manager, mode, opts), to: Manager, as: :mode
@doc """
Checks a connection back in.
A connection can only be checked back in by its owner.
"""
@spec ownership_checkin(GenServer.server, Keyword.t) ::
:ok | :not_owner | :not_found
defdelegate ownership_checkin(manager, opts), to: Manager, as: :checkin
@doc """
Allows the process given by `allow` to use the connection checked out
by `owner_or_allowed`.
It may return `:ok` if the connection is checked out.
`{:already, :owner | :allowed}` if the `allow` process already
has a connection. `owner_or_allowed` may either be the owner or any
other allowed process. Returns `:not_found` if the given process
does not have any connection checked out.
"""
@spec ownership_allow(GenServer.server, owner_or_allowed :: pid, allow :: pid, Keyword.t) ::
:ok | {:already, :owner | :allowed} | :not_found
defdelegate ownership_allow(manager, owner, allow, opts), to: Manager, as: :allow
end
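# A minimal manual-mode sketch (hedged: `manager` is assumed to be a started
# ownership manager, e.g. one set up by a test helper):
#
#     :ok = DBConnection.Ownership.ownership_checkout(manager, [])
#     :ok = DBConnection.Ownership.ownership_mode(manager, {:shared, self()}, [])
#     # ... collaborating processes run queries on the shared connection ...
#     :ok = DBConnection.Ownership.ownership_checkin(manager, [])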
# Source: deps/db_connection/lib/db_connection/ownership.ex
defmodule Serum.Plugins.TableOfContents do
@moduledoc """
A Serum plugin that inserts a table of contents.
## Using the Plugin
First, add this plugin to your `serum.exs`:
%{
plugins: [
#{__MODULE__ |> to_string() |> String.replace_prefix("Elixir.", "")}
]
}
This plugin works with both pages (`.md`, `.html`, and `.html.eex`) and blog
posts (`.md`). Insert the `<serum-toc>` tag at the position where you want to
display a table of contents.
<serum-toc start="2" end="4"></serum-toc>
The `start` and `end` attributes define the range of heading levels this
plugin recognizes. In the above example, `<h1>`, `<h5>`, and `<h6>` tags
are ignored when generating the table of contents.
After this plugin has run, each `<serum-toc>` tag is replaced with an
unordered list:
<ul id="toc" class="serum-toc">
<li class="indent-0">
<a href="#s_1">
<span class="number">1</span>
Section 1
</a>
</li>
<!-- More list items here... -->
</ul>
This plugin produces a "flat" unordered list. However, each list item tag has
an `indent-x` class, where `x` is an indentation level (from 0 to 5) of the
current item in the list. You can utilize this when working on stylesheets.
The `id` attribute of each target heading tag is used when hyperlinks are
generated. If the element does not have an `id`, the plugin will set one
appropriately.
## Notes
You may use the `<serum-toc>` tag more than once in a single page. However, all
occurrences of this tag will be replaced with a table of contents generated
using the attributes of the first one. That is, for example, all three tags
in the code below expand to the same table of contents, showing a 2-level
deep list.
<serum-toc start="2" end="3"></serum-toc>
...
<serum-toc></serum-toc>
...
<serum-toc></serum-toc>
It's recommended that you wrap a `<serum-toc>` tag in a `<div>` tag when
using it in a markdown file, to ensure a well-formed structure of HTML output.
<div><serum-toc ...></serum-toc></div>
And finally, make sure you close every `<serum-toc>` tag properly
with `</serum-toc>`.
"""
@behaviour Serum.Plugin
alias Serum.HtmlTreeHelper, as: Html
serum_ver = Version.parse!(Mix.Project.config()[:version])
serum_req = "~> #{serum_ver.major}.#{serum_ver.minor}"
def name, do: "Table of Contents"
def version, do: "1.0.0"
def elixir, do: ">= 1.6.0"
def serum, do: unquote(serum_req)
def description, do: "Inserts a table of contents into pages or posts."
def implements,
do: [
:rendering_fragment
]
def rendering_fragment(html, metadata)
def rendering_fragment(html, %{type: :page}), do: {:ok, insert_toc(html)}
def rendering_fragment(html, %{type: :post}), do: {:ok, insert_toc(html)}
def rendering_fragment(html, _), do: {:ok, html}
@spec insert_toc(Html.tree()) :: Html.tree()
defp insert_toc(html) do
case Floki.find(html, "serum-toc") do
[] ->
html
[{"serum-toc", attr_list, _} | _] ->
{start, end_} = get_range(attr_list)
state = {start, end_, start, [0], []}
{new_tree, new_state} = Html.traverse(html, state, &tree_fun/2)
items = new_state |> elem(4) |> Enum.reverse()
toc = {"ul", [{"id", "toc"}, {"class", "serum-toc"}], items}
Html.traverse(new_tree, fn
{"serum-toc", _, _} -> toc
x -> x
end)
end
end
@spec get_range([{binary(), binary()}]) :: {integer(), integer()}
defp get_range(attr_list) do
attr_map = Map.new(attr_list)
start = attr_map["start"]
end_ = attr_map["end"]
start = (start && parse_h_level(start, 1)) || 1
end_ = (end_ && parse_h_level(end_, 6)) || 6
end_ = max(start, end_)
{start, end_}
end
@spec parse_h_level(binary(), integer()) :: integer()
defp parse_h_level(str, default) do
case Integer.parse(str) do
{level, ""} -> max(1, min(level, 6))
_ -> default
end
end
@spec tree_fun(Html.tree(), term()) :: {Html.tree(), term()}
defp tree_fun(tree, state)
defp tree_fun({<<?h::8, ch::8, _::binary>>, _, _} = tree, state) when ch in ?1..?6 do
{start, end_, prev_level, counts, items} = state
level = ch - ?0
if level >= start and level <= end_ do
new_counts = update_counts(counts, level, prev_level)
num_dot = new_counts |> Enum.reverse() |> Enum.join(".")
{tree2, id} = try_set_id(tree, "s_#{num_dot}")
link = toc_link(tree2, num_dot, id)
item = {"li", [{"class", "indent-#{level - start}"}], [link]}
new_state = {start, end_, level, new_counts, [item | items]}
{tree2, new_state}
else
{tree, state}
end
end
defp tree_fun(x, state), do: {x, state}
@spec strip_a_tags(Html.tree()) :: Html.tree()
defp strip_a_tags(tree)
defp strip_a_tags({"a", _, children}), do: children
defp strip_a_tags(x), do: x
@spec update_counts([integer()], integer(), integer()) :: [integer()]
defp update_counts(counts, level, prev_level) do
case level - prev_level do
0 ->
[x | xs] = counts
[x + 1 | xs]
diff when diff < 0 ->
[x | xs] = Enum.drop(counts, -diff)
[x + 1 | xs]
diff when diff > 0 ->
List.duplicate(1, diff) ++ counts
end
end
@spec toc_link(Html.tree(), binary(), binary()) :: Html.tree()
defp toc_link({_, _, children} = _header_tag, num_dot, target_id) do
num_span = {"span", [{"class", "number"}], [num_dot]}
contents = Html.traverse(children, &strip_a_tags/1)
{"a", [{"href", <<?#, target_id::binary>>}], [num_span | contents]}
end
@spec try_set_id(Html.tree(), binary()) :: {Html.tree(), binary()}
defp try_set_id({tag_name, attrs, children} = tree, new_id) do
case Enum.find(attrs, fn {k, _} -> k === "id" end) do
{"id", id} -> {tree, id}
nil -> {{tag_name, [{"id", new_id} | attrs], children}, new_id}
end
end
end
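# A worked sketch of the numbering logic in update_counts/3: counts are kept
# deepest-level-first and reversed for display, so
#
#     update_counts([1], 2, 1)     #=> [1, 1]  rendered as "1.1"
#     update_counts([1, 1], 2, 2)  #=> [2, 1]  rendered as "1.2"
#     update_counts([2, 1], 1, 2)  #=> [2]     rendered as "2"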
# Source: lib/serum/plugins/table_of_contents.ex
# based on XKCD 287 - https://xkcd.com/287/
defmodule Orderer do
@moduledoc """
Generates orders of an exact target amount
"""
@type menu() :: [Item.t, ...]
@type order() :: [Entry.t, ...]
@spec generate({menu(), Money.t}) :: [order()]
@doc ~S"""
Wrapper around generation routine to assure that items are sorted
"""
def generate({menu, total}) do
suborders(Item.sort(menu), total)
end
@spec trivial_orders(Item.t, Money.t) :: [order()]
@doc ~S"""
Returns a list holding a single order made of a multiple of one item with the desired total if one exists,
or an empty list if none exists. Uses integer div/rem. Only ever returns one order, but the name is plural
since it returns a list.
## Examples
iex> Orderer.trivial_orders(Item.parse("Борщ,$1.23"),Money.parse!("$7.38"))
[[%Entry{item: %Item{name: "Борщ", price: %Money{amount: 123, currency: :USD}},quantity: 6}]]
iex> Orderer.trivial_orders(Item.parse("Борщ,$1.23"),Money.parse!("$7.00"))
[]
"""
def trivial_orders(%Item{price: %Money{amount: price}} = item, %Money{amount: amount} = total) when amount > 0 and rem(amount, price)==0 do
[[Entry.new(item, div(total.amount, item.price.amount))]]
end
def trivial_orders(_item, _total) do
[]
end
@spec add_base_order([order()], Item.t, integer) :: [order()]
@doc ~S"""
Append a single item to each order if the quantity is positive
## Examples
iex> Orderer.add_base_order([
...> [%Entry{item: %Item{name: "avocado", price: %Money{amount: 100, currency: :USD}}, quantity: 1}, %Entry{item: %Item{name: "bacon", price: %Money{amount: 200, currency: :USD}}, quantity: 2}],
...> [%Entry{item: %Item{name: "cheddar", price: %Money{amount: 300, currency: :USD}}, quantity: 3}]],
...> %Item{name: "doritos", price: %Money{amount: 400, currency: :USD}}, 4)
[[%Entry{item: %Item{name: "avocado", price: %Money{amount: 100, currency: :USD}}, quantity: 1},
%Entry{item: %Item{name: "bacon", price: %Money{amount: 200, currency: :USD}}, quantity: 2},
%Entry{item: %Item{name: "doritos", price: %Money{amount: 400, currency: :USD}}, quantity: 4}],
[%Entry{item: %Item{name: "cheddar", price: %Money{amount: 300, currency: :USD}}, quantity: 3},
%Entry{item: %Item{name: "doritos", price: %Money{amount: 400, currency: :USD}}, quantity: 4}]]
iex> Orderer.add_base_order([
...> [%Entry{item: %Item{name: "avocado", price: %Money{amount: 100, currency: :USD}}, quantity: 1}, %Entry{item: %Item{name: "bacon", price: %Money{amount: 200, currency: :USD}}, quantity: 2}],
...> [%Entry{item: %Item{name: "cheddar", price: %Money{amount: 300, currency: :USD}}, quantity: 3}]],
...> %Item{name: "doritos", price: %Money{amount: 400, currency: :USD}}, 0)
[[%Entry{item: %Item{name: "avocado", price: %Money{amount: 100, currency: :USD}}, quantity: 1},
%Entry{item: %Item{name: "bacon", price: %Money{amount: 200, currency: :USD}}, quantity: 2}],
[%Entry{item: %Item{name: "cheddar", price: %Money{amount: 300, currency: :USD}}, quantity: 3}]]
"""
def add_base_order(orders, item, quantity) when quantity > 0 do
Enum.map(orders, fn order -> order ++ [Entry.new(item, quantity)] end)
end
def add_base_order(orders, _, _) do
orders
end
@spec suborders(menu(), Money.t) :: [order()]
@doc ~S"""
Return a list of orders with the given total. Expects menu to be sorted with most expensive items first.
iex> Orderer.suborders(Item.parse(["Cheezborger,$3.15","Hamborger,$2.95","Pepsi,$1.65","Chips,$1.00"]), Money.parse!("$5.80"))
[[%Entry{item: %Item{name: "Chips", price: %Money{amount: 100, currency: :USD}}, quantity: 1},
%Entry{item: %Item{name: "Pepsi", price: %Money{amount: 165, currency: :USD}}, quantity: 1},
%Entry{item: %Item{name: "Cheezborger", price: %Money{amount: 315, currency: :USD}}, quantity: 1}]]
"""
def suborders(menu, %Money{amount: amount}) when length(menu) == 0 or amount <= 0 do
[]
end
def suborders(menu, total) do
[item | items] = menu
Enum.reduce(0..div(total.amount, item.price.amount), trivial_orders(item, total),
fn quantity, results ->
subtotal = Money.subtract(total, Money.multiply(item.price, quantity))
results ++ (Orderer.suborders(items, subtotal) |> add_base_order(item, quantity))
end)
end
end
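# A minimal usage sketch (hedged: Item.parse/1 and Money.parse!/1 are assumed
# to behave as in the doctests above; prices are from the XKCD 287 menu):
#
#     menu = Item.parse(["Mixed Fruit,$2.15", "Hot Wings,$3.55", "Sampler Plate,$5.80"])
#     Orderer.generate({menu, Money.parse!("$15.05")})
#     #=> every order whose entries sum to exactly $15.05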
# Source: lib/orderer.ex
defmodule Sizeable do
@moduledoc """
A library to make file sizes human-readable
"""
require Logger
@bits ~w(b Kb Mb Gb Tb Pb Eb Zb Yb)
@bytes ~w(B KB MB GB TB PB EB ZB YB)
@doc """
see `filesize(value, options)`
"""
def filesize(value) do
filesize(value, [])
end
def filesize(value, options) when is_map(options) do
Logger.warn("Using maps for options is deprecated. Please use Keyword Lists.")
filesize(value, Map.to_list(options))
end
def filesize(value, options) when is_bitstring(value) do
case Integer.parse(value) do
{parsed, _rem} -> filesize(parsed, options)
:error -> raise "Value is not a Number"
end
end
def filesize(value, options) when is_integer(value) do
{parsed, _rem} = value |> Integer.to_string() |> Float.parse()
filesize(parsed, options)
end
def filesize(0.0, options) do
spacer = Keyword.get(options, :spacer, " ")
bits = Keyword.get(options, :bits, false)
output = Keyword.get(options, :output, :string)
{:ok, unit} = case bits do
true -> Enum.fetch(@bits, 0)
false -> Enum.fetch(@bytes, 0)
end
filesize_output(output, 0, unit, spacer)
end
@doc """
Returns a human-readable string for the given numeric value.
## Arguments:
- `value` (Integer/Float/String) representing the filesize to be converted.
- `options` (Keyword list) representing the options to determine base, rounding and units.
## Options
- `bits`: `true` if the result should be in bits, `false` if in bytes. Defaults to `false`.
- `spacer`: the string that should be between the number and the unit. Defaults to `" "`.
- `round`: the precision that the number should be rounded down to. Defaults to `2`.
- `base`: the base for exponent calculation. `2` for binary-based numbers, any other Integer can be used. Defaults to `2`.
- `output`: the output format to be used, possible options are :string, :list, :map. Defaults to :string.
## Example - Get bit-sized file size for 1024 byte
Sizeable.filesize(1024, bits: true)
"8 Kb"
"""
def filesize(value, options) when (is_float(value) and is_list(options)) do
bits = Keyword.get(options, :bits, false)
base = Keyword.get(options, :base, 2)
spacer = Keyword.get(options, :spacer, " ")
round = Keyword.get(options, :round, 2)
output = Keyword.get(options, :output, :string)
ceil = if base > 2 do 1000 else 1024 end
neg = value < 0
value = case neg do
true -> -value
false -> value
end
value = if bits do 8 * value else value end
{exponent, _rem} = :math.log(value)/:math.log(ceil)
|> Float.floor
|> Float.to_string
|> Integer.parse
result = Float.round(value / :math.pow(ceil, exponent), base)
result = if Float.floor(result) == result do
round result
else
Float.round(result, round)
end
{:ok, unit} = case bits do
true -> Enum.fetch(@bits, exponent)
false -> Enum.fetch(@bytes, exponent)
end
result = case neg do
true -> result * -1
false -> result
end
filesize_output(output, result, unit, spacer)
end
def filesize(_value, options) when is_list(options) do
raise "Invalid Value"
end
def filesize(_value, _options) do
raise "Invalid Options Argument"
end
def filesize_output(output, result, unit, spacer) do
case output do
:string -> Enum.join([result, unit], spacer)
:list -> [result, unit]
:map -> %{result: result, unit: unit}
_ -> raise "Invalid `#{output}` output value, possible options are :string, :list, :map"
end
end
end
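# A few usage sketches (outputs follow from the defaults documented above):
#
#     Sizeable.filesize(1024)                 #=> "1 KB"
#     Sizeable.filesize(1024, bits: true)     #=> "8 Kb"
#     Sizeable.filesize(1_500_000, round: 1)  #=> "1.4 MB"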
# Source: lib/sizeable.ex
defmodule Membrane.Element.RawVideo.Parser do
@moduledoc """
Simple module responsible for splitting incoming buffers into
frames of raw (uncompressed) video in the desired format.
The parser sends proper caps when it moves to the playing state.
No data analysis is done, this element simply ensures that
the resulting packets have proper size.
"""
use Membrane.Filter
alias Membrane.{Buffer, Payload}
alias Membrane.Caps.Video.Raw
def_input_pad :input, demand_unit: :bytes, caps: :any
def_output_pad :output, caps: {Raw, aligned: true}
def_options format: [
type: :atom,
spec: Raw.format_t(),
description: """
Format used to encode pixels of the video frame.
"""
],
width: [
type: :int,
description: """
Width of a frame in pixels.
"""
],
height: [
type: :int,
description: """
Height of a frame in pixels.
"""
],
framerate: [
type: :tuple,
spec: Raw.framerate_t(),
default: {0, 1},
description: """
Framerate of video stream. Passed forward in caps.
"""
]
@impl true
def handle_init(opts) do
with {:ok, frame_size} <- Raw.frame_size(opts.format, opts.width, opts.height) do
caps = %Raw{
format: opts.format,
width: opts.width,
height: opts.height,
framerate: opts.framerate,
aligned: true
}
{num, denom} = caps.framerate
frame_duration = if num == 0, do: 0, else: Ratio.new(denom * Membrane.Time.second(), num)
{:ok,
%{
caps: caps,
timestamp: 0,
frame_duration: frame_duration,
frame_size: frame_size,
queue: <<>>
}}
end
end
@impl true
def handle_prepared_to_playing(_ctx, state) do
{{:ok, caps: {:output, state.caps}}, state}
end
@impl true
def handle_demand(:output, bufs, :buffers, _ctx, state) do
{{:ok, demand: {:input, bufs * state.frame_size}}, state}
end
def handle_demand(:output, size, :bytes, _ctx, state) do
{{:ok, demand: {:input, size}}, state}
end
@impl true
def handle_caps(:input, caps, _ctx, state) do
# Do not forward caps
{num, denom} = caps.framerate
frame_duration = if num == 0, do: 0, else: Ratio.new(denom * Membrane.Time.second(), num)
{:ok, %{state | frame_duration: frame_duration}}
end
@impl true
def handle_process(:input, %Buffer{metadata: metadata, payload: raw_payload}, _ctx, state) do
%{frame_size: frame_size} = state
payload = state.queue <> Payload.to_binary(raw_payload)
size = byte_size(payload)
if size < frame_size do
{:ok, %{state | queue: payload}}
else
if Map.has_key?(metadata, :timestamp),
do: raise("Buffer shouldn't contain timestamp in the metadata.")
{bufs, tail} = split_into_buffers(payload, frame_size)
{bufs, state} =
Enum.map_reduce(bufs, state, fn buffer, state_acc ->
{%Buffer{buffer | metadata: %{pts: state_acc.timestamp}}, bump_timestamp(state_acc)}
end)
{{:ok, buffer: {:output, bufs}}, %{state | queue: tail}}
end
end
@impl true
def handle_prepared_to_stopped(_ctx, state) do
{:ok, %{state | queue: <<>>}}
end
defp bump_timestamp(%{caps: %{framerate: {0, _}}} = state) do
state
end
defp bump_timestamp(state) do
use Ratio
%{timestamp: timestamp, frame_duration: frame_duration} = state
timestamp = timestamp + frame_duration
%{state | timestamp: timestamp}
end
defp split_into_buffers(data, frame_size, acc \\ [])
defp split_into_buffers(data, frame_size, acc) when byte_size(data) < frame_size do
{acc |> Enum.reverse(), data}
end
defp split_into_buffers(data, frame_size, acc) do
<<frame::bytes-size(frame_size), tail::binary>> = data
split_into_buffers(tail, frame_size, [%Buffer{payload: frame} | acc])
end
end
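# A quick sizing sketch (hedged: exact byte counts come from
# Membrane.Caps.Video.Raw.frame_size/3): an I420 frame at 1280x720 carries
# 12 bits per pixel, so frame_size = 1280 * 720 * 3 / 2 = 1_382_400 bytes,
# and handle_process/4 emits one buffer per such chunk, queueing any remainder.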
# Source: lib/membrane_element_rawvideo/parser.ex
defmodule Daguex.Image do
@type key :: String.t
@type variant_t :: %{identifier: identifier, width: integer, height: integer, type: Daguex.ImageFile.type}
@type format :: String.t
@type t :: %__MODULE__{
key: key,
width: integer,
height: integer,
type: String.t,
variants: %{format => variant_t},
variants_mod: %{format => variant_t | :removed},
data: Map.t,
data_mod: Map.t
}
@enforce_keys [:key, :width, :height, :type]
defstruct [
key: nil,
width: 0,
height: 0,
type: nil,
variants: %{},
variants_mod: %{},
data: %{},
data_mod: %{}
]
def from_image_file(%Daguex.ImageFile{} = image_file, key) do
%__MODULE__{key: key, width: image_file.width, height: image_file.height, type: image_file.type}
end
@spec add_variant(t, format, key, integer, integer, Daguex.ImageFile.type) :: t
def add_variant(image = %__MODULE__{variants_mod: variants_mod}, format, key, width, height, type) do
%{image | variants_mod: Map.put(variants_mod, format, build_variant(key, width, height, type))}
end
defp build_variant(key, width, height, type) do
%{"key" => key, "width" => width, "height" => height, "type" => type}
end
def rm_variant(image = %__MODULE__{variants_mod: variants_mod}, format) do
%{image | variants_mod: Map.put(variants_mod, format, :removed)}
end
def get_variant(%__MODULE__{variants: variants, variants_mod: variants_mod}, format) do
case Map.get(variants_mod, format) do
:removed -> nil
nil -> Map.get(variants, format)
variant -> variant
end
end
def has_variant?(%__MODULE__{variants: variants, variants_mod: variants_mod}, format) do
case Map.get(variants_mod, format) do
:removed -> nil
nil -> Map.has_key?(variants, format)
_ -> true
end
end
def apply_variants_mod(image_or_mod, target_image \\ nil)
def apply_variants_mod(%__MODULE__{variants: variants, variants_mod: variants_mod} = image, nil) do
%{image | variants: do_apply_variants_mod(variants_mod, variants), variants_mod: %{}}
end
def apply_variants_mod(mod, %__MODULE__{variants: variants} = image) do
%{image | variants: do_apply_variants_mod(mod, variants), variants_mod: %{}}
end
defp do_apply_variants_mod(mod, variants) do
Enum.reduce(mod, variants, fn
{key, :removed}, variants -> Map.delete(variants, key)
{key, value}, variants -> Map.put(variants, key, value)
end)
end
def variants(%__MODULE__{variants: variants, variants_mod: variants_mod}) do
do_apply_variants_mod(variants_mod, variants)
end
def variants_with_origal(image) do
variants(image) |> Map.put("orig", build_variant(image.key, image.width, image.height, image.type))
end
def put_data(image = %__MODULE__{data_mod: data_mod}, keys, value) do
%{image | data_mod: do_put_data(data_mod, keys, value)}
end
def rm_data(image, keys) do
put_data(image, keys, :removed)
end
def get_data(%__MODULE__{data: data, data_mod: data_mod}, keys, default_value \\ nil) do
case do_get_data(data_mod, keys) do
:removed -> default_value
nil -> do_get_data(data, keys)
value -> value
end
end
def apply_data_mod(image_or_mod, target_image \\ nil)
def apply_data_mod(%__MODULE__{data_mod: data_mod} = image, nil) do
apply_data_mod(data_mod, image)
end
def apply_data_mod(mod, %__MODULE__{data: data} = image) do
%{image| data: do_apply_mod(mod, data), data_mod: %{}}
end
defp do_put_data(data, [key|tail], value) do
data = case data do
map when is_map(map) -> data
_ -> %{}
end
Map.put(data, key, do_put_data(Map.get(data, key), tail, value))
end
defp do_put_data(_data, [], value), do: value
defp do_get_data(:removed, _), do: :removed
defp do_get_data(nil, _), do: nil
defp do_get_data(data, [key|tail]) do
case data do
map when is_map(map) -> do_get_data(Map.get(data, key), tail)
_ -> nil
end
end
defp do_get_data(data, []), do: data
defp do_apply_mod(nil, data), do: data
defp do_apply_mod(mod, nil), do: do_apply_mod(mod, %{})
defp do_apply_mod(mod, data) when is_map(mod) do
Enum.reduce(mod, data, fn
{key, :removed}, data -> Map.delete(data, key)
{key, map}, data when is_map(map) -> Map.put(data, key, do_apply_mod(map, Map.get(data, key)))
{key, value}, data -> Map.put(data, key, value)
end)
end
end
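# A minimal sketch of the staged-modification flow (changes sit in
# variants_mod until apply_variants_mod/1 merges them into variants):
#
#     image = %Daguex.Image{key: "k", width: 800, height: 600, type: "png"}
#     image = Daguex.Image.add_variant(image, "thumb", "k_thumb", 80, 60, "png")
#     Daguex.Image.get_variant(image, "thumb")
#     #=> %{"key" => "k_thumb", "width" => 80, "height" => 60, "type" => "png"}
#     image = Daguex.Image.apply_variants_mod(image)
#     image.variants
#     #=> %{"thumb" => %{"key" => "k_thumb", "width" => 80, "height" => 60, "type" => "png"}}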
# Source: lib/daguex/image.ex
defmodule Geo.PostGIS.Extension do
@moduledoc """
PostGIS extension for Postgrex. Supports Geometry and Geography data types.
## Examples
Create a new Postgrex Types module:
Postgrex.Types.define(MyApp.PostgresTypes, [Geo.PostGIS.Extension], [])
If using with Ecto, you may want something like this instead:
Postgrex.Types.define(MyApp.PostgresTypes,
[Geo.PostGIS.Extension] ++ Ecto.Adapters.Postgres.extensions(),
json: Poison)
opts = [hostname: "localhost", username: "postgres", database: "geo_postgrex_test",
types: MyApp.PostgresTypes ]
[hostname: "localhost", username: "postgres", database: "geo_postgrex_test",
types: MyApp.PostgresTypes]
{:ok, pid} = Postgrex.Connection.start_link(opts)
{:ok, #PID<0.115.0>}
geo = %Geo.Point{coordinates: {30, -90}, srid: 4326}
%Geo.Point{coordinates: {30, -90}, srid: 4326}
{:ok, _} = Postgrex.Connection.query(pid, "CREATE TABLE point_test (id int, geom geometry(Point, 4326))")
{:ok, %Postgrex.Result{columns: nil, command: :create_table, num_rows: 0, rows: nil}}
{:ok, _} = Postgrex.Connection.query(pid, "INSERT INTO point_test VALUES ($1, $2)", [42, geo])
{:ok, %Postgrex.Result{columns: nil, command: :insert, num_rows: 1, rows: nil}}
Postgrex.Connection.query(pid, "SELECT * FROM point_test")
{:ok, %Postgrex.Result{columns: ["id", "geom"], command: :select, num_rows: 1,
rows: [{42, %Geo.Point{coordinates: {30.0, -90.0}, srid: 4326}}]}}
"""
@behaviour Postgrex.Extension
@geo_types [
Geo.GeometryCollection,
Geo.LineString,
Geo.LineStringZ,
Geo.MultiLineString,
Geo.MultiLineStringZ,
Geo.MultiPoint,
Geo.MultiPointZ,
Geo.MultiPolygon,
Geo.MultiPolygonZ,
Geo.Point,
Geo.PointZ,
Geo.PointM,
Geo.PointZM,
Geo.Polygon,
Geo.PolygonZ
]
def init(opts) do
Keyword.get(opts, :decode_copy, :copy)
end
def matching(_) do
[type: "geometry", type: "geography"]
end
def format(_) do
:binary
end
def encode(_opts) do
quote location: :keep do
%x{} = geom when x in unquote(@geo_types) ->
data = Geo.WKB.encode_to_iodata(geom)
[<<IO.iodata_length(data)::int32>> | data]
end
end
def decode(:reference) do
quote location: :keep do
<<len::int32, wkb::binary-size(len)>> ->
Geo.WKB.decode!(wkb)
end
end
def decode(:copy) do
quote location: :keep do
<<len::int32, wkb::binary-size(len)>> ->
Geo.WKB.decode!(:binary.copy(wkb))
end
end
end
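# Wire-format sketch: each geometry travels as <<byte_length::int32>> followed
# by the WKB bytes. :copy (the default returned by init/1) copies the WKB out
# of the larger result binary via :binary.copy/1 so it can be garbage-collected
# independently; :reference keeps a sub-binary pointing into the original.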
# Source: lib/geo_postgis/extension.ex
defmodule Nestru do
@moduledoc "README.md"
|> File.read!()
|> String.split("[//]: # (Documentation)\n")
|> Enum.at(1)
|> String.trim("\n")
@doc """
Creates a nested struct from the given map.
The first argument is a map having key-value pairs. Supports both string
and atom keys in the map.
The second argument is a struct's module atom.
The third argument is a context value to be passed to implemented
functions of `Nestru.PreDecoder` and `Nestru.Decoder` protocols.
To give a hint on how to decode nested struct values or a list of such values
for the given field, implement `Nestru.Decoder` protocol for the struct.
The function calls `struct/2` to build the struct's value.
Keys in the map that don't exist in the struct are automatically discarded.
"""
def from_map(map, struct_module, context \\ [])
def from_map(%{} = map, struct_module, context) do
case prepare_map(:warn, map, struct_module, context) do
{:ok, nil} ->
{:ok, nil}
{:ok, map} ->
{:ok, struct(struct_module, map)}
{:error, %{} = map} ->
{:error, format_paths(map)}
{:invalid_hint_shape, %{message: {struct_module, value}} = error_map} ->
{:error, %{error_map | message: invalid_hint_shape(struct_module, value)}}
{:invalid_gather_fields_shape, struct_module, value} ->
{:error, %{message: invalid_gather_fields_shape(struct_module, value)}}
{:unexpected_item_value, key, value} ->
{:error, %{message: invalid_item_value(struct_module, key, value)}}
{:unexpected_item_function_return, key, fun, value} ->
{:error, %{message: invalid_item_function_return_value(struct_module, key, fun, value)}}
{:unexpected_atom_for_item_with_list, key, value} ->
{:error, %{message: invalid_atom_for_item_with_list(struct_module, key, value)}}
end
end
def from_map(map, _struct_module, _context) do
map
end
@doc """
Similar to `from_map/3` but checks that the struct's enforced field keys exist
in the given map.
Returns a struct or raises an error.
"""
def from_map!(map, struct_module, context \\ [])
def from_map!(%{} = map, struct_module, context) do
case prepare_map(:raise, map, struct_module, context) do
{:ok, nil} ->
nil
{:ok, map} ->
struct!(struct_module, map)
{:error, %{} = error_map} ->
raise format_raise_message("map", error_map)
{:invalid_hint_shape, %{message: {struct_module, value}}} ->
raise invalid_hint_shape(struct_module, value)
{:invalid_gather_fields_shape, struct_module, value} ->
raise invalid_gather_fields_shape(struct_module, value)
{:unexpected_item_value, key, value} ->
raise invalid_item_value(struct_module, key, value)
{:unexpected_item_function_return, key, fun, value} ->
raise invalid_item_function_return_value(struct_module, key, fun, value)
{:unexpected_atom_for_item_with_list, key, value} ->
raise invalid_atom_for_item_with_list(struct_module, key, value)
end
end
def from_map!(map, struct_module, _context) do
raise """
Can't shape #{inspect(struct_module)} because the given value \
is not a map but #{inspect(map)}.\
"""
end
defp prepare_map(error_mode, map, struct_module, context) do
struct_value = struct_module.__struct__()
struct_info = {struct_value, struct_module}
with {:ok, map} <- gather_fields_map(struct_info, map, context),
{:ok, decode_hint} <- get_decode_hint(struct_info, map, context),
{:ok, _shaped_fields} = ok <- shape_fields(error_mode, struct_info, decode_hint, map) do
ok
end
end
defp gather_fields_map(struct_info, map, context) do
{struct_value, struct_module} = struct_info
struct_value
|> Nestru.PreDecoder.gather_fields_map(context, map)
|> validate_fields_map(struct_module)
end
defp validate_fields_map({:ok, %{}} = ok, _struct_module),
do: ok
defp validate_fields_map({:error, message}, _struct_module),
do: {:error, %{message: message}}
defp validate_fields_map(value, struct_module),
do: {:invalid_gather_fields_shape, struct_module, value}
defp get_decode_hint(struct_info, map, context) do
{struct_value, struct_module} = struct_info
struct_value
|> Nestru.Decoder.from_map_hint(context, map)
|> validate_hint(struct_module)
end
defp validate_hint({:ok, hint} = ok, _struct_module) when is_nil(hint) or is_map(hint),
do: ok
defp validate_hint({:error, %{message: _}} = error, _struct_module),
do: error
defp validate_hint({:error, message}, _struct_module),
do: {:error, %{message: message}}
defp validate_hint(value, struct_module),
do: {:invalid_hint_shape, %{message: {struct_module, value}}}
defp shape_fields(_error_mode, _struct_info, nil = _decode_hint, _map) do
{:ok, nil}
end
defp shape_fields(error_mode, struct_info, decode_hint, map) do
{struct_value, struct_module} = struct_info
struct_keys = struct_value |> Map.keys() |> List.delete(:__struct__)
inform_unknown_keys(error_mode, decode_hint, struct_module, struct_keys)
decode_hint = Map.take(decode_hint, struct_keys)
kvi = decode_hint |> :maps.iterator() |> :maps.next()
with {:ok, acc} <- shape_fields_recursively(error_mode, kvi, map) do
as_is_keys = struct_keys -- Map.keys(decode_hint)
fields =
Enum.reduce(as_is_keys, %{}, fn key, taken_map ->
if has_key?(map, key) do
value = get(map, key)
Map.put(taken_map, key, value)
else
taken_map
end
end)
{:ok, Map.merge(fields, acc)}
end
end
defp inform_unknown_keys(error_mode, map, struct_module, struct_keys) do
if extra_key = List.first(Map.keys(map) -- struct_keys) do
message = """
The decoding hint value for key #{inspect(extra_key)} received from Nestru.Decoder.from_map_hint/3 \
implemented for #{inspect(struct_module)} is unexpected because the struct doesn't have a field with that key name.\
"""
if error_mode == :raise do
raise message
else
IO.warn(message)
end
end
:ok
end
defp shape_fields_recursively(error_mode, kvi, map, acc \\ %{})
defp shape_fields_recursively(_error_mode, :none = _kvi, _map, target_map) do
{:ok, target_map}
end
defp shape_fields_recursively(error_mode, {key, fun, iterator}, map, target_map)
when is_function(fun) do
map_value = get(map, key)
case fun.(map_value) do
{:ok, updated_value} ->
target_map = Map.put(target_map, key, updated_value)
shape_fields_recursively(error_mode, :maps.next(iterator), map, target_map)
{:error, %{message: _, path: path} = error_map} = error ->
validate_path!(path, error, fun)
{:error, insert_to_path(error_map, key, map)}
{:error, message} ->
{:error, insert_to_path(%{message: message}, key, map)}
value ->
{:unexpected_item_function_return, key, fun, value}
end
end
defp shape_fields_recursively(error_mode, {key, module, iterator}, map, target_map)
when is_atom(module) do
if function_exported?(module, :__struct__, 0) do
result =
case get(map, key) do
[_ | _] ->
{:unexpected_atom_for_item_with_list, key, module}
nil ->
{:ok, nil}
map_value ->
shape_nested_struct(error_mode, map, key, map_value, module)
end
case result do
{:ok, shaped_value} ->
target_map = Map.put(target_map, key, shaped_value)
shape_fields_recursively(error_mode, :maps.next(iterator), map, target_map)
error ->
error
end
else
{:unexpected_item_value, key, module}
end
end
defp shape_fields_recursively(_error_mode, kvi, _map, _acc) do
{key, value, _iterator} = kvi
{:unexpected_item_value, key, value}
end
defp shape_nested_struct(error_mode, map, key, map_value, module) do
shaped_value =
if error_mode == :raise do
from_map!(map_value, module)
else
from_map(map_value, module)
end
case shaped_value do
struct when error_mode == :raise ->
{:ok, struct}
{:ok, _struct} = ok ->
ok
{:error, error_map} ->
{:error, insert_to_path(error_map, key, map)}
end
end
defp validate_path!(path, error, fun) do
unless Enum.all?(path, &(not is_nil(&1) and (is_atom(&1) or is_binary(&1) or is_number(&1)))) do
raise """
Error path can contain only not nil atoms, binaries or integers. \
Error is #{inspect(error)}, received from function #{inspect(fun)}.\
"""
end
end
defp insert_to_path(error_map, key, map_value) do
key = resolve_key(map_value, key)
insert_to_path(error_map, key)
end
defp insert_to_path(error_map, key_or_idx) do
path =
Enum.concat([
List.wrap(key_or_idx),
Map.get(error_map, :path, [])
])
Map.put(error_map, :path, path)
end
defp resolve_key(map, key) do
existing_key(map, key) ||
(is_binary(key) && existing_key(map, String.to_existing_atom(key))) ||
(is_atom(key) && existing_key(map, to_string(key))) || key
end
defp existing_key(map, key) do
if Map.has_key?(map, key), do: key
end
defp invalid_gather_fields_shape(struct_module, value) do
"""
Expected a {:ok, map} | {:error, term} value from Nestru.PreDecoder.gather_fields_map/3 \
function implemented for #{inspect(struct_module)}, received #{inspect(value)} instead.\
"""
end
defp invalid_hint_shape(struct_module, value) do
"""
Expected a {:ok, nil | map} | {:error, term} value from Nestru.Decoder.from_map_hint/3 \
function implemented for #{inspect(struct_module)}, received #{inspect(value)} instead.\
"""
end
defp invalid_item_function_return_value(struct_module, key, fun, value) do
"""
Expected {:ok, term}, {:error, %{message: term, path: list}}, or {:error, term} \
return value from the anonymous function for the key defined in the following \
{:ok, %{#{inspect(key)} => #{inspect(fun)}}} tuple returned from Nestru.Decoder.from_map_hint/3 \
function implemented for #{inspect(struct_module)}, received #{inspect(value)} instead.\
"""
end
defp invalid_item_value(struct_module, key, value) do
"""
Expected a struct's module atom or a function value for #{inspect(key)} key received \
from Nestru.Decoder.from_map_hint/3 function implemented for #{inspect(struct_module)}, \
received #{inspect(value)} instead.\
"""
end
defp invalid_atom_for_item_with_list(struct_module, key, value) do
"""
Unexpected #{inspect(value)} value received for #{inspect(key)} key \
from Nestru.Decoder.from_map_hint/3 function implemented for #{inspect(struct_module)}. \
You can return &Nestru.from_list_of_maps(&1, #{inspect(value)}) as a hint \
for list decoding.\
"""
end
@doc """
Returns whether the given key exists in the given map as a binary or as an atom.
"""
def has_key?(map, key) when is_binary(key) do
Map.has_key?(map, key) or Map.has_key?(map, String.to_existing_atom(key))
end
def has_key?(map, key) when is_atom(key) do
Map.has_key?(map, key) or Map.has_key?(map, to_string(key))
end
@doc """
Gets the value for a specific key in `map`, looking the key up both as a binary and as an atom.
If the key is present in `map`, then its value is returned. Otherwise, `default` is returned.
If `default` is not provided, `nil` is used.
"""
def get(map, key, default \\ nil)
def get(map, key, default) when is_binary(key) do
Map.get(map, key, Map.get(map, String.to_existing_atom(key), default))
end
def get(map, key, default) when is_atom(key) do
Map.get(map, key, Map.get(map, to_string(key), default))
end
@doc """
Creates a map from the given nested struct.
Casts each field's value to a map recursively, whether it is a struct or
a list of structs.
To give a hint to the function of how to generate a map, implement
`Nestru.Encoder` protocol for the struct. That can be used to keep
additional type information for the field that can have a value of various
struct types.
"""
def to_map(struct) do
case cast_to_map(struct) do
{:invalid_hint_shape, %{message: {struct_module, value}} = error_map} ->
{:error, %{error_map | message: invalid_to_map_value_message(struct_module, value)}}
{:ok, _value} = ok ->
ok
{:error, map} ->
{:error, format_paths(map)}
end
end
@doc """
Similar to `to_map/1`.
Returns a map or raises an error.
"""
def to_map!(struct) do
case cast_to_map(struct) do
{:ok, map} ->
map
{:invalid_hint_shape, %{message: {struct_module, value}}} ->
raise invalid_to_map_value_message(struct_module, value)
{:error, error_map} ->
raise format_raise_message("struct", error_map)
end
end
defp cast_to_map(struct, kvi \\ nil, acc \\ {[], %{}})
defp cast_to_map(%module{} = struct, _kvi, {path, _target_map} = acc) do
case struct |> Nestru.Encoder.to_map() |> validate_hint(module) do
{:ok, map} -> cast_to_map(map, nil, acc)
{tag, %{} = map} -> {tag, Map.put(map, :path, path)}
end
end
defp cast_to_map([_ | _] = list, _kvi, {path, _target_map} = _acc) do
list
|> reduce_via_cast_to_map(path)
|> maybe_ok_reverse()
end
defp cast_to_map(value, _kvi, _acc) when not is_map(value) do
{:ok, value}
end
defp cast_to_map(map, nil, acc) do
kvi =
map
|> :maps.iterator()
|> :maps.next()
cast_to_map(map, kvi, acc)
end
defp cast_to_map(_map, :none, {_path, target_map} = _acc) do
{:ok, target_map}
end
defp cast_to_map(map, {key, value, iterator}, {path, target_map}) do
with {:ok, casted_value} <- cast_to_map(value, nil, {[key | path], %{}}) do
target_map = Map.put(target_map, key, casted_value)
kvi = :maps.next(iterator)
cast_to_map(map, kvi, {path, target_map})
end
end
defp reduce_via_cast_to_map(list, path) do
list
|> Enum.with_index()
|> Enum.reduce_while([], fn {item, idx}, acc ->
case cast_to_map(item, nil, {[], %{}}) do
{:ok, casted_item} ->
{:cont, [casted_item | acc]}
{:error, error_map} ->
keys_list =
path
|> Enum.reverse()
|> Enum.concat([idx])
{:halt, {:error, insert_to_path(error_map, keys_list)}}
end
end)
end
defp maybe_ok_reverse([_ | _] = list), do: {:ok, Enum.reverse(list)}
defp maybe_ok_reverse([]), do: {:ok, []}
defp maybe_ok_reverse({:error, _map} = error), do: error
defp invalid_to_map_value_message(struct_module, value) do
"""
Expected a {:ok, nil | map} | {:error, term} value from Nestru.Encoder.to_map/1 \
function implemented for #{inspect(struct_module)}, received #{inspect(value)} instead.\
"""
end
defp format_paths(map) do
keys =
map
|> Map.get(:path, [])
|> Enum.map(&to_access_fun/1)
Map.put(map, :get_in_keys, keys)
end
defp to_access_fun(key) when is_atom(key) or is_binary(key), do: Access.key!(key)
defp to_access_fun(key) when is_integer(key), do: Access.at!(key)
defp format_raise_message(object, map) do
keys =
map
|> Map.get(:path, [])
|> Enum.map_join(", ", &to_access_string/1)
"""
#{stringify(map.message)}
See details by calling get_in/2 with the #{object} and the following keys: [#{keys}]\
"""
end
defp to_access_string(key) when is_atom(key) or is_binary(key),
do: "Access.key!(#{inspect(key)})"
defp to_access_string(key) when is_integer(key),
do: "Access.at!(#{key})"
defp stringify(value) when is_binary(value), do: value
defp stringify(value), do: inspect(value)
@doc """
Creates a list of nested structs from the given list of maps.
The first argument is a list of maps.
If the second argument is a struct's module atom, then the function calls
the `from_map/3` on each input list item.
If the second argument is a list of struct module atoms, the function
calls the `from_map/3` function on each input list item with the module atom
taken at the same index of the second list.
In this case, both arguments should be of equal length.
The third argument is a context value to be passed to implemented
functions of `Nestru.PreDecoder` and `Nestru.Decoder` protocols.
The function returns a list of structs or the first error from `from_map/3`
function.
"""
def from_list_of_maps(list, struct_atoms, context \\ [])
def from_list_of_maps([_ | _] = list, struct_atoms, context) do
list
|> reduce_via_from_map(struct_atoms, context)
|> maybe_ok_reverse()
end
def from_list_of_maps(list, _struct_atoms, _context) do
{:ok, list}
end
@doc """
Similar to `from_list_of_maps/3` but checks that each struct's enforced field keys
exist in the given maps.
Returns a list of structs or raises an error.
"""
def from_list_of_maps!(list, struct_atoms, context \\ [])
def from_list_of_maps!([_ | _] = list, struct_atoms, context) do
case list |> reduce_via_from_map(struct_atoms, context) |> maybe_ok_reverse() do
{:ok, list} -> list
{:error, %{message: message}} -> raise message
end
end
def from_list_of_maps!(list, _struct_atoms, _context) do
list
end
defp reduce_via_from_map(list, [_ | _] = struct_atoms, context)
when length(list) == length(struct_atoms) do
list
|> Enum.with_index()
|> Enum.reduce_while([], fn {item, idx}, acc ->
struct_module = Enum.at(struct_atoms, idx)
case from_map(item, struct_module, context) do
{:ok, casted_item} ->
{:cont, [casted_item | acc]}
{:error, map} ->
{:halt, {:error, insert_to_path(map, idx)}}
end
end)
end
defp reduce_via_from_map(list, struct_atoms, context) when is_atom(struct_atoms) do
list
|> Enum.with_index()
|> Enum.reduce_while([], fn {item, idx}, acc ->
case from_map(item, struct_atoms, context) do
{:ok, casted_item} ->
{:cont, [casted_item | acc]}
{:error, map} ->
{:halt, {:error, insert_to_path(map, idx)}}
end
end)
end
defp reduce_via_from_map(list, struct_atoms, _context) do
{:error,
%{
message: """
The map's list length (#{length(list)}) is expected to be equal to \
the struct module atoms list length (#{length(struct_atoms)}).\
"""
}}
end
end
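# A minimal usage sketch (hedged: Order and Address are hypothetical structs,
# with Nestru.Decoder implemented for Order to hint the nested :address field):
#
#     map = %{"id" => 1, "address" => %{"city" => "London"}}
#     {:ok, order} = Nestru.from_map(map, Order)
#     {:ok, plain_map} = Nestru.to_map(order)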
# Source: lib/nestru.ex
defmodule ExUnit.Filters do
@moduledoc """
Conveniences for parsing and evaluating filters.
"""
@type t :: list({ atom, any } | atom)
@doc """
Normalizes the include and exclude filters to remove duplicates
and keep precedence.
## Examples
iex> ExUnit.Filters.normalize(nil, nil)
{ [], [] }
iex> ExUnit.Filters.normalize([:foo, :bar, :bar], [:foo, :baz])
{ [:foo, :bar], [:baz] }
"""
@spec normalize(t | nil, t | nil) :: { t, t }
def normalize(include, exclude) do
include = include |> List.wrap |> Enum.uniq
exclude = exclude |> List.wrap |> Enum.uniq |> Kernel.--(include)
{ include, exclude }
end
@doc """
Parses the given filters, as one would receive from the command line.
## Examples
iex> ExUnit.Filters.parse(["foo:bar", "baz"])
[{:foo, "bar"}, :baz]
"""
@spec parse([String.t]) :: t
def parse(filters) do
Enum.map filters, fn filter ->
case String.split(filter, ":", global: false) do
[key, value] -> { binary_to_atom(key), parse_value(value) }
[key] -> binary_to_atom(key)
end
end
end
defp parse_value("true"), do: true
defp parse_value("false"), do: false
defp parse_value(value), do: value
@doc """
Evaluates the include and exclude filters against the
given tags. Expects filters to be normalized into a list of
atoms or `{atom, value}` tuples (see `normalize/2`).
## Examples
iex> ExUnit.Filters.eval([foo: "bar"], [:foo], [foo: "bar"])
:ok
iex> ExUnit.Filters.eval([foo: "bar"], [:foo], [foo: "baz"])
{ :error, :foo }
"""
@spec eval(t, t, Keyword.t) :: :ok | { :error, atom }
def eval(include, exclude, tags) do
excluded = Enum.find_value exclude, &has_tag(&1, tags)
if !excluded or Enum.any?(include, &has_tag(&1, tags)) do
:ok
else
{ :error, excluded }
end
end
defp has_tag({ key, value }, tags) when is_atom(key),
do: Keyword.fetch(tags, key) == { :ok, value } and key
defp has_tag(key, tags) when is_atom(key),
do: Keyword.has_key?(tags, key) and key
end
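# Putting the pieces together (results follow from the doctests above):
#
#     {include, exclude} = ExUnit.Filters.normalize([:focus], [:slow])
#     ExUnit.Filters.eval(include, exclude, focus: true)  #=> :ok
#     ExUnit.Filters.eval(include, exclude, slow: true)   #=> {:error, :slow}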
# Source: lib/ex_unit/lib/ex_unit/filters.ex
defmodule Cluster.Strategy.Kubernetes.DNS do
@moduledoc """
This clustering strategy works by loading all your Erlang nodes (within Pods) in the current Kubernetes
namespace. It will fetch the addresses of all pods under a shared headless service and attempt to connect.
It will continually monitor and update its connections every 5s.
It assumes that all Erlang nodes were launched under a base name, are using longnames, and are unique
based on their FQDN, rather than the base hostname. In other words, in the following
longname, `<basename>@<ip>`, `basename` would be the value configured through
`application_name`.
An example configuration is below:
config :libcluster,
topologies: [
k8s_example: [
strategy: #{__MODULE__},
config: [
service: "myapp-headless",
application_name: "myapp",
polling_interval: 10_000]]]
"""
use GenServer
use Cluster.Strategy
import Cluster.Logger
alias Cluster.Strategy.State
@default_polling_interval 5_000
def start_link(args), do: GenServer.start_link(__MODULE__, args)
@impl true
def init([%State{meta: nil} = state]) do
init([%State{state | :meta => MapSet.new()}])
end
def init([%State{} = state]) do
{:ok, load(state), 0}
end
@impl true
def handle_info(:timeout, state) do
handle_info(:load, state)
end
def handle_info(:load, state) do
{:noreply, load(state)}
end
def handle_info(_, state) do
{:noreply, state}
end
defp load(%State{topology: topology, meta: meta} = state) do
new_nodelist = MapSet.new(get_nodes(state))
added = MapSet.difference(new_nodelist, meta)
removed = MapSet.difference(meta, new_nodelist)
new_nodelist =
case Cluster.Strategy.disconnect_nodes(
topology,
state.disconnect,
state.list_nodes,
MapSet.to_list(removed)
) do
:ok ->
new_nodelist
{:error, bad_nodes} ->
# Add back the nodes which should have been removed, but which couldn't be for some reason
Enum.reduce(bad_nodes, new_nodelist, fn {n, _}, acc ->
MapSet.put(acc, n)
end)
end
new_nodelist =
case Cluster.Strategy.connect_nodes(
topology,
state.connect,
state.list_nodes,
MapSet.to_list(added)
) do
:ok ->
new_nodelist
{:error, bad_nodes} ->
# Remove the nodes which should have been added, but couldn't be for some reason
Enum.reduce(bad_nodes, new_nodelist, fn {n, _}, acc ->
MapSet.delete(acc, n)
end)
end
Process.send_after(
self(),
:load,
polling_interval(state)
)
%State{state | :meta => new_nodelist}
end
@spec get_nodes(State.t()) :: [atom()]
defp get_nodes(%State{topology: topology, config: config}) do
app_name = Keyword.fetch!(config, :application_name)
service = Keyword.fetch!(config, :service)
resolver = Keyword.get(config, :resolver, &:inet_res.getbyname(&1, :a))
cond do
app_name != nil and service != nil ->
headless_service = to_charlist(service)
case resolver.(headless_service) do
{:ok, {:hostent, _fqdn, [], :inet, _value, addresses}} ->
parse_response(addresses, app_name)
{:error, reason} ->
error(topology, "lookup against #{service} failed: #{inspect(reason)}")
[]
end
app_name == nil ->
warn(
topology,
"kubernetes.DNS strategy is selected, but :application_name is not configured!"
)
[]
service == nil ->
warn(topology, "kubernetes strategy is selected, but :service is not configured!")
[]
:else ->
warn(topology, "kubernetes strategy is selected, but is not configured!")
[]
end
end
defp polling_interval(%State{config: config}) do
Keyword.get(config, :polling_interval, @default_polling_interval)
end
defp parse_response(addresses, app_name) do
addresses
|> Enum.map(&:inet_parse.ntoa(&1))
|> Enum.map(&"#{app_name}@#{&1}")
|> Enum.map(&String.to_atom(&1))
end
end
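# A resolution sketch (hedged: addresses are whatever the DNS A lookup on the
# headless service returns): with application_name "myapp" and a lookup that
# yields [{10, 0, 0, 7}], parse_response/2 produces [:"myapp@10.0.0.7"], and
# the strategy then asks the topology's connect function to link that node.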
# Source: lib/strategy/kubernetes_dns.ex
defmodule Data.Quest do
@moduledoc """
Quest schema
"""
use Data.Schema
alias Data.Script
alias Data.NPC
alias Data.QuestRelation
alias Data.QuestStep
schema "quests" do
field(:name, :string)
field(:description, :string)
field(:completed_message, :string)
field(:level, :integer)
field(:experience, :integer)
field(:currency, :integer, default: 0)
field(:script, {:array, Script.Line})
belongs_to(:giver, NPC)
has_many(:quest_steps, QuestStep)
has_many(:parent_relations, QuestRelation, foreign_key: :child_id)
has_many(:parents, through: [:parent_relations, :parent])
has_many(:child_relations, QuestRelation, foreign_key: :parent_id)
has_many(:children, through: [:child_relations, :child])
timestamps()
end
def changeset(struct, params) do
struct
|> cast(params, [
:name,
:description,
:completed_message,
:level,
:experience,
:currency,
:script,
:giver_id
])
|> validate_required([
:name,
:description,
:completed_message,
:level,
:experience,
:currency,
:script,
:giver_id
])
|> validate_giver_is_a_giver()
|> Script.validate_script()
|> validate_script()
|> foreign_key_constraint(:giver_id)
end
defp validate_giver_is_a_giver(changeset) do
case get_field(changeset, :giver_id) do
nil ->
changeset
giver_id ->
case Repo.get(NPC, giver_id) do
%{is_quest_giver: true} -> changeset
_ -> add_error(changeset, :giver_id, "must be marked as a quest giver")
end
end
end
defp validate_script(changeset) do
case get_field(changeset, :script) do
nil -> changeset
script -> _validate_script(changeset, script)
end
end
defp _validate_script(changeset, script) do
case Script.valid_for_quest?(script) do
true ->
changeset
false ->
add_error(
changeset,
:script,
"must include one conversation that has a trigger with quest"
)
end
end
end
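# A minimal changeset sketch (hedged: a valid quest also needs the remaining
# required fields and a giver_id pointing at an NPC with is_quest_giver: true):
#
#     Data.Quest.changeset(%Data.Quest{}, %{name: "Into the Caves", level: 1})
#     #=> an invalid changeset whose errors list the missing required fields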
# Source: lib/data/quest.ex
defmodule Ratatouille.Renderer.Box do
@moduledoc """
This defines the internal representation of a rectangular region---a box---for
rendering, as well as logic for transforming these boxes.
Boxes live on a coordinate plane. The y-axis is inverted so that the y values
increase as the box's height grows.
--------------> x
|
| ________
| | |
| |______|
v
y
A `Box` struct stores the coordinates for two corners of the box---the
top-left and bottom-right corners--from which the remaining attributes
(height, width, other corners) can be computed.
_________________
| |
| A |
| |
| |
| |
| B |
|_________________|
A: top-left corner, e.g. (0, 0)
B: bottom-right corner, e.g. (10, 10)
For rendering purposes, the outermost box will typically have a top-left
corner (0, 0) and a bottom-right corner (x, y) where x is the number of rows
and y is the number of columns on the terminal.
This outermost box can then be subdivided as necessary to render different
elements of the view.
"""
alias ExTermbox.Position
alias __MODULE__
@enforce_keys [:top_left, :bottom_right]
defstruct [:top_left, :bottom_right]
def translate(
%Box{
top_left: %Position{x: x1, y: y1},
bottom_right: %Position{x: x2, y: y2}
},
dx,
dy
) do
%Box{
top_left: %Position{x: x1 + dx, y: y1 + dy},
bottom_right: %Position{x: x2 + dx, y: y2 + dy}
}
end
def consume(
%Box{top_left: %Position{x: x1, y: y1}} = box,
dx,
dy
) do
%Box{
box
| top_left: %Position{x: x1 + dx, y: y1 + dy}
}
end
def padded(%Box{top_left: top_left, bottom_right: bottom_right}, [top: top, left: left, bottom: bottom, right: right]) do
%Box{
top_left: top_left |> Position.translate(left, top),
bottom_right: bottom_right |> Position.translate(-right, -bottom)
}
end
def positions(%Box{
top_left: %Position{x: x1, y: y1},
bottom_right: %Position{x: x2, y: y2}
}) do
for x <- x1..x2, y <- y1..y2, do: %Position{x: x, y: y}
end
@doc """
Given a box, returns a slice of the y axis with `n` rows from the top.
"""
def head(box, n) do
%Box{
box
| bottom_right: %Position{box.bottom_right | y: box.top_left.y + n - 1}
}
end
@doc """
Given a box, returns a slice of the y axis with `n` rows from the bottom.
"""
def tail(box, n) do
%Box{
box
| top_left: %Position{box.top_left | y: box.bottom_right.y - n + 1}
}
end
def top_left(%Box{top_left: top_left}), do: top_left
def top_right(%Box{top_left: %Position{y: y}, bottom_right: %Position{x: x}}),
do: %Position{x: x, y: y}
def bottom_left(%Box{top_left: %Position{x: x}, bottom_right: %Position{y: y}}),
do: %Position{x: x, y: y}
def bottom_right(%Box{bottom_right: bottom_right}), do: bottom_right
def width(%Box{top_left: %Position{x: x1}, bottom_right: %Position{x: x2}}),
do: x2 - x1 + 1
def height(%Box{top_left: %Position{y: y1}, bottom_right: %Position{y: y2}}),
do: y2 - y1 + 1
def contains?(
%Box{
top_left: %Position{x: x1, y: y1},
bottom_right: %Position{x: x2, y: y2}
},
%Position{x: x, y: y}
) do
x in x1..x2 && y in y1..y2
end
def from_dimensions(width, height, origin \\ %Position{x: 0, y: 0}) do
dx = width - 1
dy = height - 1
%Box{
top_left: origin,
bottom_right: %Position{x: origin.x + dx, y: origin.y + dy}
}
end
end
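# A worked sketch on a 10x4 box at the origin (Box = Ratatouille.Renderer.Box):
#
#     box = Box.from_dimensions(10, 4)
#     Box.width(box)   #=> 10
#     Box.height(box)  #=> 4
#     Box.head(box, 1) #=> the top row only: top_left (0, 0), bottom_right (9, 0)
#     Box.padded(box, top: 1, left: 1, bottom: 1, right: 1)
#     #=> top_left (1, 1), bottom_right (8, 2)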
# Source: lib/ratatouille/renderer/box.ex
defmodule AWS.Discovery do
@moduledoc """
AWS Application Discovery Service
AWS Application Discovery Service helps you plan application migration
projects by automatically identifying servers, virtual machines (VMs),
software, and software dependencies running in your on-premises data
centers. Application Discovery Service also collects application
performance data, which can help you assess the outcome of your migration.
The data collected by Application Discovery Service is securely retained in
an AWS-hosted and managed database in the cloud. You can export the data as
a CSV or XML file into your preferred visualization tool or cloud-migration
solution to plan your migration. For more information, see [AWS Application
Discovery Service FAQ](http://aws.amazon.com/application-discovery/faqs/).
Application Discovery Service offers two modes of operation:

* **Agentless discovery** mode is recommended for environments that
use VMware vCenter Server. This mode doesn't require you to install an
agent on each host. Agentless discovery gathers server information
regardless of the operating systems, which minimizes the time required for
initial on-premises infrastructure assessment. Agentless discovery doesn't
collect information about software and software dependencies. It also
doesn't work in non-VMware environments.

* **Agent-based discovery** mode collects a richer set of data
than agentless discovery by using the AWS Application Discovery Agent,
which you install on one or more hosts in your data center. The agent
captures infrastructure and application information, including an inventory
of installed software applications, system and process performance,
resource utilization, and network dependencies between workloads. The
information collected by agents is secured at rest and in transit to the
Application Discovery Service database in the cloud.

We recommend that you use agent-based discovery for non-VMware
environments and to collect information about software and software
dependencies. You can also run agent-based and agentless discovery
simultaneously. Use agentless discovery to quickly complete the initial
infrastructure assessment and then install agents on select hosts.
Application Discovery Service integrates with application discovery
solutions from AWS Partner Network (APN) partners. Third-party application
discovery tools can query Application Discovery Service and write to the
Application Discovery Service database using a public API. You can then
import the data into either a visualization tool or cloud-migration
solution.
> **Important:** Application Discovery Service doesn't gather sensitive
> information. All data is handled according to the [AWS Privacy
> Policy](http://aws.amazon.com/privacy/). You can operate Application
> Discovery Service offline to inspect collected data before it is shared
> with the service.

Your AWS account must be granted access to Application
Discovery Service, a process called *whitelisting*. This is true for AWS
partners and customers alike. To request access, [sign up for Application
Discovery Service](http://aws.amazon.com/application-discovery/).
This API reference provides descriptions, syntax, and usage examples for
each of the actions and data types for Application Discovery Service. The
topic for each action shows the API request parameters and the response.
Alternatively, you can use one of the AWS SDKs to access an API that is
tailored to the programming language or platform that you're using. For
more information, see [AWS SDKs](http://aws.amazon.com/tools/#SDKs).
This guide is intended for use with the [ *AWS Application Discovery
Service User Guide*
](http://docs.aws.amazon.com/application-discovery/latest/userguide/).
"""
@doc """
Associates one or more configuration items with an application.
"""
def associate_configuration_items_to_application(client, input, options \\ []) do
request(client, "AssociateConfigurationItemsToApplication", input, options)
end
@doc """
Creates an application with the given name and description.
"""
def create_application(client, input, options \\ []) do
request(client, "CreateApplication", input, options)
end
@doc """
Creates one or more tags for configuration items. Tags are metadata that
help you categorize IT assets. This API accepts a list of multiple
configuration items.
"""
def create_tags(client, input, options \\ []) do
request(client, "CreateTags", input, options)
end
@doc """
Deletes a list of applications and their associations with configuration
items.
"""
def delete_applications(client, input, options \\ []) do
request(client, "DeleteApplications", input, options)
end
@doc """
Deletes the association between configuration items and one or more tags.
This API accepts a list of multiple configuration items.
"""
def delete_tags(client, input, options \\ []) do
request(client, "DeleteTags", input, options)
end
@doc """
Lists agents or the Connector by ID or lists all agents/Connectors
associated with your user account if you did not specify an ID.
"""
def describe_agents(client, input, options \\ []) do
request(client, "DescribeAgents", input, options)
end
@doc """
Retrieves attributes for a list of configuration item IDs. All of the
supplied IDs must be for the same asset type (server, application, process,
or connection). Output fields are specific to the asset type selected. For
example, the output for a *server* configuration item includes a list of
attributes about the server, such as host name, operating system, and
number of network cards.
For a complete list of outputs for each asset type, see [Using the
DescribeConfigurations
Action](http://docs.aws.amazon.com/application-discovery/latest/APIReference/discovery-api-queries.html#DescribeConfigurations).
"""
def describe_configurations(client, input, options \\ []) do
request(client, "DescribeConfigurations", input, options)
end
@doc """
Deprecated. Use `DescribeExportTasks` instead.
Retrieves the status of a given export process. You can retrieve status
from a maximum of 100 processes.
"""
def describe_export_configurations(client, input, options \\ []) do
request(client, "DescribeExportConfigurations", input, options)
end
@doc """
Retrieve status of one or more export tasks. You can retrieve the status of
up to 100 export tasks.
"""
def describe_export_tasks(client, input, options \\ []) do
request(client, "DescribeExportTasks", input, options)
end
@doc """
Retrieves a list of configuration items that are tagged with a specific
tag. Or retrieves a list of all tags assigned to a specific configuration
item.
"""
def describe_tags(client, input, options \\ []) do
request(client, "DescribeTags", input, options)
end
@doc """
Disassociates one or more configuration items from an application.
"""
def disassociate_configuration_items_from_application(client, input, options \\ []) do
request(client, "DisassociateConfigurationItemsFromApplication", input, options)
end
@doc """
Deprecated. Use `StartExportTask` instead.
Exports all discovered configuration data to an Amazon S3 bucket or an
application that enables you to view and evaluate the data. Data includes
tags and tag associations, processes, connections, servers, and system
performance. This API returns an export ID that you can query using the
*DescribeExportConfigurations* API. The system imposes a limit of two
configuration exports in six hours.
"""
def export_configurations(client, input, options \\ []) do
request(client, "ExportConfigurations", input, options)
end
@doc """
Retrieves a short summary of discovered assets.
"""
def get_discovery_summary(client, input, options \\ []) do
request(client, "GetDiscoverySummary", input, options)
end
@doc """
Retrieves a list of configuration items according to criteria that you
specify in a filter. The filter criteria identifies the relationship
requirements.
"""
def list_configurations(client, input, options \\ []) do
request(client, "ListConfigurations", input, options)
end
@doc """
Retrieves a list of servers that are one network hop away from a specified
server.
"""
def list_server_neighbors(client, input, options \\ []) do
request(client, "ListServerNeighbors", input, options)
end
@doc """
Instructs the specified agents or connectors to start collecting data.
"""
def start_data_collection_by_agent_ids(client, input, options \\ []) do
request(client, "StartDataCollectionByAgentIds", input, options)
end
@doc """
Begins the export of discovered data to an S3 bucket.
If you specify `agentIds` in a filter, the task exports up to 72 hours of
detailed data collected by the identified Application Discovery Agent,
including network, process, and performance details. A time range for
exported agent data may be set by using `startTime` and `endTime`. Export
of detailed agent data is limited to five concurrently running exports.
If you do not include an `agentIds` filter, summary data is exported that
includes both AWS Agentless Discovery Connector data and summary data from
AWS Discovery Agents. Export of summary data is limited to two exports per
day.
"""
def start_export_task(client, input, options \\ []) do
request(client, "StartExportTask", input, options)
end
@doc """
Instructs the specified agents or connectors to stop collecting data.
"""
def stop_data_collection_by_agent_ids(client, input, options \\ []) do
request(client, "StopDataCollectionByAgentIds", input, options)
end
@doc """
Updates metadata about an application.
"""
def update_application(client, input, options \\ []) do
request(client, "UpdateApplication", input, options)
end
@spec request(map(), binary(), map(), list()) ::
{:ok, Poison.Parser.t | nil, Poison.Response.t} |
{:error, Poison.Parser.t} |
{:error, HTTPoison.Error.t}
defp request(client, action, input, options) do
client = %{client | service: "discovery"}
host = get_host("discovery", client)
url = get_url(host, client)
headers = [{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "AWSPoseidonService_V2015_11_01.#{action}"}]
payload = Poison.Encoder.encode(input, [])
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
case HTTPoison.post(url, payload, headers, options) do
{:ok, response=%HTTPoison.Response{status_code: 200, body: ""}} ->
{:ok, nil, response}
{:ok, response=%HTTPoison.Response{status_code: 200, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, _response=%HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body)
exception = error["__type"]
message = error["message"]
{:error, {exception, message}}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp get_host(endpoint_prefix, client) do
if client.region == "local" do
"localhost"
else
"#{endpoint_prefix}.#{client.region}.#{client.endpoint}"
end
end
defp get_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
end
# source: lib/aws/discovery.ex
defmodule Spinlock do
@moduledoc """
Documentation for Spinlock.
"""
@doc """
Initialize the circular buffer for the spinlock.
## Examples
iex> Spinlock.init
%{buffer: [0], position: 0, last_value: 0 }
"""
def init do
%{ buffer: [0], position: 0, last_value: 0 }
end
@doc """
Advance the spinlock one step, inserting the next value into the buffer.
## Examples
iex> Spinlock.step(Spinlock.init, 3)
%{ buffer: [0,1], position: 1, last_value: 1 }
"""
def step(state, width) do
# last_value + 1 == length(buffer)
new_position = rem(state.position + width, state.last_value + 1) + 1
new_value = state.last_value + 1
new_buffer = List.insert_at(state.buffer, new_position, new_value)
%{ buffer: new_buffer, position: new_position, last_value: new_value }
end
@doc """
Solve part I of the puzzle
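  Runs `step/2` `n` times with the given width. For example, two steps with
  width 3:
  ## Examples
      iex> Spinlock.step_n(Spinlock.init, 3, 2)
      %{ buffer: [0, 2, 1], position: 1, last_value: 2 }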
"""
def step_n(state, _width, 0) do
state
end
def step_n(state, width, n) do
step_n(step(state, width), width, n-1)
end
@doc """
Initialize the spinlock state for part II. No buffer is kept; only the value stored immediately after 0 is tracked.
## Examples
iex> Spinlock.init2
%{position: 0, last_value: 0, value_after_0: -1 }
"""
def init2 do
%{ position: 0, last_value: 0, value_after_0: -1 }
end
@doc """
Advance the spinlock one step (part II variant: tracks only the value after 0).
## Examples
iex> Spinlock.step2(Spinlock.init2, 3)
%{ position: 1, last_value: 1, value_after_0: 1 }
"""
def step2(state, width) do
# last_value + 1 == length(buffer)
new_position = rem(state.position + width, state.last_value + 1) + 1
new_value = state.last_value + 1
new_value_after_0 =
case new_position do
1 -> IO.puts("new value at pos 1: #{new_value}"); new_value
_ -> state.value_after_0
end
%{ position: new_position, last_value: new_value, value_after_0: new_value_after_0 }
end
@doc """
Solve part II of the puzzle
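  Runs `step2/2` `n` times. Note that `step2/2` also prints whenever the value
  after 0 changes, so this example writes to stdout as well:
  ## Examples
      iex> Spinlock.step_n2(Spinlock.init2, 3, 2)
      %{ position: 1, last_value: 2, value_after_0: 2 }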
"""
def step_n2(state, _width, 0) do
state
end
def step_n2(state, width, n) do
step_n2(step2(state, width), width, n-1)
end
end
# source: 2017/17-spinlock/lib/spinlock.ex
defmodule Prolly.BloomFilter do
require Vector
@moduledoc """
Use a Bloom filter when you want to keep track of whether
you have seen a given value or not.
For example, the question "have I seen the string `foo` so far in the stream?"
is a reasonable question for a Bloom filter.
Specifically, a Bloom filter can tell you two things:
1. When a value *may* be in a set.
2. When a value is definitely not in a set.
Carefully note that a Bloom filter can only tell you that a value
might be in a set or that a value is definitely not in a set.
It cannot tell you that a value is definitely in a set.
"""
@opaque t :: %__MODULE__{
filter: Vector.t,
hash_fns: list((String.t -> integer)),
m: pos_integer
}
defstruct [filter: nil, hash_fns: nil, m: 1]
@doc """
Create a Bloom filter.
iex> alias Prolly.BloomFilter
iex> BloomFilter.new(20,
...> [fn(value) -> :crypto.hash(:sha, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:md5, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:sha256, value) |> :crypto.bytes_to_integer() end]).filter
...> |> Enum.to_list
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
"""
@spec new(pos_integer, list((String.t -> integer))) :: t
def new(filter_size, hash_fns) when is_integer(filter_size) do
filter = Vector.new(Enum.map(1..filter_size, fn _ -> 0 end))
%__MODULE__{
filter: filter,
hash_fns: hash_fns,
m: filter_size
}
end
@doc """
Find the optimal number of hash functions for a given filter size and expected input size
## Examples
iex> alias Prolly.BloomFilter
iex> BloomFilter.optimal_number_of_hashes(10000, 1000)
7
"""
@spec optimal_number_of_hashes(pos_integer, pos_integer) :: pos_integer
def optimal_number_of_hashes(filter_size, input_size)
when is_integer(filter_size) and is_integer(input_size) and filter_size > 0 and input_size > 0 do
(filter_size / input_size) * :math.log(2) |> round
end
@doc """
Find the false positive rate for a given filter size, expected input size, and number of hash functions
## Examples
iex> alias Prolly.BloomFilter
iex> BloomFilter.false_positive_rate(10000, 3000, 3) |> (fn(n) -> :erlang.round(n * 100) / 100 end).()
0.21
"""
@spec false_positive_rate(pos_integer, pos_integer, pos_integer) :: float
def false_positive_rate(filter_size, input_size, number_of_hashes) do
:math.pow(1 - :math.exp(-number_of_hashes * input_size / filter_size), number_of_hashes)
end
@doc """
Test if something might be in a bloom filter
## Examples
iex> alias Prolly.BloomFilter
iex> bf = BloomFilter.new(20,
...> [fn(value) -> :crypto.hash(:sha, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:md5, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:sha256, value) |> :crypto.bytes_to_integer() end])
iex> bf = BloomFilter.update(bf, "hi")
iex> BloomFilter.possible_member?(bf, "hi")
true
iex> alias Prolly.BloomFilter
iex> bf = BloomFilter.new(20,
...> [fn(value) -> :crypto.hash(:sha, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:md5, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:sha256, value) |> :crypto.bytes_to_integer() end])
iex> bf = BloomFilter.update(bf, "hi")
iex> BloomFilter.possible_member?(bf, "this is not hi!")
false
iex> alias Prolly.BloomFilter
iex> bf = BloomFilter.new(20,
...> [fn(value) -> :crypto.hash(:sha, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:md5, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:sha256, value) |> :crypto.bytes_to_integer() end])
iex> bf = BloomFilter.update(bf, 7777777)
iex> BloomFilter.possible_member?(bf, 7777777)
true
"""
  @spec possible_member?(t, String.Chars.t()) :: boolean
def possible_member?(%__MODULE__{filter: filter, hash_fns: hash_fns, m: m}, value) when is_binary(value) do
Stream.take_while(hash_fns, fn(hash_fn) ->
filter[compute_index(hash_fn, value, m)] == 1
end)
|> Enum.count
|> (fn(ones) -> ones == Enum.count(hash_fns) end).()
end
def possible_member?(%__MODULE__{} = bloom_filter, value) do
possible_member?(bloom_filter, to_string(value))
end
@doc """
Add a value to a bloom filter
This operation runs in time proportional to the number
of hash functions.
## Examples
iex> alias Prolly.BloomFilter
iex> bf = BloomFilter.new(20,
...> [fn(value) -> :crypto.hash(:sha, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:md5, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:sha256, value) |> :crypto.bytes_to_integer() end])
iex> BloomFilter.update(bf, "hi").filter |> Enum.to_list
[0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
iex> alias Prolly.BloomFilter
iex> bf = BloomFilter.new(20,
...> [fn(value) -> :crypto.hash(:sha, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:md5, value) |> :crypto.bytes_to_integer() end,
...> fn(value) -> :crypto.hash(:sha256, value) |> :crypto.bytes_to_integer() end])
iex> BloomFilter.update(bf, 12345).filter |> Enum.to_list
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
"""
  @spec update(t, String.Chars.t()) :: t
def update(%__MODULE__{filter: filter, hash_fns: hash_fns, m: m} = bloom_filter, value) when is_binary(value) do
new_filter =
Enum.reduce(hash_fns, filter, fn(hash_fn, acc) ->
index = compute_index(hash_fn, value, m)
Vector.put(acc, index, 1)
end)
%{bloom_filter | filter: new_filter}
end
def update(%__MODULE__{} = bloom_filter, value) do
update(bloom_filter, to_string(value))
end
  @spec union(t, t) :: t
  def union(_bloom_filter1, _bloom_filter2) do
    raise UndefinedFunctionError
  end
  @spec intersection(t, t) :: t
  def intersection(_bloom_filter1, _bloom_filter2) do
    raise UndefinedFunctionError
  end
  defp compute_index(hash_fn, value, k) do
    rem(hash_fn.(value), k)
  end
end
# source: lib/prolly/bloom_filter.ex
defmodule Bigtable.Mutations do
@moduledoc """
Provides functions to build Bigtable mutations that are used when forming
row mutation requests.
"""
alias Google.Bigtable.V2.{MutateRowsRequest, Mutation, TimestampRange}
alias MutateRowsRequest.Entry
alias Mutation.{DeleteFromColumn, DeleteFromFamily, DeleteFromRow, SetCell}
@doc """
Builds a `Google.Bigtable.V2.MutateRowsRequest.Entry` for use with `Google.Bigtable.V2.MutateRowRequest` and `Google.Bigtable.V2.MutateRowsRequest`.
## Examples
iex> Bigtable.Mutations.build("Row#123")
%Google.Bigtable.V2.MutateRowsRequest.Entry{mutations: [], row_key: "Row#123"}
"""
@spec build(binary()) :: Entry.t()
def build(row_key) when is_binary(row_key) do
Entry.new(row_key: row_key)
end
@doc """
Creates a `Google.Bigtable.V2.Mutation.SetCell` given a `Google.Bigtable.V2.MutateRowsRequest.Entry`, family name, column qualifier, value, and timestamp in micros.
The provided timestamp corresponds to the timestamp of the cell into which new data should be written.
Use -1 for current Bigtable server time. Otherwise, the client should set this value itself, noting that the default value is a timestamp of zero if the field is left unspecified.
Values must match the granularity of the table (e.g. micros, millis)
## Examples
iex> Mutations.build("Row#123") |> Mutations.set_cell("family", "column", "value")
%Google.Bigtable.V2.MutateRowsRequest.Entry{
mutations: [
%Google.Bigtable.V2.Mutation{
mutation: {:set_cell,
%Google.Bigtable.V2.Mutation.SetCell{
column_qualifier: "column",
family_name: "family",
timestamp_micros: -1,
value: "value"
}}
}
],
row_key: "Row#123"
}
"""
@spec set_cell(Entry.t(), binary(), binary(), binary(), integer()) :: Entry.t()
def set_cell(%Entry{} = mutation, family, column, value, timestamp \\ -1)
when is_binary(family) and is_binary(column) and is_integer(timestamp) do
set_mutation =
SetCell.new(
family_name: family,
column_qualifier: column,
value: value,
timestamp_micros: timestamp
)
add_mutation(mutation, :set_cell, set_mutation)
end
@doc """
Creates a `Google.Bigtable.V2.Mutation.DeleteFromColumn` given a `Google.Bigtable.V2.MutateRowsRequest.Entry`, family name, column qualifier, and time range.
Time range is a keyword list with optional `:start` and `:end` keys whose values are timestamps in micros.
If `:start` is omitted it is treated as 0; if `:end` is omitted it is treated as infinity.
## Examples
iex> Mutations.build("Row#123") |> Mutations.delete_from_column("family", "column")
%Google.Bigtable.V2.MutateRowsRequest.Entry{
mutations: [
%Google.Bigtable.V2.Mutation{
mutation: {:delete_from_column,
%Google.Bigtable.V2.Mutation.DeleteFromColumn{
column_qualifier: "column",
family_name: "family",
time_range: %Google.Bigtable.V2.TimestampRange{
end_timestamp_micros: 0,
start_timestamp_micros: 0
}
}}
}
],
row_key: "Row#123"
}
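  To delete only cells within an explicit range, pass the range keys (values
  in micros; shown as a sketch rather than a doctest):

      Mutations.build("Row#123")
      |> Mutations.delete_from_column("family", "column", start: 1_000, end: 2_000)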
"""
@spec delete_from_column(Entry.t(), binary(), binary(), Keyword.t()) :: Entry.t()
def delete_from_column(%Entry{} = mutation_struct, family, column, time_range \\ [])
when is_binary(family) and is_binary(column) do
time_range = create_time_range(time_range)
mutation =
DeleteFromColumn.new(
family_name: family,
column_qualifier: column,
time_range: time_range
)
add_mutation(mutation_struct, :delete_from_column, mutation)
end
@doc """
Creates a `Google.Bigtable.V2.Mutation.DeleteFromFamily` given a `Google.Bigtable.V2.MutateRowsRequest.Entry` and family name.
## Examples
iex> Mutations.build("Row#123") |> Mutations.delete_from_family("family")
%Google.Bigtable.V2.MutateRowsRequest.Entry{
mutations: [
%Google.Bigtable.V2.Mutation{
mutation: {:delete_from_family,
%Google.Bigtable.V2.Mutation.DeleteFromFamily{family_name: "family"}}
}
],
row_key: "Row#123"
}
"""
@spec delete_from_family(Entry.t(), binary()) :: Entry.t()
def delete_from_family(%Entry{} = mutation_struct, family) when is_binary(family) do
mutation = DeleteFromFamily.new(family_name: family)
add_mutation(mutation_struct, :delete_from_family, mutation)
end
@doc """
Creates a `Google.Bigtable.V2.Mutation.DeleteFromRow` given a `Google.Bigtable.V2.MutateRowsRequest.Entry`.
## Examples
iex> Mutations.build("Row#123") |> Mutations.delete_from_row()
%Google.Bigtable.V2.MutateRowsRequest.Entry{
mutations: [
%Google.Bigtable.V2.Mutation{
mutation: {:delete_from_row, %Google.Bigtable.V2.Mutation.DeleteFromRow{}}
}
],
row_key: "Row#123"
}
"""
@spec delete_from_row(Entry.t()) :: Entry.t()
def delete_from_row(%Entry{} = mutation_struct) do
mutation = DeleteFromRow.new()
add_mutation(mutation_struct, :delete_from_row, mutation)
end
# Adds an additional V2.Mutation to the given mutation struct
@spec add_mutation(Entry.t(), atom(), Mutation.t()) :: Entry.t()
defp add_mutation(%Entry{} = mutation_struct, type, mutation) do
%{
mutation_struct
| mutations: mutation_struct.mutations ++ [Mutation.new(mutation: {type, mutation})]
}
end
# Creates a time range that can be used for column deletes
@spec create_time_range(Keyword.t()) :: TimestampRange.t()
defp create_time_range(time_range) do
start_timestamp_micros = Keyword.get(time_range, :start)
end_timestamp_micros = Keyword.get(time_range, :end)
time_range = TimestampRange.new()
time_range =
case start_timestamp_micros do
nil -> time_range
micros -> %{time_range | start_timestamp_micros: micros}
end
time_range =
case end_timestamp_micros do
nil -> time_range
micros -> %{time_range | end_timestamp_micros: micros}
end
time_range
end
end
# source: lib/data/mutations.ex
defmodule BN.FQP do
defstruct [:coef, :modulus_coef, :dim]
alias BN.FQ
  @type t :: %__MODULE__{
          coef: [FQ.t()],
          modulus_coef: [integer()],
          dim: pos_integer()
        }
@spec new([integer()], [integer()], keyword()) :: t() | no_return
def new(coef, modulus_coef, params \\ []) do
modulus = Keyword.get(params, :modulus, FQ.default_modulus())
coef_size = Enum.count(coef)
modulus_coef_size = Enum.count(modulus_coef)
if coef_size != modulus_coef_size,
do:
raise(ArgumentError,
message: "Coefficients and modulus coefficients have different dimensions"
)
fq_coef =
Enum.map(coef, fn coef_el ->
FQ.new(coef_el, modulus: modulus)
end)
%__MODULE__{
coef: fq_coef,
modulus_coef: modulus_coef,
dim: coef_size
}
end
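  # For example (a sketch): the multiplicative identity of FQ2, the quadratic
  # extension defined by the modulus polynomial x^2 + 1 (modulus_coef [1, 0]):
  #
  #     one = BN.FQP.new([1, 0], [1, 0])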
@spec add(t(), t()) :: t() | no_return
def add(
fqp1 = %__MODULE__{dim: dim1, modulus_coef: modulus_coef1},
fqp2 = %__MODULE__{dim: dim2, modulus_coef: modulus_coef2}
)
when dim1 == dim2 and modulus_coef1 == modulus_coef2 do
coef =
fqp1.coef
|> Enum.zip(fqp2.coef)
|> Enum.map(fn {coef1, coef2} ->
FQ.add(coef1, coef2)
end)
%__MODULE__{modulus_coef: modulus_coef1, dim: dim1, coef: coef}
end
def add(_, _), do: raise(ArgumentError, message: "Can't add elements of different fields")
@spec sub(t(), t()) :: t() | no_return
def sub(
fqp1 = %__MODULE__{dim: dim1, modulus_coef: modulus_coef1},
fqp2 = %__MODULE__{dim: dim2, modulus_coef: modulus_coef2}
)
when dim1 == dim2 and modulus_coef1 == modulus_coef2 do
coef =
fqp1.coef
|> Enum.zip(fqp2.coef)
|> Enum.map(fn {coef1, coef2} ->
FQ.sub(coef1, coef2)
end)
%__MODULE__{modulus_coef: modulus_coef1, dim: dim1, coef: coef}
end
  def sub(_, _), do: raise(ArgumentError, message: "Can't subtract elements of different fields")
@spec mult(t(), t() | FQ.t() | integer()) :: t() | no_return
def mult(
fqp = %__MODULE__{dim: dim, modulus_coef: modulus_coef},
fq = %FQ{}
) do
coef =
Enum.map(fqp.coef, fn coef ->
FQ.mult(coef, fq)
end)
%__MODULE__{modulus_coef: modulus_coef, dim: dim, coef: coef}
end
def mult(
fqp = %__MODULE__{dim: dim, modulus_coef: modulus_coef},
number
)
when is_integer(number) do
coef =
Enum.map(fqp.coef, fn coef ->
FQ.mult(coef, number)
end)
%__MODULE__{modulus_coef: modulus_coef, dim: dim, coef: coef}
end
def mult(
fqp1 = %__MODULE__{dim: dim1, modulus_coef: modulus_coef1},
fqp2 = %__MODULE__{dim: dim2, modulus_coef: modulus_coef2}
)
when dim1 == dim2 and modulus_coef1 == modulus_coef2 do
pol_coef = List.duplicate(FQ.new(0), dim1 * 2 - 1)
intermediate_result =
Enum.reduce(0..(dim1 - 1), pol_coef, fn i, acc1 ->
Enum.reduce(0..(dim1 - 1), acc1, fn j, acc2 ->
cur_acc = Enum.at(acc2, i + j)
summand = FQ.mult(Enum.at(fqp1.coef, i), Enum.at(fqp2.coef, j))
List.replace_at(acc2, i + j, FQ.add(cur_acc, summand))
end)
end)
coef =
mult_modulus_coef(
Enum.reverse(intermediate_result),
modulus_coef1,
dim1
)
%__MODULE__{modulus_coef: modulus_coef1, dim: dim1, coef: coef}
end
def mult(_, _), do: raise(ArgumentError, message: "Can't multiply elements of different fields")
@spec divide(t(), t()) :: t() | no_return
def divide(fqp1, fqp2) do
inverse = inverse(fqp2)
mult(fqp1, inverse)
end
@spec inverse(t()) :: t() | no_return
def inverse(fqp) do
lm = [FQ.new(1)] ++ List.duplicate(FQ.new(0), fqp.dim)
hm = List.duplicate(FQ.new(0), fqp.dim + 1)
low = fqp.coef ++ [FQ.new(0)]
high = fqp.modulus_coef ++ [1]
deg_low = deg(low)
calculate_inverse({high, low}, {hm, lm}, fqp, deg_low)
end
@spec pow(t(), integer()) :: t() | no_return
def pow(base, exp) do
cond do
exp == 0 ->
        coef = [1] ++ List.duplicate(0, base.dim - 1)
new(coef, base.modulus_coef)
exp == 1 ->
base
rem(exp, 2) == 0 ->
base
|> mult(base)
|> pow(div(exp, 2))
true ->
base
|> mult(base)
|> pow(div(exp, 2))
|> mult(base)
end
end
@spec zero?(t()) :: boolean()
def zero?(fqp) do
Enum.all?(fqp.coef, fn cur_coef ->
cur_coef.value == 0
end)
end
@spec negate(t()) :: t()
def negate(fqp) do
neg_coef = Enum.map(fqp.coef, fn coef -> FQ.new(-coef.value) end)
%{fqp | coef: neg_coef}
end
defp calculate_inverse({_, low}, {_, lm}, fqp, deg_low) when deg_low == 0 do
coef =
lm
|> Enum.take(fqp.dim)
|> Enum.map(fn el ->
FQ.divide(el, Enum.at(low, 0))
end)
new(coef, fqp.modulus_coef)
end
defp calculate_inverse({high, low}, {hm, lm}, fqp, _deg_low) do
r = poly_rounded_div(high, low)
r = r ++ List.duplicate(FQ.new(0), fqp.dim + 1 - Enum.count(r))
nm = hm
new = high
{nm, new} =
0..fqp.dim
|> Enum.reduce({nm, new}, fn i, {nm, new} ->
0..(fqp.dim - i)
|> Enum.reduce({nm, new}, fn j, {nm, new} ->
nmmult = lm |> Enum.at(i) |> FQ.new() |> FQ.mult(Enum.at(r, j))
new_nm_val = nm |> Enum.at(i + j) |> FQ.new() |> FQ.sub(nmmult)
nm = List.replace_at(nm, i + j, new_nm_val)
newmult = low |> Enum.at(i) |> FQ.new() |> FQ.mult(Enum.at(r, j))
new_val = new |> Enum.at(i + j) |> FQ.new() |> FQ.sub(newmult)
new = List.replace_at(new, i + j, new_val)
{nm, new}
end)
end)
deg_low = deg(new)
calculate_inverse({low, new}, {lm, nm}, fqp, deg_low)
end
defp poly_rounded_div(a, b) do
dega = deg(a)
degb = deg(b)
temp = a
output = List.duplicate(FQ.new(0), Enum.count(a))
output =
if dega - degb >= 0 do
{output, _} =
0..(dega - degb)
|> Enum.to_list()
|> Enum.reverse()
|> Enum.reduce({output, temp}, fn i, {out_acc, temp_acc} ->
new_val =
temp_acc
|> Enum.at(degb + i)
|> FQ.new()
|> FQ.divide(Enum.at(b, degb))
|> FQ.add(Enum.at(out_acc, i))
new_out_acc = List.replace_at(out_acc, i, new_val)
new_temp_acc =
0..degb
|> Enum.reduce(temp_acc, fn j, acc ->
updated_value =
acc |> Enum.at(i + j) |> FQ.new() |> FQ.sub(Enum.at(new_out_acc, j))
List.replace_at(
acc,
i + j,
updated_value
)
end)
{new_out_acc, new_temp_acc}
end)
output
else
output
end
dego = deg(output)
Enum.take(output, dego + 1)
end
defp deg(list) do
idx =
list
|> Enum.reverse()
|> Enum.find_index(fn el ->
if is_integer(el) do
el != 0
else
el.value != 0
end
end)
if is_nil(idx), do: 0, else: Enum.count(list) - idx - 1
end
defp mult_modulus_coef(pol_coef = [cur | tail_pol_coef], modulus_coef, dim)
when length(pol_coef) > dim do
current_idx = Enum.count(pol_coef) - dim - 1
tail_pol_coef = Enum.reverse(tail_pol_coef)
cur_result =
Enum.reduce(0..(dim - 1), tail_pol_coef, fn i, acc ->
current_acc_el = acc |> Enum.at(i + current_idx)
subtrahend = modulus_coef |> Enum.at(i) |> FQ.new() |> FQ.mult(cur)
updated_acc_el = FQ.sub(current_acc_el, subtrahend)
List.replace_at(acc, current_idx + i, updated_acc_el)
end)
cur_result
|> Enum.reverse()
|> mult_modulus_coef(modulus_coef, dim)
end
defp mult_modulus_coef(pol_coef, _, _), do: Enum.reverse(pol_coef)
end
# source: lib/bn/fqp.ex
defmodule AWS.Detective do
@moduledoc """
Detective uses machine learning and purpose-built visualizations to help
you analyze and investigate security issues across your Amazon Web Services
(AWS) workloads. Detective automatically extracts time-based events such as
login attempts, API calls, and network traffic from AWS CloudTrail and
Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts
findings detected by Amazon GuardDuty.
The Detective API primarily supports the creation and management of
behavior graphs. A behavior graph contains the extracted data from a set of
member accounts, and is created and managed by a master account.
Every behavior graph is specific to a Region. You can only use the API to
manage graphs that belong to the Region that is associated with the
currently selected endpoint.
A Detective master account can use the Detective API to do the following:
* Enable and disable Detective. Enabling Detective creates a new
  behavior graph.
* View the list of member accounts in a behavior graph.
* Add member accounts to a behavior graph.
* Remove member accounts from a behavior graph.

A member account can use the Detective API to do the following:

* View the list of behavior graphs that they are invited to.
* Accept an invitation to contribute to a behavior graph.
* Decline an invitation to contribute to a behavior graph.
* Remove their account from a behavior graph.

All API actions are logged as CloudTrail events. See [Logging
Detective API Calls with
CloudTrail](https://docs.aws.amazon.com/detective/latest/adminguide/logging-using-cloudtrail.html).
"""
@doc """
Accepts an invitation for the member account to contribute data to a
behavior graph. This operation can only be called by an invited member
account.
The request provides the ARN of behavior graph.
The member account status in the graph must be `INVITED`.
"""
def accept_invitation(client, input, options \\ []) do
path_ = "/invitation"
headers = []
query_ = []
request(client, :put, path_, query_, headers, input, options, nil)
end
@doc """
Creates a new behavior graph for the calling account, and sets that account
as the master account. This operation is called by the account that is
enabling Detective.
Before you try to enable Detective, make sure that your account has been
enrolled in Amazon GuardDuty for at least 48 hours. If you do not meet this
requirement, you cannot enable Detective. If you do meet the GuardDuty
prerequisite, then when you make the request to enable Detective, it checks
whether your data volume is within the Detective quota. If it exceeds the
quota, then you cannot enable Detective.
The operation also enables Detective for the calling account in the
currently selected Region. It returns the ARN of the new behavior graph.
`CreateGraph` triggers a process to create the corresponding data tables
for the new behavior graph.
An account can only be the master account for one behavior graph within a
Region. If the same account calls `CreateGraph` with the same master
account, it always returns the same behavior graph ARN. It does not create
a new behavior graph.
"""
def create_graph(client, input, options \\ []) do
path_ = "/graph"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Sends a request to invite the specified AWS accounts to be member accounts
in the behavior graph. This operation can only be called by the master
account for a behavior graph.
`CreateMembers` verifies the accounts and then sends invitations to the
verified accounts.
The request provides the behavior graph ARN and the list of accounts to
invite.
The response separates the requested accounts into two lists:
* The accounts that `CreateMembers` was able to start the
  verification for. This list includes member accounts that are being
  verified, that have passed verification and are being sent an invitation,
  and that have failed verification.
* The accounts that `CreateMembers` was unable to process. This
  list includes accounts that were already invited to be member accounts in
  the behavior graph.
"""
def create_members(client, input, options \\ []) do
path_ = "/graph/members"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Disables the specified behavior graph and queues it to be deleted. This
operation removes the graph from each member account's list of behavior
graphs.
`DeleteGraph` can only be called by the master account for a behavior
graph.
"""
def delete_graph(client, input, options \\ []) do
path_ = "/graph/removal"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Deletes one or more member accounts from the master account behavior graph.
This operation can only be called by a Detective master account. That
account cannot use `DeleteMembers` to delete their own account from the
behavior graph. To disable a behavior graph, the master account uses the
`DeleteGraph` API method.
"""
def delete_members(client, input, options \\ []) do
path_ = "/graph/members/removal"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Removes the member account from the specified behavior graph. This
operation can only be called by a member account that has the `ENABLED`
status.
"""
def disassociate_membership(client, input, options \\ []) do
path_ = "/membership/removal"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns the membership details for specified member accounts for a behavior
graph.
"""
def get_members(client, input, options \\ []) do
path_ = "/graph/members/get"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns the list of behavior graphs that the calling account is a master
of. This operation can only be called by a master account.
Because an account can currently only be the master of one behavior graph
within a Region, the results always contain a single graph.
"""
def list_graphs(client, input, options \\ []) do
path_ = "/graphs/list"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Retrieves the list of open and accepted behavior graph invitations for the
member account. This operation can only be called by a member account.
Open invitations are invitations that the member account has not responded
to.
The results do not include behavior graphs for which the member account
declined the invitation. The results also do not include behavior graphs
that the member account resigned from or was removed from.
"""
def list_invitations(client, input, options \\ []) do
path_ = "/invitations/list"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Retrieves the list of member accounts for a behavior graph. Does not return
member accounts that were removed from the behavior graph.
"""
def list_members(client, input, options \\ []) do
path_ = "/graph/members/list"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Rejects an invitation to contribute the account data to a behavior graph.
This operation must be called by a member account that has the `INVITED`
status.
"""
def reject_invitation(client, input, options \\ []) do
path_ = "/invitation/removal"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Sends a request to enable data ingest for a member account that has a
status of `ACCEPTED_BUT_DISABLED`.
For valid member accounts, the status is updated as follows.
* If Detective enabled the member account, then the new status is
  `ENABLED`.
* If Detective cannot enable the member account, the status
  remains `ACCEPTED_BUT_DISABLED`.
"""
def start_monitoring_member(client, input, options \\ []) do
path_ = "/graph/member/monitoringstate"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@spec request(AWS.Client.t(), binary(), binary(), list(), list(), map(), list(), pos_integer()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, method, path, query, headers, input, options, success_status_code) do
client = %{client | service: "detective"}
host = build_host("api.detective", client)
url = host
|> build_url(path, client)
|> add_query(query, client)
additional_headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}]
headers = AWS.Request.add_headers(additional_headers, headers)
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(client, method, url, payload, headers, options, success_status_code)
end
defp perform_request(client, method, url, payload, headers, options, success_status_code) do
case AWS.Client.request(client, method, url, payload, headers, options) do
{:ok, %{status_code: status_code, body: body} = response}
when is_nil(success_status_code) and status_code in [200, 202, 204]
when status_code == success_status_code ->
body = if(body != "", do: decode!(client, body))
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, path, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{path}"
end
defp add_query(url, [], _client) do
url
end
defp add_query(url, query, client) do
querystring = encode!(client, query, :query)
"#{url}?#{querystring}"
end
defp encode!(client, payload, format \\ :json) do
AWS.Client.encode!(client, payload, format)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
# source: lib/aws/generated/detective.ex
defmodule FuzzyCompare do
@moduledoc """
This module compares two strings for their similarity and uses multiple
approaches to get high quality results.
## Getting started
In order to compare two strings with each other do the following:
iex> FuzzyCompare.similarity("<NAME>", "monet, claude")
0.95
## Inner workings
Imagine you had to [match some names](https://en.wikipedia.org/wiki/Record_linkage).
Try to match the following list of painters:
* `"<NAME>"`
* `"<NAME>"`
* `"<NAME>"`
For a human it is easy to see that some of the names have just been flipped
and that others are different but similar sounding.
A first approach could be to compare the strings with a string similarity
function like the
[Jaro-Winkler](https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance)
function.
iex> String.jaro_distance("<NAME>", "<NAME>")
0.6032763532763533
iex> String.jaro_distance("<NAME>", "<NAME>")
0.6749287749287749
This is not an improvement over exact equality.
In order to improve the results this library uses two different approaches,
`FuzzyCompare.ChunkSet` and `FuzzyCompare.SortedChunks`.
### Sorted chunks
This approach yields good results when words within a string have been
shuffled around. The strategy will sort all substrings by words and compare
the sorted strings.
iex> FuzzyCompare.SortedChunks.substring_similarity("<NAME>", "<NAME>")
1.0
iex> FuzzyCompare.SortedChunks.substring_similarity("<NAME>", "<NAME>")
0.6944444444444443
### Chunkset
The chunkset approach is best in scenarios when the strings contain other
substrings that are not relevant to what is being searched for.
iex> FuzzyCompare.ChunkSet.standard_similarity("<NAME>", "<NAME> was the wife of <NAME>")
1.0
### Substring comparison
Should one of the strings be much longer than the other the library will
attempt to compare matching substrings only.
## Credits
This library is inspired by a [seatgeek blogpost from 2011](https://chairnerd.seatgeek.com/fuzzywuzzy-fuzzy-string-matching-in-python/).
"""
alias FuzzyCompare.{
ChunkSet,
Preprocessed,
Preprocessor,
SortedChunks,
StandardStringComparison,
Strategy,
SubstringComparison
}
@bias 0.95
@doc """
Compares two binaries for their similarity and returns a float in the range of
`0.0` and `1.0` where `0.0` means no similarity and `1.0` means exactly alike.
## Examples
iex> FuzzyCompare.similarity("Oscar-<NAME>", "monet, claude")
0.95
iex> String.jaro_distance("Oscar-<NAME>", "monet, claude")
0.6032763532763533
## Preprocessing
The `similarity/2` function expects either strings or the `FuzzyCompare.Preprocessed` struct.
When comparing a large list of strings against always the same string it is
advisable to run the preprocessing once and pass the `FuzzyCompare.Preprocessed` struct.
That way you pay for preprocessing of the constant string only once.
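  A sketch of that pattern (the single-argument `Preprocessor.process/1` used
  here is hypothetical; this excerpt only shows the two-argument
  `Preprocessor.process/2`):

      constant = FuzzyCompare.Preprocessor.process("monet, claude")
      candidates
      |> Enum.map(&FuzzyCompare.Preprocessor.process/1)
      |> Enum.map(&FuzzyCompare.similarity(constant, &1))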
"""
@spec similarity(binary() | Preprocessed.t(), binary() | Preprocessed.t()) :: float()
def similarity(left, right) when is_binary(left) and is_binary(right) do
{processed_left, processed_right} = Preprocessor.process(left, right)
similarity(processed_left, processed_right)
end
def similarity(%Preprocessed{} = left, %Preprocessed{} = right) do
case Strategy.determine_strategy(left, right) do
:standard -> standard_similarity(left, right)
{:substring, scale} -> substring_similarity(left, right, scale)
end
end
@spec substring_similarity(Preprocessed.t(), Preprocessed.t(), number()) :: float()
defp substring_similarity(
%Preprocessed{} = left,
%Preprocessed{} = right,
substring_scale
) do
[
StandardStringComparison.similarity(left.string, right.string),
SubstringComparison.similarity(left.string, right.string),
SortedChunks.substring_similarity(left, right) * @bias * substring_scale,
ChunkSet.substring_similarity(left, right) * @bias * substring_scale
]
|> Enum.max()
end
@spec standard_similarity(Preprocessed.t(), Preprocessed.t()) :: float()
defp standard_similarity(%Preprocessed{} = left, %Preprocessed{} = right) do
[
StandardStringComparison.similarity(left.string, right.string),
SortedChunks.standard_similarity(left, right) * @bias,
ChunkSet.standard_similarity(left, right) * @bias
]
|> Enum.max()
end
end
# source: lib/fuzzy_compare.ex
defmodule Timber.JSON do
@moduledoc false
# This module wraps all JSON encoding functions making it easy
# to change the underlying JSON encoder. This is necessary if/when
# we decide to make the JSON encoder configurable.
# Convenience function for encoding data to JSON. This is necessary to allow for
# configurable JSON parsers.
@doc false
@spec encode_to_binary(any) :: {:ok, String.t()} | {:error, term}
def encode_to_binary(data) do
Jason.encode(data, escape: :json)
end
# Convenience function for encoding data to JSON. This is necessary to allow for
# configurable JSON parsers.
@doc false
@spec encode_to_binary!(any) :: String.t()
def encode_to_binary!(data) do
Jason.encode!(data, escape: :json)
end
# Convenience function that attempts to encode the provided argument to JSON.
# If the encoding fails a `nil` value is returned. If you want the actual error
# please use `encode_to_binary/1`.
@doc false
@spec try_encode_to_binary(any) :: nil | String.t()
def try_encode_to_binary(data) do
case encode_to_binary(data) do
{:ok, json} -> json
{:error, _error} -> nil
end
end
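  # For example:
  #
  #     Timber.JSON.try_encode_to_binary(%{event: "login"})
  #     #=> ~s({"event":"login"})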
# Convenience function for encoding data to JSON. This is necessary to allow for
# configurable JSON parsers.
@doc false
@spec encode_to_iodata(any) :: {:ok, iodata} | {:error, term}
def encode_to_iodata(data) do
Jason.encode_to_iodata(data, escape: :json)
end
# Convenience function for encoding data to JSON. This is necessary to allow for
# configurable JSON parsers.
@doc false
@spec encode_to_iodata!(any) :: iodata
def encode_to_iodata!(data) do
Jason.encode_to_iodata!(data, escape: :json)
end
# Convenience function that attempts to encode the provided argument to JSON.
# If the encoding fails a `nil` value is returned. If you want the actual error
# please use `encode_to_iodata/1`.
@doc false
@spec try_encode_to_iodata(any) :: nil | iodata
def try_encode_to_iodata(data) do
    case encode_to_iodata(data) do
{:ok, json} -> json
{:error, _error} -> nil
end
end
end
# source: lib/timber/json.ex
defprotocol Bamboo.Formatter do
@moduledoc ~S"""
Converts data to email addresses.
The passed in options is currently a map with the key `:type` and a value of
`:from`, `:to`, `:cc` or `:bcc`. This makes it so that you can pattern match
and return a different address depending on if the address is being used in
the from, to, cc or bcc.
## Simple example
Let's say you have a user struct like this.
defmodule MyApp.User do
defstruct first_name: nil, last_name: nil, email: nil
end
Bamboo can automatically format this struct if you implement the Bamboo.Formatter
protocol.
defimpl Bamboo.Formatter, for: MyApp.User do
# Used by `to`, `bcc`, `cc` and `from`
def format_email_address(user, _opts) do
fullname = "#{user.first_name} #{user.last_name}"
{fullname, user.email}
end
end
Now you can create emails like this, and the user will be formatted correctly
user = %User{first_name: "John", last_name: "Doe", email: "<EMAIL>"}
Bamboo.Email.new_email(from: user)
## Customize formatting based on from, to, cc or bcc
This can be helpful if you want to add the name of the app when sending on
behalf of a user.
defimpl Bamboo.Formatter, for: MyApp.User do
# Include the app name when used in a from address
def format_email_address(user, %{type: :from}) do
fullname = "#{user.first_name} #{user.last_name}"
{fullname <> " (Sent from MyApp)", user.email}
end
# Just use the name for all other types
def format_email_address(user, _opts) do
fullname = "#{user.first_name} #{user.last_name}"
{fullname, user.email}
end
end
"""
@doc ~S"""
Receives data and opts and should return a string or a 2 item tuple {name, address}
opts is a map with the key `:type` and a value of
`:from`, `:to`, `:cc` or `:bcc`. You can pattern match on this to customize
the address.
"""
@type opts :: %{type: :from | :to | :cc | :bcc}
@spec format_email_address(any, opts) :: Bamboo.Email.address()
def format_email_address(data, opts)
end
defimpl Bamboo.Formatter, for: List do
def format_email_address(email_addresses, opts) do
email_addresses |> Enum.map(&Bamboo.Formatter.format_email_address(&1, opts))
end
end
defimpl Bamboo.Formatter, for: BitString do
def format_email_address(email_address, _opts) do
{nil, email_address}
end
end
defimpl Bamboo.Formatter, for: Tuple do
def format_email_address(already_formatted_email, _opts) do
already_formatted_email
end
end
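# Given the implementations above, for example (addresses are illustrative):
#
#     Bamboo.Formatter.format_email_address("person@example.com", %{type: :to})
#     #=> {nil, "person@example.com"}
#
#     Bamboo.Formatter.format_email_address([{"Jane", "jane@example.com"}], %{type: :cc})
#     #=> [{"Jane", "jane@example.com"}]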
defimpl Bamboo.Formatter, for: Map do
def format_email_address(invalid_address, _opts) do
raise ArgumentError, """
The format of the address was invalid. Got #{inspect(invalid_address)}.
Expected a string, e.g. "<EMAIL>", a 2 item tuple {name, address}, or
something that implements the Bamboo.Formatter protocol.
Example:
defimpl Bamboo.Formatter, for: MyApp.User do
def format_email_address(user, _opts) do
{user.name, user.email}
end
end
"""
end
end
# source: lib/bamboo/formatter.ex
defmodule Ambry.Series do
@moduledoc """
Functions for dealing with Series.
"""
import Ambry.SearchUtils
import Ecto.Query
alias Ambry.{PubSub, Repo}
alias Ambry.Series.{Series, SeriesBook, SeriesFlat}
@doc """
Returns a limited list of series and whether or not there are more.
By default, it will limit to the first 10 results. Supply `offset` and `limit`
to change this. Also can optionally filter by the given `filter` string.
## Examples
iex> list_series()
{[%SeriesFlat{}, ...], true}
"""
def list_series(offset \\ 0, limit \\ 10, filters \\ %{}, order \\ [asc: :name]) do
over_limit = limit + 1
series =
offset
|> SeriesFlat.paginate(over_limit)
|> SeriesFlat.filter(filters)
|> SeriesFlat.order(order)
|> Repo.all()
series_to_return = Enum.slice(series, 0, limit)
{series_to_return, series != series_to_return}
end
@doc """
Returns the number of series.
## Examples
iex> count_series()
1
"""
@spec count_series :: integer()
def count_series do
Repo.one(from s in Series, select: count(s.id))
end
@doc """
Gets a single series.
Raises `Ecto.NoResultsError` if the Series does not exist.
## Examples
iex> get_series!(123)
%Series{}
iex> get_series!(456)
** (Ecto.NoResultsError)
"""
def get_series!(id) do
series_book_query = from sb in SeriesBook, order_by: [asc: sb.book_number]
Series
|> preload(series_books: ^{series_book_query, [:book]})
|> Repo.get!(id)
end
@doc """
Creates a series.
## Examples
iex> create_series(%{field: value})
{:ok, %Series{}}
iex> create_series(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_series(attrs) do
%Series{}
|> Series.changeset(attrs)
|> Repo.insert()
|> tap(&PubSub.broadcast_create/1)
end
@doc """
Updates a series.
## Examples
iex> update_series(series, %{field: new_value})
{:ok, %Series{}}
iex> update_series(series, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_series(%Series{} = series, attrs) do
series
|> Series.changeset(attrs)
|> Repo.update()
|> tap(&PubSub.broadcast_update/1)
end
@doc """
Deletes a series.
## Examples
iex> delete_series(series)
{:ok, %Series{}}
iex> delete_series(series)
{:error, %Ecto.Changeset{}}
"""
def delete_series(%Series{} = series) do
series
|> Repo.delete()
|> tap(&PubSub.broadcast_delete/1)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking series changes.
## Examples
iex> change_series(series)
%Ecto.Changeset{data: %Series{}}
"""
def change_series(%Series{} = series, attrs \\ %{}) do
Series.changeset(series, attrs)
end
@doc """
Gets a series and all of its books.
Books are listed in ascending order based on series book number.
"""
def get_series_with_books!(series_id) do
series_book_query = from sb in SeriesBook, order_by: [asc: sb.book_number]
Series
|> preload(series_books: ^{series_book_query, [book: [:authors, series_books: :series]]})
|> Repo.get!(series_id)
end
@doc """
Finds series that match a query string.
Returns a list of tuples of the form `{jaro_distance, series}`.
"""
def search(query_string, limit \\ 15) do
name_query = "%#{query_string}%"
query = from s in Series, where: ilike(s.name, ^name_query), limit: ^limit
series_book_query = from sb in SeriesBook, order_by: [asc: sb.book_number]
query
|> preload(series_books: ^{series_book_query, [book: [:authors, series_books: :series]]})
|> Repo.all()
|> sort_by_jaro(query_string, :name)
end
@doc """
Returns all series for use in `Select` components.
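## Examples
    iex> for_select()
    [{"Series One", 1}, ...]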
"""
def for_select do
query = from s in Series, select: {s.name, s.id}, order_by: s.name
Repo.all(query)
end
end
# source: lib/ambry/series.ex
defmodule Rummage.Ecto.Hook.Search do
@moduledoc """
`Rummage.Ecto.Hook.Search` is the default search hook that comes with
`Rummage.Ecto`.
This module provides a operations that can add searching functionality to
a pipeline of `Ecto` queries. This module works by taking fields, and `search_type`,
`search_term` and `assoc` associated with those `fields`.
NOTE: This module doesn't return a list of entries, but a `Ecto.Query.t`.
This module `uses` `Rummage.Ecto.Hook`.
_____________________________________________________________________________
# ABOUT:
## Arguments:
This Hook expects a `queryable` (an `Ecto.Queryable`) and
`search_params` (a `Map`). The map should be in the format:
`%{field_name: %{assoc: [], search_term: true, search_type: :eq}}`
Details:
* `field_name`: The field name to search by.
* `assoc`: List of associations in the search.
* `search_term`: Term to compare the `field_name` against.
* `search_type`: Determines the kind of search to perform. If `:eq`, it
expects the `field_name`'s value to be equal to `search_term`.
If `:lt`, it expects it to be less than `search_term`.
To see all the `search_type`s, check
`Rummage.Ecto.Services.BuildSearchQuery`
* `search_expr`: This is optional. Defaults to `:where`. This is the way current
search expression is appended to the existing query.
To see all the `search_expr`s, check
`Rummage.Ecto.Services.BuildSearchQuery`
For example, if we want to search products with `available` = `true`, we would
do the following:
```elixir
Rummage.Ecto.Hook.Search.run(Product, %{available: %{assoc: [],
search_type: :eq,
search_term: true}})
```
This can be used for a search with multiple fields as well. Say, we want to
search for products that are `available`, but have a price less than `10.0`.
```elixir
Rummage.Ecto.Hook.Search.run(Product,
%{available: %{assoc: [],
search_type: :eq,
search_term: true},
  price: %{assoc: [],
search_type: :lt,
search_term: 10.0}})
```
## Associations:
Associations can be given to this module's run function as a key corresponding
to params associated with a field. For example, if we want to search products
that belong to a category with category_name, "super", we would do the
following:
```elixir
category_name_params = %{assoc: [inner: :category], search_term: "super",
search_type: :eq, search_expr: :where}
Rummage.Ecto.Hook.Search.run(Product, %{category_name: category_name_params})
```
The above operation will return an `Ecto.Query.t` struct which represents
a query equivalent to:
```elixir
from p0 in Product
|> join(:inner, :category)
|> where([p, c], c.category_name == ^"super")
```
____________________________________________________________________________
# ASSUMPTIONS/NOTES:
* This Hook has the default `search_type` of `:ilike`, which is
case-insensitive.
* This Hook has the default `search_expr` of `:where`.
* This Hook assumes that the field passed is a field on the `Ecto.Schema`
that corresponds to the last association in the `assoc` list or the `Ecto.Schema`
that corresponds to the `from` in `queryable`, if `assoc` is an empty list.
NOTE: It is advised to not use multiple associated searches in one operation
as `assoc` still has some minor bugs when used with multiple searches. If you
need to use two searches with associations, I would pipe the call to another
search operation:
```elixir
Search.run(queryable, %{field1: %{assoc: [inner: :some_assoc]}})
|> Search.run(%{field2: %{assoc: [inner: :some_assoc2]}})
```
____________________________________________________________________________
# USAGE:
For a regular search:
This returns a `queryable` which upon running will give a list of `Parent`(s)
whose `field_1` matches the given `search_term`:
```elixir
alias Rummage.Ecto.Hook.Search
searched_queryable = Search.run(Parent, %{field_1: %{assoc: [],
  search_type: :like, search_term: "field_!"}})
```
For a case-insensitive search:
This returns a `queryable` which upon running will give a list of `Parent`(s)
whose `field_1` matches the given `search_term` case-insensitively.
Keep in mind that a case-insensitive search can only be used for `text` fields:
```elixir
alias Rummage.Ecto.Hook.Search
searched_queryable = Search.run(Parent, %{field_1: %{assoc: [],
  search_type: :ilike, search_term: "field_!"}})
```
There are many other `search_types`. Check out
`Rummage.Ecto.Services.BuildSearchQuery` docs to explore more `search_types`
This module can be overridden with a custom module while using `Rummage.Ecto`
in `Ecto` struct module:
In the `Ecto` module:
```elixir
Rummage.Ecto.rummage(queryable, rummage, search: CustomHook)
```
OR
Globally for all models in `config.exs`:
```elixir
config :my_app,
Rummage.Ecto,
  search: CustomHook
```
The `CustomHook` must use `Rummage.Ecto.Hook`. For examples of `CustomHook`,
check out some `custom_hooks` that are shipped with `Rummage.Ecto`:
`Rummage.Ecto.CustomHook.SimpleSearch`, `Rummage.Ecto.CustomHook.SimpleSort`,
`Rummage.Ecto.CustomHook.SimplePaginate`.
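As a rough sketch, a custom hook only needs the two callbacks this module itself
implements, `run/2` and `format_params/3` (`MyApp.CustomSearchHook` is an
illustrative name):
```elixir
defmodule MyApp.CustomSearchHook do
  use Rummage.Ecto.Hook

  # Pass the queryable through untouched; a real hook builds the query here.
  def run(queryable, _search_params), do: queryable

  # Return the params unchanged; a real hook normalizes them here.
  def format_params(_queryable, search_params, _opts), do: search_params
end
```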
"""
use Rummage.Ecto.Hook
import Ecto.Query
@expected_keys ~w{search_type assoc search_term}a
@err_msg ~s{Error in params, No values given for keys: }
alias Rummage.Ecto.Services.BuildSearchQuery
@doc ~S"""
This is the callback implementation of Rummage.Ecto.Hook.run/2.
Builds a search `Ecto.Query.t` on top of a given `Ecto.Query.t` variable
with given `params`.
Besides an `Ecto.Query.t`, an `Ecto.Schema` module can also be passed, as it
implements `Ecto.Queryable`.
Params is a `Map`, whose keys are the field names which will be searched for; the
value corresponding to each key is a map of params for that field, which
should include the keys: `#{Enum.join(@expected_keys, ", ")}`.
This function expects a `search_expr`, `search_type` and a list of
`associations` (empty for none). The `search_term` is what the `field`
will be matched to based on the `search_type` and `search_expr`.
If no `search_expr` is given, it defaults to `:where`.
For all `search_exprs`, refer to `Rummage.Ecto.Services.BuildSearchQuery`.
For all `search_types`, refer to `Rummage.Ecto.Services.BuildSearchQuery`.
If an expected key isn't given, a `RuntimeError` is raised.
NOTE: This hook isn't responsible for doing type validations. That's the
responsibility of the user sending `search_term` and `search_type`. The same
goes for the validity of `assoc`.
## Examples
When search_params are empty, it simply returns the same `queryable`:
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> Search.run(Parent, %{})
Parent
When a non-empty map is passed as a field `params`, but with a missing key:
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> Search.run(Parent, %{field: %{assoc: []}})
** (RuntimeError) Error in params, No values given for keys: search_type, search_term
When a valid map of params is passed with an `Ecto.Schema` module:
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{field1: %{assoc: [],
...> search_type: :like, search_term: "field1", search_expr: :where}}
iex> Search.run(Rummage.Ecto.Product, search_params)
#Ecto.Query<from p0 in subquery(from p0 in Rummage.Ecto.Product), where: like(p0.field1, ^"%field1%")>
When a valid map of params is passed with an `Ecto.Query.t`:
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{field1: %{assoc: [],
...> search_type: :like, search_term: "field1", search_expr: :where}}
iex> query = from p0 in "products"
iex> Search.run(query, search_params)
#Ecto.Query<from p0 in subquery(from p0 in "products"), where: like(p0.field1, ^"%field1%")>
When a valid map of params is passed with an `Ecto.Query.t`, with `assoc`s:
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{field1: %{assoc: [inner: :category],
...> search_type: :like, search_term: "field1", search_expr: :or_where}}
iex> query = from p0 in "products"
iex> Search.run(query, search_params)
#Ecto.Query<from p0 in subquery(from p0 in "products"), join: c1 in assoc(p0, :category), or_where: like(c1.field1, ^"%field1%")>
When a valid map of params is passed with an `Ecto.Query.t`, with `assoc`s, with
different join types:
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{field1: %{assoc: [inner: :category, left: :category, cross: :category],
...> search_type: :like, search_term: "field1", search_expr: :where}}
iex> query = from p0 in "products"
iex> Search.run(query, search_params)
#Ecto.Query<from p0 in subquery(from p0 in "products"), join: c1 in assoc(p0, :category), left_join: c2 in assoc(c1, :category), cross_join: c3 in assoc(c2, :category), where: like(c3.field1, ^"%field1%")>
When a valid map of params is passed with an `Ecto.Query.t`, searching on
a boolean param
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{available: %{assoc: [],
...> search_type: :eq, search_term: true, search_expr: :where}}
iex> query = from p0 in "products"
iex> Search.run(query, search_params)
#Ecto.Query<from p0 in subquery(from p0 in "products"), where: p0.available == ^true>
When a valid map of params is passed with an `Ecto.Query.t`, searching on
a float param
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{price: %{assoc: [],
...> search_type: :gteq, search_term: 10.0, search_expr: :where}}
iex> query = from p0 in "products"
iex> Search.run(query, search_params)
#Ecto.Query<from p0 in subquery(from p0 in "products"), where: p0.price >= ^10.0>
When a valid map of params is passed with an `Ecto.Query.t`, searching on
a boolean param, but with a wrong `search_type`.
NOTE: This doesn't validate the `search_term` against the `search_type`.
iex> alias Rummage.Ecto.Hook.Search
iex> import Ecto.Query
iex> search_params = %{available: %{assoc: [],
...> search_type: :ilike, search_term: true, search_expr: :where}}
iex> query = from p0 in "products"
iex> Search.run(query, search_params)
** (ArgumentError) argument error
"""
@spec run(Ecto.Query.t(), map()) :: Ecto.Query.t()
def run(q, s), do: handle_search(q, s)
# Helper function which handles addition of search query on top of
# the sent queryable variable, for all search fields.
defp handle_search(queryable, search_params) do
search_params
|> Map.to_list()
|> Enum.reduce(queryable, &search_queryable(&1, &2))
end
# Helper function which handles addition of search query on top of
# the sent queryable variable, for ONE search field.
# This delegates the query building to `BuildSearchQuery` module
defp search_queryable(param, queryable) do
field = elem(param, 0)
field_params = elem(param, 1)
:ok = validate_params(field_params)
assocs = Map.get(field_params, :assoc)
search_type = Map.get(field_params, :search_type)
search_term = Map.get(field_params, :search_term)
search_expr = Map.get(field_params, :search_expr, :where)
field = resolve_field(field, queryable)
assocs
|> Enum.reduce(from(e in subquery(queryable)), &join_by_assoc(&1, &2))
|> BuildSearchQuery.run(field, {search_expr, search_type}, search_term)
end
# Helper function which handles associations in a query with a join
# type.
defp join_by_assoc({join, assoc}, query) do
join(query, join, [..., p1], p2 in assoc(p1, ^assoc))
end
# NOTE: These functions can be used in future for multiple search fields that
# are associated.
# defp applied_associations(queryable) when is_atom(queryable), do: []
# defp applied_associations(queryable), do: Enum.map(queryable.joins, & Atom.to_string(elem(&1.assoc, 1)))
# Helper function that validates the list of params based on
# @expected_keys list
defp validate_params(params) do
key_validations = Enum.map(@expected_keys, &Map.fetch(params, &1))
case Enum.filter(key_validations, &(&1 == :error)) do
[] -> :ok
_ -> raise @err_msg <> missing_keys(key_validations)
end
end
# Helper function used to build error message using missing keys
defp missing_keys(key_validations) do
key_validations
|> Enum.with_index()
|> Enum.filter(fn {v, _i} -> v == :error end)
|> Enum.map(fn {_v, i} -> Enum.at(@expected_keys, i) end)
|> Enum.map(&to_string/1)
|> Enum.join(", ")
end
@doc """
Callback implementation for Rummage.Ecto.Hook.format_params/3.
This function ensures that params for each field have keys `assoc`, `search_type` and
`search_expr` which are essential for running this hook module.
## Examples
iex> alias Rummage.Ecto.Hook.Search
iex> Search.format_params(Parent, %{field: %{}}, [])
%{field: %{assoc: [], search_expr: :where, search_type: :eq}}
iex> alias Rummage.Ecto.Hook.Search
iex> Search.format_params(Parent, %{field: 1}, [])
** (RuntimeError) No scope `field` of type search defined in the Elixir.Parent
"""
@spec format_params(Ecto.Query.t(), map(), keyword()) :: map()
def format_params(queryable, search_params, _opts) do
search_params
|> Map.to_list()
|> Enum.map(&put_keys(&1, queryable))
|> Enum.into(%{})
end
defp put_keys({field, %{} = field_params}, _queryable) do
field_params =
field_params
|> Map.put_new(:assoc, [])
|> Map.put_new(:search_type, :eq)
|> Map.put_new(:search_expr, :where)
{field, field_params}
end
defp put_keys({search_scope, field_value}, queryable) do
module = get_module(queryable)
name = :"__rummage_search_#{search_scope}"
{field, search_params} =
case function_exported?(module, name, 1) do
true -> apply(module, name, [field_value])
_ -> raise "No scope `#{search_scope}` of type search defined in the #{module}"
end
put_keys({field, search_params}, queryable)
end
end
lib/rummage_ecto/hooks/search.ex
defprotocol Bolt.Sips.ResponseEncoder.Json do
@moduledoc """
Protocol controlling how a value is made jsonable.
Its only purpose is to convert Bolt Sips specific structures into Elixir built-in
types which can be encoded in JSON by Jason.
## Deriving
If the provided default implementations don't fit your needs, you can override
them with your own implementation.
### Example
Let's assume that you don't want the Node's `id` available, as it is Neo4j's
internal id and is not reliable because of id reuse, and you want your own
`uuid` in its place.
Instead of:
```
{
id: 0,
labels: ["TestNode"],
properties: {
uuid: "837806a7-6c37-4630-9f6c-9aa7ad0129ed"
value: "my node"
}
}
```
you want:
```
{
uuid: "837806a7-6c37-4630-9f6c-9aa7ad0129ed",
labels: ["TestNode"],
properties: {
value: "my node"
}
}
```
You can achieve that with the following implementation:
```
defimpl Bolt.Sips.ResponseEncoder.Json, for: Bolt.Sips.Types.Node do
def encode(node) do
  # Map.drop/2 takes a list of keys; :uuid is promoted to the top level below.
  new_props = Map.drop(node.properties, [:uuid])
  node
  |> Map.from_struct()
  # Drop Neo4j's internal id, as described above.
  |> Map.delete(:id)
  |> Map.put(:uuid, node.properties.uuid)
  |> Map.put(:properties, new_props)
end
end
```
It is also possible to provide implementations that return structs or updated
Bolt.Sips.Types; a final `Bolt.Sips.ResponseEncoder.Json.encode()` pass will
ensure that these values are converted to jsonable ones.
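A usage sketch (assuming `Jason` as the JSON library, as noted above;
`query_result` stands for any Bolt Sips result value):
```
query_result
|> Bolt.Sips.ResponseEncoder.Json.encode()
|> Jason.encode!()
```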
"""
@fallback_to_any true
@doc """
Converts a value into a jsonable format
"""
@spec encode(any()) :: any()
def encode(value)
end
alias Bolt.Sips.{Types, ResponseEncoder}
defimpl ResponseEncoder.Json, for: Types.DateTimeWithTZOffset do
@spec encode(Types.DateTimeWithTZOffset.t()) :: String.t()
def encode(value) do
{:ok, dt} = Types.DateTimeWithTZOffset.format_param(value)
ResponseEncoder.Json.encode(dt)
end
end
defimpl ResponseEncoder.Json, for: Types.TimeWithTZOffset do
@spec encode(Types.TimeWithTZOffset.t()) :: String.t()
def encode(struct) do
{:ok, t} = Types.TimeWithTZOffset.format_param(struct)
ResponseEncoder.Json.encode(t)
end
end
defimpl ResponseEncoder.Json, for: Types.Duration do
@spec encode(Types.Duration.t()) :: String.t()
def encode(struct) do
{:ok, d} = Types.Duration.format_param(struct)
ResponseEncoder.Json.encode(d)
end
end
defimpl ResponseEncoder.Json, for: Types.Point do
@spec encode(Types.Point.t()) :: map()
def encode(struct) do
{:ok, pt} = Types.Point.format_param(struct)
ResponseEncoder.Json.encode(pt)
end
end
defimpl ResponseEncoder.Json,
for: [Types.Node, Types.Relationship, Types.UnboundRelationship, Types.Path] do
@spec encode(struct()) :: map()
def encode(value) do
value
|> Map.from_struct()
|> ResponseEncoder.Json.encode()
end
end
defimpl ResponseEncoder.Json, for: Any do
@spec encode(any()) :: any()
def encode(value) when is_list(value) do
value
|> Enum.map(&ResponseEncoder.Json.encode/1)
end
def encode(%{__struct__: _} = value) do
value
|> Map.from_struct()
|> ResponseEncoder.Json.encode()
end
def encode(value) when is_map(value) do
value
|> Enum.into(%{}, fn {k, val} -> {k, ResponseEncoder.Json.encode(val)} end)
end
def encode(value) do
value
end
end
lib/bolt_sips/response_encoder/json.ex
defmodule Advent.Y2021.D25 do
@moduledoc """
https://adventofcode.com/2021/day/25
"""
@doc """
"""
@spec part_one(Enumerable.t()) :: non_neg_integer()
def part_one(input) do
input
|> parse_input()
|> Stream.iterate(&step/1)
|> Stream.chunk_every(2, 1)
|> Stream.with_index(1)
|> Enum.find_value(fn {[g1, g2], turn} ->
if g1 == g2, do: turn
end)
end
@doc """
"""
@spec part_two(any()) :: any()
def part_two(_input) do
0
end
defp parse_input(input) do
grid =
input
|> Stream.with_index()
|> Stream.flat_map(fn {line, row} ->
line
|> String.graphemes()
|> Enum.with_index()
|> Enum.map(fn {x, col} -> {x, {col, row}} end)
end)
|> Stream.reject(fn {x, _} -> x == "." end)
|> Stream.map(fn
{">", coord} -> {coord, :>}
{"v", coord} -> {coord, :v}
end)
|> Map.new()
{{max_x, _}, _} = Enum.max_by(grid, fn {{x, _}, _} -> x end)
{{_, max_y}, _} = Enum.max_by(grid, fn {{_, y}, _} -> y end)
{grid, {max_x, max_y}}
end
defp step({grid, {max_x, max_y}}) do
# Move east herd
grid =
grid
|> Enum.map(fn
self = {{x, y}, :>} ->
shift = {rem(x + 1, max_x + 1), y}
if Map.has_key?(grid, shift) do
self
else
{shift, :>}
end
self ->
self
end)
|> Map.new()
# Move south herd
grid =
grid
|> Enum.map(fn
self = {{x, y}, :v} ->
shift = {x, rem(y + 1, max_y + 1)}
if Map.has_key?(grid, shift) do
self
else
{shift, :v}
end
self ->
self
end)
|> Map.new()
{grid, {max_x, max_y}}
end
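# Debugging helper for visualizing the grid; not used by the solution.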
defp print_grid({grid, {max_x, max_y}}) do
IO.puts("")
for y <- 0..max_y do
for x <- 0..max_x, into: "" do
case Map.get(grid, {x, y}) do
:> -> ">"
:v -> "v"
nil -> "."
end
end
end
|> Enum.join("\n")
|> IO.puts()
IO.puts("")
end
end
lib/advent/y2021/d25.ex
defmodule Chunky.Math.Operations do
@moduledoc """
The Operations module provides functions and macros for making particular repeated
operations and series easier to work with. Most of these are just simplifications
around enumerations over values, with support for either Integer or Fraction values.
To use the macro operations, you need to `require Chunky.Math.Operations` first, and
then you can import specific operations, like `import Chunky.Math.Operations, only: [summation: 3]`.
"""
alias Chunky.Fraction
@doc """
Run a summation across an expression. Any `key` or variable name can be used, along
with a range and an expression:
```elixir
# simple summations of `2k + 1` over the range of 1 to 10
summation k, 1..10 do
k * 2 + 1
end
```
Summations can also be nested:
```elixir
# find y^2 + x + 1 over a 2 dimensional range
summation x, 5..50 do
summation y, 3..30 do
y * y + x + 1
end
end
```
If Fraction values are detected, they will be automatically handled as well:
```elixir
# Sum the fraction series 1/n for 1/1 to 1/100
summation den, 1..100 do
Fraction.new(1, den)
end
```
Any enumerable can be passed to summation:
```elixir
# sum the divisors of 100
summation k, Math.factors(100) do
k
end
```
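As a sanity check, the first example above evaluates to `120`, since summing
`2k + 1` for `k` from 1 to 10 gives `2 * 55 + 10`.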
"""
defmacro summation(key, range, do: expression) do
quote do
unquote(range)
|> Enum.map(fn k_a ->
var!(unquote(key)) = k_a
unquote(expression)
end)
|> Enum.reduce(
0,
fn v, acc ->
case v do
# we're summing fractions.
%Fraction{} -> Fraction.add(v, acc)
# integers...
_ -> v + acc
end
end
)
end
end
@doc """
Run a product across an expression. Any `key` or variable name can be used, along
with a range and an expression:
```elixir
# 1 * 2 * 3 * 4 * ... * 100
product k, 1..100 do
k
end
```
Products can be nested:
```elixir
# Step product of `j^3 + k^2 + 3`
product k, 2..6 do
product j, k..10 do
j * j * j + k * k + 3
end
end
```
Fractions are also supported:
```elixir
# fractional series of 1/2 * 2/3 * ... 100/101
product k, 1..100 do
Fraction.new(k, k + 1)
end
```
Any enumerable can be passed to product:
```elixir
# multiply the divisors of 100
product k, Math.factors(100) do
k
end
```
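As a sanity check, the fraction example above telescopes to `1/101`.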
"""
defmacro product(key, range, do: expression) do
quote do
unquote(range)
|> Enum.map(fn k_a ->
var!(unquote(key)) = k_a
unquote(expression)
end)
|> Enum.reduce(
1,
fn v, acc ->
case v do
# we're running a product over fractions.
%Fraction{} -> Fraction.multiply(v, acc)
# integers...
_ -> v * acc
end
end
)
end
end
end
lib/math/operations.ex
defmodule Openflow.Action.NxRegLoad do
@moduledoc """
Copies value[0:n_bits] to dst[ofs:ofs+n_bits], where a[b:c] denotes the bits
within 'a' numbered 'b' through 'c' (not including bit 'c'). Bit numbering
starts at 0 for the least-significant bit, 1 for the next most significant
bit, and so on.
'dst' is an nxm_header with nxm_hasmask=0. See the documentation for
NXAST_REG_MOVE, above, for the permitted fields and for the side effects of
loading them.
The 'ofs' and 'n_bits' fields are combined into a single 'ofs_nbits' field
to avoid enlarging the structure by another 8 bytes. To allow 'n_bits' to
take a value between 1 and 64 (inclusive) while taking up only 6 bits, it is
also stored as one less than its true value:
```
15 6 5 0
+------------------------------+------------------+
| ofs | n_bits - 1 |
+------------------------------+------------------+
```
The switch will reject actions for which ofs+n_bits is greater than the
width of 'dst', or in which any bits in 'value' with value 2^n_bits or
greater are set to 1, with error type OFPET_BAD_ACTION, code
OFPBAC_BAD_ARGUMENT.
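A usage sketch (`:reg0` is an assumed register field name supported by
`Openflow.Match`):
```
action = Openflow.Action.NxRegLoad.new(dst_field: :reg0, value: 1)
bin = Openflow.Action.NxRegLoad.to_binary(action)
```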
"""
import Bitwise
defstruct(
n_bits: 0,
offset: 0,
dst_field: nil,
value: nil
)
@experimenter 0x00002320
@nxast 7
alias __MODULE__
alias Openflow.Action.Experimenter
def new(options \\ []) do
dst_field = options[:dst_field] || raise "dst_field must be specified"
value = options[:value] || raise "value must be specified"
default_n_bits = Openflow.Match.n_bits_of(dst_field)
%NxRegLoad{
n_bits: options[:n_bits] || default_n_bits,
offset: options[:offset] || 0,
dst_field: dst_field,
value: value
}
end
def to_binary(%NxRegLoad{} = load) do
ofs_nbits = load.offset <<< 6 ||| load.n_bits - 1
dst_field_bin = Openflow.Match.codec_header(load.dst_field)
value_int =
load.value
|> Openflow.Match.encode_value(load.dst_field)
|> :binary.decode_unsigned(:big)
Experimenter.pack_exp_header(<<
@experimenter::32,
@nxast::16,
ofs_nbits::16,
dst_field_bin::4-bytes,
value_int::size(8)-unit(8)
>>)
end
def read(<<@experimenter::32, @nxast::16, body::bytes>>) do
<<ofs::10, n_bits::6, dst_field_bin::4-bytes, value_bin::bytes>> = body
dst_field = Openflow.Match.codec_header(dst_field_bin)
value = Openflow.Match.decode_value(value_bin, dst_field)
%NxRegLoad{n_bits: n_bits + 1, offset: ofs, dst_field: dst_field, value: value}
end
end
lib/openflow/actions/nx_reg_load.ex
defmodule Markdown do
@moduledoc """
Markdown to HTML conversion.
## Dirty Scheduling
This relies on a NIF wrapping the hoedown library.
By default the NIF is deemed clean for inputs of fewer than 30k characters. For
inputs over this value, the render is likely to take over 1ms and thus it
should be scheduled on a dirty scheduler.
Since it is impossible to know beforehand whether an input will take over 1ms to
process, the 30k threshold is an arbitrary value. See
[subvisual/markdown#1](https://github.com/subvisual/markdown/pulls/1).
This value can be configured by setting the following in your `config/config.exs`:
```elixir
config :markdown, dirty_scheduling_threshold: 50_000
```
"""
@on_load {:init, 0}
app = Mix.Project.config()[:app]
def init do
path = :filename.join(:code.priv_dir(unquote(app)), 'markdown')
:ok = :erlang.load_nif(path, 0)
case Application.get_env(:markdown, :dirty_scheduling_threshold) do
nil -> :ok
value -> set_nif_threshold(value)
end
:ok
end
@doc ~S"""
Converts a Markdown document to HTML:
iex> Markdown.to_html "# Hello World"
"<h1>Hello World</h1>\n"
iex> Markdown.to_html "http://elixir-lang.org/", autolink: true
"<p><a href=\"http://elixir-lang.org/\">http://elixir-lang.org/</a></p>\n"
Available output options (all default to false):
* `:autolink` - Automatically turn URLs into links.
* `:disable_indented_code` - Don't parse indented code blocks as `<code>`.
* `:escape` - Escape all HTML tags.
* `:fenced_code` - Enables fenced code blocks.
* `:hard_wrap` - Replace line breaks with `<br>` tags.
* `:highlight` - Replace `==highlight==` blocks with `<mark>` tags.
* `:math` - Parse TeX-based `$$math$$` syntax.
* `:math_explicit` - Requires `math: true`. Parse `$math$` as inline and
  `$$math$$` as blocks, instead of attempting to guess.
* `:no_intra_emphasis` - Don't parse `underscores_between_words` as `<em>` tags.
* `:quote` - Render "quotation marks" as `<q>` tags.
* `:skip_html` - Strip HTML tags.
* `:space_headers` - Require a space after `#` in the headers.
* `:strikethrough` - Parse `~~text~~` as `<del>` tags.
* `:superscript` - Parse `^text` as `<sup>` tags.
* `:tables` - Enables Markdown Extra style tables.
* `:underline` - Parse `_text_` as `<u>` tags.
* `:use_xhtml` - Use XHTML instead of HTML.
"""
@spec to_html(doc :: String.t()) :: String.t()
@spec to_html(doc :: String.t(), options :: Keyword.t()) :: String.t()
def to_html(doc, options \\ [])
def to_html(_, _) do
exit(:nif_library_not_loaded)
end
def set_nif_threshold(_) do
exit(:nif_library_not_loaded)
end
end
lib/markdown.ex
defmodule Blockchain.Block do
@moduledoc """
Represents one block within the blockchain.
This module provides functions to hash blocks, validate proof, and find
proofs (mine) for blocks.
"""
require Logger
alias Blockchain.{Transaction, Chain}
@type t() :: %Blockchain.Block{
index: integer,
transactions: [Blockchain.Transaction.t()],
tx_hash: h() | nil,
nonce: p(),
parent: h()
}
@typedoc "A block hash (SHA256)"
@type h :: binary()
@typedoc "Block nonce field"
@type p :: integer
defstruct index: 0, transactions: [], nonce: 0, parent: "", tx_hash: nil
@doc """
Compute the hash of a block
"""
@spec hash(block :: t()) :: h()
def hash(%__MODULE__{tx_hash: nil} = block), do: hash(optimize(block))
def hash(block) do
:crypto.hash_init(:sha256)
|> :crypto.hash_update(<<block.index::little-unsigned-32>>)
|> :crypto.hash_update(block.tx_hash)
|> :crypto.hash_update(<<block.nonce::little-unsigned-32>>)
|> :crypto.hash_update(block.parent)
|> :crypto.hash_final()
end
@doc """
Optimize a block by saving the transaction hash inside it
"""
@spec optimize(block :: t()) :: t()
def optimize(block) do
%__MODULE__{block | tx_hash: Transaction.hash(block.transactions)}
end
@doc """
Check if a block is valid
"""
@spec valid?(block :: t()) :: boolean
def valid?(block) do
proof = valid_proof?(block)
if not proof, do: Logger.warn("invalid proof")
optimized = block.tx_hash == nil or block == optimize(block)
if not optimized, do: Logger.warn("tx optimization was wrong")
transactions = Enum.all?(block.transactions, &Transaction.valid?(&1))
if not transactions, do: Logger.warn("invalid transactions")
proof and transactions and optimized
end
@doc """
Check if a block has a valid proof inside
"""
@spec valid_proof?(block :: t()) :: boolean
def valid_proof?(block) do
challenge(hash(block), 16)
end
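# challenge/2 walks the hash bit by bit: it succeeds once `n` leading zero
# bits have been seen and fails on the first 1 bit encountered.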
defp challenge(_hash, 0), do: true
defp challenge(<<1::size(1), _::bitstring>>, _), do: false
defp challenge(<<0::size(1), rest::bitstring>>, n), do: challenge(rest, n - 1)
@doc """
Find a valid proof for a given hash
"""
def mine(block) do
if valid_proof?(block) do
block
else
mine(%__MODULE__{block | nonce: block.nonce + 1})
end
end
defmodule Enum do
@moduledoc """
Simple wrapper module to iterate through a Chain.
It's ugly, don't look at that.
"""
alias Blockchain.Block
@enforce_keys [:head, :chain]
defstruct [:head, :chain]
@opaque t() :: %__MODULE__{head: Block.t(), chain: Chain.t()}
@spec new(head :: Block.h() | Block.t(), chain :: Chain.t()) :: Blockchain.Block.Enum.t()
def new(head, chain) when is_binary(head),
do: new(Chain.lookup(chain, head), chain)
def new(%Block{} = head, %Chain{} = chain),
do: %__MODULE__{head: head, chain: chain}
def next(%__MODULE__{chain: nil}), do: nil
def next(%__MODULE__{head: nil}), do: nil
def next(%__MODULE__{head: head, chain: chain} = enum),
do: %__MODULE__{enum | head: Chain.lookup(chain, head.parent)}
end
defimpl Enumerable, for: Block.Enum do
def count(_list), do: {:error, __MODULE__}
def slice(_list), do: {:error, __MODULE__}
def member?(_list, _value), do: {:error, __MODULE__}
def reduce(_, {:halt, acc}, _fun), do: {:halted, acc}
def reduce(list, {:suspend, acc}, fun), do: {:suspended, acc, &reduce(list, &1, fun)}
def reduce(%Blockchain.Block.Enum{head: nil}, {:cont, acc}, _fun), do: {:done, acc}
def reduce(enum, {:cont, acc}, fun),
do: reduce(Enum.next(enum), fun.(enum.head, acc), fun)
end
end
defimpl String.Chars, for: Blockchain.Block do
def to_string(%{index: index, transactions: txs, nonce: nonce, parent: parent} = block) do
parent = Base.url_encode64(parent, padding: false)
message =
if nonce == 0 do
["Block ##{index}"]
else
hash = Blockchain.Block.hash(block) |> Base.url_encode64(padding: false)
["Block ##{index} #{hash}", "Nonce: #{nonce}"]
end ++
["Parent: #{parent}"] ++
if Enum.empty?(txs),
do: ["No transaction"],
else: ["Transactions:"] ++ Enum.map(txs, &" #{&1}")
Enum.join(message, "\n ")
end
end
apps/blockchain/lib/blockchain/block.ex
defmodule OpenHours.TimeSlot do
@moduledoc """
This module contains all functions to work with time slots.
"""
import OpenHours.Common
alias OpenHours.{TimeSlot, Schedule, Interval}
@typedoc """
Struct composed by a start datetime and an end datetime.
"""
@type t :: %__MODULE__{starts_at: DateTime.t(), ends_at: DateTime.t()}
@enforce_keys [:starts_at, :ends_at]
defstruct [:starts_at, :ends_at]
@doc """
Calculates a list of time slots between two dates based on a Schedule. It follows the same rules
as `OpenHours.Schedule.in_hours?/2`.
"""
@spec between(OpenHours.Schedule.t(), DateTime.t(), DateTime.t()) :: [t()]
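# The two guard clauses below normalize both boundary datetimes into the
# schedule's time zone before computing slots.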
def between(
%Schedule{time_zone: schedule_tz} = schedule,
%DateTime{time_zone: start_tz} = starts_at,
%DateTime{} = ends_at
)
when schedule_tz != start_tz do
{:ok, shifted_starts_at} = DateTime.shift_zone(starts_at, schedule_tz, Tzdata.TimeZoneDatabase)
between(schedule, shifted_starts_at, ends_at)
end
def between(
%Schedule{time_zone: schedule_tz} = schedule,
%DateTime{} = starts_at,
%DateTime{time_zone: end_tz} = ends_at
)
when schedule_tz != end_tz do
{:ok, shifted_ends_at} = DateTime.shift_zone(ends_at, schedule_tz, Tzdata.TimeZoneDatabase)
between(schedule, starts_at, shifted_ends_at)
end
def between(%Schedule{} = schedule, %DateTime{} = starts_at, %DateTime{} = ends_at) do
starts_at
|> DateTime.to_date()
|> Date.range(DateTime.to_date(ends_at))
|> Enum.reject(&Enum.member?(schedule.holidays, &1))
|> Enum.flat_map(&time_slots_for(schedule, starts_at, ends_at, &1))
end
defp time_slots_for(
%Schedule{} = schedule,
%DateTime{} = _starts_at,
%DateTime{} = _ends_at,
%Date{} = day
) do
schedule
|> get_intervals_for(day)
|> Enum.map(fn {interval_start, interval_end} ->
%TimeSlot{
starts_at:
DateTime.from_naive!(
build_date_time(day, interval_start),
schedule.time_zone,
Tzdata.TimeZoneDatabase
),
ends_at:
DateTime.from_naive!(
build_date_time(day, interval_end),
schedule.time_zone,
Tzdata.TimeZoneDatabase
)
}
end)
end
defp build_date_time(%Date{} = day, time) do
with {:ok, date} <- NaiveDateTime.new(day, time), do: date
end
defp get_intervals_for(
%Schedule{hours: hours, shifts: shifts, breaks: breaks},
%Date{} = day
) do
case Enum.find(shifts, fn {shift_date, _} -> shift_date == day end) do
{_shift_date, intervals} ->
intervals
_ ->
day_intervals = Map.get(hours, weekday(day), [])
breaks
|> Enum.find(fn {break_date, _} -> break_date == day end)
|> case do
{_, day_breaks} -> Interval.difference(day_intervals, day_breaks)
_ -> day_intervals
end
end
end
end
lib/open_hours/time_slot.ex
defmodule Croma.Monad do
@moduledoc """
This module defines an interface for [monad](https://en.wikipedia.org/wiki/Monad).
Modules that `use` this module must provide concrete implementations of the following:
- `@type t(a)`
- `@spec pure(a) :: t(a) when a: any`
- `@spec bind(t(a), (a -> t(b))) :: t(b) when a: any, b: any`
Note that the order of parameters in `map`/`ap` is different from that of Haskell counterparts,
in order to leverage Elixir's pipe operator `|>`.
Using concrete implementations of the above interfaces, this module generates default implementations of some functions/macros.
See `Croma.Result` for the generated functions/macros.
`Croma.Monad` also provides `bind`-less syntax similar to Haskell's do-notation with `m/1` macro.
"""
# Here I don't use Erlang behaviour
# since it seems that behaviour and typespecs don't provide a way to refer to a type
# that will be defined in the module that use this module.
defmacro __using__(_) do
quote do
@spec pure(a) :: t(a) when a: any
@spec bind(t(a), (a -> t(b))) :: t(b) when a: any, b: any
@doc """
Default implementation of Functor's `fmap` operation.
Modules that implement `Croma.Monad` may override this default implementation.
Note that the order of arguments is different from the Haskell counterpart, in order to leverage Elixir's pipe operator `|>`.
"""
@spec map(t(a), (a -> b)) :: t(b) when a: any, b: any
def map(ma, f) do
bind(ma, fn(a) -> pure f.(a) end)
end
@doc """
Default implementation of Applicative's `ap` operation.
Modules that implement `Croma.Monad` may override this default implementation.
Note that the order of arguments is different from the Haskell counterpart, in order to leverage Elixir's pipe operator `|>`.
"""
@spec ap(t(a), t((a -> b))) :: t(b) when a: any, b: any
def ap(ma, mf) do
bind(mf, fn f -> map(ma, f) end)
end
@doc """
Converts the given list of monadic (to be precise, applicative) objects into a monadic object that contains a single list.
Modules that implement `Croma.Monad` may override this default implementation.
## Examples (using Croma.Result)
iex> Croma.Result.sequence([{:ok, 1}, {:ok, 2}, {:ok, 3}])
{:ok, [1, 2, 3]}
iex> Croma.Result.sequence([{:ok, 1}, {:error, :foo}, {:ok, 3}])
{:error, :foo}
"""
@spec sequence([t(a)]) :: t([a]) when a: any
def sequence([]), do: pure []
def sequence([h | t]) do
# Note that the default implementation is not tail-recursive
bind(h, fn(a) ->
bind(sequence(t), fn(as) ->
pure [a | as]
end)
end)
end
defoverridable [
map: 2,
ap: 2,
sequence: 1,
]
@doc """
A macro that provides Haskell-like do-notation.
## Examples
MonadImpl.m do
x <- mx
y <- my
pure f(x, y)
end
is expanded to
MonadImpl.bind(mx, fn x ->
MonadImpl.bind(my, fn y ->
MonadImpl.pure f(x, y)
end)
end)
"""
defmacro m(do: block) do
case block do
{:__block__, _, unwrapped} -> Croma.Monad.DoImpl.do_expr(__MODULE__, unwrapped)
_ -> Croma.Monad.DoImpl.do_expr(__MODULE__, [block])
end
end
end
end
defmodule DoImpl do
@moduledoc false
def do_expr(module, [{:<-, _, [l, r]}]) do
quote do
unquote(module).bind(unquote(r), fn(unquote(l)) -> unquote(l) end)
end
end
def do_expr(module, [{:<-, _, [l, r]} | rest]) do
quote do
unquote(module).bind(unquote(r), fn(unquote(l)) -> unquote(do_expr(module, rest)) end)
end
end
def do_expr(module, [expr]) do
case expr do
{:pure, n, args} -> {{:., n, [module, :pure]}, n, args}
_ -> expr
end
end
def do_expr(module, [expr | rest]) do
quote do
unquote(expr)
unquote(do_expr(module, rest))
end
end
end
end
lib/croma/monad.ex
defmodule Snitch.Data.Model.ProductBrand do
@moduledoc """
Product Brand API
"""
use Snitch.Data.Model
alias Ecto.Multi
alias Snitch.Data.Schema.{Image, ProductBrand}
alias Snitch.Data.Model.Image, as: ImageModel
alias Snitch.Tools.Helper.ImageUploader
@doc """
Returns all Product Brands
"""
@spec get_all() :: [ProductBrand.t()]
def get_all do
Repo.all(ProductBrand)
end
@doc """
Creates a `ProductBrand` with the supplied params.
If an `image` is also supplied in `params`, then the image
is associated with the product brand and is uploaded at the
location specified by the configuration.
#### See
`Snitch.Tools.Helper.ImageUploader`
The image to be uploaded is expected as `%Plug.Upload{}`
struct.
params = %{name: "xyz",
image: %Plug.Upload{content_type: "image/png"}
}
The `association` between `product brand` and the `image` is through
a middle table.
#### See
`Snitch.Data.Schema.ProductBrandImage`
"""
@spec create(map) ::
{:ok, ProductBrand.t()} | {:error, Ecto.Changeset.t()} | {:error, String.t()}
def create(%{"image" => _image} = params) do
ImageModel.create(ProductBrand, params, "brand_image")
end
def create(params) do
QH.create(ProductBrand, params, Repo)
end
@doc """
Updates the `ProductBrand` with the supplied params.
If an `image` is present in the supplied `params`, then the image associated
earlier is removed from both the `image` table as well as the upload location,
and the association is removed.
The new supplied image in the `params` is then associated with the product brand
and is uploaded at the storage location. Look at the section in `create/1` for
the configuration.
"""
@spec update(ProductBrand.t(), map) :: {:ok, ProductBrand.t()} | {:error, Ecto.Changeset.t()}
def update(model, %{"image" => _image} = params) do
ImageModel.update(ProductBrand, model, params, "brand_image")
end
def update(model, params) do
QH.update(ProductBrand, params, model, Repo)
end
@doc """
Returns a Product Brand.
Takes the Product Brand id as input.
"""
@spec get(integer) :: {:ok, ProductBrand.t()} | {:error, atom}
def get(id) do
QH.get(ProductBrand, id, Repo)
end
@doc """
Deletes the Product Brand.
# Note
Upon deletion any `image` associated with the product brand
is removed from both the database table as well as the upload
location.
"""
@spec delete(non_neg_integer() | ProductBrand.t()) ::
{:ok, ProductBrand.t()} | {:error, Ecto.Changeset.t()} | {:error, :not_found}
def delete(id) do
with {:ok, %ProductBrand{} = brand_struct} <- get(id),
brand <- brand_struct |> Repo.preload(:image),
changeset <- ProductBrand.delete_changeset(brand, %{}) do
delete_product_brand(brand, brand.image, changeset)
else
{:error, msg} -> {:error, msg}
end
end
defp delete_product_brand(_brand, nil, changeset) do
Repo.delete(changeset)
end
defp delete_product_brand(brand, image, changeset) do
ImageModel.delete(brand, image, changeset)
end
end
apps/snitch_core/lib/core/data/model/product_brand.ex
defmodule Nomex.Request do
@moduledoc """
Wrapper module around `HTTPoison.Base`; contains some convenience
defmacro functions to keep other modules DRY.
"""
alias Nomex.{ Request, Response }
use HTTPoison.Base
@doc """
Creates 2 functions with the following names:
```
function_name
function_name!
```
Both functions will issue a GET request for the `path` specified.
The first function will return a tuple.
The second function will return a `%Nomex.Response{}` or raise an exception.
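For instance (an illustrative module; `/status/leader` is a real Nomad
endpoint):
```
defmodule Nomex.Status do
  alias Nomex.{Request, Response}
  import Nomex.Request
  meta_get :leader, "/status/leader"
end
# defines Nomex.Status.leader/0 and Nomex.Status.leader!/0
```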
"""
defmacro meta_get(function_name, path) do
function_name! = banged function_name
quote do
@doc """
issues a GET request to `<NOMAD_HOST>/v1#{unquote(path)}`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
"""
@spec unquote(function_name)() :: Response.tuple_t
def unquote(function_name)() do
Request.request(:get, [unquote(path)])
end
@doc """
issues a GET request to `<NOMAD_HOST>/v1#{unquote(path)}`
returns a `%Nomex.Response{}` or raises exception
"""
@spec unquote(function_name!)() :: Response.t
def unquote(function_name!)() do
Request.request!(:get, [unquote(path)])
end
end
end
@doc """
Creates 2 functions with the following names:
```
function_name(param_id)
function_name!(param_id)
```
Both functions will issue a GET request for the `path` specified, but append `/param_id` at the end of the `path`.
The first function will return a tuple.
The second function will return a `Nomex.Response` or raise an exception.
"""
defmacro meta_get_id(function_name, path) do
function_name! = banged function_name
quote do
@doc """
issues a GET request to `<NOMAD_HOST>/v1#{unquote(path)}/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
"""
@spec unquote(function_name)(String.t) :: Response.tuple_t
def unquote(function_name)(param_id) do
path = Path.join unquote(path), param_id
Request.request(:get, [path])
end
@doc """
issues a GET request to `<NOMAD_HOST>/v1#{unquote(path)}/<param_id>`
returns a `%Nomex.Response{}` or raises exception
"""
@spec unquote(function_name!)(String.t) :: Response.t
def unquote(function_name!)(param_id) do
path = Path.join unquote(path), param_id
Request.request!(:get, [path])
end
end
end
@doc """
Creates 2 functions with the following names:
```
function_name(prefix)
function_name!(prefix)
```
Both functions will issue a GET request for the `path` specified, and add a querystring parameter `?prefix=<prefix>`.
The first function will return a tuple.
The second function will return a `Nomex.Response` or raise an exception.
"""
defmacro meta_get_prefix(function_name, path) do
function_name! = banged function_name
quote do
@doc """
issues a GET request to `<NOMAD_HOST>/v1#{unquote(path)}?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
"""
@spec unquote(function_name)(String.t) :: Response.tuple_t
def unquote(function_name)(prefix) do
params = [ params: %{ prefix: prefix } ]
Request.request(:get, [unquote(path), [], params])
end
@doc """
issues a GET request to `<NOMAD_HOST>/v1#{unquote(path)}?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
"""
@spec unquote(function_name!)(String.t) :: Response.t
def unquote(function_name!)(prefix) do
params = [ params: %{ prefix: prefix } ]
Request.request!(:get, [unquote(path), [], params])
end
end
end
def base do
host = URI.parse(Nomex.host())
version = "/#{Nomex.version()}"
URI.merge(host, version) |> to_string
end
def process_url(url) do
base() <> url
end
# consider removing these methods, too much abstraction?
def request(method, params) do
{ status, response } = apply(Request, method, params)
{ status, Response.parse(response) }
end
def request!(method, params) do
response = apply(Request, :"#{method}!", params)
Response.parse response
end
defp banged(function_name) do
:"#{function_name}!"
end
# Adds the Nomad ACL token to outgoing request headers when configured.
# NOTE: this must hook HTTPoison's request (not response) header processing.
def process_request_headers(headers) do
  case Nomex.token do
    nil -> headers
    _ -> [{ "X-Nomad-Token", Nomex.token } | headers]
  end
end
end
lib/nomex/request.ex
defmodule Membrane.Dashboard.Charts.Helpers do
@moduledoc """
This module has functions useful for Membrane.Dashboard.Charts.Full and Membrane.Dashboard.Charts.Update.
"""
import Membrane.Dashboard.Helpers
import Ecto.Query, only: [from: 2]
alias Membrane.Dashboard.Repo
alias Membrane.Dashboard.Charts
require Logger
@type rows_t :: [[term()]]
@type interval_t ::
{time_from :: non_neg_integer(), time_to :: non_neg_integer(),
accuracy :: non_neg_integer()}
@type series_t :: [
{{path_id :: non_neg_integer(), data :: list(integer())}, accumulator :: any()}
]
@doc """
Queries all measurements for given time range, metric and accuracy and returns them together
with mapping of its component path ids to the path strings.
"""
@spec query_measurements(non_neg_integer(), non_neg_integer(), String.t(), non_neg_integer()) ::
{:ok, rows_t(), Charts.chart_paths_mapping_t()} | :error
def query_measurements(time_from, time_to, metric, accuracy) do
with {:ok, %Postgrex.Result{rows: measurements_rows}} <-
Repo.query(measurements_query(time_from, time_to, metric, accuracy)),
component_path_rows <- Repo.all(component_paths_query(measurements_rows)) do
paths_mapping = Map.new(component_path_rows)
{:ok, measurements_rows, paths_mapping}
else
error ->
Logger.error(
"Encountered error while querying database for charts data: #{inspect(error)}"
)
:error
end
end
defp measurements_query(time_from, time_to, metric, accuracy) do
accuracy_in_seconds = to_seconds(accuracy)
"""
SELECT
floor(extract(epoch from "time")/#{accuracy_in_seconds})*#{accuracy_in_seconds} AS time,
component_path_id,
value
FROM measurements
WHERE time BETWEEN '#{parse_time(time_from)}' AND '#{parse_time(time_to)}' AND metric = '#{metric}'
ORDER BY time;
"""
end
defp component_paths_query(measurements_rows) do
ids =
measurements_rows
|> Enum.map(fn [_time, path_id | _] -> path_id end)
|> Enum.uniq()
from(cp in "component_paths", where: cp.id in ^ids, select: {cp.id, cp.path})
end
@doc """
Gets `time` as UNIX time in milliseconds and converts it to seconds.
"""
@spec to_seconds(non_neg_integer()) :: float()
def to_seconds(time),
do: time / 1000
@doc """
Calculates number of values that should appear in timeline's interval.
For explanation on the interval see `timeline_timestamps/3`.
"""
@spec timeline_interval_size(non_neg_integer(), non_neg_integer(), non_neg_integer()) ::
non_neg_integer()
def timeline_interval_size(from, to, accuracy) do
accuracy_in_seconds = to_seconds(accuracy)
[from, to] = [
apply_accuracy(from, accuracy_in_seconds),
apply_accuracy(to, accuracy_in_seconds)
]
floor((to - from) / accuracy_in_seconds) + 1
end
@doc """
Time in uPlot has to be discrete, so every event from the database will land in one specific timestamp from the returned interval.
Returns a list of timestamps between `from` and `to` where two neighboring values differ by `accuracy` milliseconds.
## Example
iex> Membrane.Dashboard.Charts.Helpers.timeline_timestamps(1619776875855, 1619776875905, 10)
[1619776875.8500001, 1619776875.8600001, 1619776875.8700001, 1619776875.88, 1619776875.89, 1619776875.9]
"""
@spec timeline_timestamps(non_neg_integer(), non_neg_integer(), non_neg_integer()) :: [float()]
def timeline_timestamps(from, to, accuracy) do
accuracy_in_seconds = to_seconds(accuracy)
size = timeline_interval_size(from, to, accuracy)
from = apply_accuracy(from, accuracy_in_seconds)
for x <- 1..size, do: from + x * accuracy_in_seconds
end
@doc """
Applies accuracy to a time represented in a number of milliseconds to match format returned from database.
"""
@spec apply_accuracy(non_neg_integer(), float()) :: float()
def apply_accuracy(time, accuracy),
do: floor(time / (1000 * accuracy)) * accuracy
end
lib/membrane_dashboard/charts/helpers.ex
defmodule Cashtrail.Entities.Entity do
@moduledoc """
This is an `Ecto.Schema` struct that represents an entity of the application.
## Definition
According to [Techopedia](https://www.techopedia.com/definition/14360/entity-computing),
an entity is any singular, identifiable, and separate object. It refers to individuals,
organizations, systems, bits of data, or even distinct system components that are
considered significant in and of themselves.
So, in this application, an entity is a division of what the data belongs to. This
can be an individual, an organization, a department, a church, a group of friends,
and whatever you want to control the finances.
So you can separate your finances from the company finances. Or have
Personal Finances and Family finances separated. Or control the finances of some
organization by departments.
Each user can create their entity to control their finances and includes other
users as a member of the entity. You can see `Cashtrail.Entities.EntityMember`
to know more about this.
## Multitenancy
Each Entity generates a tenant through new schemas in the Postgres database.
This happens to logically separate the data of entities. This can help to maintain
data integrity and security, as it makes it harder for data from one entity to
flow to another entity, like one account trying to relate a currency from another
entity, for instance.
You can manually generate or drop tenants using the `Cashtrail.Entities.Tenants`
module.
## Fields
* `:id` - The unique id of the entity.
* `:name` - The name (or description) of the entity.
* `:type` - The type of the entity, that can be:
* `:personal` - if the entity is used for personal reasons, like control
your finances, your family finances, personal project finances,
or something like that.
* `:company` - if the entity is used to control the finances of a company.
* `:other` - if the entity is used to control the finances for other reasons.
* `:owner` - The owner of the entity. The owner is usually who has created the
entity and has all permissions over an entity, including to delete it. If a
user is deleted, all their entities are deleted too. The ownership of an entity
can be transferred as well.
* `:owner_id` - The id of the owner of the entity.
* `:members` - The members of the entity. You can read more about this at
`Cashtrail.Entities.EntityMember`.
* `:inserted_at` - When the entity was inserted at the first time.
* `:updated_at` - When the entity was updated at the last time.
* `:archived_at` - When the entity was archived.
See `Cashtrail.Entities` to know how to list, get, insert, update, delete, and
transfer the ownership of an entity.
"""
use Ecto.Schema
import Ecto.Changeset
alias Cashtrail.{Entities, Users}
@type t :: %Cashtrail.Entities.Entity{
id: Ecto.UUID.t() | nil,
name: String.t() | nil,
type: atom() | nil,
owner_id: Ecto.UUID.t() | nil,
owner: Ecto.Association.NotLoaded.t() | Users.User.t() | nil,
members: Ecto.Association.NotLoaded.t() | list(Entities.EntityMember.t()),
archived_at: NaiveDateTime.t() | nil,
inserted_at: NaiveDateTime.t() | nil,
updated_at: NaiveDateTime.t() | nil,
__meta__: Ecto.Schema.Metadata.t()
}
@derive [Cashtrail.Statuses.WithStatus]
@primary_key {:id, :binary_id, autogenerate: true}
@foreign_key_type :binary_id
schema "entities" do
field :name, :string
field :type, Ecto.Enum, values: [:personal, :company, :other], default: :personal
belongs_to :owner, Users.User
has_many :members, Entities.EntityMember
field :archived_at, :naive_datetime
timestamps()
end
@doc false
@spec changeset(t() | Ecto.Changeset.t(t()), map) :: Ecto.Changeset.t(t())
def changeset(entity, attrs) do
entity
|> cast(attrs, [:name, :type])
|> validate_required([:name, :owner_id])
|> foreign_key_constraint(:owner_id)
end
@doc false
@spec transfer_changeset(t() | Ecto.Changeset.t(t()), map) :: Ecto.Changeset.t(t())
def transfer_changeset(entity, attrs) do
entity
|> cast(attrs, [:owner_id])
|> validate_required([:owner_id])
|> foreign_key_constraint(:owner_id)
end
@spec archive_changeset(t | Ecto.Changeset.t()) :: Ecto.Changeset.t()
def archive_changeset(entity) do
change(entity, %{archived_at: NaiveDateTime.utc_now() |> NaiveDateTime.truncate(:second)})
end
@spec unarchive_changeset(t | Ecto.Changeset.t()) :: Ecto.Changeset.t()
def unarchive_changeset(entity) do
change(entity, %{archived_at: nil})
end
end
apps/cashtrail/lib/cashtrail/entities/entity.ex
defmodule ExOkex.Spot.Private do
@moduledoc """
Spot account client.
[API docs](https://www.okex.com/docs/en/#spot-README)
"""
alias ExOkex.Spot.Private
@type params :: map
@type config :: ExOkex.Config.t()
@type response :: ExOkex.Api.response()
@doc """
Place a new order.
Refer to params listed in [API docs](https://www.okex.com/docs/en/#spot-orders)
## Examples
iex> ExOkex.Spot.Private.create_order(%{type: "limit", side: "buy", product_id: "ETH-USD", price: "0.50", size: "1.0"})
{:ok, %{
"client_oid" => "oktspot79",
"error_code" => "",
"error_message" => "",
"order_id" => "2510789768709120",
"result" => true
}}
"""
defdelegate create_order(params, config \\ nil), to: Private.CreateOrder
@doc """
Place multiple orders for specific trading pairs (up to 4 trading pairs, maximum 4 orders each)
https://www.okex.com/docs/en/#spot-batch
## Examples
iex> ExOkex.Spot.Private.create_bulk_orders([
%{
"client_oid" => "20180728",
"instrument_id" => "btc-usdt",
"side" => "sell",
"type" => "limit",
"size" => "0.001",
"price" => "10001",
"margin_trading" => "1"
},
%{
  "client_oid" => "20180729",
  "instrument_id" => "btc-usdt",
  "side" => "sell",
  "type" => "limit",
  "size" => "0.001",
  "price" => "10002",
  "margin_trading" => "1"
}
])
{:ok, %{
"btc_usdt" => [
%{"client_oid" => "20180728", "error_code" => 0, "error_message" => "", "order_id" => "2510832677159936", "result" => true},
%{"client_oid" => "20180729", "error_code" => 0, "error_message" => "", "order_id" => "2510832677225472", "result" => true}
]
}}
"""
defdelegate create_bulk_orders(params, config \\ nil), to: Private.CreateBulkOrders
defdelegate create_batch_orders(params, config \\ nil),
to: Private.CreateBulkOrders,
as: :create_bulk_orders
@doc """
Cancelling an unfilled order.
https://www.okex.com/docs/en/#spot-revocation
## Example
iex> ExOkex.Spot.Private.cancel_orders("btc-usdt", ["1611729012263936"])
# TODO: Add response
"""
defdelegate cancel_orders(instrument_id, order_ids \\ [], params \\ %{}, config \\ nil),
to: Private.CancelOrders
@doc """
Amend multiple open orders for a specific trading pair (up to 10 orders)
https://www.okex.com/docs/en/#spot-amend_batch
## Examples
iex> ExOkex.Spot.Private.amend_bulk_orders([
%{"order_id" => "305512815291895607","instrument_id" => "BTC-USDT","new_size" => "2"},
%{"order_id" => "305512815291895606","instrument_id" => "BTC-USDT","new_size" => "1"}
])
"""
@spec amend_bulk_orders([params], config | nil) :: response
defdelegate amend_bulk_orders(params, config \\ nil), to: Private.AmendBulkOrders
@doc """
List accounts.
## Examples
iex> ExOkex.Spot.Private.list_accounts()
{:ok, [
%{
"available" => "0.005",
"balance" => "0.005",
"currency" => "BTC",
"frozen" => "0",
"hold" => "0",
"holds" => "0",
"id" => "2006257"
}
]}
"""
defdelegate list_accounts(config \\ nil), to: Private.ListAccounts
@doc """
Get the balance, amount available/on hold of a token in spot account.
[Spot Trading Account of a Currency](https://www.okex.com/docs/en/#spot-singleness)
## Example
iex> ExOkex.Spot.Private.get_account("btc")
{:ok, %{
"available" => "0.005",
"balance" => "0.005",
"currency" => "btc",
"frozen" => "0",
"hold" => "0",
"holds" => "0",
"id" => "2006057"
}}
"""
defdelegate get_account(currency, config \\ nil), to: Private.GetAccount
end
lib/ex_okex/spot/private.ex
defmodule RandomCache do
@moduledoc """
This module implements a simple cache, using one ets table.
For using it, you need to start it:
iex> RandomCache.start_link(:my_cache, 1000)
Or add it to your supervisor tree, like: `worker(RandomCache, [:my_cache, 1000])`
## Using
iex> RandomCache.start_link(:my_cache, 1000)
{:ok, #PID<0.60.0>}
iex> RandomCache.put(:my_cache, "id", "value")
:ok
iex> RandomCache.get(:my_cache, "id", touch = false)
"value"
## Design
One ets table saves the key-value pairs. Once the cache is full, random elements get evicted.
"""
use GenServer
@table RandomCache
defstruct table: nil, size: 0
@doc """
Creates a RandomCache of the given size, as part of a supervision tree, with a registered name.
"""
def start_link(name, size) do
Agent.start_link(__MODULE__, :init, [name, size], [name: name])
end
@doc """
Stores the given `value` under `key` in `cache`. If `cache` already has `key`, the stored
`value` is replaced by the new one.
"""
def put(name, key, value), do: Agent.get(name, __MODULE__, :handle_put, [key, value])
@doc """
Updates a `value` in `cache`. If `key` is not present in `cache` then nothing is done.
The function assumes, that the element exists in a cache.
"""
def update(name, key, value, _touch \\ true) do
:ets.update_element(name, key, {2, value})
:ok
end
@doc """
Returns the `value` associated with `key` in `cache`. If `cache` does not contain `key`,
returns nil.
"""
def get(name, key, _touch \\ true) do
case :ets.lookup(name, key) do
[{_, value}] -> value
[] -> nil
end
end
@doc """
Removes the entry stored under the given `key` from cache.
"""
def delete(name, key) do
:ets.delete(name, key)
:ok
end
@doc false
def init(name, size) do
:ets.new(name, [:named_table, :public, :ordered_set, {:read_concurrency, true}])
%RandomCache{table: name, size: size}
end
@doc false
def handle_put(state = %{table: table}, key, value) do
:ets.insert(table, {key, value})
clean_oversize(state)
:ok
end
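# Evict one uniformly random entry whenever the table exceeds its configured size.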
defp clean_oversize(%{table: table, size: size}) do
table_size = :ets.info(table, :size)
if table_size > size do
del_pos = :rand.uniform(table_size)-1
[{del_key, _}] = :ets.slot(table, del_pos)
:ets.delete(table, del_key)
true
else nil end
end
end
lib/random_cache.ex
defmodule LiqenCore.CMS do
alias LiqenCore.CMS.{Entry,
ExternalHTML,
MediumPost,
Author}
alias LiqenCore.Accounts
alias LiqenCore.Repo
@moduledoc """
Content Management System of Liqen Core.
- This module handles user permissions for managing content.
"""
@typedoc """
Represents an entry. It has five fields: "id", "title", "author", "entry_type" and "content":
```
%{
id: "1",
title: "Digimon Adventures",
author: %{
id: "42",
username: "tai",
name: "<NAME>"
},
entry_type: :medium,
content: %{
uri: "http://medium.com/..."
}
}
```
Depending on the entry type (indicated by an atom in the "entry_type" field), the
shape of the "content" field may vary.
The module `LiqenCore.CMS.EntryContent` has all the type definitions of the
possible `content` values.
Currently, we allow the following entry types:
| Type | entry_type | content |
| :------------------- | :----------------- | :---------- |
| External HTML | `external_html` | `t:LiqenCore.CMS.EntryContent.external_html_content/0` |
| Medium article | `medium` | `t:LiqenCore.CMS.EntryContent.medium_content/0` |
"""
@type entry :: %{
id: number,
title: String.t,
author: LiqenCore.Accounts.user,
entry_type: String.t,
content: LiqenCore.CMS.EntryContent.t
}
@doc """
Returns one entry
"""
def get_entry(id) do
Entry
|> get(id)
|> take()
end
@doc """
Returns the list of all entries
"""
def list_entries do
Entry
|> get_all()
|> take()
end
@doc """
Creates a generic entry
"""
def create_entry(params) do
%Entry{}
|> Entry.changeset(params)
|> Repo.insert()
|> take()
end
@doc """
Creates an entry of type `external_html`
"""
def create_external_html(%Author{} = author, params) do
params
|> prepare_entry_params(:external_html)
|> Ecto.Changeset.put_change(:author_id, author.id)
|> Repo.insert()
|> put_content()
|> take()
end
@doc """
Creates an entry of type `medium_post`
"""
def create_medium_post(%Author{} = author, params) do
params
|> prepare_entry_params(:medium_post)
|> Ecto.Changeset.put_change(:author_id, author.id)
|> Repo.insert()
|> put_content()
|> take()
end
def ensure_author_exists(%Accounts.User{} = user) do
%Author{user_id: user.id}
|> Ecto.Changeset.change()
|> Ecto.Changeset.unique_constraint(:user_id)
|> Repo.insert()
|> handle_existing_author()
end
defp handle_existing_author({:ok, author}), do: author
defp handle_existing_author({:error, changeset}) do
Repo.get_by!(Author, user_id: changeset.data.user_id)
end
defp prepare_entry_params(params, type) do
{name, module} =
case type do
:external_html ->
{"external_html", ExternalHTML}
:medium_post ->
{"medium_post", MediumPost}
end
params = Map.put(params, :entry_type, name)
%Entry{}
|> Entry.changeset(params)
|> Ecto.Changeset.cast_assoc(type, with: &module.changeset/2)
end
defp take(list) when is_list(list) do
list =
list
|> Enum.map(&take(&1))
|> Enum.map(fn {:ok, obj} -> obj end)
{:ok, list}
end
defp take({:ok, %Entry{} = object}) do
entry = Map.take(object, [:id, :title, :entry_type])
{:ok, content} = take({:ok, Map.get(object, :content)})
{:ok, Map.put(entry, :content, content)}
end
defp take({:ok, %ExternalHTML{} = object}) do
{:ok, Map.take(object, [:uri])}
end
defp take({:ok, %MediumPost{} = object}) do
{:ok, Map.take(object, [:uri, :title, :publishing_date, :license, :tags])}
end
defp take(any), do: any
defp get(struct, id) do
case Repo.get(struct, id) do
%{} = object ->
{:ok, object}
_ ->
{:error, :not_found}
end
end
defp get_all(struct) do
struct
|> Repo.all()
|> Enum.map(fn obj -> {:ok, obj} end)
end
defp put_content({:ok, %Entry{} = object}) do
content =
case Map.get(object, :entry_type) do
"external_html" -> Map.get(object, :external_html)
"medium_post" -> Map.get(object, :medium_post)
_ -> nil
end
{:ok, Map.put(object, :content, content)}
end
defp put_content(any), do: any
end
lib/liqen_core/cms/cms.ex
defmodule Clex.CL10 do
@moduledoc ~S"""
This module provides an interface into the [OpenCL 1.0 API](https://www.khronos.org/registry/OpenCL/sdk/1.0/docs/man/xhtml/).
"""
# Selectively pull in functions + docs from Clex.CL for OpenCL 1.0
use Clex.VersionedApi
# Platform
add_cl_func :platform, :get_platform_ids, []
add_cl_func :platform, :get_platform_info, [platform]
# Devices
add_cl_func :devices, :get_device_ids, [platform, device_type]
add_cl_func :devices, :get_device_info, [device]
# Context
add_cl_func :context, :create_context, [devices]
add_cl_func :context, :create_context_from_type, [platform, device_type]
add_cl_func :context, :release_context, [context]
add_cl_func :context, :retain_context, [context]
add_cl_func :context, :get_context_info, [context]
# Command Queues
add_cl_func :command_queues, :create_queue, [context, device, properties]
add_cl_func :command_queues, :create_queue, [context, device]
add_cl_func :command_queues, :release_queue, [queue]
add_cl_func :command_queues, :retain_queue, [queue]
# Memory Objects
add_cl_func :memory_objects, :create_buffer, [context, flags, size]
add_cl_func :memory_objects, :create_buffer, [context, flags, size, data]
add_cl_func :memory_objects, :enqueue_read_buffer, [queue, buffer, offset, size, waitlist]
add_cl_func :memory_objects, :enqueue_write_buffer, [queue, buffer, offset, size, data, waitlist]
add_cl_func :memory_objects, :retain_mem_object, [buffer]
add_cl_func :memory_objects, :release_mem_object, [buffer]
add_cl_func :memory_objects, :create_image2d, [context, flags, image_format, width, height, row_pitch, data]
add_cl_func :memory_objects, :create_image3d, [context, flags, image_format, width, height, depth, row_pitch, slice_pitch, data]
add_cl_func :memory_objects, :get_supported_image_formats, [context, flags, image_type]
add_cl_func :memory_objects, :enqueue_read_image, [queue, image, origin, region, row_pitch, slice_pitch, waitlist]
add_cl_func :memory_objects, :enqueue_write_image, [queue, image, origin, region, row_pitch, slice_pitch, data, waitlist]
add_cl_func :memory_objects, :enqueue_copy_image, [queue, src_image, dest_image, src_origin, dest_origin, region, waitlist]
add_cl_func :memory_objects, :enqueue_copy_image_to_buffer, [queue, src_image, dest_buffer, src_origin, region, dest_offset, waitlist]
add_cl_func :memory_objects, :enqueue_copy_buffer, [queue, src_buffer, dest_buffer, src_offset, dest_offset, cb, waitlist]
add_cl_func :memory_objects, :enqueue_copy_buffer_to_image, [queue, src_buffer, dest_image, src_offset, dest_origin, region, waitlist]
add_cl_func :memory_objects, :enqueue_read_buffer, [queue, buffer, offset, size]
add_cl_func :memory_objects, :enqueue_write_buffer, [queue, buffer, offset, size, data]
add_cl_func :memory_objects, :enqueue_read_image, [queue, image, origin, region, row_pitch, slice_pitch]
add_cl_func :memory_objects, :enqueue_write_image, [queue, image, origin, region, row_pitch, slice_pitch, data]
add_cl_func :memory_objects, :enqueue_copy_image, [queue, src_image, dest_image, src_origin, dest_origin, region]
add_cl_func :memory_objects, :enqueue_copy_image_to_buffer, [queue, src_image, dest_buffer, src_origin, region, dest_offset]
add_cl_func :memory_objects, :enqueue_copy_buffer, [queue, src_buffer, dest_buffer, src_offset, dest_offset, cb]
add_cl_func :memory_objects, :enqueue_copy_buffer_to_image, [queue, src_buffer, dest_image, src_offset, dest_origin, region]
add_cl_func :memory_objects, :get_mem_object_info, [buffer]
add_cl_func :memory_objects, :get_image_info, [image]
# Sampler Objects
add_cl_func :sampler_objects, :create_sampler, [context, normalized, addressing_mode, filter_mode]
add_cl_func :sampler_objects, :retain_sampler, [sampler]
add_cl_func :sampler_objects, :release_sampler, [sampler]
add_cl_func :sampler_objects, :get_sampler_info, [sampler]
# Program Objects
add_cl_func :program_objects, :create_program_with_source, [context, source]
add_cl_func :program_objects, :create_program_with_binary, [context, device_binaries]
add_cl_func :program_objects, :retain_program, [program]
add_cl_func :program_objects, :release_program, [program]
add_cl_func :program_objects, :unload_compiler, []
add_cl_func :program_objects, :build_program, [program, devices, options]
add_cl_func :program_objects, :build_program, [program, devices]
add_cl_func :program_objects, :get_program_info, [program]
add_cl_func :program_objects, :get_program_build_info, [program, device]
# Kernel Objects
add_cl_func :kernel_objects, :create_kernel, [program, name]
add_cl_func :kernel_objects, :create_kernels_in_program, [program]
add_cl_func :kernel_objects, :retain_kernel, [kernel]
add_cl_func :kernel_objects, :release_kernel, [kernel]
add_cl_func :kernel_objects, :set_kernel_arg, [kernel, index, arg]
add_cl_func :kernel_objects, :get_kernel_info, [kernel]
add_cl_func :kernel_objects, :get_kernel_workgroup_info, [kernel, device]
# Executing Kernels
add_cl_func :exec_kernels, :enqueue_nd_range_kernel, [queue, kernel, global_work_size, local_work_size, waitlist]
add_cl_func :exec_kernels, :enqueue_task, [queue, kernel, waitlist]
add_cl_func :exec_kernels, :enqueue_nd_range_kernel, [queue, kernel, global_work_size, local_work_size]
add_cl_func :exec_kernels, :enqueue_task, [queue, kernel]
# clEnqueueNativeKernel
# Event Objects
add_cl_func :event_objects, :wait_for_events, [waitlist]
add_cl_func :event_objects, :get_event_info, [event]
add_cl_func :event_objects, :retain_event, [event]
add_cl_func :event_objects, :release_event, [event]
# Synchronization
add_cl_func :synchronization, :enqueue_marker, [queue]
add_cl_func :synchronization, :enqueue_wait_for_events, [queue, waitlist]
add_cl_func :synchronization, :enqueue_barrier, [queue]
# Profiling Operations on Memory Objects and Kernels
# clGetEventProfilingInfo
# Flush and Finish
add_cl_func :flush_and_finish, :flush, [queue]
add_cl_func :flush_and_finish, :finish, [queue]
end
# source: lib/clex/cl10.ex
defmodule AWS.CloudWatchLogs do
@moduledoc """
You can use Amazon CloudWatch Logs to monitor, store, and access your log
files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can
then retrieve the associated log data from CloudWatch Logs using the
CloudWatch console, CloudWatch Logs commands in the AWS CLI, CloudWatch
Logs API, or CloudWatch Logs SDK.
You can use CloudWatch Logs to:

  * **Monitor logs from EC2 instances in real-time**: You can use CloudWatch
    Logs to monitor applications and systems using log data. For example,
    CloudWatch Logs can track the number of errors that occur in your
    application logs and send you a notification whenever the rate of errors
    exceeds a threshold that you specify. CloudWatch Logs uses your log data
    for monitoring, so no code changes are required. For example, you can
    monitor application logs for specific literal terms (such as
    "NullReferenceException") or count the number of occurrences of a
    literal term at a particular position in log data (such as "404" status
    codes in an Apache access log). When the term you are searching for is
    found, CloudWatch Logs reports the data to a CloudWatch metric that you
    specify.

  * **Monitor AWS CloudTrail logged events**: You can create alarms in
    CloudWatch and receive notifications of particular API activity as
    captured by CloudTrail and use the notification to perform
    troubleshooting.

  * **Archive log data**: You can use CloudWatch Logs to store your log
    data in highly durable storage. You can change the log retention setting
    so that any log events older than this setting are automatically
    deleted. The CloudWatch Logs agent makes it easy to quickly send both
    rotated and non-rotated log data off of a host and into the log service.
    You can then access the raw log data when you need it.
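
## Example

A minimal sketch, assuming an `%AWS.Client{}` has been configured elsewhere
(client construction varies across aws-elixir versions):

    {:ok, _result, _response} =
      AWS.CloudWatchLogs.create_log_group(client, %{"logGroupName" => "my-app"})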
"""
@doc """
Associates the specified AWS Key Management Service (AWS KMS) customer
master key (CMK) with the specified log group.
Associating an AWS KMS CMK with a log group overrides any existing
associations between the log group and a CMK. After a CMK is associated
with a log group, all newly ingested data for the log group is encrypted
using the CMK. This association is stored as long as the data encrypted
with the CMK is still within Amazon CloudWatch Logs. This enables Amazon
CloudWatch Logs to decrypt this data whenever it is requested.

**Important:** CloudWatch Logs supports only symmetric CMKs. Do not
associate an asymmetric CMK with your log group. For more information, see
[Using Symmetric and Asymmetric
Keys](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html).

Note that it can take up to 5 minutes for this operation to take effect.
If you attempt to associate a CMK with a log group but the CMK does not
exist or the CMK is disabled, you will receive an
`InvalidParameterException` error.
"""
def associate_kms_key(client, input, options \\ []) do
request(client, "AssociateKmsKey", input, options)
end
@doc """
Cancels the specified export task.
The task must be in the `PENDING` or `RUNNING` state.
"""
def cancel_export_task(client, input, options \\ []) do
request(client, "CancelExportTask", input, options)
end
@doc """
Creates an export task, which allows you to efficiently export data from a
log group to an Amazon S3 bucket.
This is an asynchronous call. If all the required information is provided,
this operation initiates an export task and responds with the ID of the
task. After the task has started, you can use
[DescribeExportTasks](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeExportTasks.html)
to get the status of the export task. Each account can only have one active
(`RUNNING` or `PENDING`) export task at a time. To cancel an export task,
use
[CancelExportTask](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CancelExportTask.html).
You can export logs from multiple log groups or multiple time ranges to the
same S3 bucket. To separate out log data for each export task, you can
specify a prefix to be used as the Amazon S3 key prefix for all exported
objects.
Exporting to S3 buckets that are encrypted with AES-256 is supported.
Exporting to S3 buckets encrypted with SSE-KMS is not supported.
"""
def create_export_task(client, input, options \\ []) do
request(client, "CreateExportTask", input, options)
end
@doc """
Creates a log group with the specified name.
You can create up to 20,000 log groups per account.
You must use the following guidelines when naming a log group:

  * Log group names must be unique within a region for an AWS account.

  * Log group names can be between 1 and 512 characters long.

  * Log group names consist of the following characters: a-z, A-Z, 0-9,
    '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and
    '#' (number sign).

If you associate an AWS Key Management Service (AWS KMS)
customer master key (CMK) with the log group, ingested data is encrypted
using the CMK. This association is stored as long as the data encrypted
with the CMK is still within Amazon CloudWatch Logs. This enables Amazon
CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a CMK with the log group but the CMK does not
exist or the CMK is disabled, you will receive an
`InvalidParameterException` error.

**Important:** CloudWatch Logs supports only symmetric CMKs. Do not
associate an asymmetric CMK with your log group. For more information, see
[Using Symmetric and Asymmetric
Keys](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html).
"""
def create_log_group(client, input, options \\ []) do
request(client, "CreateLogGroup", input, options)
end
@doc """
Creates a log stream for the specified log group.
There is no limit on the number of log streams that you can create for a
log group. There is a limit of 50 TPS on `CreateLogStream` operations,
after which transactions are throttled.
You must use the following guidelines when naming a log stream:

  * Log stream names must be unique within the log group.

  * Log stream names can be between 1 and 512 characters long.

  * The ':' (colon) and '*' (asterisk) characters are not allowed.
"""
def create_log_stream(client, input, options \\ []) do
request(client, "CreateLogStream", input, options)
end
@doc """
Deletes the specified destination, and eventually disables all the
subscription filters that publish to it. This operation does not delete the
physical resource encapsulated by the destination.
"""
def delete_destination(client, input, options \\ []) do
request(client, "DeleteDestination", input, options)
end
@doc """
Deletes the specified log group and permanently deletes all the archived
log events associated with the log group.
"""
def delete_log_group(client, input, options \\ []) do
request(client, "DeleteLogGroup", input, options)
end
@doc """
Deletes the specified log stream and permanently deletes all the archived
log events associated with the log stream.
"""
def delete_log_stream(client, input, options \\ []) do
request(client, "DeleteLogStream", input, options)
end
@doc """
Deletes the specified metric filter.
"""
def delete_metric_filter(client, input, options \\ []) do
request(client, "DeleteMetricFilter", input, options)
end
@doc """
"""
def delete_query_definition(client, input, options \\ []) do
request(client, "DeleteQueryDefinition", input, options)
end
@doc """
Deletes a resource policy from this account. This revokes the access of the
identities in that policy to put log events to this account.
"""
def delete_resource_policy(client, input, options \\ []) do
request(client, "DeleteResourcePolicy", input, options)
end
@doc """
Deletes the specified retention policy.
Log events do not expire if they belong to log groups without a retention
policy.
"""
def delete_retention_policy(client, input, options \\ []) do
request(client, "DeleteRetentionPolicy", input, options)
end
@doc """
Deletes the specified subscription filter.
"""
def delete_subscription_filter(client, input, options \\ []) do
request(client, "DeleteSubscriptionFilter", input, options)
end
@doc """
Lists all your destinations. The results are ASCII-sorted by destination
name.
"""
def describe_destinations(client, input, options \\ []) do
request(client, "DescribeDestinations", input, options)
end
@doc """
Lists the specified export tasks. You can list all your export tasks or
filter the results based on task ID or task status.
"""
def describe_export_tasks(client, input, options \\ []) do
request(client, "DescribeExportTasks", input, options)
end
@doc """
Lists the specified log groups. You can list all your log groups or filter
the results by prefix. The results are ASCII-sorted by log group name.
"""
def describe_log_groups(client, input, options \\ []) do
request(client, "DescribeLogGroups", input, options)
end
@doc """
Lists the log streams for the specified log group. You can list all the log
streams or filter the results by prefix. You can also control how the
results are ordered.
This operation has a limit of five transactions per second, after which
transactions are throttled.
"""
def describe_log_streams(client, input, options \\ []) do
request(client, "DescribeLogStreams", input, options)
end
@doc """
Lists the specified metric filters. You can list all the metric filters or
filter the results by log name, prefix, metric name, or metric namespace.
The results are ASCII-sorted by filter name.
"""
def describe_metric_filters(client, input, options \\ []) do
request(client, "DescribeMetricFilters", input, options)
end
@doc """
Returns a list of CloudWatch Logs Insights queries that are scheduled,
executing, or have been executed recently in this account. You can request
all queries, or limit it to queries of a specific log group or queries with
a certain status.
"""
def describe_queries(client, input, options \\ []) do
request(client, "DescribeQueries", input, options)
end
@doc """
"""
def describe_query_definitions(client, input, options \\ []) do
request(client, "DescribeQueryDefinitions", input, options)
end
@doc """
Lists the resource policies in this account.
"""
def describe_resource_policies(client, input, options \\ []) do
request(client, "DescribeResourcePolicies", input, options)
end
@doc """
Lists the subscription filters for the specified log group. You can list
all the subscription filters or filter the results by prefix. The results
are ASCII-sorted by filter name.
"""
def describe_subscription_filters(client, input, options \\ []) do
request(client, "DescribeSubscriptionFilters", input, options)
end
@doc """
Disassociates the associated AWS Key Management Service (AWS KMS) customer
master key (CMK) from the specified log group.
After the AWS KMS CMK is disassociated from the log group, AWS CloudWatch
Logs stops encrypting newly ingested data for the log group. All previously
ingested data remains encrypted, and AWS CloudWatch Logs requires
permissions for the CMK whenever the encrypted data is requested.
Note that it can take up to 5 minutes for this operation to take effect.
"""
def disassociate_kms_key(client, input, options \\ []) do
request(client, "DisassociateKmsKey", input, options)
end
@doc """
Lists log events from the specified log group. You can list all the log
events or filter the results using a filter pattern, a time range, and the
name of the log stream.
By default, this operation returns as many log events as can fit in 1 MB
(up to 10,000 log events), or all the events found within the time range
that you specify. If the results include a token, then there are more log
events available, and you can get additional results by specifying the
token in a subsequent call.
"""
def filter_log_events(client, input, options \\ []) do
request(client, "FilterLogEvents", input, options)
end
@doc """
Lists log events from the specified log stream. You can list all the log
events or filter using a time range.
By default, this operation returns as many log events as can fit in a
response size of 1MB (up to 10,000 log events). You can get additional log
events by specifying one of the tokens in a subsequent call.
"""
def get_log_events(client, input, options \\ []) do
request(client, "GetLogEvents", input, options)
end
@doc """
Returns a list of the fields that are included in log events in the
specified log group, along with the percentage of log events that contain
each field. The search is limited to a time period that you specify.
In the results, fields that start with @ are fields generated by CloudWatch
Logs. For example, `@timestamp` is the timestamp of each log event. For
more information about the fields that are generated by CloudWatch logs,
see [Supported Logs and Discovered
Fields](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html).
The response results are sorted by the frequency percentage, starting with
the highest percentage.
"""
def get_log_group_fields(client, input, options \\ []) do
request(client, "GetLogGroupFields", input, options)
end
@doc """
Retrieves all the fields and values of a single log event. All fields are
retrieved, even if the original query that produced the `logRecordPointer`
retrieved only a subset of fields. Fields are returned as field name/field
value pairs.
Additionally, the entire unparsed log event is returned within `@message`.
"""
def get_log_record(client, input, options \\ []) do
request(client, "GetLogRecord", input, options)
end
@doc """
Returns the results from the specified query.
Only the fields requested in the query are returned, along with a `@ptr`
field which is the identifier for the log record. You can use the value of
`@ptr` in a
[GetLogRecord](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogRecord.html)
operation to get the full log record.
`GetQueryResults` does not start a query execution. To run a query, use
[StartQuery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartQuery.html).
If the value of the `Status` field in the output is `Running`, this
operation returns only partial results. If you see a value of `Scheduled`
or `Running` for the status, you can retry the operation later to see the
final results.
"""
def get_query_results(client, input, options \\ []) do
request(client, "GetQueryResults", input, options)
end
@doc """
Lists the tags for the specified log group.
"""
def list_tags_log_group(client, input, options \\ []) do
request(client, "ListTagsLogGroup", input, options)
end
@doc """
Creates or updates a destination. This operation is used only to create
destinations for cross-account subscriptions.
A destination encapsulates a physical resource (such as an Amazon Kinesis
stream) and enables you to subscribe to a real-time stream of log events
for a different account, ingested using
[PutLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html).
Through an access policy, a destination controls what is written to it. By
default, `PutDestination` does not set any access policy with the
destination, which means a cross-account user cannot call
[PutSubscriptionFilter](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutSubscriptionFilter.html)
against this destination. To enable this, the destination owner must call
[PutDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDestinationPolicy.html)
after `PutDestination`.
"""
def put_destination(client, input, options \\ []) do
request(client, "PutDestination", input, options)
end
@doc """
Creates or updates an access policy associated with an existing
destination. An access policy is an [IAM policy
document](https://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html)
that is used to authorize claims to register a subscription filter against
a given destination.
"""
def put_destination_policy(client, input, options \\ []) do
request(client, "PutDestinationPolicy", input, options)
end
@doc """
Uploads a batch of log events to the specified log stream.
You must include the sequence token obtained from the response of the
previous call. An upload in a newly created log stream does not require a
sequence token. You can also get the sequence token in the
`expectedSequenceToken` field from `InvalidSequenceTokenException`. If you
call `PutLogEvents` twice within a narrow time period using the same value
for `sequenceToken`, both calls may be successful, or one may be rejected.
The batch of events must satisfy the following constraints:

  * The maximum batch size is 1,048,576 bytes, and this size is calculated
    as the sum of all event messages in UTF-8, plus 26 bytes for each log
    event.

  * None of the log events in the batch can be more than 2 hours in the
    future.

  * None of the log events in the batch can be older than 14 days or older
    than the retention period of the log group.

  * The log events in the batch must be in chronological order by their
    timestamp. The timestamp is the time the event occurred, expressed as
    the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In AWS Tools
    for PowerShell and the AWS SDK for .NET, the timestamp is specified in
    .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)

  * A batch of log events in a single request cannot span more than 24
    hours. Otherwise, the operation fails.

  * The maximum number of log events in a batch is 10,000.

  * There is a quota of 5 requests per second per log stream. Additional
    requests are throttled. This quota can't be changed.

If a call to `PutLogEvents` returns "UnrecognizedClientException", the most
likely cause is an invalid AWS access key ID or secret key.
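
Example input (a sketch; `sequenceToken` is omitted because a brand-new
log stream does not require one):

    %{
      "logGroupName" => "my-app",
      "logStreamName" => "web-1",
      "logEvents" => [
        %{"timestamp" => System.os_time(:millisecond), "message" => "hello"}
      ]
    }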
"""
def put_log_events(client, input, options \\ []) do
request(client, "PutLogEvents", input, options)
end
@doc """
Creates or updates a metric filter and associates it with the specified log
group. Metric filters allow you to configure rules to extract metric data
from log events ingested through
[PutLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html).
The maximum number of metric filters that can be associated with a log
group is 100.
"""
def put_metric_filter(client, input, options \\ []) do
request(client, "PutMetricFilter", input, options)
end
@doc """
"""
def put_query_definition(client, input, options \\ []) do
request(client, "PutQueryDefinition", input, options)
end
@doc """
Creates or updates a resource policy allowing other AWS services to put log
events to this account, such as Amazon Route 53. An account can have up to
10 resource policies per region.
"""
def put_resource_policy(client, input, options \\ []) do
request(client, "PutResourcePolicy", input, options)
end
@doc """
Sets the retention of the specified log group. A retention policy allows
you to configure the number of days for which to retain log events in the
specified log group.
"""
def put_retention_policy(client, input, options \\ []) do
request(client, "PutRetentionPolicy", input, options)
end
@doc """
Creates or updates a subscription filter and associates it with the
specified log group. Subscription filters allow you to subscribe to a
real-time stream of log events ingested through
[PutLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html)
and have them delivered to a specific destination. Currently, the supported
destinations are:

  * An Amazon Kinesis stream belonging to the same account as the
    subscription filter, for same-account delivery.

  * A logical destination that belongs to a different account, for
    cross-account delivery.

  * An Amazon Kinesis Firehose delivery stream that belongs to the same
    account as the subscription filter, for same-account delivery.

  * An AWS Lambda function that belongs to the same account as the
    subscription filter, for same-account delivery.

There can only be one subscription filter associated with a log
group. If you are updating an existing filter, you must specify the correct
name in `filterName`. Otherwise, the call fails because you cannot
associate a second filter with a log group.
"""
def put_subscription_filter(client, input, options \\ []) do
request(client, "PutSubscriptionFilter", input, options)
end
@doc """
Schedules a query of a log group using CloudWatch Logs Insights. You
specify the log group and time range to query, and the query string to use.
For more information, see [CloudWatch Logs Insights Query
Syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html).
Queries time out after 15 minutes of execution. If your queries are timing
out, reduce the time range being searched, or partition your query into a
number of queries.
"""
def start_query(client, input, options \\ []) do
request(client, "StartQuery", input, options)
end
@doc """
Stops a CloudWatch Logs Insights query that is in progress. If the query
has already ended, the operation returns an error indicating that the
specified query is not running.
"""
def stop_query(client, input, options \\ []) do
request(client, "StopQuery", input, options)
end
@doc """
Adds or updates the specified tags for the specified log group.
To list the tags for a log group, use
[ListTagsLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_ListTagsLogGroup.html).
To remove tags, use
[UntagLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_UntagLogGroup.html).
For more information about tags, see [Tag Log Groups in Amazon CloudWatch
Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#log-group-tagging)
in the *Amazon CloudWatch Logs User Guide*.
"""
def tag_log_group(client, input, options \\ []) do
request(client, "TagLogGroup", input, options)
end
@doc """
Tests the filter pattern of a metric filter against a sample of log event
messages. You can use this operation to validate the correctness of a
metric filter pattern.
"""
def test_metric_filter(client, input, options \\ []) do
request(client, "TestMetricFilter", input, options)
end
@doc """
Removes the specified tags from the specified log group.
To list the tags for a log group, use
[ListTagsLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_ListTagsLogGroup.html).
To add tags, use
[TagLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_TagLogGroup.html).
"""
def untag_log_group(client, input, options \\ []) do
request(client, "UntagLogGroup", input, options)
end
@spec request(AWS.Client.t(), binary(), map(), list()) ::
{:ok, Poison.Parser.t() | nil, Poison.Response.t()}
| {:error, Poison.Parser.t()}
| {:error, HTTPoison.Error.t()}
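# Every public function funnels through this single JSON-style request:
# the operation name travels in the `X-Amz-Target` header, the input is
# the JSON-encoded body, and the request is signed with AWS Signature
# Version 4 before being POSTed.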
defp request(client, action, input, options) do
client = %{client | service: "logs"}
host = build_host("logs", client)
url = build_url(host, client)
headers = [
{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "Logs_20140328.#{action}"}
]
payload = Poison.Encoder.encode(input, %{})
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
case HTTPoison.post(url, payload, headers, options) do
{:ok, %HTTPoison.Response{status_code: 200, body: ""} = response} ->
{:ok, nil, response}
{:ok, %HTTPoison.Response{status_code: 200, body: body} = response} ->
{:ok, Poison.Parser.parse!(body, %{}), response}
{:ok, %HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body, %{})
{:error, error}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
end
# source: lib/aws/cloud_watch_logs.ex
defmodule STL.Parser.Nimble do
@moduledoc """
An STL parser written using https://hexdocs.pm/nimble_parsec/NimbleParsec.html
Implements STL.Parser behaviour. Also includes triangle count and STL bounding
box analysis steps during parser output formatting.
Developer's note: "I think my post processing steps could potentially be done during parsing by leveraging
all of NimbleParsec's features, but I don't understand NimbleParsec well enough yet to try."
"""
import NimbleParsec
alias STL
alias STL.{Facet, Geo}
@behaviour STL.Parser
ranges = [?a..?z, ?A..?Z]
float_ranges = [?0..?9, ?., ?e, ?E, ?-, ?+]
defparsecp(:ws, ignore(repeat(ascii_char([?\t, 32, ?\n, ?\r]))), inline: true)
point =
empty()
|> concat(parsec(:ws))
|> ascii_string(float_ranges, min: 1)
|> concat(parsec(:ws))
|> ascii_string(float_ranges, min: 1)
|> concat(parsec(:ws))
|> ascii_string(float_ranges, min: 1)
defparsecp(:point, point, inline: true)
facet =
ignore(string("facet"))
|> concat(parsec(:ws))
|> ignore(string("normal"))
|> concat(parsec(:ws))
|> tag(parsec(:point), :normal)
|> concat(parsec(:ws))
|> ignore(string("outer"))
|> concat(parsec(:ws))
|> ignore(string("loop"))
|> concat(parsec(:ws))
|> ignore(string("vertex"))
|> tag(
empty()
|> wrap(parsec(:point))
|> concat(parsec(:ws))
|> ignore(string("vertex"))
|> wrap(parsec(:point))
|> concat(parsec(:ws))
|> ignore(string("vertex"))
|> wrap(parsec(:point)),
:vertexes
)
|> concat(parsec(:ws))
|> ignore(string("endloop"))
|> concat(parsec(:ws))
|> ignore(string("endfacet"))
|> concat(parsec(:ws))
defparsecp(:facet, facet, inline: true)
stl =
parsec(:ws)
|> ignore(string("solid "))
|> concat(parsec(:ws))
|> tag(ascii_string(ranges, min: 1), :name)
|> concat(parsec(:ws))
|> times(parsec(:facet), min: 1)
|> concat(parsec(:ws))
|> ignore(string("endsolid"))
|> concat(parsec(:ws))
|> ignore(optional(ascii_string(ranges, min: 1)))
|> concat(parsec(:ws))
# `defparsecp(:nimble_parse_stl, ...)` compiles to the equivalent of:
# defp nimble_parse_stl(...) do
# ...
# end
defparsecp(:nimble_parse_stl, stl, inline: true)
@doc """
Reads a file using `File.read!()` then calls `Parser.Nimble.parse!()`
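
Example (a sketch; the path is hypothetical):

    %STL{triangle_count: count} = STL.Parser.Nimble.parse_file!("priv/models/cube.stl")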
"""
def parse_file!(file) do
file
|> File.read!()
|> parse!()
end
@doc """
Uses NimbleParsec to parse a complete STL binary and formats result into a %STL{}
struct with triangle count and bounding box analysis. Does not calculate surface area.
"""
def parse!(binary) do
# nimble_parse_stl/1 is a private function generated from `defparsecp(:nimble_parse_stl, stl, inline: true)`
# For more information, go to https://hexdocs.pm/nimble_parsec/NimbleParsec.html
binary
|> nimble_parse_stl()
|> case do
{:ok, parsed, _, _, _, _} ->
build_struct(parsed)
{:error, reason, _, _, _, _} ->
raise ArgumentError, reason
end
end
defp build_struct([{:name, [name]} | parsed]) do
build_facets(%STL{name: name}, parsed)
end
defp build_struct(parsed) do
build_facets(%STL{}, parsed)
end
defp build_facets(stl, parsed, tris \\ 0, extremes \\ nil)
defp build_facets(%STL{} = stl, [], tris, extremes),
do: %{stl | triangle_count: tris, bounding_box: box_from_extremes(extremes)}
defp build_facets(%STL{} = stl, [{:normal, point} | parsed], tris, extremes) do
add_vertexes_to_facet(stl, %Facet{normal: parse_point(point)}, parsed, tris + 1, extremes)
end
defp add_vertexes_to_facet(
%STL{facets: facets} = stl,
%Facet{} = facet,
[
{:vertexes, vertexes} | parsed
],
tris,
extremes
) do
parsed_vertexes = parse_vertex_points(vertexes)
new_facet = %Facet{facet | vertexes: parsed_vertexes}
surface_area = Geo.facet_area(new_facet)
build_facets(
%STL{stl | facets: [%Facet{new_facet | surface_area: surface_area} | facets]},
parsed,
tris,
update_extremes(extremes, parsed_vertexes)
)
end
defp parse_vertex_points([vertex_1, vertex_2, vertex_3]) do
{parse_point(vertex_1), parse_point(vertex_2), parse_point(vertex_3)}
end
defp parse_point([x, y, z]) do
{parse_float(x), parse_float(y), parse_float(z)}
end
defp parse_float(float) do
case Float.parse(float) do
{float, _} ->
float
_ ->
raise ArgumentError, "Malformed float from STL #{inspect(float)}"
end
end
defp update_extremes(extremes, {a, b, c}) do
extremes
|> do_update_extremes(a)
|> do_update_extremes(b)
|> do_update_extremes(c)
end
defp do_update_extremes(nil, {x, y, z}), do: {x, x, y, y, z, z}
# x1, y1, and z1 are upper extremes
# x2, y2, and z2 are lower extremes
defp do_update_extremes({x1, x2, y1, y2, z1, z2}, {x, y, z}) do
x1 = if(x > x1, do: x, else: x1)
x2 = if(x < x2, do: x, else: x2)
y1 = if(y > y1, do: y, else: y1)
y2 = if(y < y2, do: y, else: y2)
z1 = if(z > z1, do: z, else: z1)
z2 = if(z < z2, do: z, else: z2)
{x1, x2, y1, y2, z1, z2}
end
defp box_from_extremes({x1, x2, y1, y2, z1, z2}) do
for x <- [x1, x2],
y <- [y1, y2],
z <- [z1, z2],
do: {x, y, z}
end
end
# source: lib/stl/parser/nimble.ex
defmodule Combine.Helpers do
@moduledoc "Helpers for building custom parsers."
defmacro __using__(_) do
quote do
require Combine.Helpers
import Combine.Helpers
@type parser :: Combine.parser
@type previous_parser :: Combine.previous_parser
end
end
@doc ~S"""
Macro helper for building a custom parser.
A custom parser validates the next input against some rules. If the validation
succeeds, the parser should:
- add one term to the result
- update the position
- remove the parsed part from the input
Otherwise, the parser should return a corresponding error message.
For example, let's take a look at the implementation of `Combine.Parsers.Text.string/2`,
which matches a required string and outputs it:
```
defparser string(%ParserState{status: :ok, line: line, column: col, input: input, results: results} = state, expected)
when is_binary(expected)
do
byte_size = :erlang.size(expected)
case input do
<<^expected::binary-size(byte_size), rest::binary>> ->
# string has been matched -> add the term, and update the position
new_col = col + byte_size
%{state | :column => new_col, :input => rest, :results => [expected|results]}
_ ->
# no match -> report an error
%{state | :status => :error, :error => "Expected `#{expected}`, but was not found at line #{line}, column #{col}."}
end
end
```
The macro above will generate a function which takes two arguments. The first
argument (parser state) can be omitted (i.e. you can use the macro as
`string(expected_string)`). In this case, you're just creating a basic parser
specification.
However, you can also chain parsers by providing the first argument:
```
parser1()
|> string(expected_string)
```
In this example, the state produced by `parser1` is used when invoking the
`string` parser. In other words, `string` parser parses the remaining output.
On success, the final result will contain terms emitted by both parsers.
Note: if your parser doesn't output exactly one term it might not work properly
with other parsers which rely on this property, especially those from
`Combine.Parsers.Base`. As a rule, try to always output exactly one term. If you
need to produce more terms, you can group them in a list, a tuple, or a map. If
you don't want to produce anything, you can produce the atom `:__ignore`, which
will be later removed from the output.
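
For instance, here is a minimal custom parser sketch (a hypothetical
`any_char` parser that consumes one byte and emits it as a one-byte binary):

```
defparser any_char(%ParserState{status: :ok, column: col, input: input, results: results} = state) do
  case input do
    <<c, rest::binary>> ->
      # consume one byte, advance the position, emit the byte as a term
      %{state | column: col + 1, input: rest, results: [<<c>> | results]}
    <<>> ->
      # nothing left to consume -> report an error
      %{state | status: :error, error: "Expected any character, but hit end of input at column #{col}."}
  end
end
```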
"""
defmacro defparser(call, do: body) do
mod = Map.get(__CALLER__, :module)
call = Macro.postwalk(call, fn {x, y, nil} -> {x, y, mod}; expr -> expr end)
body = Macro.postwalk(body, fn {x, y, nil} -> {x, y, mod}; expr -> expr end)
{name, args} = case call do
{:when, _, [{name, _, args}|_]} -> {name, args}
{name, _, args} -> {name, args}
end
impl_name = :"#{Atom.to_string(name)}_impl"
call = case call do
{:when, when_env, [{_name, name_env, args}|rest]} ->
{:when, when_env, [{impl_name, name_env, args}|rest]}
{_name, name_env, args} ->
{impl_name, name_env, args}
end
other_args = case args do
[_] -> []
[_|rest] -> rest
_ -> raise(ArgumentError, "Invalid defparser arguments: (#{Macro.to_string args})")
end
quote do
def unquote(name)(parser \\ nil, unquote_splicing(other_args))
when parser == nil or is_function(parser, 1)
do
if parser == nil do
fn state -> unquote(impl_name)(state, unquote_splicing(other_args)) end
else
fn
%Combine.ParserState{status: :ok} = state ->
unquote(impl_name)(parser.(state), unquote_splicing(other_args))
%Combine.ParserState{} = state ->
state
end
end
end
defp unquote(impl_name)(%Combine.ParserState{status: :error} = state, unquote_splicing(other_args)), do: state
defp unquote(call) do
unquote(body)
end
end
end
end
# source: deps/combine/lib/combine/helpers.ex
defmodule Grizzly.ZWave.CommandClasses.NetworkManagementInclusion do
@moduledoc """
Network Management Inclusion Command Class
This command class provides the commands for adding and removing Z-Wave nodes
to the Z-Wave network
"""
@behaviour Grizzly.ZWave.CommandClass
alias Grizzly.ZWave.{DSK, CommandClasses, Security}
@typedoc """
The status of the inclusion process
* `:done` - the inclusion process is done without error
* `:failed` - the inclusion process is done with failure, the device is not
included
* `:security_failed` - the inclusion process is done and the device is
included, but there was an error during the security negotiations. Device
functionality will be degraded.
"""
@type node_add_status() :: :done | :failed | :security_failed
@impl Grizzly.ZWave.CommandClass
def byte(), do: 0x34
@impl Grizzly.ZWave.CommandClass
def name(), do: :network_management_inclusion
@doc """
Parse the node add status byte into an atom
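
## Examples

    iex> Grizzly.ZWave.CommandClasses.NetworkManagementInclusion.parse_node_add_status(0x07)
    :failed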
"""
@spec parse_node_add_status(0x06 | 0x07 | 0x09) :: node_add_status()
def parse_node_add_status(0x06), do: :done
def parse_node_add_status(0x07), do: :failed
def parse_node_add_status(0x09), do: :security_failed
@doc """
Encode a `node_add_status()` to a byte
"""
@spec node_add_status_to_byte(node_add_status()) :: 0x06 | 0x07 | 0x09
def node_add_status_to_byte(:done), do: 0x06
def node_add_status_to_byte(:failed), do: 0x07
def node_add_status_to_byte(:security_failed), do: 0x09
@typedoc """
Command classes have different ways they are support for each device
"""
@type tagged_command_classes() ::
{:non_secure_supported, [CommandClasses.command_class()]}
| {:non_secure_controlled, [CommandClasses.command_class()]}
| {:secure_supported, [CommandClasses.command_class()]}
| {:secure_controlled, [CommandClasses.command_class()]}
@typedoc """
Node info report
Node information from a node add status report.
* `:listening?` - is the device a listening device
* `:basic_device_class` - the basic device class
* `:generic_device_class` - the generic device class
* `:specific_device_class` - the specific device class
* `:command_classes` - list of command classes the new device supports
* `:keys_granted` - S2 keys granted by the user during the time of inclusion
(version 2 and above)
* `:kex_fail_type` - the type of key exchange failure if there is one
(version 2 and above)
* `:input_dsk` - the DSK of the device (version 3 and above)
"""
@type node_info_report() :: %{
required(:seq_number) => byte(),
required(:node_id) => Grizzly.ZWave.node_id(),
required(:status) => node_add_status(),
required(:listening?) => boolean(),
required(:basic_device_class) => byte(),
required(:generic_device_class) => byte(),
required(:specific_device_class) => byte(),
required(:command_classes) => [tagged_command_classes()],
optional(:keys_granted) => [Security.key()],
optional(:kex_fail_type) => Security.key_exchange_fail_type(),
optional(:input_dsk) => DSK.t()
}
@typedoc """
Extended node info report
Node information from an extended node add status report
* `:listening?` - is the device a listening device
* `:basic_device_class` - the basic device class
* `:generic_device_class` - the generic device class
* `:specific_device_class` - the specific device class
* `:command_classes` - list of command classes the new device supports
* `:keys_granted` - S2 keys granted by the user during the time of inclusion
* `:kex_fail_type` - the type of key exchange failure if there is one
"""
@type extended_node_info_report() :: %{
required(:listening?) => boolean(),
required(:basic_device_class) => byte(),
required(:generic_device_class) => byte(),
required(:specific_device_class) => byte(),
required(:command_classes) => [tagged_command_classes()],
required(:keys_granted) => [Security.key()],
required(:kex_fail_type) => Security.key_exchange_fail_type()
}
@doc """
Parse node information from node add status and extended node add status reports
"""
@spec parse_node_info(binary()) :: node_info_report() | extended_node_info_report()
def parse_node_info(
<<node_info_length, listening?::1, _::7, _opt_func, basic_device_class,
generic_device_class, specific_device_class, more_info::binary>>
) do
# TODO: decode the command classes correctly (currently assuming no extended command classes)
# TODO: decode the device classes correctly
# node info length includes: node_info_length, listening?, opt_func, and 3 devices classes
# to get the length of the command classes we have to subtract 6 bytes.
command_class_length = node_info_length - 6
Map.new()
|> Map.put(:listening?, listening? == 1)
|> Map.put(:basic_device_class, basic_device_class)
|> Map.put(:generic_device_class, generic_device_class)
|> Map.put(:specific_device_class, specific_device_class)
|> parse_additional_node_info(more_info, command_class_length)
end
defp parse_additional_node_info(node_info, additional_info, command_class_length) do
<<command_classes_bin::binary-size(command_class_length), more_info::binary>> =
additional_info
command_classes = CommandClasses.command_class_list_from_binary(command_classes_bin)
node_info
|> Map.put(:command_classes, command_classes)
|> parse_optional_fields(more_info)
end
defp parse_optional_fields(info, <<>>), do: info
defp parse_optional_fields(info, <<keys_granted, kex_fail_type>>) do
info
|> put_security_info(keys_granted, kex_fail_type)
end
defp parse_optional_fields(info, <<keys_granted, kex_fail_type, 0x00>>) do
info
|> put_security_info(keys_granted, kex_fail_type)
end
defp parse_optional_fields(info, <<keys_granted, kex_fail_type, 16, dsk::binary-size(16)>>) do
info
|> put_security_info(keys_granted, kex_fail_type)
|> put_dsk(dsk)
end
defp put_security_info(info, keys_granted, kex_fail_type) do
info
|> Map.put(:keys_granted, Security.byte_to_keys(keys_granted))
|> Map.put(:kex_fail_type, Security.failed_type_from_byte(kex_fail_type))
end
defp put_dsk(info, dsk_bin) do
info
|> Map.put(:input_dsk, DSK.new(dsk_bin))
end
end
# source: lib/grizzly/zwave/command_classes/network_management_inclusion.ex
defmodule Membrane.MP4.Track do
@moduledoc """
A module defining a structure that represents an MPEG-4 track.
All new samples of a track must be stored in the structure first in order
to build a sample table of a regular MP4 container. Samples that were stored
can be flushed later in the form of chunks.
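
A typical flow (a sketch, assuming `alias Membrane.MP4.Track` and that
`content`, `buffer` and `chunk_offset` come from the surrounding pipeline):

    track = Track.new(%{id: 1, content: content, height: 720, width: 1280, timescale: 30_000})
    track = Track.store_sample(track, buffer)
    {chunk, track} = Track.flush_chunk(track, chunk_offset)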
"""
alias Membrane.MP4.Helper
alias __MODULE__.SampleTable
@type t :: %__MODULE__{
id: pos_integer,
content: struct,
height: non_neg_integer,
width: non_neg_integer,
timescale: pos_integer,
sample_table: SampleTable.t(),
duration: non_neg_integer | nil,
movie_duration: non_neg_integer | nil
}
@enforce_keys [:id, :content, :height, :width, :timescale]
defstruct @enforce_keys ++
[sample_table: %SampleTable{}, duration: nil, movie_duration: nil]
@spec new(%{
id: pos_integer,
content: struct,
height: non_neg_integer,
width: non_neg_integer,
timescale: pos_integer
}) :: __MODULE__.t()
def new(config) do
struct!(__MODULE__, config)
end
@spec store_sample(__MODULE__.t(), Membrane.Buffer.t()) :: __MODULE__.t()
def store_sample(track, buffer) do
Map.update!(track, :sample_table, &SampleTable.store_sample(&1, buffer))
end
@spec current_chunk_duration(__MODULE__.t()) :: non_neg_integer
def current_chunk_duration(%{sample_table: sample_table}) do
SampleTable.chunk_duration(sample_table)
end
@spec flush_chunk(__MODULE__.t(), non_neg_integer) :: {binary, __MODULE__.t()}
def flush_chunk(track, chunk_offset) do
{chunk, sample_table} = SampleTable.flush_chunk(track.sample_table, chunk_offset)
{chunk, %{track | sample_table: sample_table}}
end
@spec finalize(__MODULE__.t(), pos_integer) :: __MODULE__.t()
def finalize(track, movie_timescale) do
track
|> put_durations(movie_timescale)
|> Map.update!(:sample_table, &SampleTable.reverse/1)
end
defp put_durations(track, movie_timescale) do
use Ratio
duration =
track.sample_table.decoding_deltas
|> Enum.reduce(0, &(&1.sample_count * &1.sample_delta + &2))
%{
track
| duration: Helper.timescalify(duration, track.timescale),
movie_duration: Helper.timescalify(duration, movie_timescale)
}
end
end
# source: lib/membrane_mp4/track.ex
defmodule Fiet.Atom do
@moduledoc """
Atom parser, comply with [RFC 4287](https://tools.ietf.org/html/rfc4287).
## Text constructs
Fiet supports two out of three text constructs in Atom: `text` and `html`.
`xhtml` is not supported.
In text constructs fields, the returning format is `{format, data}`. If "type"
attribute does not exist in the tag, format will be `text` by default.
For example, `<title type="html">Less: <em> &lt; </em></title>` will
give you `{:html, "Less: <em> &lt; </em>"}`.
## Person constructs
There are three attributes in the Person construct: name, uri and email.
Both contributors and authors returned by the parser will be in the
`Fiet.Atom.Person` struct.
See `Fiet.Atom.Person` for more information.
"""
alias Fiet.Atom
defmodule ParsingError do
defexception [:reason]
def message(%__MODULE__{reason: reason}) do
{error_type, term} = reason
format_message(error_type, term)
end
defp format_message(:not_atom, root_tag) do
"unexpected root tag #{inspect(root_tag)}, expected \"feed\""
end
end
@doc """
Parses Atom document feed.
## Example
iex> Fiet.Atom.parse(atom)
{:ok,
%Fiet.Atom.Feed{
authors: [],
categories: [
%Fiet.Atom.Category{label: "Space", scheme: nil, term: "space"},
%Fiet.Atom.Category{label: "Science", scheme: nil, term: "science"}
],
contributors: [],
entries: [
%Fiet.Atom.Entry{
authors: [
%Fiet.Atom.Person{
email: "<EMAIL>",
name: "<NAME>",
uri: "http://example.org/"
}
],
categories: [],
content: {:text, "Test Content"},
contributors: [
%Fiet.Atom.Person{email: nil, name: "<NAME>", uri: nil},
%Fiet.Atom.Person{email: nil, name: "<NAME>", uri: nil}
],
id: "tag:example.org,2003:3.2397",
link: %Fiet.Atom.Link{
href: "http://example.org/audio/ph34r_my_podcast.mp3",
href_lang: nil,
length: "1337",
rel: "enclosure",
title: nil,
type: "audio/mpeg"
},
published: nil,
rights: {:xhtml, :skipped},
source: nil,
summary: nil,
title: {:text, "Atom draft-07 snapshot"},
updated: "2005-07-31T12:29:29Z"
}
],
generator: %Fiet.Atom.Generator{
text: "\n Example Toolkit\n ",
uri: "http://www.example.com/",
version: "1.0"
},
icon: nil,
id: "tag:example.org,2003:3",
link: %Fiet.Atom.Link{
href: "http://example.org/feed.atom",
href_lang: nil,
length: nil,
rel: "self",
title: nil,
type: "application/atom+xml"
},
logo: nil,
rights: {:text, "Copyright (c) 2003, <NAME>"},
subtitle: {:html,
"\n A <em>lot</em> of effort\n went into making this effortless\n "},
title: {:text, "dive into mark"},
updated: "2005-07-31T12:29:29Z"
}}
"""
def parse(document) when is_binary(document) do
try do
Fiet.StackParser.parse(document, %Atom.Feed{}, __MODULE__)
rescue
exception in ParsingError ->
{:error, exception.reason}
else
{:ok, %Atom.Feed{} = feed} ->
{:ok, feed}
{:ok, {:not_atom, _root_tag} = reason} ->
{:error, %ParsingError{reason: reason}}
{:error, _reason} = error ->
error
end
end
@doc false
def handle_event(:start_element, {root_tag, _, _}, [], _feed) when root_tag != "feed" do
{:stop, {:not_atom, root_tag}}
end
def handle_event(:start_element, {"entry", _, _}, [{"feed", _, _} | []], feed) do
%{entries: entries} = feed
%{feed | entries: [%Atom.Entry{} | entries]}
end
def handle_event(:end_element, {"entry", _, _}, [{"feed", _, _} | []], feed) do
%{
links: links,
entries: entries,
categories: categories,
authors: authors,
contributors: contributors
} = feed
%{
feed
| links: Enum.reverse(links),
entries: Enum.reverse(entries),
categories: Enum.reverse(categories),
authors: Enum.reverse(authors),
contributors: Enum.reverse(contributors)
}
end
@people_tags [
{"author", :authors},
{"contributor", :contributors}
]
@person_tags [
{"name", :name},
{"email", :email},
{"uri", :uri}
]
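# The nested `Enum.each` below runs at compile time: for each
# {people_tag, people_key} pair it defines `handle_event/4` clauses for both
# feed-level and entry-level elements, plus one clause per person field, so
# all "author"/"contributor" handling is generated rather than hand-written.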
Enum.each(@people_tags, fn {people_tag, people_key} ->
def handle_event(:start_element, {unquote(people_tag), _, _}, [{"feed", _, _} | _], feed) do
people = [%Atom.Person{} | feed.unquote(people_key)]
Map.put(feed, unquote(people_key), people)
end
def handle_event(:start_element, {unquote(people_tag), _, _}, [{"entry", _, _} | _], feed) do
[entry | entries] = feed.entries
people = [%Atom.Person{} | entry.unquote(people_key)]
entry = Map.put(entry, unquote(people_key), people)
%{feed | entries: [entry | entries]}
end
Enum.each(@person_tags, fn {person_tag, person_key} ->
def handle_event(
:end_element,
{unquote(person_tag), _, content},
[{unquote(people_tag), _, _} | [{"feed", _, _} | _]],
feed
) do
[person | people] = feed.unquote(people_key)
person = Map.put(person, unquote(person_key), content)
Map.put(feed, unquote(people_key), [person | people])
end
def handle_event(
:end_element,
{unquote(person_tag), _, content},
[{unquote(people_tag), _, _} | [{"entry", _, _} | _]],
feed
) do
[entry | entries] = feed.entries
[person | people] = entry.unquote(people_key)
person = Map.put(person, unquote(person_key), content)
entry = Map.put(entry, unquote(people_key), [person | people])
%{feed | entries: [entry | entries]}
end
end)
end)
def handle_event(:start_element, _element, _stack, feed) do
feed
end
@feed_simple_tags [
{"id", :id},
{"updated", :updated},
{"logo", :logo},
{"icon", :icon}
]
Enum.each(@feed_simple_tags, fn {feed_tag, feed_key} ->
def handle_event(:end_element, {unquote(feed_tag), _, content}, [{"feed", _, _} | _], feed) do
Map.put(feed, unquote(feed_key), content)
end
end)
def handle_event(:end_element, {"category", _, _} = element, [{"feed", _, _} | _], feed) do
category = Atom.Category.from_element(element)
%{feed | categories: [category | feed.categories]}
end
def handle_event(:end_element, {"link", _, _} = element, [{"feed", _, _} | _], feed) do
%{links: links} = feed
link = Atom.Link.from_element(element)
%{feed | links: [link | links]}
end
def handle_event(:end_element, {"generator", _, _} = element, [{"feed", _, _} | _], feed) do
generator = Atom.Generator.from_element(element)
%{feed | generator: generator}
end
@entry_simple_tags [
{"id", :id},
{"published", :published},
{"updated", :updated}
]
Enum.each(@entry_simple_tags, fn {tag_name, key} ->
def handle_event(:end_element, {unquote(tag_name), _, content}, [{"entry", _, _} | _], feed) do
%{entries: [entry | entries]} = feed
entry = Map.put(entry, unquote(key), content)
%{feed | entries: [entry | entries]}
end
end)
@feed_text_construct_tags [
{"title", :title},
{"subtitle", :subtitle},
{"rights", :rights}
]
Enum.each(@feed_text_construct_tags, fn {tag_name, key} ->
def handle_event(
:end_element,
{unquote(tag_name), attributes, content},
[{"feed", _, _} | _],
feed
) do
case extract_text(attributes, content) do
{:ok, content} ->
Map.put(feed, unquote(key), content)
{:error, _reason} ->
feed
end
end
end)
@entry_text_construct_tags [
{"title", :title},
{"summary", :summary},
{"content", :content},
{"rights", :rights}
]
Enum.each(@entry_text_construct_tags, fn {tag_name, key} ->
def handle_event(
:end_element,
{unquote(tag_name), attributes, content},
[{"entry", _, _} | _],
feed
) do
case extract_text(attributes, content) do
{:ok, content} ->
%{entries: [entry | entries]} = feed
entry = Map.put(entry, unquote(key), content)
%{feed | entries: [entry | entries]}
{:error, _reason} ->
feed
end
end
end)
def handle_event(:end_element, {"link", _, _} = element, [{"entry", _, _} | _], feed) do
%{entries: [entry | entries]} = feed
%{links: links} = entry
link = Atom.Link.from_element(element)
entry = Map.put(entry, :links, [link | links])
%{feed | entries: [entry | entries]}
end
def handle_event(:end_element, _element, _stack, feed) do
feed
end
defp extract_text(attributes, content) do
case get_attribute_value(attributes, "type") do
type when is_nil(type) or type == "text" ->
{:ok, {:text, content}}
"html" ->
{:ok, {:html, content}}
"xhtml" ->
{:ok, {:xhtml, :skipped}}
type ->
{:error, "type #{inspect(type)} is not supported"}
end
end
defp get_attribute_value(attributes, name) do
for({key, value} <- attributes, key == name, do: value)
|> List.first()
end
end
# source: lib/fiet/atom.ex
defmodule Kiq.Periodic.Crontab do
@moduledoc """
Generate and evaluate the structs used to evaluate periodic jobs.
The `Crontab` module provides parsing and evaluation for standard cron
expressions. Expressions are composed of rules specifying the minutes, hours,
days, months and weekdays. Rules for each field are comprised of literal
values, wildcards, step values or ranges:
* `*` - Wildcard, matches any value (0, 1, 2, ...)
* `0` — Literal, matches only itself (only 0)
* `*/15` — Step, matches any value that is a multiple (0, 15, 30, 45)
* `0-5` — Range, matches any value within the range (0, 1, 2, 3, 4, 5)
Each part may have multiple rules, where rules are separated by a comma. The
allowed values for each field are as follows:
* `minute` - 0-59
* `hour` - 0-23
* `days` - 1-31
* `month` - 1-12 (or aliases, `JAN`, `FEB`, `MAR`, etc.)
* `weekdays` - 0-6 (or aliases, `SUN`, `MON`, `TUE`, etc.)
For more in depth information see the man documentation for `cron` and
`crontab` in your system. Alternatively you can experiment with various
expressions online at [Crontab Guru](http://crontab.guru/).
## Examples
# The first minute of every hour
Crontab.parse!("0 * * * *")
# Every fifteen minutes during standard business hours
Crontab.parse!("*/15 9-17 * * *")
# Once a day at midnight during december
Crontab.parse!("0 0 * DEC *")
# Once an hour during both rush hours on Friday the 13th
Crontab.parse!("0 7-9,4-6 13 * FRI")
"""
alias Kiq.Periodic.Parser
@type expression :: [:*] | list(non_neg_integer())
@type t :: %__MODULE__{
minutes: expression(),
hours: expression(),
days: expression(),
months: expression(),
weekdays: expression()
}
defstruct minutes: [:*], hours: [:*], days: [:*], months: [:*], weekdays: [:*]
# Evaluation
@doc """
Evaluate whether a crontab matches a datetime. The current datetime in UTC is
used as the default.
## Examples
iex> Kiq.Periodic.Crontab.now?(%Crontab{})
true
iex> crontab = Crontab.parse!("* * * * *")
...> Kiq.Periodic.Crontab.now?(crontab)
true
iex> crontab = Crontab.parse!("59 23 1 1 6")
...> Kiq.Periodic.Crontab.now?(crontab)
false
"""
@spec now?(crontab :: t(), datetime :: DateTime.t()) :: boolean()
def now?(%__MODULE__{} = crontab, datetime \\ DateTime.utc_now()) do
crontab
|> Map.from_struct()
|> Enum.all?(fn {part, values} ->
Enum.any?(values, &matches_rule?(part, &1, datetime))
end)
end
defp matches_rule?(_part, :*, _date_time), do: true
defp matches_rule?(:minutes, minute, datetime), do: minute == datetime.minute
defp matches_rule?(:hours, hour, datetime), do: hour == datetime.hour
defp matches_rule?(:days, day, datetime), do: day == datetime.day
defp matches_rule?(:months, month, datetime), do: month == datetime.month
defp matches_rule?(:weekdays, weekday, datetime), do: weekday == Date.day_of_week(datetime)
# Parsing
@part_ranges %{
minutes: {0, 59},
hours: {0, 23},
days: {1, 31},
months: {1, 12},
weekdays: {0, 6}
}
@doc """
Parse a crontab expression string into a Crontab struct.
## Examples
iex> Kiq.Periodic.Crontab.parse!("0 6,12,18 * * *")
%Crontab{minutes: [0], hours: [6, 12, 18]}
iex> Kiq.Periodic.Crontab.parse!("0-2,4-6 */12 * * *")
%Crontab{minutes: [0, 1, 2, 4, 5, 6], hours: [0, 12]}
iex> Kiq.Periodic.Crontab.parse!("* * 20,21 SEP,NOV *")
%Crontab{days: [20, 21], months: [9, 11]}
iex> Kiq.Periodic.Crontab.parse!("0 12 * * SUN")
%Crontab{minutes: [0], hours: [12], weekdays: [0]}
"""
@spec parse!(input :: binary()) :: t()
def parse!(input) when is_binary(input) do
case Parser.crontab(input) do
{:ok, parsed, _, _, _, _} ->
struct!(__MODULE__, expand(parsed))
{:error, message, _, _, _, _} ->
raise ArgumentError, message
end
end
defp expand(parsed) when is_list(parsed), do: Enum.map(parsed, &expand/1)
defp expand({part, expressions}) do
{min, max} = Map.get(@part_ranges, part)
expanded =
expressions
|> Enum.flat_map(&expand(&1, min, max))
|> :lists.usort()
{part, expanded}
end
defp expand({:wild, _value}, _min, _max), do: [:*]
defp expand({:literal, value}, min, max) when value in min..max, do: [value]
defp expand({:step, value}, min, max) when value in (min + 1)..max do
for step <- min..max, rem(step, value) == 0, do: step
end
defp expand({:range, [first, last]}, min, max) when first >= min and last <= max do
for step <- first..last, do: step
end
defp expand({_type, value}, min, max) do
raise ArgumentError, "Unexpected value #{inspect(value)} outside of range #{min}..#{max}"
end
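# Illustrative expansions (hypothetical parser output): each rule becomes the
# explicit list of values it matches, so evaluation later is a simple lookup.
#
#     expand({:step, 15}, 0, 59)
#     #=> [0, 15, 30, 45]
#     expand({:range, [0, 5]}, 0, 59)
#     #=> [0, 1, 2, 3, 4, 5]
#     expand({:literal, 61}, 0, 59)
#     #=> ** (ArgumentError) Unexpected value 61 outside of range 0..59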
end
|
lib/kiq/periodic/crontab.ex
| 0.909702
| 0.751717
|
crontab.ex
|
starcoder
|
defmodule Okta.TrustedOrigins do
@moduledoc """
The `Okta.TrustedOrigins` module provides access methods to the [Okta Trusted Origins API](https://developer.okta.com/docs/reference/api/trusted-origins/).
All methods require a Tesla Client struct created with `Okta.client(base_url, api_key)`.
## Examples
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.list_trusted_origins(client)
"""
@trusted_origins_api "/api/v1/trustedOrigins"
@doc """
Creates a new trusted origin
The `scopes` parameter is a list containing one or both of `:cors` and `:redirect`
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.create_trusted_origin(client, "Test", "https://example.com/test", [:cors, :redirect])
```
https://developer.okta.com/docs/reference/api/trusted-origins/#create-trusted-origin
"""
@spec create_trusted_origin(Okta.client(), String.t(), String.t(), [:cors | :redirect]) ::
Okta.result()
def create_trusted_origin(client, name, origin, scopes) do
Tesla.post(client, @trusted_origins_api, trusted_origin_body(name, origin, scopes))
|> Okta.result()
end
@doc """
Gets a trusted origin by ID
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.get_trusted_origin(client, "tosue7JvguwJ7U6kz0g3")
```
https://developer.okta.com/docs/reference/api/trusted-origins/#get-trusted-origin
"""
@spec get_trusted_origin(Okta.client(), String.t()) :: Okta.result()
def get_trusted_origin(client, trusted_origin_id) do
Tesla.get(client, @trusted_origins_api <> "/#{trusted_origin_id}") |> Okta.result()
end
@doc """
Lists all trusted origins
A subset of trusted origins can be returned that match a supported filter expression or query criteria.
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.list_trusted_origins(client, limit: 1000)
```
https://developer.okta.com/docs/reference/api/trusted-origins/#list-trusted-origins
"""
@spec list_trusted_origins(Okta.client(), keyword()) :: Okta.result()
def list_trusted_origins(client, opts \\ []) do
Tesla.get(client, @trusted_origins_api, query: opts) |> Okta.result()
end
@doc """
Lists all trusted origins with a filter
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.filter_trusted_origins(client, "(id eq \"tosue7JvguwJ7U6kz0g3\" or id eq \"tos10hzarOl8zfPM80g4\")")
```
https://developer.okta.com/docs/reference/api/trusted-origins/#list-trusted-origins-with-a-filter
"""
@spec filter_trusted_origins(Okta.client(), String.t(), keyword()) :: Okta.result()
def filter_trusted_origins(client, filter, opts \\ []) do
list_trusted_origins(client, Keyword.merge(opts, filter: filter))
end
@doc """
Updates a trusted origin
The `scopes` parameter is a list containing one or both of `:cors` and `:redirect`
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.update_trusted_origin(client, "tosue7JvguwJ7U6kz0g3", "Test", "https://example.com/test", [:cors, :redirect])
```
https://developer.okta.com/docs/reference/api/trusted-origins/#update-trusted-origin
"""
@spec update_trusted_origin(Okta.client(), String.t(), String.t(), String.t(), [
:cors | :redirect
]) ::
Okta.result()
def update_trusted_origin(client, trusted_origin_id, name, origin, scopes) do
Tesla.put(
client,
@trusted_origins_api <> "/#{trusted_origin_id}",
trusted_origin_body(name, origin, scopes)
)
|> Okta.result()
end
@doc """
Activates an existing trusted origin
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.activate_trusted_origin(client, "tosue7JvguwJ7U6kz0g3")
```
https://developer.okta.com/docs/reference/api/trusted-origins/#activate-trusted-origin
"""
@spec activate_trusted_origin(Okta.client(), String.t()) :: Okta.result()
def activate_trusted_origin(client, trusted_origin_id) do
Tesla.post(client, @trusted_origins_api <> "/#{trusted_origin_id}/lifecycle/activate", %{})
|> Okta.result()
end
@doc """
Deactivates an existing trusted origin
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "<PASSWORD>")
{:ok, result, _env} = Okta.TrustedOrigins.deactivate_trusted_origin(client, "tosue7JvguwJ7U6kz0g3")
```
https://developer.okta.com/docs/reference/api/trusted-origins/#deactivate-trusted-origin
"""
@spec deactivate_trusted_origin(Okta.client(), String.t()) :: Okta.result()
def deactivate_trusted_origin(client, trusted_origin_id) do
Tesla.post(client, @trusted_origins_api <> "/#{trusted_origin_id}/lifecycle/deactivate", %{})
|> Okta.result()
end
@doc """
Deletes an existing trusted origin
## Examples
```
client = Okta.client("https://dev-000000.okta.com", "thisismykeycreatedinokta")
{:ok, result, _env} = Okta.TrustedOrigins.delete_trusted_origin(client, "tosue7JvguwJ7U6kz0g3")
```
https://developer.okta.com/docs/reference/api/trusted-origins/#delete-trusted-origin
"""
@spec delete_trusted_origin(Okta.client(), String.t()) :: Okta.result()
def delete_trusted_origin(client, trusted_origin_id) do
Tesla.delete(client, @trusted_origins_api <> "/#{trusted_origin_id}")
|> Okta.result()
end
defp trusted_origin_body(name, origin, scopes) do
scopes =
Enum.reduce(scopes, [], fn scope, new_scopes ->
case scope do
:cors -> [%{type: "CORS"} | new_scopes]
:redirect -> [%{type: "REDIRECT"} | new_scopes]
_ -> new_scopes
end
end)
%{
name: name,
origin: origin,
scopes: scopes
}
end
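# Illustrative request body (hypothetical values). Because the reduce above
# prepends to the accumulator, scope order is reversed relative to the input:
#
#     trusted_origin_body("Test", "https://example.com/test", [:cors, :redirect])
#     #=> %{
#     #     name: "Test",
#     #     origin: "https://example.com/test",
#     #     scopes: [%{type: "REDIRECT"}, %{type: "CORS"}]
#     #   }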
end
|
lib/okta/trusted_origins.ex
| 0.873384
| 0.863679
|
trusted_origins.ex
|
starcoder
|
defmodule Filtrex.Params do
@moduledoc """
`Filtrex.Params` parses Phoenix-style parameters, such as:
```
%{"due_date_between" => %{"start" => "2016-03-10", "end" => "2016-03-20"}, "text_column" => "Buy milk"}
```
"""
@doc "Converts a string-key map to atoms from whitelist"
def sanitize(map, whitelist) when is_map(map) do
sanitize_value(map, Enum.map(whitelist, &to_string/1))
end
defp sanitize_value(map, whitelist) when is_map(map) do
Enum.reduce_while(map, {:ok, %{}}, fn ({key, value}, {:ok, acc}) ->
cond do
is_atom(key) ->
case sanitize_value(value, whitelist) do
{:ok, sanitized} -> {:cont, {:ok, Map.put(acc, key, sanitized)}}
error -> {:halt, error}
end
key in whitelist ->
atom = String.to_existing_atom(key)
case sanitize_value(value, whitelist) do
{:ok, sanitized} -> {:cont, {:ok, Map.put(acc, atom, sanitized)}}
error -> {:halt, error}
end
not is_binary(key) ->
{:halt, {:error, "Invalid key. Only string keys are supported."}}
true ->
{:halt, {:error, "Unknown key '#{key}'"}}
end
end)
end
defp sanitize_value(list, whitelist) when is_list(list) do
Enum.reduce_while(list, {:ok, []}, fn (value, {:ok, acc}) ->
case sanitize_value(value, whitelist) do
{:ok, sanitized} -> {:cont, {:ok, acc ++ [sanitized]}}
error -> {:halt, error}
end
end)
end
defp sanitize_value(value, _), do: {:ok, value}
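# Illustrative calls (hypothetical keys): only whitelisted string keys are
# converted; unknown string keys halt with an error.
#
#     sanitize(%{"text_column" => "Buy milk"}, [:text_column])
#     #=> {:ok, %{text_column: "Buy milk"}}
#     sanitize(%{"unexpected" => true}, [:text_column])
#     #=> {:error, "Unknown key 'unexpected'"}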
@doc "Converts parameters to a list of conditions"
def parse_conditions(configs, params) when is_map(params) do
Enum.reduce(params, {:ok, []}, fn
{key, value}, {:ok, conditions} ->
convert_and_add_condition(configs, key, value, conditions)
_, {:error, reason} ->
{:error, reason}
end)
end
defp convert_and_add_condition(configs, key, value, conditions) do
case Filtrex.Condition.param_key_type(configs, key) do
{:ok, module, config, column, comparator} ->
attrs = %{inverse: false, column: column, comparator: comparator, value: value}
parse_and_add_condition(config, module, convert_value_in_attrs(attrs), conditions)
{:error, reason} -> {:error, reason}
end
end
defp parse_and_add_condition(config, module, attrs, conditions) do
case module.parse(config, attrs) do
{:error, reason} -> {:error, reason}
{:ok, condition} -> {:ok, conditions ++ [condition]}
end
end
defp convert_value_in_attrs(attrs = %{value: value}) do
Map.put(attrs, :value, convert_value(value))
end
defp convert_value(map) when is_map(map) do
Enum.map(map, fn
{key, value} when is_binary(key) ->
{String.to_atom(key), value}
{key, value} -> {key, value}
end) |> Enum.into(%{})
end
defp convert_value(value), do: value
end
|
lib/filtrex/params.ex
| 0.782746
| 0.844601
|
params.ex
|
starcoder
|
defmodule Stein.MFA.OneTimePassword.Secret do
@moduledoc """
`Stein.MFA.OneTimePassword.Secret` contains the struct and functions for generating secret keys usable with `:pot`,
along with Google Authenticator compatible URLs (suitable for QR codes) to present them.
"""
@typedoc "Secret type; totp or hotp"
@type stype :: :totp | :hotp
@typedoc "Hash algorithim used"
@type algorithm :: :SHA1 | :SHA256 | :SHA512
@typedoc "How many digits to generate in a token"
@type digits :: 6 | 8
@typedoc "Generically a OTP secret, of either type. May or may not be valid "
@type t :: %__MODULE__{
type: stype,
label: String.t(),
secret_value: binary,
issuer: String.t() | nil,
algorithm: algorithm,
digits: digits
}
@typedoc "a Time-based OTP secret, with a valid period"
@type totp_t :: %__MODULE__{
type: :totp,
label: String.t(),
secret_value: binary,
issuer: String.t() | nil,
algorithm: algorithm,
digits: digits,
period: pos_integer()
}
@typedoc "an HMAC-based OTP secret, with a valid counter"
@type hotp_t :: %__MODULE__{
type: :hotp,
label: String.t(),
secret_value: binary,
issuer: String.t() | nil,
algorithm: algorithm,
digits: digits,
counter: non_neg_integer()
}
@enforce_keys [:label, :secret_value]
defstruct(
type: :totp,
label: nil,
secret_value: nil,
issuer: nil,
algorithm: :SHA1,
digits: 6,
counter: nil,
period: nil
)
@doc "Creates a new Time-based secret"
@spec new_totp(String.t(),
issuer: String.t(),
bits: pos_integer(),
algorithm: algorithm,
period: pos_integer()
) ::
totp_t()
def new_totp(label, opts \\ []) do
secret_value = generate_secret(opts[:bits] || 160)
%__MODULE__{
type: :totp,
label: label,
secret_value: secret_value,
# overrideables
issuer: opts[:issuer] || default_issuer(),
algorithm: opts[:algorithm] || :SHA1,
period: opts[:period] || 30
}
end
@doc "Creates a new HMAC/counter-based secret"
@spec new_hotp(String.t(),
issuer: String.t(),
bits: pos_integer(),
initial_counter: non_neg_integer()
) :: hotp_t()
def new_hotp(label, opts \\ []) do
secret_value = generate_secret(opts[:bits] || 160)
%__MODULE__{
type: :hotp,
label: label,
secret_value: secret_value,
# overrideables
issuer: opts[:issuer] || default_issuer(),
algorithm: opts[:algorithm] || :SHA1,
counter: opts[:initial_counter] || 0
}
end
# Returns the configured issuer or nil. Can be overridden with the :issuer keyword in each of the above.
defp default_issuer, do: Application.get_env(:stein_mfa, :one_time_password_issuer)
@spec generate_secret(pos_integer()) :: binary
# Generates a base32 encoded shared secret (K) of the given number of bits to the closest byte.
# Minimum of 128 per https://tools.ietf.org/html/rfc4226#section-4 R6
defp generate_secret(bits) when bits > 128 do
:crypto.strong_rand_bytes(div(bits, 8)) |> :pot_base32.encode()
end
@spec enrollment_url(t) :: String.t()
@doc """
Generates a Google Authenticator format url per https://github.com/google/google-authenticator/wiki/Key-Uri-Format
"""
def enrollment_url(%__MODULE__{} = s) do
"otpauth://#{s.type}/#{URI.encode(label_maybe_with_issuer(s))}?" <> paramaters(s)
end
defp label_maybe_with_issuer(%__MODULE__{issuer: nil} = s), do: s.label
defp label_maybe_with_issuer(%__MODULE__{} = s), do: "#{s.issuer}:#{s.label}"
@spec parameters(hotp_t) :: String.t()
defp parameters(%__MODULE__{type: :hotp, counter: c} = s) when not is_nil(c) do
_parameters(s, :counter)
end
@spec parameters(totp_t) :: String.t()
defp parameters(%__MODULE__{type: :totp, period: p} = s) when not is_nil(p) do
_parameters(s, :period)
end
defp parameters(%__MODULE__{type: :totp}) do
raise ArgumentError, "TOTP must have period"
end
defp parameters(%__MODULE__{type: :hotp}) do
raise ArgumentError, "HOTP must have counter"
end
defp _parameters(s, key) do
Map.take(s, [:issuer, :algorithm, :digits, key])
|> Map.put("secret", s.secret_value)
|> Enum.filter(&(!is_nil(elem(&1, 1))))
|> URI.encode_query()
end
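# Illustrative usage (hypothetical label and issuer; the secret is random, so
# the url below is abridged):
#
#     secret = new_totp("alice@example.com", issuer: "Example")
#     enrollment_url(secret)
#     #=> "otpauth://totp/Example:alice@example.com?algorithm=SHA1&digits=6&issuer=Example&period=30&secret=..."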
end
|
lib/stein/mfa/one_time_password/secret.ex
| 0.798894
| 0.552721
|
secret.ex
|
starcoder
|
defmodule Storex.Diff do
@doc """
Checks the difference between two terms.
```elixir
Storex.Diff.check(%{name: "John"}, %{name: "Adam"})
[%{a: "u", p: [:name], t: "Adam"}]
```
Result explanation:
```
a: action
n - none
u - update
d - delete
i - insert
t: to
p: path
```
"""
def check(source, changed) do
diff(source, changed, [], [])
end
defp diff(source, changed, changes, path) when is_list(source) and is_list(changed) do
source = Enum.with_index(source)
changed = Enum.with_index(changed)
compare_list(source, changed, changes, path)
end
defp diff(source, changed, changes, path) when is_map(source) and is_map(changed) do
compare_map(source, changed, changes, path)
end
defp diff(source, changed, changes, path) do
if source === changed do
changes
else
[%{a: "u", t: changed, p: path} | changes]
end
end
defp compare_list([{l, li} | lt], [{r, _ri} | rt], changes, path) do
changes = diff(l, r, changes, path ++ [li])
compare_list(lt, rt, changes, path)
end
defp compare_list([{_l, li} | lt], [], changes, path) do
changes = [%{a: "d", p: path ++ [li]} | changes]
compare_list(lt, [], changes, path)
end
defp compare_list([], [{r, ri} | rt], changes, path) do
changes = [%{a: "i", t: r, p: path ++ [ri]} | changes]
compare_list([], rt, changes, path)
end
defp compare_list([], [], changes, _), do: changes
defp compare_map(%NaiveDateTime{} = source, %NaiveDateTime{} = changed, changes, path) do
source = NaiveDateTime.to_string(source)
changed = NaiveDateTime.to_string(changed)
diff(source, changed, changes, path)
end
defp compare_map(%DateTime{} = source, %DateTime{} = changed, changes, path) do
source = DateTime.to_string(source)
changed = DateTime.to_string(changed)
diff(source, changed, changes, path)
end
defp compare_map(%{__struct__: _} = source, %{__struct__: _} = changed, changes, path) do
source = Map.from_struct(source)
changed = Map.from_struct(changed)
compare_map(source, changed, changes, path)
end
defp compare_map(%{__struct__: _} = source, %{} = changed, changes, path) do
source = Map.from_struct(source)
compare_map(source, changed, changes, path)
end
defp compare_map(%{} = source, %{__struct__: _} = changed, changes, path) do
changed = Map.from_struct(changed)
compare_map(source, changed, changes, path)
end
defp compare_map(%{} = source, %{} = changed, changes, path) do
changes = Enum.reduce(source, changes, &compare_map(&1, &2, changed, path, true))
Enum.reduce(changed, changes, &compare_map(&1, &2, source, path, false))
end
defp compare_map({key, value}, acc, changed, path, true) do
case Map.has_key?(changed, key) do
false ->
[%{a: "d", p: path ++ [key]} | acc]
true ->
changed_value = Map.get(changed, key)
diff(value, changed_value, acc, path ++ [key])
end
end
defp compare_map({key, value}, acc, source, path, false) do
case Map.has_key?(source, key) do
false -> [%{a: "i", t: value, p: path ++ [key]} | acc]
true -> acc
end
end
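# Illustrative list diffs (hypothetical data): positions are part of the path,
# so removals and insertions are reported per index.
#
#     check(%{tags: ["a", "b"]}, %{tags: ["a"]})
#     #=> [%{a: "d", p: [:tags, 1]}]
#     check(%{tags: ["a"]}, %{tags: ["a", "b"]})
#     #=> [%{a: "i", t: "b", p: [:tags, 1]}]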
end
|
lib/storex/diff.ex
| 0.666931
| 0.76882
|
diff.ex
|
starcoder
|
defmodule Multicodec do
@moduledoc """
This module provides encoding, decoding, and convenience functions for working with [Multicodec](https://github.com/multiformats/multicodec).
## Overview
> Compact self-describing codecs. Save space by using predefined multicodec tables.
## Motivation
[Multistreams](https://github.com/multiformats/multistream) are self-describing protocol/encoding streams. Multicodec uses an agreed-upon "protocol table". It is designed for use in short strings, such as keys or identifiers (i.e [CID](https://github.com/ipld/cid)).
## Protocol Description
`multicodec` is a _self-describing multiformat_; it wraps other formats with a tiny bit of self-description. A multicodec identifier may either be a varint (in a byte string) or a symbol (in a text string).
A chunk of data identified by multicodec will look like this:
```sh
<multicodec><encoded-data>
# To reduce the cognitive load, we sometimes might write the same line as:
<mc><data>
```
Another useful scenario is when using the multicodec as part of the keys to access data, example:
```
# suppose we have a value and a key to retrieve it
"<key>" -> <value>
# we can use multicodec with the key to know what codec the value is in
"<mc><key>" -> <value>
```
It is worth noting that multicodec works very well in conjunction with [multihash](https://github.com/multiformats/multihash) and [multiaddr](https://github.com/multiformats/multiaddr), as you can prefix those values with a multicodec to tell what they are.
## Codecs
All codecs are passed as strings. The reason for this is to avoid burdening the consumer with an ever-growing list of atoms that can contribute to exhausting the atom pool of a VM.
If you would like to translate these strings into atoms, they are available via the `codecs/1` function, and can be transformed like so:
```elixir
# probably you would want to fix the kebab casing too, not shown here
Multicodec.codecs() |> Enum.map(&String.to_atom/1)
```
Codecs can only be added if they are added officially to Multicodec. We do not deviate from the standard when possible.
## Encoding
All data to encode should be an Elixir binary. It is up to the caller to properly encode the given payload, and it is the job of Multicodec to add metadata to describe that payload. Encoding using Multicodec does not perform any extra encoding or transformation on your data. It simply adds an unsigned variable integer prefix to allow an unlimited number of codecs, and to cleanly decode them, returning the codec if desired.
## Decoding
There are 2 main ways of decoding data. The first and most common is to use `codec_decode/1` and `codec_decode!/1`, which return a tuple of `{data, codec}`. The second option if you do not care about the codec in some piece of code is to use `decode/1` and `decode!/1`. Multicodec does not modify the returned data - it is up to you if you need further decoding, for example decoding `bencode`.
"""
alias Multicodec.{MulticodecMapping, CodecTable}
@typedoc """
A binary encoded with Multicodec.
"""
@type multicodec_binary() :: binary()
@typedoc """
A codec used to encode a binary as a Multicodec.
"""
@type multi_codec() :: MulticodecMapping.multi_codec()
@typedoc """
A binary representation of a multicodec code encoded as an unsigned varint.
"""
@type prefix() :: MulticodecMapping.prefix()
@doc """
Encodes a binary with Multicodec using the given codec name.
Raises an ArgumentError if the codec does not exist or the provided data is not a binary.
## Examples
iex> Multicodec.encode!("d3:fool3:bar3:baze3:qux4:norfe", "bencode")
"cd3:fool3:bar3:baze3:qux4:norfe"
iex> Multicodec.encode!(<<22, 68, 139, 191, 190, 36, 62, 35, 171, 224, 129, 249, 63, 46, 47, 7, 119, 7, 178, 223, 184, 3, 249, 238, 66, 166, 153, 175, 101, 42, 40, 29>>, "sha2-256")
<<18, 22, 68, 139, 191, 190, 36, 62, 35, 171, 224, 129, 249, 63, 46, 47, 7, 119, 7, 178, 223, 184, 3, 249, 238, 66, 166, 153, 175, 101, 42, 40, 29>>
iex> Multicodec.encode!("legal_thing.torrent", "torrent-file")
"|legal_thing.torrent"
"""
@spec encode!(binary(), multi_codec()) :: multicodec_binary()
def encode!(data, codec) when is_binary(data) and is_binary(codec) do
<<do_prefix_for(codec)::binary, data::binary>>
end
def encode!(_data, _codec) do
raise ArgumentError, "Data must be a binary and codec must be a valid codec string."
end
@doc """
Encodes a binary with Multicodec using the given codec name.
Returns an error tuple if the codec does not exist or the provided data is not a binary.
## Examples
iex> Multicodec.encode("EiC5TSe5k00", "protobuf")
{:ok, "PEiC5TSe5k00"}
iex> :crypto.hash(:sha, "secret recipe") |> Multicodec.encode("sha1")
{:ok,
<<17, 139, 95, 199, 243, 128, 172, 237, 254, 18, 189, 127, 227, 208, 152, 232,
107, 238, 26, 35, 106>>}
iex> Multicodec.encode("Taco Tuesday", "mr-yotsuya-at-ikkoku")
{:error, "unsupported codec - \\"mr-yotsuya-at-ikkoku\\""}
"""
@spec encode(binary(), multi_codec()) :: {:ok, multicodec_binary()} | {:error, term()}
def encode(data, codec) do
{:ok, encode!(data, codec)}
rescue
e in ArgumentError -> {:error, Exception.message(e)}
end
@doc """
Decodes a Multicodec encoded binary.
If you need the codec returned with the data, use `codec_decode!/1` instead.
Raises an ArgumentError if the given binary is not Multicodec encoded.
## Examples
iex> Multicodec.decode!(<<0, 99, 111, 117, 110, 116, 32, 98, 114, 111, 99, 99, 117, 108, 97>>)
"count broccula"
iex> Multicodec.decode!(<<51, 0, 99, 114, 105, 115, 112, 121>>)
<<0, 99, 114, 105, 115, 112, 121>>
iex> :crypto.hash(:md5, "soup of the eon") |> Multicodec.encode!("md5") |> Multicodec.decode!()
<<83, 202, 110, 26, 47, 119, 193, 71, 113, 201, 88, 92, 162, 222, 37, 108>>
"""
@spec decode!(multicodec_binary()) :: binary()
def decode!(data) when is_binary(data) do
do_decode(data)
end
def decode!(_data) do
raise ArgumentError, "data must be a Multicodec encoded binary."
end
@doc """
Decodes a Multicodec encoded binary.
If you need the codec returned with the data, use `codec_decode/1` instead.
Returns an error if the given binary is not Multicodec encoded.
## Examples
iex> Multicodec.decode(<<0, 66, 101, 115, 116, 32, 76, 117, 115, 104, 32, 97, 108, 98, 117, 109, 44, 32, 83, 112, 111, 111, 107, 121, 32, 111, 114, 32, 83, 112, 108, 105, 116>>)
{:ok, "Best Lush album, Spooky or Split"}
iex> Multicodec.decode(<<224, 3, 104, 116, 116, 112, 58, 47, 47, 122, 111, 109, 98, 111, 46, 99, 111, 109>>)
{:ok, "http://zombo.com"}
iex> :crypto.hash(:md4, "pass@word") |> Multicodec.encode!("md4") |> Multicodec.decode()
{:ok,
<<110, 141, 9, 114, 67, 195, 143, 146, 109, 201, 188, 52, 200, 125, 93, 225>>}
iex> Multicodec.decode(<<>>)
{:error, "data is not Multicodec encoded."}
"""
@spec decode(multicodec_binary()) :: {:ok, binary()} | {:error, term()}
def decode(data) when is_binary(data) do
{:ok, decode!(data)}
rescue
e in ArgumentError -> {:error, Exception.message(e)}
end
@doc """
Decodes a Multicodec encoded binary, returning a tuple of the data and the codec used to encode it.
Raises an ArgumentError if the given binary is not Multicodec encoded.
## Examples
iex> Multicodec.codec_decode!(<<0, 87, 104, 101, 110, 32, 116, 104, 101, 32, 112, 101, 110, 100, 117, 108, 117, 109, 32, 115, 119, 105, 110, 103, 115, 44, 32, 105, 116, 32, 99, 117, 116, 115>>)
{"When the pendulum swings, it cuts", "identity"}
iex> Multicodec.codec_decode!(<<51, 0, 99, 114, 105, 115, 112, 121>>)
{<<0, 99, 114, 105, 115, 112, 121>>, "multibase"}
iex> :crypto.hash(:md5, "soup of the eon") |> Multicodec.encode!("md5") |> Multicodec.codec_decode!()
{<<83, 202, 110, 26, 47, 119, 193, 71, 113, 201, 88, 92, 162, 222, 37, 108>>, "md5"}
"""
@spec codec_decode!(multicodec_binary()) :: {binary(), multi_codec()}
def codec_decode!(data) when is_binary(data) do
do_codec_decode(data)
end
def codec_decode!(_data) do
raise ArgumentError, "data must be a Multicodec encoded binary."
end
@doc """
Decodes a Multicodec encoded binary, returning a tuple of the data and the codec used to encode it.
Returns an error if the given binary is not Multicodec encoded.
## Examples
iex> Multicodec.codec_decode(<<0, 83, 108, 111, 119, 100, 105, 118, 101, 32, 116, 111, 32, 109, 121, 32, 100, 114, 101, 97, 109, 115>>)
{:ok, {"Slowdive to my dreams", "identity"}}
iex> Multicodec.codec_decode(<<51, 0, 99, 114, 105, 115, 112, 121>>)
{:ok, {<<0, 99, 114, 105, 115, 112, 121>>, "multibase"}}
iex> Multicodec.codec_decode(<<>>)
{:error, "data is not Multicodec encoded."}
"""
@spec codec_decode(multicodec_binary()) :: {:ok,{binary(), multi_codec()}} | {:error, term()}
def codec_decode(data) when is_binary(data) do
{:ok, codec_decode!(data)}
rescue
e in ArgumentError -> {:error, Exception.message(e)}
end
@doc """
Returns the codec used to encode a Multicodec encoded binary.
Raises an ArgumentError if the given binary is not Multicodec encoded.
## Examples
iex> Multicodec.codec!(<<0, 67, 105, 114, 99, 108, 101, 32, 116, 104, 101, 32, 111, 110, 101, 115, 32, 116, 104, 97, 116, 32, 99, 111, 109, 101, 32, 97, 108, 105, 118, 101>>)
"identity"
iex> :crypto.hash(:sha512, "F") |> Multicodec.encode!("sha2-512") |> Multicodec.codec!()
"sha2-512"
iex> Multicodec.codec!("q")
"dag-cbor"
"""
@spec codec!(multicodec_binary()) :: multi_codec()
def codec!(data) when is_binary(data) do
{_, codec} = codec_decode!(data)
codec
end
@doc """
Returns the codec used to encode a Multicodec encoded binary.
Returns an error if the given binary is not Multicodec encoded.
## Examples
iex> Multicodec.codec(<<6, 73, 32, 97, 109, 32, 97, 32, 115, 99, 105, 101, 110, 116, 105, 115, 116>>)
{:ok, "tcp"}
iex> Multicodec.codec(<<0x22>>)
{:ok, "murmur3"}
iex> Multicodec.encode!("I am a scientist, I seek to understand me", "identity") |> Multicodec.codec()
{:ok, "identity"}
iex> Multicodec.codec(<<>>)
{:error, "data is not Multicodec encoded."}
"""
@spec codec(multicodec_binary()) :: {:ok, multi_codec()} | {:error, term()}
def codec(data) when is_binary(data) do
{:ok, codec!(data)}
rescue
e in ArgumentError -> {:error, Exception.message(e)}
end
@doc """
Returns a list of codecs that can be used to encode data with Multicodec.
"""
@spec codecs() :: [multi_codec()]
def codecs() do
unquote(Enum.map(CodecTable.codec_mappings(), fn(%{codec: codec}) -> codec end))
end
@doc """
Returns a full mapping of codecs, codes, and prefixes used by Multicodec.
Each entry in the list is a mapping specification of how to encode data with Multicodec.
"""
@spec mappings() :: [MulticodecMapping.t()]
def mappings() do
unquote(Macro.escape(CodecTable.codec_mappings))
end
@doc """
Returns the prefix that should be used with the given codec.
Raises an ArgumentError if the given codec is not supported.
## Examples
iex> Multicodec.prefix_for!("git-raw")
"x"
iex> Multicodec.prefix_for!("bitcoin-block")
<<176, 1>>
iex> Multicodec.prefix_for!("skein1024-512")
<<160, 231, 2>>
"""
@spec prefix_for!(multi_codec()) :: prefix()
def prefix_for!(codec) when is_binary(codec) do
do_prefix_for(codec)
end
@doc """
Returns the prefix that should be used with the given codec.
Returns an error if the given codec is not supported.
## Examples
iex> Multicodec.prefix_for("blake2b-272")
{:ok, <<162, 228, 2>>}
iex> Multicodec.prefix_for("bitcoin-block")
{:ok, <<176, 1>>}
iex> Multicodec.prefix_for("ip6")
{:ok, ")"}
iex> Multicodec.prefix_for("Glorious Leader")
{:error, "unsupported codec - \\"Glorious Leader\\""}
"""
@spec prefix_for(multi_codec()) :: {:ok, prefix()} | {:error, term()}
def prefix_for(codec) do
{:ok, prefix_for!(codec)}
rescue
e in ArgumentError -> {:error, Exception.message(e)}
end
@doc """
Returns true if the given codec is a valid Multicodec name.
## Examples
iex> Multicodec.codec?("sha2-256")
true
iex> Multicodec.codec?("dag-pb")
true
iex> Multicodec.codec?("peanut brittle")
false
"""
@spec codec?(multi_codec()) :: boolean()
def codec?(codec) do
case Multicodec.prefix_for(codec) do
{:ok, _prefix} -> true
_ -> false
end
end
#===============================================================================
# Private
#===============================================================================
defp do_codec_decode(<<>>) do
raise ArgumentError, "data is not Multicodec encoded."
end
defp do_codec_decode(data) do
{prefix, decoded_data} = decode_varint(data)
{decoded_data, codec_for(prefix)}
end
defp do_prefix_for(codec)
for %{prefix: prefix, codec: codec} <- CodecTable.codec_mappings() do
defp do_prefix_for(unquote(codec)) do
unquote(prefix)
end
end
defp do_prefix_for(codec) when is_binary(codec) do
raise ArgumentError, "unsupported codec - #{inspect codec, binaries: :as_strings}"
end
defp codec_for(code)
for %{codec: codec, code: code} <- CodecTable.codec_mappings() do
defp codec_for(unquote(code)) do
unquote(codec)
end
end
defp codec_for(code) when is_integer(code) do
raise ArgumentError, "unsupported code - #{inspect code, binaries: :as_strings}"
end
defp do_decode(<<>>) do
raise ArgumentError, "data is not Multicodec encoded."
end
defp do_decode(data) do
{_prefix, decoded_data} = decode_varint(data)
decoded_data
end
defp decode_varint(data) do
#temporary patch until we can replace or pull request varint
Varint.LEB128.decode(data)
rescue
FunctionClauseError -> raise ArgumentError, "data is not a varint."
end
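# Illustrative varint handling (hypothetical input): the "sha2-256" code is
# 0x12, which fits in a single varint byte, so decoding just splits it off:
#
#     decode_varint(<<0x12, 1, 2, 3>>)
#     #=> {18, <<1, 2, 3>>}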
end
|
lib/multicodec.ex
| 0.931649
| 0.906818
|
multicodec.ex
|
starcoder
|
defmodule DemoProcesses do
@moduledoc """
Step-by-step demonstrations of Elixir processes: message passing, concurrency, and GenServer.
"""
alias DemoProcesses.{Step00, Step02, Step03, Utils}
@doc """
Start Step00 "remembering" process and a have a short conversation with it.
Effectively...
pid = Step00.start_process()
send(pid, {:remember, "Processes are powerful!"})
send(pid, {:value, self()})
# listen for the answer
receive do
{:remembered, value} ->
# received the `value` answer
end
"""
def step_00 do
Utils.clear()
# start the process that remembers something
pid = Step00.start_process()
# tell the process to remember the value "Processes are powerful!"
send(pid, {:remember, "Processes are powerful!"})
Utils.say "Remember \"Processes are powerful!\""
# ask the process the value it is remembering
send(pid, {:value, self()})
Utils.say "What are you remembering?"
# listen for the process to respond
receive do
{:remembered, value} ->
Utils.say "Thank you. I was told #{inspect value}"
other ->
Utils.say "Don't know what you're talking about... #{inspect other}"
after
5_000 -> raise("Didn't get a response after 5 seconds of waiting.")
end
end
@doc """
Simple single process sample of doing all the same work as step_02 but
sequentially.
"""
def step_01 do
Utils.clear()
data = ["Adam", "John", "Jill", "Beth", "Carl", "Zoe", "Juan", "Mark",
"Tom", "Samantha", "Paul", "Steven"]
Enum.each(data, fn(name) ->
cond do
Regex.match?(~r/^[a-m]/i, name) ->
Utils.say("Sorted #{inspect name} to LOW half", delay: :lookup)
Utils.say("A special welcome to #{inspect name}!")
Regex.match?(~r/^[n-z]/i, name) ->
Utils.say("Sorted #{inspect name} to HIGH half", delay: :lookup)
Utils.say("#{inspect name}, you rock!")
true -> nil
end
end)
IO.puts("---- All names sorted")
:ok
end
@doc """
Simple 3 process example showing easy concurrency and handling of slower
IO operations.
"""
def step_02 do
Utils.clear()
data = ["Adam", "John", "Jill", "Beth", "Carl", "Zoe", "Juan", "Mark",
"Tom", "Samantha", "Paul", "Steven"]
# start the "rock" IO process
rock_pid = Step02.hire_rocker()
# start the "welcome" IO process
welcome_pid = Step02.hire_welcomer()
# start the "sorter" process, give it the other pids
sorter_pid = Step02.hire_sorter(rock_pid, welcome_pid)
# randomize the names and send them all to the sorter process
Enum.each(data, fn(name) -> send(sorter_pid, {:sort, name}) end)
IO.puts("---- All messages sent to Sorter")
:ok
end
@doc """
Redo of step_00 with a GenServer to formalize the "call" idea.
"""
def step_03 do
Utils.clear()
# start the process that remembers something
{:ok, pid} = Step03.start_link()
# tell the process to remember the value "Processes are powerful!"
Utils.say "Remember \"Processes are powerful!\""
:ok = Step03.remember_sync(pid, "Processes are powerful!")
# ask the process the value it is remembering
Utils.say "What are you remembering?"
value = Step03.value_sync(pid)
Utils.say "Thank you. I was told #{inspect value}"
end
@doc """
Redo of step_03 where async is used intentionally.
"""
def step_04 do
Utils.clear()
# start the process that remembers something
{:ok, pid} = Step03.start_link()
# tell the process to remember the value "Processes are powerful!"
Utils.say "Remember \"Processes are powerful!\""
Step03.remember_async(pid, "Processes are powerful!")
Utils.say "Doing other stuff..."
Utils.say "Doing other stuff, again..."
# ask the process the value it is remembering
Utils.say "What did I tell you to remember?"
Step03.value_async(pid)
receive do
{:remembered, value} -> Utils.say "Ah right. Thanks! Got #{inspect value}"
other -> Utils.say "Huh? #{inspect other}?"
end
end
end
|
lib/demo_processes.ex
| 0.680666
| 0.475666
|
demo_processes.ex
|
starcoder
|
defmodule Alerts.InformedEntitySet do
@moduledoc """
Represents the superset of all InformedEntities for an Alert.
Simplifies matching, since we can compare a single InformedEntity to see if
it's present in the InformedEntitySet. If it's not, there's no way for it
to match any of the InformedEntities inside.
"""
alias Alerts.InformedEntity, as: IE
defstruct route: MapSet.new(),
route_type: MapSet.new(),
stop: MapSet.new(),
trip: MapSet.new(),
direction_id: MapSet.new(),
activities: MapSet.new(),
entities: []
@type t :: %__MODULE__{
route: MapSet.t(),
route_type: MapSet.t(),
stop: MapSet.t(),
trip: MapSet.t(),
direction_id: MapSet.t(),
activities: MapSet.t(),
entities: [IE.t()]
}
@doc "Create a new InformedEntitySet from a list of InformedEntitys"
@spec new([IE.t()]) :: t
def new(%__MODULE__{} = entity_set) do
entity_set
end
def new(informed_entities) when is_list(informed_entities) do
struct = %__MODULE__{entities: informed_entities}
Enum.reduce(informed_entities, struct, &add_entity_to_set/2)
end
@doc "Returns whether the given entity matches the set"
@spec match?(t, IE.t()) :: boolean
def match?(%__MODULE__{} = set, %IE{} = entity) do
entity
|> Map.from_struct()
|> Enum.all?(&field_in_set?(set, &1))
|> try_all_entity_match(set, entity)
end
defp add_entity_to_set(entity, set) do
entity
|> Map.from_struct()
|> Enum.reduce(set, &add_entity_field_to_set/2)
end
defp add_entity_field_to_set({:activities, %MapSet{} = value}, set) do
map_set = MapSet.union(set.activities, MapSet.new(value))
Map.put(set, :activities, map_set)
end
defp add_entity_field_to_set({key, value}, set) do
map_set = Map.get(set, key)
map_set = MapSet.put(map_set, value)
Map.put(set, key, map_set)
end
defp field_in_set?(set, key_value_pair)
defp field_in_set?(_set, {_, nil}) do
# nil values match everything
true
end
defp field_in_set?(set, {:activities, %MapSet{} = value}) do
IE.mapsets_match?(set.activities, value)
end
defp field_in_set?(set, {key, value}) do
map_set = Map.get(set, key)
# either the value is in the map, or there's an entity that matches
# everything (nil)
MapSet.member?(map_set, value) or MapSet.member?(map_set, nil)
end
defp try_all_entity_match(false, _set, _entity) do
false
end
defp try_all_entity_match(true, set, entity) do
# we only try matching against the whole set when the MapSets overlapped
Enum.any?(set, &IE.match?(&1, entity))
end
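# Illustrative matching (hypothetical entities): an entity for another route
# fails the cheap MapSet comparison, so the entity-by-entity match is skipped.
#
#     set = new([%IE{route: "Red"}])
#     match?(set, %IE{route: "Orange"})
#     #=> false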
end
defimpl Enumerable, for: Alerts.InformedEntitySet do
def count(_set) do
{:error, __MODULE__}
end
def member?(_set, %Alerts.InformedEntity{}) do
{:error, __MODULE__}
end
def member?(_set, _other) do
{:ok, false}
end
def reduce(%{entities: entities}, acc, fun) do
Enumerable.reduce(entities, acc, fun)
end
def slice(_set) do
{:error, __MODULE__}
end
end
|
apps/alerts/lib/informed_entity_set.ex
| 0.792745
| 0.461199
|
informed_entity_set.ex
|
starcoder
|
defmodule Mix.Tasks.Phx.Gen.PrettyHtml do
@shortdoc "Generates controller, views, and context for an HTML resource"
@moduledoc """
Generates controller, views, and context for an HTML resource.
mix phx.gen.pretty_html Accounts User users name:string age:integer
The first argument is the context module followed by the schema module
and its plural name (used as the schema table name).
The context is an Elixir module that serves as an API boundary for
the given resource. A context often holds many related resources.
Therefore, if the context already exists, it will be augmented with
functions for the given resource.
> Note: A resource may also be split
> over distinct contexts (such as `Accounts.User` and `Payments.User`).
The schema is responsible for mapping the database fields into an
Elixir struct. It is followed by an optional list of attributes,
with their respective names and types. See `mix phx.gen.schema`
for more information on attributes.
Overall, this generator will add the following files to `lib/`:
* a context module in `lib/app/accounts.ex` for the accounts API
* a schema in `lib/app/accounts/user.ex`, with an `users` table
* a view in `lib/app_web/views/user_view.ex`
* a controller in `lib/app_web/controllers/user_controller.ex`
* default CRUD templates in `lib/app_web/templates/user`
## The context app
A migration file for the repository and test files for the context and
controller features will also be generated.
The location of the web files (controllers, views, templates, etc) in an
umbrella application will vary based on the `:context_app` config located
in your applications `:generators` configuration. When set, the Phoenix
generators will generate web files directly in your lib and test folders
since the application is assumed to be isolated to web specific functionality.
If `:context_app` is not set, the generators will place web related lib
and test files in a `web/` directory since the application is assumed
to be handling both web and domain specific functionality.
Example configuration:
config :my_app_web, :generators, context_app: :my_app
Alternatively, the `--context-app` option may be supplied to the generator:
mix phx.gen.pretty_html Sales User users --context-app warehouse
## Web namespace
By default, the controller and view will be namespaced by the schema name.
You can customize the web module namespace by passing the `--web` flag with a
module name, for example:
mix phx.gen.pretty_html Sales User users --web Sales
Which would generate a `lib/app_web/controllers/sales/user_controller.ex` and
`lib/app_web/views/sales/user_view.ex`.
## Customising the context, schema, tables and migrations
In some cases, you may wish to bootstrap HTML templates, controllers,
and controller tests, but leave internal implementation of the context
or schema to yourself. You can use the `--no-context` and `--no-schema`
flags for file generation control.
You can also change the table name or configure the migrations to
use binary ids for primary keys, see `mix phx.gen.schema` for more
information.
"""
use Mix.Task
alias Mix.Phoenix.{Context, Schema}
alias Mix.Tasks.Phx.Gen
@doc false
def run(args) do
if Mix.Project.umbrella?() do
Mix.raise "mix phx.gen.pretty_html can only be run inside an application directory"
end
{context, schema} = Gen.Context.build(args)
Gen.Context.prompt_for_code_injection(context)
binding = [context: context, schema: schema, inputs: inputs(schema)]
paths = Mix.Phoenix.generator_paths()
prompt_for_conflicts(context)
context
|> copy_new_files(paths, binding)
|> print_shell_instructions()
end
defp prompt_for_conflicts(context) do
context
|> files_to_be_generated()
|> Kernel.++(context_files(context))
|> Mix.Phoenix.prompt_for_conflicts()
end
defp context_files(%Context{generate?: true} = context) do
Gen.Context.files_to_be_generated(context)
end
defp context_files(%Context{generate?: false}) do
[]
end
@doc false
def files_to_be_generated(%Context{schema: schema, context_app: context_app}) do
web_prefix = Mix.Phoenix.web_path(context_app)
test_prefix = Mix.Phoenix.web_test_path(context_app)
web_path = to_string(schema.web_path)
[
{:eex, "controller.ex", Path.join([web_prefix, "controllers", web_path, "#{schema.singular}_controller.ex"])},
{:eex, "edit.html.eex", Path.join([web_prefix, "templates", web_path, schema.singular, "edit.html.eex"])},
{:eex, "form.html.eex", Path.join([web_prefix, "templates", web_path, schema.singular, "form.html.eex"])},
{:eex, "index.html.eex", Path.join([web_prefix, "templates", web_path, schema.singular, "index.html.eex"])},
{:eex, "new.html.eex", Path.join([web_prefix, "templates", web_path, schema.singular, "new.html.eex"])},
{:eex, "show.html.eex", Path.join([web_prefix, "templates", web_path, schema.singular, "show.html.eex"])},
{:eex, "view.ex", Path.join([web_prefix, "views", web_path, "#{schema.singular}_view.ex"])},
{:eex, "controller_test.exs", Path.join([test_prefix, "controllers", web_path, "#{schema.singular}_controller_test.exs"])},
]
end
@doc false
def copy_new_files(%Context{} = context, paths, binding) do
files = files_to_be_generated(context)
Mix.Phoenix.copy_from(paths, "priv/templates/phx.gen.pretty_html", binding, files)
if context.generate?, do: Gen.Context.copy_new_files(context, paths, binding)
context
end
@doc false
def print_shell_instructions(%Context{schema: schema, context_app: ctx_app} = context) do
if schema.web_namespace do
Mix.shell().info """
Add the resource to your #{schema.web_namespace} :browser scope in #{Mix.Phoenix.web_path(ctx_app)}/router.ex:
scope "/#{schema.web_path}", #{inspect Module.concat(context.web_module, schema.web_namespace)}, as: :#{schema.web_path} do
pipe_through :browser
...
resources "/#{schema.plural}", #{inspect schema.alias}Controller
end
"""
else
Mix.shell().info """
Add the resource to your browser scope in #{Mix.Phoenix.web_path(ctx_app)}/router.ex:
resources "/#{schema.plural}", #{inspect schema.alias}Controller
"""
end
if context.generate?, do: Gen.Context.print_shell_instructions(context)
end
@doc false
def inputs(%Schema{} = schema) do
Enum.map(schema.attrs, fn
{_, {:references, _}} ->
{nil, nil, nil}
{key, :integer} ->
{label(key), ~s(<%= number_input f, #{inspect(key)} %>), error(key)}
{key, :float} ->
{label(key), ~s(<%= number_input f, #{inspect(key)}, step: "any" %>), error(key)}
{key, :decimal} ->
{label(key), ~s(<%= number_input f, #{inspect(key)}, step: "any" %>), error(key)}
{key, :boolean} ->
{checkbox_label(key), ~s(<%= checkbox f, #{inspect(key)}, class: "form-check-input" %>), error(key)}
{key, :text} ->
{label(key), ~s(<%= textarea f, #{inspect(key)}, class: "form-control" %>), error(key)}
{key, :date} ->
{label(key), ~s(<%= date_select f, #{inspect(key)} %>), error(key)}
{key, :time} ->
{label(key), ~s(<%= time_select f, #{inspect(key)} %>), error(key)}
{key, :utc_datetime} ->
{label(key), ~s(<%= datetime_select f, #{inspect(key)} %>), error(key)}
{key, :naive_datetime} ->
{label(key), ~s(<%= datetime_select f, #{inspect(key)} %>), error(key)}
{key, {:array, :integer}} ->
{label(key), ~s(<%= multiple_select f, #{inspect(key)}, ["1": 1, "2": 2] %>), error(key)}
{key, {:array, _}} ->
{label(key), ~s(<%= multiple_select f, #{inspect(key)}, ["Option 1": "option1", "Option 2": "option2"] %>), error(key)}
{key, _} ->
{label(key), ~s(<%= text_input f, #{inspect(key)}, class: "form-control" %>), error(key)}
end)
end
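# Illustrative mapping (hypothetical attribute): an :integer attribute such as
# {:age, :integer} becomes a label/input/error tuple for the form template:
#
#     {~s(<%= label f, :age %>), ~s(<%= number_input f, :age %>), ~s(<%= error_tag f, :age %>)}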
defp label(key) do
~s(<%= label f, #{inspect(key)} %>)
end
defp checkbox_label(key) do
~s(<%= label f, #{inspect(key)}, class: "form-check-label" %>)
end
defp error(field) do
~s(<%= error_tag f, #{inspect(field)} %>)
end
end
|
lib/mix/tasks/phx.gen.pretty_html.ex
| 0.863132
| 0.462352
|
phx.gen.pretty_html.ex
|
starcoder
|
defmodule Contex.PointPlot do
@moduledoc """
A simple point plot, plotting points showing y values against x values.
It is possible to specify multiple y columns with the same x column. It is not
yet possible to specify multiple independent series.
The x column can either be numeric or date time data. If numeric, a
`Contex.ContinuousLinearScale` is used to scale the values to the plot,
and if date time, a `Contex.TimeScale` is used.
Fill colours for each y column can be specified with `colours/2`.
A column in the dataset can optionally be used to control the colours. See
`colours/2` and `set_colour_col_name/2`
"""
import Contex.SVG
alias __MODULE__
alias Contex.{Scale, ContinuousLinearScale, TimeScale}
alias Contex.CategoryColourScale
alias Contex.{Dataset, Mapping}
alias Contex.Axis
alias Contex.Utils
defstruct [
:dataset,
:mapping,
:x_scale,
:y_scale,
:fill_scale,
transforms: %{},
axis_label_rotation: :auto,
custom_x_formatter: nil,
custom_y_formatter: nil,
width: 100,
height: 100,
colour_palette: :default
]
@required_mappings [
x_col: :exactly_one,
y_cols: :one_or_more,
fill_col: :zero_or_one
]
@type t() :: %__MODULE__{}
@doc """
Create a new point plot definition and apply defaults.
If the data in the dataset is stored as a list of maps, the `:mapping` option is required. This value must be a map of the plot's `:x_col` and `:y_cols` to keys in the map, such as `%{x_col: :column_a, y_cols: [:column_b, :column_c]}`. The `:y_cols` value must be a list. Optionally a `:fill_col` mapping can be provided, which is
equivalent to `set_colour_col_name/2`
"""
@spec new(Contex.Dataset.t(), keyword()) :: Contex.PointPlot.t()
def new(%Dataset{} = dataset, options \\ []) do
mapping = Mapping.new(@required_mappings, Keyword.get(options, :mapping), dataset)
%PointPlot{dataset: dataset, mapping: mapping}
|> set_default_scales()
|> set_colour_col_name(mapping.column_map[:fill_col])
end
@doc """
Sets the default scales for the plot based on its column mapping.
"""
@spec set_default_scales(Contex.PointPlot.t()) :: Contex.PointPlot.t()
def set_default_scales(%PointPlot{mapping: %{column_map: column_map}} = plot) do
set_x_col_name(plot, column_map.x_col)
|> set_y_col_names(column_map.y_cols)
end
@doc """
Set the colour palette for fill colours.
Where multiple y columns are defined for the plot, a different colour will be used for
each column.
If a single y column is defined and a colour column is defined (see `set_colour_col_name/2`),
a different colour will be used for each unique value in the colour column.
If a single y column is defined and no colour column is defined, the first colour
in the supplied colour palette will be used to plot the points.
"""
@spec colours(Contex.PointPlot.t(), Contex.CategoryColourScale.colour_palette()) ::
Contex.PointPlot.t()
def colours(plot, colour_palette) when is_list(colour_palette) or is_atom(colour_palette) do
%{plot | colour_palette: colour_palette}
|> set_y_col_names(plot.mapping.column_map.y_cols)
end
def colours(plot, _) do
%{plot | colour_palette: :default}
|> set_y_col_names(plot.mapping.column_map.y_cols)
end
@doc """
Specifies the label rotation value that will be applied to the bottom axis. Accepts integer
values for degrees of rotation or `:auto`. Note that manually set rotation values other than
45 or 90 will be treated as zero. The default value is `:auto`, which sets the rotation to
45 degrees if the number of items on the axis is greater than eight, zero degrees otherwise.
"""
@spec axis_label_rotation(Contex.PointPlot.t(), integer() | :auto) :: Contex.PointPlot.t()
def axis_label_rotation(%PointPlot{} = plot, rotation) when is_integer(rotation) do
%{plot | axis_label_rotation: rotation}
end
def axis_label_rotation(%PointPlot{} = plot, _) do
%{plot | axis_label_rotation: :auto}
end
@doc false
def set_size(%PointPlot{mapping: %{column_map: column_map}} = plot, width, height) do
# We pretend to set the x & y columns to force a recalculation of scales - may be expensive.
# We only really need to set the range, not recalculate the domain
%{plot | width: width, height: height}
|> set_x_col_name(column_map.x_col)
|> set_y_col_names(column_map.y_cols)
end
@doc ~S"""
Allows the axis tick labels to be overridden. For example, if you have a numeric representation of money and you want to
have the value axis show it as millions of dollars you might do something like:
# Turns 1_234_567.67 into $1.23M
defp money_formatter_millions(value) when is_number(value) do
"$#{:erlang.float_to_binary(value/1_000_000.0, [decimals: 2])}M"
end
defp show_chart(data) do
PointPlot.new(data)
|> PointPlot.custom_x_formatter(&money_formatter_millions/1)
end
"""
@spec custom_x_formatter(Contex.PointPlot.t(), nil | fun) :: Contex.PointPlot.t()
def custom_x_formatter(%PointPlot{} = plot, custom_x_formatter)
when is_function(custom_x_formatter) or custom_x_formatter == nil do
%{plot | custom_x_formatter: custom_x_formatter}
end
@doc ~S"""
Allows the axis tick labels to be overridden. For example, if you have a numeric representation of money and you want to
have the value axis show it as millions of dollars you might do something like:
# Turns 1_234_567.67 into $1.23M
defp money_formatter_millions(value) when is_number(value) do
"$#{:erlang.float_to_binary(value/1_000_000.0, [decimals: 2])}M"
end
defp show_chart(data) do
PointPlot.new(data)
|> PointPlot.custom_y_formatter(&money_formatter_millions/1)
end
"""
@spec custom_y_formatter(Contex.PointPlot.t(), nil | fun) :: Contex.PointPlot.t()
def custom_y_formatter(%PointPlot{} = plot, custom_y_formatter)
when is_function(custom_y_formatter) or custom_y_formatter == nil do
%{plot | custom_y_formatter: custom_y_formatter}
end
@doc false
def get_svg_legend(
%PointPlot{mapping: %{column_map: %{y_cols: y_cols, fill_col: fill_col}}} = plot
)
when length(y_cols) > 1 or is_nil(fill_col) do
# We do the point plotting with an index to look up the colours. For the legend we need the names
series_fill_colours =
CategoryColourScale.new(y_cols)
|> CategoryColourScale.set_palette(plot.colour_palette)
Contex.Legend.to_svg(series_fill_colours)
end
def get_svg_legend(%PointPlot{fill_scale: scale}) do
Contex.Legend.to_svg(scale)
end
def get_svg_legend(_), do: ""
@doc false
def to_svg(%PointPlot{x_scale: x_scale, y_scale: y_scale} = plot) do
x_scale = %{x_scale | custom_tick_formatter: plot.custom_x_formatter}
y_scale = %{y_scale | custom_tick_formatter: plot.custom_y_formatter}
axis_x = get_x_axis(x_scale, plot)
axis_y = Axis.new_left_axis(y_scale) |> Axis.set_offset(plot.width)
[
Axis.to_svg(axis_x),
Axis.to_svg(axis_y),
"<g>",
get_svg_points(plot),
"</g>"
]
end
defp get_x_axis(x_scale, plot) do
rotation =
case plot.axis_label_rotation do
:auto ->
if length(Scale.ticks_range(x_scale)) > 8, do: 45, else: 0
degrees ->
degrees
end
x_scale
|> Axis.new_bottom_axis()
|> Axis.set_offset(plot.height)
|> Kernel.struct(rotation: rotation)
end
defp get_svg_points(%PointPlot{dataset: dataset} = plot) do
dataset.data
|> Enum.map(fn row -> get_svg_point(plot, row) end)
end
# defp get_svg_line(%PointPlot{dataset: dataset, x_scale: x_scale, y_scale: y_scale} = plot) do
# x_col_index = Dataset.column_index(dataset, plot.x_col)
# y_col_index = Dataset.column_index(dataset, plot.y_col)
# x_tx_fn = Scale.domain_to_range_fn(x_scale)
# y_tx_fn = Scale.domain_to_range_fn(y_scale)
# style = ~s|stroke="red" stroke-width="2" fill="none" stroke-dasharray="13,2" stroke-linejoin="round" |
# last_item = Enum.count(dataset.data) - 1
# path = ["M",
# dataset.data
# |> Stream.map(fn row ->
# x = Dataset.value(row, x_col_index)
# y = Dataset.value(row, y_col_index)
# {x_tx_fn.(x), y_tx_fn.(y)}
# end)
# |> Stream.with_index()
# |> Enum.map(fn {{x_plot, y_plot}, i} ->
# case i < last_item do
# true -> ~s|#{x_plot} #{y_plot} L |
# _ -> ~s|#{x_plot} #{y_plot}|
# end
# end)
# ]
# [~s|<path d="|, path, ~s|"|, style, "></path>"]
# end
defp get_svg_point(
%PointPlot{
mapping: %{accessors: accessors, column_map: %{y_cols: y_cols}},
transforms: transforms,
fill_scale: fill_scale
},
row
)
when length(y_cols) == 1 do
x =
accessors.x_col.(row)
|> transforms.x.()
y =
hd(accessors.y_cols).(row)
|> transforms.y.()
fill_data =
case accessors.fill_col.(row) do
nil -> 0
val -> val
end
fill = CategoryColourScale.colour_for_value(fill_scale, fill_data)
get_svg_point(x, y, fill)
end
defp get_svg_point(
%PointPlot{
mapping: %{accessors: accessors},
transforms: transforms,
fill_scale: fill_scale
},
row
) do
x =
accessors.x_col.(row)
|> transforms.x.()
Enum.with_index(accessors.y_cols)
|> Enum.map(fn {accessor, index} ->
y = accessor.(row) |> transforms.y.()
fill = CategoryColourScale.colour_for_value(fill_scale, index)
get_svg_point(x, y, fill)
end)
end
defp get_svg_point(x, y, fill) when is_number(x) and is_number(y) do
circle(x, y, 3, fill: fill)
end
defp get_svg_point(_x, _y, _fill), do: ""
@doc """
Specify which column in the dataset is used for the x values.
This column must contain numeric or date time data.
"""
@spec set_x_col_name(Contex.PointPlot.t(), Contex.Dataset.column_name()) :: Contex.PointPlot.t()
def set_x_col_name(
%PointPlot{dataset: dataset, width: width, mapping: mapping} = plot,
x_col_name
) do
mapping = Mapping.update(mapping, %{x_col: x_col_name})
x_scale = create_scale_for_column(dataset, x_col_name, {0, width})
x_transform = Scale.domain_to_range_fn(x_scale)
transforms = Map.merge(plot.transforms, %{x: x_transform})
%{plot | x_scale: x_scale, transforms: transforms, mapping: mapping}
end
@doc """
Specify which column(s) in the dataset is/are used for the y values.
These columns must contain numeric data.
Where more than one y column is specified the colours are used to identify data from
each column.
"""
@spec set_y_col_names(Contex.PointPlot.t(), [Contex.Dataset.column_name()]) ::
Contex.PointPlot.t()
def set_y_col_names(
%PointPlot{dataset: dataset, height: height, mapping: mapping} = plot,
y_col_names
)
when is_list(y_col_names) do
mapping = Mapping.update(mapping, %{y_cols: y_col_names})
{min, max} =
get_overall_domain(dataset, y_col_names)
|> Utils.fixup_value_range()
y_scale =
ContinuousLinearScale.new()
|> ContinuousLinearScale.domain(min, max)
|> Scale.set_range(height, 0)
y_transform = Scale.domain_to_range_fn(y_scale)
transforms = Map.merge(plot.transforms, %{y: y_transform})
fill_indices =
Enum.with_index(y_col_names)
|> Enum.map(fn {_, index} -> index end)
series_fill_colours =
CategoryColourScale.new(fill_indices)
|> CategoryColourScale.set_palette(plot.colour_palette)
%{
plot
| y_scale: y_scale,
transforms: transforms,
fill_scale: series_fill_colours,
mapping: mapping
}
end
defp get_overall_domain(dataset, col_names) do
combiner = fn {min1, max1}, {min2, max2} ->
{Utils.safe_min(min1, min2), Utils.safe_max(max1, max2)}
end
Enum.reduce(col_names, {nil, nil}, fn col, acc_extents ->
inner_extents = Dataset.column_extents(dataset, col)
combiner.(acc_extents, inner_extents)
end)
end
defp create_scale_for_column(dataset, column, {r_min, r_max}) do
{min, max} = Dataset.column_extents(dataset, column)
case Dataset.guess_column_type(dataset, column) do
:datetime ->
TimeScale.new()
|> TimeScale.domain(min, max)
|> Scale.set_range(r_min, r_max)
:number ->
ContinuousLinearScale.new()
|> ContinuousLinearScale.domain(min, max)
|> Scale.set_range(r_min, r_max)
end
end
@doc """
If a single y column is specified, it is possible to use another column to control the point colour.
Note: This is ignored if there are multiple y columns.
"""
@spec set_colour_col_name(Contex.PointPlot.t(), Contex.Dataset.column_name()) ::
Contex.PointPlot.t()
def set_colour_col_name(%PointPlot{} = plot, nil), do: plot
def set_colour_col_name(%PointPlot{dataset: dataset, mapping: mapping} = plot, fill_col_name) do
mapping = Mapping.update(mapping, %{fill_col: fill_col_name})
vals = Dataset.unique_values(dataset, fill_col_name)
colour_scale = CategoryColourScale.new(vals)
%{plot | fill_scale: colour_scale, mapping: mapping}
end
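# Illustrative end-to-end usage (hypothetical dataset; in a real app the
# result is usually embedded via Contex.Plot rather than rendered directly):
#
#     Dataset.new([{1.0, 2.0, "a"}, {2.0, 4.5, "b"}], ["x", "y", "group"])
#     |> PointPlot.new(mapping: %{x_col: "x", y_cols: ["y"], fill_col: "group"})
#     |> PointPlot.to_svg()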
end
|
lib/chart/pointplot.ex
| 0.956877
| 0.948728
|
pointplot.ex
|
starcoder
|
defmodule Commanded.ProcessManagers.ProcessManager do
@moduledoc """
Macro used to define a process manager.
A process manager is responsible for coordinating one or more aggregates.
It handles events and dispatches commands in response. Process managers have
state that can be used to track which aggregates are being orchestrated.
Process managers can be used to implement long-running transactions by
following the saga pattern. This is a sequence of commands and their
compensating commands which can be used to rollback on failure.
Use the `Commanded.ProcessManagers.ProcessManager` macro in your process
manager module and implement the callback functions defined in the behaviour:
- `c:interested?/1`
- `c:handle/2`
- `c:apply/2`
- `c:error/3`
Please read the [Process managers](process-managers.html) guide for more
detail.
### Example
defmodule ExampleProcessManager do
use Commanded.ProcessManagers.ProcessManager,
application: ExampleApp,
name: "ExampleProcessManager"
defstruct []
def interested?(%AnEvent{uuid: uuid}), do: {:start, uuid}
def handle(%ExampleProcessManager{}, %ExampleEvent{}) do
[
%ExampleCommand{}
]
end
def error({:error, _failure}, %ExampleEvent{}, _failure_context) do
# Retry, skip, ignore, or stop process manager on error handling event
end
def error({:error, _failure}, %ExampleCommand{}, _failure_context) do
# Retry, skip, ignore, or stop process manager on error dispatching command
end
end
Start the process manager (or configure as a worker inside a
[Supervisor](supervision.html))
{:ok, process_manager} = ExampleProcessManager.start_link()
## `c:init/1` callback
An `c:init/1` function can be defined in your process manager to provide
runtime configuration. This callback function must return
`{:ok, config}` with the updated config.
### Example
The `c:init/1` function is used to define the process manager's application
and name based upon a value provided at runtime:
defmodule ExampleProcessManager do
use Commanded.ProcessManagers.ProcessManager
def init(config) do
{tenant, config} = Keyword.pop!(config, :tenant)
config =
config
|> Keyword.put(:application, Module.concat([ExampleApp, tenant]))
|> Keyword.put(:name, Module.concat([__MODULE__, tenant]))
{:ok, config}
end
end
Usage:
{:ok, _pid} = ExampleProcessManager.start_link(tenant: :tenant1)
## Error handling
You can define an `c:error/3` callback function to handle any errors or
exceptions during event handling or returned by commands dispatched from your
process manager. The function is passed the error (e.g. `{:error, :failure}`),
the failed event or command, and a failure context.
See `Commanded.ProcessManagers.FailureContext` for details.
Use pattern matching on the error and/or failed event/command to explicitly
handle certain errors, events, or commands. You can choose to retry, skip,
ignore, or stop the process manager after a command dispatch error.
The default behaviour, if you don't provide an `c:error/3` callback, is to
stop the process manager using the exact error reason returned from the
event handler function or command dispatch. You should supervise your
process managers to ensure they are restarted on error.
### Example
defmodule ExampleProcessManager do
use Commanded.ProcessManagers.ProcessManager,
application: ExampleApp,
name: "ExampleProcessManager"
# stop process manager after three failures
def error({:error, _failure}, _failed_command, %{context: %{failures: failures}})
when failures >= 2
do
{:stop, :too_many_failures}
end
# retry command, record failure count in context map
def error({:error, _failure}, _failed_command, %{context: context}) do
context = Map.update(context, :failures, 1, fn failures -> failures + 1 end)
{:retry, context}
end
end
## Idle process timeouts
Each instance of a process manager will run indefinitely once started. To
reduce memory usage you can configure an idle timeout, in milliseconds,
after which the process will be shut down.
The process will be restarted whenever another event is routed to it and its
state will be rehydrated from the instance snapshot.
### Example
defmodule ExampleProcessManager do
use Commanded.ProcessManagers.ProcessManager,
application: ExampleApp,
name: "ExampleProcessManager"
idle_timeout: :timer.minutes(10)
end
## Event handling timeout
You can configure a timeout for event handling to ensure that events are
processed in a timely manner without getting stuck.
An `event_timeout` option, defined in milliseconds, may be provided when using
the `Commanded.ProcessManagers.ProcessManager` macro at compile time:
defmodule TransferMoneyProcessManager do
use Commanded.ProcessManagers.ProcessManager,
application: ExampleApp,
name: "TransferMoneyProcessManager",
router: BankRouter,
event_timeout: :timer.minutes(10)
end
Or may be configured when starting a process manager:
{:ok, _pid} = TransferMoneyProcessManager.start_link(
event_timeout: :timer.hours(1)
)
After the timeout has elapsed, indicating the process manager has not
processed an event within the configured period, the process manager is
stopped. The process manager will be restarted if supervised and will retry
the event, which should help resolve transient problems.
## Consistency
For each process manager you can define its consistency, as one of either
`:strong` or `:eventual`.
This setting is used when dispatching commands and specifying the
`consistency` option.
When you dispatch a command using `:strong` consistency, after successful
command dispatch the process will block until all process managers configured
to use `:strong` consistency have processed the domain events created by the
command.
The default setting is `:eventual` consistency. Command dispatch will return
immediately upon confirmation of event persistence, not waiting for any
process managers.
### Example
Define a process manager with `:strong` consistency:
defmodule ExampleProcessManager do
use Commanded.ProcessManagers.ProcessManager,
application: ExampleApp,
name: "ExampleProcessManager",
consistency: :strong
end
## Dynamic application
A process manager's application can be provided as an option to `start_link/1`.
This can be used to start the same process manager multiple times, each using a
separate Commanded application and event store.
### Example
Start a process manager for each tenant in a multi-tenanted app, guaranteeing
that the data and processing remain isolated between tenants.
for tenant <- [:tenant1, :tenant2, :tenant3] do
{:ok, _app} = MyApp.Application.start_link(name: tenant)
{:ok, _handler} = ExampleProcessManager.start_link(application: tenant)
end
Typically you would start the process managers using a supervisor:
children =
for tenant <- [:tenant1, :tenant2, :tenant3] do
{ExampleProcessManager, application: tenant}
end
Supervisor.start_link(children, strategy: :one_for_one)
The above example requires three named Commanded applications to have already
been started.
"""
alias Commanded.ProcessManagers.FailureContext
@type domain_event :: struct
@type command :: struct
@type process_manager :: struct
@type process_uuid :: String.t() | [String.t()]
@type consistency :: :eventual | :strong
@doc """
Optional callback function called to configure the process manager before it
starts.
It is passed the merged compile-time and runtime config, and must return the
updated config.
"""
@callback init(config :: Keyword.t()) :: {:ok, Keyword.t()}
@doc """
Is the process manager interested in the given event?
The `c:interested?/1` function is used to indicate which events the process
manager receives. The response is used to route the event to an existing
instance or start a new process instance:
- `{:start, process_uuid}` - create a new instance of the process manager.
- `{:start!, process_uuid}` - create a new instance of the process manager (strict).
- `{:continue, process_uuid}` - continue execution of an existing process manager.
- `{:continue!, process_uuid}` - continue execution of an existing process manager (strict).
- `{:stop, process_uuid}` - stop an existing process manager, shutdown its
process, and delete its persisted state.
- `false` - ignore the event.
You can return a list of process identifiers when a single domain event is to
be handled by multiple process instances.
## Strict process routing
Using strict routing, with `:start!` or `:continue!`, enforces the following
validation checks:
- `{:start!, process_uuid}` - validate process does not already exist.
- `{:continue!, process_uuid}` - validate process already exists.
If the check fails, an error will be passed to the `error/3` callback function:
- `{:error, {:start!, :process_already_started}}`
- `{:error, {:continue!, :process_not_started}}`
The `error/3` function can choose to `:stop` the process or `:skip` the
problematic event.
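
A sketch of typical clauses (the event structs are illustrative):

    def interested?(%TransferStarted{transfer_uuid: uuid}), do: {:start, uuid}
    def interested?(%MoneyWithdrawn{transfer_uuid: uuid}), do: {:continue, uuid}
    def interested?(%TransferCompleted{transfer_uuid: uuid}), do: {:stop, uuid}
    def interested?(_event), do: false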
"""
@callback interested?(domain_event) ::
{:start, process_uuid}
| {:start!, process_uuid}
| {:continue, process_uuid}
| {:continue!, process_uuid}
| {:stop, process_uuid}
| false
@doc """
Process manager instance handles a domain event, returning any commands to
dispatch.
A `c:handle/2` function can be defined for each `:start` and `:continue`
tagged event previously specified. It receives the process manager's state and
the event to be handled. It must return the commands to be dispatched. This
may be none, a single command, or many commands.
The `c:handle/2` function can be omitted if you do not need to dispatch a
command and are only mutating the process manager's state.
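
A sketch of a typical clause (the structs are illustrative):

    def handle(%TransferMoneyProcessManager{transfer_uuid: uuid}, %MoneyWithdrawn{}) do
      %DepositMoney{transfer_uuid: uuid}
    end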
"""
@callback handle(process_manager, domain_event) :: command | list(command) | {:error, term}
@doc """
Mutate the process manager's state by applying the domain event.
The `c:apply/2` function is used to mutate the process manager's state. It
receives the current state and the domain event, and must return the modified
state.
This callback function is optional; the default behaviour is to retain the
process manager's current state.
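
A sketch of a typical clause (the structs and fields are illustrative):

    def apply(%TransferMoneyProcessManager{} = pm, %TransferStarted{amount: amount}) do
      %TransferMoneyProcessManager{pm | amount: amount, status: :withdrawing}
    end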
"""
@callback apply(process_manager, domain_event) :: process_manager
@doc """
Called when a command dispatch or event handling returns an error.
The `c:error/3` function allows you to control how event handling and command
dispatch failures are handled. The function is passed the error (e.g.
`{:error, :failure}`), the failed event (during failed event handling) or
failed command (during failed dispatch), and a failure context struct (see
`Commanded.ProcessManagers.FailureContext` for details).
The failure context contains a context map you can use to pass transient state
between failures. For example, it can be used to count the number of failures.
You can return one of the following responses depending upon the
error severity:
- `{:retry, context}` - retry the failed command, provide a context
map or `Commanded.ProcessManagers.FailureContext` struct, containing any
state passed to subsequent failures. This could be used to count the number
of retries, failing after too many attempts.
- `{:retry, delay, context}` - retry the failed command, after sleeping for
the requested delay (in milliseconds). Context is a map or
`Commanded.ProcessManagers.FailureContext` as described in
`{:retry, context}` above.
- `{:stop, reason}` - stop the process manager with the given reason.
For event handling failures, when the failure source is an event, you can also
return:
- `:skip` - to skip the problematic event. No commands will be dispatched.
For command dispatch failures, when the failure source is a command, you can also
return:
- `:skip` - skip the failed command and continue dispatching any pending
commands.
- `{:skip, :continue_pending}` - skip the failed command, but continue
dispatching any pending commands.
- `{:skip, :discard_pending}` - discard the failed command and any pending
commands.
- `{:continue, commands, context}` - continue dispatching the given commands.
This allows you to retry the failed command, modify it and retry, drop it
or drop all pending commands by passing an empty list `[]`. Context is a map
as described in `{:retry, context}` above.
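
For example, a delayed retry with a bounded number of attempts might look like
this (a sketch; the `:attempts` key in the context map is illustrative):

    def error({:error, _reason}, _failure_source, %{context: context}) do
      attempts = Map.get(context, :attempts, 0)

      if attempts < 3 do
        {:retry, :timer.seconds(1), Map.put(context, :attempts, attempts + 1)}
      else
        {:stop, :too_many_attempts}
      end
    end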
"""
@callback error(
error :: {:error, term()},
failure_source :: command | domain_event,
failure_context :: FailureContext.t()
) ::
{:continue, commands :: list(command), context :: map()}
| {:retry, context :: map() | FailureContext.t()}
| {:retry, delay :: non_neg_integer(), context :: map() | FailureContext.t()}
| :skip
| {:skip, :discard_pending}
| {:skip, :continue_pending}
| {:stop, reason :: term()}
alias Commanded.ProcessManagers.ProcessManager
alias Commanded.ProcessManagers.ProcessRouter
@doc false
defmacro __using__(opts) do
quote location: :keep do
@before_compile unquote(__MODULE__)
@behaviour ProcessManager
@opts unquote(opts)
def start_link(opts \\ []) do
opts = Keyword.merge(@opts, opts)
{application, name, config} = ProcessManager.parse_config!(__MODULE__, opts)
ProcessRouter.start_link(application, name, __MODULE__, config)
end
@doc """
Provides a child specification to allow the process manager to be easily
supervised.
## Example
Supervisor.start_link([
{ExampleProcessManager, []}
], strategy: :one_for_one)
"""
def child_spec(opts) do
opts = Keyword.merge(@opts, opts)
{application, name, config} = ProcessManager.parse_config!(__MODULE__, opts)
default = %{
id: {__MODULE__, application, name},
start: {ProcessRouter, :start_link, [application, name, __MODULE__, config]},
restart: :permanent,
type: :worker
}
Supervisor.child_spec(default, [])
end
@doc false
def init(config), do: {:ok, config}
defoverridable init: 1
end
end
@doc false
defmacro __before_compile__(_env) do
# Include default fallback functions at end, with lowest precedence
quote generated: true do
@doc false
def interested?(_event), do: false
@doc false
def handle(_process_manager, _event), do: []
@doc false
def apply(process_manager, _event), do: process_manager
@doc false
def error({:error, reason}, _command, _failure_context), do: {:stop, reason}
end
end
@doc """
Get the identity of the current process instance.
This must only be called within a process manager's `handle/2` or `apply/2`
callback function.
## Example
defmodule ExampleProcessManager do
use Commanded.ProcessManagers.ProcessManager,
application: MyApp.Application,
name: __MODULE__
def interested?(%ProcessStarted{uuids: uuids}), do: {:start, uuids}
def handle(%ExampleProcessManager{}, %ProcessStarted{} = event) do
# Identify which uuid is associated with the current instance from the
# list of uuids in the event.
uuid = Commanded.ProcessManagers.ProcessManager.identity()
# ...
end
end
"""
defdelegate identity(), to: Commanded.ProcessManagers.ProcessManagerInstance
# GenServer start options
@start_opts [:debug, :name, :timeout, :spawn_opt, :hibernate_after]
# Process manager configuration options
@handler_opts [
:application,
:name,
:consistency,
:start_from,
:subscribe_to,
:subscription_opts,
:event_timeout,
:idle_timeout
]
def parse_config!(module, config) do
{:ok, config} = module.init(config)
{_valid, invalid} = Keyword.split(config, @start_opts ++ @handler_opts)
if Enum.any?(invalid) do
raise ArgumentError,
inspect(module) <> " specifies invalid options: " <> inspect(Keyword.keys(invalid))
end
{application, config} = Keyword.pop(config, :application)
unless application do
raise ArgumentError, inspect(module) <> " expects :application option"
end
{name, config} = Keyword.pop(config, :name)
name = parse_name(name)
unless name do
raise ArgumentError, inspect(module) <> " expects :name option"
end
{application, name, config}
end
@doc false
def parse_name(name) when name in [nil, ""], do: nil
def parse_name(name) when is_binary(name), do: name
def parse_name(name), do: inspect(name)
end
|
lib/commanded/process_managers/process_manager.ex
| 0.87251
| 0.52074
|
process_manager.ex
|
starcoder
|
defmodule Depot.Adapter.InMemory do
@moduledoc """
Depot Adapter using an `Agent` for in memory storage.
## Direct usage
iex> filesystem = Depot.Adapter.InMemory.configure(name: InMemoryFileSystem)
iex> start_supervised(filesystem)
iex> :ok = Depot.write(filesystem, "test.txt", "Hello World")
iex> {:ok, "Hello World"} = Depot.read(filesystem, "test.txt")
## Usage with a module
defmodule InMemoryFileSystem do
use Depot.Filesystem,
adapter: Depot.Adapter.InMemory
end
start_supervised(InMemoryFileSystem)
InMemoryFileSystem.write("test.txt", "Hello World")
{:ok, "Hello World"} = InMemoryFileSystem.read("test.txt")
"""
use Agent
defmodule Config do
@moduledoc false
defstruct name: nil
end
@behaviour Depot.Adapter
@impl Depot.Adapter
def starts_processes, do: true
def start_link({__MODULE__, %Config{} = config}) do
start_link(config)
end
def start_link(%Config{} = config) do
Agent.start_link(fn -> %{} end, name: Depot.Registry.via(__MODULE__, config.name))
end
@impl Depot.Adapter
def configure(opts) do
config = %Config{
name: Keyword.fetch!(opts, :name)
}
{__MODULE__, config}
end
@impl Depot.Adapter
def write(config, path, contents) do
Agent.update(Depot.Registry.via(__MODULE__, config.name), fn state ->
put_in(state, accessor(path, %{}), IO.iodata_to_binary(contents))
end)
end
@impl Depot.Adapter
def read(config, path) do
Agent.get(Depot.Registry.via(__MODULE__, config.name), fn state ->
case get_in(state, accessor(path)) do
binary when is_binary(binary) -> {:ok, binary}
_ -> {:error, :enoent}
end
end)
end
@impl Depot.Adapter
def delete(%Config{} = config, path) do
Agent.update(Depot.Registry.via(__MODULE__, config.name), fn state ->
{_, state} = pop_in(state, accessor(path))
state
end)
:ok
end
@impl Depot.Adapter
def move(%Config{} = config, source, destination) do
Agent.get_and_update(Depot.Registry.via(__MODULE__, config.name), fn state ->
case get_in(state, accessor(source)) do
binary when is_binary(binary) ->
{_, state} =
state |> put_in(accessor(destination, %{}), binary) |> pop_in(accessor(source))
{:ok, state}
_ ->
{{:error, :enoent}, state}
end
end)
end
@impl Depot.Adapter
def copy(%Config{} = config, source, destination) do
Agent.get_and_update(Depot.Registry.via(__MODULE__, config.name), fn state ->
case get_in(state, accessor(source)) do
binary when is_binary(binary) -> {:ok, put_in(state, accessor(destination, %{}), binary)}
_ -> {{:error, :enoent}, state}
end
end)
end
@impl Depot.Adapter
def file_exists(%Config{} = config, path) do
Agent.get(Depot.Registry.via(__MODULE__, config.name), fn state ->
case get_in(state, accessor(path)) do
binary when is_binary(binary) -> {:ok, :exists}
_ -> {:ok, :missing}
end
end)
end
@impl Depot.Adapter
def list_contents(%Config{} = config, path) do
contents =
Agent.get(Depot.Registry.via(__MODULE__, config.name), fn state ->
paths =
case get_in(state, accessor(path)) do
%{} = map -> map
_ -> %{}
end
for {path, x} <- paths do
case x do
%{} ->
%Depot.Stat.Dir{
name: path,
size: 0,
mtime: 0
}
bin when is_binary(bin) ->
%Depot.Stat.File{
name: path,
size: byte_size(bin),
mtime: 0
}
end
end
end)
{:ok, contents}
end
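# Converts a path into a list of `Access.key/2` accessors over the nested map
# held by the Agent. Intermediate path segments default to `%{}` (directories),
# while the final segment defaults to `default` (the file entry).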
defp accessor(path, default \\ nil) when is_binary(path) do
path
|> Path.absname("/")
|> Path.split()
|> do_accessor([], default)
|> Enum.reverse()
end
defp do_accessor([segment], acc, default) do
[Access.key(segment, default) | acc]
end
defp do_accessor([segment | rest], acc, default) do
do_accessor(rest, [Access.key(segment, %{}) | acc], default)
end
end
|
lib/depot/adapter/in_memory.ex
| 0.770681
| 0.404184
|
in_memory.ex
|
starcoder
|
defmodule AWS.ECS do
@moduledoc """
Amazon Elastic Container Service
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast,
container management service.
It makes it easy to run, stop, and manage Docker containers on a cluster. You
can host your cluster on a serverless infrastructure that's managed by Amazon
ECS by launching your services or tasks on Fargate. For more control, you can
host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2)
instances that you manage.
Amazon ECS makes it easy to launch and stop container-based applications with
simple API calls. This makes it easy to get the state of your cluster from a
centralized service, and gives you access to many familiar Amazon EC2 features.
You can use Amazon ECS to schedule the placement of containers across your
cluster based on your resource needs, isolation policies, and availability
requirements. With Amazon ECS, you don't need to operate your own cluster
management and configuration management systems. You also don't need to worry
about scaling your management infrastructure.
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: "Amazon ECS",
api_version: "2014-11-13",
content_type: "application/x-amz-json-1.1",
credential_scope: nil,
endpoint_prefix: "ecs",
global?: false,
protocol: "json",
service_id: "ECS",
signature_version: "v4",
signing_name: "ecs",
target_prefix: "AmazonEC2ContainerServiceV20141113"
}
end
@doc """
Creates a new capacity provider.
Capacity providers are associated with an Amazon ECS cluster and are used in
capacity provider strategies to facilitate cluster auto scaling.
Only capacity providers that use an Auto Scaling group can be created. Amazon
ECS tasks on Fargate use the `FARGATE` and `FARGATE_SPOT` capacity providers.
These providers are available to all accounts in the Amazon Web Services Regions
that Fargate supports.
"""
def create_capacity_provider(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateCapacityProvider", input, options)
end
@doc """
Creates a new Amazon ECS cluster.
By default, your account receives a `default` cluster when you launch your first
container instance. However, you can create your own cluster with a unique name
with the `CreateCluster` action.
When you call the `CreateCluster` API operation, Amazon ECS attempts to create
the Amazon ECS service-linked role for your account. This is so that it can
manage required resources in other Amazon Web Services services on your behalf.
However, if the IAM user that makes the call doesn't have permissions to create
the service-linked role, it isn't created. For more information, see [Using Service-Linked Roles for Amazon
ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using-service-linked-roles.html)
in the *Amazon Elastic Container Service Developer Guide*.
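
A minimal call sketch (the client setup and input values are illustrative):

    client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

    {:ok, _result, _response} =
      AWS.ECS.create_cluster(client, %{"clusterName" => "my-cluster"})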
"""
def create_cluster(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateCluster", input, options)
end
@doc """
Runs and maintains your desired number of tasks from a specified task
definition.
If the number of tasks running in a service drops below the `desiredCount`,
Amazon ECS runs another copy of the task in the specified cluster. To update an
existing service, see the UpdateService action.
In addition to maintaining the desired count of tasks in your service, you can
optionally run your service behind one or more load balancers. The load
balancers distribute traffic across the tasks that are associated with the
service. For more information, see [Service Load Balancing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html)
in the *Amazon Elastic Container Service Developer Guide*.
Tasks for services that don't use a load balancer are considered healthy if
they're in the `RUNNING` state. Tasks for services that use a load balancer are
considered healthy if they're in the `RUNNING` state and the container instance
that they're hosted on is reported as healthy by the load balancer.
There are two service scheduler strategies available:
* `REPLICA` - The replica scheduling strategy places and maintains
your desired number of tasks across your cluster. By default, the service
scheduler spreads tasks across Availability Zones. You can use task placement
strategies and constraints to customize task placement decisions. For more
information, see [Service Scheduler Concepts](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html)
in the *Amazon Elastic Container Service Developer Guide*.
* `DAEMON` - The daemon scheduling strategy deploys exactly one task
on each active container instance that meets all of the task placement
constraints that you specify in your cluster. The service scheduler also
evaluates the task placement constraints for running tasks. It also stops tasks
that don't meet the placement constraints. When using this strategy, you don't
need to specify a desired number of tasks, a task placement strategy, or use
Service Auto Scaling policies. For more information, see [Service Scheduler Concepts](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html)
in the *Amazon Elastic Container Service Developer Guide*.
You can optionally specify a deployment configuration for your service. The
deployment is initiated by changing properties. For example, the deployment
might be initiated by changing the task definition or the desired count of a
service.
This is done with an `UpdateService` operation. The default value for a replica
service for `minimumHealthyPercent` is 100%. The default value for a daemon
service for `minimumHealthyPercent` is 0%.
If a service uses the `ECS` deployment controller, the minimum healthy percent
represents a lower limit on the number of tasks in a service that must remain in
the `RUNNING` state during a deployment. Specifically, it is expressed as a
percentage of your desired number of tasks (rounded up to the nearest integer).
This applies when any of your container instances are in the `DRAINING` state
and the service contains tasks using the EC2 launch type. Using this parameter,
you can deploy without using additional cluster capacity. For example, if you
set your service to have a desired number of four tasks and a minimum healthy
percent of 50%, the scheduler might stop two existing tasks to free up cluster
capacity before starting two new tasks. Tasks for services that don't use a
load balancer are considered healthy if they're in the `RUNNING` state. Tasks
for services that *do* use a load balancer are considered healthy if they're in
the `RUNNING` state and reported as healthy by the load balancer. The default
value for minimum healthy percent is 100%.
If a service uses the `ECS` deployment controller, the **maximum percent**
parameter represents an upper limit on the number of tasks in a service that are
allowed in the `RUNNING` or `PENDING` state during a deployment. Specifically,
it is expressed as a percentage of the desired number of tasks (rounded down to
the nearest integer). This applies when any of your container instances are in
the `DRAINING` state and the service contains tasks using the EC2 launch type.
Using this parameter, you can define the deployment batch size. For example, if
your service has a desired number of four tasks and a maximum percent value of
200%, the scheduler may start four new tasks before stopping the four older
tasks (provided that the cluster resources required to do this are available).
The default value for maximum percent is 200%.
If a service uses either the `CODE_DEPLOY` or `EXTERNAL` deployment controller
types and tasks that use the EC2 launch type, the **minimum healthy percent**
and **maximum percent** values are used only to define the lower and upper limit
on the number of the tasks in the service that remain in the `RUNNING` state.
This is while the container instances are in the `DRAINING` state. If the tasks
in the service use the Fargate launch type, the minimum healthy percent and
maximum percent values aren't used. This is the case even if they're currently
visible when describing your service.
When creating a service that uses the `EXTERNAL` deployment controller, you can
specify only parameters that aren't controlled at the task set level. The only
required parameter is the service name. You control your services using the
`CreateTaskSet` operation. For more information, see [Amazon ECS Deployment Types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html)
in the *Amazon Elastic Container Service Developer Guide*.
When the service scheduler launches new tasks, it determines task placement in
your cluster using the following logic:
* Determine which of the container instances in your cluster can
support the task definition of your service. For example, they have the required
CPU, memory, ports, and container instance attributes.
* By default, the service scheduler attempts to balance tasks across
Availability Zones in this manner, although you can choose a different
placement strategy with the `placementStrategy` parameter.
* Sort the valid container instances, giving priority to
instances that have the fewest number of running tasks for this service in their
respective Availability Zone. For example, if zone A has one running service
task and zones B and C each have zero, valid container instances in either zone
B or C are considered optimal for placement.
* Place the new service task on a valid container
instance in an optimal Availability Zone based on the previous steps, favoring
container instances with the fewest number of running tasks for this service.
"""
def create_service(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateService", input, options)
end
@doc """
Create a task set in the specified cluster and service.
This is used when a service uses the `EXTERNAL` deployment controller type. For
more information, see [Amazon ECS Deployment Types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def create_task_set(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateTaskSet", input, options)
end
@doc """
Disables an account setting for a specified IAM user, IAM role, or the root user
for an account.
"""
def delete_account_setting(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteAccountSetting", input, options)
end
@doc """
Deletes one or more custom attributes from an Amazon ECS resource.
"""
def delete_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteAttributes", input, options)
end
@doc """
Deletes the specified capacity provider.
The `FARGATE` and `FARGATE_SPOT` capacity providers are reserved and can't be
deleted. You can disassociate them from a cluster using either the
`PutClusterCapacityProviders` API or by deleting the cluster.
Prior to a capacity provider being deleted, the capacity provider must be
removed from the capacity provider strategy from all services. The
`UpdateService` API can be used to remove a capacity provider from a service's
capacity provider strategy. When updating a service, the `forceNewDeployment`
option can be used to ensure that any tasks using the Amazon EC2 instance
capacity provided by the capacity provider are transitioned to use the capacity
from the remaining capacity providers. Only capacity providers that aren't
associated with a cluster can be deleted. To remove a capacity provider from a
cluster, you can either use `PutClusterCapacityProviders` or delete the cluster.
"""
def delete_capacity_provider(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteCapacityProvider", input, options)
end
@doc """
Deletes the specified cluster.
The cluster transitions to the `INACTIVE` state. Clusters with an `INACTIVE`
status might remain discoverable in your account for a period of time. However,
this behavior is subject to change in the future. We don't recommend that you
rely on `INACTIVE` clusters persisting.
You must deregister all container instances from this cluster before you may
delete it. You can list the container instances in a cluster with
`ListContainerInstances` and deregister them with `DeregisterContainerInstance`.
"""
def delete_cluster(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteCluster", input, options)
end
@doc """
Deletes a specified service within a cluster.
You can delete a service if you have no running tasks in it and the desired task
count is zero. If the service is actively maintaining tasks, you can't delete
it, and you must update the service to a desired task count of zero. For more
information, see `UpdateService`.
When you delete a service, if there are still running tasks that require
cleanup, the service status moves from `ACTIVE` to `DRAINING`, and the service
is no longer visible in the console or in the `ListServices` API operation.
After all tasks have transitioned to either `STOPPING` or `STOPPED` status, the
service status moves from `DRAINING` to `INACTIVE`. Services in the `DRAINING`
or `INACTIVE` status can still be viewed with the `DescribeServices` API
operation. However, in the future, `INACTIVE` services may be cleaned up and
purged from Amazon ECS record keeping, and `DescribeServices` calls on those
services return a `ServiceNotFoundException` error.
If you attempt to create a new service with the same name as an existing service
in either `ACTIVE` or `DRAINING` status, you receive an error.
"""
def delete_service(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteService", input, options)
end
@doc """
Deletes a specified task set within a service.
This is used when a service uses the `EXTERNAL` deployment controller type. For
more information, see [Amazon ECS Deployment Types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def delete_task_set(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteTaskSet", input, options)
end
@doc """
Deregisters an Amazon ECS container instance from the specified cluster.
This instance is no longer available to run tasks.
If you intend to use the container instance for some other purpose after
deregistration, we recommend that you stop all of the tasks running on the
container instance before deregistration. That prevents any orphaned tasks from
consuming resources.
Deregistering a container instance removes the instance from a cluster, but it
doesn't terminate the EC2 instance. If you are finished using the instance, be
sure to terminate it in the Amazon EC2 console to stop billing.
If you terminate a running container instance, Amazon ECS automatically
deregisters the instance from your cluster (stopped container instances or
instances with disconnected agents aren't automatically deregistered when
terminated).
"""
def deregister_container_instance(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeregisterContainerInstance", input, options)
end
@doc """
Deregisters the specified task definition by family and revision.
Upon deregistration, the task definition is marked as `INACTIVE`. Existing tasks
and services that reference an `INACTIVE` task definition continue to run
without disruption. Existing services that reference an `INACTIVE` task
definition can still scale up or down by modifying the service's desired count.
You can't use an `INACTIVE` task definition to run new tasks or create new
services, and you can't update an existing service to reference an `INACTIVE`
task definition. However, there may be up to a 10-minute window following
deregistration where these restrictions have not yet taken effect.
At this time, `INACTIVE` task definitions remain discoverable in your account
indefinitely. However, this behavior is subject to change in the future. We
don't recommend that you rely on `INACTIVE` task definitions persisting beyond
the lifecycle of any associated tasks and services.
"""
def deregister_task_definition(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeregisterTaskDefinition", input, options)
end
@doc """
Describes one or more of your capacity providers.
"""
def describe_capacity_providers(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeCapacityProviders", input, options)
end
@doc """
Describes one or more of your clusters.
"""
def describe_clusters(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeClusters", input, options)
end
@doc """
Describes one or more container instances.
Returns metadata about each container instance requested.
"""
def describe_container_instances(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeContainerInstances", input, options)
end
@doc """
Describes the specified services running in your cluster.
"""
def describe_services(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeServices", input, options)
end
@doc """
Describes a task definition.
You can specify a `family` and `revision` to find information about a specific
task definition, or you can simply specify the family to find the latest
`ACTIVE` revision in that family.
You can only describe `INACTIVE` task definitions while an active task or
service references them.
"""
def describe_task_definition(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeTaskDefinition", input, options)
end
@doc """
Describes the task sets in the specified cluster and service.
This is used when a service uses the `EXTERNAL` deployment controller type. For
more information, see [Amazon ECS Deployment Types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def describe_task_sets(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeTaskSets", input, options)
end
@doc """
Describes a specified task or tasks.
"""
def describe_tasks(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeTasks", input, options)
end
@doc """
This action is only used by the Amazon ECS agent, and it is not intended for use
outside of the agent.
Returns an endpoint for the Amazon ECS agent to poll for updates.
"""
def discover_poll_endpoint(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DiscoverPollEndpoint", input, options)
end
@doc """
Runs a command remotely on a container within a task.
"""
def execute_command(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ExecuteCommand", input, options)
end
@doc """
Lists the account settings for a specified principal.
"""
def list_account_settings(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListAccountSettings", input, options)
end
@doc """
Lists the attributes for Amazon ECS resources within a specified target type and
cluster.
When you specify a target type and cluster, `ListAttributes` returns a list of
attribute objects, one for each attribute on each resource. You can filter the
list of results to a single attribute name to only return results that have that
name. You can also filter the results by attribute name and value. You can do
this, for example, to see which container instances in a cluster are running a
Linux AMI (`ecs.os-type=linux`).
"""
def list_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListAttributes", input, options)
end
@doc """
Returns a list of existing clusters.
"""
def list_clusters(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListClusters", input, options)
end
@doc """
Returns a list of container instances in a specified cluster.
You can filter the results of a `ListContainerInstances` operation with cluster
query language statements inside the `filter` parameter. For more information,
see [Cluster Query Language](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-language.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def list_container_instances(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListContainerInstances", input, options)
end
@doc """
Returns a list of services.
You can filter the results by cluster, launch type, and scheduling strategy.
"""
def list_services(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListServices", input, options)
end
@doc """
List the tags for an Amazon ECS resource.
"""
def list_tags_for_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTagsForResource", input, options)
end
@doc """
Returns a list of task definition families that are registered to your account.
This list includes task definition families that no longer have any `ACTIVE`
task definition revisions.
You can filter out task definition families that don't contain any `ACTIVE` task
definition revisions by setting the `status` parameter to `ACTIVE`. You can also
filter the results with the `familyPrefix` parameter.
"""
def list_task_definition_families(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTaskDefinitionFamilies", input, options)
end
@doc """
Returns a list of task definitions that are registered to your account.
You can filter the results by family name with the `familyPrefix` parameter or
by status with the `status` parameter.
"""
def list_task_definitions(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTaskDefinitions", input, options)
end
@doc """
Returns a list of tasks.
You can filter the results by cluster, task definition family, container
instance, launch type, what IAM principal started the task, or by the desired
status of the task.
Recently stopped tasks might appear in the returned results. Currently, stopped
tasks appear in the returned results for at least one hour.
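
A call sketch (assuming a configured `client`; the values are illustrative):

    {:ok, _result, _response} =
      AWS.ECS.list_tasks(client, %{"cluster" => "default", "desiredStatus" => "STOPPED"})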
"""
def list_tasks(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTasks", input, options)
end
@doc """
Modifies an account setting.
Account settings are set on a per-Region basis.
If you change the account setting for the root user, the default settings are
reset for all of the IAM users and roles for which no individual account
setting was specified. For more information, see [Account Settings](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html)
in the *Amazon Elastic Container Service Developer Guide*.
When `serviceLongArnFormat`, `taskLongArnFormat`, or
`containerInstanceLongArnFormat` are specified, the Amazon Resource Name (ARN)
and resource ID format of the resource type for a specified IAM user, IAM role,
or the root user for an account is affected. The opt-in and opt-out account
setting must be set for each Amazon ECS resource separately. The ARN and
resource ID format of a resource is defined by the opt-in status of the IAM user
or role that created the resource. You must enable this setting to use Amazon
ECS features such as resource tagging.
When `awsvpcTrunking` is specified, the elastic network interface (ENI) limit
for any new container instances that support the feature is changed. If
`awsvpcTrunking` is enabled, any new container instances that support the
feature are launched with the increased ENI limits available to them. For more
information, see [Elastic Network Interface Trunking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html)
in the *Amazon Elastic Container Service Developer Guide*.
When `containerInsights` is specified, the default setting indicating whether
CloudWatch Container Insights is enabled for your clusters is changed. If
`containerInsights` is enabled, any new clusters that are created will have
Container Insights enabled unless you disable it during cluster creation. For
more information, see [CloudWatch Container Insights](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-container-insights.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def put_account_setting(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutAccountSetting", input, options)
end
@doc """
Modifies an account setting for all IAM users on an account for whom no
individual account setting has been specified.
Account settings are set on a per-Region basis.
"""
def put_account_setting_default(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutAccountSettingDefault", input, options)
end
@doc """
Create or update an attribute on an Amazon ECS resource.
If the attribute doesn't exist, it's created. If the attribute exists, its value
is replaced with the specified value. To delete an attribute, use
`DeleteAttributes`. For more information, see
[Attributes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html#attributes)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def put_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutAttributes", input, options)
end
@doc """
Modifies the available capacity providers and the default capacity provider
strategy for a cluster.
You must specify both the available capacity providers and a default capacity
provider strategy for the cluster. If the specified cluster has existing
capacity providers associated with it, you must specify all existing capacity
providers in addition to any new ones you want to add. Any existing capacity
providers that are associated with a cluster that are omitted from a
`PutClusterCapacityProviders` API call will be disassociated from the cluster.
You can only disassociate an existing capacity provider from a cluster if it's
not being used by any existing tasks.
When creating a service or running a task on a cluster, if no capacity provider
or launch type is specified, then the cluster's default capacity provider
strategy is used. We recommend that you define a default capacity provider
strategy for your cluster. However, you must specify an empty array (`[]`) to
bypass defining a default strategy.
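
A call sketch (assuming a configured `client`; the values are illustrative):

    input = %{
      "cluster" => "default",
      "capacityProviders" => ["FARGATE", "FARGATE_SPOT"],
      "defaultCapacityProviderStrategy" => [
        %{"capacityProvider" => "FARGATE", "weight" => 1}
      ]
    }

    {:ok, _result, _response} = AWS.ECS.put_cluster_capacity_providers(client, input)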
"""
def put_cluster_capacity_providers(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutClusterCapacityProviders", input, options)
end
@doc """
This action is only used by the Amazon ECS agent, and it is not intended for use
outside of the agent.
Registers an EC2 instance into the specified cluster. This instance becomes
available to place containers on.
"""
def register_container_instance(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RegisterContainerInstance", input, options)
end
@doc """
Registers a new task definition from the supplied `family` and
`containerDefinitions`.
Optionally, you can add data volumes to your containers with the `volumes`
parameter. For more information about task definition parameters and defaults,
see [Amazon ECS Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_defintions.html)
in the *Amazon Elastic Container Service Developer Guide*.
You can specify an IAM role for your task with the `taskRoleArn` parameter. When
you specify an IAM role for a task, its containers can then use the latest
versions of the CLI or SDKs to make API requests to the Amazon Web Services
services that are specified in the IAM policy that's associated with the role.
For more information, see [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html)
in the *Amazon Elastic Container Service Developer Guide*.
You can specify a Docker networking mode for the containers in your task
definition with the `networkMode` parameter. The available network modes
correspond to those described in [Network settings](https://docs.docker.com/engine/reference/run/#/network-settings) in
the Docker run reference. If you specify the `awsvpc` network mode, the task is
allocated an elastic network interface, and you must specify a
`NetworkConfiguration` when you create a service or run a task with the task
definition. For more information, see [Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def register_task_definition(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RegisterTaskDefinition", input, options)
end
@doc """
Starts a new task using the specified task definition.
You can allow Amazon ECS to place tasks for you, or you can customize how Amazon
ECS places tasks using placement constraints and placement strategies. For more
information, see [Scheduling Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html)
in the *Amazon Elastic Container Service Developer Guide*.
Alternatively, you can use `StartTask` to use your own scheduler or place tasks
manually on specific container instances.
The Amazon ECS API follows an eventual consistency model. This is because of the
distributed nature of the system supporting the API. This means that the result
of an API command you run that affects your Amazon ECS resources might not be
immediately visible to all subsequent commands you run. Keep this in mind when
you carry out an API command that immediately follows a previous API command.
To manage eventual consistency, you can do the following:
* Confirm the state of the resource before you run a command to
modify it. Run the DescribeTasks command using an exponential backoff algorithm
to ensure that you allow enough time for the previous command to propagate
through the system. To do this, run the DescribeTasks command repeatedly,
starting with a couple of seconds of wait time and increasing gradually up to
five minutes of wait time.
* Add wait time between subsequent commands, even if the
DescribeTasks command returns an accurate response. Apply an exponential backoff
algorithm starting with a couple of seconds of wait time, and increase gradually
up to about five minutes of wait time.
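
A sketch of the polling approach with exponential backoff (assuming a
configured `client`; the helper below is illustrative, not part of this
module):

    defp wait_for_task(client, cluster, task_arn, delay \\ 2_000) do
      input = %{"cluster" => cluster, "tasks" => [task_arn]}

      case AWS.ECS.describe_tasks(client, input) do
        {:ok, %{"tasks" => [%{"lastStatus" => "RUNNING"}]}, _response} ->
          :ok

        _not_ready when delay < 300_000 ->
          Process.sleep(delay)
          wait_for_task(client, cluster, task_arn, delay * 2)

        _timed_out ->
          {:error, :timeout}
      end
    end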
"""
def run_task(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RunTask", input, options)
end
@doc """
Starts a new task from the specified task definition on the specified container
instance or instances.
Alternatively, you can use `RunTask` to place tasks for you. For more
information, see [Scheduling Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def start_task(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StartTask", input, options)
end
@doc """
Stops a running task.
Any tags associated with the task will be deleted.
When `StopTask` is called on a task, the equivalent of `docker stop` is issued
to the containers running in the task. This results in a `SIGTERM` value and a
default 30-second timeout, after which the `SIGKILL` value is sent and the
containers are forcibly stopped. If the container handles the `SIGTERM` value
gracefully and exits within 30 seconds from receiving it, no `SIGKILL` value is
sent.
The default 30-second timeout can be configured on the Amazon ECS container
agent with the `ECS_CONTAINER_STOP_TIMEOUT` variable. For more information, see
[Amazon ECS Container Agent Configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def stop_task(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StopTask", input, options)
end
@doc """
This action is only used by the Amazon ECS agent, and it is not intended for use
outside of the agent.
Sent to acknowledge that an attachment changed states.
"""
def submit_attachment_state_changes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "SubmitAttachmentStateChanges", input, options)
end
@doc """
This action is only used by the Amazon ECS agent, and it is not intended for use
outside of the agent.
Sent to acknowledge that a container changed states.
"""
def submit_container_state_change(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "SubmitContainerStateChange", input, options)
end
@doc """
This action is only used by the Amazon ECS agent, and it is not intended for use
outside of the agent.
Sent to acknowledge that a task changed states.
"""
def submit_task_state_change(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "SubmitTaskStateChange", input, options)
end
@doc """
Associates the specified tags to a resource with the specified `resourceArn`.
If existing tags on a resource aren't specified in the request parameters, they
aren't changed. When a resource is deleted, the tags that are associated with
that resource are deleted as well.
"""
def tag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "TagResource", input, options)
end
@doc """
Deletes specified tags from a resource.
"""
def untag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UntagResource", input, options)
end
@doc """
Modifies the parameters for a capacity provider.
"""
def update_capacity_provider(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateCapacityProvider", input, options)
end
@doc """
Updates the cluster.
"""
def update_cluster(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateCluster", input, options)
end
@doc """
Modifies the settings to use for a cluster.
"""
def update_cluster_settings(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateClusterSettings", input, options)
end
@doc """
Updates the Amazon ECS container agent on a specified container instance.
Updating the Amazon ECS container agent doesn't interrupt running tasks or
services on the container instance. The process for updating the agent differs
depending on whether your container instance was launched with the Amazon
ECS-optimized AMI or another operating system.
The `UpdateContainerAgent` API isn't supported for container instances using the
Amazon ECS-optimized Amazon Linux 2 (arm64) AMI. To update the container agent,
you can update the `ecs-init` package. This updates the agent. For more
information, see [Updating the Amazon ECS container agent](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/agent-update-ecs-ami.html)
in the *Amazon Elastic Container Service Developer Guide*.
The `UpdateContainerAgent` API requires an Amazon ECS-optimized AMI or Amazon
Linux AMI with the `ecs-init` service installed and running. For help updating
the Amazon ECS container agent on other operating systems, see [Manually updating the Amazon ECS container
agent](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-update.html#manually_update_agent)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def update_container_agent(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateContainerAgent", input, options)
end
@doc """
Modifies the status of an Amazon ECS container instance.
Once a container instance has reached an `ACTIVE` state, you can change the
status of a container instance to `DRAINING` to manually remove an instance from
a cluster, for example to perform system updates, update the Docker daemon, or
scale down the cluster size.
A container instance can't be changed to `DRAINING` until it has reached an
`ACTIVE` status. If the instance is in any other status, an error is returned.
When you set a container instance to `DRAINING`, Amazon ECS prevents new tasks
from being scheduled for placement on the container instance and replacement
service tasks are started on other container instances in the cluster if the
resources are available. Service tasks on the container instance that are in the
`PENDING` state are stopped immediately.
Service tasks on the container instance that are in the `RUNNING` state are
stopped and replaced according to the service's deployment configuration
parameters, `minimumHealthyPercent` and `maximumPercent`. You can change the
deployment configuration of your service using `UpdateService`.
* If `minimumHealthyPercent` is below 100%, the scheduler can ignore
`desiredCount` temporarily during task replacement. For example, if
`desiredCount` is four tasks, a minimum of 50% allows the scheduler to stop two
existing tasks
before starting two new tasks. If the minimum is 100%, the service scheduler
can't remove existing tasks until the replacement tasks are considered healthy.
Tasks for services that do not use a load balancer are considered healthy if
they're in the `RUNNING` state. Tasks for services that use a load balancer are
considered healthy if they're in the `RUNNING` state and the container instance
they're hosted on is reported as healthy by the load balancer.
* The `maximumPercent` parameter represents an upper limit on the
number of running tasks during task replacement. You can use this to define the
replacement batch size. For example, if `desiredCount` is four tasks, a maximum
of 200% starts four new tasks before stopping the four tasks to be drained,
provided that the cluster resources required to do this are available. If the
maximum is 100%, then replacement tasks can't start until the draining tasks
have stopped.
Any `PENDING` or `RUNNING` tasks that do not belong to a service aren't
affected. You must wait for them to finish or stop them manually.
A container instance has completed draining when it has no more `RUNNING` tasks.
You can verify this using `ListTasks`.
When a container instance has been drained, you can set a container instance to
`ACTIVE` status and once it has reached that status the Amazon ECS scheduler can
begin scheduling tasks on the instance again.
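
## Example

A hedged sketch of draining a single instance; `client` is an already-built
`%AWS.Client{}` and the cluster name and ARN below are placeholders:

    input = %{
      "cluster" => "my-cluster",
      "containerInstances" => ["arn:aws:ecs:us-east-1:012345678910:container-instance/example"],
      "status" => "DRAINING"
    }
    update_container_instances_state(client, input)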
"""
def update_container_instances_state(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateContainerInstancesState", input, options)
end
@doc """
Updating the task placement strategies and constraints on an Amazon ECS service
remains in preview and is a Beta Service as defined by and subject to the Beta
Service Participation Service Terms located at
[https://aws.amazon.com/service-terms](https://aws.amazon.com/service-terms) ("Beta Terms").
These Beta Terms apply to your participation in this preview.
Modifies the parameters of a service.
For services using the rolling update (`ECS`) deployment controller, the desired
count, deployment configuration, network configuration, task placement
constraints and strategies, or task definition used can be updated.
For services using the blue/green (`CODE_DEPLOY`) deployment controller, only
the desired count, deployment configuration, task placement constraints and
strategies, and health check grace period can be updated using this API. If the
network configuration, platform version, or task definition need to be updated,
a new CodeDeploy deployment is created. For more information, see
[CreateDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeployment.html)
in the *CodeDeploy API Reference*.
For services using an external deployment controller, you can update only the
desired count, task placement constraints and strategies, and health check grace
period using this API. If the launch type, load balancer, network configuration,
platform version, or task definition need to be updated, create a new task set.
For more information, see `CreateTaskSet`.
You can add to or subtract from the number of instantiations of a task
definition in a service by specifying the cluster that the service is running in
and a new `desiredCount` parameter.
If you have updated the Docker image of your application, you can create a new
task definition with that image and deploy it to your service. The service
scheduler uses the minimum healthy percent and maximum percent parameters (in
the service's deployment configuration) to determine the deployment strategy.
If your updated Docker image uses the same tag as what is in the existing task
definition for your service (for example, `my_image:latest`), you don't need to
create a new revision of your task definition. You can update the service using
the `forceNewDeployment` option. The new tasks launched by the deployment pull
the current image/tag combination from your repository when they start.
You can also update the deployment configuration of a service. When a deployment
is triggered by updating the task definition of a service, the service scheduler
uses the deployment configuration parameters, `minimumHealthyPercent` and
`maximumPercent`, to determine the deployment strategy.
* If `minimumHealthyPercent` is below 100%, the scheduler can ignore
`desiredCount` temporarily during a deployment. For example, if `desiredCount`
is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks
before starting two new tasks. Tasks for services that don't use a load balancer
are considered healthy if they're in the `RUNNING` state. Tasks for services
that use a load balancer are considered healthy if they're in the `RUNNING`
state and the container instance they're hosted on is reported as healthy by the
load balancer.
* The `maximumPercent` parameter represents an upper limit on the
number of running tasks during a deployment. You can use it to define the
deployment batch size. For example, if `desiredCount` is four tasks, a maximum
of 200% starts four new tasks before stopping the four older tasks (provided
that the cluster resources required to do this are available).
When `UpdateService` stops a task during a deployment, the equivalent of `docker
stop` is issued to the containers running in the task. This results in a
`SIGTERM` and a 30-second timeout. After this, `SIGKILL` is sent and the
containers are forcibly stopped. If the container handles the `SIGTERM`
gracefully and exits within 30 seconds from receiving it, no `SIGKILL` is sent.
When the service scheduler launches new tasks, it determines task placement in
your cluster with the following logic.
* Determine which of the container instances in your cluster can
support your service's task definition. For example, they have the required CPU,
memory, ports, and container instance attributes.
  * By default, the service scheduler attempts to balance tasks across
Availability Zones in this manner (although you can choose a different
placement strategy):
* Sort the valid container instances by the fewest
number of running tasks for this service in the same Availability Zone as the
instance. For example, if zone A has one running service task and zones B and C
each have zero, valid container instances in either zone B or C are considered
optimal for placement.
* Place the new service task on a valid container
instance in an optimal Availability Zone (based on the previous steps), favoring
container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance
across the Availability Zones in your cluster using the following logic:
* Sort the container instances by the largest number of running
tasks for this service in the same Availability Zone as the instance. For
example, if zone A has one running service task and zones B and C each have two,
container instances in either zone B or C are considered optimal for
termination.
* Stop the task on a container instance in an optimal Availability
Zone (based on the previous steps), favoring container instances with the
largest number of running tasks for this service.
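
## Example

A hedged sketch that forces a new deployment of the current task definition;
`client` is an already-built `%AWS.Client{}` and the names are placeholders:

    input = %{
      "cluster" => "my-cluster",
      "service" => "my-service",
      "forceNewDeployment" => true
    }
    update_service(client, input)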
"""
def update_service(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateService", input, options)
end
@doc """
Modifies which task set in a service is the primary task set.
Any parameters that are updated on the primary task set in a service will
transition to the service. This is used when a service uses the `EXTERNAL`
deployment controller type. For more information, see [Amazon ECS Deployment Types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html)
in the *Amazon Elastic Container Service Developer Guide*.
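
## Example

A hedged sketch; `client` is an already-built `%AWS.Client{}` and the
names and ARN below are placeholders:

    input = %{
      "cluster" => "my-cluster",
      "service" => "my-service",
      "primaryTaskSet" => "arn:aws:ecs:us-east-1:012345678910:task-set/example"
    }
    update_service_primary_task_set(client, input)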
"""
def update_service_primary_task_set(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateServicePrimaryTaskSet", input, options)
end
@doc """
Modifies a task set.
This is used when a service uses the `EXTERNAL` deployment controller type. For
more information, see [Amazon ECS Deployment Types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html)
in the *Amazon Elastic Container Service Developer Guide*.
"""
def update_task_set(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateTaskSet", input, options)
end
end
# source: lib/aws/generated/ecs.ex
defmodule Synacor.Token do
@moduledoc """
Tokenize binary data into instructions
"""
@doc """
Get the value at the given memory address
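
## Example

Reading the second little-endian 16-bit word of a two-word binary:

    iex> Synacor.Token.get_value(1, <<1::little-16, 2::little-16>>)
    2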
"""
def get_value(offset, bin) do
skip = offset * 2
<<_skip::binary-size(skip), value::little-integer-size(16), _rest::binary>> = bin
value
end
@doc """
Put the value at the given memory address
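
## Example

Overwriting the first 16-bit word of a two-word binary:

    iex> Synacor.Token.put_value(7, 0, <<1::little-16, 2::little-16>>)
    <<7, 0, 2, 0>>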
"""
def put_value(value, offset, bin) do
skip = offset * 2
# Use a distinct name for the matched prefix so it does not shadow the
# `skip` byte count used as the size of the match
<<prefix::binary-size(skip), _old::little-integer-size(16), rest::binary>> = bin
prefix <> <<value::little-integer-size(16)>> <> rest
end
@doc """
Parse the instruction at the given offset
"""
def get_instruction(offset, bin) do
skip = offset * 2
<<_skip::binary-size(skip), rest::binary>> = bin
{op, _rest} = next_token(rest)
op
end
def disassemble_file(input_path, output_path) do
input_path
|> File.read!
|> disassemble(output_path)
end
@doc """
Convert a binary to assembly file
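
A hedged sketch (the file paths are illustrative):

    "challenge.bin"
    |> File.read!()
    |> Synacor.Token.disassemble("challenge.asm")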
"""
def disassemble(bin, output_path, annotations \\ Map.new()) do
result =
bin
|> analyze
|> Enum.reduce([], &merge_outs/2)
|> Enum.reverse
|> Enum.map(&(op_to_string(&1, annotations)))
File.write!(output_path, result)
end
@doc """
Generate a list of instructions from a file
"""
def analyze_file(path) do
path
|> File.read!
|> analyze
end
@doc """
Generate a list of instructions from a binary
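
## Example

Opcode 19 (`out`) takes one argument and opcode 21 (`noop`) takes none, so:

    iex> Synacor.Token.analyze(<<19::little-16, 65::little-16, 21::little-16>>)
    [{0, {:out, {:value, 65}}}, {2, {:noop}}]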
"""
def analyze(bin) do
bin
|> analyze(0, [])
end
defp analyze(<<>>, _pc, acc), do: Enum.reverse(acc)
defp analyze(bin, pc, acc) do
{op, rest} = next_token(bin)
inc = case op do
{:unknown, _} -> 1
value -> tuple_size(value)
end
analyze(rest, pc + inc, [{pc, op} | acc])
end
# {opcode_name, opcode_value, number of args}
@opcodes [
{:halt, 0, 0},
{:set, 1, 2},
{:push, 2, 1},
{:pop, 3, 1},
{:eq, 4, 3},
{:gt, 5, 3},
{:jmp, 6, 1},
{:jt, 7, 2},
{:jf, 8, 2},
{:add, 9, 3},
{:mult, 10, 3},
{:mod, 11, 3},
{:and, 12, 3},
{:or, 13, 3},
{:not, 14, 2},
{:rmem, 15, 2},
{:wmem, 16, 2},
{:call, 17, 1},
{:ret, 18, 0},
{:out, 19, 1},
{:in, 20, 1},
{:noop, 21, 0}
]
for {name, value, args} <- @opcodes do
IO.puts "Adding #{inspect name}: #{inspect value}"
defp next_token(<<unquote(value)::little-integer-size(16), rest::binary>>) do
# IO.puts "Parsed: #{inspect unquote(name)}, #{inspect unquote(value)}, #{inspect unquote(args)}"
take_args(unquote(name), unquote(args), rest)
end
end
defp next_token(<<v::little-integer-size(16), rest::binary>>) do
# IO.puts "Unknown opcode: #{inspect v}"
{{:unknown, [v]}, rest}
end
defp next_token(<<>>) do
{{:end_of_stream}, <<>>}
end
for {name, _value, args} <- @opcodes do
def instruction_length(unquote(name)), do: 1 + unquote(args)
end
def instruction_length(_), do: 1
defp take_args(name, 0, rest), do: {{name}, rest}
defp take_args(name, 1, <<v::little-integer-size(16), rest::binary>>), do: {{name, value(v)}, rest}
defp take_args(name, 2, <<v::little-integer-size(16), v1::little-integer-size(16), rest::binary>>), do: {{name, value(v), value(v1)}, rest}
defp take_args(name, 3, <<v::little-integer-size(16), v1::little-integer-size(16), v2::little-integer-size(16), rest::binary>>), do: {{name, value(v), value(v1), value(v2)}, rest}
defp value(v) when v <= 32767, do: {:value, v}
defp value(v) when v >= 32768 and v <= 32775, do: {:reg, v - 32768}
defp merge_outs({line, {:out, {:value, ?\n}}}, acc) do
[{line, {:out_newline, {:value, :newline}}} | acc]
end
defp merge_outs({_line, {:out, {:value, x}}}, [{line, {:out, {:value, y}}} | rest]) do
[{line, {:out, {:value, y ++ [x]}}} | rest]
end
defp merge_outs({line, {:out, {:value, x}}}, acc) do
[{line, {:out, {:value, [x]}}} | acc]
end
defp merge_outs(op, acc), do: [op | acc]
defp op_to_string({line, op}, annotations) do
line_num = line |> Integer.to_string |> String.pad_leading(5, "0")
str = "[#{line_num}] #{inspect op}"
str = case Map.get(annotations, line) do
nil -> str
result -> str <> "\t\t\t\t\# #{result}"
end
str <> "\n"
end
end
# source: lib/synacor/token.ex
defmodule Bolt.Sips.Internals.BoltProtocolHelper do
@moduledoc false
alias Bolt.Sips.Internals.PackStream.Message
alias Bolt.Sips.Internals.Error
@recv_timeout 10_000
@zero_chunk <<0x00, 0x00>>
@summary ~w(success ignored failure)a
@doc """
Sends a message using the Bolt protocol and PackStream encoding.
Messages have to be in the form `{message_type, [data]}`.
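
## Example

A hedged sketch, assuming an open `:gen_tcp` socket and Bolt v1:

    send_message(:gen_tcp, port, 1, {:reset, []})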
"""
@spec send_message(atom(), port(), integer(), Bolt.Sips.Internals.PackStream.Message.raw()) ::
:ok | {:error, any()}
def send_message(transport, port, bolt_version, message) do
message
|> Message.encode(bolt_version)
|> (fn data -> transport.send(port, data) end).()
end
@doc """
Receives data.
This function is supposed to be called after a request to the server has been
made. It receives data chunks, mends them (if they were split between frames)
and decodes them using PackStream.
When a single message is received (e.g. to acknowledge a command), this
function returns a two-element tuple: the signature and the message data
itself. If several messages are received, a list of such tuples is returned.
Likewise for the data inside a message: a single data point is returned by
itself, while multiple data points are returned as a list.
The signature is represented as one of the following:
* `:success`
* `:record`
* `:ignored`
* `:failure`
## Options
See "Shared options" in the documentation of this module.
"""
@spec receive_data(atom(), port(), integer(), Keyword.t(), list()) ::
{atom(), Bolt.Sips.Internals.PackStream.value()}
| {:error, any()}
| Bolt.Sips.Internals.Error.t()
def receive_data(transport, port, bolt_version, options \\ [], previous \\ []) do
with {:ok, data} <- do_receive_data(transport, port, options) do
case Message.decode(data, bolt_version) do
{:record, _} = data ->
receive_data(transport, port, bolt_version, options, [data | previous])
{status, _} = data when status in @summary and previous == [] ->
data
{status, _} = data when status in @summary ->
Enum.reverse([data | previous])
other ->
{:error, Error.exception(other, port, :receive_data)}
end
else
other ->
# Should be the line below to have a cleaner typespec
# Keep the old return value to not break usage
# {:error, Error.exception(other, port, :receive_data)}
Error.exception(other, port, :receive_data)
end
end
@spec do_receive_data(atom(), port(), Keyword.t()) :: {:ok, binary()} | {:error, any()}
defp do_receive_data(transport, port, options) do
recv_timeout = get_recv_timeout(options)
case transport.recv(port, 2, recv_timeout) do
{:ok, <<chunk_size::16>>} ->
do_receive_data_(transport, port, chunk_size, options, <<>>)
other ->
other
end
end
@spec do_receive_data_(atom(), port(), integer(), Keyword.t(), binary()) ::
        {:ok, binary()} | Error.t()
defp do_receive_data_(transport, port, chunk_size, options, old_data) do
recv_timeout = get_recv_timeout(options)
with {:ok, data} <- transport.recv(port, chunk_size, recv_timeout),
{:ok, marker} <- transport.recv(port, 2, recv_timeout) do
case marker do
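# Bolt chunking: a 0x0000 marker ends the message, while any other
# 2-byte value gives the size of the next chunk to read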
@zero_chunk ->
{:ok, <<old_data::binary, data::binary>>}
<<chunk_size::16>> ->
data = <<old_data::binary, data::binary>>
do_receive_data_(transport, port, chunk_size, options, data)
end
else
other ->
Error.exception(other, port, :recv)
end
end
@doc """
Returns the receive timeout, as set in the options or defaulting to `10_000` ms.
"""
@spec get_recv_timeout(Keyword.t()) :: integer()
def get_recv_timeout(options) do
Keyword.get(options, :recv_timeout, @recv_timeout)
end
@doc """
Deal with message without data.
## Example
iex> BoltProtocolHelper.treat_simple_message(:reset, :gen_tcp, port, 1, [])
:ok
"""
@spec treat_simple_message(
Bolt.Sips.Internals.Message.out_signature(),
atom(),
port(),
integer(),
Keyword.t()
) :: :ok | Error.t()
def treat_simple_message(message, transport, port, bolt_version, options) do
send_message(transport, port, bolt_version, {message, []})
case receive_data(transport, port, bolt_version, options) do
{:success, %{}} ->
:ok
error ->
Error.exception(error, port, message)
end
end
end
# source: lib/bolt_sips/internals/bolt_protocol_helper.ex
defmodule Nebulex.Cache do
@moduledoc ~S"""
Cache's main interface; defines the cache abstraction layer which is
highly inspired by [Ecto](https://github.com/elixir-ecto/ecto).
A Cache maps to an underlying implementation, controlled by the
adapter. For example, Nebulex ships with a default adapter that
implements a local generational cache.
When used, the Cache expects the `:otp_app` and `:adapter` as options.
The `:otp_app` should point to an OTP application that has the cache
configuration. For example, the Cache:
defmodule MyCache do
use Nebulex.Cache,
otp_app: :my_app,
adapter: Nebulex.Adapters.Local
end
Could be configured with:
config :my_app, MyCache,
stats: true,
backend: :shards,
gc_interval: :timer.seconds(3600),
max_size: 200_000,
gc_cleanup_min_timeout: 10_000,
gc_cleanup_max_timeout: 900_000
Most of the configuration that goes into the `config` is specific
to the adapter. For this particular example, you can check
[`Nebulex.Adapters.Local`](https://hexdocs.pm/nebulex/Nebulex.Adapters.Local.html)
for more information. In spite of this, the following configuration values
are shared across all adapters:
* `:name` - The name of the Cache supervisor process (Optional). If it is
not passed within the options, the name of the cache module will be used
as the name by default.
* `:stats` - The stats are supposed to be handled by the adapters, hence,
it is recommended to check the adapters' documentation for supported
stats, config, and so on. Nevertheless, Nebulex built-in adapters
provide support for stats by setting the `:stats` option to `true`
(Defaults to `false`). You can get the stats info by calling
`Nebulex.Cache.Stats.info(cache_or_name)` at any time. For more
information, See `Nebulex.Cache.Stats`.
**NOTE:** It is highly recommended to check the adapters' documentation.
## Distributed topologies
Nebulex provides the following adapters for distributed topologies:
* `Nebulex.Adapters.Partitioned` - Partitioned cache topology.
* `Nebulex.Adapters.Replicated` - Replicated cache topology.
These adapters work more as wrappers for an existing local adapter and provide
the distributed topology on top of it. Optionally, you can set the adapter for
the primary cache storage with the option `:primary_storage_adapter`. Defaults
to `Nebulex.Adapters.Local`.
"""
@type t :: module
@typedoc "Cache entry key"
@type key :: any
@typedoc "Cache entry value"
@type value :: any
@typedoc "Cache entries"
@type entries :: map | [{key, value}]
@typedoc "Cache action options"
@type opts :: Keyword.t()
@doc false
defmacro __using__(opts) do
quote bind_quoted: [opts: opts] do
@behaviour Nebulex.Cache
alias Nebulex.Hook
alias Nebulex.Cache.{Entry, Persistence, Queryable, Stats, Transaction}
{otp_app, adapter, behaviours} = Nebulex.Cache.Supervisor.compile_config(opts)
@otp_app otp_app
@adapter adapter
@opts opts
@default_dynamic_cache opts[:default_dynamic_cache] || __MODULE__
@before_compile adapter
## Config and metadata
@impl true
def config do
{:ok, config} = Nebulex.Cache.Supervisor.runtime_config(__MODULE__, @otp_app, [])
config
end
@impl true
def __adapter__, do: @adapter
## Process lifecycle
@doc false
def child_spec(opts) do
%{
id: __MODULE__,
start: {__MODULE__, :start_link, [opts]},
type: :supervisor
}
end
@impl true
def start_link(opts \\ []) do
Nebulex.Cache.Supervisor.start_link(__MODULE__, @otp_app, @adapter, opts)
end
@impl true
def stop(timeout \\ 5000) do
Supervisor.stop(get_dynamic_cache(), :normal, timeout)
end
@compile {:inline, get_dynamic_cache: 0}
@impl true
def get_dynamic_cache do
Process.get({__MODULE__, :dynamic_cache}, @default_dynamic_cache)
end
@impl true
def put_dynamic_cache(dynamic) when is_atom(dynamic) or is_pid(dynamic) do
Process.put({__MODULE__, :dynamic_cache}, dynamic) || @default_dynamic_cache
end
@impl true
def with_dynamic_cache(name, fun) do
default_dynamic_cache = get_dynamic_cache()
try do
_ = put_dynamic_cache(name)
fun.()
after
_ = put_dynamic_cache(default_dynamic_cache)
end
end
## Entries
@impl true
def get(key, opts \\ []) do
Entry.get(get_dynamic_cache(), key, opts)
end
@impl true
def get!(key, opts \\ []) do
Entry.get!(get_dynamic_cache(), key, opts)
end
@impl true
def get_all(keys, opts \\ []) do
Entry.get_all(get_dynamic_cache(), keys, opts)
end
@impl true
def put(key, value, opts \\ []) do
Entry.put(get_dynamic_cache(), key, value, opts)
end
@impl true
def put_all(entries, opts \\ []) do
Entry.put_all(get_dynamic_cache(), entries, opts)
end
@impl true
def put_new(key, value, opts \\ []) do
Entry.put_new(get_dynamic_cache(), key, value, opts)
end
@impl true
def put_new!(key, value, opts \\ []) do
Entry.put_new!(get_dynamic_cache(), key, value, opts)
end
@impl true
def put_new_all(entries, opts \\ []) do
Entry.put_new_all(get_dynamic_cache(), entries, opts)
end
@impl true
def replace(key, value, opts \\ []) do
Entry.replace(get_dynamic_cache(), key, value, opts)
end
@impl true
def replace!(key, value, opts \\ []) do
Entry.replace!(get_dynamic_cache(), key, value, opts)
end
@impl true
def delete(key, opts \\ []) do
Entry.delete(get_dynamic_cache(), key, opts)
end
@impl true
def take(key, opts \\ []) do
Entry.take(get_dynamic_cache(), key, opts)
end
@impl true
def take!(key, opts \\ []) do
Entry.take!(get_dynamic_cache(), key, opts)
end
@impl true
def has_key?(key) do
Entry.has_key?(get_dynamic_cache(), key)
end
@impl true
def get_and_update(key, fun, opts \\ []) do
Entry.get_and_update(get_dynamic_cache(), key, fun, opts)
end
@impl true
def update(key, initial, fun, opts \\ []) do
Entry.update(get_dynamic_cache(), key, initial, fun, opts)
end
@impl true
def incr(key, incr \\ 1, opts \\ []) do
Entry.incr(get_dynamic_cache(), key, incr, opts)
end
@impl true
def ttl(key) do
Entry.ttl(get_dynamic_cache(), key)
end
@impl true
def expire(key, ttl) do
Entry.expire(get_dynamic_cache(), key, ttl)
end
@impl true
def touch(key) do
Entry.touch(get_dynamic_cache(), key)
end
@impl true
def size do
Entry.size(get_dynamic_cache())
end
@impl true
def flush do
Entry.flush(get_dynamic_cache())
end
## Queryable
if Nebulex.Adapter.Queryable in behaviours do
@impl true
def all(query \\ nil, opts \\ []) do
Queryable.all(get_dynamic_cache(), query, opts)
end
@impl true
def stream(query \\ nil, opts \\ []) do
Queryable.stream(get_dynamic_cache(), query, opts)
end
end
## Persistence
if Nebulex.Adapter.Persistence in behaviours do
@impl true
def dump(path, opts \\ []) do
Persistence.dump(get_dynamic_cache(), path, opts)
end
@impl true
def load(path, opts \\ []) do
Persistence.load(get_dynamic_cache(), path, opts)
end
end
## Transactions
if Nebulex.Adapter.Transaction in behaviours do
@impl true
def transaction(opts \\ [], fun) do
Transaction.transaction(get_dynamic_cache(), opts, fun)
end
@impl true
def in_transaction? do
Transaction.in_transaction?(get_dynamic_cache())
end
end
end
end
## User callbacks
@optional_callbacks init: 1
@doc """
A callback executed when the cache starts or when configuration is read.
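
## Example

A hedged sketch that tweaks the configuration at startup:

    @impl true
    def init(config) do
      {:ok, Keyword.put(config, :stats, true)}
    end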
"""
@callback init(config :: Keyword.t()) :: {:ok, Keyword.t()} | :ignore
## Nebulex.Adapter
@doc """
Returns the adapter tied to the cache.
"""
@callback __adapter__ :: Nebulex.Adapter.t()
@doc """
Returns the adapter configuration stored in the `:otp_app` environment.
If the `c:init/1` callback is implemented in the cache, it will be invoked.
"""
@callback config() :: Keyword.t()
@doc """
Starts a supervisor and returns `{:ok, pid}`, or just `:ok` if nothing
needs to be done.
Returns `{:error, {:already_started, pid}}` if the cache is already
started or `{:error, term}` in case anything else goes wrong.
## Options
See the configuration in the moduledoc for options shared between adapters,
for adapter-specific configuration see the adapter's documentation.
"""
@callback start_link(opts) ::
{:ok, pid}
| {:error, {:already_started, pid}}
| {:error, term}
@doc """
Shuts down the cache.
"""
@callback stop(timeout) :: :ok
@doc """
Returns the atom name or pid of the current cache
(based on Ecto dynamic repo).
See also `c:put_dynamic_cache/1`.
"""
@callback get_dynamic_cache() :: atom() | pid()
@doc """
Sets the dynamic cache to be used in further commands
(based on Ecto dynamic repo).
There might be cases where we want to have different cache instances but
accessing them through the same cache module. By default, when you call
`MyApp.Cache.start_link/1`, it will start a cache with the name
`MyApp.Cache`. But it is also possible to start multiple caches by using
a different name for each of them:
MyApp.Cache.start_link(name: :cache1)
MyApp.Cache.start_link(name: :cache2, backend: :shards)
However, once the cache is started, it is not possible to interact directly
with it, since all operations through `MyApp.Cache` are sent by default to
the cache named `MyApp.Cache`. But you can change the default cache at
compile-time:
use Nebulex.Cache, default_dynamic_cache: :cache_name
Or anytime at runtime by calling `put_dynamic_cache/1`:
MyApp.Cache.put_dynamic_cache(:another_cache_name)
From this moment on, all future commands performed by the current process
will run on `:another_cache_name`.
"""
@callback put_dynamic_cache(atom() | pid()) :: atom() | pid()
@doc """
Executes the function `fun` for the given dynamic cache.
## Example
MyCache.with_dynamic_cache(:cache_name, fn ->
MyCache.put("foo", "var")
end)
See `c:get_dynamic_cache/0` and `c:put_dynamic_cache/1`.
"""
@callback with_dynamic_cache(atom() | pid(), fun) :: term
@doc """
Gets a value from Cache where the key matches the given `key`.
Returns `nil` if no result was found.
## Options
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put("foo", "bar")
:ok
iex> MyCache.get("foo")
"bar"
iex> MyCache.get(:non_existent_key)
nil
"""
@callback get(key, opts) :: value
@doc """
Similar to `get/2` but raises `KeyError` if `key` is not found.
## Options
See the "Shared options" section at the module documentation for more options.
## Example
MyCache.get!(:a)
"""
@callback get!(key, opts) :: value
@doc """
Returns a `map` with all the key-value pairs in the Cache where the key
is in `keys`.
If `keys` contains keys that are not in the Cache, they're simply ignored.
## Options
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put_all([a: 1, c: 3])
:ok
iex> MyCache.get_all([:a, :b, :c])
%{a: 1, c: 3}
"""
@callback get_all(keys :: [key], opts) :: map
@doc """
Puts the given `value` under `key` into the Cache.
If `key` already holds an entry, it is overwritten. Any previous
time to live associated with the key is discarded on successful
`put` operation.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put("foo", "bar")
:ok
If the value is nil, then it is not stored (operation is skipped):
iex> MyCache.put("foo", nil)
:ok
Put key with time-to-live:
iex> MyCache.put("foo", "bar", ttl: 10_000)
:ok
Using Nebulex.Time for TTL:
iex> import Nebulex.Time
Nebulex.Time
iex> MyCache.put("foo", "bar", ttl: expiry_time(10))
:ok
iex> MyCache.put("foo", "bar", ttl: expiry_time(10, :minute))
:ok
iex> MyCache.put("foo", "bar", ttl: expiry_time(1, :hour))
:ok
"""
@callback put(key, value, opts) :: :ok
@doc """
Puts the given `entries` (key/value pairs) into the Cache. It replaces
existing values with new values (just as regular `put`).
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put_all(apples: 3, bananas: 1)
:ok
iex> MyCache.put_all(%{apples: 2, oranges: 1}, ttl: 10_000)
:ok
Ideally, this operation should be atomic, so all given keys are put at once.
But it depends purely on the adapter's implementation and the backend used
internally by the adapter. Hence, it is recommended to review the adapter's
documentation.
"""
@callback put_all(entries, opts) :: :ok
@doc """
Puts the given `value` under `key` into the cache, only if it does not
already exist.
Returns `true` if a value was set, otherwise, `false` is returned.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put_new("foo", "bar")
true
iex> MyCache.put_new("foo", "bar")
false
If the value is nil, it is not stored (operation is skipped):
iex> MyCache.put_new("other", nil)
true
"""
@callback put_new(key, value, opts) :: boolean
@doc """
Similar to `put_new/3` but raises `Nebulex.KeyAlreadyExistsError` if the
key already exists.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put_new!("foo", "bar")
true
"""
@callback put_new!(key, value, opts) :: true
@doc """
Puts the given `entries` (key/value pairs) into the `cache`. It will not
perform any operation at all even if just a single key already exists.
Returns `true` if all entries were successfully set. It returns `false`
if no key was set (at least one key already existed).
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put_new_all(apples: 3, bananas: 1)
true
iex> MyCache.put_new_all(%{apples: 3, oranges: 1}, ttl: 10_000)
false
Ideally, this operation should be atomic, so all given keys are put at once.
But it depends purely on the adapter's implementation and the backend used
internally by the adapter. Hence, it is recommended to review the adapter's
documentation.
"""
@callback put_new_all(entries, opts) :: boolean
@doc """
Alters the entry stored under `key`, but only if the entry already exists
in the Cache.
Returns `true` if a value was set, otherwise, `false` is returned.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.replace("foo", "bar")
false
iex> MyCache.put_new("foo", "bar")
true
iex> MyCache.replace("foo", "bar2")
true
Update current value and TTL:
iex> MyCache.replace("foo", "bar3", ttl: 10_000)
true
"""
@callback replace(key, value, opts) :: boolean
@doc """
Similar to `replace/3` but raises `KeyError` if `key` is not found.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.replace!("foo", "bar")
true
"""
@callback replace!(key, value, opts) :: true
@doc """
Deletes the entry in Cache for a specific `key`.
## Options
See the "Shared options" section at the module documentation for more options.
## Example
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.delete(:a)
:ok
iex> MyCache.get(:a)
nil
iex> MyCache.delete(:non_existent_key)
:ok
"""
@callback delete(key, opts) :: :ok
@doc """
Returns and removes the value associated with `key` in the Cache.
If the `key` does not exist, then `nil` is returned.
## Options
See the "Shared options" section at the module documentation for more options.
## Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.take(:a)
1
iex> MyCache.take(:a)
nil
"""
@callback take(key, opts) :: value
@doc """
Similar to `take/2` but raises `KeyError` if `key` is not found.
## Options
See the "Shared options" section at the module documentation for more options.
## Example
MyCache.take!(:a)
"""
@callback take!(key, opts) :: value
@doc """
Returns whether the given `key` exists in the Cache.
## Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.has_key?(:a)
true
iex> MyCache.has_key?(:b)
false
"""
@callback has_key?(key) :: boolean
@doc """
Gets the value from `key` and updates it, all in one pass.
`fun` is called with the current cached value under `key` (or `nil`
if `key` hasn't been cached) and must return a two-element tuple:
the "get" value (the retrieved value, which can be operated on before
being returned) and the new value to be stored under `key`. `fun` may
also return `:pop`, which means the current value shall be removed
from Cache and returned.
The returned value is a tuple with the "get" value returned by
`fun` and the new updated value under `key`.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Examples
Update nonexistent key:
iex> MyCache.get_and_update(:a, fn current_value ->
...> {current_value, "value!"}
...> end)
{nil, "value!"}
Update existing key:
iex> MyCache.get_and_update(:a, fn current_value ->
...> {current_value, "new value!"}
...> end)
{"value!", "new value!"}
Pop/remove value if exist:
iex> MyCache.get_and_update(:a, fn _ -> :pop end)
{"new value!", nil}
Pop/remove nonexistent key:
iex> MyCache.get_and_update(:b, fn _ -> :pop end)
{nil, nil}
"""
@callback get_and_update(key, (value -> {get, update} | :pop), opts) :: {get, update}
when get: value, update: value
@doc """
Updates the cached `key` with the given function.
If `key` is present in Cache with value `value`, `fun` is invoked with
argument `value` and its result is used as the new value of `key`.
If `key` is not present in Cache, `initial` is inserted as the value of `key`.
The initial value will not be passed through the update function.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Examples
iex> MyCache.update(:a, 1, &(&1 * 2))
1
iex> MyCache.update(:a, 1, &(&1 * 2))
2
"""
@callback update(key, initial :: value, (value -> value), opts) :: value
@doc """
Increments or decrements the counter mapped to the given `key`.
If `incr >= 0` (a positive value), the current value is incremented by that
amount; otherwise (a negative value), the current value is decremented by the
same amount.
## Options
* `:ttl` - (positive integer or `:infinity`) Defines the time-to-live
(or expiry time) for the given key in **milliseconds**. Defaults
to `:infinity`.
See the "Shared options" section at the module documentation for more options.
## Examples
iex> MyCache.incr(:a)
1
iex> MyCache.incr(:a, 2)
3
iex> MyCache.incr(:a, -1)
2
"""
@callback incr(key, incr :: integer, opts) :: integer
@doc """
Returns the remaining time-to-live for the given `key`. If the `key` does not
exist, then `nil` is returned.
## Examples
iex> MyCache.put(:a, 1, ttl: 5000)
:ok
iex> MyCache.put(:b, 2)
:ok
iex> MyCache.ttl(:a)
_remaining_ttl
iex> MyCache.ttl(:b)
:infinity
iex> MyCache.ttl(:c)
nil
"""
@callback ttl(key) :: timeout | nil
@doc """
Returns `true` if the given `key` exists and the new `ttl` was successfully
updated, otherwise, `false` is returned.
## Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.expire(:a, 5)
true
iex> MyCache.expire(:a, :infinity)
true
iex> MyCache.expire(:b, 5)
false
"""
@callback expire(key, ttl :: timeout) :: boolean
@doc """
Returns `true` if the given `key` exists and the last access time was
successfully updated, otherwise, `false` is returned.
## Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.touch(:a)
true
iex> MyCache.touch(:b)
false
"""
@callback touch(key) :: boolean
@doc """
Returns the total number of cached entries.
## Examples
iex> :ok = Enum.each(1..10, &MyCache.put(&1, &1))
iex> MyCache.size()
10
iex> :ok = Enum.each(1..5, &MyCache.delete(&1))
iex> MyCache.size()
5
"""
@callback size() :: integer
@doc """
Flushes the cache and returns the number of evicted keys.
## Examples
iex> :ok = Enum.each(1..5, &MyCache.put(&1, &1))
iex> MyCache.flush()
5
iex> MyCache.size()
0
"""
@callback flush() :: integer
## Nebulex.Adapter.Queryable
@optional_callbacks all: 2, stream: 2
@doc """
Fetches all entries from cache matching the given `query`.
If the `query` is `nil`, it fetches all entries from cache; this is common
for all adapters. However, the `query` could be any other value, which
depends entirely on the adapter's implementation; see the "Query"
section below.
May raise `Nebulex.QueryError` if query validation fails.
## Options
* `:return` - Tells the query what to return from the matched entries.
The possible values are: `:key`, `:value`, and `:entry` (`{key, value}`
pairs). Defaults to `:key`. This option is supported by the built-in
adapters, but it is recommended to check the adapter's documentation
to confirm its compatibility with this option.
See the "Shared options" section at the module documentation for more options.
## Example
Populate the cache with some entries:
iex> :ok = Enum.each(1..5, &MyCache.put(&1, &1 * 2))
Fetch all (with default params):
iex> MyCache.all()
[1, 2, 3, 4, 5]
Fetch all entries and return values:
iex> MyCache.all(nil, return: :value)
[2, 4, 6, 8, 10]
Fetch all entries that match with the given query assuming we are using
`Nebulex.Adapters.Local` adapter:
iex> query = [{{:_, :"$1", :"$2", :_, :_}, [{:>, :"$2", 5}], [:"$1"]}]
iex> MyCache.all(query)
[3, 4, 5]
## Query
Query spec is defined by the adapter, hence, it is recommended to review
adapters documentation. For instance, the built-in `Nebulex.Adapters.Local`
adapter supports `nil | :unexpired | :expired | :ets.match_spec()` as query
value.
## Examples
Additional built-in queries for `Nebulex.Adapters.Local` adapter:
iex> unexpired = MyCache.all(:unexpired)
iex> expired = MyCache.all(:expired)
If we are using the `Nebulex.Adapters.Local` adapter, the stored entry is the
tuple `{:entry, key, value, version, expire_at}`, so the match spec could be
something like:
iex> spec = [{{:entry, :"$1", :"$2", :_, :_}, [{:>, :"$2", 5}], [{{:"$1", :"$2"}}]}]
iex> MyCache.all(spec)
[{3, 6}, {4, 8}, {5, 10}]
The same previous query but using `Ex2ms`:
iex> import Ex2ms
Ex2ms
iex> spec =
...> fun do
...> {_, key, value, _, _} when value > 5 -> {key, value}
...> end
iex> MyCache.all(spec)
[{3, 6}, {4, 8}, {5, 10}]
"""
@callback all(query :: term, opts) :: [any]
@doc """
Similar to `all/2` but returns a lazy enumerable that emits all entries
from the cache matching the given `query`.
May raise `Nebulex.QueryError` if query validation fails.
## Options
* `:return` - Tells the query what to return from the matched entries.
The possible values are: `:key`, `:value`, and `:entry` (`{key, value}`
pairs). Defaults to `:key`. This option is supported by the built-in
adapters, but it is recommended to check the adapter's documentation
to confirm its compatibility with this option.
* `:page_size` - Positive integer (>= 1) that defines the page size for
the stream (defaults to `10`).
See the "Shared options" section at the module documentation for more options.
## Examples
Populate the cache with some entries:
iex> :ok = Enum.each(1..5, &MyCache.put(&1, &1 * 2))
Stream all (with default params):
iex> MyCache.stream() |> Enum.to_list()
[1, 2, 3, 4, 5]
Stream all entries and return values:
iex> MyCache.stream(nil, return: :value, page_size: 3) |> Enum.to_list()
[2, 4, 6, 8, 10]
Additional built-in queries for `Nebulex.Adapters.Local` adapter:
iex> unexpired_stream = MyCache.stream(:unexpired)
iex> expired_stream = MyCache.stream(:expired)
If we are using the `Nebulex.Adapters.Local` adapter, the stored entry is the
tuple `{:entry, key, value, version, expire_at}`, so the match spec could be
something like:
iex> spec = [{{:entry, :"$1", :"$2", :_, :_}, [{:>, :"$2", 5}], [{{:"$1", :"$2"}}]}]
iex> MyCache.stream(spec, page_size: 3) |> Enum.to_list()
[{3, 6}, {4, 8}, {5, 10}]
The same previous query but using `Ex2ms`:
iex> import Ex2ms
Ex2ms
iex> spec =
...> fun do
...> {_, key, value, _, _} when value > 5 -> {key, value}
...> end
iex> spec |> MyCache.stream(page_size: 3) |> Enum.to_list()
[{3, 6}, {4, 8}, {5, 10}]
"""
@callback stream(query :: term, opts) :: Enum.t()
## Nebulex.Adapter.Persistence
@optional_callbacks dump: 2, load: 2
@doc """
Dumps a cache to the given file `path`.
Returns `:ok` if successful, or `{:error, reason}` if an error occurs.
## Options
This operation relies entirely on the adapter implementation, which means the
options depend on each of them. For that reason, it is recommended to review
the documentation of the adapter to be used. The built-in adapters inherit
the default implementation from `Nebulex.Adapter.Persistence`, hence, review
the available options there.
## Examples
Populate the cache with some entries:
iex> entries = for x <- 1..10, into: %{}, do: {x, x}
iex> MyCache.put_all(entries)
:ok
Dump cache to a file:
iex> MyCache.dump("my_cache")
:ok
"""
@callback dump(path :: Path.t(), opts) :: :ok | {:error, term}
@doc """
Loads a dumped cache from the given `path`.
Returns `:ok` if successful, or `{:error, reason}` if an error occurs.
## Options
Similar to `dump/2`, this operation relies entirely on the adapter
implementation, therefore, it is recommended to review the documentation
of the adapter to be used. Similarly, the built-in adapters inherit the
default implementation from `Nebulex.Adapter.Persistence`, hence, review
the available options there.
## Examples
Populate the cache with some entries:
iex> entries = for x <- 1..10, into: %{}, do: {x, x}
iex> MyCache.put_all(entries)
:ok
Dump cache to a file:
iex> MyCache.dump("my_cache")
:ok
Load the cache from a file:
iex> MyCache.load("my_cache")
:ok
"""
@callback load(path :: Path.t(), opts) :: :ok | {:error, term}
## Nebulex.Adapter.Transaction
@optional_callbacks transaction: 2, in_transaction?: 0
@doc """
Runs the given function inside a transaction.
A successful transaction returns the value returned by the function.
## Options
See the "Shared options" section at the module documentation for more options.
## Examples
MyCache.transaction fn ->
alice = MyCache.get(:alice)
bob = MyCache.get(:bob)
MyCache.put(:alice, %{alice | balance: alice.balance + 100})
MyCache.put(:bob, %{bob | balance: bob.balance + 100})
end
Locking only the involved key (recommended):
MyCache.transaction [keys: [:alice, :bob]], fn ->
alice = MyCache.get(:alice)
bob = MyCache.get(:bob)
MyCache.put(:alice, %{alice | balance: alice.balance + 100})
MyCache.put(:bob, %{bob | balance: bob.balance + 100})
end
"""
@callback transaction(opts, function :: fun) :: term
@doc """
Returns `true` if the current process is inside a transaction.
## Examples
MyCache.in_transaction?
#=> false
MyCache.transaction(fn ->
MyCache.in_transaction? #=> true
end)
"""
@callback in_transaction?() :: boolean
end
# source: lib/nebulex/cache.ex
defmodule APIacAuthBasic do
@behaviour Plug
@behaviour APIac.Authenticator
use Bitwise
@moduledoc """
An `APIac.Authenticator` plug for API authentication using the HTTP `Basic` scheme
The HTTP `Basic` scheme simply consists in transmitting a client and its password
in the `Authorization` HTTP header. It is base64-encoded:
```http
GET /api/accounts HTTP/1.1
Host: example.com
Authorization: Basic Y2xpZW50X2lkOmNsaWVudF9wYXNzd29yZA==
Accept: */*
```
The decoded value of `Y2xpZW50X2lkOmNsaWVudF9wYXNzd29yZA==` is `client_id:client_password`
This scheme is also sometimes called *APIKey* by some API managers.
## Security considerations
The password is transmitted in cleartext form (base64 is not an encryption scheme).
Therefore, you should only use this scheme on encrypted connections (HTTPS).
## Plug options
- `realm`: a mandatory `String.t` that conforms to the HTTP quoted-string syntax,
however without the surrounding quotes (which will be added automatically when
needed). Defaults to `default_realm`
- `callback`: a function that will return the password of a client. When a
callback is configured, it takes precedence over the clients in the config
files, which will not be used. The returned value can be:
- A cleartext password (`String.t`)
- An `Expwd.Hashed{}` (hashed password)
- `nil` if the client is not known
- `set_error_response`: function called when authentication failed. Defaults to
`APIacAuthBasic.send_error_response/3`
- `error_response_verbosity`: one of `:debug`, `:normal` or `:minimal`.
Defaults to `:normal`
## Application configuration
`{client_id, client_secret}` pairs can be configured in your application
configuration files. These will be compiled in at **compile time**. If you need
runtime configurability, use the `callback` option instead.
Storing cleartext passwords requires special care, for instance: using
*.secret.exs files, encrypted storage of these config files, etc. Consider
using hashed passwords instead, such as `%Expwd.Hashed{}`.
Pairs are to be set separately for each realm in the `clients` key, as follows:
``` elixir
config :apiac_auth_basic,
clients: %{
# using Expwd Hashed portable password
"realm_a" => [
{"client_1", "expwd:sha256:lYOmCIZUR603rPiIN0agzBHFyZDw9xEtETfbe6Q1ubU"},
{"client_2", "expwd:sha256:mnAWHn1tSHEOCj6sMDIrB9BTRuD4yZkiLbjx9x2i3ug"},
{"client_3", "expwd:sha256:9RYrMJSmXJSN4CSJZtOX0Xs+vP94meTaSzGc+oFcwqM"},
{"client_4", "expwd:sha256:aCL154jd8bNw868cbsCUw3skHun1n6fGYhBiITSmREw"},
{"client_5", "expwd:sha256:xSE6MkeC+gW7R/lEZKxsWGDs1MlqEV4u693fCBNlV4g"}
],
"realm_b" => [
{"client_1", "expwd:sha256:lYOmCIZUR603rPiIN0agzBHFyZDw9xEtETfbe6Q1ubU"}
],
# UNSAFE: cleartext passwords set directly in the config file
"realm_c" => [
{"client_6", "cleartext password"},
{"client_7", "cleartext password again"}
]
}
```
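
## Example

A minimal pipeline sketch (the callback module and function below are
illustrative, not part of this library):

``` elixir
plug APIacAuthBasic,
  realm: "realm_a",
  callback: &MyApp.ClientStore.get_password/2
```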
"""
@default_realm "default_realm"
@typedoc """
The callback function returns an `Expwd.Hashed.t()` or a client_secret (`String.t()`) so as
to prevent developers from [insecurely comparing passwords](https://codahale.com/a-lesson-in-timing-attacks/).
Return `nil` if the client could not be found for this realm.
"""
@type callback_fun ::
(APIac.realm(), APIac.client() -> Expwd.Hashed.t() | client_secret | nil)
@type client_secret :: String.t()
@doc """
Plug initialization callback
"""
@impl Plug
@spec init(Plug.opts()) :: Plug.opts()
def init(opts) do
if is_binary(opts[:realm]) and not APIac.rfc7230_quotedstring?("\"#{opts[:realm]}\""),
do: raise("Invalid realm string (do not conform with RFC7230 quoted string)")
realm = if opts[:realm], do: opts[:realm], else: @default_realm
opts
|> Enum.into(%{})
|> Map.put_new(:realm, @default_realm)
|> Map.put_new(:clients, Application.get_env(:apiac_auth_basic, :clients)[realm] || [])
|> Map.put_new(:callback, nil)
|> Map.put_new(:set_error_response, &APIacAuthBasic.send_error_response/3)
|> Map.put_new(:error_response_verbosity, :normal)
end
@doc """
Plug pipeline callback
"""
@impl Plug
@spec call(Plug.Conn.t(), Plug.opts()) :: Plug.Conn.t()
def call(conn, %{} = opts) do
if APIac.authenticated?(conn) do
conn
else
do_call(conn, opts)
end
end
def do_call(conn, opts) do
with {:ok, conn, credentials} <- extract_credentials(conn, opts),
{:ok, conn} <- validate_credentials(conn, credentials, opts) do
conn
else
{:error, conn, %APIac.Authenticator.Unauthorized{} = error} ->
opts[:set_error_response].(conn, error, opts)
end
end
@doc """
`APIac.Authenticator` credential extractor callback
Returns the credentials under the form `{client_id, client_secret}` where both
variables are binaries
"""
@impl APIac.Authenticator
def extract_credentials(conn, _opts) do
parse_authz_header(conn)
end
defp parse_authz_header(conn) do
case Plug.Conn.get_req_header(conn, "authorization") do
# Only one header value should be returned
# (https://stackoverflow.com/questions/29282578/multiple-http-authorization-headers)
["Basic " <> auth_token] ->
# rfc7235 syntax allows multiple spaces before the base64 token
case Base.decode64(String.trim_leading(auth_token, "\s")) do
{:ok, decodedbinary} ->
# nothing indicates we should trim extra whitespaces (a password could contain one, for instance)
case String.split(decodedbinary, ":", trim: false) do
[client_id, client_secret] ->
if not ctl_char?(client_id) and not ctl_char?(client_secret) do
{:ok, conn, {client_id, client_secret}}
else
{:error, conn,
%APIac.Authenticator.Unauthorized{
authenticator: __MODULE__,
reason: :invalid_client_id_or_client_secret
}}
end
_ ->
{:error, conn,
%APIac.Authenticator.Unauthorized{
authenticator: __MODULE__,
reason: :invalid_credential_format
}}
end
_ ->
{:error, conn,
%APIac.Authenticator.Unauthorized{
authenticator: __MODULE__,
reason: :invalid_credential_format
}}
end
_ ->
{:error, conn,
%APIac.Authenticator.Unauthorized{
authenticator: __MODULE__,
reason: :credentials_not_found
}}
end
end
defp ctl_char?(str) do
Regex.run(~r/[\x00-\x1F\x7F]/, str) != nil
end
@doc """
`APIac.Authenticator` credential validator callback
"""
@impl APIac.Authenticator
def validate_credentials(conn, {client_id, client_secret}, %{callback: callback} = opts)
when is_function(callback) do
case callback.(opts[:realm], client_id) do
nil ->
{:error, conn,
%APIac.Authenticator.Unauthorized{authenticator: __MODULE__, reason: :client_not_found}}
stored_client_secret ->
if Expwd.secure_compare(client_secret, stored_client_secret) == true do
conn =
conn
|> Plug.Conn.put_private(:apiac_authenticator, __MODULE__)
|> Plug.Conn.put_private(:apiac_client, client_id)
|> Plug.Conn.put_private(:apiac_realm, opts[:realm])
{:ok, conn}
else
{:error, conn,
%APIac.Authenticator.Unauthorized{
authenticator: __MODULE__,
reason: :invalid_client_secret
}}
end
end
end
@impl APIac.Authenticator
def validate_credentials(conn, {client_id, client_secret}, opts) do
case List.keyfind(opts[:clients], client_id, 0) do
nil ->
{:error, conn,
%APIac.Authenticator.Unauthorized{authenticator: __MODULE__, reason: :client_not_found}}
{_stored_client_id, stored_client_secret} ->
if Expwd.secure_compare(client_secret, stored_client_secret) == true do
conn =
conn
|> Plug.Conn.put_private(:apiac_authenticator, __MODULE__)
|> Plug.Conn.put_private(:apiac_client, client_id)
|> Plug.Conn.put_private(:apiac_realm, opts[:realm])
{:ok, conn}
else
{:error, conn,
%APIac.Authenticator.Unauthorized{
authenticator: __MODULE__,
reason: :invalid_client_secret
}}
end
end
end
@doc """
Implementation of the `APIac.Authenticator` callback
## Verbosity
The following elements in the HTTP response are set depending on the value
of the `:error_response_verbosity` option:
| Error response verbosity | HTTP Status | Headers | Body |
|:-------------------------:|--------------------|--------------------------------------------------------|---------------------------------------------------------|
| `:debug` | Unauthorized (401) | WWW-Authenticate with `Basic` scheme and `realm` param | `APIac.Authenticator.Unauthorized` exception's message |
| `:normal` | Unauthorized (401) | WWW-Authenticate with `Basic` scheme and `realm` param | |
| `:minimal` | Unauthorized (401) | | |
Note: the behaviour when the verbosity is `:minimal` may not be conformant
to the HTTP specification as at least one scheme should be returned in
the `WWW-Authenticate` header.
"""
@impl APIac.Authenticator
def send_error_response(conn, error, opts) do
case opts[:error_response_verbosity] do
:debug ->
conn
|> APIac.set_WWWauthenticate_challenge("Basic", %{"realm" => "#{opts[:realm]}"})
|> Plug.Conn.send_resp(:unauthorized, Exception.message(error))
|> Plug.Conn.halt()
:normal ->
conn
|> APIac.set_WWWauthenticate_challenge("Basic", %{"realm" => "#{opts[:realm]}"})
|> Plug.Conn.send_resp(:unauthorized, "")
|> Plug.Conn.halt()
:minimal ->
conn
|> Plug.Conn.send_resp(:unauthorized, "")
|> Plug.Conn.halt()
end
end
@doc """
Sets the HTTP `WWW-Authenticate` header when no such scheme is used for
authentication.
Sets the HTTP `WWW-Authenticate` header with the `Basic` scheme and the realm
name, when the `Basic` scheme was not used in the request. When this scheme is
used in the request, response will be sent by `#{__MODULE__}.send_error_response/3`.
This allows advertising that the `Basic` scheme is available, without stopping
the plug pipeline.
Raises an exception when the error response verbosity is set to `:minimal` since
it does not set the `WWW-Authenticate` header.
"""
@spec set_WWWauthenticate_header(
Plug.Conn.t(),
%APIac.Authenticator.Unauthorized{},
any()
) :: Plug.Conn.t()
def set_WWWauthenticate_header(_conn, _err, %{:error_response_verbosity => :minimal}) do
raise "#{__ENV__.function} not accepted when :error_response_verbosity is set to :minimal"
end
def set_WWWauthenticate_header(
conn,
%APIac.Authenticator.Unauthorized{reason: :credentials_not_found},
opts
) do
conn
|> APIac.set_WWWauthenticate_challenge("Basic", %{"realm" => "#{opts[:realm]}"})
end
def set_WWWauthenticate_header(conn, error, opts) do
send_error_response(conn, error, opts)
end
@doc """
Saves failure in a `Plug.Conn.t()`'s private field and returns the `conn`
See the `APIac.AuthFailureResponseData` module for more information.
"""
@spec save_authentication_failure_response(
Plug.Conn.t(),
%APIac.Authenticator.Unauthorized{},
any()
) :: Plug.Conn.t()
def save_authentication_failure_response(conn, error, opts) do
failure_response_data =
case opts[:error_response_verbosity] do
:debug ->
%APIac.AuthFailureResponseData{
module: __MODULE__,
reason: error.reason,
www_authenticate_header: {"Basic", %{"realm" => "#{opts[:realm]}"}},
status_code: :unauthorized,
body: Exception.message(error)
}
:normal ->
%APIac.AuthFailureResponseData{
module: __MODULE__,
reason: error.reason,
www_authenticate_header: {"Basic", %{"realm" => "#{opts[:realm]}"}},
status_code: :unauthorized,
body: ""
}
:minimal ->
%APIac.AuthFailureResponseData{
module: __MODULE__,
reason: error.reason,
www_authenticate_header: nil,
status_code: :unauthorized,
body: ""
}
end
APIac.AuthFailureResponseData.put(conn, failure_response_data)
end
end
# source: lib/apiac_auth_basic.ex
defmodule Plug do
@moduledoc """
The plug specification.
There are two kinds of plugs: function plugs and module plugs.
#### Function plugs
A function plug is any function that receives a connection and a set of
options and returns a connection. Its type signature must be:
(Plug.Conn.t, Plug.opts) :: Plug.Conn.t
#### Module plugs
A module plug is an extension of the function plug. It is a module that must
export:
* a `c:call/2` function with the signature defined above
* an `c:init/1` function which takes a set of options and initializes it.
The result returned by `c:init/1` is passed as second argument to `c:call/2`. Note
that `c:init/1` may be called during compilation and as such it must not return
pids, ports or values that are specific to the runtime.
The API expected by a module plug is defined as a behaviour by the
`Plug` module (this module).
## Examples
Here's an example of a function plug:
def json_header_plug(conn, _opts) do
Plug.Conn.put_resp_content_type(conn, "application/json")
end
Here's an example of a module plug:
defmodule JSONHeaderPlug do
import Plug.Conn
def init(opts) do
opts
end
def call(conn, _opts) do
put_resp_content_type(conn, "application/json")
end
end
## The Plug pipeline
The `Plug.Builder` module provides conveniences for building plug
pipelines.
"""
@type opts ::
binary
| tuple
| atom
| integer
| float
| [opts]
| %{optional(opts) => opts}
| MapSet.t()
@callback init(opts) :: opts
@callback call(conn :: Plug.Conn.t(), opts) :: Plug.Conn.t()
require Logger
@doc """
Run a series of Plugs at runtime.
The plugs given here can be either a tuple, representing a module plug
and their options, or a simple function that receives a connection and
returns a connection.
If any of the plugs halt, the remaining plugs are not invoked. If the
given connection was already halted, none of the plugs are invoked
either.
While `Plug.Builder` works at compile-time, this is a straightforward
alternative that works at runtime.
## Examples
Plug.run(conn, [{Plug.Head, []}, &IO.inspect/1])
## Options
* `:log_on_halt` - a log level to be used if a Plug halts
"""
@spec run(Plug.Conn.t(), [{module, opts} | (Plug.Conn.t() -> Plug.Conn.t())], Keyword.t()) ::
Plug.Conn.t()
def run(conn, plugs, opts \\ [])
def run(%Plug.Conn{halted: true} = conn, _plugs, _opts),
do: conn
def run(%Plug.Conn{} = conn, plugs, opts),
do: do_run(conn, plugs, Keyword.get(opts, :log_on_halt))
defp do_run(conn, [{mod, opts} | plugs], level) when is_atom(mod) do
case mod.call(conn, mod.init(opts)) do
%Plug.Conn{halted: true} = conn ->
level && Logger.log(level, "Plug halted in #{inspect(mod)}.call/2")
conn
%Plug.Conn{} = conn ->
do_run(conn, plugs, level)
other ->
raise "expected #{inspect(mod)} to return Plug.Conn, got: #{inspect(other)}"
end
end
defp do_run(conn, [fun | plugs], level) when is_function(fun, 1) do
case fun.(conn) do
%Plug.Conn{halted: true} = conn ->
level && Logger.log(level, "Plug halted in #{inspect(fun)}")
conn
%Plug.Conn{} = conn ->
do_run(conn, plugs, level)
other ->
raise "expected #{inspect(fun)} to return Plug.Conn, got: #{inspect(other)}"
end
end
defp do_run(conn, [], _level), do: conn
@doc """
Forwards requests to another plug, setting the connection to a trailing subpath of the request.
The `path_info` on the forwarded connection will only include the trailing segments
of the request path supplied to forward, while `conn.script_name` will
retain the correct base path for e.g. url generation.
## Example
defmodule Router do
def init(opts), do: opts
def call(conn, opts) do
case conn do
# Match subdomain
%{host: "admin." <> _} ->
AdminRouter.call(conn, opts)
# Match path on localhost
%{host: "localhost", path_info: ["admin" | rest]} ->
Plug.forward(conn, rest, AdminRouter, opts)
_ ->
MainRouter.call(conn, opts)
end
end
end
"""
@spec forward(Plug.Conn.t(), [String.t()], atom, Plug.opts()) :: Plug.Conn.t()
def forward(%Plug.Conn{path_info: path, script_name: script} = conn, new_path, target, opts) do
{base, split_path} = Enum.split(path, length(path) - length(new_path))
conn = do_forward(target, %{conn | path_info: split_path, script_name: script ++ base}, opts)
%{conn | path_info: path, script_name: script}
end
defp do_forward({mod, fun}, conn, opts), do: apply(mod, fun, [conn, opts])
defp do_forward(mod, conn, opts), do: mod.call(conn, opts)
end
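# A minimal sketch (not from the original source) of driving `Plug.run/3`
# above with one module plug and one function plug. `Plug.Test.conn/2` and
# `Plug.Head` ship with the plug library; `PlugRunExample` is illustrative.
defmodule PlugRunExample do
  import Plug.Test, only: [conn: 2]

  def demo do
    conn(:get, "/")
    |> Plug.run(
      [
        {Plug.Head, []},
        fn conn -> Plug.Conn.put_resp_content_type(conn, "text/plain") end
      ],
      log_on_halt: :info
    )
  end
end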
lib/plug.ex
defmodule Gringotts.Gateways.Monei do
@moduledoc """
[MONEI][home] gateway implementation.
For reference see [MONEI's API (v1) documentation][docs].
The following features of MONEI are implemented:
| Action | Method | `type` |
| ------ | ------ | ------ |
| Pre-authorize | `authorize/3` | `PA` |
| Capture | `capture/3` | `CP` |
| Refund | `refund/3` | `RF` |
| Reversal | `void/2` | `RV` |
| Debit | `purchase/3` | `DB` |
| Tokenization / Registrations | `store/2` | |
> **What's this last column `type`?**
>
> That's the `paymentType` of the request, which you can ignore unless you'd
> like to contribute to this module. Please read the [MONEI Guides][docs].
[home]: https://monei.net
[docs]: https://docs.monei.net
## The `opts` argument
Most `Gringotts` API calls accept an optional `keyword` list `opts` to supply
[optional arguments][extra-arg-docs] for transactions with the MONEI
gateway. The following keys are supported:
| Key | Remark |
| ---- | --- |
| [`billing`][ba] | Address of the customer, which can be used for AVS risk check. |
| [`cart`][cart] | **Not Implemented** |
| [`custom`][custom] | A map of "name"-"value" pairs; all of it is echoed back in the response. |
| [`customer`][c] | Annotates transactions with customer info on your Monei account, and helps in risk management. |
| [`invoice_id`][b] | Merchant provided invoice identifier, must be unique per transaction with Monei. |
| [`transaction_id`][b] | Merchant provided token for a transaction, must be unique per transaction with Monei. |
| [`category`][b] | The category of the transaction. |
| [`merchant`][m] | Information about the merchant, which overrides the cardholder's bank statement. |
| [`register`][t] | Also store payment data included in this request for future use. |
| [`shipping`][sa] | Location of recipient of goods, for logistics. |
| [`shipping_customer`][c] | Recipient details, could be different from `customer`. |
> These keys are being implemented, track progress in [issue #36][iss36]!
[extra-arg-docs]: https://docs.monei.net/reference/parameters
[ba]: https://docs.monei.net/reference/parameters#billing-address
[cart]: https://docs.monei.net/reference/parameters#cart
[custom]: https://docs.monei.net/reference/parameters#custom-parameters
[c]: https://docs.monei.net/reference/parameters#customer
[b]: https://docs.monei.net/reference/parameters#basic
[m]: https://docs.monei.net/reference/parameters#merchant
[t]: https://docs.monei.net/reference/parameters#tokenization
[sa]: https://docs.monei.net/reference/parameters#shipping-address
[iss36]: https://github.com/aviabird/gringotts/issues/36
## Registering your MONEI account at `Gringotts`
After [making an account on MONEI][dashboard], head to the dashboard and find
your account "secrets" in the `Sub-Accounts > Overview` section.
Here's how the secrets map to the required configuration parameters for MONEI:
| Config parameter | MONEI secret |
| ------- | ---- |
| `:userId` | **User ID** |
| `:entityId` | **Channel ID** |
| `:password` | **Password** |
Your Application config **must include the `:userId`, `:entityId`, `:password`
fields** and would look something like this:
config :gringotts, Gringotts.Gateways.Monei,
userId: "your_secret_user_id",
password: "your_secret_password",
entityId: "your_secret_channel_id"
[dashboard]: https://dashboard.monei.net/signin
## Scope of this module
* MONEI does not process money in cents, and the `amount` is rounded to 2
decimal places.
* Although MONEI supports payments from [various][all-card-list]
[cards][card-acc], [banks][bank-acc] and [virtual accounts][virtual-acc]
(like some wallets), this library only accepts payments by [(supported)
cards][all-card-list].
[all-card-list]: https://support.monei.net/charges-and-refunds/accepted-credit-cards-payment-methods
[card-acc]: https://docs.monei.net/reference/parameters#card
[bank-acc]: https://docs.monei.net/reference/parameters#bank-account
[virtual-acc]: https://docs.monei.net/reference/parameters#virtual-account
## Supported countries
MONEI supports the countries listed [here][all-country-list]
## Supported currencies
MONEI supports the currencies [listed here][all-currency-list], and ***this
module*** supports a subset of those:
:AED, :AFN, :ANG, :AOA, :AWG, :AZN, :BAM, :BGN, :BRL, :BYN, :CDF, :CHF, :CUC,
:EGP, :EUR, :GBP, :GEL, :GHS, :MDL, :MGA, :MKD, :MWK, :MZN, :NAD, :NGN, :NIO,
:NOK, :NPR, :NZD, :PAB, :PEN, :PGK, :PHP, :PKR, :PLN, :PYG, :QAR, :RSD, :RUB,
:RWF, :SAR, :SCR, :SDG, :SEK, :SGD, :SHP, :SLL, :SOS, :SRD, :STD, :SYP, :SZL,
:THB, :TJS, :TOP, :TRY, :TTD, :TWD, :TZS, :UAH, :UGX, :USD, :UYU, :UZS, :VND,
:VUV, :WST, :XAF, :XCD, :XOF, :XPF, :YER, :ZAR, :ZMW, :ZWL
> Please [raise an issue][new-issue] if you'd like us to add support for more
> currencies
[all-currency-list]: https://support.monei.net/international/currencies-supported-by-monei
[new-issue]: https://github.com/aviabird/gringotts/issues
[all-country-list]: https://support.monei.net/international/what-countries-does-monei-support
## Following the examples
1. First, set up a sample application and configure it to work with MONEI.
- You could do that from scratch by following our [Getting Started](#) guide.
- To save you time, we recommend [cloning our example repo][example-repo]
that gives you a pre-configured sample app ready-to-go.
+ You could use the same config or update it with your "secrets"
that you see in `Dashboard > Sub-accounts` as described
[above](#module-registering-your-monei-account-at-gringotts).
2. To save a lot of time, create a [`.iex.exs`][iex-docs] file as shown in
[this gist][monei.iex.exs] to introduce a set of handy bindings and
aliases.
We'll be using these bindings in the examples below.
[example-repo]: https://github.com/aviabird/gringotts_example
[iex-docs]: https://hexdocs.pm/iex/IEx.html#module-the-iex-exs-file
[monei.iex.exs]: https://gist.github.com/oyeb/a2e2ac5986cc90a12a6136f6bf1357e5
## TODO
* [Backoffice operations](https://docs.monei.net/tutorials/manage-payments/backoffice)
- Credit
- Rebill
* [Recurring payments](https://docs.monei.net/recurring)
* [Reporting](https://docs.monei.net/tutorials/reporting)
"""
use Gringotts.Gateways.Base
use Gringotts.Adapter, required_config: [:userId, :entityId, :password]
import Poison, only: [decode: 1]
alias Gringotts.{CreditCard, Response, Money}
@base_url "https://test.monei-api.net"
@default_headers ["Content-Type": "application/x-www-form-urlencoded", charset: "UTF-8"]
@supported_currencies [
"AED", "AFN", "ANG", "AOA", "AWG", "AZN", "BAM", "BGN", "BRL", "BYN", "CDF",
"CHF", "CUC", "EGP", "EUR", "GBP", "GEL", "GHS", "MDL", "MGA", "MKD", "MWK",
"MZN", "NAD", "NGN", "NIO", "NOK", "NPR", "NZD", "PAB", "PEN", "PGK", "PHP",
"PKR", "PLN", "PYG", "QAR", "RSD", "RUB", "RWF", "SAR", "SCR", "SDG", "SEK",
"SGD", "SHP", "SLL", "SOS", "SRD", "STD", "SYP", "SZL", "THB", "TJS", "TOP",
"TRY", "TTD", "TWD", "TZS", "UAH", "UGX", "USD", "UYU", "UZS", "VND", "VUV",
"WST", "XAF", "XCD", "XOF", "XPF", "YER", "ZAR", "ZMW", "ZWL"
]
@version "v1"
@cvc_code_translator %{
"M" => "pass",
"N" => "fail",
"P" => "not_processed",
"U" => "issuer_unable",
"S" => "issuer_unable"
}
@avs_code_translator %{
"F" => {"pass", "pass"},
"A" => {"pass", "fail"},
"Z" => {"fail", "pass"},
"N" => {"fail", "fail"},
"U" => {"error", "error"},
nil => {nil, nil}
}
# MONEI supports payment by card, bank account and even something obscure:
# virtual accounts.
# `opts` has the auth keys.
@doc """
Performs a (pre) Authorize operation.
The authorization validates the `card` details with the banking network,
places a hold on the transaction `amount` in the customer’s issuing bank and
also triggers risk management. Funds are not transferred.
MONEI returns an ID string which can be used to:
* `capture/3` _an_ amount.
* `void/2` a pre-authorization.
## Note
* The `:register` option when set to `true` will store this card for future
use, and you will receive a registration `token` in the `:token` field of
the `Response` struct.
* A stand-alone pre-authorization [expires in
72hrs](https://docs.monei.net/tutorials/manage-payments/backoffice).
## Example
The following example shows how one would (pre) authorize a payment of $42 on
a sample `card`.
iex> amount = Money.new(42, :USD)
iex> card = %Gringotts.CreditCard{first_name: "Harry", last_name: "Potter", number: "4200000000000000", year: 2099, month: 12, verification_code: "123", brand: "VISA"}
iex> {:ok, auth_result} = Gringotts.authorize(Gringotts.Gateways.Monei, amount, card, opts)
iex> auth_result.id # This is the authorization ID
iex> auth_result.token # This is the registration ID/token
"""
@spec authorize(Money.t(), CreditCard.t(), keyword) :: {:ok | :error, Response.t()}
def authorize(amount, %CreditCard{} = card, opts) do
{currency, value} = Money.to_string(amount)
params =
[
paymentType: "PA",
amount: value
] ++ card_params(card)
commit(:post, "payments", params, [{:currency, currency} | opts])
end
@doc """
Captures a pre-authorized `amount`.
`amount` is transferred to the merchant account by MONEI when it is smaller than or
equal to the amount used in the pre-authorization referenced by `payment_id`.
## Note
MONEI allows partial captures and unlike many other gateways, does not release
the remaining amount back to the payment source. Thus, the same
pre-authorization ID can be used to perform multiple captures, until:
* all the pre-authorized amount is captured or,
* the remaining amount is explicitly "reversed" via `void/2`. **[citation-needed]**
## Example
The following example shows how one would (partially) capture a previously
authorized payment worth $35 by referencing the obtained authorization `id`.
iex> amount = Money.new(35, :USD)
iex> {:ok, capture_result} = Gringotts.capture(Gringotts.Gateways.Monei, amount, auth_result.id, opts)
"""
@spec capture(String.t(), Money.t(), keyword) :: {:ok | :error, Response.t()}
def capture(payment_id, amount, opts)
def capture(<<payment_id::bytes-size(32)>>, amount, opts) do
{currency, value} = Money.to_string(amount)
params = [
paymentType: "CP",
amount: value
]
commit(:post, "payments/#{payment_id}", params, [{:currency, currency} | opts])
end
@doc """
Transfers `amount` from the customer to the merchant.
MONEI attempts to process a purchase on behalf of the customer, by debiting
`amount` from the customer's account by charging the customer's `card`.
## Note
* The `:register` option when set to `true` will store this card for future
use, and you will receive a registration `token` in the `:token` field of
the `Response` struct.
## Example
The following example shows how one would process a payment worth $42 in
one-shot, without (pre) authorization.
iex> amount = Money.new(42, :USD)
iex> card = %Gringotts.CreditCard{first_name: "Harry", last_name: "Potter", number: "4200000000000000", year: 2099, month: 12, verification_code: "123", brand: "VISA"}
iex> {:ok, purchase_result} = Gringotts.purchase(Gringotts.Gateways.Monei, amount, card, opts)
iex> purchase_result.token # This is the registration ID/token
"""
@spec purchase(Money.t(), CreditCard.t(), keyword) :: {:ok | :error, Response.t()}
def purchase(amount, %CreditCard{} = card, opts) do
{currency, value} = Money.to_string(amount)
params =
[
paymentType: "DB",
amount: value
] ++ card_params(card)
commit(:post, "payments", params, [{:currency, currency} | opts])
end
@doc """
Refunds the `amount` to the customer's account with reference to a prior transfer.
MONEI processes a full or partial refund worth `amount`, referencing a
previous `purchase/3` or `capture/3`.
The end customer will always see two bookings/records on his statement.
Refer MONEI's [Backoffice
Operations](https://docs.monei.net/tutorials/manage-payments/backoffice)
guide.
## Example
The following example shows how one would (completely) refund a previous
purchase (and similarly for captures).
iex> amount = Money.new(42, :USD)
iex> {:ok, refund_result} = Gringotts.refund(Gringotts.Gateways.Monei, purchase_result.id, amount)
"""
@spec refund(Money.t(), String.t(), keyword) :: {:ok | :error, Response.t()}
def refund(amount, <<payment_id::bytes-size(32)>>, opts) do
{currency, value} = Money.to_string(amount)
params = [
paymentType: "RF",
amount: value
]
commit(:post, "payments/#{payment_id}", params, [{:currency, currency} | opts])
end
@doc """
Stores the payment-source data for later use.
MONEI can store the payment-source details, for example card or bank details
which can be used to effectively process _One-Click_ and _Recurring_ payments,
and return a registration token for reference.
The registration token is available in the `Response.id` field.
It is recommended to associate these details with a "Customer" by passing
customer details in the `opts`.
## Note
* _One-Click_ and _Recurring_ payments are currently not implemented.
* Payment details can be saved during a `purchase/3` or `capture/3`.
## Example
The following example shows how one would store a card (a payment-source) for
future use.
iex> card = %Gringotts.CreditCard{first_name: "Harry", last_name: "Potter", number: "4200000000000000", year: 2099, month: 12, verification_code: "123", brand: "VISA"}
iex> {:ok, store_result} = Gringotts.store(Gringotts.Gateways.Monei, card)
iex> store_result.id # This is the registration token
"""
@spec store(CreditCard.t(), keyword) :: {:ok | :error, Response.t()}
def store(%CreditCard{} = card, opts) do
params = card_params(card)
commit(:post, "registrations", params, opts)
end
@doc """
WIP
**MONEI unstore does not seem to work. MONEI always returns a `403`**
Deletes previously stored payment-source data.
"""
@spec unstore(String.t(), keyword) :: {:ok | :error, Response.t()}
def unstore(registration_id, opts)
def unstore(<<registration_id::bytes-size(32)>>, opts) do
commit(:delete, "registrations/#{registration_id}", [], opts)
end
@doc """
Voids the referenced payment.
This method attempts a reversal of either a previous `purchase/3`,
`capture/3` or `authorize/3` referenced by `payment_id`.
As a consequence, the customer will never see any booking on his
statement. Refer MONEI's [Backoffice
Operations](https://docs.monei.net/tutorials/manage-payments/backoffice)
guide.
## Voiding a previous authorization
MONEI will reverse the authorization by sending a "reversal request" to the
payment source (card issuer) to clear the funds held against the
authorization. If some of the authorized amount was captured, only the
remaining amount is cleared. **[citation-needed]**
## Voiding a previous purchase
MONEI will reverse the payment, by sending all the amount back to the
customer. Note that this is not the same as `refund/3`.
## Example
The following example shows how one would void a previous (pre)
authorization. Remember that our `capture/3` example only did a partial
capture.
iex> {:ok, void_result} = Gringotts.void(Gringotts.Gateways.Monei, auth_result.id, opts)
"""
@spec void(String.t(), keyword) :: {:ok | :error, Response.t()}
def void(payment_id, opts)
def void(<<payment_id::bytes-size(32)>>, opts) do
params = [paymentType: "RV"]
commit(:post, "payments/#{payment_id}", params, opts)
end
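# MONEI expects flat form fields with dotted names ("card.number",
# "card.holder", "card.expiryMonth", ...). The expiry month is zero-padded
# to two digits ("03", not "3"), as the API requires.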
defp card_params(card) do
[
"card.number": card.number,
"card.holder": CreditCard.full_name(card),
"card.expiryMonth": card.month |> Integer.to_string() |> String.pad_leading(2, "0"),
"card.expiryYear": card.year |> Integer.to_string(),
"card.cvv": card.verification_code,
paymentBrand: card.brand
]
end
defp auth_params(opts) do
[
"authentication.userId": opts[:config][:userId],
"authentication.password": opts[:config][:password],
"authentication.entityId": opts[:config][:entityId]
]
end
# Makes the request to MONEI's network.
@spec commit(atom, String.t(), keyword, keyword) :: {:ok | :error, Response.t()}
defp commit(:post, endpoint, params, opts) do
url = "#{base_url(opts)}/#{version(opts)}/#{endpoint}"
case expand_params(Keyword.delete(opts, :config), params[:paymentType]) do
{:error, reason} ->
{:error, Response.error(reason: reason)}
validated_params ->
url
|> HTTPoison.post({:form, params ++ validated_params ++ auth_params(opts)}, @default_headers)
|> respond
end
end
# This clause is only used by `unstore/2`
defp commit(:delete, endpoint, _params, opts) do
base_url = "#{base_url(opts)}/#{version(opts)}/#{endpoint}"
auth_params = auth_params(opts)
query_string = auth_params |> URI.encode_query()
base_url <> "?" <> query_string
|> HTTPoison.delete()
|> respond
end
# Parses MONEI's response and returns a `Gringotts.Response` struct in a
# `:ok`, `:error` tuple.
@spec respond(term) :: {:ok | :error, Response.t()}
defp respond(monei_response)
defp respond({:ok, %{status_code: 200, body: body}}) do
common = [raw: body, status_code: 200]
with {:ok, decoded_json} <- decode(body),
{:ok, results} <- parse_response(decoded_json) do
{:ok, Response.success(common ++ results)}
else
{:not_ok, errors} ->
{:ok, Response.error(common ++ errors)}
{:error, _} ->
{:error, Response.error([reason: "undefined response from monei"] ++ common)}
end
end
defp respond({:ok, %{status_code: status_code, body: body}}) do
{:error, Response.error(status_code: status_code, raw: body)}
end
defp respond({:error, %HTTPoison.Error{} = error}) do
{
:error,
Response.error(
reason: "network related failure",
message: "HTTPoison says '#{error.reason}' [ID: #{error.id || "nil"}]"
)
}
end
defp parse_response(%{"result" => result} = data) do
{address, zip_code} = @avs_code_translator[result["avsResponse"]]
results = [
id: data["id"],
token: data["registrationId"],
gateway_code: result["code"],
message: result["description"],
fraud_review: data["risk"],
cvc_result: @cvc_code_translator[result["cvvResponse"]],
avs_result: %{address: address, zip_code: zip_code}
]
non_nil_params = Enum.filter(results, fn {_, v} -> v != nil end)
verify(non_nil_params)
end
defp verify(results) do
if String.match?(results[:gateway_code], ~r{^(000\.000\.|000\.100\.1|000\.[36])}) do
{:ok, results}
else
{:not_ok, [{:reason, results[:message]} | results]}
end
end
defp expand_params(params, action_type) do
Enum.reduce_while(params, [], fn {k, v}, acc ->
case k do
:currency ->
if valid_currency?(v),
do: {:cont, [{:currency, v} | acc]},
else: {:halt, {:error, "Invalid currency"}}
:customer ->
{:cont, acc ++ make(action_type, "customer", v)}
:merchant ->
{:cont, acc ++ make(action_type, "merchant", v)}
:billing ->
{:cont, acc ++ make(action_type, "billing", v)}
:shipping ->
{:cont, acc ++ make(action_type, "shipping", v)}
:invoice_id ->
{:cont, [{"merchantInvoiceId", v} | acc]}
:transaction_id ->
{:cont, [{"merchantTransactionId", v} | acc]}
:category ->
{:cont, [{"transactionCategory", v} | acc]}
:shipping_customer ->
{:cont, acc ++ make(action_type, "shipping.customer", v)}
:custom ->
{:cont, acc ++ make_custom(v)}
:register ->
{:cont, acc ++ make(action_type, :register, v)}
unsupported ->
{:halt, {:error, "Unsupported optional param '#{unsupported}'"}}
end
end)
end
defp valid_currency?(currency) do
currency in @supported_currencies
end
defp make(action_type, _prefix, _param) when action_type in ["CP", "RF", "RV"], do: []
defp make(action_type, prefix, param) do
case prefix do
:register ->
if action_type in ["PA", "DB"], do: [createRegistration: true], else: []
_ -> Enum.into(param, [], fn {k, v} -> {"#{prefix}.#{k}", v} end)
end
end
defp make_custom(custom_map) do
Enum.into(custom_map, [], fn {k, v} -> {"customParameters[#{k}]", "#{v}"} end)
end
defp base_url(opts), do: opts[:config][:test_url] || @base_url
defp version(opts), do: opts[:config][:api_version] || @version
end
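# A minimal end-to-end sketch of the gateway above: pre-authorize, partially
# capture, then void. Assumes the `:gringotts` config from the moduledoc and
# `Money`/`opts` bindings as in the doctests (e.g. from the suggested
# `.iex.exs`); argument order mirrors the doctests.
alias Gringotts.Gateways.Monei

card = %Gringotts.CreditCard{
  first_name: "Harry",
  last_name: "Potter",
  number: "4200000000000000",
  year: 2099,
  month: 12,
  verification_code: "123",
  brand: "VISA"
}

{:ok, auth_result} = Gringotts.authorize(Monei, Money.new(42, :USD), card, opts)
{:ok, _capture} = Gringotts.capture(Monei, Money.new(35, :USD), auth_result.id, opts)
{:ok, _void} = Gringotts.void(Monei, auth_result.id, opts)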
lib/gringotts/gateways/monei.ex
defmodule AWS.KinesisVideoSignaling do
@moduledoc """
Kinesis Video Streams Signaling Service is an intermediate service that
establishes a communication channel for discovering peers, and for transmitting
offers and answers in order to establish a peer-to-peer connection with WebRTC.
"""
@doc """
Gets the Interactive Connectivity Establishment (ICE) server configuration
information, including URIs, username, and password which can be used to
configure the WebRTC connection.
The ICE component uses this configuration information to set up the WebRTC
connection, including authenticating with the Traversal Using Relays around NAT
(TURN) relay server.
TURN is a protocol that is used to improve the connectivity of peer-to-peer
applications. By providing a cloud-based relay service, TURN ensures that a
connection can be established even when one or more peers are incapable of a
direct peer-to-peer connection. For more information, see [A REST API For Access To TURN Services](https://tools.ietf.org/html/draft-uberti-rtcweb-turn-rest-00).
You can invoke this API to establish a fallback mechanism in case either of the
peers is unable to establish a direct peer-to-peer connection over a signaling
channel. You must specify either a signaling channel ARN or the client ID in
order to invoke this API.
"""
def get_ice_server_config(client, input, options \\ []) do
path_ = "/v1/get-ice-server-config"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
This API allows you to connect WebRTC-enabled devices with Alexa display
devices.
When invoked, it sends the Alexa Session Description Protocol (SDP) offer to the
master peer. The offer is delivered as soon as the master is connected to the
specified signaling channel. This API returns the SDP answer from the connected
master. If the master is not connected to the signaling channel, redelivery
requests are made until the message expires.
"""
def send_alexa_offer_to_master(client, input, options \\ []) do
path_ = "/v1/send-alexa-offer-to-master"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@spec request(AWS.Client.t(), binary(), binary(), list(), list(), map(), list(), pos_integer() | nil) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, method, path, query, headers, input, options, success_status_code) do
client = %{client | service: "kinesisvideo"}
host = build_host("kinesisvideo", client)
url = host
|> build_url(path, client)
|> add_query(query, client)
additional_headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}]
headers = AWS.Request.add_headers(additional_headers, headers)
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(client, method, url, payload, headers, options, success_status_code)
end
defp perform_request(client, method, url, payload, headers, options, success_status_code) do
case AWS.Client.request(client, method, url, payload, headers, options) do
{:ok, %{status_code: status_code, body: body} = response}
when is_nil(success_status_code) and status_code in [200, 202, 204]
when status_code == success_status_code ->
body = if(body != "", do: decode!(client, body))
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, path, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{path}"
end
defp add_query(url, [], _client) do
url
end
defp add_query(url, query, client) do
querystring = encode!(client, query, :query)
"#{url}?#{querystring}"
end
defp encode!(client, payload, format \\ :json) do
AWS.Client.encode!(client, payload, format)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
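# A minimal usage sketch for the module above. `AWS.Client.create/3` comes
# from the aws-elixir library; the credentials and ARN are placeholders.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-west-2")

{:ok, body, _http_response} =
  AWS.KinesisVideoSignaling.get_ice_server_config(client, %{
    "ChannelARN" => "arn:aws:kinesisvideo:us-west-2:111122223333:channel/demo/1234567890123"
  })

# Per the AWS API, `body["IceServerList"]` holds the TURN URIs, username and
# password used to configure the WebRTC connection.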
lib/aws/generated/kinesis_video_signaling.ex
defmodule Sourceror.Range do
@moduledoc false
import Sourceror.Identifier, only: [is_unary_op: 1, is_binary_op: 1]
defp split_on_newline(string) do
String.split(string, ~r/\n|\r\n|\r/)
end
@spec get_range(Macro.t()) :: Sourceror.range()
def get_range(quoted)
# Module aliases
def get_range({:__aliases__, meta, segments}) do
start_pos = Keyword.take(meta, [:line, :column])
last_segment_length = List.last(segments) |> to_string() |> String.length()
end_pos = meta[:last] |> Keyword.update!(:column, &(&1 + last_segment_length))
%{start: start_pos, end: end_pos}
end
# Strings
def get_range({:__block__, meta, [string]}) when is_binary(string) do
lines = split_on_newline(string)
last_line = List.last(lines) || ""
end_line = meta[:line] + length(lines)
end_line =
if meta[:delimiter] in [~S/"""/, ~S/'''/] do
end_line
else
end_line - 1
end
end_column =
if meta[:delimiter] in [~S/"""/, ~S/'''/] do
meta[:column] + String.length(meta[:delimiter])
else
count = meta[:column] + String.length(last_line) + String.length(meta[:delimiter])
if end_line == meta[:line] do
count + 1
else
count
end
end
%{
start: Keyword.take(meta, [:line, :column]),
end: [line: end_line, column: end_column]
}
end
# Integers, Floats
def get_range({:__block__, meta, [number]}) when is_integer(number) or is_float(number) do
%{
start: Keyword.take(meta, [:line, :column]),
end: [line: meta[:line], column: meta[:column] + String.length(meta[:token])]
}
end
# Atoms
def get_range({:__block__, meta, [atom]}) when is_atom(atom) do
start_pos = Keyword.take(meta, [:line, :column])
string = Atom.to_string(atom)
delimiter = meta[:delimiter] || ""
lines = split_on_newline(string)
last_line = List.last(lines) || ""
end_line = meta[:line] + length(lines) - 1
end_column = meta[:column] + String.length(last_line) + String.length(delimiter)
end_column =
cond do
end_line == meta[:line] && meta[:delimiter] ->
# Column and first delimiter
end_column + 2
end_line == meta[:line] ->
# Just the colon
end_column + 1
end_line != meta[:line] ->
# You're beautiful as you are, Courage
end_column
end
%{
start: start_pos,
end: [line: end_line, column: end_column]
}
end
# Block with no parenthesis
def get_range({:__block__, _, args} = quoted) do
if Sourceror.has_closing_line?(quoted) do
get_range_for_node_with_closing_line(quoted)
else
{first, rest} = List.pop_at(args, 0)
{last, _} = List.pop_at(rest, -1, first)
%{
start: get_range(first).start,
end: get_range(last).end
}
end
end
# Variables
def get_range({form, meta, context}) when is_atom(form) and is_atom(context) do
start_pos = Keyword.take(meta, [:line, :column])
end_pos = [
line: start_pos[:line],
column: start_pos[:column] + String.length(Atom.to_string(form))
]
%{start: start_pos, end: end_pos}
end
# Access syntax
def get_range({{:., _, [Access, :get]}, _, _} = quoted) do
get_range_for_node_with_closing_line(quoted)
end
# Qualified tuple
def get_range({{:., _, [_, :{}]}, _, _} = quoted) do
get_range_for_node_with_closing_line(quoted)
end
# Interpolated atoms
def get_range({{:., _, [:erlang, :binary_to_atom]}, meta, [interpolation, :utf8]}) do
interpolation =
Macro.update_meta(interpolation, &Keyword.put(&1, :delimiter, meta[:delimiter]))
get_range_for_interpolation(interpolation)
end
# Qualified call
def get_range({{:., _, [left, right]}, meta, []} = quoted) when is_atom(right) do
if Sourceror.has_closing_line?(quoted) do
get_range_for_node_with_closing_line(quoted)
else
start_pos = get_range(left).start
identifier_pos = Keyword.take(meta, [:line, :column])
parens_length =
if meta[:no_parens] do
0
else
2
end
end_pos = [
line: identifier_pos[:line],
column:
identifier_pos[:column] + String.length(Atom.to_string(right)) +
parens_length
]
%{start: start_pos, end: end_pos}
end
end
# Qualified call with arguments
def get_range({{:., _, [left, _]}, _meta, args} = quoted) do
if Sourceror.has_closing_line?(quoted) do
get_range_for_node_with_closing_line(quoted)
else
start_pos = get_range(left).start
end_pos = get_range(List.last(args) || left).end
%{start: start_pos, end: end_pos}
end
end
# Unary operators
def get_range({op, meta, [arg]}) when is_unary_op(op) do
start_pos = Keyword.take(meta, [:line, :column])
arg_range = get_range(arg)
end_column =
if arg_range.end[:line] == meta[:line] do
arg_range.end[:column]
else
arg_range.end[:column] + String.length(to_string(op))
end
%{start: start_pos, end: [line: arg_range.end[:line], column: end_column]}
end
# Binary operators
def get_range({op, _, [left, right]}) when is_binary_op(op) do
%{
start: get_range(left).start,
end: get_range(right).end
}
end
# Stepped ranges
def get_range({:"..//", _, [left, _middle, right]}) do
%{
start: get_range(left).start,
end: get_range(right).end
}
end
# Bitstrings and interpolations
def get_range({:<<>>, meta, _} = quoted) do
if meta[:delimiter] do
get_range_for_interpolation(quoted)
else
get_range_for_bitstring(quoted)
end
end
# Sigils
def get_range({sigil, meta, [{:<<>>, _, segments}, modifiers]} = quoted)
when is_list(modifiers) do
case Atom.to_string(sigil) do
<<"sigil_", _name>> ->
# Congratulations, it's a sigil!
start_pos = Keyword.take(meta, [:line, :column])
end_pos = get_end_pos_for_interpolation_segments(segments, start_pos)
%{
start: start_pos,
end: Keyword.update!(end_pos, :column, &(&1 + length(modifiers)))
}
_ ->
get_range_for_unqualified_call(quoted)
end
end
# Unqualified calls
def get_range({call, _, _} = quoted) when is_atom(call) do
get_range_for_unqualified_call(quoted)
end
def get_range_for_unqualified_call({_call, meta, args} = quoted) do
if Sourceror.has_closing_line?(quoted) do
get_range_for_node_with_closing_line(quoted)
else
start_pos = Keyword.take(meta, [:line, :column])
end_pos = get_range(List.last(args)).end
%{start: start_pos, end: end_pos}
end
end
def get_range_for_node_with_closing_line({_, meta, _} = quoted) do
start_position = Sourceror.get_start_position(quoted)
end_position = Sourceror.get_end_position(quoted)
end_position =
if Keyword.has_key?(meta, :end) do
Keyword.update!(end_position, :column, &(&1 + 3))
else
# If it doesn't have an end token, then it has either a ), a ] or a }
Keyword.update!(end_position, :column, &(&1 + 1))
end
%{start: start_position, end: end_position}
end
def get_range_for_interpolation({:<<>>, meta, segments}) do
start_pos = Keyword.take(meta, [:line, :column])
end_pos = get_end_pos_for_interpolation_segments(segments, start_pos)
%{start: start_pos, end: end_pos}
end
def get_end_pos_for_interpolation_segments(segments, start_pos) do
end_pos =
Enum.reduce(segments, start_pos, fn
string, pos when is_binary(string) ->
lines = split_on_newline(string)
length = String.length(List.last(lines) || "")
[
line: pos[:line] + length(lines) - 1,
column: pos[:column] + length
]
{:"::", _, [{_, meta, _}, {:binary, _, _}]}, _pos ->
meta
|> Keyword.get(:closing)
|> Keyword.take([:line, :column])
# Add the closing }
|> Keyword.update!(:column, &(&1 + 1))
end)
Keyword.update!(end_pos, :column, &(&1 + 1))
end
def get_range_for_bitstring(quoted) do
range = get_range_for_node_with_closing_line(quoted)
# get_range_for_node_with_closing_line/1 will add 1 to the ending column
# because it assumes it ends with ), ] or }, but bitstring closing token is
# >>, so we need to add another 1
update_in(range, [:end, :column], &(&1 + 1))
end
end
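# A minimal sketch of `get_range/1` above, fed by Sourceror's own parser.
# The result shown is illustrative of the shape, not a doctest.
quoted = Sourceror.parse_string!("foo(:bar)")
Sourceror.Range.get_range(quoted)
# => %{start: [line: 1, column: 1], end: [line: 1, column: 10]}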
lib/sourceror/range.ex
defmodule ExContract.Predicates do
@moduledoc """
Predicate functions and operators that are useful in contract specifications.
To use the operator versions of the predicates, this module must be imported in the using module.
"""
@doc """
Logical exclusive or: is either `p` or `q` true, but not both?
## Examples
iex> import ExContract.Predicates
ExContract.Predicates
iex> xor(true, true)
false
iex> xor(true, false)
true
iex> xor(false, true)
true
iex> xor(false, false)
false
"""
@spec xor(boolean, boolean) :: boolean
def xor(p, q), do: (p || q) && !(p && q)
@doc """
Logical exclusive or operator: `p <|> q` means `xor(p, q)`.
Note that the `<|>` operator has higher precedence than many other operators and it may be
necessary to parenthesize the expressions on either side of the operator to get the
expected result.
## Examples
iex> import ExContract.Predicates
ExContract.Predicates
iex> true <|> true
false
iex> true <|> false
true
iex> false <|> true
true
iex> false <|> false
false
iex> x = 2
2
iex> y = 4
4
iex> (x - y < 0) <|> (y <= x)
true
"""
def (p <|> q), do: xor(p, q)
@doc """
Logical implication: does `p` imply `q`?
## Examples
iex> import ExContract.Predicates
ExContract.Predicates
iex> implies?(true, true)
true
iex> implies?(true, false)
false
iex> implies?(false, true)
true
iex> implies?(false, false)
true
"""
@spec implies?(boolean, boolean) :: boolean
def implies?(p, q), do: !p || q
@doc """
Logical implication operator: `p ~> q` means `implies?(p, q)`.
Note that the `~>` operator has higher precedence than many other operators and it may be
necessary to parenthesize the expressions on either side of the operator to get the
expected result.
## Examples
iex> import ExContract.Predicates
ExContract.Predicates
iex> true ~> true
true
iex> true ~> false
false
iex> false ~> true
true
iex> false ~> false
true
iex> x = 2
2
iex> y = 4
4
iex> (x - y < 0) ~> (y > x)
true
"""
def (p ~> q), do: implies?(p, q)
end
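# A minimal sketch (illustrative, not from the original source) of using the
# operators above inside an ExContract precondition. `Withdrawal` and its
# fields are made up; `requires` is ExContract's precondition macro.
defmodule Withdrawal do
  use ExContract
  import ExContract.Predicates

  # An overdraft (amount above balance) is only allowed when the account has
  # overdraft protection: `p ~> q` reads "p implies q".
  requires (amount > balance) ~> overdraft_protected
  def withdraw(balance, amount, overdraft_protected) do
    balance - amount
  end
end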
lib/ex_contract/predicates.ex
defmodule Benchee.Formatters.Console.RunTime do
@moduledoc """
This deals with just the formatting of the run time results. They are similar
to the way the memory results are formatted, but different enough to where the
abstractions start to break down pretty significantly, so I wanted to extract
these two things into separate modules to avoid confusion.
"""
alias Benchee.{
Benchmark.Scenario,
Conversion,
Conversion.Count,
Conversion.Duration,
Conversion.Unit,
Formatters.Console.Helpers,
Statistics
}
@type unit_per_statistic :: %{atom => Unit.t()}
@ips_width 13
@average_width 15
@deviation_width 11
@median_width 15
@percentile_width 15
@minimum_width 15
@maximum_width 15
@sample_size_width 15
@mode_width 25
@doc """
Formats the run time statistics to a report suitable for output on the CLI.
## Examples
```
iex> memory_statistics = %Benchee.Statistics{average: 100.0}
iex> scenarios = [
...> %Benchee.Benchmark.Scenario{
...> name: "My Job",
...> run_time_statistics: %Benchee.Statistics{
...> average: 200.0, ips: 5000.0, std_dev_ratio: 0.1, median: 190.0, percentiles: %{99 => 300.1},
...> minimum: 100.1, maximum: 200.2, sample_size: 10_101, mode: 333.2
...> },
...> memory_usage_statistics: memory_statistics
...> },
...> %Benchee.Benchmark.Scenario{
...> name: "Job 2",
...> run_time_statistics: %Benchee.Statistics{
...> average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0, percentiles: %{99 => 500.1},
...> minimum: 200.2, maximum: 400.4, sample_size: 20_202, mode: [612.3, 554.1]
...> },
...> memory_usage_statistics: memory_statistics
...> }
...> ]
iex> configuration = %{comparison: false, unit_scaling: :best, extended_statistics: true}
iex> Benchee.Formatters.Console.RunTime.format_scenarios(scenarios, configuration)
["\nName ips average deviation median 99th %\n",
"My Job 5 K 200 ns ±10.00% 190 ns 300.10 ns\n",
"Job 2 2.50 K 400 ns ±20.00% 390 ns 500.10 ns\n",
"\nExtended statistics: \n",
"\nName minimum maximum sample size mode\n",
"My Job 100.10 ns 200.20 ns 10.10 K 333.20 ns\n",
"Job 2 200.20 ns 400.40 ns 20.20 K 612.30 ns, 554.10 ns\n"]
```
"""
@spec format_scenarios([Scenario.t()], map) :: [String.t(), ...]
def format_scenarios(scenarios, config) do
if run_time_measurements_present?(scenarios) do
render(scenarios, config)
else
[]
end
end
defp run_time_measurements_present?(scenarios) do
Enum.any?(scenarios, fn scenario ->
scenario.run_time_statistics.sample_size > 0
end)
end
defp render(scenarios, config) do
%{unit_scaling: scaling_strategy} = config
units = Conversion.units(scenarios, scaling_strategy)
label_width = Helpers.label_width(scenarios)
List.flatten([
column_descriptors(label_width),
scenario_reports(scenarios, units, label_width),
comparison_report(scenarios, units, label_width, config),
extended_statistics_report(scenarios, units, label_width, config)
])
end
@spec extended_statistics_report([Scenario.t()], unit_per_statistic, integer, map) :: [
String.t()
]
defp extended_statistics_report(scenarios, units, label_width, %{extended_statistics: true}) do
[
Helpers.descriptor("Extended statistics"),
extended_column_descriptors(label_width)
| extended_statistics(scenarios, units, label_width)
]
end
defp extended_statistics_report(_, _, _, _) do
[]
end
@spec extended_statistics([Scenario.t()], unit_per_statistic, integer) :: [String.t()]
defp extended_statistics(scenarios, units, label_width) do
Enum.map(scenarios, fn scenario ->
format_scenario_extended(scenario, units, label_width)
end)
end
@spec format_scenario_extended(Scenario.t(), unit_per_statistic, integer) :: String.t()
defp format_scenario_extended(scenario, %{run_time: run_time_unit}, label_width) do
%Scenario{
name: name,
run_time_statistics: %Statistics{
minimum: minimum,
maximum: maximum,
sample_size: sample_size,
mode: mode
}
} = scenario
"~*s~*ts~*ts~*ts~*ts\n"
|> :io_lib.format([
-label_width,
name,
@minimum_width,
duration_output(minimum, run_time_unit),
@maximum_width,
duration_output(maximum, run_time_unit),
@sample_size_width,
Count.format(sample_size),
@mode_width,
Helpers.mode_out(mode, run_time_unit)
])
|> to_string
end
@spec extended_column_descriptors(integer) :: String.t()
defp extended_column_descriptors(label_width) do
"\n~*s~*s~*s~*s~*s\n"
|> :io_lib.format([
-label_width,
"Name",
@minimum_width,
"minimum",
@maximum_width,
"maximum",
@sample_size_width,
"sample size",
@mode_width,
"mode"
])
|> to_string
end
@spec column_descriptors(integer) :: String.t()
defp column_descriptors(label_width) do
"\n~*s~*s~*s~*s~*s~*s\n"
|> :io_lib.format([
-label_width,
"Name",
@ips_width,
"ips",
@average_width,
"average",
@deviation_width,
"deviation",
@median_width,
"median",
@percentile_width,
"99th %"
])
|> to_string
end
@spec scenario_reports([Scenario.t()], unit_per_statistic, integer) :: [String.t()]
defp scenario_reports(scenarios, units, label_width) do
Enum.map(scenarios, fn scenario ->
format_scenario(scenario, units, label_width)
end)
end
@spec format_scenario(Scenario.t(), unit_per_statistic, integer) :: String.t()
defp format_scenario(scenario, %{run_time: run_time_unit, ips: ips_unit}, label_width) do
%Scenario{
name: name,
run_time_statistics: %Statistics{
average: average,
ips: ips,
std_dev_ratio: std_dev_ratio,
median: median,
percentiles: %{99 => percentile_99}
}
} = scenario
"~*s~*ts~*ts~*ts~*ts~*ts\n"
|> :io_lib.format([
-label_width,
name,
@ips_width,
Helpers.count_output(ips, ips_unit),
@average_width,
duration_output(average, run_time_unit),
@deviation_width,
Helpers.deviation_output(std_dev_ratio),
@median_width,
duration_output(median, run_time_unit),
@percentile_width,
duration_output(percentile_99, run_time_unit)
])
|> to_string
end
@spec comparison_report([Scenario.t()], unit_per_statistic, integer, map) :: [String.t()]
defp comparison_report(scenarios, units, label_width, config)
# No need for a comparison when only one benchmark was run
defp comparison_report([_scenario], _, _, _), do: []
defp comparison_report(_, _, _, %{comparison: false}), do: []
defp comparison_report([scenario | other_scenarios], units, label_width, _) do
[
Helpers.descriptor("Comparison"),
reference_report(scenario, units, label_width)
| comparisons(scenario, units, label_width, other_scenarios)
]
end
defp reference_report(scenario, %{ips: ips_unit}, label_width) do
%Scenario{
name: name,
run_time_statistics: %Statistics{ips: ips}
} = scenario
"~*s~*s\n"
|> :io_lib.format([-label_width, name, @ips_width, Helpers.count_output(ips, ips_unit)])
|> to_string
end
@spec comparisons(Scenario.t(), unit_per_statistic, integer, [Scenario.t()]) :: [String.t()]
defp comparisons(scenario, units, label_width, scenarios_to_compare) do
%Scenario{run_time_statistics: reference_stats} = scenario
Enum.map(scenarios_to_compare, fn scenario = %Scenario{run_time_statistics: job_stats} ->
slower = reference_stats.ips / job_stats.ips
format_comparison(scenario, units, label_width, slower)
end)
end
defp format_comparison(scenario, %{ips: ips_unit}, label_width, slower) do
%Scenario{name: name, run_time_statistics: %Statistics{ips: ips}} = scenario
ips_format = Helpers.count_output(ips, ips_unit)
"~*s~*s - ~.2fx slower\n"
|> :io_lib.format([-label_width, name, @ips_width, ips_format, slower])
|> to_string
end
defp duration_output(duration, unit) do
Duration.format({Duration.scale(duration, unit), unit})
end
end
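# A minimal sketch of driving this formatter through Benchee itself. The
# option spelling for extended statistics differs across Benchee versions;
# the tuple form below is the more recent one.
Benchee.run(
  %{
    "flat_map" => fn -> Enum.flat_map(1..100, &[&1, &1]) end,
    "map + flatten" => fn -> 1..100 |> Enum.map(&[&1, &1]) |> List.flatten() end
  },
  formatters: [{Benchee.Formatters.Console, extended_statistics: true}]
)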
lib/benchee/formatters/console/run_time.ex
defmodule Bank.Transactions do
@moduledoc """
Module that provides banking transactions.
The following transactions are available: `deposit`, `withdraw`, `transfer`, `split` and `exchange`.
"""
alias Bank.Accounts
alias Ecto.Changeset
@doc """
Transaction `deposit`.
## Examples
iex> {:ok, account} = Accounts.create_account(%{account_owner: "Tupac"})
iex> Transactions.deposit(account.id, 100)
{:ok, "successfuly deposit transaction - current balance: #{Money.new(100, :BRL)}"}
iex> Transactions.deposit(99, 100)
{:error, "account: 99 not found"}
"""
def deposit(account_id, amount, currency \\ :BRL) do
amount
|> is_valid_amount?(account_id, :deposit)
|> transaction(currency)
end
@doc """
Transaction `withdraw`.
## Examples
iex> {:ok, account} = Accounts.create_account(%{account_owner: "Tupac", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> Transactions.withdraw(account.id, 100)
{:ok, "successfuly withdraw transaction - current balance: #{Money.new(400, :BRL)}"}
iex> {:ok, account} = Accounts.create_account(%{account_owner: "Tupac", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> Transactions.withdraw(account.id, 1000)
{:error, "sorry, you don't have enought balance - current balance: #{Money.new(500, :BRL)}"}
"""
def withdraw(account_id, amount) do
amount
|> is_valid_amount?(account_id, :withdraw)
|> transaction(:BRL)
end
@doc """
Transaction `transfer`.
## Examples
iex> {:ok, account1} = Accounts.create_account(%{account_owner: "Tupac", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> {:ok, account2} = Accounts.create_account(%{account_owner: "Dre", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> Transactions.transfer(account1.id, account2.id, 200)
{:ok, "successfuly transfer #{Money.new(200, :BRL)} to Dre - current balance: #{
Money.new(300, :BRL)
}"}
iex> {:ok, account1} = Accounts.create_account(%{account_owner: "Tupac", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> {:ok, account2} = Accounts.create_account(%{account_owner: "Dre", currency: Money.new(:USD, 500), balance: "U$ 500,00" })
iex> Transactions.transfer(account1.id, account2.id, 100)
{:error, "cannot transfer monies with different currencies"}
"""
def transfer(from_account, to_account, amount) do
case from_account != to_account do
true ->
amount
|> is_valid_amount?(from_account, :withdraw)
|> do_transfer(from_account, to_account)
false ->
{:error, "same account - choose another account to transfer"}
end
end
@doc """
Transaction `split`.
## Examples
iex> {:ok, account1} = Accounts.create_account(%{account_owner: "Tupac", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> {:ok, account2} = Accounts.create_account(%{account_owner: "Dre", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> Transactions.split(account1.id, [account2.id], 200)
[{:ok, "successfuly transfer #{Money.new(200, :BRL)} to Dre - current balance: #{
Money.new(300, :BRL)
}"}]
iex> {:ok, account1} = Accounts.create_account(%{account_owner: "Tupac", currency: Money.new(:BRL, 500), balance: "R$ 500,00" })
iex> Transactions.transfer(account1.id, account1.id, 100)
{:error, "same account - choose another account to transfer"}
"""
def split(from_account, accounts, amount) do
ids = for x <- Accounts.list_accounts(), do: x.id
elements =
accounts
|> Enum.filter(fn el ->
Enum.member?(ids, el)
end)
Enum.map(elements, &transfer(from_account, &1, amount / Enum.count(elements)))
end
@doc """
Transaction `exchange`.
## Examples
iex> Transactions.exchange(:BRL, :ERROR, 100)
{Cldr.UnknownCurrencyError, "The currency :ERROR is invalid"}
"""
def exchange(from_currency, to_currency, amount) do
value = cast(amount)
case Decimal.gt?(value, 0) do
true ->
value
|> Money.new(from_currency)
|> Money.to_currency(to_currency)
|> case do
{:ok, currency} ->
{:ok, "successfuly exchange: #{Money.to_string(currency) |> elem(1)}"}
{:error, reason} ->
reason
end
false ->
raise(ArgumentError, message: "amount #{value} is not allowed for exchange")
end
end
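# A success-path sketch for `exchange/3` above (the doctest only covers the
# error case). The converted figure depends on the exchange rates configured
# for ex_money, so it is illustrative only:
#
#     iex> Transactions.exchange(:BRL, :USD, 100)
#     {:ok, "successfully exchange: $19.53"}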
defp is_valid_amount?(amount, id, opts) do
account = Accounts.get_account!(id)
result = cast(amount)
result
|> Decimal.gt?(0)
|> case do
true ->
case opts do
:deposit ->
{:ok, %{amount: result, operation: opts, account: account}}
:withdraw ->
result
|> has_balance?(account, opts)
end
false ->
raise(ArgumentError, message: "amount #{amount} is not allowed for #{opts}")
end
rescue
_e in Ecto.NoResultsError ->
{:error, "account: #{id} not found"}
end
defp cast(amount) do
amount
|> Decimal.cast()
|> elem(1)
end
defp has_balance?(amount, account, opts) do
account.currency
|> Money.to_decimal()
|> Decimal.sub(amount)
|> Decimal.negative?()
|> case do
true ->
{:error, "sorry, you don't have enought balance - current balance: #{account.currency}"}
false ->
{:ok, %{amount: amount, operation: opts, account: account}}
end
end
defp transaction({:ok, attrs}, currency) do
attrs[:amount]
|> Money.new(
if attrs[:operation] == :withdraw,
do: get_currency(attrs[:account].currency),
else: currency
)
|> case do
{:error, {_, reason}} ->
{:error, reason}
result ->
result
|> operation(attrs[:account], attrs[:operation])
|> update_balance(attrs[:account].id, attrs[:operation])
end
end
defp transaction({:error, reason}, _currency), do: {:error, reason}
defp get_currency(%Money{amount: _, currency: currency}), do: currency
defp update_balance(attrs, id, opts) do
case attrs do
{:error, reason} ->
{:error, reason}
changeset ->
case Accounts.update_account(id, changeset) do
{:ok, account} ->
{:ok, "successfuly #{opts} transaction - current balance: #{account.balance}"}
{:error, changeset} ->
{:error, changeset}
end
end
end
defp operation(amount, account, :deposit) do
account.currency
|> Money.add(amount)
|> update_attrs()
end
defp operation(amount, account, :withdraw) do
account.currency
|> Money.sub(amount)
|> update_attrs()
end
defp do_transfer({:error, reason}, _, _), do: {:error, reason}
defp do_transfer({:ok, attrs}, from_account, to_account) do
amount =
attrs[:amount]
|> Money.new(get_currency(attrs[:account].currency))
from = Accounts.get_account!(from_account)
to = Accounts.get_account!(to_account)
from_attrs = Changeset.change(from, operation(amount, from, :withdraw))
to_attrs = Changeset.change(to, operation(amount, to, :deposit))
case Accounts.update_multi(from_attrs, to_attrs) do
{:ok, _} ->
{:ok,
"successfuly transfer #{amount} to #{to.account_owner} - current balance: #{
from_attrs.changes.balance
}"}
{:error, :from_account, changeset, _} ->
{:error, changeset}
{:error, :to_account, changeset, _} ->
{:error, changeset}
end
rescue
_e in FunctionClauseError ->
{:error, "cannot transfer monies with different currencies"}
_e in Ecto.NoResultsError ->
{:error, "account: #{to_account} not found"}
end
defp update_attrs({:error, {_, reason}}), do: {:error, reason}
defp update_attrs({:ok, amount}) do
%{
currency: Money.round(amount, currency_digits: 2),
balance: elem(Money.to_string(amount), 1)
}
end
end
lib/bank/transactions.ex
defmodule StepFlow.Workflows do
@moduledoc """
The Workflows context.
"""
import Ecto.Query, warn: false
alias StepFlow.Artifacts.Artifact
alias StepFlow.Jobs
alias StepFlow.Jobs.Status
alias StepFlow.Progressions.Progression
alias StepFlow.Repo
alias StepFlow.Workflows.Workflow
require Logger
@doc """
Returns the list of workflows.
## Examples
iex> list_workflows()
[%Workflow{}, ...]
"""
def list_workflows(params \\ %{}) do
page =
Map.get(params, "page", 0)
|> StepFlow.Integer.force()
size =
Map.get(params, "size", 10)
|> StepFlow.Integer.force()
offset = page * size
query =
case Map.get(params, "rights") do
nil ->
from(workflow in Workflow)
user_rights ->
from(
workflow in Workflow,
join: rights in assoc(workflow, :rights),
where: rights.action == "view",
where: fragment("?::varchar[] && ?::varchar[]", rights.groups, ^user_rights)
)
end
query =
from(workflow in subquery(query))
|> filter_query(params, :video_id)
|> filter_query(params, :identifier)
|> filter_query(params, :version_major)
|> filter_query(params, :version_minor)
|> filter_query(params, :version_micro)
|> date_before_filter_query(params, :before_date)
|> date_after_filter_query(params, :after_date)
query = filter_status(query, params)
query =
case StepFlow.Map.get_by_key_or_atom(params, :ids) do
nil ->
query
identifiers ->
from(workflow in query, where: workflow.id in ^identifiers)
end
query =
case StepFlow.Map.get_by_key_or_atom(params, :workflow_ids) do
nil ->
query
workflow_ids ->
from(
workflow in query,
where: workflow.identifier in ^workflow_ids
)
end
total_query = from(item in subquery(query), select: count(item.id))
total =
Repo.all(total_query)
|> List.first()
query =
from(
workflow in subquery(query),
order_by: [desc: :inserted_at],
offset: ^offset,
limit: ^size
)
workflows =
Repo.all(query)
|> Repo.preload([:jobs, :artifacts, :rights])
|> preload_workflows
%{
data: workflows,
total: total,
page: page,
size: size
}
end
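# A usage sketch for `list_workflows/1` above: paginated, filtered by
# identifier, and restricted to groups holding "view" rights. All values are
# illustrative.
#
#     StepFlow.Workflows.list_workflows(%{
#       "page" => 0,
#       "size" => 25,
#       "identifier" => "my_workflow",
#       "rights" => ["administrator"]
#     })
#     # => %{data: [...], total: 1, page: 0, size: 25}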
defp get_status(status, completed_status) do
if status != nil do
if Status.state_enum_label(:completed) in status do
if status == completed_status do
:completed
else
nil
end
else
if Status.state_enum_label(:error) in status do
:error
else
:processing
end
end
else
nil
end
end
defp filter_status(query, params) do
status = Map.get(params, "state")
completed_status = [
Status.state_enum_label(:completed)
]
case get_status(status, completed_status) do
nil ->
query
:completed ->
from(
workflow in query,
left_join: artifact in assoc(workflow, :artifacts),
where: not is_nil(artifact.id)
)
:error ->
completed_jobs_to_exclude =
from(
workflow in query,
join: job in assoc(workflow, :jobs),
join: status in assoc(job, :status),
where: status.state in ^completed_status,
group_by: workflow.id
)
from(
workflow in query,
join: job in assoc(workflow, :jobs),
join: status in assoc(job, :status),
where: status.state in ^status,
group_by: workflow.id,
except: ^completed_jobs_to_exclude
)
:processing ->
from(
workflow in query,
join: jobs in assoc(workflow, :jobs),
join: status in assoc(jobs, :status),
where: status.state in ^status
)
end
end
defp filter_query(query, params, key) do
case StepFlow.Map.get_by_key_or_atom(params, key) do
nil ->
query
value ->
from(workflow in query, where: field(workflow, ^key) == ^value)
end
end
defp date_before_filter_query(query, params, key) do
case StepFlow.Map.get_by_key_or_atom(params, key) do
nil ->
query
date_value ->
date = Date.from_iso8601!(date_value)
from(workflow in query, where: fragment("?::date", workflow.inserted_at) <= ^date)
end
end
defp date_after_filter_query(query, params, key) do
case StepFlow.Map.get_by_key_or_atom(params, key) do
nil ->
query
date_value ->
date = Date.from_iso8601!(date_value)
from(workflow in query, where: fragment("?::date", workflow.inserted_at) >= ^date)
end
end
@doc """
Gets a single workflow.
Raises `Ecto.NoResultsError` if the Workflow does not exist.
## Examples
iex> get_workflow!(123)
%Workflow{}
iex> get_workflow!(456)
** (Ecto.NoResultsError)
"""
def get_workflow!(id) do
Repo.get!(Workflow, id)
|> Repo.preload([:jobs, :artifacts, :rights])
|> preload_workflow
end
defp preload_workflow(workflow) do
jobs = Repo.preload(workflow.jobs, [:status, :progressions])
steps =
workflow
|> Map.get(:steps)
|> get_step_status(jobs)
workflow
|> Map.put(:steps, steps)
|> Map.put(:jobs, jobs)
end
defp preload_workflows(workflows, result \\ [])
defp preload_workflows([], result), do: result
defp preload_workflows([workflow | workflows], result) do
result = List.insert_at(result, -1, workflow |> preload_workflow)
preload_workflows(workflows, result)
end
def get_step_status(steps, workflow_jobs, result \\ [])
def get_step_status([], _workflow_jobs, result), do: result
def get_step_status(nil, _workflow_jobs, result), do: result
def get_step_status([step | steps], workflow_jobs, result) do
name = StepFlow.Map.get_by_key_or_atom(step, :name)
step_id = StepFlow.Map.get_by_key_or_atom(step, :id)
jobs =
workflow_jobs
|> Enum.filter(fn job -> job.name == name && job.step_id == step_id end)
completed = count_status(jobs, :completed)
errors = count_status(jobs, :error)
skipped = count_status(jobs, :skipped)
processing = count_status(jobs, :processing)
queued = count_status(jobs, :queued)
job_status = %{
total: length(jobs),
completed: completed,
errors: errors,
processing: processing,
queued: queued,
skipped: skipped
}
status =
cond do
errors > 0 -> :error
processing > 0 -> :processing
queued > 0 -> :processing
skipped > 0 -> :skipped
completed > 0 -> :completed
# TODO: change this case to :to_start, as the job has not started yet
true -> :queued
end
step =
step
|> Map.put(:status, status)
|> Map.put(:jobs, job_status)
result = List.insert_at(result, -1, step)
get_step_status(steps, workflow_jobs, result)
end
def get_step_definition(job) do
job = Repo.preload(job, workflow: [:jobs])
step =
Enum.filter(job.workflow.steps, fn step ->
Map.get(step, "id") == job.step_id
end)
|> List.first()
%{step: step, workflow: job.workflow}
end
defp count_status(jobs, status, count \\ 0)
defp count_status([], _status, count), do: count
defp count_status([job | jobs], status, count) do
count_completed =
job.status
|> Enum.filter(fn s -> s.state == :completed end)
|> length
# A job with at least one status.state at :completed is considered :completed
count =
if count_completed >= 1 do
if status == :completed do
count + 1
else
count
end
else
case status do
:processing ->
count_processing(job, count)
:error ->
count_error(job, count)
:skipped ->
count_skipped(job, count)
:queued ->
count_queued(job, count)
:completed ->
count
_ ->
Logger.error("unreachable status: #{inspect(status)}")
raise RuntimeError
end
end
count_status(jobs, status, count)
end
defp count_processing(job, count) do
if job.progressions == [] do
count
else
last_progression =
job.progressions
|> Progression.get_last_progression()
last_status =
job.status
|> Status.get_last_status()
cond do
last_status == nil -> count + 1
last_progression.updated_at > last_status.updated_at -> count + 1
true -> count
end
end
end
defp count_error(job, count) do
if Enum.map(job.status, fn s -> s.state end)
|> List.last()
|> Kernel.==(:error) do
count + 1
else
count
end
end
defp count_skipped(job, count) do
if Enum.map(job.status, fn s -> s.state end)
|> List.last()
|> Kernel.==(:skipped) do
count + 1
else
count
end
end
defp count_queued(job, count) do
case {Enum.map(job.status, fn s -> s.state end) |> List.last(), job.progressions} do
{nil, []} ->
count + 1
{nil, _} ->
count
{:retrying, []} ->
count + 1
{:retrying, _} ->
last_progression = job.progressions |> Progression.get_last_progression()
last_status = job.status |> Status.get_last_status()
if last_progression.updated_at > last_status.updated_at do
count
else
count + 1
end
{_state, _} ->
count
end
end
@doc """
Creates a workflow.
## Examples
iex> create_workflow(%{field: value})
{:ok, %Workflow{}}
iex> create_workflow(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_workflow(attrs \\ %{}) do
%Workflow{}
|> Workflow.changeset(attrs)
|> Repo.insert()
end
@doc """
Updates a workflow.
## Examples
iex> update_workflow(workflow, %{field: new_value})
{:ok, %Workflow{}}
iex> update_workflow(workflow, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_workflow(%Workflow{} = workflow, attrs) do
workflow
|> Workflow.changeset(attrs)
|> Repo.update()
end
def notification_from_job(job_id, description \\ nil) do
job = Jobs.get_job!(job_id)
topic = "update_workflow_" <> Integer.to_string(job.workflow_id)
channel = StepFlow.Configuration.get_slack_channel()
if StepFlow.Configuration.get_slack_token() != nil and description != nil and channel != nil do
exposed_domain_name = StepFlow.Configuration.get_exposed_domain_name()
send(
:step_flow_slack_bot,
{:message,
"Error for job #{job.name} ##{job_id} <#{exposed_domain_name}/workflows/#{
job.workflow_id
} |Open Workflow>\n```#{description}```", channel}
)
end
StepFlow.Notification.send(topic, %{workflow_id: job.workflow_id})
end
@doc """
Deletes a Workflow.
## Examples
iex> delete_workflow(workflow)
{:ok, %Workflow{}}
iex> delete_workflow(workflow)
{:error, %Ecto.Changeset{}}
"""
def delete_workflow(%Workflow{} = workflow) do
Repo.delete(workflow)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking workflow changes.
## Examples
iex> change_workflow(workflow)
%Ecto.Changeset{source: %Workflow{}}
"""
def change_workflow(%Workflow{} = workflow) do
Workflow.changeset(workflow, %{})
end
def get_workflow_history(%{scale: scale}) do
Enum.map(
0..49,
fn index ->
%{
total: query_total(scale, -index, -index - 1),
rosetta:
query_by_identifier(scale, -index, -index - 1, "FranceTV Studio Ingest Rosetta"),
ingest_rdf:
query_by_identifier(scale, -index, -index - 1, "FranceTélévisions Rdf Ingest"),
ingest_dash:
query_by_identifier(scale, -index, -index - 1, "FranceTélévisions Dash Ingest"),
process_acs: query_by_identifier(scale, -index, -index - 1, "FranceTélévisions ACS"),
process_acs_standalone:
query_by_identifier(scale, -index, -index - 1, "FranceTélévisions ACS (standalone)"),
errors: query_by_status(scale, -index, -index - 1, "error")
}
end
)
end
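# Result sketch (assuming `scale` is a unit accepted by Ecto's
# `datetime_add/3`, e.g. "hour"): a list of 50 maps, one per window, each
# shaped like
#   %{total: 12, rosetta: 3, ingest_rdf: 0, ingest_dash: 1,
#     process_acs: 2, process_acs_standalone: 0, errors: 1}
# with illustrative counts.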
defp query_total(scale, delta_min, delta_max) do
Repo.aggregate(
from(
workflow in Workflow,
where:
workflow.inserted_at > datetime_add(^NaiveDateTime.utc_now(), ^delta_max, ^scale) and
workflow.inserted_at < datetime_add(^NaiveDateTime.utc_now(), ^delta_min, ^scale)
),
:count,
:id
)
end
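# Window sketch: query_total("hour", -3, -4) counts workflows inserted
# between now - 4 hours and now - 3 hours; get_workflow_history/1 slides
# this one-unit window back once per index.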
defp query_by_identifier(scale, delta_min, delta_max, identifier) do
Repo.aggregate(
from(
workflow in Workflow,
where:
workflow.identifier == ^identifier and
workflow.inserted_at > datetime_add(^NaiveDateTime.utc_now(), ^delta_max, ^scale) and
workflow.inserted_at < datetime_add(^NaiveDateTime.utc_now(), ^delta_min, ^scale)
),
:count,
:id
)
end
defp query_by_status(scale, delta_min, delta_max, status) do
Repo.aggregate(
from(
status in Status,
where:
status.state == ^status and
status.inserted_at > datetime_add(^NaiveDateTime.utc_now(), ^delta_max, ^scale) and
status.inserted_at < datetime_add(^NaiveDateTime.utc_now(), ^delta_min, ^scale)
),
:count,
:id
)
end
def get_completed_statistics(scale, delta) do
query =
from(
workflow in Workflow,
inner_join:
artifacts in subquery(
from(
artifacts in Artifact,
where:
artifacts.inserted_at > datetime_add(^NaiveDateTime.utc_now(), ^delta, ^scale),
group_by: artifacts.workflow_id,
select: %{
workflow_id: artifacts.workflow_id,
inserted_at: max(artifacts.inserted_at)
}
)
),
on: workflow.id == artifacts.workflow_id,
group_by: workflow.identifier,
select: %{
count: count(),
duration:
fragment(
"EXTRACT(EPOCH FROM (SELECT avg(? - ?)))",
artifacts.inserted_at,
workflow.inserted_at
),
identifier: workflow.identifier
}
)
Repo.all(query)
end
end
lib/step_flow/workflows/workflows.ex
defmodule Bolt.Cogs.Kick do
@moduledoc false
@behaviour Nosedrum.Command
alias Nosedrum.Predicates
alias Bolt.{Converters, ErrorFormatters, Helpers, Humanizer, ModLog, Repo, Schema.Infraction}
alias Nostrum.Api
require Logger
@impl true
def usage, do: ["kick <user:member> [reason:str...]"]
@impl true
def description,
do: """
Kick the given member with an optional reason.
An infraction is stored in the infraction database, and can be retrieved later.
Requires the `KICK_MEMBERS` permission.
**Examples**:
```rs
// kick Dude without an explicit reason
kick @Dude#0001
// kick Dude with an explicit reason
kick @Dude#0001 spamming cats when asked to post ducks
```
"""
@impl true
def predicates,
do: [&Predicates.guild_only/1, Predicates.has_permission(:kick_members)]
@impl true
def command(msg, [user | reason_list]) do
response =
with reason <- Enum.join(reason_list, " "),
{:ok, member} <- Converters.to_member(msg.guild_id, user),
{:ok, true} <- Helpers.is_above(msg.guild_id, msg.author.id, member.user.id),
{:ok} <- Api.remove_guild_member(msg.guild_id, member.user.id),
infraction <- %{
type: "kick",
guild_id: msg.guild_id,
user_id: member.user.id,
actor_id: msg.author.id,
reason: if(reason != "", do: reason, else: nil)
},
changeset <- Infraction.changeset(%Infraction{}, infraction),
{:ok, _created_infraction} <- Repo.insert(changeset) do
ModLog.emit(
msg.guild_id,
"INFRACTION_CREATE",
"#{Humanizer.human_user(msg.author)} kicked" <>
" #{Humanizer.human_user(member.user)}" <>
if(reason != "", do: " with reason `#{reason}`", else: "")
)
response = "👌 kicked #{Humanizer.human_user(member.user)}"
if reason != "" do
response <> " with reason `#{Helpers.clean_content(reason)}`"
else
response
end
else
{:ok, false} ->
"🚫 you need to be above the target user in the role hierarchy"
error ->
ErrorFormatters.fmt(msg, error)
end
{:ok, _msg} = Api.create_message(msg.channel_id, response)
end
def command(msg, _args) do
response = "ℹ️ usage: `kick <user:member> [reason:str...]`"
{:ok, _msg} = Api.create_message(msg.channel_id, response)
end
end
lib/bolt/cogs/kick.ex
defmodule Entice.Logic.Skills do
use Entice.Logic.Skill
use Entice.Logic.Attributes
defskill NoSkill, id: 0 do
def description, do: "Non-existing skill as a placeholder for empty skillbar slots."
def cast_time, do: 0
def recharge_time, do: 0
def energy_cost, do: 0
end
defskill HealingSignet, id: 1 do
def description, do: "You gain 82...154...172 Health. You have -40 armor while using this skill."
def cast_time, do: 2000
def recharge_time, do: 4000
def energy_cost, do: 0
def apply_effect(_target, caster),
do: heal(caster, 10)
end
defskill ResurrectionSignet, id: 2 do
def description, do: "Resurrects target party member (100% Health, 25% Energy). This signet only recharges when you gain a morale boost."
def cast_time, do: 3000
def recharge_time, do: 0
def energy_cost, do: 0
def check_requirements(target, _caster),
do: require_dead(target)
def apply_effect(target, _caster),
do: resurrect(target, 100, 25)
end
defskill SignetOfCapture, id: 3 do
def description, do: "Choose one skill from a nearby dead Boss of your profession. Signet of Capture is permanently replaced by that skill. If that skill was elite, gain 250 XP for every level you have earned."
def cast_time, do: 2000
def recharge_time, do: 2000
def energy_cost, do: 0
end
defskill Bamph, id: 4 do
def description, do: "BAMPH!"
def cast_time, do: 0
def recharge_time, do: 0
def energy_cost, do: 0
def apply_effect(target, _caster),
do: damage(target, 10)
end
defskill PowerBlock, id: 5 do
def description, do: "If target foe is casting a spell or chant, that skill and all skills of the same attribute are disabled (1...10...12 seconds) and that skill is interrupted."
def cast_time, do: 250
def recharge_time, do: 20000
def energy_cost, do: 15
end
defskill MantraOfEarth, id: 6 do
def description, do: "(30...78...90 seconds.) Reduces earth damage you take by 26...45...50%. You gain 2 Energy when you take earth damage."
def cast_time, do: 0
def recharge_time, do: 20000
def energy_cost, do: 10
end
defskill MantraOfFlame, id: 7 do
def description, do: "(30...78...90 seconds.) Reduces fire damage you take by 26...45...50%. You gain 2 Energy when you take fire damage."
def cast_time, do: 0
def recharge_time, do: 20000
def energy_cost, do: 10
end
defskill MantraOfFrost, id: 8 do
def description, do: "(30...78...90 seconds.) Reduces cold damage you take by 26...45...50%. You gain 2 Energy when you take cold damage."
def cast_time, do: 0
def recharge_time, do: 20000
def energy_cost, do: 10
end
defskill MantraOfLightning, id: 9 do
def description, do: "(30...78...90 seconds.) Reduces lightning damage you take by 26...45...50%. You gain 2 Energy when you take lightning damage."
def cast_time, do: 0
def recharge_time, do: 20000
def energy_cost, do: 10
end
defskill HexBreaker, id: 10 do
def description, do: "(5...65...80 seconds.) The next hex against you fails and the caster takes 10...39...46 damage."
def cast_time, do: 0
def recharge_time, do: 15000
def energy_cost, do: 5
end
defskill Distortion, id: 11 do
def description, do: "(1...4...5 seconds.) You have 75% chance to block. Block cost: you lose 2 Energy or Distortion ends."
def cast_time, do: 0
def recharge_time, do: 8000
def energy_cost, do: 5
end
end
lib/entice/logic/skills/skills.ex
defmodule ESx.Schema do
@moduledoc """
Defines schemas for Elasticsearch using keyword lists or a DSL.
## DSL Example
defmodule MyApp.Blog do
use ESx.Schema
index_name "blog" # Optional
document_type "blog" # Optional
mapping _all: [enabled: false], _ttl: [enabled: true, default: "180d"] do
indexes :title, type: "string"
indexes :content, type: "string"
indexes :publish, type: "boolean"
end
settings number_of_shards: 10, number_of_replicas: 2 do
analysis do
filter :ja_posfilter,
type: "kuromoji_neologd_part_of_speech",
stoptags: ["助詞-格助詞-一般", "助詞-終助詞"]
tokenizer :ja_tokenizer,
type: "kuromoji_neologd_tokenizer"
analyzer :default,
type: "custom", tokenizer: "ja_tokenizer",
filter: ["kuromoji_neologd_baseform", "ja_posfilter", "cjk_width"]
end
end
end
## Keyword lists Example
defmodule Something.Schema do
use ESx.Schema
mapping [
_ttl: [
enabled: true,
default: "180d"
],
_all: [
enabled: false
],
properties: [
title: [
type: "string",
analyzer: "ja_analyzer"
],
publish: [
type: "boolean"
],
content: [
type: "string",
analyzer: "ja_analyzer"
]
]
]
settings [
number_of_shards: 1,
number_of_replicas: 0,
analysis: [
analyzer: [
ja_analyzer: [
type: "custom",
tokenizer: "kuromoji_neologd_tokenizer",
filter: ["kuromoji_neologd_baseform", "cjk_width"],
]
]
]
]
end
## Analysis
`ESx.Schema.Analysis`
## Mapping
`ESx.Schema.Mapping`
## Naming
`ESx.Schema.Naming`
## Reflection
Any schema module will generate the `__es_mapping__`, `__es_analysis__`, and
`__es_naming__` functions, which can be used for runtime introspection of the schema:
* `__es_analysis__(:settings)` - Returns the analysis settings
* `__es_analysis__(:to_map)` - Returns the analysis settings as a map
* `__es_analysis__(:as_json)` - Returns the analysis settings as a map (same as `:to_map`)
* `__es_analysis__(:types)` - Returns the field types, which are part of the analysis settings
* `__es_analysis__(:type, field)` - Returns the type of the given field
* `__es_mapping__(:settings)` - Returns the properties mapping
* `__es_mapping__(:to_map)` - Returns the properties mapping as a map
* `__es_mapping__(:as_json)` - Returns the properties mapping as a map (same as `:to_map`)
* `__es_mapping__(:types)` - Returns the field types, which are part of the properties mapping
* `__es_mapping__(:type, field)` - Returns the type of the given field
* `__es_naming__(:index_name)` - Returns the document index's name
* `__es_naming__(:document_type)` - Returns the document type's name
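For example, given the `MyApp.Blog` schema above (a sketch; exact return
values depend on your schema definition):
    MyApp.Blog.__es_naming__(:index_name)
    # => "blog"
    MyApp.Blog.__es_mapping__(:type, :title)
    # => "string"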
"""
@doc false
defmacro __using__(_opts) do
quote do
use ESx.Schema.{Mapping, Analysis, Naming}
def as_indexed_json(%{} = schema, _opts) do
types = ESx.Funcs.to_mod(schema).__es_mapping__(:types)
Map.take(schema, Keyword.keys(types))
end
defoverridable as_indexed_json: 2
end
end
end
lib/esx/schema.ex
defmodule Pow.Ecto.Schema.Password do
@moduledoc """
Simple wrapper for password hash and verification.
The password hash format is based on [Pbkdf2](https://github.com/riverrun/pbkdf2_elixir)
## Configuration
This module can be configured by setting the `Pow.Ecto.Schema.Password` key
for the `:pow` app:
config :pow, Pow.Ecto.Schema.Password,
iterations: 100_000,
length: 64,
digest: :sha512,
salt_length: 16
For the test environment it's recommended to set the iterations to 1:
config :pow, Pow.Ecto.Schema.Password, iterations: 1
"""
alias Pow.Ecto.Schema.Password.Pbkdf2
@doc """
Generates an encoded PBKDF2 hash.
By default this is a `PBKDF2-SHA512` hash with 100,000 iterations, with a
random salt. The hash, salt, iterations and digest method will be part of
the returned binary. The hash and salt are Base64 encoded.
## Options
* `:iterations` - defaults to 100_000;
* `:length` - a length in octets for the derived key. Defaults to 64;
* `:digest` - an HMAC function to use as the pseudo-random function. Defaults to `:sha512`;
* `:salt` - a salt binary to use. Defaults to a randomly generated binary;
* `:salt_length` - a length for the random salt binary. Defaults to 16;
"""
@spec pbkdf2_hash(binary(), Keyword.t() | nil) :: binary()
def pbkdf2_hash(secret, opts \\ nil) do
opts = opts || Application.get_env(:pow, __MODULE__, [])
iterations = Keyword.get(opts, :iterations, 100_000)
length = Keyword.get(opts, :length, 64)
digest = Keyword.get(opts, :digest, :sha512)
salt_length = Keyword.get(opts, :salt_length, 16)
salt = Keyword.get(opts, :salt, :crypto.strong_rand_bytes(salt_length))
hash = Pbkdf2.generate(secret, salt, iterations, length, digest)
encode(digest, iterations, salt, hash)
end
@doc """
Verifies that the secret matches the encoded binary.
A PBKDF2 hash will be generated from the secret with the same options as
found in the encoded binary. The hash, salt, iterations and digest method
are parsed from the encoded binary. The hash and salt are decoded as
Base64 encoded binaries.
## Options
* `:length` - a length in octets for the derived key. Defaults to 64;
"""
@spec pbkdf2_verify(binary(), binary(), Keyword.t()) :: boolean()
def pbkdf2_verify(secret, secret_hash, opts \\ []) do
secret_hash
|> decode()
|> verify(secret, opts)
end
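# Round-trip sketch (default options; the encoded hash embeds digest,
# iterations and salt, so verification needs no extra configuration):
#
#     hash = Pow.Ecto.Schema.Password.pbkdf2_hash("secret")
#     Pow.Ecto.Schema.Password.pbkdf2_verify("secret", hash)
#     # => true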
defp encode(digest, iterations, salt, hash) do
salt = Base.encode64(salt)
hash = Base.encode64(hash)
"$pbkdf2-#{digest}$#{iterations}$#{salt}$#{hash}"
end
defp decode(hash) do
case String.split(hash, "$", trim: true) do
["pbkdf2-" <> digest, iterations, salt, hash] ->
{:ok, salt} = Base.decode64(salt)
{:ok, hash} = Base.decode64(hash)
digest = String.to_existing_atom(digest)
iterations = String.to_integer(iterations)
[digest, iterations, salt, hash]
_ ->
raise_not_valid_password_hash!()
end
end
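# For example, decoding "$pbkdf2-sha512$100000$c2FsdA==$aGFzaA=="
# yields [:sha512, 100000, "salt", "hash"].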
defp verify([digest, iterations, salt, hash], secret, opts) do
length = Keyword.get(opts, :length, 64)
secret_hash = Pbkdf2.generate(secret, salt, iterations, length, digest)
Pbkdf2.compare(hash, secret_hash)
end
@spec raise_not_valid_password_hash!() :: no_return()
defp raise_not_valid_password_hash!,
do: raise ArgumentError, "not a valid encoded password hash"
end
lib/pow/ecto/schema/password.ex
defmodule Bpmn do
@moduledoc """
BPMN Execution Engine
=====================
Hashiru BPMN allows you to execute any BPMN process in Elixir.
Each node in the BPMN process can be mapped to the appropriate Elixir token and added to a process.
Each loaded process will be added to a Registry under the id of that process.
From there they can be loaded by any node and executed by the system.
Node definitions
================
Each node can be represented in Elixir as a token in the following format: {:bpmn_node_type, :any_data_type}
The nodes can return one of the following sets of data:
- {:ok, context} => The process has completed successfully and returned some data in the context
- {:error, _message, %{field: "Error message"}} => Error in the execution of the process with message and fields
- {:manual, _} => The process has reached an external manual activity
- {:fatal, _} => Fatal error in the execution of the process
- {:not_implemented} => Process reached an unimplemented section of the process
Events
------
### End Event
BPMN definition:
<bpmn:endEvent id="EndEvent_1s3wrav">
<bpmn:incoming>SequenceFlow_1keu1zs</bpmn:incoming>
<bpmn:errorEventDefinition />
</bpmn:endEvent>
Elixir token:
{:bpmn_event_end,
%{
id: "EndEvent_1s3wrav",
name: "END",
incoming: ["SequenceFlow_1keu1zs"],
definitions: [{:error_event_definition, %{}}]
}
}
"""
@doc """
Parse a string representation of a process into an executable process representation
"""
def parse(_process) do
{:ok, %{"start_node_id" => {:bpmn_event_start, %{}}}}
end
@doc """
Get a node from a process by target id
"""
def next(target, process) do
Map.get(process, target)
end
@doc """
Release token to another target node
"""
def releaseToken(targets, context) when is_list(targets) do
targets
|> Task.async_stream(&(releaseToken(&1, context)))
|> Enum.reduce({:ok, context}, &reduce_result/2)
end
def releaseToken(target, context) do
process = Bpmn.Context.get(context, :process)
next = next(target, process)
case next do
nil -> {:error, "Unable to find node '#{target}'"}
_ -> execute(next, context)
end
end
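# Usage sketch (`Bpmn.Context.put/3` is assumed here as the counterpart of
# the `Bpmn.Context.get/2` call above):
#
#     {:ok, process} = Bpmn.parse(definition)
#     context = Bpmn.Context.put(context, :process, process)
#     Bpmn.releaseToken("start_node_id", context)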
@doc """
Execute a node in the process
"""
def execute({:bpmn_event_start, _} = elem, context), do: Bpmn.Event.Start.tokenIn(elem, context)
def execute({:bpmn_event_end, _} = elem, context), do: Bpmn.Event.End.tokenIn(elem, context)
def execute({:bpmn_event_intermediate, _} = elem, context), do: Bpmn.Event.Intermediate.tokenIn(elem, context)
def execute({:bpmn_event_boundary, _} = elem, context), do: Bpmn.Event.Boundary.tokenIn(elem, context)
def execute({:bpmn_activity_task_user, _} = elem, context), do: Bpmn.Activity.Task.User.tokenIn(elem, context)
def execute({:bpmn_activity_task_script, _} = elem, context), do: Bpmn.Activity.Task.Script.tokenIn(elem, context)
def execute({:bpmn_activity_task_service, _} = elem, context), do: Bpmn.Activity.Task.Service.tokenIn(elem, context)
def execute({:bpmn_activity_task_manual, _} = elem, context), do: Bpmn.Activity.Task.Manual.tokenIn(elem, context)
def execute({:bpmn_activity_task_send, _} = elem, context), do: Bpmn.Activity.Task.Send.tokenIn(elem, context)
def execute({:bpmn_activity_task_receive, _} = elem, context), do: Bpmn.Activity.Task.Receive.tokenIn(elem, context)
def execute({:bpmn_activity_subprocess, _} = elem, context), do: Bpmn.Activity.Subprocess.tokenIn(elem, context)
def execute({:bpmn_activity_subprocess_embeded, _} = elem, context), do: Bpmn.Activity.Subprocess.Embedded.tokenIn(elem, context)
def execute({:bpmn_gateway_exclusive, _} = elem, context), do: Bpmn.Gateway.Exclusive.tokenIn(elem, context)
def execute({:bpmn_gateway_exclusive_event, _} = elem, context), do: Bpmn.Gateway.Exclusive.Event.tokenIn(elem, context)
def execute({:bpmn_gateway_parallel, _} = elem, context), do: Bpmn.Gateway.Parallel.tokenIn(elem, context)
def execute({:bpmn_gateway_inclusive, _} = elem, context), do: Bpmn.Gateway.Inclusive.tokenIn(elem, context)
def execute({:bpmn_gateway_complex, _} = elem, context), do: Bpmn.Gateway.Complex.tokenIn(elem, context)
def execute({:bpmn_sequence_flow, _} = elem, context), do: Bpmn.SequenceFlow.tokenIn(elem, context)
def execute(_elem, _context) do
  # Unknown token types are ignored
  nil
end
defp reduce_result({:ok, {:ok, _} = result}, {:ok, _}), do: result
defp reduce_result({:ok, {:error, _} = result}, {:ok, _}), do: result
defp reduce_result({:ok, {:error, _}}, {:error, _} = acc), do: acc
defp reduce_result({:ok, {:fatal, _} = result}, _), do: result
defp reduce_result({:ok, {:not_implemented} = result}, _), do: result
defp reduce_result({:ok, {:manual, _} = result}, {:ok, _}), do: result
defp reduce_result({:ok, {:manual, _}}, acc), do: acc
end
lib/bpmn.ex
defmodule Graph do
@moduledoc """
This module defines a graph data structure, which supports directed and undirected graphs, in both acyclic and cyclic forms.
It also defines the API for creating, manipulating, and querying that structure.
As far as memory usage is concerned, `Graph` should be fairly compact in memory, but if you want to do a rough
comparison between the memory usage for a graph between `libgraph` and `digraph`, use `:digraph.info/1` and
`Graph.info/1` on the two graphs, and both results will contain memory usage information. Keep in mind we don't have a precise
way to measure the memory usage of a term in memory, whereas ETS is able to give a more precise answer, but we do have
a fairly good way to estimate the usage of a term, and we use that method within `libgraph`.
The Graph struct is structured like so:
- A map of vertex ids to vertices (`vertices`)
- A map of vertex ids to their out neighbors (`out_edges`),
- A map of vertex ids to their in neighbors (`in_edges`), effectively the transposition of `out_edges`
- A map of vertex ids to vertex labels (`vertex_labels`), (labels are only stored if a non-nil label was provided)
- A map of edge ids (where an edge id is simply a tuple of `{vertex_id, vertex_id}`) to a map of edge metadata (`edges`)
- Edge metadata is a map of `label => weight`, and each entry in that map represents a distinct edge. This allows
us to support multiple edges in the same direction between the same pair of vertices, but for many purposes simply
treat them as a single logical edge.
This structure is designed to be as efficient as possible once a graph is built, but it turned out that it is also
quite efficient for manipulating the graph as well. For example, splitting an edge and introducing a new vertex on that
edge can be done with very little effort. We use vertex ids everywhere because we can generate them without any lookups,
we don't incur any copies of the vertex structure, and they are very efficient as keys in a map.
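For illustration, after `Graph.new |> Graph.add_edge(:a, :b, weight: 2)` the
`edges` map holds a single entry of the form `%{{id_a, id_b} => %{nil => 2}}`
(vertex ids elided): one logical edge with one unlabelled entry.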
"""
defstruct in_edges: %{},
out_edges: %{},
edges: %{},
vertex_labels: %{},
vertices: %{},
type: :directed,
vertex_identifier: &Graph.Utils.vertex_id/1
alias Graph.{Edge, EdgeSpecificationError}
@typedoc """
Identifier of a vertex. By default a non_neg_integer from `Graph.Utils.vertex_id/1` utilizing `:erlang.phash2`.
"""
@type vertex_id :: non_neg_integer() | term()
@type vertex :: term
@type label :: term
@type edge_weight :: integer | float
@type edge_key :: {vertex_id, vertex_id}
@type edge_value :: %{label => edge_weight}
@type graph_type :: :directed | :undirected
@type t :: %__MODULE__{
in_edges: %{vertex_id => MapSet.t()},
out_edges: %{vertex_id => MapSet.t()},
edges: %{edge_key => edge_value},
vertex_labels: %{vertex_id => term},
vertices: %{vertex_id => vertex},
type: graph_type,
vertex_identifier: (vertex() -> term())
}
@type graph_info :: %{
:num_edges => non_neg_integer(),
:num_vertices => non_neg_integer(),
:size_in_bytes => number(),
:type => :directed | :undirected
}
@doc """
Creates a new graph using the provided options.
## Options
- `type: :directed | :undirected`, specifies what type of graph this is. Defaults to a `:directed` graph.
- `vertex_identifier`: a function which accepts a vertex and returns a unique identifier of said vertex.
Defaults to `Graph.Utils.vertex_id/1`, a hash of the whole vertex utilizing `:erlang.phash2/2`.
## Example
iex> Graph.new()
#Graph<type: directed, vertices: [], edges: []>
iex> g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :a}])
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b}]
iex> g = Graph.new(type: :directed) |> Graph.add_edges([{:a, :b}, {:b, :a}])
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b}, %Graph.Edge{v1: :b, v2: :a}]
iex> g = Graph.new(vertex_identifier: fn v -> :erlang.phash2(v) end) |> Graph.add_edges([{:a, :b}, {:b, :a}])
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b}, %Graph.Edge{v1: :b, v2: :a}]
"""
def new(opts \\ []) do
type = Keyword.get(opts, :type) || :directed
vertex_identifier = Keyword.get(opts, :vertex_identifier) || (&Graph.Utils.vertex_id/1)
%__MODULE__{type: type, vertex_identifier: vertex_identifier}
end
@doc """
Returns a map of summary information about this graph.
NOTE: The `size_in_bytes` value is an estimate, not a perfectly precise value, but
should be close enough to be useful.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = g |> Graph.add_edges([{:a, :b}, {:b, :c}])
...> match?(%{type: :directed, num_vertices: 4, num_edges: 2}, Graph.info(g))
true
"""
@spec info(t) :: graph_info()
def info(%__MODULE__{type: type} = g) do
%{
type: type,
num_edges: num_edges(g),
num_vertices: num_vertices(g),
size_in_bytes: Graph.Utils.sizeof(g)
}
end
@doc """
Converts the given Graph to DOT format, which can then be converted to
a number of other formats via Graphviz, e.g. `dot -Tpng out.dot > out.png`.
If labels are set on a vertex, then those labels are used in the DOT output
in place of the vertex itself. If no labels were set, then the vertex is
stringified if it's a primitive type and inspected if it's not, in which
case the inspect output will be quoted and used as the vertex label in the DOT file.
Edge labels and weights will be shown as attributes on the edge definitions, otherwise
they use the same labelling scheme for the involved vertices as described above.
NOTE: Currently this function assumes graphs are directed graphs, but in the future
it will support undirected graphs as well.
## Example
> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
> g = Graph.add_edges([{:a, :b}, {:b, :c}, {:b, :d}, {:c, :d}])
> g = Graph.label_vertex(g, :a, :start)
> g = Graph.label_vertex(g, :d, :finish)
> g = Graph.update_edge(g, :b, :d, weight: 3)
> IO.puts(Graph.to_dot(g))
strict digraph {
start
b
c
finish
start -> b [weight=1]
b -> c [weight=1]
b -> finish [weight=3]
c -> finish [weight=1]
}
"""
@spec to_dot(t) :: {:ok, binary} | {:error, term}
def to_dot(%__MODULE__{} = g) do
Graph.Serializers.DOT.serialize(g)
end
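@doc """
Converts the given Graph to edgelist format, via `Graph.Serializers.Edgelist`.
"""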
@spec to_edgelist(t) :: {:ok, binary} | {:error, term}
def to_edgelist(%__MODULE__{} = g) do
Graph.Serializers.Edgelist.serialize(g)
end
@doc """
Returns the number of edges in the graph.
Pseudo-edges (label/weight pairs applied to an edge) are not counted, only distinct
vertex pairs where an edge exists between them are counted.
## Example
iex> g = Graph.add_edges(Graph.new, [{:a, :b}, {:b, :c}, {:a, :a}])
...> Graph.num_edges(g)
3
"""
@spec num_edges(t) :: non_neg_integer
def num_edges(%__MODULE__{out_edges: oe, edges: meta}) do
Enum.reduce(oe, 0, fn {from, tos}, sum ->
Enum.reduce(tos, sum, fn to, s ->
s + map_size(Map.get(meta, {from, to}))
end)
end)
end
@doc """
Returns the number of vertices in the graph
## Example
iex> g = Graph.add_vertices(Graph.new, [:a, :b, :c])
...> Graph.num_vertices(g)
3
"""
@spec num_vertices(t) :: non_neg_integer
def num_vertices(%__MODULE__{vertices: vs}) do
map_size(vs)
end
@doc """
Returns true if and only if the graph `g` is a tree.
This function always returns false for undirected graphs.
NOTE: Multiple edges between the same pair of vertices in the same direction are
considered a single edge when determining if the provided graph is a tree.
"""
@spec is_tree?(t) :: boolean
def is_tree?(%__MODULE__{type: :undirected}), do: false
def is_tree?(%__MODULE__{out_edges: es, vertices: vs} = g) do
num_edges = Enum.reduce(es, 0, fn {_, out}, sum -> sum + MapSet.size(out) end)
if num_edges == map_size(vs) - 1 do
length(components(g)) == 1
else
false
end
end
@doc """
Returns true if the graph is an arborescence: a directed acyclic graph
whose *root* vertex has a unique path from itself
to every other vertex in the graph.
"""
@spec is_arborescence?(t) :: boolean
def is_arborescence?(%__MODULE__{type: :undirected}), do: false
def is_arborescence?(%__MODULE__{} = g), do: Graph.Directed.is_arborescence?(g)
@doc """
Returns the root vertex of the arborescence, if one exists, otherwise nil.
"""
@spec arborescence_root(t) :: vertex | nil
def arborescence_root(%__MODULE__{type: :undirected}), do: nil
def arborescence_root(%__MODULE__{} = g), do: Graph.Directed.arborescence_root(g)
@doc """
Returns true if and only if the graph `g` is acyclic.
"""
@spec is_acyclic?(t) :: boolean
defdelegate is_acyclic?(g), to: Graph.Directed
@doc """
Returns true if the graph `g` is not acyclic.
"""
@spec is_cyclic?(t) :: boolean
def is_cyclic?(%__MODULE__{} = g), do: not is_acyclic?(g)
@doc """
Returns true if graph `g1` is a subgraph of `g2`.
A graph is a subgraph of another graph if its vertices and edges
are a subset of that graph's vertices and edges.
## Example
iex> g1 = Graph.new |> Graph.add_vertices([:a, :b, :c, :d]) |> Graph.add_edge(:a, :b) |> Graph.add_edge(:b, :c)
...> g2 = Graph.new |> Graph.add_vertices([:b, :c]) |> Graph.add_edge(:b, :c)
...> Graph.is_subgraph?(g2, g1)
true
iex> g1 = Graph.new |> Graph.add_vertices([:a, :b, :c, :d]) |> Graph.add_edges([{:a, :b}, {:b, :c}])
...> g2 = Graph.new |> Graph.add_vertices([:b, :c, :e]) |> Graph.add_edges([{:b, :c}, {:c, :e}])
...> Graph.is_subgraph?(g2, g1)
false
"""
@spec is_subgraph?(t, t) :: boolean
def is_subgraph?(%__MODULE__{} = a, %__MODULE__{} = b) do
meta1 = a.edges
vs1 = a.vertices
meta2 = b.edges
vs2 = b.vertices
for {v, _} <- vs1 do
unless Map.has_key?(vs2, v), do: throw(:not_subgraph)
end
for {edge_key, g1_edge_meta} <- meta1 do
case Map.fetch(meta2, edge_key) do
{:ok, g2_edge_meta} ->
unless MapSet.subset?(MapSet.new(g1_edge_meta), MapSet.new(g2_edge_meta)) do
throw(:not_subgraph)
end
_ ->
throw(:not_subgraph)
end
end
true
catch
:throw, :not_subgraph ->
false
end
@doc """
See `dijkstra/3`.
"""
@spec get_shortest_path(t, vertex, vertex) :: [vertex] | nil
defdelegate get_shortest_path(g, a, b), to: Graph.Pathfinding, as: :dijkstra
@doc """
Gets the shortest path between `a` and `b`.
As indicated by the name, this uses Dijkstra's algorithm for locating the shortest path, which
means that edge weights are taken into account when determining which vertices to search next. By
default, all edges have a weight of 1, so vertices are inspected at random; which causes this algorithm
to perform a naive depth-first search of the graph until a path is found. If your edges are weighted however,
this will allow the algorithm to more intelligently navigate the graph.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :c}, {:c, :d}, {:b, :d}])
...> Graph.dijkstra(g, :a, :d)
[:a, :b, :d]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :c}, {:b, :c}, {:b, :d}])
...> Graph.dijkstra(g, :a, :d)
nil
"""
@spec dijkstra(t, vertex, vertex) :: [vertex] | nil
defdelegate dijkstra(g, a, b), to: Graph.Pathfinding
@doc """
Gets the shortest path between `a` and `b`.
The A* algorithm is very much like Dijkstra's algorithm, except in addition to edge weights, A*
also considers a heuristic function for determining the lower bound of the cost to go from vertex
`v` to `b`. The lower bound *must* be less than the cost of the shortest path from `v` to `b`, otherwise
it will do more harm than good. Dijkstra's algorithm can be reframed as A* where `lower_bound(v)` is always 0.
This function puts the heuristics in your hands, so you must provide the heuristic function, which should take
a single parameter, `v`, which is the vertex being currently examined. Your heuristic should then determine what the
lower bound for the cost to reach `b` from `v` is, and return that value.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :c}, {:c, :d}, {:b, :d}])
...> Graph.a_star(g, :a, :d, fn _ -> 0 end)
[:a, :b, :d]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :c}, {:b, :c}, {:b, :d}])
...> Graph.a_star(g, :a, :d, fn _ -> 0 end)
nil
"""
@spec a_star(t, vertex, vertex, (vertex -> integer)) :: [vertex] | nil
defdelegate a_star(g, a, b, hfun), to: Graph.Pathfinding
@doc """
Builds a list of paths between vertex `a` and vertex `b`.
The algorithm used here is a depth-first search, which evaluates the whole
graph until all paths are found. Order is guaranteed to be deterministic,
but not guaranteed to be in any meaningful order (i.e. shortest to longest).
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :c}, {:c, :d}, {:b, :d}, {:c, :a}])
...> Graph.get_paths(g, :a, :d)
[[:a, :b, :c, :d], [:a, :b, :d]]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :c}, {:b, :c}, {:b, :d}])
...> Graph.get_paths(g, :a, :d)
[]
"""
@spec get_paths(t, vertex, vertex) :: [[vertex]]
defdelegate get_paths(g, a, b), to: Graph.Pathfinding, as: :all
@doc """
Return a list of all the edges, where each edge is expressed as a tuple
of `{A, B}`, where the elements are the vertices involved, and implying the
direction of the edge to be from `A` to `B`.
NOTE: You should be careful when using this on dense graphs, as it produces
lists with whatever you've provided as vertices, with likely many copies of
each. I'm not sure if those copies are shared in-memory as they are unchanged,
so it *should* be fairly compact in memory, but I have not verified that to be sure.
## Example
iex> g = Graph.new |> Graph.add_vertex(:a) |> Graph.add_vertex(:b) |> Graph.add_vertex(:c)
...> g = g |> Graph.add_edge(:a, :c) |> Graph.add_edge(:b, :c)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :c}, %Graph.Edge{v1: :b, v2: :c}]
"""
@spec edges(t) :: [Edge.t()]
def edges(%__MODULE__{out_edges: edges, edges: meta, vertices: vs}) do
edges
|> Enum.flat_map(fn {source_id, out_neighbors} ->
source = Map.get(vs, source_id)
out_neighbors
|> Enum.flat_map(fn out_neighbor ->
target = Map.get(vs, out_neighbor)
meta = Map.get(meta, {source_id, out_neighbor})
Enum.map(meta, fn {label, weight} ->
Edge.new(source, target, label: label, weight: weight)
end)
end)
end)
end
@doc """
Returns a list of all edges inbound or outbound from vertex `v`.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :c}])
...> Graph.edges(g, :b)
[%Graph.Edge{v1: :a, v2: :b}, %Graph.Edge{v1: :b, v2: :c}]
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :c}])
...> Graph.edges(g, :d)
[]
"""
@spec edges(t, vertex) :: [Edge.t()]
def edges(
%__MODULE__{
in_edges: ie,
out_edges: oe,
edges: meta,
vertices: vs,
vertex_identifier: vertex_identifier
},
v
) do
v_id = vertex_identifier.(v)
v_in = Map.get(ie, v_id) || MapSet.new()
v_out = Map.get(oe, v_id) || MapSet.new()
v_all = MapSet.union(v_in, v_out)
e_in =
Enum.flat_map(v_all, fn v2_id ->
case Map.get(meta, {v2_id, v_id}) do
nil ->
[]
edge_meta when is_map(edge_meta) ->
v2 = Map.get(vs, v2_id)
for {label, weight} <- edge_meta do
Edge.new(v2, v, label: label, weight: weight)
end
end
end)
e_out =
Enum.flat_map(v_all, fn v2_id ->
case Map.get(meta, {v_id, v2_id}) do
nil ->
[]
edge_meta when is_map(edge_meta) ->
v2 = Map.get(vs, v2_id)
for {label, weight} <- edge_meta do
Edge.new(v, v2, label: label, weight: weight)
end
end
end)
e_in ++ e_out
end
@doc """
Returns a list of all edges between `v1` and `v2`.
## Example
iex> g = Graph.new |> Graph.add_edge(:a, :b, label: :uses)
...> g = Graph.add_edge(g, :a, :b, label: :contains)
...> Graph.edges(g, :a, :b)
[%Graph.Edge{v1: :a, v2: :b, label: :contains}, %Graph.Edge{v1: :a, v2: :b, label: :uses}]
iex> g = Graph.new(type: :undirected) |> Graph.add_edge(:a, :b, label: :uses)
...> g = Graph.add_edge(g, :a, :b, label: :contains)
...> Graph.edges(g, :a, :b)
[%Graph.Edge{v1: :a, v2: :b, label: :contains}, %Graph.Edge{v1: :a, v2: :b, label: :uses}]
"""
@spec edges(t, vertex, vertex) :: [Edge.t()]
def edges(%__MODULE__{type: type, edges: meta, vertex_identifier: vertex_identifier}, v1, v2) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
edge_key <- {v1_id, v2_id},
edge_meta <- Map.get(meta, edge_key, %{}) do
case type do
:directed ->
edge_list(v1, v2, edge_meta, type)
:undirected ->
edge_meta2 = Map.get(meta, {v2_id, v1_id}, %{})
merged_meta = Map.merge(edge_meta, edge_meta2)
edge_list(v1, v2, merged_meta, type)
end
end
end
defp edge_list(v1, v2, edge_meta, :undirected) do
for {label, weight} <- edge_meta do
if v1 > v2 do
Edge.new(v2, v1, label: label, weight: weight)
else
Edge.new(v1, v2, label: label, weight: weight)
end
end
end
defp edge_list(v1, v2, edge_meta, _) do
for {label, weight} <- edge_meta do
Edge.new(v1, v2, label: label, weight: weight)
end
end
@doc """
Get an Edge struct for a specific vertex pair, or vertex pair + label.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :contains}, {:a, :b, label: :uses}])
...> Graph.edge(g, :b, :a)
nil
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :contains}, {:a, :b, label: :uses}])
...> Graph.edge(g, :a, :b)
%Graph.Edge{v1: :a, v2: :b}
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :contains}, {:a, :b, label: :uses}])
...> Graph.edge(g, :a, :b, :contains)
%Graph.Edge{v1: :a, v2: :b, label: :contains}
iex> g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:a, :b, label: :contains}, {:a, :b, label: :uses}])
...> Graph.edge(g, :a, :b, :contains)
%Graph.Edge{v1: :a, v2: :b, label: :contains}
"""
@spec edge(t, vertex, vertex) :: Edge.t() | nil
@spec edge(t, vertex, vertex, label) :: Edge.t() | nil
def edge(%__MODULE__{} = g, v1, v2) do
edge(g, v1, v2, nil)
end
def edge(%__MODULE__{type: :undirected} = g, v1, v2, label) do
if v1 > v2 do
do_edge(g, v2, v1, label)
else
do_edge(g, v1, v2, label)
end
end
def edge(%__MODULE__{} = g, v1, v2, label) do
do_edge(g, v1, v2, label)
end
defp do_edge(%__MODULE__{edges: meta, vertex_identifier: vertex_identifier}, v1, v2, label) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
edge_key <- {v1_id, v2_id},
{:ok, edge_meta} <- Map.fetch(meta, edge_key),
{:ok, weight} <- Map.fetch(edge_meta, label) do
Edge.new(v1, v2, label: label, weight: weight)
else
_ ->
nil
end
end
@doc """
Returns a list of all the vertices in the graph.
NOTE: You should be careful when using this on large graphs, as the list it produces
contains every vertex on the graph. I have not yet verified whether Erlang ensures that
they are a shared reference with the original, or copies, but if the latter it could result
in running out of memory if the graph is too large.
## Example
iex> g = Graph.new |> Graph.add_vertex(:a) |> Graph.add_vertex(:b)
...> Graph.vertices(g)
[:a, :b]
"""
@spec vertices(t) :: [vertex]
def vertices(%__MODULE__{vertices: vs}) do
Map.values(vs)
end
@doc """
Returns true if the given vertex exists in the graph. Otherwise false.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b])
...> Graph.has_vertex?(g, :a)
true
iex> g = Graph.new |> Graph.add_vertices([:a, :b])
...> Graph.has_vertex?(g, :c)
false
"""
@spec has_vertex?(t, vertex) :: boolean
def has_vertex?(%__MODULE__{vertices: vs, vertex_identifier: vertex_identifier}, v) do
v_id = vertex_identifier.(v)
Map.has_key?(vs, v_id)
end
@doc """
Returns the labels for the given vertex.
If no labels were assigned, it returns [].
## Example
iex> g = Graph.new |> Graph.add_vertex(:a) |> Graph.label_vertex(:a, :my_label)
...> Graph.vertex_labels(g, :a)
[:my_label]
"""
@spec vertex_labels(t, vertex) :: [term]
def vertex_labels(%__MODULE__{vertex_labels: labels, vertex_identifier: vertex_identifier}, v) do
with v1_id <- vertex_identifier.(v),
true <- Map.has_key?(labels, v1_id) do
Map.get(labels, v1_id)
else
_ -> []
end
end
@doc """
Adds a new vertex to the graph. If the vertex is already present in the graph, the add is a no-op.
You can provide optional labels for the vertex, aside from the variety of uses this has for working
with graphs, labels will also be used when exporting a graph in DOT format.
## Example
iex> g = Graph.new |> Graph.add_vertex(:a, :mylabel) |> Graph.add_vertex(:a)
...> [:a] = Graph.vertices(g)
...> Graph.vertex_labels(g, :a)
[:mylabel]
iex> g = Graph.new |> Graph.add_vertex(:a, [:mylabel, :other])
...> Graph.vertex_labels(g, :a)
[:mylabel, :other]
"""
@spec add_vertex(t, vertex, label) :: t
def add_vertex(g, v, labels \\ [])
def add_vertex(
%__MODULE__{vertices: vs, vertex_labels: vl, vertex_identifier: vertex_identifier} = g,
v,
labels
)
when is_list(labels) do
id = vertex_identifier.(v)
case Map.get(vs, id) do
nil ->
%__MODULE__{g | vertices: Map.put(vs, id, v), vertex_labels: Map.put(vl, id, labels)}
_ ->
g
end
end
def add_vertex(g, v, label) do
add_vertex(g, v, [label])
end
@doc """
Like `add_vertex/2`, but takes a list of vertices to add to the graph.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :a])
...> Graph.vertices(g)
[:a, :b]
"""
@spec add_vertices(t, [vertex]) :: t
def add_vertices(%__MODULE__{} = g, vs) when is_list(vs) do
Enum.reduce(vs, g, &add_vertex(&2, &1))
end
@doc """
Updates the labels for the given vertex.
If no such vertex exists in the graph, `{:error, {:invalid_vertex, v}}` is returned.
## Example
iex> g = Graph.new |> Graph.add_vertex(:a, :foo)
...> [:foo] = Graph.vertex_labels(g, :a)
...> g = Graph.label_vertex(g, :a, :bar)
...> Graph.vertex_labels(g, :a)
[:foo, :bar]
iex> g = Graph.new |> Graph.add_vertex(:a)
...> g = Graph.label_vertex(g, :a, [:foo, :bar])
...> Graph.vertex_labels(g, :a)
[:foo, :bar]
"""
@spec label_vertex(t, vertex, term) :: t | {:error, {:invalid_vertex, vertex}}
def label_vertex(
%__MODULE__{vertices: vs, vertex_labels: labels, vertex_identifier: vertex_identifier} =
g,
v,
vlabels
)
when is_list(vlabels) do
with v_id <- vertex_identifier.(v),
true <- Map.has_key?(vs, v_id),
old_vlabels <- Map.get(labels, v_id),
new_vlabels <- old_vlabels ++ vlabels,
labels <- Map.put(labels, v_id, new_vlabels) do
%__MODULE__{g | vertex_labels: labels}
else
_ -> {:error, {:invalid_vertex, v}}
end
end
def label_vertex(g, v, vlabel) do
label_vertex(g, v, [vlabel])
end
@doc """
iex> graph = Graph.new |> Graph.add_vertex(:a, [:foo, :bar])
...> [:foo, :bar] = Graph.vertex_labels(graph, :a)
...> graph = Graph.remove_vertex_labels(graph, :a)
...> Graph.vertex_labels(graph, :a)
[]
iex> graph = Graph.new |> Graph.add_vertex(:a, [:foo, :bar])
...> [:foo, :bar] = Graph.vertex_labels(graph, :a)
...> Graph.remove_vertex_labels(graph, :b)
{:error, {:invalid_vertex, :b}}
"""
@spec remove_vertex_labels(t, vertex) :: t | {:error, {:invalid_vertex, vertex}}
def remove_vertex_labels(
%__MODULE__{
vertices: vertices,
vertex_labels: vertex_labels,
vertex_identifier: vertex_identifier
} = graph,
vertex
) do
with vertex_id <- vertex_identifier.(vertex),
true <- Map.has_key?(vertices, vertex_id),
labels <- Map.put(vertex_labels, vertex_id, []) do
%__MODULE__{graph | vertex_labels: labels}
else
_ -> {:error, {:invalid_vertex, vertex}}
end
end
@doc """
Replaces `vertex` with `new_vertex` in the graph.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:b, :c}, {:c, :a}, {:c, :d}])
...> [:a, :b, :c, :d] = Graph.vertices(g)
...> g = Graph.replace_vertex(g, :a, :e)
...> [:b, :c, :d, :e] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :b, v2: :c}, %Graph.Edge{v1: :c, v2: :d}, %Graph.Edge{v1: :c, v2: :e}, %Graph.Edge{v1: :e, v2: :b}]
"""
@spec replace_vertex(t, vertex, vertex) :: t | {:error, :no_such_vertex}
def replace_vertex(
%__MODULE__{out_edges: oe, in_edges: ie, edges: em, vertex_identifier: vertex_identifier} =
g,
v,
rv
) do
vs = g.vertices
labels = g.vertex_labels
with v_id <- vertex_identifier.(v),
true <- Map.has_key?(vs, v_id),
rv_id <- vertex_identifier.(rv),
vs <- Map.put(Map.delete(vs, v_id), rv_id, rv) do
oe =
for {from_id, to} = e <- oe, into: %{} do
fid = if from_id == v_id, do: rv_id, else: from_id
cond do
MapSet.member?(to, v_id) ->
{fid, MapSet.put(MapSet.delete(to, v_id), rv_id)}
from_id != fid ->
{fid, to}
:else ->
e
end
end
ie =
for {to_id, from} = e <- ie, into: %{} do
tid = if to_id == v_id, do: rv_id, else: to_id
cond do
MapSet.member?(from, v_id) ->
{tid, MapSet.put(MapSet.delete(from, v_id), rv_id)}
to_id != tid ->
{tid, from}
:else ->
e
end
end
meta =
em
|> Stream.map(fn
{{^v_id, ^v_id}, meta} -> {{rv_id, rv_id}, meta}
{{^v_id, v2_id}, meta} -> {{rv_id, v2_id}, meta}
{{v1_id, ^v_id}, meta} -> {{v1_id, rv_id}, meta}
edge -> edge
end)
|> Enum.into(%{})
labels =
case Map.get(labels, v_id) do
nil -> labels
label -> Map.put(Map.delete(labels, v_id), rv_id, label)
end
%__MODULE__{
g
| vertices: vs,
out_edges: oe,
in_edges: ie,
edges: meta,
vertex_labels: labels
}
else
_ -> {:error, :no_such_vertex}
end
end
@doc """
Removes a vertex from the graph, as well as any edges which refer to that vertex. If the vertex does
not exist in the graph, it is a no-op.
## Example
iex> g = Graph.new |> Graph.add_vertex(:a) |> Graph.add_vertex(:b) |> Graph.add_edge(:a, :b)
...> [:a, :b] = Graph.vertices(g)
...> [%Graph.Edge{v1: :a, v2: :b}] = Graph.edges(g)
...> g = Graph.delete_vertex(g, :b)
...> [:a] = Graph.vertices(g)
...> Graph.edges(g)
[]
"""
@spec delete_vertex(t, vertex) :: t
def delete_vertex(
%__MODULE__{out_edges: oe, in_edges: ie, edges: em, vertex_identifier: vertex_identifier} =
g,
v
) do
vs = g.vertices
ls = g.vertex_labels
with v_id <- vertex_identifier.(v),
true <- Map.has_key?(vs, v_id),
oe <- Map.delete(oe, v_id),
ie <- Map.delete(ie, v_id),
vs <- Map.delete(vs, v_id),
ls <- Map.delete(ls, v_id) do
oe = for {id, ns} <- oe, do: {id, MapSet.delete(ns, v_id)}, into: %{}
ie = for {id, ns} <- ie, do: {id, MapSet.delete(ns, v_id)}, into: %{}
em = for {{id1, id2}, _} = e <- em, v_id != id1 && v_id != id2, do: e, into: %{}
%__MODULE__{g | vertices: vs, vertex_labels: ls, out_edges: oe, in_edges: ie, edges: em}
else
_ -> g
end
end
@doc """
Like `delete_vertex/2`, but takes a list of vertices to delete from the graph.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.delete_vertices([:a, :b])
...> Graph.vertices(g)
[:c]
"""
@spec delete_vertices(t, [vertex]) :: t
def delete_vertices(%__MODULE__{} = g, vs) when is_list(vs) do
Enum.reduce(vs, g, &delete_vertex(&2, &1))
end
@doc """
Like `add_edge/3` or `add_edge/4`, but takes a `Graph.Edge` struct created with
`Graph.Edge.new/2` or `Graph.Edge.new/3`.
## Example
iex> g = Graph.new |> Graph.add_edge(Graph.Edge.new(:a, :b))
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b}]
"""
@spec add_edge(t, Edge.t()) :: t
def add_edge(%__MODULE__{} = g, %Edge{v1: v1, v2: v2, label: label, weight: weight}) do
add_edge(g, v1, v2, label: label, weight: weight)
end
@doc """
Adds an edge connecting `v1` to `v2`. If either `v1` or `v2` do not exist in the graph,
they are automatically added. Adding the same edge more than once does not create multiple edges,
each edge is only ever stored once.
Edges have a default weight of 1, and an empty (nil) label. You can change this by passing options
to this function, as shown below.
## Example
iex> g = Graph.new |> Graph.add_edge(:a, :b)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: nil, weight: 1}]
iex> g = Graph.new |> Graph.add_edge(:a, :b, label: :foo, weight: 2)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :foo, weight: 2}]
"""
@spec add_edge(t, vertex, vertex) :: t
@spec add_edge(t, vertex, vertex, Edge.edge_opts()) :: t | no_return
def add_edge(g, v1, v2, opts \\ [])
def add_edge(%__MODULE__{type: :undirected} = g, v1, v2, opts) when is_list(opts) do
if v1 > v2 do
do_add_edge(g, v2, v1, opts)
else
do_add_edge(g, v1, v2, opts)
end
end
def add_edge(%__MODULE__{} = g, v1, v2, opts) when is_list(opts) do
do_add_edge(g, v1, v2, opts)
end
defp do_add_edge(%__MODULE__{vertex_identifier: vertex_identifier} = g, v1, v2, opts) do
v1_id = vertex_identifier.(v1)
v2_id = vertex_identifier.(v2)
%__MODULE__{in_edges: ie, out_edges: oe, edges: meta} =
g = g |> add_vertex(v1) |> add_vertex(v2)
out_neighbors =
case Map.get(oe, v1_id) do
nil -> MapSet.new([v2_id])
ms -> MapSet.put(ms, v2_id)
end
in_neighbors =
case Map.get(ie, v2_id) do
nil -> MapSet.new([v1_id])
ms -> MapSet.put(ms, v1_id)
end
edge_meta = Map.get(meta, {v1_id, v2_id}, %{})
{label, weight} = Edge.options_to_meta(opts)
edge_meta = Map.put(edge_meta, label, weight)
%__MODULE__{
g
| in_edges: Map.put(ie, v2_id, in_neighbors),
out_edges: Map.put(oe, v1_id, out_neighbors),
edges: Map.put(meta, {v1_id, v2_id}, edge_meta)
}
end
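# Storage sketch: after add_edge(g, :a, :b) and then
# add_edge(g, :a, :b, label: :foo, weight: 2), the edges map holds a single
# {v1_id, v2_id} key whose metadata is %{nil => 1, :foo => 2}: two logical
# edges over one stored vertex pair.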
@doc """
This function is like `add_edge/3`, but for multiple edges at once, it also accepts edge specifications
in a few different ways to make it easy to generate graphs succinctly.
Edges must be provided as a list of `Edge` structs, `{vertex, vertex}` pairs, or
`{vertex, vertex, edge_opts :: [label: term, weight: integer]}`.
See the docs for `Graph.Edge.new/2` or `Graph.Edge.new/3` for more info on creating Edge structs, and
`add_edge/3` for information on edge options.
If an invalid edge specification is provided, raises `Graph.EdgeSpecificationError`.
## Examples
iex> alias Graph.Edge
...> edges = [Edge.new(:a, :b), Edge.new(:b, :c, weight: 2)]
...> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edges(edges)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b}, %Graph.Edge{v1: :b, v2: :c, weight: 2}]
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:a, :b, label: :foo, weight: 2}])
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :foo, weight: 2}, %Graph.Edge{v1: :a, v2: :b}]
iex> Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edges([:a, :b])
** (Graph.EdgeSpecificationError) Expected a valid edge specification, but got: :a
"""
@spec add_edges(t, [Edge.t()] | Enumerable.t()) :: t | no_return
def add_edges(%__MODULE__{} = g, es) do
Enum.reduce(es, g, fn
%Edge{} = edge, acc ->
add_edge(acc, edge)
{v1, v2}, acc ->
add_edge(acc, v1, v2)
{v1, v2, opts}, acc when is_list(opts) ->
add_edge(acc, v1, v2, opts)
bad_edge, _acc ->
raise Graph.EdgeSpecificationError, bad_edge
end)
end
@doc """
Splits the edges between `v1` and `v2` by inserting a new vertex, `v3`, deleting
the edges between `v1` and `v2`, and inserting new edges from `v1` to `v3` and from
`v3` to `v2`.
The resulting edges from the split will share the same weight and label as the old edges.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :c]) |> Graph.add_edge(:a, :c, weight: 2)
...> g = Graph.split_edge(g, :a, :c, :b)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, weight: 2}, %Graph.Edge{v1: :b, v2: :c, weight: 2}]
iex> g = Graph.new(type: :undirected) |> Graph.add_vertices([:a, :c]) |> Graph.add_edge(:a, :c, weight: 2)
...> g = Graph.split_edge(g, :a, :c, :b)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, weight: 2}, %Graph.Edge{v1: :b, v2: :c, weight: 2}]
"""
@spec split_edge(t, vertex, vertex, vertex) :: t | {:error, :no_such_edge}
def split_edge(%__MODULE__{type: :undirected} = g, v1, v2, v3) do
if v1 > v2 do
do_split_edge(g, v2, v1, v3)
else
do_split_edge(g, v1, v2, v3)
end
end
def split_edge(%__MODULE__{} = g, v1, v2, v3) do
do_split_edge(g, v1, v2, v3)
end
defp do_split_edge(
%__MODULE__{in_edges: ie, out_edges: oe, edges: em, vertex_identifier: vertex_identifier} =
g,
v1,
v2,
v3
) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
{:ok, v1_out} <- Map.fetch(oe, v1_id),
{:ok, v2_in} <- Map.fetch(ie, v2_id),
true <- MapSet.member?(v1_out, v2_id),
meta <- Map.get(em, {v1_id, v2_id}),
v1_out <- MapSet.delete(v1_out, v2_id),
v2_in <- MapSet.delete(v2_in, v1_id) do
g = %__MODULE__{
g
| in_edges: Map.put(ie, v2_id, v2_in),
out_edges: Map.put(oe, v1_id, v1_out)
}
g = add_vertex(g, v3)
Enum.reduce(meta, g, fn {label, weight}, acc ->
acc
|> add_edge(v1, v3, label: label, weight: weight)
|> add_edge(v3, v2, label: label, weight: weight)
end)
else
_ -> {:error, :no_such_edge}
end
end
@doc """
Given two vertices, this function updates the metadata (weight/label) for the unlabelled
edge between those two vertices. If no unlabelled edge exists between them, an error
tuple is returned. If you set a label, the unlabelled edge will be replaced with a new labelled
edge.
## Example
iex> g = Graph.new |> Graph.add_edge(:a, :b) |> Graph.add_edge(:a, :b, label: :bar)
...> %Graph{} = g = Graph.update_edge(g, :a, :b, weight: 2, label: :foo)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :bar}, %Graph.Edge{v1: :a, v2: :b, label: :foo, weight: 2}]
"""
@spec update_edge(t, vertex, vertex, Edge.edge_opts()) :: t | {:error, :no_such_edge}
def update_edge(%__MODULE__{} = g, v1, v2, opts) when is_list(opts) do
update_labelled_edge(g, v1, v2, nil, opts)
end
@doc """
Like `update_edge/4`, but requires you to specify the labelled edge to update.
The implementation of `update_edge/4` is actually `update_labelled_edge(g, v1, v2, nil, opts)`.
## Example
iex> g = Graph.new |> Graph.add_edge(:a, :b) |> Graph.add_edge(:a, :b, label: :bar)
...> %Graph{} = g = Graph.update_labelled_edge(g, :a, :b, :bar, weight: 2, label: :foo)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :foo, weight: 2}, %Graph.Edge{v1: :a, v2: :b}]
iex> g = Graph.new(type: :undirected) |> Graph.add_edge(:a, :b) |> Graph.add_edge(:a, :b, label: :bar)
...> %Graph{} = g = Graph.update_labelled_edge(g, :a, :b, :bar, weight: 2, label: :foo)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :foo, weight: 2}, %Graph.Edge{v1: :a, v2: :b}]
"""
@spec update_labelled_edge(t, vertex, vertex, label, Edge.edge_opts()) ::
t | {:error, :no_such_edge}
def update_labelled_edge(%__MODULE__{type: :undirected} = g, v1, v2, old_label, opts)
when is_list(opts) do
if v1 > v2 do
do_update_labelled_edge(g, v2, v1, old_label, opts)
else
do_update_labelled_edge(g, v1, v2, old_label, opts)
end
end
def update_labelled_edge(%__MODULE__{} = g, v1, v2, old_label, opts) when is_list(opts) do
do_update_labelled_edge(g, v1, v2, old_label, opts)
end
defp do_update_labelled_edge(
%__MODULE__{edges: em, vertex_identifier: vertex_identifier} = g,
v1,
v2,
old_label,
opts
) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
edge_key <- {v1_id, v2_id},
{:ok, meta} <- Map.fetch(em, edge_key),
{:ok, _} <- Map.fetch(meta, old_label),
{new_label, new_weight} <- Edge.options_to_meta(opts) do
case new_label do
^old_label ->
new_meta = Map.put(meta, old_label, new_weight)
%__MODULE__{g | edges: Map.put(em, edge_key, new_meta)}
nil ->
new_meta = Map.put(meta, old_label, new_weight)
%__MODULE__{g | edges: Map.put(em, edge_key, new_meta)}
_ ->
new_meta = Map.put(Map.delete(meta, old_label), new_label, new_weight)
%__MODULE__{g | edges: Map.put(em, edge_key, new_meta)}
end
else
_ ->
{:error, :no_such_edge}
end
end
@doc """
Removes all edges connecting `v1` to `v2`, regardless of label.
If no such edge exists, the graph is returned unmodified.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}])
...> g = Graph.delete_edge(g, :a, :b)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[]
iex> g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}])
...> g = Graph.delete_edge(g, :a, :b)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[]
"""
@spec delete_edge(t, vertex, vertex) :: t
def delete_edge(%__MODULE__{type: :undirected} = g, v1, v2) do
if v1 > v2 do
do_delete_edge(g, v2, v1)
else
do_delete_edge(g, v1, v2)
end
end
def delete_edge(%__MODULE__{} = g, v1, v2) do
do_delete_edge(g, v1, v2)
end
defp do_delete_edge(
%__MODULE__{
in_edges: ie,
out_edges: oe,
edges: meta,
vertex_identifier: vertex_identifier
} = g,
v1,
v2
) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
edge_key <- {v1_id, v2_id},
{:ok, v1_out} <- Map.fetch(oe, v1_id),
{:ok, v2_in} <- Map.fetch(ie, v2_id) do
v1_out = MapSet.delete(v1_out, v2_id)
v2_in = MapSet.delete(v2_in, v1_id)
meta = Map.delete(meta, edge_key)
%__MODULE__{
g
| in_edges: Map.put(ie, v2_id, v2_in),
out_edges: Map.put(oe, v1_id, v1_out),
edges: meta
}
else
_ -> g
end
end
@doc """
Removes an edge connecting `v1` to `v2`. A label can be specified to disambiguate the
specific edge you wish to delete, if not provided, the unlabelled edge, if one exists,
will be removed.
If no such edge exists, the graph is returned unmodified.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}])
...> g = Graph.delete_edge(g, :a, :b, nil)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :foo}]
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}])
...> g = Graph.delete_edge(g, :a, :b, :foo)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: nil}]
iex> g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}])
...> g = Graph.delete_edge(g, :a, :b, :foo)
...> [:a, :b] = Graph.vertices(g)
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: nil}]
"""
@spec delete_edge(t, vertex, vertex, label) :: t
def delete_edge(%__MODULE__{type: :undirected} = g, v1, v2, label) do
if v1 > v2 do
do_delete_edge(g, v2, v1, label)
else
do_delete_edge(g, v1, v2, label)
end
end
def delete_edge(%__MODULE__{} = g, v1, v2, label) do
do_delete_edge(g, v1, v2, label)
end
defp do_delete_edge(
%__MODULE__{
in_edges: ie,
out_edges: oe,
edges: meta,
vertex_identifier: vertex_identifier
} = g,
v1,
v2,
label
) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
edge_key <- {v1_id, v2_id},
{:ok, v1_out} <- Map.fetch(oe, v1_id),
{:ok, v2_in} <- Map.fetch(ie, v2_id),
{:ok, edge_meta} <- Map.fetch(meta, edge_key),
{:ok, _} <- Map.fetch(edge_meta, label) do
edge_meta = Map.delete(edge_meta, label)
case map_size(edge_meta) do
0 ->
v1_out = MapSet.delete(v1_out, v2_id)
v2_in = MapSet.delete(v2_in, v1_id)
meta = Map.delete(meta, edge_key)
%__MODULE__{
g
| in_edges: Map.put(ie, v2_id, v2_in),
out_edges: Map.put(oe, v1_id, v1_out),
edges: meta
}
_ ->
meta = Map.put(meta, edge_key, edge_meta)
%__MODULE__{g | edges: meta}
end
else
_ -> g
end
end
@doc """
Like `delete_edge/3`, but takes a list of edge specifications, and deletes the corresponding
edges from the graph, if they exist.
Edge specifications can be `Edge` structs, `{vertex, vertex}` pairs, or `{vertex, vertex, label: label}`
triplets. An invalid specification will cause `Graph.EdgeSpecificationError` to be raised.
## Examples
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b)
...> g = Graph.delete_edges(g, [{:a, :b}])
...> Graph.edges(g)
[]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b, label: :foo)
...> g = Graph.delete_edges(g, [{:a, :b}])
...> Graph.edges(g)
[]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b, label: :foo)
...> g = Graph.delete_edges(g, [{:a, :b, label: :bar}])
...> Graph.edges(g)
[%Graph.Edge{v1: :a, v2: :b, label: :foo}]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b, label: :foo)
...> g = Graph.delete_edges(g, [{:a, :b, label: :foo}])
...> Graph.edges(g)
[]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b)
...> Graph.delete_edges(g, [:a])
** (Graph.EdgeSpecificationError) Expected a valid edge specification, but got: :a
"""
  @spec delete_edges(t, [{vertex, vertex} | {vertex, vertex, [{:label, label}]} | Edge.t()]) :: t | no_return
def delete_edges(%__MODULE__{} = g, es) when is_list(es) do
Enum.reduce(es, g, fn
{v1, v2}, acc ->
delete_edge(acc, v1, v2)
{v1, v2, [{:label, label}]}, acc ->
delete_edge(acc, v1, v2, label)
%Edge{v1: v1, v2: v2, label: label}, acc ->
delete_edge(acc, v1, v2, label)
bad_edge, _acc ->
raise EdgeSpecificationError, bad_edge
end)
end
@doc """
This function can be used to remove all edges between `v1` and `v2`. This is useful if
you are defining multiple edges between vertices to represent different relationships, but
want to remove them all as if they are a single unit.
## Examples
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:b, :a}])
...> g = Graph.delete_edges(g, :a, :b)
...> Graph.edges(g)
[%Graph.Edge{v1: :b, v2: :a}]
iex> g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:b, :a}])
...> g = Graph.delete_edges(g, :a, :b)
...> Graph.edges(g)
[]
"""
@spec delete_edges(t, vertex, vertex) :: t
def delete_edges(%__MODULE__{type: :undirected} = g, v1, v2) do
if v1 > v2 do
do_delete_edges(g, v2, v1)
else
do_delete_edges(g, v1, v2)
end
end
def delete_edges(%__MODULE__{} = g, v1, v2) do
do_delete_edges(g, v1, v2)
end
defp do_delete_edges(
%__MODULE__{
in_edges: ie,
out_edges: oe,
edges: meta,
vertex_identifier: vertex_identifier
} = g,
v1,
v2
) do
with v1_id <- vertex_identifier.(v1),
v2_id <- vertex_identifier.(v2),
edge_key <- {v1_id, v2_id},
true <- Map.has_key?(meta, edge_key),
v1_out <- Map.get(oe, v1_id),
v2_in <- Map.get(ie, v2_id) do
meta = Map.delete(meta, edge_key)
v1_out = MapSet.delete(v1_out, v2_id)
v2_in = MapSet.delete(v2_in, v1_id)
%__MODULE__{
g
| out_edges: Map.put(oe, v1_id, v1_out),
in_edges: Map.put(ie, v2_id, v2_in),
edges: meta
}
else
_ -> g
end
end
@doc """
The transposition of a graph is another graph with the direction of all the edges reversed.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b) |> Graph.add_edge(:b, :c)
...> g |> Graph.transpose |> Graph.edges
[%Graph.Edge{v1: :b, v2: :a}, %Graph.Edge{v1: :c, v2: :b}]
"""
@spec transpose(t) :: t
def transpose(%__MODULE__{in_edges: ie, out_edges: oe, edges: meta} = g) do
meta2 =
meta
|> Enum.reduce(%{}, fn {{v1, v2}, meta}, acc -> Map.put(acc, {v2, v1}, meta) end)
%__MODULE__{g | in_edges: oe, out_edges: ie, edges: meta2}
end
@doc """
Returns a topological ordering of the vertices of graph `g`, if such an ordering exists, otherwise it returns false.
For each vertex in the returned list, no out-neighbors occur earlier in the list.
Multiple edges between two vertices are considered a single edge for purposes of this sort.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}])
...> Graph.topsort(g)
[:a, :b, :c, :d]
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}, {:c, :a}])
...> Graph.topsort(g)
false
"""
  @spec topsort(t) :: [vertex] | false
def topsort(%__MODULE__{type: :undirected}), do: false
def topsort(%__MODULE__{} = g), do: Graph.Directed.topsort(g)
@doc """
Returns a list of connected components, where each component is a list of vertices.
A *connected component* is a maximal subgraph such that there is a path between each pair of vertices,
considering all edges undirected.
A *subgraph* is a graph whose vertices and edges are a subset of the vertices and edges of the source graph.
A *maximal subgraph* is a subgraph with property `P` where all other subgraphs which contain the same vertices
do not have that same property `P`.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}, {:c, :a}])
...> Graph.components(g)
[[:d, :b, :c, :a]]
"""
@spec components(t) :: [[vertex]]
defdelegate components(g), to: Graph.Directed
@doc """
Returns a list of strongly connected components, where each component is a list of vertices.
A *strongly connected component* is a maximal subgraph such that there is a path between each pair of vertices.
See `components/1` for the definitions of *subgraph* and *maximal subgraph* if you are unfamiliar with the
terminology.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}, {:c, :a}])
...> Graph.strong_components(g)
[[:d], [:b, :c, :a]]
"""
@spec strong_components(t) :: [[vertex]]
defdelegate strong_components(g), to: Graph.Directed
@doc """
Returns an unsorted list of vertices from the graph, such that for each vertex in the list (call it `v`),
there is a path in the graph from some vertex of `vs` to `v`.
As paths of length zero are allowed, the vertices of `vs` are also included in the returned list.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}])
...> Graph.reachable(g, [:a])
[:d, :c, :b, :a]
"""
  @spec reachable(t, [vertex]) :: [vertex]
defdelegate reachable(g, vs), to: Graph.Directed
@doc """
Returns an unsorted list of vertices from the graph, such that for each vertex in the list (call it `v`),
there is a path in the graph of length one or more from some vertex of `vs` to `v`.
As a consequence, only those vertices of `vs` that are included in some cycle are returned.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}])
...> Graph.reachable_neighbors(g, [:a])
[:d, :c, :b]
"""
  @spec reachable_neighbors(t, [vertex]) :: [vertex]
defdelegate reachable_neighbors(g, vs), to: Graph.Directed
@doc """
Returns an unsorted list of vertices from the graph, such that for each vertex in the list (call it `v`),
there is a path from `v` to some vertex of `vs`.
As paths of length zero are allowed, the vertices of `vs` are also included in the returned list.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :d}])
...> Graph.reaching(g, [:d])
[:b, :a, :c, :d]
"""
  @spec reaching(t, [vertex]) :: [vertex]
defdelegate reaching(g, vs), to: Graph.Directed
@doc """
Returns an unsorted list of vertices from the graph, such that for each vertex in the list (call it `v`),
there is a path of length one or more from `v` to some vertex of `vs`.
As a consequence, only those vertices of `vs` that are included in some cycle are returned.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d])
...> g = Graph.add_edges(g, [{:a, :b}, {:a, :c}, {:b, :c}, {:c, :a}, {:b, :d}])
...> Graph.reaching_neighbors(g, [:b])
[:b, :c, :a]
"""
  @spec reaching_neighbors(t, [vertex]) :: [vertex]
defdelegate reaching_neighbors(g, vs), to: Graph.Directed
@doc """
Returns all vertices of graph `g`. The order is given by a depth-first traversal of the graph,
collecting visited vertices in preorder.
## Example
Our example code constructs a graph which looks like so:
       :a
        \
         :b
        / \
      :c   :d
      /
    :e
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d, :e])
...> g = Graph.add_edges(g, [{:a, :b}, {:b, :c}, {:b, :d}, {:c, :e}])
...> Graph.preorder(g)
[:a, :b, :c, :e, :d]
"""
@spec preorder(t) :: [vertex]
defdelegate preorder(g), to: Graph.Directed
@doc """
Returns all vertices of graph `g`. The order is given by a depth-first traversal of the graph,
collecting visited vertices in postorder. More precisely, the vertices visited while searching from an
arbitrarily chosen vertex are collected in postorder, and all those collected vertices are placed before
the subsequently visited vertices.
## Example
Our example code constructs a graph which looks like so:
       :a
        \
         :b
        / \
      :c   :d
      /
    :e
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c, :d, :e])
...> g = Graph.add_edges(g, [{:a, :b}, {:b, :c}, {:b, :d}, {:c, :e}])
...> Graph.postorder(g)
[:e, :c, :d, :b, :a]
"""
@spec postorder(t) :: [vertex]
defdelegate postorder(g), to: Graph.Directed
@doc """
Returns a list of vertices from graph `g` which are included in a loop, where a loop is a cycle of length 1.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :a)
...> Graph.loop_vertices(g)
[:a]
"""
@spec loop_vertices(t) :: [vertex]
defdelegate loop_vertices(g), to: Graph.Directed
@doc """
Detects all maximal cliques in the provided graph.
Returns a list of cliques, where each clique is a list of vertices in the clique.
A clique is a subset `vs` of the vertices in the given graph, which together form a complete graph;
or put another way, every vertex in `vs` is connected to all other vertices in `vs`.
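  ## Example

  A sketch of typical usage on a small undirected graph; the ordering of
  cliques, and of vertices within each clique, may vary:

      g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :c}, {:a, :c}, {:c, :d}])
      Graph.cliques(g)
      # => e.g. [[:a, :b, :c], [:c, :d]]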
"""
@spec cliques(t) :: [[vertex]]
def cliques(%__MODULE__{type: :directed}) do
raise "cliques/1 can not be called on a directed graph"
end
def cliques(%__MODULE__{vertex_identifier: vertex_identifier} = g) do
# We do vertex ordering as described in Bron-Kerbosch
# to improve the worst-case performance of the algorithm
p =
g
|> k_core_components()
|> Enum.sort_by(fn {k, _} -> k end, fn a, b -> a >= b end)
|> Stream.flat_map(fn {_, vs} -> vs end)
|> Enum.map(&vertex_identifier.(&1))
g
|> detect_cliques(_r = [], p, _x = [], _acc = [])
|> Enum.reverse()
end
@doc """
  Detects all maximal cliques of size `k`.
Returns a list of cliques, where each clique is a list of vertices in the clique.
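  ## Example

  A sketch of typical usage; vertex ordering within cliques may vary:

      g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :c}, {:a, :c}])
      Graph.k_cliques(g, 3)
      # => e.g. [[:a, :b, :c]]
      Graph.k_cliques(g, 2)
      # => []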
"""
@spec k_cliques(t, non_neg_integer) :: [[vertex]]
def k_cliques(%__MODULE__{type: :directed}, _k) do
raise "k_cliques/2 can not be called on a directed graph"
end
def k_cliques(%__MODULE__{} = g, k) when is_integer(k) and k >= 0 do
g
|> cliques()
|> Enum.filter(fn clique -> length(clique) == k end)
end
# r is a maximal clique
defp detect_cliques(%__MODULE__{vertices: vs}, r, [], [], acc) do
mapped =
r
|> Stream.map(&Map.get(vs, &1))
|> Enum.reverse()
[mapped | acc]
end
# r is a subset of another clique
defp detect_cliques(_g, _r, [], _x, acc), do: acc
defp detect_cliques(%__MODULE__{in_edges: ie, out_edges: oe} = g, r, [pivot | p], x, acc) do
n = MapSet.union(Map.get(ie, pivot, MapSet.new()), Map.get(oe, pivot, MapSet.new()))
p2 = Enum.filter(p, &Enum.member?(n, &1))
x2 = Enum.filter(x, &Enum.member?(n, &1))
acc2 = detect_cliques(g, [pivot | r], p2, x2, acc)
detect_cliques(g, r, p, [pivot | x], acc2)
end
@doc """
Calculates the k-core for a given graph and value of `k`.
  A k-core of the graph is a maximal subgraph of `g` in which every vertex
  has a degree of at least `k`. This function returns a new `Graph` which is a subgraph
of `g` containing all vertices which have a coreness >= the desired value of `k`.
If there is no k-core in the graph for the provided value of `k`, an empty `Graph` is returned.
  If a negative or non-integer value is provided for `k`, a RuntimeError will be raised.
NOTE: For performance reasons, k-core calculations make use of ETS. If you are
sensitive to the number of concurrent ETS tables running in your system, you should
  be aware of its usage here. Two tables are used, and they are automatically cleaned
up when this function returns.
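  ## Example

  A sketch of typical usage; vertex ordering may vary:

      g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :c}, {:a, :c}, {:c, :d}])
      g |> Graph.k_core(2) |> Graph.vertices()
      # => e.g. [:a, :b, :c] (:d has degree 1, so it is excluded)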
"""
@spec k_core(t, k :: non_neg_integer) :: t
def k_core(%__MODULE__{} = g, k) when is_integer(k) and k >= 0 do
vs =
g
|> decompose_cores()
|> Stream.filter(fn {_, vk} -> vk >= k end)
|> Enum.map(fn {v, _k} -> v end)
Graph.subgraph(g, vs)
end
def k_core(%__MODULE__{}, k) do
raise "`k` must be a positive number, got `#{inspect(k)}`"
end
@doc """
Groups all vertices by their k-coreness into a single map.
More commonly you will want a specific k-core, in particular the degeneracy core,
for which there are other functions in the API you can use. However if you have
a need to determine which k-core each vertex belongs to, this function can be used
to do just that.
As an example, you can construct the k-core for a given graph like so:
k_core_vertices =
g
|> Graph.k_core_components()
|> Stream.filter(fn {k, _} -> k >= desired_k end)
|> Enum.flat_map(fn {_, vs} -> vs end)
Graph.subgraph(g, k_core_vertices)
"""
@spec k_core_components(t) :: %{(k :: non_neg_integer) => [vertex]}
def k_core_components(%__MODULE__{} = g) do
res =
g
|> decompose_cores()
|> Enum.group_by(fn {_, k} -> k end, fn {v, _} -> v end)
if map_size(res) > 0 do
res
else
%{0 => []}
end
end
@doc """
Determines the k-degeneracy of the given graph.
The degeneracy of graph `g` is the maximum value of `k` for which a k-core
exists in graph `g`.
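  ## Example

  A sketch of typical usage:

      g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :c}, {:a, :c}, {:c, :d}])
      Graph.degeneracy(g)
      # => 2 (the triangle :a, :b, :c forms the 2-core; no 3-core exists)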
"""
@spec degeneracy(t) :: non_neg_integer
def degeneracy(%__MODULE__{} = g) do
{_, k} =
g
|> decompose_cores()
|> Enum.max_by(fn {_, k} -> k end, fn -> {nil, 0} end)
k
end
@doc """
Calculates the degeneracy core of a given graph.
The degeneracy core of a graph is the k-core of the graph where the
value of `k` is the degeneracy of the graph. The degeneracy of a graph
is the highest value of `k` which has a non-empty k-core in the graph.
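  ## Example

  A sketch of typical usage; vertex ordering may vary:

      g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :c}, {:a, :c}, {:c, :d}])
      g |> Graph.degeneracy_core() |> Graph.vertices()
      # => e.g. [:a, :b, :c]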
"""
@spec degeneracy_core(t) :: t
def degeneracy_core(%__MODULE__{} = g) do
{_, core} =
g
|> decompose_cores()
|> Enum.group_by(fn {_, k} -> k end, fn {v, _} -> v end)
|> Enum.max_by(fn {k, _} -> k end, fn -> {0, []} end)
Graph.subgraph(g, core)
end
@doc """
Calculates the k-coreness of vertex `v` in graph `g`.
The k-coreness of a vertex is defined as the maximum value of `k`
for which `v` is found in the corresponding k-core of graph `g`.
NOTE: This function decomposes all k-core components to determine the coreness
of a vertex - if you will be trying to determine the coreness of many vertices,
it is recommended to use `k_core_components/1` and then lookup the coreness of a vertex
by querying the resulting map.
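  ## Example

  A sketch of typical usage:

      g = Graph.new(type: :undirected) |> Graph.add_edges([{:a, :b}, {:b, :c}, {:a, :c}, {:c, :d}])
      Graph.coreness(g, :c)
      # => 2
      Graph.coreness(g, :d)
      # => 1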
"""
@spec coreness(t, vertex) :: non_neg_integer
def coreness(%__MODULE__{} = g, v) do
res =
g
|> decompose_cores()
|> Enum.find(fn
{^v, _} -> true
_ -> false
end)
case res do
{_, k} -> k
_ -> 0
end
end
# This produces a list of {v, k} where k is the largest k-core this vertex belongs to
defp decompose_cores(%__MODULE__{vertices: vs} = g) do
# Rules to remember
# - a k-core of a graph is a subgraph where each vertex has at least `k` neighbors in the subgraph
# - A k-core is not necessarily connected.
# - The core number for each vertex is the highest k-core it is a member of
# - A vertex in a k-core will be, by definition, in a (k-1)-core (cores are nested)
degrees = :ets.new(:k_cores, [:set, keypos: 1])
l = :ets.new(:k_cores_l, [:set, keypos: 1])
try do
# Since we are making many modifications to the graph as we work on it,
# it is more performant to store the list of vertices and their degree in ETS
# and work on it there. This is not strictly necessary, but makes the algorithm
# easier to read and is faster, so unless there is good reason to avoid ETS here
# I think it's a fair compromise.
for {_id, v} <- vs do
:ets.insert(degrees, {v, out_degree(g, v)})
end
decompose_cores(degrees, l, g, 1)
after
:ets.delete(degrees)
:ets.delete(l)
end
end
defp decompose_cores(degrees, l, g, k) do
case :ets.info(degrees, :size) do
0 ->
Enum.reverse(:ets.tab2list(l))
_ ->
# Select all v that have a degree less than `k`
case :ets.select(degrees, [{{:"$1", :"$2"}, [{:<, :"$2", k}], [:"$1"]}]) do
[] ->
decompose_cores(degrees, l, g, k + 1)
matches ->
for v <- matches do
:ets.delete(degrees, v)
for neighbor <- out_neighbors(g, v),
not :ets.member(l, neighbor) and v != neighbor do
:ets.update_counter(degrees, neighbor, {2, -1})
end
:ets.insert(l, {v, k - 1})
end
decompose_cores(degrees, l, g, k)
end
end
end
@doc """
Returns the degree of vertex `v` of graph `g`.
The degree of a vertex is the total number of edges containing that vertex.
For directed graphs this is the same as the sum of the in-degree and out-degree
of the given vertex. For undirected graphs, the in-degree and out-degree are always
the same.
## Example
iex> g = Graph.new(type: :undirected) |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b)
...> Graph.degree(g, :b)
1
iex> g = Graph.new() |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b)
...> Graph.degree(g, :b)
1
"""
@spec degree(t, vertex) :: non_neg_integer
def degree(%__MODULE__{type: :undirected} = g, v) do
in_degree(g, v)
end
def degree(%__MODULE__{} = g, v) do
in_degree(g, v) + out_degree(g, v)
end
@doc """
Returns the in-degree of vertex `v` of graph `g`.
The *in-degree* of a vertex is the number of edges directed inbound towards that vertex.
For undirected graphs, the in-degree and out-degree are always the same - the sum total
of all edges inbound or outbound from the vertex.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b)
...> Graph.in_degree(g, :b)
1
"""
  @spec in_degree(t, vertex) :: non_neg_integer
  def in_degree(
%__MODULE__{
type: :undirected,
in_edges: ie,
out_edges: oe,
edges: meta,
vertex_identifier: vertex_identifier
},
v
) do
v_id = vertex_identifier.(v)
v_in = Map.get(ie, v_id, MapSet.new())
v_out = Map.get(oe, v_id, MapSet.new())
v_all = MapSet.union(v_in, v_out)
Enum.reduce(v_all, 0, fn v1_id, sum ->
case Map.fetch(meta, {v1_id, v_id}) do
{:ok, edge_meta} ->
sum + map_size(edge_meta)
_ ->
case Map.fetch(meta, {v_id, v1_id}) do
{:ok, edge_meta} -> sum + map_size(edge_meta)
_ -> sum
end
end
end)
end
def in_degree(%__MODULE__{in_edges: ie, edges: meta, vertex_identifier: vertex_identifier}, v) do
with v_id <- vertex_identifier.(v),
{:ok, v_in} <- Map.fetch(ie, v_id) do
Enum.reduce(v_in, 0, fn v1_id, sum ->
sum + map_size(Map.get(meta, {v1_id, v_id}))
end)
else
_ -> 0
end
end
@doc """
Returns the out-degree of vertex `v` of graph `g`.
The *out-degree* of a vertex is the number of edges directed outbound from that vertex.
For undirected graphs, the in-degree and out-degree are always the same - the sum total
of all edges inbound or outbound from the vertex.
## Example
iex> g = Graph.new |> Graph.add_vertices([:a, :b, :c]) |> Graph.add_edge(:a, :b)
...> Graph.out_degree(g, :a)
1
"""
@spec out_degree(t, vertex) :: non_neg_integer
def out_degree(%__MODULE__{type: :undirected} = g, v) do
# Take advantage of the fact that in_degree and out_degree
# are the same for undirected graphs
in_degree(g, v)
end
def out_degree(%__MODULE__{out_edges: oe, edges: meta, vertex_identifier: vertex_identifier}, v) do
with v_id <- vertex_identifier.(v),
{:ok, v_out} <- Map.fetch(oe, v_id) do
Enum.reduce(v_out, 0, fn v2_id, sum ->
sum + map_size(Map.get(meta, {v_id, v2_id}))
end)
else
_ -> 0
end
end
@doc """
Return all neighboring vertices of the given vertex.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :a}, {:b, :c}, {:c, :a}])
...> Graph.neighbors(g, :a)
[:b, :c]
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :a}, {:b, :c}, {:c, :a}])
...> Graph.neighbors(g, :d)
[]
"""
@spec neighbors(t, vertex) :: [vertex]
def neighbors(
%__MODULE__{
in_edges: ie,
out_edges: oe,
vertices: vs,
vertex_identifier: vertex_identifier
},
v
) do
v_id = vertex_identifier.(v)
v_in = Map.get(ie, v_id, MapSet.new())
v_out = Map.get(oe, v_id, MapSet.new())
v_all = MapSet.union(v_in, v_out)
Enum.map(v_all, &Map.get(vs, &1))
end
@doc """
Returns a list of vertices which all have edges coming in to the given vertex `v`.
In the case of undirected graphs, it delegates to `neighbors/2`.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:b, :c}])
...> Graph.in_neighbors(g, :b)
[:a]
"""
@spec in_neighbors(t, vertex) :: [vertex]
def in_neighbors(%__MODULE__{type: :undirected} = g, v) do
neighbors(g, v)
end
def in_neighbors(
%__MODULE__{in_edges: ie, vertices: vs, vertex_identifier: vertex_identifier},
v
) do
with v_id <- vertex_identifier.(v),
{:ok, v_in} <- Map.fetch(ie, v_id) do
Enum.map(v_in, &Map.get(vs, &1))
else
_ -> []
end
end
@doc """
Returns a list of `Graph.Edge` structs representing the in edges to vertex `v`.
In the case of undirected graphs, it delegates to `edges/2`.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:b, :c}])
...> Graph.in_edges(g, :b)
[%Graph.Edge{v1: :a, v2: :b, label: :foo}, %Graph.Edge{v1: :a, v2: :b}]
"""
  @spec in_edges(t, vertex) :: [Edge.t()]
def in_edges(%__MODULE__{type: :undirected} = g, v) do
edges(g, v)
end
def in_edges(
%__MODULE__{
vertices: vs,
in_edges: ie,
edges: meta,
vertex_identifier: vertex_identifier
},
v
) do
with v_id <- vertex_identifier.(v),
{:ok, v_in} <- Map.fetch(ie, v_id) do
Enum.flat_map(v_in, fn v1_id ->
v1 = Map.get(vs, v1_id)
Enum.map(Map.get(meta, {v1_id, v_id}), fn {label, weight} ->
Edge.new(v1, v, label: label, weight: weight)
end)
end)
else
_ -> []
end
end
@doc """
Returns a list of vertices which the given vertex `v` has edges going to.
In the case of undirected graphs, it delegates to `neighbors/2`.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:b, :c}])
...> Graph.out_neighbors(g, :a)
[:b]
"""
@spec out_neighbors(t, vertex) :: [vertex]
def out_neighbors(%__MODULE__{type: :undirected} = g, v) do
neighbors(g, v)
end
def out_neighbors(
%__MODULE__{vertices: vs, out_edges: oe, vertex_identifier: vertex_identifier},
v
) do
with v_id <- vertex_identifier.(v),
{:ok, v_out} <- Map.fetch(oe, v_id) do
Enum.map(v_out, &Map.get(vs, &1))
else
_ -> []
end
end
@doc """
Returns a list of `Graph.Edge` structs representing the out edges from vertex `v`.
In the case of undirected graphs, it delegates to `edges/2`.
## Example
iex> g = Graph.new |> Graph.add_edges([{:a, :b}, {:a, :b, label: :foo}, {:b, :c}])
...> Graph.out_edges(g, :a)
[%Graph.Edge{v1: :a, v2: :b, label: :foo}, %Graph.Edge{v1: :a, v2: :b}]
"""
  @spec out_edges(t, vertex) :: [Edge.t()]
def out_edges(%__MODULE__{type: :undirected} = g, v) do
edges(g, v)
end
def out_edges(
%__MODULE__{
vertices: vs,
out_edges: oe,
edges: meta,
vertex_identifier: vertex_identifier
},
v
) do
with v_id <- vertex_identifier.(v),
{:ok, v_out} <- Map.fetch(oe, v_id) do
Enum.flat_map(v_out, fn v2_id ->
v2 = Map.get(vs, v2_id)
Enum.map(Map.get(meta, {v_id, v2_id}), fn {label, weight} ->
Edge.new(v, v2, label: label, weight: weight)
end)
end)
else
_ ->
[]
end
end
@doc """
Builds a maximal subgraph of `g` which includes all of the vertices in `vs` and the edges which connect them.
  See the test suite for more extensive example usage.
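  ## Example

  A sketch of typical usage; edge ordering may vary:

      g = Graph.new |> Graph.add_edges([{:a, :b}, {:b, :c}, {:c, :d}])
      g |> Graph.subgraph([:a, :b, :c]) |> Graph.edges()
      # => e.g. [%Graph.Edge{v1: :a, v2: :b}, %Graph.Edge{v1: :b, v2: :c}]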
"""
@spec subgraph(t, [vertex]) :: t
def subgraph(
%__MODULE__{
type: type,
vertices: vertices,
out_edges: oe,
edges: meta,
vertex_identifier: vertex_identifier
},
vs
) do
allowed =
vs
|> Enum.map(&vertex_identifier.(&1))
|> Enum.filter(&Map.has_key?(vertices, &1))
|> MapSet.new()
Enum.reduce(allowed, Graph.new(type: type), fn v_id, sg ->
v = Map.get(vertices, v_id)
sg = Graph.add_vertex(sg, v)
oe
|> Map.get(v_id, MapSet.new())
|> MapSet.intersection(allowed)
|> Enum.reduce(sg, fn v2_id, sg ->
v2 = Map.get(vertices, v2_id)
Enum.reduce(Map.get(meta, {v_id, v2_id}), sg, fn {label, weight}, sg ->
Graph.add_edge(sg, v, v2, label: label, weight: weight)
end)
end)
end)
end
end
defmodule Matrix.Cluster do
@moduledoc """
  Holds state about agent centers registered in the cluster.
  This module is meant to be used when new agent centers are registered to or
  unregistered from the cluster.
  ## Example
  Cluster.register_node %AgentCenter{aliaz: "Mars", address: "localhost:4000"}
Cluster.unregister_node "Mars"
"""
use GenServer
alias Matrix.{Env, AgentCenter}
defmodule State do
@moduledoc """
    A map where keys are agent center aliases and values are agent center addresses.
    ## Example
    %State{nodes: %{"Mars" => "localhost:4000"}}
"""
defstruct nodes: nil
@type t :: %__MODULE__{nodes: Map.t}
end
def start_link(_options \\ []) do
GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
end
# Client API
@doc """
  Returns all agent centers in the cluster.
## Example
Cluster.nodes
# => `[%AgentCenter{aliaz: "Mars", address: "localhost:4000"}]`
"""
  @spec nodes :: [AgentCenter.t]
def nodes do
GenServer.call(__MODULE__, {:nodes})
end
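  @doc """
  Returns the address of the agent center with the given alias, or `nil` if
  no agent center with that alias is registered.
  ## Example
  Cluster.address_for "Mars"
  # => `"localhost:4000"`
  """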
@spec address_for(aliaz :: String.t) :: String.t | nil
def address_for(aliaz) do
GenServer.call(__MODULE__, {:address_for, aliaz})
end
@doc """
  Adds a new agent center to the cluster.
Args:
* agent_center - AgentCenter struct being registered
## Example
  Cluster.register_node(%AgentCenter{aliaz: "Mars", address: "MilkyWay"})
"""
  @spec register_node(agent_center :: AgentCenter.t) :: :ok | {:error, :exists}
def register_node(agent_center) do
GenServer.call(__MODULE__, {:register_node, agent_center})
end
@doc """
  Removes an agent center from the cluster.
Args:
* aliaz - Alias of agent center being unregistered
## Example
Cluster.unregister_node "Mars"
"""
@spec unregister_node(aliaz :: String.t) :: :ok
def unregister_node(aliaz) do
GenServer.cast(__MODULE__, {:unregister_node, aliaz})
end
@doc """
  Checks if an agent center exists in the cluster.
Args:
* aliaz - Alias of agent center
## Example
  Cluster.exist? Env.this
  # => `true`
  Cluster.exist? "Venera"
  # => `false`
"""
@spec exist?(aliaz :: String.t) :: boolean
def exist?(aliaz) do
GenServer.call(__MODULE__, {:exist?, aliaz})
end
@doc """
  Clears and resets the cluster to contain only this agent center.
## Example
  Cluster.register_node(%AgentCenter{aliaz: "Mars", address: "MilkyWay"})
Cluster.reset
Cluster.nodes
# => `[Env.this]`
"""
@spec reset :: :ok
def reset do
GenServer.cast(__MODULE__, {:reset})
end
# Server callbacks
def handle_call({:nodes}, _from, state) do
{:reply, nodes_list(state), state}
end
def handle_call({:address_for, aliaz}, _from, state) do
{:reply, state.nodes[aliaz], state}
end
def handle_call({:register_node, %AgentCenter{aliaz: aliaz, address: address}}, _from, state) do
if Map.has_key?(state.nodes, aliaz) do
{:reply, {:error, :exists}, state}
else
nodes = Map.put(state.nodes, aliaz, address)
{:reply, :ok, %State{nodes: nodes}}
end
end
def handle_call({:exist?, aliaz}, _from, state) do
{:reply, Map.has_key?(state.nodes, aliaz), state}
end
def handle_cast({:unregister_node, aliaz}, state) do
nodes = Map.delete(state.nodes, aliaz)
{:noreply, %State{nodes: nodes}}
end
def handle_cast({:reset}, _nodes) do
{:noreply, init_state()}
end
def init(_) do
{:ok, init_state()}
end
defp nodes_list(state) do
state.nodes
|> Enum.map(fn {aliaz, address} ->
%AgentCenter{aliaz: aliaz, address: address}
end)
end
defp init_state do
%State{nodes: %{Env.this_aliaz => Env.this_address}}
end
end
defmodule HomeBot.Monitoring.DailyEnergyMonitoring do
@moduledoc "This job will run some daily checks"
alias HomeBot.DataStore
alias HomeBot.Tools
def run do
check_gas_usage()
check_electricity_usage()
end
def check_gas_usage do
start_time = start_of_yesterday()
end_time = end_of_yesterday()
temperature_yesterday = get_average_temperature(start_time, end_time)
gas_usage_yesterday = get_total_gas_usage(start_time, end_time)
{mean, standard_deviation} = get_mean_std_gas_usage_for_temperature(temperature_yesterday)
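    # Flag yesterday's usage when it falls outside mean ± 2 standard deviations,
    # i.e. roughly outside the 95% band if usage is normally distributed.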
if gas_usage_yesterday < mean - 2 * standard_deviation do
HomeBot.Bot.notify_users(
"Gas usage yesterday was lower than expected. Yesterday it was #{gas_usage_yesterday}, normally it is #{
mean
} for the average temperature of #{temperature_yesterday}"
)
end
if gas_usage_yesterday > mean + 2 * standard_deviation do
HomeBot.Bot.notify_users(
"Gas usage yesterday was higher than expected. Yesterday it was #{gas_usage_yesterday}, normally it is #{
mean
} for the average temperature of #{temperature_yesterday}"
)
end
end
def check_electricity_usage do
start_time = start_of_yesterday()
end_time = end_of_yesterday()
weekday = Timex.weekday(Timex.now("Europe/Amsterdam") |> Timex.shift(days: -1))
electricity_usage_yesterday = get_total_electricity_usage(start_time, end_time)
{mean, standard_deviation} = get_mean_std_electricity_usage_for_weekday(weekday)
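    # Same ±2 standard deviation check as for gas, but the baseline is keyed
    # to the day of the week rather than to temperature.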
if electricity_usage_yesterday < mean - 2 * standard_deviation do
HomeBot.Bot.notify_users(
"Electricity usage yesterday was lower than expected. Yesterday it was #{
electricity_usage_yesterday
}, normally it is #{mean} for this day of the week"
)
end
if electricity_usage_yesterday > mean + 2 * standard_deviation do
HomeBot.Bot.notify_users(
"Electricity usage yesterday was higher than expected. Yesterday it was #{
electricity_usage_yesterday
}, normally it is #{mean} for this day of the week"
)
end
end
defp get_average_temperature(start_time, end_time) do
%{"temperature" => temperature} = DataStore.get_average_temperature(start_time, end_time)
round(temperature)
end
defp get_total_gas_usage(start_time, end_time) do
DataStore.get_gas_usage("1h", start_time, end_time)
|> Enum.reduce(0, fn x, acc -> acc + x["usage"] end)
end
defp get_total_electricity_usage(start_time, end_time) do
DataStore.get_electricity_usage("1h", start_time, end_time)
|> Enum.reduce(0, fn x, acc -> acc + x["low_tariff_usage"] + x["normal_tariff_usage"] end)
end
defp get_mean_std_gas_usage_for_temperature(temperature) do
days_with_same_temperature =
HomeBot.DataStore.get_average_temperature_per_day(:all)
|> Enum.filter(fn %{"temperature" => temp} -> temp != nil && round(temp) == temperature end)
|> Enum.map(fn %{"time" => time} -> time end)
previous_usage =
DataStore.get_gas_usage_per_day(:all)
|> Enum.filter(fn %{"time" => time} -> Enum.member?(days_with_same_temperature, time) end)
|> Enum.map(fn x -> x["usage"] end)
mean = Tools.mean(previous_usage)
standard_deviation = Tools.standard_deviation(previous_usage, mean)
{mean, standard_deviation}
end
defp get_mean_std_electricity_usage_for_weekday(weekday) do
historic_usage_values =
DataStore.get_electricity_usage(
"1d",
"2018-01-01T00:00:00Z",
"#{DateTime.to_iso8601(Timex.now())}"
)
|> Enum.filter(fn record -> get_weekday(record["time"]) == weekday end)
|> Enum.map(fn x -> x["low_tariff_usage"] + x["normal_tariff_usage"] end)
mean = Tools.mean(historic_usage_values)
standard_deviation = Tools.standard_deviation(historic_usage_values, mean)
{mean, standard_deviation}
end
defp get_weekday(time_string) do
{:ok, dt, _} = DateTime.from_iso8601(time_string)
Timex.weekday(dt)
end
defp start_of_yesterday do
Timex.now("Europe/Amsterdam")
|> Timex.beginning_of_day()
|> Timex.shift(days: -1)
|> DateTime.to_iso8601()
end
defp end_of_yesterday do
Timex.now("Europe/Amsterdam") |> Timex.beginning_of_day() |> DateTime.to_iso8601()
end
end
defmodule VerifyOrigin do
@moduledoc """
A Plug adapter to protect from CSRF attacks by verifying the `Origin` header.
## Options
* `:origin` - The origin of the server - requests from this origin will always proceed. Defaults to the default hostname configured for your application's endpoint.
* `:strict` - Whether to reject requests that lack an Origin header. Defaults to `true`.
  * `:allow_safe` - Whether to allow safe requests (GET, HEAD) that lack an Origin header, even in strict mode. Defaults to `true`.
* `:fallback_to_referer` - If the Origin header is missing, fill it with the origin part of the Referer. Defaults to `false`.
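
  ## Usage

  A minimal sketch of wiring this plug into a pipeline (the origin value is
  illustrative):

      plug VerifyOrigin, origin: "https://example.com"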
"""
import Plug.Conn
@safe_methods ["GET", "HEAD"]
defmodule UnverifiedOriginError do
@moduledoc "Error raised when origin is unverified."
message = "unverified Origin header"
defexception message: message, plug_status: 403
end
def init(opts \\ []) do
origin = Keyword.get(opts, :origin)
strict = Keyword.get(opts, :strict, true)
allow_safe = Keyword.get(opts, :allow_safe, true)
fallback_to_referer = Keyword.get(opts, :fallback_to_referer, false)
%{
origin: origin,
strict: strict,
allow_safe: allow_safe,
fallback_to_referer: fallback_to_referer
}
end
def call(conn, config = %{origin: nil}) do
call(conn, %{config | origin: get_origin_from_conn(conn)})
end
def call(conn, config) do
origin =
conn
|> get_req_header("origin")
|> fallback_to_referer(conn, config)
|> List.first()
if verified_origin?(conn, origin, config) || skip_verify_origin?(conn) do
conn
else
raise UnverifiedOriginError
end
end
defp verified_origin?(_conn, nil, %{strict: false}),
do: true
defp verified_origin?(%{method: method}, nil, %{allow_safe: true}) when method in @safe_methods,
do: true
defp verified_origin?(_conn, origin, %{origin: allowed_origin}), do: origin == allowed_origin
defp get_origin_from_conn(conn) do
conn
|> Phoenix.Controller.current_url()
|> URI.parse()
|> Map.put(:path, nil)
|> to_string()
end
defp fallback_to_referer([], conn, %{fallback_to_referer: true}) do
get_req_header(conn, "referer")
end
defp fallback_to_referer(origin, _conn, _opts), do: origin
defp skip_verify_origin?(%Plug.Conn{private: %{plug_skip_verify_origin: true}}), do: true
defp skip_verify_origin?(%Plug.Conn{}), do: false
end
defmodule Morphix do
@moduledoc """
Morphix provides convenience methods for dealing with Maps, Lists, and Tuples.
`morphiflat/1` and `morphiflat!/1` flatten maps, discarding top level keys.
### Examples:
```
iex> Morphix.morphiflat %{flatten: %{this: "map"}, if: "you please"}
{:ok, %{this: "map", if: "you please"}}
iex> Morphix.morphiflat! %{flatten: %{this: "map"}, o: "k"}
%{this: "map", o: "k"}
```
`morphify!/2` and `morphify/2` will take either a List or a Tuple as the first argument, and a function as the second. Returns a map, with the keys of the map being the function applied to each member of the input.
### Examples:
```
iex> Morphix.morphify!({[1,2,3], [12], [1,2,3,4]}, &length/1)
%{1 => [12], 3 => [1,2,3], 4 => [1,2,3,4]}
```
  `atomorphify/1` and `atomorphiform/1` take a map as an input and return the map with all string keys converted to atoms. `atomorphiform/1` is recursive. `atomorphiform/2` and `atomorphify/2` take `:safe` as a second argument; they will not convert string keys if the resulting atom has not been defined.
### Examples:
```
iex> Morphix.atomorphify(%{"a" => "2", :a => 2, 'a' => :two})
{:ok, %{:a => 2, 'a' => :two }}
```
  `compactify` and `compactiform` take a map or list as input and return a filtered map or list, removing any keys or elements with nil values or with an empty map as a value.
`partiphify!/2` and `partiphify/2` take a list `l` and an integer `k` and partition `l` into `k` sublists of balanced size. There will always be `k` lists, even if some must be empty.
### Examples:
```
iex> Morphix.partiphify!([:a, :b, :c, :d, :e, :f], 4)
[[:c], [:d], [:e, :a], [:f, :b]]
iex> Morphix.partiphify!([:a, :b, :c, :d, :e], 4)
[[:b], [:c], [:d], [:e, :a]]
iex> Morphix.partiphify!([:a, :b, :c, :d], 4)
[[:a], [:b], [:c], [:d]]
iex> Morphix.partiphify!([:a, :b, :c], 4)
[[:a], [:b], [:c], []]
```
`equaliform?/2` compares two ordered or unordered lists and returns `true` if they are equal. It also handles nested elements.
### Example:
iex> Morphix.equaliform?([1, ["two", :three], %{a: 1, c: "three", e: %{d: 4, b: 2}}], [[:three, "two"], 1, %{c: "three", a: 1, e: %{b: 2, d: 4}}])
true
`equalify?/2` compares two ordered or unordered lists and returns `true` if they are equal.
### Example:
iex> Morphix.equalify?([1, ["two", :three], %{a: 1, c: "three", e: %{d: 4, b: 2}}], [["two", :three], 1, %{c: "three", a: 1, e: %{b: 2, d: 4}}])
true
"""
use Util.EqualityOperator
  @spec morphiflat(map()) :: {:ok | :error, map() | String.t()}
@spec morphiflat!(map()) :: map()
@spec morphify([any], fun()) :: {:ok | :error, map() | String.t()}
@spec morphify(tuple(), fun()) :: {:ok | :error, map() | String.t()}
@spec morphify!([any], fun()) :: map()
@spec morphify!(tuple(), fun()) :: map()
@spec atomorphify(map()) :: {:ok, map()}
@spec atomorphify(map(), :safe) :: {:ok, map()}
@spec atomorphify(map(), list()) :: {:ok, map()}
@spec atomorphify!(map()) :: map()
@spec stringmorphify!(map()) :: map()
@spec stringmorphify!(map(), list()) :: map()
@spec atomorphify!(map(), :safe) :: map()
@spec atomorphify!(map(), list()) :: map()
@spec morphiform!(map(), fun(), list()) :: map()
@spec atomorphiform(map()) :: {:ok, map()}
@spec atomorphiform(map(), :safe) :: {:ok, map()}
@spec atomorphiform(map(), list()) :: {:ok, map()}
@spec atomorphiform!(map()) :: map()
@spec atomorphiform!(map(), :safe) :: map()
@spec atomorphiform!(map(), list()) :: map()
@spec stringmorphiform!(map) :: map()
@spec stringmorphiform!(map, list()) :: map()
@spec compactify(map() | list()) :: {:ok, map()} | {:ok, list()} | {:error, %ArgumentError{}}
@spec compactify!(map() | list()) :: map() | list() | %ArgumentError{}
@spec compactiform!(map() | list()) :: map() | list() | %ArgumentError{}
@spec compactiform(map() | list()) :: {:ok, map()} | {:ok, list()} | {:error, %ArgumentError{}}
  @spec partiphify!(list(), integer) :: [list(any)] | no_return
  @spec partiphify(list(), integer) :: {:ok, [list(any)]} | {:error, term}
@doc """
  Takes a map and returns a flattened version of that map, discarding any nested keys.
### Examples:
```
iex> Morphix.morphiflat! %{you: "will", youwill: %{be: "discarded"}}
%{you: "will", be: "discarded"}
```
"""
def morphiflat!(map) do
flattn(map)
end
@doc """
Takes a map and returns a flattened version of that map. If the map has nested maps (or the maps nested maps have nested maps, etc.) morphiflat moves all nested key/value pairs to the top level, discarding the original keys.
### Examples:
```
iex> Morphix.morphiflat %{this: %{nested: :map, inner: %{twonested: :map, is: "now flat"}}}
{:ok, %{nested: :map, twonested: :map, is: "now flat"}}
```
In the example, the key `:this` is discarded, along with the key `inner`, because they both point to map values.
Will return `{:error, <input> is not a Map}` if the input is not a map.
### Examples:
```
iex> Morphix.morphiflat({1,2,3})
{:error, "{1, 2, 3} is not a Map"}
```
"""
def morphiflat(map) when is_map(map) do
{:ok, flattn(map)}
rescue
exception -> {:error, Exception.message(exception)}
end
def morphiflat(not_map), do: {:error, "#{inspect(not_map)} is not a Map"}
defp flattn(map) do
not_maps = fn {k, v}, acc ->
case is_map(v) do
false -> Map.put_new(acc, k, v)
true -> Map.merge(acc, flattn(v))
end
end
Enum.reduce(map, %{}, not_maps)
end
@doc """
Takes a map as an argument and returns `{:ok, map}`, with string keys converted to atom keys. Does not examine nested maps.
### Examples
```
iex> Morphix.atomorphify(%{"this" => "map", "has" => %{"string" => "keys"}})
{:ok, %{this: "map", has: %{"string" => "keys"}}}
iex> Morphix.atomorphify(%{1 => "2", "1" => 2, "one" => :two})
{:ok, %{1 => "2", "1": 2, one: :two}}
```
"""
def atomorphify(map) when is_map(map) do
{:ok, atomorphify!(map)}
end
@doc """
Takes a map and the `:safe` flag and returns `{:ok, map}`, with string keys converted to existing atoms if possible, and ignored otherwise. Ignores nested maps.
### Examples:
```
iex> :existing_atom
iex> Morphix.atomorphify(%{"existing_atom" => "exists", "non_existent_atom" => "does_not", 1 => "is_ignored"}, :safe)
{:ok, %{ "non_existent_atom" => "does_not", 1 => "is_ignored", existing_atom: "exists"}}
```
"""
def atomorphify(map, :safe) when is_map(map) do
{:ok, atomorphify!(map, :safe)}
end
@doc """
Takes a map and a list of allowed strings to convert to atoms and returns `{:ok, map}`, with string keys in the list converted to atoms. Ignores nested maps.
### Examples:
```
iex> Morphix.atomorphify(%{"allowed_key" => "exists", "non_existent_atom" => "does_not", 1 => "is_ignored"}, ["allowed_key"])
{:ok, %{ "non_existent_atom" => "does_not", 1 => "is_ignored", allowed_key: "exists"}}
```
"""
def atomorphify(map, allowed) when is_map(map) and is_list(allowed) do
{:ok, atomorphify!(map, allowed)}
end
@doc """
Takes a map as an argument and returns the same map with string keys converted to atom keys. Does not examine nested maps.
### Examples
```
iex> Morphix.atomorphify!(%{"this" => "map", "has" => %{"string" => "keys"}})
%{this: "map", has: %{"string" => "keys"}}
iex> Morphix.atomorphify!(%{1 => "2", "1" => 2, "one" => :two})
%{1 => "2", "1": 2, one: :two}
```
"""
def atomorphify!(map) when is_map(map) do
keymorphify!(map, &atomize_binary/2)
end
@doc """
Takes a map and the `:safe` flag and returns the same map, with string keys converted to existing atoms if possible, and ignored otherwise. Ignores nested maps.
### Examples:
```
iex> :existing_atom
iex> Morphix.atomorphify!(%{"existing_atom" => "exists", "non_existent_atom" => "does_not", 1 => "is_ignored"}, :safe)
%{"non_existent_atom" => "does_not", 1 => "is_ignored", existing_atom: "exists"}
```
"""
def atomorphify!(map, :safe) when is_map(map) do
keymorphify!(map, &safe_atomize_binary/2)
end
@doc """
Takes a map and a list of allowed strings to convert to atoms and returns the same map, with string keys in the list converted to atoms. Ignores nested maps.
### Examples:
```
iex> Morphix.atomorphify!(%{"allowed_key" => "exists", "non_existent_atom" => "does_not", 1 => "is_ignored"}, ["allowed_key"])
%{"non_existent_atom" => "does_not", 1 => "is_ignored", allowed_key: "exists"}
```
"""
def atomorphify!(map, []) when is_map(map), do: map
def atomorphify!(map, allowed) when is_map(map) and is_list(allowed) do
keymorphify!(map, &safe_atomize_binary/2, allowed)
end
@doc """
Takes a map as an argument and returns the same map with atom keys converted to string keys. Does not examine nested maps.
### Examples
```
iex> Morphix.stringmorphify!(%{this: "map", has: %{"string" => "keys"} })
%{"this" => "map", "has" => %{"string" => "keys"}}
iex> Morphix.stringmorphify!(%{1 => "2", "1" => 2, one: :two})
%{1 => "2", "1" => 2, "one" => :two}
```
"""
def stringmorphify!(map) when is_map(map) do
keymorphify!(map, &binarize_atom/2)
end
@doc """
Takes a map and a list of allowed atoms as arguments and returns the same map, with any atoms that are in the list converted to strings, and any atoms that are not in the list left as atoms.
### Examples:
```
iex> map = %{memberof: "atoms", embeded: %{"wont" => "convert"}}
iex> Morphix.stringmorphify!(map, [:memberof])
%{:embeded => %{"wont" => "convert"}, "memberof" => "atoms"}
```
```
iex> map = %{id: "fooobarrr", date_of_birth: ~D[2014-04-14]}
iex> Morphix.stringmorphify!(map)
%{"id" => "fooobarrr", "date_of_birth" => ~D[2014-04-14]}
```
"""
def stringmorphify!(map, []) when is_map(map), do: map
def stringmorphify!(map, allowed) when is_map(map) and is_list(allowed) do
keymorphify!(map, &binarize_atom/2, allowed)
end
def stringmorphify!(map, not_allowed) when is_map(map) do
raise(ArgumentError, message: "expecting a list of atoms, got: #{inspect(not_allowed)}")
end
def stringmorphify!(not_map, _) when not is_map(not_map) do
raise(ArgumentError, message: "expecting a map, got: #{inspect(not_map)}")
end
@doc """
Takes a map as an argument and returns `{:ok, map}`, with all string keys (including keys in nested maps) converted to atom keys.
### Examples:
```
iex> Morphix.atomorphiform(%{:this => %{map: %{"has" => "a", :nested => "string", :for => %{a: :key}}}, "the" => %{"other" => %{map: :does}}, as: "well"})
{:ok,%{this: %{map: %{has: "a", nested: "string", for: %{a: :key}}}, the: %{other: %{map: :does}}, as: "well"} }
iex> Morphix.atomorphiform(%{"this" => ["map", %{"has" => ["a", "list"]}], "inside" => "it"})
{:ok, %{this: ["map", %{has: ["a", "list"]}], inside: "it"}}
```
"""
def atomorphiform(map) when is_map(map) do
{:ok, atomorphiform!(map)}
end
@doc """
Takes a map and the `:safe` flag as arguments and returns `{:ok, map}`, with any strings that are existing atoms converted to atoms, and any strings that are not existing atoms left as strings.
Works recursively on embedded maps.
### Examples:
```
iex> [:allowed, :values]
iex> map = %{"allowed" => "atoms", "embed" => %{"will" => "convert", "values" => "to atoms"}}
iex> Morphix.atomorphiform(map, :safe)
{:ok, %{"embed" => %{"will" => "convert", values: "to atoms"}, allowed: "atoms"}}
```
"""
def atomorphiform(map, :safe) when is_map(map) do
{:ok, atomorphiform!(map, :safe)}
end
@doc """
Takes a map and a list of allowed strings as arguments and returns `{:ok, map}`, with any strings that are in the list converted to atoms, and any strings that are not in the list left as strings.
Works recursively on embedded maps.
### Examples:
```
iex> map = %{"memberof" => "atoms", "embed" => %{"will" => "convert", "thelist" => "to atoms"}}
iex> Morphix.atomorphiform(map, ["memberof", "thelist"])
{:ok, %{"embed" => %{"will" => "convert", thelist: "to atoms"}, memberof: "atoms"}}
```
"""
def atomorphiform(map, allowed) when is_map(map) do
{:ok, atomorphiform!(map, allowed)}
end
@doc """
Takes a map as an argument and returns the same map, with all string keys (including keys in nested maps) converted to atom keys.
### Examples:
```
iex> Morphix.atomorphiform!(%{:this => %{map: %{"has" => "a", :nested => "string", :for => %{a: :key}}}, "the" => %{"other" => %{map: :does}}, as: "well"})
%{this: %{map: %{has: "a", nested: "string", for: %{a: :key}}}, the: %{other: %{map: :does}}, as: "well"}
iex> Morphix.atomorphiform!(%{"this" => ["map", %{"has" => ["a", "list"]}], "inside" => "it"})
%{this: ["map", %{has: ["a", "list"]}], inside: "it"}
```
"""
def atomorphiform!(map) when is_map(map) do
morphiform!(map, &atomize_binary/2)
end
@doc """
Takes a map and the `:safe` flag as arguments and returns the same map, with any strings that are existing atoms converted to atoms, and any strings that are not existing atoms left as strings.
Works recursively on embedded maps.
### Examples:
```
iex> [:allowed, :values]
iex> map = %{"allowed" => "atoms", "embed" => %{"will" => "convert", "values" => "to atoms"}}
iex> Morphix.atomorphiform!(map, :safe)
%{"embed" => %{"will" => "convert", values: "to atoms"}, allowed: "atoms"}
```
"""
def atomorphiform!(map, :safe) when is_map(map) do
morphiform!(map, &safe_atomize_binary/2)
end
@doc """
Takes a map and a list of allowed strings as arguments and returns the same map, with any strings that are in the list converted to atoms, and any strings that are not in the list left as strings.
Works recursively on embedded maps.
### Examples:
```
iex> map = %{"memberof" => "atoms", "embed" => %{"will" => "convert", "thelist" => "to atoms"}}
iex> Morphix.atomorphiform!(map, ["memberof", "thelist"])
%{"embed" => %{"will" => "convert", thelist: "to atoms"}, memberof: "atoms"}
```
```
iex> map = %{"id" => "fooobarrr", "date_of_birth" => ~D[2014-04-14]}
%{"date_of_birth" => ~D[2014-04-14], "id" => "fooobarrr"}
iex> Morphix.atomorphiform!(map)
%{id: "fooobarrr", date_of_birth: ~D[2014-04-14]}
```
"""
def atomorphiform!(map, []) when is_map(map), do: map
def atomorphiform!(map, allowed) when is_map(map) and is_list(allowed) do
morphiform!(map, &safe_atomize_binary/2, allowed)
end
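  @doc """
  Takes a map as an argument and returns the same map, with all atom keys
  (including keys in nested maps) converted to string keys.
  ### Examples:
  ```
  iex> Morphix.stringmorphiform!(%{this: "map", has: %{nested: "keys"}})
  %{"this" => "map", "has" => %{"nested" => "keys"}}
  ```
  """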
def stringmorphiform!(map) when is_map(map) do
morphiform!(map, &stringify_all/2)
end
def stringmorphiform!(map, []) when is_map(map), do: map
def stringmorphiform!(map, allowed) when is_map(map) and is_list(allowed) do
morphiform!(map, &stringify_all/2, allowed)
end
def stringmorphiform!(map, not_allowed) when is_map(map) do
raise(ArgumentError, message: "expecting a list of atoms, got: #{inspect(not_allowed)}")
end
def stringmorphiform!(not_map, _) when not is_map(not_map) do
raise(ArgumentError, message: "expecting a map, got: #{inspect(not_map)}")
end
defp stringify_all(value, []) do
if is_atom(value) do
try do
to_string(value)
rescue
_ -> value
end
else
value
end
end
defp stringify_all(value, allowed) do
if is_atom(value) && Enum.member?(allowed, value) do
to_string(value)
else
value
end
end
defp process_list_item(item, transformer, allowed) do
cond do
is_map(item) -> morphiform!(item, transformer, allowed)
is_list(item) -> Enum.map(item, fn x -> process_list_item(x, transformer, allowed) end)
true -> item
end
end
@doc """
  Takes a map and a function as arguments and returns the same map, with all keys (including keys in nested maps) transformed by the function.
  The function passed in must take two arguments: the key and an allowed list.
### Examples:
```
iex> Morphix.morphiform!(%{"this" => %{"map" => %{"has" => "a", "nested" => "string", "for" => %{"a" => :key}}}, "the" => %{"other" => %{"map" => :does}},"as" => "well"}, fn key, [] -> String.upcase(key) end)
%{"THIS" => %{"MAP" => %{"HAS" => "a", "NESTED" => "string", "FOR" => %{"A" => :key}}}, "THE" => %{"OTHER" => %{"MAP" => :does}}, "AS" => "well"}
```
"""
def morphiform!(map, transformer, allowed \\ []) when is_map(map) do
morphkeys = fn {k, v}, acc ->
cond do
is_struct(v) ->
Map.put_new(acc, transformer.(k, allowed), v)
is_map(v) ->
Map.put_new(
acc,
transformer.(k, allowed),
morphiform!(v, transformer, allowed)
)
is_list(v) ->
Map.put_new(
acc,
transformer.(k, allowed),
process_list_item(v, transformer, allowed)
)
true ->
Map.put_new(acc, transformer.(k, allowed), v)
end
end
Enum.reduce(map, %{}, morphkeys)
end
defp keymorphify!(map, transformer, allowed \\ []) do
morphkeys = fn {k, v}, acc ->
Map.put_new(acc, transformer.(k, allowed), v)
end
Enum.reduce(map, %{}, morphkeys)
end
defp atomize_binary(value, []) do
if is_binary(value) do
String.to_atom(value)
else
value
end
end
defp binarize_atom(value, []) do
if is_atom(value) do
Atom.to_string(value)
else
value
end
end
defp binarize_atom(value, allowed) do
if is_atom(value) && Enum.member?(allowed, value) do
Atom.to_string(value)
else
value
end
end
defp safe_atomize_binary(value, []) do
if is_binary(value) do
try do
String.to_existing_atom(value)
rescue
_ -> value
end
else
value
end
end
defp safe_atomize_binary(value, allowed) do
if is_binary(value) && Enum.member?(allowed, value) do
String.to_atom(value)
else
value
end
end
@doc """
Takes a List and a function as arguments and returns `{:ok, Map}`, with the keys of the map the result of applying the function to each item in the list.
If the function cannot be applied, will return `{:error, message}`
### Examples
```
iex> Morphix.morphify([[1,2,3], [12], [1,2,3,4]], &Enum.count/1)
{:ok, %{1 => [12], 3 => [1,2,3], 4 => [1,2,3,4]}}
iex> Morphix.morphify({[1,2,3], [12], [1,2,3,4]}, &length/1)
{:ok, %{1 => [12], 3 => [1,2,3], 4 => [1,2,3,4]}}
iex> Morphix.morphify([1,2], &String.length/1)
{:error, "Unable to apply &String.length/1 to each of [1, 2]"}
```
"""
def morphify(enum, funct) when is_tuple(enum), do: morphify(Tuple.to_list(enum), funct)
def morphify(enum, funct) do
{:ok, morphify!(enum, funct)}
rescue
_ -> {:error, "Unable to apply #{inspect(funct)} to each of #{inspect(enum)}"}
end
@doc """
Takes a list and a function as arguments and returns a Map, with the keys of the map the result of applying the function to each item in the list.
### Examples
```
iex> Morphix.morphify!([[1,2,3], [12], [1,2,3,4]], &Enum.count/1)
%{1 => [12], 3 => [1,2,3], 4 => [1,2,3,4]}
```
"""
def morphify!(enum, funct) when is_tuple(enum), do: morphify!(Tuple.to_list(enum), funct)
def morphify!(enum, funct) do
Enum.reduce(enum, %{}, fn x, acc -> Map.put(acc, funct.(x), x) end)
end
@doc """
Takes a map or list and removes keys or elements that have nil values, or are empty maps.
### Examples
```
iex> Morphix.compactify!(%{nil_key: nil, not_nil: "nil"})
%{not_nil: "nil"}
iex> Morphix.compactify!([1, nil, "string", %{key: :value}])
[1, "string", %{key: :value}]
iex> Morphix.compactify!([a: nil, b: 2, c: "string"])
[b: 2, c: "string"]
iex> Morphix.compactify!(%{empty: %{}, not: "not"})
%{not: "not"}
iex> Morphix.compactify!({"not", "a map"})
** (ArgumentError) expecting a map or a list, got: {"not", "a map"}
```
"""
def compactify!(map) when is_map(map) do
map
|> Enum.reject(fn {_k, v} -> is_nil(v) || empty_map(v) end)
|> Enum.into(%{})
end
def compactify!(list) when is_list(list) do
list
|> Keyword.keyword?()
|> compactify!(list)
end
def compactify!(not_map_or_list) do
raise(ArgumentError, message: "expecting a map or a list, got: #{inspect(not_map_or_list)}")
end
defp compactify!(true, list) do
Enum.reject(list, fn {_k, v} -> is_nil(v) end)
end
defp compactify!(false, list) do
Enum.reject(list, fn elem -> is_nil(elem) end)
end
@doc """
Takes a map or a list and removes any keys or elements that have nil values.
### Examples
```
iex> Morphix.compactify(%{nil_key: nil, not_nil: "real value"})
{:ok, %{not_nil: "real value"}}
iex> Morphix.compactify([1, nil, "string", %{key: :value}])
{:ok, [1, "string", %{key: :value}]}
iex> Morphix.compactify([a: nil, b: 2, c: "string"])
{:ok, [b: 2, c: "string"]}
iex> Morphix.compactify(%{empty: %{}, not: "not"})
{:ok, %{not: "not"}}
iex> Morphix.compactify("won't work")
{:error, %ArgumentError{message: "expecting a map or a list, got: \\"won't work\\""}}
```
"""
def compactify(map_or_list) do
{:ok, compactify!(map_or_list)}
rescue
e -> {:error, e}
end
@doc """
Removes keys with nil values from nested maps, eliminates empty maps, and removes nil values from nested lists.
### Examples
```
iex> Morphix.compactiform!(%{nil_nil: nil, not_nil: "a value", nested: %{nil_val: nil, other: "other"}})
%{not_nil: "a value", nested: %{other: "other"}}
iex> Morphix.compactiform!(%{nil_nil: nil, not_nil: "a value", nested: %{nil_val: nil, other: "other", nested_empty: %{}}})
%{not_nil: "a value", nested: %{other: "other"}}
iex> Morphix.compactiform!([nil, "string", %{nil_nil: nil, not_nil: "a value", nested: %{nil_val: nil, other: "other", nested_empty: %{}}}, ["nested", nil, 2]])
["string", %{not_nil: "a value", nested: %{other: "other"}}, ["nested", 2]]
```
"""
def compactiform!(map) when is_map(map) do
compactor = fn {k, v}, acc ->
cond do
is_struct(v) -> Map.put_new(acc, k, v)
is_map(v) and Enum.empty?(v) -> acc
is_map(v) or is_list(v) -> Map.put_new(acc, k, compactiform!(v))
is_nil(v) -> acc
true -> Map.put_new(acc, k, v)
end
end
map
|> Enum.reduce(%{}, compactor)
|> compactify!
end
def compactiform!(list) when is_list(list) do
compactor = fn elem, acc ->
cond do
is_list(elem) and Enum.empty?(elem) -> acc
is_list(elem) or is_map(elem) -> acc ++ [compactiform!(elem)]
is_nil(elem) -> acc
true -> acc ++ [elem]
end
end
list
|> Enum.reduce([], compactor)
|> compactify!
end
def compactiform!(not_map_or_list) do
raise(ArgumentError, message: "expecting a map or a list, got: #{inspect(not_map_or_list)}")
end
@doc """
Removes keys with nil values from maps and nil elements from lists. It also handles nested maps and lists, and treats empty maps as nil values.
### Examples
```
iex> Morphix.compactiform(%{a: nil, b: "not", c: %{d: nil, e: %{}, f: %{g: "value"}}})
{:ok, %{b: "not", c: %{f: %{g: "value"}}}}
iex> Morphix.compactiform(%{has: %{a: ["list", "with", nil]}, and: ["a", %{nested: "map", with: nil}]})
{:ok, %{has: %{a: ["list", "with"]}, and: ["a", %{nested: "map"}]}}
iex> Morphix.compactiform(["list", %{a: "map", with: nil, and_empty: []}])
{:ok, ["list", %{a: "map", and_empty: []}]}
iex> Morphix.compactiform(5)
{:error, %ArgumentError{message: "expecting a map or a list, got: 5"}}
```
"""
def compactiform(map) do
{:ok, compactiform!(map)}
rescue
e -> {:error, e}
end
@doc """
Divides a list into k distinct sub-lists, with partitions being as close to the same size as possible
### Examples
```
iex> Morphix.partiphify!([1,2,3,4,5,6], 4)
[[3], [4], [5, 1], [6, 2]]
iex> Morphix.partiphify!(("abcdefghijklmnop" |> String.split("", trim: true)), 4)
[["a", "b", "c", "d"], ["e", "f", "g", "h"], ["i", "j", "k", "l"], ["m", "n", "o", "p"]]
```
"""
def partiphify!(list, k) when is_list(list) and is_integer(k) do
ceil_div = fn a, b -> Float.ceil(a / b) end
with chunk_size when chunk_size > 0 <-
list
|> Enum.count()
|> Integer.floor_div(k),
true <-
list
|> Enum.count()
|> Integer.mod(k)
|> ceil_div.(chunk_size) > 0 do
list
|> into_buckets(k, chunk_size)
|> distribute_extra()
else
0 ->
list = Enum.chunk_every(list, 1, 1, [])
empty_buckets = k - Enum.count(list)
Enum.reduce(1..empty_buckets, list, fn _, acc -> acc ++ [[]] end)
false ->
chunk_size =
list
|> Enum.count()
|> Integer.floor_div(k)
Enum.chunk_every(list, chunk_size, chunk_size, [])
end
end
defp into_buckets(list, k, chunk_size) do
chunks = Enum.chunk_every(list, chunk_size, chunk_size, [])
extra_buckets = Enum.take(chunks, -(Enum.count(chunks) - k))
k_buckets = chunks -- extra_buckets
{extra_buckets, k_buckets}
end
@doc """
Divides a list into `k` distinct sub-lists, with partitions as close to the same size as possible.
### Examples
```
iex> Morphix.partiphify([1,2,3,4,5,6], 4)
{:ok, [[3], [4], [5, 1], [6, 2]]}
iex> Morphix.partiphify(("abcdefghijklmnop" |> String.split("", trim: true)), 4)
{:ok, [["a", "b", "c", "d"], ["e", "f", "g", "h"], ["i", "j", "k", "l"], ["m", "n", "o", "p"]]}
```
"""
def partiphify(list, k) do
{:ok, partiphify!(list, k)}
rescue
e -> {:error, e}
end
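# Round-robin helper: each item is prepended to the bucket at the head of
# the list, and that bucket is rotated to the back, keeping bucket sizes
# as even as possible.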
defp distribute(list, buckets) do
Enum.reduce(list, buckets, fn item, buckets ->
[current_bucket | rest_of_buckets] = buckets
new_bucket = [item | current_bucket]
rest_of_buckets ++ [new_bucket]
end)
end
defp distribute_extra({lists, buckets}) do
with false <- Enum.empty?(lists) do
[current_list | rest] = lists
new_buckets = distribute(current_list, buckets)
distribute_extra({rest, new_buckets})
else
_ -> buckets
end
end
defp empty_map(map) do
is_map(map) && not Map.has_key?(map, :__struct__) && Enum.empty?(map)
end
if !macro_exported?(Kernel, :is_struct, 1) do
defp is_struct(s), do: is_map(s) and Map.has_key?(s, :__struct__)
end
end
|
lib/morphix.ex
| 0.919953
| 0.902524
|
morphix.ex
|
starcoder
|
defmodule Phoenix.Digester do
@digested_file_regex ~r/(-[a-fA-F\d]{32})/
@moduledoc """
Digests and compresses static files.
For each file under the given input path, Phoenix will generate a digest
and also compress it in `.gz` format. The filename and its digest will be
used to generate the manifest file. It also avoids duplication by checking
for already digested files.
"""
@doc """
Digests and compresses the static files and saves them in the given output path.
* `input_path` - The path where the assets are located
* `output_path` - The path where the compiled/compressed files will be saved
"""
@spec compile(String.t, String.t) :: :ok | {:error, :invalid_path}
def compile(input_path, output_path) do
if File.exists?(input_path) do
unless File.exists?(output_path), do: File.mkdir_p!(output_path)
input_path
|> filter_files
|> do_compile(output_path)
|> generate_manifest(output_path)
:ok
else
{:error, :invalid_path}
end
end
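# A hedged usage sketch (the paths are illustrative, not part of this
# module): digest the assets under one directory into another. A missing
# input path returns {:error, :invalid_path}.
#
#     :ok = Phoenix.Digester.compile("priv/static", "priv/static_digested")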
defp filter_files(input_path) do
input_path
|> Path.join("**")
|> Path.wildcard
|> Enum.filter(&(!File.dir?(&1) && !compiled_file?(&1)))
|> Enum.map(&(map_file(&1, input_path)))
end
defp do_compile(files, output_path) do
Enum.map(files, fn (file) ->
file
|> digest
|> compress
|> write_to_disk(output_path)
end)
end
defp generate_manifest(files, output_path) do
entries = Enum.reduce(files, %{}, fn (file, acc) ->
Map.put(acc, manifest_join(file.relative_path, file.filename),
manifest_join(file.relative_path, file.digested_filename))
end)
manifest_content = Poison.encode!(entries, [])
File.write!(Path.join(output_path, "manifest.json"), manifest_content)
end
defp manifest_join(".", filename), do: filename
defp manifest_join(path, filename), do: Path.join(path, filename)
defp compiled_file?(file_path) do
Regex.match?(@digested_file_regex, Path.basename(file_path)) ||
Path.extname(file_path) == ".gz" ||
Path.basename(file_path) == "manifest.json"
end
defp map_file(file_path, input_path) do
%{absolute_path: file_path,
relative_path: Path.relative_to(file_path, input_path) |> Path.dirname,
filename: Path.basename(file_path),
content: File.read!(file_path)}
end
defp compress(file) do
Map.put(file, :compressed_content, :zlib.gzip(file.content))
end
defp digest(file) do
name = Path.rootname(file.filename)
extension = Path.extname(file.filename)
digest = Base.encode16(:erlang.md5(file.content), case: :lower)
Map.put(file, :digested_filename, "#{name}-#{digest}#{extension}")
end
defp write_to_disk(file, output_path) do
path = Path.join(output_path, file.relative_path)
File.mkdir_p!(path)
# compressed files
File.write!(Path.join(path, file.digested_filename <> ".gz"), file.compressed_content)
File.write!(Path.join(path, file.filename <> ".gz"), file.compressed_content)
# uncompressed files
File.write!(Path.join(path, file.digested_filename), file.content)
File.write!(Path.join(path, file.filename), file.content)
file
end
end
|
lib/phoenix/digester.ex
| 0.658308
| 0.526951
|
digester.ex
|
starcoder
|
defmodule Sanbase.Math do
require Integer
@epsilon 1.0e-6
def round_float(f) when is_float(f) and (f >= 1 or f <= -1), do: Float.round(f, 2)
def round_float(f) when is_float(f) and f >= 0 and f <= @epsilon, do: 0.0
def round_float(f) when is_float(f) and f < 0 and f >= -@epsilon, do: 0.0
def round_float(f) when is_float(f), do: Float.round(f, 6)
def round_float(i) when is_integer(i), do: round(i * 1.0)
@doc ~s"""
Calculate the % change that occurred between the first and the second arguments
## Examples
iex> Sanbase.Math.percent_change(1.0, 2.0)
100.0
iex> Sanbase.Math.percent_change(1.0, 1.05)
5.0
iex> Sanbase.Math.percent_change(0, 2.0)
0.0
iex> Sanbase.Math.percent_change(2.0, 1.0)
-50.0
iex> Sanbase.Math.percent_change(2.0, 0.0)
-100.0
iex> Sanbase.Math.percent_change(2.0, -1)
-150.0
iex> Sanbase.Math.percent_change(10.0, 10.0)
0.0
"""
def percent_change(0, _current), do: 0.0
def percent_change(nil, _current), do: 0.0
def percent_change(_previous, nil), do: 0.0
def percent_change(previous, _current)
when is_number(previous) and previous <= @epsilon,
do: 0.0
def percent_change(previous, current) when is_number(previous) and is_number(current) do
((current / previous - 1) * 100)
|> Float.round(2)
end
@spec percent_of(number(), number(), Keyword.t()) :: number() | nil
def percent_of(part, whole, opts \\ [])
def percent_of(part, whole, opts)
when is_number(part) and is_number(whole) and part >= 0 and whole > 0 and whole >= part do
result =
case Keyword.get(opts, :type, :between_0_and_100) do
:between_0_and_1 ->
part / whole
:between_0_and_100 ->
part / whole * 100
end
case Keyword.get(opts, :precision) do
precision when is_integer(precision) and precision >= 0 ->
Float.floor(result, precision)
nil ->
result
end
end
def percent_of(_, _, _), do: nil
@doc ~S"""
Integer power function. Erlang's `:math` uses floating point numbers.
Sometimes the result is needed as an Integer and not as a Float (e.g. for use in Decimal.div/1)
and it's inconvenient to pollute the code with `round() |> trunc()`.
## Examples
iex> Sanbase.Math.ipow(2,2)
4
iex> Sanbase.Math.ipow(-2,2)
4
iex> Sanbase.Math.ipow(-2,3)
-8
iex> Sanbase.Math.ipow(1231232,0)
1
iex> Sanbase.Math.ipow(2,500)
3273390607896141870013189696827599152216642046043064789483291368096133796404674554883270092325904157150886684127560071009217256545885393053328527589376
iex> Sanbase.Math.ipow(10,18)
1_000_000_000_000_000_000
"""
def ipow(base, exp) when is_integer(base) and is_integer(exp) and exp >= 0 do
do_ipow(base, exp)
end
defp do_ipow(_, 0), do: 1
defp do_ipow(x, 1), do: x
defp do_ipow(x, n) when Integer.is_odd(n) do
x * ipow(x, n - 1)
end
defp do_ipow(x, n) do
result = do_ipow(x, div(n, 2))
result * result
end
@doc ~S"""
Convert strings, floats, decimals or integers to integers
## Examples
iex> Sanbase.Math.to_integer("2")
2
iex> Sanbase.Math.to_integer(2.3)
2
iex> Sanbase.Math.to_integer(2.5)
3
iex> Sanbase.Math.to_integer(2.8)
3
iex> Sanbase.Math.to_integer(2.0)
2
iex> Sanbase.Math.to_integer(Decimal.new(2))
2
iex> Sanbase.Math.to_integer(500)
500
"""
def to_integer(x, default_when_nil \\ nil)
def to_integer(nil, default_when_nil), do: default_when_nil
def to_integer(x, _) when is_integer(x), do: x
def to_integer(f, _) when is_float(f), do: f |> round() |> trunc()
def to_integer(%Decimal{} = d, _), do: d |> Decimal.round() |> Decimal.to_integer()
def to_integer(str, _) when is_binary(str) do
case String.trim(str) |> Integer.parse() do
{integer, _} ->
integer
:error ->
{:error, "Cannot parse an integer from #{str}"}
end
end
@doc ~S"""
Convert a string that potentially contains trailing non-digit symbols to an integer
## Examples
iex> Sanbase.Math.str_to_integer_safe("2asd")
2
iex> Sanbase.Math.str_to_integer_safe("222")
222
"""
def str_to_integer_safe(str) do
case Integer.parse(str) do
{num, _rest} -> num
:error -> nil
end
end
@doc ~S"""
Convert strings, floats, decimals or integers to floats
## Examples
iex> Sanbase.Math.to_float("2")
2.0
iex> Sanbase.Math.to_float(2.3)
2.3
iex> Sanbase.Math.to_float(2.5)
2.5
iex> Sanbase.Math.to_float(2.8)
2.8
iex> Sanbase.Math.to_float(2.0)
2.0
iex> Sanbase.Math.to_float(Decimal.new(2))
2.0
iex> Sanbase.Math.to_float(500)
500.0
"""
def to_float(data, default_when_nil \\ nil)
def to_float(nil, default_when_nil), do: default_when_nil
def to_float(fl, _) when is_float(fl), do: fl
def to_float(int, _) when is_integer(int), do: int * 1.0
def to_float(%Decimal{} = d, _) do
d |> Decimal.to_float()
end
def to_float(str, _) when is_binary(str) do
{num, _} = str |> Float.parse()
num
end
@doc ~s"""
Find the min and max in a list in a single pass. The result is returned
as a tuple `{min, max}` or `nil` if the list is empty
## Examples
iex> Sanbase.Math.min_max([1,2,3,-1,2,1])
{-1, 3}
iex> Sanbase.Math.min_max([:a])
{:a, :a}
iex> Sanbase.Math.min_max([])
nil
"""
def min_max([]), do: nil
def min_max([h | rest]) do
rest
|> Enum.reduce({h, h}, fn
elem, {min, max} when elem < min -> {elem, max}
elem, {min, max} when elem > max -> {min, elem}
_, acc -> acc
end)
end
def average(list, opts \\ [])
def average([], _), do: 0
def average(values, opts),
do: Float.round(Enum.sum(values) / length(values), Keyword.get(opts, :precision, 2))
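# Median of a list: sorts, then takes the middle element for odd lengths,
# or the average of the two middle elements for even lengths, e.g.
# median([1, 2, 3]) #=> 2 and median([1, 2, 3, 4]) #=> 2.5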
def median([]), do: nil
def median(list) when is_list(list) do
list = Enum.sort(list)
midpoint =
(length(list) / 2)
|> Float.floor()
|> round
{l1, l2} = list |> Enum.split(midpoint)
# l2 is the same length as l1 or 1 element bigger as the midpoint is floored
case length(l2) > length(l1) do
true ->
[med | _] = l2
med
false ->
[m1 | _] = l2
m2 = List.last(l1)
average([m1, m2])
end
end
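# Simple moving average over a sliding window of `period` elements,
# discarding incomplete trailing windows. A brief sketch:
#
#     Sanbase.Math.simple_moving_average([1, 2, 3, 4], 2)
#     #=> [1.5, 2.5, 3.5]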
def simple_moving_average(values, period) do
values
|> Enum.chunk_every(period, 1, :discard)
|> Enum.map(&average/1)
end
def simple_moving_average(list, period, opts) do
value_key = Keyword.fetch!(opts, :value_key)
result =
list
|> Enum.chunk_every(period, 1, :discard)
|> Enum.map(fn elems ->
datetime = Map.get(List.last(elems), :datetime)
values = Enum.map(elems, & &1[value_key])
%{
value_key => average(values),
:datetime => datetime
}
end)
{:ok, result}
end
end
|
lib/sanbase/utils/math.ex
| 0.842637
| 0.615117
|
math.ex
|
starcoder
|
defmodule CRC do
@moduledoc """
This module is used to calculate CRC (Cyclic Redundancy Check) values
for binary data. It uses NIF functions written in C to iterate over
the given binary calculating the CRC checksum value.
CRC implementations have been tested against these online calculators to
validate their correctness to the best of our ability.
https://www.lammertbies.nl/comm/info/crc-calculation.html
http://www.sunshine2k.de/coding/javascript/crc/crc_js.html
"""
@doc """
Calculate a CRC checksum for the `input` based on the crc `params` given.
`params` can be an atom for one of the compiled models (see `CRC.list/0` for
a full list) or a Map with parameters to create a model at runtime. The map
given should have all of the following keys:
`width` - (unsigned integer) the width of the CRC in bits
`poly` - (unsigned integer) the polynomial used for the CRC calculation
`init` - (unsigned integer) the initial value used when starting the calculation
`refin` - (boolean) whether the input value should be reflected. This is used for changing endianness
`refout` - (boolean) whether the output value should be reflected when the calculation is completed
`xorout` - (unsigned integer) final xor value used when completing the CRC calculation
## Examples
%{
width: 16,
poly: 0x1021,
init: 0x00,
refin: false,
refout: false,
xorout: 0x00
}
You can also extend one of the compiled models at runtime by creating a map
with `extend` key set to the model you wish to extend and the keys you wish
to override for that model.
For example to override the initial value for the `:crc_16_ccitt_false` model
to `0x1D0F` you would pass the following Map as params:
`%{extend: :crc_16_ccitt_false, init: 0x1D0F}`
You can learn more about CRC calculation here:
https://www.sunshine2k.de/articles/coding/crc/understanding_crc.html
"""
@spec crc(:crc_algorithm.params(), iodata()) :: :crc_algorithm.value()
defdelegate crc(params, input), to: :crc
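# A hedged single-shot sketch using a compiled model atom: the standard
# check value of CRC-16/CCITT-FALSE over the conventional test input
# "123456789" is 0x29B1.
#
#     CRC.crc(:crc_16_ccitt_false, "123456789")
#     #=> 0x29B1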
@doc """
Initialize a resource to be used for doing CRC calculations. The returned
resource can be used with `crc/2` or `crc_update/2` to calculate CRC checksums.
Resource is created using the same `params` types that are used with `crc/2`:
- atom's for compiled models
- Map with model values
- Map to extend a compiled model.
If used with `crc/2` the returned resource can be re-used multiple times, but
using a map or atom for a compiled model will likely be slightly more
performant.
When using with `crc_update/2` a new resource will be returned with every
call that should be used to continue the calculation.
"""
@spec crc_init(:crc_algorithm.params()) :: :crc_algorithm.resource()
defdelegate crc_init(params), to: :crc
@doc """
Begins or continues a multi-part CRC calculation.
Takes a `resource` from result of `crc_init/1` or previous `crc_update/2`
call, and binary `input`, returns a new `resource` to be used to continue or
finalize the CRC calculation.
"""
@spec crc_update(:crc_algorithm.resource(), iodata()) :: :crc_algorithm.resource()
defdelegate crc_update(resource, input), to: :crc
@doc """
Takes a `resource` result from `crc_update/2` and finalizes the multi-part
CRC calculation.
"""
@spec crc_final(:crc_algorithm.resource()) :: :crc_algorithm.value()
defdelegate crc_final(resource), to: :crc
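# Hedged multi-part sketch: feeding the input in chunks yields the same
# value as a single crc/2 call over the whole binary.
#
#     resource = CRC.crc_init(:crc_16_ccitt_false)
#     resource = CRC.crc_update(resource, "1234")
#     resource = CRC.crc_update(resource, "56789")
#     CRC.crc_final(resource)
#     #=> 0x29B1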
@doc """
Calculate a CRC checksum for the `input` based on the crc `params` given.
See `CRC.crc/2` for details on valid `params`.
This function has the parameter order reversed to allow easier use with
pipelines, allowing code to be written like:
## Examples
read_data() |> CRC.calculate(:crc_16) |> do_something()
"""
@spec calculate(iodata(), :crc_algorithm.params()) :: :crc_algorithm.value()
def calculate(input, params) do
:crc.crc(params, input)
end
@doc """
Returns a list of all the compiled CRC models.
"""
@spec list() :: [{atom, String.t}]
def list() do
:crc_nif.crc_list()
|> Map.to_list()
|> Enum.map(fn {model, map} -> {model, Map.get(map, model)} end)
end
@doc """
Returns a list of all compiled CRC Models that match the filter given.
Filter is compiled into a regular expression and matched against the model name
and description.
"""
@spec list(binary) :: [{atom, String.t}]
def list(filter) do
list()
|> Enum.filter(&(list_filter(&1, filter)))
end
defp list_filter({model_atom, model_name}, filter) do
atom_string = Atom.to_string(model_atom)
{:ok, rfilter} = Regex.compile(filter)
Regex.match?(rfilter, atom_string) or Regex.match?(rfilter, model_name)
end
use CRC.Legacy
end
|
lib/crc.ex
| 0.941399
| 0.831485
|
crc.ex
|
starcoder
|
defmodule Artheon.Artist do
use Artheon.Web, :model
@gender_female 0
@gender_male 1
@gender_unknown 2
schema "artists" do
field :uid, :string
field :name, :string
field :slug, :string
field :nationality, :string
field :birthday, :integer
field :gender, :integer
field :hometown, :string
field :location, :string
field :created_at, Ecto.DateTime
field :updated_at, Ecto.DateTime
has_many :artworks, Artheon.Artwork
end
@editable_fields [
:uid,
:slug,
:name,
:nationality,
:birthday,
:gender,
:hometown,
:location,
:created_at,
:updated_at
]
@required_fields [
:uid,
:slug,
:name
]
@doc """
Builds a changeset based on the `struct` and `params`.
"""
def changeset(struct), do: changeset(struct, %{})
def changeset(struct, %{
"gender" => gender,
"birthday" => birthday,
"created_at" => created_at,
"updated_at" => updated_at,
} = params) when is_bitstring(gender) do
artist_params = params
|> Map.drop(["gender", "birthday", "created_at", "updated_at"])
|> Map.put("gender", get_gender(gender))
|> Map.put("birthday", parse_birthdate(birthday))
|> Map.put("created_at", to_ecto_datetime(created_at))
|> Map.put("updated_at", to_ecto_datetime(updated_at))
changeset(struct, artist_params)
end
def changeset(struct, params) do
struct
|> cast(params, @editable_fields)
|> validate_required(@required_fields)
|> unique_constraint(:uid)
|> unique_constraint(:slug)
end
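# Extracts a four-digit year from a free-form date string, e.g.
# parse_birthdate("born 1987") #=> 1987; returns nil when no year is found.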
@spec parse_birthdate(String.t() | integer()) :: integer() | nil
defp parse_birthdate(date) when is_integer(date), do: date
defp parse_birthdate(date) when byte_size(date) >= 4 do
with [year] <- Regex.run(~r/\d{4}/, date) do
String.to_integer(year)
else
_ ->
nil
end
end
defp parse_birthdate(_date), do: nil
@spec get_gender(String.t) :: integer()
defp get_gender("male"), do: @gender_male
defp get_gender("female"), do: @gender_female
defp get_gender(_), do: @gender_unknown
end
|
web/models/artist.ex
| 0.5
| 0.463505
|
artist.ex
|
starcoder
|
defmodule EdgeDB.Protocol.Enum do
alias EdgeDB.Protocol.Datatypes
@callback to_atom(integer() | atom()) :: atom()
@callback to_code(atom() | integer()) :: integer()
@callback encode(term()) :: iodata()
@callback decode(bitstring()) :: {term(), bitstring()}
defmacro __using__(_opts \\ []) do
quote do
import EdgeDB.Protocol.Converters
import unquote(__MODULE__)
end
end
defmacro defenum(opts) do
values = Keyword.fetch!(opts, :values)
union? = Keyword.get(opts, :union, false)
typespec_def = define_typespec(values, union?)
guard_name = Keyword.get(opts, :guard)
codes = Keyword.values(values)
guard_def = define_guard(guard_name, codes)
to_atom_funs_def = define_to_atom_funs(values)
to_code_funs_def = define_to_code_funs(values)
datatype_codec = Keyword.get(opts, :datatype, Datatypes.UInt8)
datatype_codec_access_fun_def = define_datatype_codec_access_fun(datatype_codec)
encoder_def = define_enum_encoder(datatype_codec)
decoder_def = define_enum_decoder(datatype_codec)
quote do
@behaviour unquote(__MODULE__)
unquote(typespec_def)
if not is_nil(unquote(guard_name)) do
unquote(guard_def)
end
unquote(datatype_codec_access_fun_def)
unquote(to_atom_funs_def)
unquote(to_code_funs_def)
unquote(encoder_def)
unquote(decoder_def)
defoverridable encode: 1, decode: 1
end
end
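# A hedged usage sketch of `defenum/1` (the module name and values are
# illustrative, not taken from the EdgeDB protocol):
#
#     defmodule MyEnum do
#       use EdgeDB.Protocol.Enum
#
#       defenum(values: [ok: 0, error: 1], guard: :is_my_enum_code)
#     end
#
#     MyEnum.to_atom(1)   #=> :error
#     MyEnum.to_code(:ok) #=> 0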
defp define_typespec(values, union) do
main_spec =
Enum.reduce(values, nil, fn
{name, code}, nil ->
quote do
unquote(name) | unquote(code)
end
{name, code}, acc ->
quote do
unquote(acc) | unquote(name) | unquote(code)
end
end)
if union do
quote do
@type t() :: list(unquote(main_spec))
end
else
quote do
@type t() :: unquote(main_spec)
end
end
end
defp define_guard(guard_name, codes) do
quote do
defguard unquote(guard_name)(code) when code in unquote(codes)
end
end
defp define_to_atom_funs(values) do
for {name, code} <- values do
quote do
@spec to_atom(unquote(code) | unquote(name)) :: unquote(name)
def to_atom(unquote(code)) do
unquote(name)
end
def to_atom(unquote(name)) do
unquote(name)
end
end
end
end
defp define_to_code_funs(values) do
for {name, code} <- values do
quote do
@spec to_code(unquote(code) | unquote(name)) :: unquote(code)
def to_code(unquote(code)) do
unquote(code)
end
def to_code(unquote(name)) do
unquote(code)
end
end
end
end
defp define_datatype_codec_access_fun(codec) do
quote do
@spec enum_codec() :: module()
def enum_codec do
unquote(codec)
end
end
end
defp define_enum_encoder(codec) do
quote do
@spec encode(t()) :: iodata()
def encode(value) do
value
|> to_code()
|> unquote(codec).encode()
end
end
end
defp define_enum_decoder(codec) do
quote do
@spec decode(bitstring()) :: {t(), bitstring()}
def decode(<<content::binary>>) do
{code, rest} = unquote(codec).decode(content)
{to_atom(code), rest}
end
end
end
end
|
lib/edgedb/protocol/enum.ex
| 0.677581
| 0.451508
|
enum.ex
|
starcoder
|
defmodule Membrane.Caps.Matcher do
@moduledoc """
Module that allows specifying valid caps and verifying that they match a specification.
Caps specifications (specs) should be in one of the formats:
* simply module name of the desired caps (e.g. `Membrane.Caps.Audio.Raw` or `Raw` with proper alias)
* tuple with module name and keyword list of specs for specific caps fields (e.g. `{Raw, format: :s24le}`)
* list of the formats described above
Field values can be specified in the following ways:
* By a raw value for the field (e.g. `:s24le`)
* Using `range/2` for values comparable with `Kernel.<=/2` and `Kernel.>=/2` (e.g. `Matcher.range(0, 255)`)
* With `one_of/1` and a list of valid values (e.g `Matcher.one_of([:u8, :s16le, :s32le])`)
Checks on the values from the list are performed recursively, i.e. it can contain another `range/2`,
for example `Matcher.one_of([0, Matcher.range(2, 4), Matcher.range(10, 20)])`
If the specs are defined inside of `Membrane.Element.Base.Mixin.SinkBehaviour.def_input_pads/1` and
`Membrane.Element.Base.Mixin.SourceBehaviour.def_output_pads/1`, the module name can be omitted from
`range/2` and `one_of/1` calls.
"""
import Kernel, except: [match?: 2]
require Record
alias Bunch
@type caps_spec_t :: module() | {module(), keyword()}
@type caps_specs_t :: :any | caps_spec_t() | [caps_spec_t()]
defmodule Range do
@moduledoc false
@enforce_keys [:min, :max]
defstruct @enforce_keys
end
@opaque range_t :: %Range{min: any, max: any}
defimpl Inspect, for: Range do
import Inspect.Algebra
@impl true
def inspect(%Range{min: min, max: max}, opts) do
concat(["range(", to_doc(min, opts), ", ", to_doc(max, opts), ")"])
end
end
defmodule OneOf do
@moduledoc false
@enforce_keys [:list]
defstruct @enforce_keys
end
@opaque one_of_t :: %OneOf{list: list()}
defimpl Inspect, for: OneOf do
import Inspect.Algebra
@impl true
def inspect(%OneOf{list: list}, opts) do
concat(["in(", to_doc(list, opts), ")"])
end
end
@doc """
Returns opaque specification of range of valid values for caps field.
"""
@spec range(any, any) :: range_t()
def range(min, max) do
%Range{min: min, max: max}
end
@doc """
Returns opaque specification of list of valid values for caps field.
"""
@spec one_of(list()) :: one_of_t()
def one_of(values) when is_list(values) do
%OneOf{list: values}
end
@doc """
Function used to make sure caps specs are valid.
In particular, valid caps:
* Have shape described by `t:caps_specs_t/0` type
* If they contain a keyword list, the keys are present in the requested caps type
It returns `:ok` when caps are valid and `{:error, reason}` otherwise
"""
@spec validate_specs(caps_specs_t() | any()) :: :ok | {:error, reason :: tuple()}
def validate_specs(specs_list) when is_list(specs_list) do
specs_list |> Bunch.Enum.try_each(&validate_specs/1)
end
def validate_specs({type, keyword_specs}) do
caps = type.__struct__
caps_keys = caps |> Map.from_struct() |> Map.keys() |> MapSet.new()
spec_keys = keyword_specs |> Keyword.keys() |> MapSet.new()
if MapSet.subset?(spec_keys, caps_keys) do
:ok
else
invalid_keys = spec_keys |> MapSet.difference(caps_keys) |> MapSet.to_list()
{:error, {:invalid_keys, type, invalid_keys}}
end
end
def validate_specs(specs) when is_atom(specs), do: :ok
def validate_specs(specs), do: {:error, {:invalid_specs, specs}}
@doc """
Function determining whether the caps match provided specs.
When `:any` is used as specs, caps can be anything (i.e. they can be invalid)
"""
@spec match?(:any, any()) :: true
@spec match?(caps_specs_t(), struct()) :: boolean()
def match?(:any, _), do: true
def match?(specs, %_{} = caps) when is_list(specs) do
specs |> Enum.any?(fn spec -> match?(spec, caps) end)
end
def match?({type, keyword_specs}, %caps_type{} = caps) do
type == caps_type && keyword_specs |> Enum.all?(fn kv -> kv |> match_caps_entry(caps) end)
end
def match?(type, %caps_type{}) when is_atom(type) do
type == caps_type
end
defp match_caps_entry({spec_key, spec_value}, %{} = caps) do
{:ok, caps_value} = caps |> Map.fetch(spec_key)
match_value(spec_value, caps_value)
end
defp match_value(%OneOf{list: specs}, value) when is_list(specs) do
specs |> Enum.any?(fn spec -> match_value(spec, value) end)
end
defp match_value(%Range{min: min, max: max}, value) do
min <= value && value <= max
end
defp match_value(spec, value) do
spec == value
end
end
|
lib/membrane/caps/matcher.ex
| 0.908456
| 0.714404
|
matcher.ex
|
starcoder
|
defmodule BreakingPP.Model.Cluster do
alias BreakingPP.Model.{Node, Session}
defstruct [
started_nodes: [],
stopped_nodes: [],
sessions: [],
splits: MapSet.new()
]
@type session_id :: {Node.t, String.t}
@type split :: {Node.t, Node.t}
@type t :: %__MODULE__{
started_nodes: [Node.t],
stopped_nodes: [Node.t],
sessions: [Session.t],
splits: MapSet.t(split)}
def new, do: %__MODULE__{}
def started_nodes(%__MODULE__{started_nodes: ns}), do: ns
def stopped_nodes(%__MODULE__{stopped_nodes: ns}), do: ns
def sessions(%__MODULE__{sessions: sessions}), do: sessions
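# Sessions as seen from `node`: sessions hosted on a node that is split
# from `node` are filtered out.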
def sessions(%__MODULE__{sessions: sessions, splits: splits}, node) do
Enum.reject(sessions, fn s ->
MapSet.member?(splits, {Session.node(s), node})
end)
end
def start_node(%__MODULE__{}=cluster, node) do
start_nodes(cluster, [node])
end
def start_nodes(%__MODULE__{}=cluster, nodes) do
%{cluster |
started_nodes: nodes ++ cluster.started_nodes,
stopped_nodes: cluster.stopped_nodes -- nodes
}
end
def stop_node(%__MODULE__{}=cluster, node) do
%{cluster |
stopped_nodes: [node|cluster.stopped_nodes],
started_nodes: List.delete(cluster.started_nodes, node),
sessions:
Enum.reject(cluster.sessions, fn s -> Session.node(s) == node end)
}
end
def add_sessions(%__MODULE__{}=cluster, sessions) do
%{cluster | sessions: sessions ++ cluster.sessions}
end
def remove_sessions(%__MODULE__{}=cluster, sessions) do
%{cluster | sessions: cluster.sessions -- sessions}
end
def split(%__MODULE__{}=cluster, node1, node2) do
splits =
cluster.splits
|> MapSet.put({node1, node2})
|> MapSet.put({node2, node1})
%{cluster | splits: splits}
end
def join(%__MODULE__{}=cluster, node1, node2) do
splits =
cluster.splits
|> MapSet.delete({node1, node2})
|> MapSet.delete({node2, node1})
%{cluster | splits: splits}
end
def node_stopped?(%__MODULE__{stopped_nodes: ns}, node) do
Enum.member?(ns, node)
end
def node_started?(%__MODULE__{started_nodes: ns}, node) do
Enum.member?(ns, node)
end
def split_between?(%__MODULE__{splits: splits}, node1, node2) do
MapSet.member?(splits, {node1, node2})
end
end
|
lib/breaking_pp/model/cluster.ex
| 0.691602
| 0.644777
|
cluster.ex
|
starcoder
|
defmodule Ecto.Pool do
@moduledoc """
Behaviour for using a pool of connections.
"""
@typedoc """
A pool process
"""
@type t :: atom | pid
@typedoc """
Opaque connection reference.
Use inside `run/4` and `transaction/4` to retrieve the connection module and
pid or break the transaction.
"""
@opaque ref :: {__MODULE__, module, t}
@typedoc """
The depth of nested transactions.
"""
@type depth :: non_neg_integer
@typedoc """
The time in microseconds spent waiting for a connection from the pool.
"""
@type queue_time :: non_neg_integer
@doc """
Start a pool of connections.
`module` is the connection module, which should define the
`Ecto.Adapters.Connection` callbacks, and `opts` are its (and the pool's)
options.
A pool should support the following options:
* `:name` - The name of the pool
* `:pool_size` - The number of connections to keep in the pool
Returns `{:ok, pid}` on starting the pool.
Returns `{:error, reason}` if the pool could not be started. If the `reason`
is `{:already_started, pid}`, a pool with the same name has already been
started.
"""
@callback start_link(module, opts) ::
{:ok, pid} | {:error, any} when opts: Keyword.t
@doc """
Checkout a worker/connection from the pool.
The connection should not be closed if the calling process exits without
returning the connection.
Returns `{:ok, worker, conn, queue_time}` on success, where `worker` is the
worker term and `conn` is a 2-tuple containing the connection's module and
pid. The `conn` tuple can be retrieved inside a `transaction/4` with
`connection/1`.
Returns `{:error, :noproc}` if the pool is not alive and
`{:error, :noconnect}` if a connection is not available.
"""
@callback checkout(t, timeout) ::
{:ok, worker, conn, queue_time} |
{:error, :noproc | :noconnect} when worker: any, conn: {module, pid}
@doc """
Checkin a worker/connection to the pool.
Called when the top level `run/4` finishes, if `break/2` was not called
inside the fun.
"""
@callback checkin(t, worker, timeout) :: :ok when worker: any
@doc """
Break the current transaction or run.
Called when the function has failed and the connection should no longer be
available to the calling process.
"""
@callback break(t, worker, timeout) :: :ok when worker: any
@doc """
Open a transaction with a connection from the pool.
The connection should be closed if the calling process exits without
returning the connection.
Returns `{:ok, worker, conn, queue_time}` on success, where `worker` is the
worker term and `conn` is a 2-tuple containing the connection's module and
pid. The `conn` tuple can be retrieved inside a `transaction/4` with
`connection/2`.
Returns `{:error, :noproc}` if the pool is not alive and
`{:error, :noconnect}` if a connection is not available.
"""
@callback open_transaction(t, timeout) ::
{:ok, worker, conn, queue_time} |
{:error, :noproc | :noconnect} when worker: any, conn: {module, pid}
@doc """
Close the transaction and signal to the worker the work with the connection
is complete.
Called once the transaction at `depth` `1` is finished, if the transaction
is not broken with `break/2`.
"""
@callback close_transaction(t, worker, timeout) :: :ok when worker: any
@doc """
Runs a fun using a connection from a pool.
The connection will be taken from the pool unless we are inside
a `transaction/4` which, in this case, would already have a conn
attached to it.
Returns the value returned by the function wrapped in a tuple
as `{:ok, value}`.
Returns `{:error, :noproc}` if the pool is not alive or
`{:error, :noconnect}` if no connection is available.
## Examples
Pool.run(mod, pool, timeout,
fn(_conn, queue_time) -> queue_time end)
Pool.transaction(mod, pool, timeout,
fn(:opened, _ref, _conn, _queue_time) ->
{:ok, :nested} =
Pool.run(mod, pool, timeout, fn(_conn, nil) ->
:nested
end)
end)
"""
@spec run(module, t, timeout, ((conn, queue_time | nil) -> result)) ::
{:ok, result} | {:error, :noproc | :noconnect}
when result: var, conn: {module, pid}
def run(pool_mod, pool, timeout, fun) do
ref = {__MODULE__, pool_mod, pool}
case Process.get(ref) do
nil ->
do_run(pool_mod, pool, timeout, fun)
%{conn: conn, tainted: false} ->
{:ok, fun.(conn, nil)}
%{} ->
{:error, :noconnect}
end
end
defp do_run(pool_mod, pool, timeout, fun) do
case checkout(pool_mod, pool, timeout) do
{:ok, worker, conn, time} ->
try do
{:ok, fun.(conn, time)}
after
pool_mod.checkin(pool, worker, timeout)
end
{:error, _} = error ->
error
end
end
defp checkout(pool_mod, pool, timeout) do
case pool_mod.checkout(pool, timeout) do
{:ok, _worker, _conn, _time} = ok ->
ok
{:error, reason} = error when reason in [:noproc, :noconnect] ->
error
{:error, err} ->
raise err
end
end
@doc """
Carries out a transaction using a connection from a pool.
Once a transaction is opened, all following calls to `run/4` or
`transaction/4` will use the same connection/worker. If `break/2` is invoked,
all operations will return `{:error, :noconnect}` until the end of the
top level transaction.
Nested calls to pool transaction will "flatten out" transactions. This means
nested calls are mostly a no-op and just execute the given function passing
`:already_open` as the first argument. If there is any failure in a nested
transaction, the whole transaction is marked as tainted, ensuring the
outermost call fails.
Returns `{:error, :noproc}` if the pool is not alive, `{:error, :noconnect}`
if no connection is available. Otherwise just returns the given function value
without wrapping.
## Examples
Pool.transaction(mod, pool, timeout,
fn(:opened, _ref, _conn, queue_time) -> queue_time end)
Pool.transaction(mod, pool, timeout,
fn(:opened, ref, _conn, _queue_time) ->
:nested =
Pool.transaction(mod, pool, timeout, fn(:already_open, _ref, _conn, nil) ->
:nested
end)
end)
Pool.transaction(mod, :pool1, timeout,
fn(:opened, _ref1, _conn1, _queue_time1) ->
:different_pool =
Pool.transaction(mod, :pool2, timeout,
fn(:opened, _ref2, _conn2, _queue_time2) -> :different_pool end)
end)
"""
@spec transaction(module, t, timeout, fun) ::
value | {:error, :noproc} | {:error, :noconnect} | no_return
when fun: (:opened | :already_open, ref, conn, queue_time | nil -> value),
conn: {module, pid},
value: var
def transaction(pool_mod, pool, timeout, fun) do
ref = {__MODULE__, pool_mod, pool}
case Process.get(ref) do
nil ->
case pool_mod.open_transaction(pool, timeout) do
{:ok, worker, conn, time} ->
outer_transaction(ref, worker, conn, time, timeout, fun)
{:error, reason} = error when reason in [:noproc, :noconnect] ->
error
{:error, err} ->
raise err
end
%{conn: conn} ->
inner_transaction(ref, conn, fun)
end
end
defp outer_transaction(ref, worker, conn, time, timeout, fun) do
Process.put(ref, %{worker: worker, conn: conn, tainted: false})
try do
fun.(:opened, ref, conn, time)
catch
# If any error leaked, it should be a bug in Ecto.
kind, reason ->
stack = System.stacktrace()
break(ref, timeout)
:erlang.raise(kind, reason, stack)
else
res ->
close_transaction(ref, Process.get(ref), timeout)
res
after
Process.delete(ref)
end
end
defp inner_transaction(ref, conn, fun) do
try do
fun.(:already_open, ref, conn, nil)
catch
kind, reason ->
stack = System.stacktrace()
tainted(ref, true)
:erlang.raise(kind, reason, stack)
end
end
defp close_transaction({__MODULE__, pool_mod, pool}, %{conn: _, worker: worker}, timeout) do
pool_mod.close_transaction(pool, worker, timeout)
:ok
end
defp close_transaction(_, %{}, _) do
:ok
end
@doc """
Executes the given function giving it the ability to rollback.
Returns `{:ok, value}` if no transaction occurred,
`{:error, value}` if the user rolled back or
`{:raise, kind, error, stack}` in case there was a failure.
"""
@spec with_rollback(:opened | :already_open, ref, (() -> return)) ::
{:ok, return} | {:error, term} | {:raise, atom, term, Exception.stacktrace}
when return: var
def with_rollback(:opened, ref, fun) do
try do
value = fun.()
case Process.get(ref) do
%{tainted: true} -> {:error, :rollback}
%{tainted: false} -> {:ok, value}
end
catch
:throw, {:ecto_rollback, ^ref, value} ->
{:error, value}
kind, reason ->
stack = System.stacktrace()
{:raise, kind, reason, stack}
after
tainted(ref, false)
end
end
def with_rollback(:already_open, ref, fun) do
try do
{:ok, fun.()}
catch
:throw, {:ecto_rollback, ^ref, value} ->
tainted(ref, true)
{:error, value}
end
end
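# Hedged sketch: rolling back from inside a transaction body. `rollback/3`
# throws, and `with_rollback/3` catches the matching throw, turning it
# into an error tuple that the transaction returns.
#
#     Pool.transaction(mod, pool, timeout, fn status, ref, _conn, _time ->
#       Pool.with_rollback(status, ref, fn ->
#         Pool.rollback(mod, pool, :oops)
#       end)
#     end)
#     #=> {:error, :oops}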
@doc """
Triggers a rollback that is handled by `with_rollback/2`.
Raises if outside a transaction.
"""
def rollback(pool_mod, pool, value) do
ref = {__MODULE__, pool_mod, pool}
if Process.get(ref) do
throw {:ecto_rollback, ref, value}
else
raise "cannot call rollback outside of transaction"
end
end
defp tainted(ref, bool) do
map = Process.get(ref)
Process.put(ref, %{map | tainted: bool})
:ok
end
@doc """
Breaks the active connection.
Any attempt to use it afterwards will fail: calling `run/4` inside the same
transaction or run (at any depth) will return `{:error, :noconnect}`.
## Examples
Pool.transaction(mod, pool, timeout,
fn(:opened, ref, conn, _queue_time) ->
:ok = Pool.break(ref, timeout)
{:error, :noconnect} = Pool.run(mod, pool, timeout, fn _, _ -> :ok end)
end)
"""
@spec break(ref, timeout) :: :ok
def break({__MODULE__, pool_mod, pool} = ref, timeout) do
case Process.get(ref) do
%{conn: _, worker: worker} = info ->
_ = Process.put(ref, Map.delete(info, :conn))
pool_mod.break(pool, worker, timeout)
%{} ->
:ok
end
end
end
|
lib/ecto/pool.ex
| 0.899151
| 0.714911
|
pool.ex
|
starcoder
|
defmodule Bottle.Number do
@moduledoc """
Provides custom guards for numbers
"""
@doc """
Guard that passes when a number is 0 (including float 0.0)
## Examples
iex> is_zero(0)
true
iex> is_zero(0.0)
true
iex> is_zero(1)
false
"""
defguard is_zero(sub) when is_number(sub) and sub in [0, 0.0]
@doc """
Guard that passes when a number is a pos_integer
## Examples
iex> is_pos_integer(1)
true
iex> is_pos_integer(1.1)
false
iex> is_pos_integer(0)
false
iex> is_pos_integer(-1)
false
"""
defguard is_pos_integer(sub) when is_integer(sub) and sub > 0
@doc """
Guard that passes when a number is a pos_number
## Examples
iex> is_pos_number(1)
true
iex> is_pos_number(1.1)
true
iex> is_pos_number(0)
false
iex> is_pos_number(-1)
false
iex> is_pos_number(-1.1)
false
"""
defguard is_pos_number(sub) when is_number(sub) and sub > 0
@doc """
Guard that passes when a number is a non_neg_integer
## Examples
iex> is_non_neg_integer(1)
true
iex> is_non_neg_integer(1.1)
false
iex> is_non_neg_integer(0)
true
iex> is_non_neg_integer(-1)
false
"""
defguard is_non_neg_integer(sub) when is_integer(sub) and sub >= 0
@doc """
Guard that passes when a number is a non_neg_number
## Examples
iex> is_non_neg_number(1)
true
iex> is_non_neg_number(1.1)
true
iex> is_non_neg_number(0)
true
iex> is_non_neg_number(-1)
false
"""
defguard is_non_neg_number(sub) when is_number(sub) and sub >= 0
@doc """
Guard that passes when a number is a non_neg_float
## Examples
iex> is_non_neg_float(1)
false
iex> is_non_neg_float(1.1)
true
iex> is_non_neg_float(0.0)
true
iex> is_non_neg_float(0)
false
iex> is_non_neg_float(-1)
false
iex> is_non_neg_float(-1.1)
false
"""
defguard is_non_neg_float(sub) when is_float(sub) and sub >= 0
@doc """
Guard that passes for the float 0.0
## Examples
iex> is_zero_float(0.0)
true
iex> is_zero_float(0)
false
iex> is_zero_float(1)
false
iex> is_zero_float(1.1)
false
iex> is_zero_float(-1)
false
iex> is_zero_float(-1.1)
false
"""
defguard is_zero_float(sub) when is_float(sub) and sub == 0.0
@doc """
Guard that passes for any float that is not 0.0
## Examples
iex> is_non_zero_float(1.1)
true
iex> is_non_zero_float(-1.1)
true
iex> is_non_zero_float(0.0)
false
iex> is_non_zero_float(0)
false
iex> is_non_zero_float(1)
false
iex> is_non_zero_float(-1)
false
"""
defguard is_non_zero_float(sub) when is_float(sub) and sub != 0.0
@doc """
Guard that passes for any positive float
## Examples
iex> is_pos_float(1.1)
true
iex> is_pos_float(-1.1)
false
iex> is_pos_float(0.0)
false
iex> is_pos_float(0)
false
iex> is_pos_float(1)
false
iex> is_pos_float(-1)
false
"""
defguard is_pos_float(sub) when is_float(sub) and sub > 0.0
end
|
lib/bottle/number.ex
| 0.77437
| 0.414454
|
number.ex
|
starcoder
|
defmodule OMG.Performance do
@moduledoc """
OMG network performance tests. Provides general setup and utilities to do the perf tests.
"""
defmacro __using__(_opt) do
quote do
alias OMG.Performance
alias OMG.Performance.ByzantineEvents
alias OMG.Performance.ExtendedPerftest
alias OMG.Performance.Generators
import Performance, only: [timeit: 1]
require Performance
use OMG.Utils.LoggerExt
:ok
end
end
@doc """
Sets up the `OMG.Performance` machinery with the required config. Uses some default values, overridable via:
- `opts`
- system env (some entries)
- `config.exs`
in that order of preference. The configuration chosen is put into `Application`'s environment
Options:
- :ethereum_rpc_url - URL of the Ethereum node's RPC, default `http://localhost:8545`
- :child_chain_url - URL of the Child Chain server's RPC, default `http://localhost:9656`
- :watcher_url - URL of the Watcher's RPC, default `http://localhost:7434`
- :contract_addr - a map with the root chain contract addresses
If you're testing against a local child chain/watcher instances, consider setting the following configuration:
```
config :omg,
deposit_finality_margin: 1
config :omg_watcher,
exit_finality_margin: 1
```
in order to prevent the apps from waiting for unnecessary confirmations
"""
def init(opts \\ []) do
{:ok, _} = Application.ensure_all_started(:briefly)
{:ok, _} = Application.ensure_all_started(:ethereumex)
{:ok, _} = Application.ensure_all_started(:hackney)
{:ok, _} = Application.ensure_all_started(:cowboy)
ethereum_rpc_url =
System.get_env("ETHEREUM_RPC_URL") || Application.get_env(:ethereumex, :url, "http://localhost:8545")
child_chain_url =
System.get_env("CHILD_CHAIN_URL") || Application.get_env(:omg_watcher, :child_chain_url, "http://localhost:9656")
watcher_url =
System.get_env("WATCHER_URL") || Application.get_env(:omg_performance, :watcher_url, "http://localhost:7434")
defaults = [
ethereum_rpc_url: ethereum_rpc_url,
child_chain_url: child_chain_url,
watcher_url: watcher_url,
contract_addr: nil
]
opts = Keyword.merge(defaults, opts)
:ok = Application.put_env(:ethereumex, :request_timeout, :infinity)
:ok = Application.put_env(:ethereumex, :http_options, recv_timeout: :infinity)
:ok = Application.put_env(:ethereumex, :url, opts[:ethereum_rpc_url])
:ok = Application.put_env(:omg_watcher, :child_chain_url, opts[:child_chain_url])
:ok = Application.put_env(:omg_performance, :watcher_url, opts[:watcher_url])
:ok
end
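# Hedged usage sketch (the URL is illustrative): point the machinery at a
# non-default Watcher before running a test.
#
#     :ok = OMG.Performance.init(watcher_url: "http://localhost:7434")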
@doc """
Utility macro which causes the expression given to be timed, the timing logged (`info`) and the original result of the
call to be returned
## Examples
iex> use OMG.Performance
iex> timeit 1+2
3
"""
defmacro timeit(call) do
quote do
{duration, result} = :timer.tc(fn -> unquote(call) end)
duration_s = duration / 1_000_000
_ = Logger.info("Lasted #{inspect(duration_s)} seconds")
result
end
end
end
|
apps/omg_performance/lib/performance.ex
| 0.852706
| 0.631708
|
performance.ex
|
starcoder
|
defmodule Vow.FunctionWrapper do
@moduledoc """
This vow wraps an anonymous function for the purpose of improved error
messages and readability of vows.
The `Function` type implements the `Inspect` protocol in Elixir, but
anonymous functions are printed as something similar to the following:
```
# regex to match something like: #Function<7.91303403/1 in :erl_eval.expr/5>
iex> regex = ~r|^#Function<\\d+\\.\\d+?/1 in|
...> f = fn x -> x end
...> Regex.match?(regex, inspect(f))
true
```
Whereas a named function looks more reasonable:
```
iex> inspect(&Kernel.apply/2)
"&:erlang.apply/2"
```
The `Vow.FunctionWrapper.wrap/2` macro can be used to alleviate this.
```
iex> import Vow.FunctionWrapper, only: :macros
...> inspect(wrap(fn x -> x end))
"fn x -> x end"
```
It can also be used to optionally control the bindings within the anonymous
function for printing purposes.
```
iex> import Vow.FunctionWrapper, only: :macros
...> y = 42
...> inspect(wrap(fn x -> x + y end))
"fn x -> x + y end"
iex> import Vow.FunctionWrapper, only: :macros
...> y = 42
...> inspect(wrap(fn x -> x + y end, y: y))
"fn x -> x + 42 end"
```
"""
defstruct function: nil,
form: nil,
bindings: []
@type t :: %__MODULE__{
function: (term -> boolean),
form: Macro.t(),
bindings: keyword()
}
@doc false
@spec new((term -> boolean), Macro.t(), keyword()) :: t
def new(function, form, bindings \\ []) do
%__MODULE__{
function: function,
form: form,
bindings: bindings
}
end
@doc """
Creates a new `Vow.FunctionWrapper.t` using the AST of `quoted` and
its resolved function.
Optionally, specify the bindings within the quoted form to be used by
the `Inspect` protocol.
"""
@spec wrap(Macro.t(), keyword()) :: Macro.t()
defmacro wrap(quoted, bindings \\ []) do
quote do
Vow.FunctionWrapper.new(
unquote(quoted),
unquote(Macro.escape(quoted)),
unquote(bindings)
)
end
end
# coveralls-ignore-start
defimpl Inspect do
@moduledoc false
@impl Inspect
def inspect(%@for{form: form, bindings: bindings}, opts) do
Macro.to_string(form, fn
{var, _, mod}, string when is_atom(var) and is_atom(mod) ->
if Keyword.has_key?(bindings, var) do
Kernel.inspect(
Keyword.get(bindings, var),
opts_to_keyword(opts)
)
else
string
end
_ast, string ->
string
end)
end
@spec opts_to_keyword(Inspect.Opts.t()) :: keyword
defp opts_to_keyword(opts) do
opts
|> Map.from_struct()
|> Enum.into([])
end
end
# coveralls-ignore-stop
defimpl Vow.Conformable do
@moduledoc false
@impl Vow.Conformable
def conform(%@for{function: fun} = func, path, via, route, val) do
case @protocol.Function.conform(fun, path, via, route, val) do
{:ok, conformed} ->
{:ok, conformed}
{:error, [%{pred: ^fun} = problem]} ->
{:error, [%{problem | pred: func.form}]}
{:error, problems} ->
{:error, problems}
end
end
@impl Vow.Conformable
def unform(_vow, val) do
{:ok, val}
end
@impl Vow.Conformable
def regex?(_vow), do: false
end
if Code.ensure_loaded?(StreamData) do
defimpl Vow.Generatable do
@moduledoc false
@impl Vow.Generatable
def gen(%@for{function: fun}, opts) do
@protocol.Function.gen(fun, opts)
end
end
end
end
|
lib/vow/function_wrapper.ex
| 0.865977
| 0.86293
|
function_wrapper.ex
|
starcoder
|
defmodule PgMoney do
@moduledoc """
Contains all the basic types and guards needed to work with the `money` data type.
"""
@type money :: -9_223_372_036_854_775_808..9_223_372_036_854_775_807
@type precision :: non_neg_integer()
@type telemetry :: false | nonempty_list(atom())
@type config :: %{
precision: precision(),
telemetry: telemetry()
}
@storage_size 8
@minimum -9_223_372_036_854_775_808
@maximum +9_223_372_036_854_775_807
@doc """
The minimum integer value possible for the `money` data type.
"""
@spec minimum :: neg_integer()
def minimum, do: @minimum
@doc """
The maximum integer value possible for the `money` data type.
"""
@spec maximum :: pos_integer()
def maximum, do: @maximum
@doc """
The storage size the `money` data type takes up in the database.
"""
@spec storage_size :: non_neg_integer()
def storage_size, do: @storage_size
@doc """
Returns the maximum `Decimal.t()` value for the given `precision`.
"""
@spec max(precision()) :: Decimal.t()
def max(precision),
do:
Decimal.div(
@maximum,
pow(10, precision)
)
@doc """
Returns the minimum `Decimal.t()` value for the given `precision`.
"""
@spec min(precision()) :: Decimal.t()
def min(precision),
do:
Decimal.div(
@minimum,
pow(10, precision)
)
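# Brief sketch: with precision 2 (e.g. cents), the largest representable
# decimal is maximum() divided by 10^2:
#
#     PgMoney.max(2)
#     #=> Decimal.div(9_223_372_036_854_775_807, 100)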
defp pow(_, 0), do: 1
defp pow(n, 1), do: n
defp pow(n, 2), do: square(n)
defp pow(n, x) do
case Integer.mod(x, 2) do
0 ->
x = trunc(x / 2)
square(pow(n, x))
1 ->
n * pow(n, x - 1)
end
end
defp square(x), do: x * x
@doc """
Returns `true` if `value` is an integer and falls between the `minimum/0` and `maximum/0` (inclusive) range of the `money` data type.
"""
defguard is_money(value) when is_integer(value) and @minimum <= value and value <= @maximum
@doc """
Returns `true` if `value` is a valid `t:precision/0`, false otherwise.
"""
defguard is_precision(value) when is_integer(value) and 0 <= value
@doc """
Returns `true` if `value` is a valid `t:telemetry/0`, false otherwise.
"""
defguard is_telemetry(value) when value == false or (is_list(value) and length(value) > 0)
end
|
lib/pg_money.ex
| 0.922665
| 0.530236
|
pg_money.ex
|
starcoder
|