| code (string, 114-1.05M chars) | path (string, 3-312 chars) | quality_prob (float64, 0.5-0.99) | learning_prob (float64, 0.2-1) | filename (string, 3-168 chars) | kind (1 class) |
|---|---|---|---|---|---|
defmodule AWS.FSx do
@moduledoc """
Amazon FSx is a fully managed service that makes it easy for storage and
application administrators to launch and use shared file storage.
"""
@doc """
Cancels an existing Amazon FSx for Lustre data repository task if that task
is in either the `PENDING` or `EXECUTING` state. When you cancel a task,
Amazon FSx does the following.
<ul> <li> Any files that FSx has already exported are not reverted.
</li> <li> FSx continues to export any files that are "in-flight" when the
cancel operation is received.
</li> <li> FSx does not export any files that have not yet been exported.
</li> </ul>
"""
def cancel_data_repository_task(client, input, options \\ []) do
request(client, "CancelDataRepositoryTask", input, options)
end
@doc """
Creates a backup of an existing Amazon FSx file system. Creating regular
backups for your file system is a best practice, enabling you to restore a
file system from a backup if an issue arises with the original file system.
For Amazon FSx for Lustre file systems, you can create a backup only for
file systems with the following configuration:
<ul> <li> a Persistent deployment type
</li> <li> is *not* linked to an Amazon S3 data repository.
</li> </ul> For more information about backing up Amazon FSx for Lustre
file systems, see [Working with FSx for Lustre
backups](https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-backups-fsx.html).
For more information about backing up Amazon FSx for Windows File Server
file systems, see [Working with FSx for Windows
backups](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-backups.html).
If a backup with the specified client request token exists, and the
parameters match, this operation returns the description of the existing
backup. If a backup with the specified client request token exists, and the
parameters don't match, this operation returns
`IncompatibleParameterError`. If a backup with the specified client request
token doesn't exist, `CreateBackup` does the following:
<ul> <li> Creates a new Amazon FSx backup with an assigned ID, and an
initial lifecycle state of `CREATING`.
</li> <li> Returns the description of the backup.
</li> </ul> By using the idempotent operation, you can retry a
`CreateBackup` operation without the risk of creating an extra backup. This
approach can be useful when an initial call fails in a way that makes it
unclear whether a backup was created. If you use the same client request
token and the initial call created a backup, the operation returns a
successful result because all the parameters are the same.
The `CreateBackup` operation returns while the backup's lifecycle state is
still `CREATING`. You can check the backup creation status by calling the
`DescribeBackups` operation, which returns the backup state along with
other information.
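For example (an illustrative sketch, not a doctest: `client` is assumed to be
a configured `%AWS.Client{}` struct and the file system ID is hypothetical):
input = %{
"FileSystemId" => "fs-0123456789abcdef0",
"ClientRequestToken" => "my-idempotency-token"
}
{:ok, backup, _response} = AWS.FSx.create_backup(client, input)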
"""
def create_backup(client, input, options \\ []) do
request(client, "CreateBackup", input, options)
end
@doc """
Creates an Amazon FSx for Lustre data repository task. You use data
repository tasks to perform bulk operations between your Amazon FSx file
system and its linked data repository. An example of a data repository task
is exporting any data and metadata changes, including POSIX metadata, to
files, directories, and symbolic links (symlinks) from your FSx file system
to its linked data repository. A `CreateDataRepositoryTask` operation will
fail if a data repository is not linked to the FSx file system. To learn
more about data repository tasks, see [Using Data Repository
Tasks](https://docs.aws.amazon.com/fsx/latest/LustreGuide/data-repository-tasks.html).
To learn more about linking a data repository to your file system, see
[Setting the Export
Prefix](https://docs.aws.amazon.com/fsx/latest/LustreGuide/export-data-repository.html#export-prefix).
"""
def create_data_repository_task(client, input, options \\ []) do
request(client, "CreateDataRepositoryTask", input, options)
end
@doc """
Creates a new, empty Amazon FSx file system.
If a file system with the specified client request token exists and the
parameters match, `CreateFileSystem` returns the description of the
existing file system. If a file system with the specified client request token
exists and the parameters don't match, this call returns
`IncompatibleParameterError`. If a file system with the specified client
request token doesn't exist, `CreateFileSystem` does the following:
<ul> <li> Creates a new, empty Amazon FSx file system with an assigned ID,
and an initial lifecycle state of `CREATING`.
</li> <li> Returns the description of the file system.
</li> </ul> This operation requires a client request token in the request
that Amazon FSx uses to ensure idempotent creation. This means that calling
the operation multiple times with the same client request token has no
effect. By using the idempotent operation, you can retry a
`CreateFileSystem` operation without the risk of creating an extra file
system. This approach can be useful when an initial call fails in a way
that makes it unclear whether a file system was created. Examples are if a
transport level timeout occurred, or your connection was reset. If you use
the same client request token and the initial call created a file system,
the client receives success as long as the parameters are the same.
<note> The `CreateFileSystem` call returns while the file system's
lifecycle state is still `CREATING`. You can check the file-system creation
status by calling the `DescribeFileSystems` operation, which returns the
file system state along with other information.
</note>
"""
def create_file_system(client, input, options \\ []) do
request(client, "CreateFileSystem", input, options)
end
@doc """
Creates a new Amazon FSx file system from an existing Amazon FSx backup.
If a file system with the specified client request token exists and the
parameters match, this operation returns the description of the file
system. If a file system with the specified client request token exists and
the parameters don't match, this call returns `IncompatibleParameterError`.
If a file system with the specified client request token doesn't exist,
this operation does the following:
<ul> <li> Creates a new Amazon FSx file system from backup with an assigned
ID, and an initial lifecycle state of `CREATING`.
</li> <li> Returns the description of the file system.
</li> </ul> Parameters like Active Directory, default share name, automatic
backup, and backup settings default to the parameters of the file system
that was backed up, unless overridden. You can explicitly supply other
settings.
By using the idempotent operation, you can retry a
`CreateFileSystemFromBackup` call without the risk of creating an extra
file system. This approach can be useful when an initial call fails in a
way that makes it unclear whether a file system was created. Examples are
if a transport level timeout occurred, or your connection was reset. If you
use the same client request token and the initial call created a file
system, the client receives success as long as the parameters are the same.
<note> The `CreateFileSystemFromBackup` call returns while the file
system's lifecycle state is still `CREATING`. You can check the file-system
creation status by calling the `DescribeFileSystems` operation, which
returns the file system state along with other information.
</note>
"""
def create_file_system_from_backup(client, input, options \\ []) do
request(client, "CreateFileSystemFromBackup", input, options)
end
@doc """
Deletes an Amazon FSx backup, deleting its contents. After deletion, the
backup no longer exists, and its data is gone.
The `DeleteBackup` call returns instantly. The backup will not show up in
later `DescribeBackups` calls.
<important> The data in a deleted backup is also deleted and can't be
recovered by any means.
</important>
"""
def delete_backup(client, input, options \\ []) do
request(client, "DeleteBackup", input, options)
end
@doc """
Deletes a file system, deleting its contents. After deletion, the file
system no longer exists, and its data is gone. Any existing automatic
backups will also be deleted.
By default, when you delete an Amazon FSx for Windows File Server file
system, a final backup is created upon deletion. This final backup is not
subject to the file system's retention policy, and must be manually
deleted.
The `DeleteFileSystem` action returns while the file system has the
`DELETING` status. You can check the file system deletion status by calling
the `DescribeFileSystems` action, which returns a list of file systems in
your account. If you pass the file system ID for a deleted file system,
`DescribeFileSystems` returns a `FileSystemNotFound` error.
<note> Deleting an Amazon FSx for Lustre file system will fail with a 400
BadRequest if a data repository task is in a `PENDING` or `EXECUTING`
state.
</note> <important> The data in a deleted file system is also deleted and
can't be recovered by any means.
</important>
"""
def delete_file_system(client, input, options \\ []) do
request(client, "DeleteFileSystem", input, options)
end
@doc """
Returns the description of specific Amazon FSx backups, if a `BackupIds`
value is provided for that backup. Otherwise, it returns all backups owned
by your AWS account in the AWS Region of the endpoint that you're calling.
When retrieving all backups, you can optionally specify the `MaxResults`
parameter to limit the number of backups in a response. If more backups
remain, Amazon FSx returns a `NextToken` value in the response. In this
case, send a later request with the `NextToken` request parameter set to
the value of `NextToken` from the last response.
This action is used in an iterative process to retrieve a list of your
backups. `DescribeBackups` is called first without a `NextToken` value. Then
the action continues to be called with the `NextToken` parameter set to the
value of the last `NextToken` value until a response has no `NextToken`.
When using this action, keep the following in mind:
<ul> <li> The implementation might return fewer than `MaxResults` backup
descriptions while still including a `NextToken` value.
</li> <li> The order of backups returned in the response of one
`DescribeBackups` call and the order of backups returned across the
responses of a multi-call iteration is unspecified.
</li> </ul>
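A pagination loop might look like this (a sketch; `client` is assumed to be a
configured `%AWS.Client{}` struct, and the `"Backups"`/`"NextToken"` response
keys follow the FSx API shape):
defp all_backups(client, token \\ nil, acc \\ []) do
input = if token, do: %{"NextToken" => token}, else: %{}
{:ok, %{"Backups" => backups} = body, _resp} = AWS.FSx.describe_backups(client, input)
case body["NextToken"] do
nil -> acc ++ backups
next -> all_backups(client, next, acc ++ backups)
end
end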
"""
def describe_backups(client, input, options \\ []) do
request(client, "DescribeBackups", input, options)
end
@doc """
Returns the description of specific Amazon FSx for Lustre data repository
tasks, if one or more `TaskIds` values are provided in the request, or if
filters are used in the request. You can use filters to narrow the response
to include just tasks for specific file systems, or tasks in a specific
lifecycle state. Otherwise, it returns all data repository tasks owned by
your AWS account in the AWS Region of the endpoint that you're calling.
When retrieving all tasks, you can paginate the response by using the
optional `MaxResults` parameter to limit the number of tasks returned in a
response. If more tasks remain, Amazon FSx returns a `NextToken` value in
the response. In this case, send a later request with the `NextToken`
request parameter set to the value of `NextToken` from the last response.
"""
def describe_data_repository_tasks(client, input, options \\ []) do
request(client, "DescribeDataRepositoryTasks", input, options)
end
@doc """
Returns the description of specific Amazon FSx file systems, if a
`FileSystemIds` value is provided for that file system. Otherwise, it
returns descriptions of all file systems owned by your AWS account in the
AWS Region of the endpoint that you're calling.
When retrieving all file system descriptions, you can optionally specify
the `MaxResults` parameter to limit the number of descriptions in a
response. If more file system descriptions remain, Amazon FSx returns a
`NextToken` value in the response. In this case, send a later request with
the `NextToken` request parameter set to the value of `NextToken` from the
last response.
This action is used in an iterative process to retrieve a list of your file
system descriptions. `DescribeFileSystems` is called first without a
`NextToken` value. Then the action continues to be called with the
`NextToken` parameter set to the value of the last `NextToken` value until
a response has no `NextToken`.
When using this action, keep the following in mind:
<ul> <li> The implementation might return fewer than `MaxResults` file
system descriptions while still including a `NextToken` value.
</li> <li> The order of file systems returned in the response of one
`DescribeFileSystems` call and the order of file systems returned across
the responses of a multi-call iteration is unspecified.
</li> </ul>
"""
def describe_file_systems(client, input, options \\ []) do
request(client, "DescribeFileSystems", input, options)
end
@doc """
Lists tags for Amazon FSx file systems and, in the case of Amazon FSx for
Windows File Server, backups.
When retrieving all tags, you can optionally specify the `MaxResults`
parameter to limit the number of tags in a response. If more tags remain,
Amazon FSx returns a `NextToken` value in the response. In this case, send
a later request with the `NextToken` request parameter set to the value of
`NextToken` from the last response.
This action is used in an iterative process to retrieve a list of your
tags. `ListTagsForResource` is called first without a `NextToken` value.
Then the action continues to be called with the `NextToken` parameter set
to the value of the last `NextToken` value until a response has no
`NextToken`.
When using this action, keep the following in mind:
<ul> <li> The implementation might return fewer than `MaxResults` tags
while still including a `NextToken` value.
</li> <li> The order of tags returned in the response of one
`ListTagsForResource` call and the order of tags returned across the
responses of a multi-call iteration is unspecified.
</li> </ul>
"""
def list_tags_for_resource(client, input, options \\ []) do
request(client, "ListTagsForResource", input, options)
end
@doc """
Tags an Amazon FSx resource.
"""
def tag_resource(client, input, options \\ []) do
request(client, "TagResource", input, options)
end
@doc """
This action removes a tag from an Amazon FSx resource.
"""
def untag_resource(client, input, options \\ []) do
request(client, "UntagResource", input, options)
end
@doc """
Use this operation to update the configuration of an existing Amazon FSx
file system. You can update multiple properties in a single request.
For Amazon FSx for Windows File Server file systems, you can update the
following properties:
<ul> <li> AutomaticBackupRetentionDays
</li> <li> DailyAutomaticBackupStartTime
</li> <li> SelfManagedActiveDirectoryConfiguration
</li> <li> StorageCapacity
</li> <li> ThroughputCapacity
</li> <li> WeeklyMaintenanceStartTime
</li> </ul> For Amazon FSx for Lustre file systems, you can update the
following properties:
<ul> <li> AutoImportPolicy
</li> <li> AutomaticBackupRetentionDays
</li> <li> DailyAutomaticBackupStartTime
</li> <li> WeeklyMaintenanceStartTime
</li> </ul>
"""
def update_file_system(client, input, options \\ []) do
request(client, "UpdateFileSystem", input, options)
end
@spec request(AWS.Client.t(), binary(), map(), list()) ::
{:ok, Poison.Parser.t() | nil, HTTPoison.Response.t()}
| {:error, Poison.Parser.t()}
| {:error, HTTPoison.Error.t()}
defp request(client, action, input, options) do
client = %{client | service: "fsx"}
host = build_host("fsx", client)
url = build_url(host, client)
# FSx speaks the AWS JSON 1.1 protocol; the X-Amz-Target header selects the API action.
headers = [
{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "AWSSimbaAPIService_v20180301.#{action}"}
]
payload = Poison.Encoder.encode(input, %{})
# Sign the request with AWS Signature Version 4 before sending.
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
case HTTPoison.post(url, payload, headers, options) do
{:ok, %HTTPoison.Response{status_code: 200, body: ""} = response} ->
{:ok, nil, response}
{:ok, %HTTPoison.Response{status_code: 200, body: body} = response} ->
{:ok, Poison.Parser.parse!(body, %{}), response}
{:ok, %HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body, %{})
{:error, error}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
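# The "local" region is a convenience for testing: it routes requests to
# localhost instead of a real AWS endpoint.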
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
end
| lib/aws/fsx.ex | 0.847211 | 0.588357 | fsx.ex | starcoder |
defmodule Pbkdf2 do
@moduledoc """
Elixir wrapper for the Pbkdf2 password hashing function.
Most applications will just need to use the `add_hash/2` and `check_pass/3`
convenience functions in this module.
For a lower-level API, see Pbkdf2.Base.
## Configuration
The following parameter can be set in the config file:
* rounds - computational cost
  * the number of rounds
  * 160_000 is the default
If you are hashing passwords in your tests, it can be useful to add
the following to the `config/test.exs` file:
config :pbkdf2_elixir,
rounds: 1
NB. do not use this value in production.
## Pbkdf2
Pbkdf2 is a password-based key derivation function
that uses a password, a variable-length salt and an iteration
count and applies a pseudorandom function to these to
produce a key.
The original implementation used SHA-1 as the pseudorandom function,
but this version uses HMAC-SHA-512, the default, or HMAC-SHA-256.
## Warning
It is recommended that you set a maximum length for the password
when using Pbkdf2. This maximum length should not prevent valid users from setting
long passwords. It is instead needed to combat denial-of-service attacks.
As an example, Django sets the maximum length to 4096 bytes.
For more information, see [this link](https://www.djangoproject.com/weblog/2013/sep/15/security/).
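## Example usage
A quick sketch of the convenience API (`add_hash/2` and `check_pass/3` are
provided by `use Comeonin`; the return shapes below follow Comeonin's
documented behaviour):
%{password_hash: hash} = Pbkdf2.add_hash("password")
{:ok, _user} = Pbkdf2.check_pass(%{password_hash: hash}, "password")
{:error, _message} = Pbkdf2.check_pass(%{password_hash: hash}, "wrong")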
"""
use Comeonin
alias Pbkdf2.Base
@doc """
Generate a random salt.
The minimum length of the salt is 8 bytes and the maximum length is
1024. The default length for the salt is 16 bytes. We do not recommend
using a salt shorter than the default.
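## Examples
iex> salt = Pbkdf2.gen_salt()
...> byte_size(salt)
16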
"""
def gen_salt(salt_length \\ 16)
def gen_salt(salt_length) when salt_length in 8..1024 do
:crypto.strong_rand_bytes(salt_length)
end
def gen_salt(_) do
raise ArgumentError, """
The salt is the wrong length. It should be between 8 and 1024 bytes long.
"""
end
@doc """
Hashes a password with a randomly generated salt.
## Options
In addition to the `:salt_len` option shown below, this function also takes
options that are then passed on to the `hash_password` function in the
`Pbkdf2.Base` module.
See the documentation for `Pbkdf2.Base.hash_password/3` for further details.
* `:salt_len` - the length of the random salt
  * the default is 16 bytes (the minimum is 8)
## Examples
The following examples show how to hash a password with a randomly-generated
salt and then verify a password:
iex> hash = Pbkdf2.hash_pwd_salt("password")
...> Pbkdf2.verify_pass("password", hash)
true
iex> hash = Pbkdf2.hash_pwd_salt("password")
...> Pbkdf2.verify_pass("incorrect", hash)
false
"""
@impl true
def hash_pwd_salt(password, opts \\ []) do
salt = opts |> Keyword.get(:salt_len, 16) |> gen_salt()
Base.hash_password(password, salt, opts)
end
@doc """
Verifies a password by hashing the password and comparing the hashed value
with a stored hash.
See the documentation for `hash_pwd_salt/2` for examples of using this function.
"""
@impl true
def verify_pass(password, stored_hash) do
[alg, rounds, salt, hash] = String.split(stored_hash, "$", trim: true)
digest = if alg =~ "sha512", do: :sha512, else: :sha256
Base.verify_pass(password, hash, salt, digest, rounds, output(stored_hash))
end
defp output("$pbkdf2" <> _), do: :modular
defp output("pbkdf2" <> _), do: :django
end
| lib/pbkdf2.ex | 0.891289 | 0.549761 | pbkdf2.ex | starcoder |
defmodule Shapeshifter.BOB do
@moduledoc """
Module for converting to and from [`BOB`](`t:Shapeshifter.bob/0`) structured
maps.
Usually used internally, although can be used directly for specific use cases
such as converting single inputs and outputs to and from [`BOB`](`t:Shapeshifter.bob/0`)
formatted maps.
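A round trip through this module might look like this (a sketch; `tx` is
assumed to be a `BSV.Transaction` struct):
bob = Shapeshifter.BOB.new(%Shapeshifter{src: tx, format: :tx})
tx2 = Shapeshifter.BOB.to_tx(%Shapeshifter{src: bob, format: :bob})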
"""
import Shapeshifter.Shared
@doc """
Creates a new [`BOB`](`t:Shapeshifter.bob/0`) formatted map from the given
[`Shapeshifter`](`t:Shapeshifter.t/0`) struct.
"""
@spec new(Shapeshifter.t) :: map
def new(%Shapeshifter{src: tx, format: :tx}) do
txid = BSV.Transaction.get_txid(tx)
ins = tx.inputs
|> Enum.with_index
|> Enum.map(&cast_input/1)
outs = tx.outputs
|> Enum.with_index
|> Enum.map(&cast_output/1)
%{
"tx" => %{"h" => txid},
"in" => ins,
"out" => outs,
"lock" => 0
}
end
def new(%Shapeshifter{src: src, format: :txo}) do
ins = Enum.map(src["in"], &cast_input/1)
outs = Enum.map(src["out"], &cast_output/1)
src
|> Map.delete("_id")
|> Map.put("in", ins)
|> Map.put("out", outs)
end
def new(%Shapeshifter{src: src, format: :bob}), do: src
@doc """
Converts the given input parameters to a [`BOB`](`t:Shapeshifter.bob/0`)
formatted input.
Accepts either a [`BSV Input`](`t:BSV.Transaction.Input.t/0`) struct or a
[`TXO`](`t:Shapeshifter.txo/0`) formatted input.
"""
@spec cast_input({BSV.Transaction.Input.t | map, integer}) :: map
def cast_input({%BSV.Transaction.Input{} = src, index}) do
input = %{
"i" => index,
"seq" => src.sequence,
"e" => %{
"h" => src.output_txid,
"i" => src.output_index,
"a" => script_address(src.script.chunks)
}
}
tape = src.script.chunks
|> Enum.with_index
|> Enum.reduce({[%{"i" => 0}], 0}, &from_script_chunk/2)
|> elem(0)
|> Enum.filter(& Map.has_key?(&1, "cell"))
|> Enum.map(fn t -> Map.update!(t, "cell", &Enum.reverse/1) end)
|> Enum.reverse
Map.put(input, "tape", tape)
end
def cast_input(%{"len" => _len} = src),
do: from_txo_object(src)
@doc """
Converts the given output parameters to a [`BOB`](`t:Shapeshifter.bob/0`)
formatted output.
Accepts either a [`BSV Output`](`t:BSV.Transaction.Output.t/0`) struct or a
[`TXO`](`t:Shapeshifter.txo/0`) formatted output.
"""
@spec cast_output({BSV.Transaction.Output.t | map, integer}) :: map
def cast_output({%BSV.Transaction.Output{} = src, index}) do
output = %{
"i" => index,
"e" => %{
"v" => src.satoshis,
"i" => index,
"a" => script_address(src.script.chunks)
}
}
tape = src.script.chunks
|> Enum.with_index
|> Enum.reduce({[%{"i" => 0}], 0}, &from_script_chunk/2)
|> elem(0)
|> Enum.filter(& Map.has_key?(&1, "cell"))
|> Enum.map(fn t -> Map.update!(t, "cell", &Enum.reverse/1) end)
|> Enum.reverse
Map.put(output, "tape", tape)
end
def cast_output(%{"len" => _len} = src),
do: from_txo_object(src)
@doc """
Converts the given [`BOB`](`t:Shapeshifter.bob/0`) formatted transaction back
to a [`BSV Transaction`](`t:BSV.Transaction.t/0`) struct.
"""
@spec to_tx(%Shapeshifter{
src: map,
format: :bob
}) :: BSV.Transaction.t
def to_tx(%Shapeshifter{
src: %{"in" => ins, "out" => outs} = src,
format: :bob
}) do
%BSV.Transaction{
inputs: Enum.map(ins, &to_tx_input/1),
outputs: Enum.map(outs, &to_tx_output/1),
lock_time: src["lock"]
}
end
@doc """
Converts the given [`BOB`](`t:Shapeshifter.bob/0`) formatted input back to a
[`BSV Input`](`t:BSV.Transaction.Input.t/0`) struct.
"""
@spec to_tx_input(map) :: BSV.Transaction.Input.t
def to_tx_input(%{} = src) do
%BSV.Transaction.Input{
output_index: get_in(src, ["e", "i"]),
output_txid: get_in(src, ["e", "h"]),
sequence: src["seq"],
script: to_tx_script(src["tape"])
}
end
@doc """
Converts the given [`BOB`](`t:Shapeshifter.bob/0`) formatted output back to a
[`BSV Output`](`t:BSV.Transaction.Output.t/0`) struct.
"""
@spec to_tx_output(map) :: BSV.Transaction.Output.t
def to_tx_output(%{} = src) do
%BSV.Transaction.Output{
satoshis: get_in(src, ["e", "v"]),
script: to_tx_script(src["tape"])
}
end
# Converts a BSV Script chunk to BOB parameters. The index is given with the
# script chunk.
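# The reduce accumulator is `{tape, t}`: `tape` is the list of tape cells
# built so far (in reverse order) and `t` is the chunk index at which the
# current cell started, so each cell's "i" is relative to the most recent
# OP_RETURN or "|" separator.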
defp from_script_chunk({opcode, index}, {[%{"i" => i} = head | tape], t})
when is_atom(opcode)
do
head = head
|> Map.put_new("cell", [])
|> Map.update!("cell", fn cells ->
cell = %{
"op" => BSV.Script.OpCode.get(opcode) |> elem(1),
"ops" => Atom.to_string(opcode),
"i" => index - t,
"ii" => index
}
[cell | cells]
end)
case opcode do
:OP_RETURN ->
{[%{"i" => i+1} | [head | tape]], index + 1}
_ ->
{[head | tape], t}
end
end
defp from_script_chunk({"|", index}, {[%{"i" => i } = head | tape], _t}) do
{[%{"i" => i+1} | [head | tape]], index + 1}
end
defp from_script_chunk({data, index}, {[head | tape], t})
when is_binary(data)
do
head = head
|> Map.put_new("cell", [])
|> Map.update!("cell", fn cells ->
cell = %{
"s" => data,
"h" => Base.encode16(data, case: :lower),
"b" => Base.encode64(data),
"i" => index - t,
"ii" => index
}
[cell | cells]
end)
{[head | tape], t}
end
# Converts a TXO formatted input/output to a BOB formatted tape.
defp from_txo_object(%{"len" => len} = src) do
target = Map.take(src, ["i", "seq", "e"])
tape = 0..len-1
|> Enum.reduce({[%{"i" => 0}], 0}, fn i, {tape, t} ->
src
|> Map.take(["o#{i}", "s#{i}", "h#{i}", "b#{i}"])
|> Enum.map(fn {k, v} -> {String.replace(k, ~r/\d+$/, ""), v} end)
|> Enum.into(%{"ii" => i})
|> from_txo_attr({tape, t})
end)
|> elem(0)
|> Enum.filter(& Map.has_key?(&1, "cell"))
|> Enum.map(fn t -> Map.update!(t, "cell", &Enum.reverse/1) end)
|> Enum.reverse
Map.put(target, "tape", tape)
end
# Converts TXO formatted parameters to a BOB formatted cell.
defp from_txo_attr(
%{"o" => opcode, "ii" => index},
{[%{"i" => i} = head | tape], t}
) do
head = head
|> Map.put_new("cell", [])
|> Map.update!("cell", fn cells ->
cell = %{
"op" => BSV.Script.OpCode.get(opcode) |> elem(1),
"ops" => opcode,
"i" => index - t,
"ii" => index
}
[cell | cells]
end)
case opcode do
"OP_RETURN" ->
{[%{"i" => i+1} | [head | tape]], index+1}
_ ->
{[head | tape], t}
end
end
defp from_txo_attr(
%{"s" => "|", "ii" => index},
{[%{"i" => i} = head | tape], _t}
) do
{[%{"i" => i+1} | [head | tape]], index+1}
end
defp from_txo_attr(%{"ii" => index} = cell, {[head | tape], t}) do
head = head
|> Map.put_new("cell", [])
|> Map.update!("cell", fn cells ->
cell = Map.put(cell, "i", index - t)
[cell | cells]
end)
{[head | tape], t}
end
# Converts a BOB formatted tape into a BSV Script struct.
defp to_tx_script(tape) when is_list(tape) do
tape
|> Enum.intersperse("|")
|> Enum.reduce(%BSV.Script{}, &to_tx_script/2)
end
defp to_tx_script(%{"cell" => cells}, script) do
Enum.reduce(cells, script, fn cell, script ->
data = cond do
Map.has_key?(cell, "ops") ->
Map.get(cell, "ops") |> String.to_atom
Map.has_key?(cell, "b") ->
Map.get(cell, "b") |> Base.decode64!
Map.has_key?(cell, "h") ->
Map.get(cell, "h") |> Base.decode16!(case: :mixed)
end
BSV.Script.push(script, data)
end)
end
defp to_tx_script("|", script) do
case List.last(script.chunks) do
:OP_RETURN -> script
_ -> BSV.Script.push(script, "|")
end
end
end
| lib/shapeshifter/bob.ex | 0.775732 | 0.706273 | bob.ex | starcoder |
defmodule EctoList.ListItem do
@moduledoc """
Implements conveniences to change the items order of a list.
"""
@doc """
Insert a list item id at the given index of an order_list
iex> EctoList.ListItem.insert_at([1, 2, 3, 4], 9, 2)
[1, 9, 2, 3, 4]
iex> EctoList.ListItem.insert_at([1, 2, 3], 9, 10)
[1, 2, 3, 9]
iex> EctoList.ListItem.insert_at([1, 2, 9, 3], 9, 2)
[1, 9, 2, 3]
iex> EctoList.ListItem.insert_at([1, 2, 9, 3], 9, 10)
[1, 2, 3, 9]
iex> EctoList.ListItem.insert_at([1, 2, 9, 3], 9, nil)
[1, 2, 9, 3]
"""
def insert_at(order_list, _list_item, nil) do
order_list
end
def insert_at(order_list, list_item, index) do
order_list
|> Enum.reject(&(&1 == list_item))
|> List.insert_at(index - 1, list_item)
end
@doc """
Move the list item id one rank lower in the ordering.
iex> EctoList.ListItem.move_lower([1, 2, 3, 4], 3)
[1, 2, 4, 3]
iex> EctoList.ListItem.move_lower([1, 2, 3, 4], 1)
[2, 1, 3, 4]
iex> EctoList.ListItem.move_lower([1, 2, 3, 4], 4)
[1, 2, 3, 4]
iex> EctoList.ListItem.move_lower([1, 2, 3, 4], 5)
[1, 2, 3, 4]
"""
def move_lower(order_list, list_item) do
index = Enum.find_index(order_list, &(&1 == list_item))
insert_at(order_list, list_item, index && index + 2)
end
@doc """
Move the list item id one rank higher in the ordering.
iex> EctoList.ListItem.move_higher([1, 2, 3, 4], 3)
[1, 3, 2, 4]
iex> EctoList.ListItem.move_higher([1, 2, 3, 4], 1)
[1, 2, 3, 4]
iex> EctoList.ListItem.move_higher([1, 2, 3, 4], 5)
[1, 2, 3, 4]
"""
def move_higher(order_list, list_item) do
index = Enum.find_index(order_list, &(&1 == list_item))
case index do
nil ->
order_list
0 ->
order_list
_ ->
insert_at(order_list, list_item, index)
end
end
@doc """
Move the list item id at the last position in the ordering.
iex> EctoList.ListItem.move_to_bottom([1, 2, 3, 4], 3)
[1, 2, 4, 3]
iex> EctoList.ListItem.move_to_bottom([1, 2, 3, 4], 1)
[2, 3, 4, 1]
iex> EctoList.ListItem.move_to_bottom([1, 2, 3, 4], 4)
[1, 2, 3, 4]
iex> EctoList.ListItem.move_to_bottom([1, 2, 3, 4], 5)
[1, 2, 3, 4, 5]
"""
def move_to_bottom(order_list, list_item) do
length = length(order_list)
case Enum.member?(order_list, list_item) do
true -> insert_at(order_list, list_item, length)
false -> insert_at(order_list, list_item, length + 1)
end
end
@doc """
Move the list item id at the first position in the ordering.
iex> EctoList.ListItem.move_to_top([1, 2, 3, 4], 3)
[3, 1, 2, 4]
iex> EctoList.ListItem.move_to_top([1, 2, 3, 4], 1)
[1, 2, 3, 4]
iex> EctoList.ListItem.move_to_top([1, 2, 3, 4], 5)
[5, 1, 2, 3, 4]
"""
def move_to_top(order_list, list_item) do
insert_at(order_list, list_item, 1)
end
@doc """
Remove the list item id in the ordering.
iex> EctoList.ListItem.remove_from_list([1, 2, 3, 4], 3)
[1, 2, 4]
iex> EctoList.ListItem.remove_from_list([1, 2, 3, 4], 1)
[2, 3, 4]
iex> EctoList.ListItem.remove_from_list([1, 2, 3, 4], 5)
[1, 2, 3, 4]
"""
def remove_from_list(order_list, list_item) do
Enum.reject(order_list, &(&1 == list_item))
end
@doc """
Check if list item id is the first element in the ordering.
iex> EctoList.ListItem.first?([1, 2, 3, 4], 1)
true
iex> EctoList.ListItem.first?([1, 2, 3, 4], 3)
false
iex> EctoList.ListItem.first?([1, 2, 3, 4], 5)
false
"""
def first?(order_list, list_item) do
List.first(order_list) == list_item
end
@doc """
Check if list item id is the last element in the ordering.
iex> EctoList.ListItem.last?([1, 2, 3, 4], 4)
true
iex> EctoList.ListItem.last?([1, 2, 3, 4], 2)
false
iex> EctoList.ListItem.last?([1, 2, 3, 4], 5)
false
"""
def last?(order_list, list_item) do
List.last(order_list) == list_item
end
@doc """
Check if list item id is in the ordering.
iex> EctoList.ListItem.in_list?([1, 2, 3, 4], 3)
true
iex> EctoList.ListItem.in_list?([1, 2, 3, 4], 5)
false
"""
def in_list?(order_list, list_item) do
Enum.member?(order_list, list_item)
end
@doc """
Check if list item id is not in the ordering.
iex> EctoList.ListItem.not_in_list?([1, 2, 3, 4], 5)
true
iex> EctoList.ListItem.not_in_list?([1, 2, 3, 4], 3)
false
"""
def not_in_list?(order_list, list_item) do
!Enum.member?(order_list, list_item)
end
@doc """
Return the list item id which is one rank higher in the ordering.
iex> EctoList.ListItem.higher_item([1, 7, 3, 4], 3)
7
iex> EctoList.ListItem.higher_item([1, 2, 3, 4], 1)
nil
iex> EctoList.ListItem.higher_item([1, 2, 3, 4], 5)
nil
"""
def higher_item(order_list, list_item) do
index = Enum.find_index(order_list, &(&1 == list_item))
case index do
nil -> nil
0 -> nil
_ -> Enum.fetch!(order_list, index - 1)
end
end
@doc """
Return the list of ids above the list item id.
iex> EctoList.ListItem.higher_items([1, 2, 3, 4], 3)
[1, 2]
iex> EctoList.ListItem.higher_items([1, 2, 3, 4], 4)
[1, 2, 3]
iex> EctoList.ListItem.higher_items([1, 2, 3, 4], 1)
[]
iex> EctoList.ListItem.higher_items([1, 2, 3, 4], 5)
nil
"""
def higher_items(order_list, list_item) do
index = Enum.find_index(order_list, &(&1 == list_item))
case index do
nil -> nil
0 -> []
_ -> Enum.slice(order_list, 0, index)
end
end
@doc """
Return the list item id which is one rank lower in the ordering.
iex> EctoList.ListItem.lower_item([1, 2, 3, 7], 3)
7
iex> EctoList.ListItem.lower_item([1, 2, 3, 4], 4)
nil
iex> EctoList.ListItem.lower_item([1, 2, 3, 4], 5)
nil
"""
def lower_item(order_list, list_item) do
index = Enum.find_index(order_list, &(&1 == list_item))
last_index = length(order_list) - 1
case index do
nil -> nil
^last_index -> nil
_ -> Enum.fetch!(order_list, index + 1)
end
end
@doc """
Return the list of ids below the list item id.
iex> EctoList.ListItem.lower_items([1, 2, 3, 4], 2)
[3, 4]
iex> EctoList.ListItem.lower_items([1, 2, 3, 4], 1)
[2, 3, 4]
iex> EctoList.ListItem.lower_items([1, 2, 3, 4], 4)
[]
iex> EctoList.ListItem.lower_items([1, 2, 3, 4], 5)
nil
"""
def lower_items(order_list, list_item) do
index = Enum.find_index(order_list, &(&1 == list_item))
last_index = length(order_list) - 1
case index do
nil -> nil
^last_index -> []
_ -> Enum.slice(order_list, index + 1, last_index - index)
end
end
end
| lib/ecto_list/list_item.ex | 0.617282 | 0.452294 | list_item.ex | starcoder |
defmodule Bolt.Sips.Success do
@moduledoc """
Bolt returns either a Success or an Error in response to our requests.
"""
alias __MODULE__
alias Bolt.Sips.Error
defstruct fields: nil,
type: nil,
records: nil,
stats: nil,
notifications: nil,
plan: nil,
profile: nil
@doc """
Parses a response received from Bolt and returns `{:ok, %Bolt.Sips.Success{}}`,
or `{:error, %Bolt.Sips.Error{}}` if it can't find the success key.
"""
def new(r) do
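# `r` is the raw Bolt response: a keyword list of messages such as
# [success: %{"fields" => [...]}, record: [...], success: %{"type" => ...}].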
case Error.new(r) do
{:halt, error} ->
{:error, error}
{:error, error} ->
{:error, error}
{:failure, failure} ->
{:failure, failure}
_ ->
case Keyword.has_key?(r, :success) && Keyword.get_values(r, :success) do
[f | t] ->
%{"fields" => fields} = f
case List.first(t) do
%{"profile" => profile, "stats" => stats, "type" => type} ->
{:ok,
%Success{
fields: fields,
type: type,
profile: profile,
stats: stats,
records: Keyword.get_values(r, :record)
}}
%{"notifications" => notifications, "plan" => plan, "type" => type} ->
{:ok,
%Success{fields: fields, type: type, notifications: notifications, plan: plan}}
%{"plan" => plan, "type" => type} ->
{:ok, %Success{fields: fields, type: type, plan: plan, notifications: []}}
%{"stats" => stats, "type" => type} ->
{:ok,
%Success{
fields: fields,
type: type,
stats: stats,
records: Keyword.get_values(r, :record)
}}
%{"type" => type} ->
{:ok,
%Success{fields: fields, type: type, records: Keyword.get_values(r, :record)}}
end
_ ->
r
end
end
end
end
| lib/bolt_sips/success.ex | 0.721056 | 0.450662 | success.ex | starcoder |
defmodule Zigler.Parser.Error do
@moduledoc """
parses errors emitted by the zig compiler
"""
import NimbleParsec
require Logger
@numbers [?0..?9]
whitespace = ascii_string([?\s, ?\n], min: 1)
errormsg =
ignore(repeat(ascii_char([?\s])))
|> ascii_string([not: ?:], min: 1)
|> ignore(string(":"))
|> ascii_string(@numbers, min: 1)
|> ignore(string(":"))
|> ascii_string(@numbers, min: 1)
|> ignore(string(":")
|> optional(whitespace)
|> string("error:")
|> optional(whitespace))
|> ascii_string([not: ?\n], min: 1)
|> ignore(string("\n"))
defparsec :parse_error, times(errormsg, min: 1)
@doc """
given a zig compiler error message, a directory for the code file, and the temporary
directory where code assembly is taking place, return an appropriate `CompileError`
struct which can be raised to emit a sensible compiler error message.
The temporary directory is stripped (when reasonable) in favor of a "true" source file
and any filename substitutions are performed as well.
"""
def parse(msg, compiler) do
case parse_error(msg) do
{:ok, [path, line, _col | msg], rest, _, _, _} ->
{path, line} = compiler.assembly_dir
|> Path.join(path)
|> Path.expand
|> backreference(String.to_integer(line))
raise CompileError,
file: path,
line: line,
description: IO.iodata_to_binary([msg, "\n" | rest])
_ ->
message = """
this zig compiler error message hasn't been incorporated into the parser.
Please file a report at:
https://github.com/ityonemo/zigler/issues
""" <> msg
raise CompileError,
description: message
end
end
@spec backreference(Path.t, non_neg_integer) :: {Path.t, non_neg_integer}
@doc """
given a code file path and a line number, calculates the file and line number
of the source document from which it came. Strongly depends on having
fencing comments of the form `// ref: <file> line: <line>` in order to backtrack
this information.
"""
def backreference(path, line) do
path
|> File.stream!
|> Stream.map(&check_ref/1)
|> Stream.take(line)
|> Stream.with_index
|> Enum.reduce({path, line}, &trap_last_ref(&1, &2, line))
end
# each fencing comment found before the error line recomputes the source
# path and line number; the last one seen wins
defp trap_last_ref({{:ok, [path, line_number], _, _, _, _}, line_idx}, _, line) do
{path, line - line_idx + String.to_integer(line_number) - 1}
end
defp trap_last_ref(_, prev, _), do: prev
path = ascii_string([not: ?\s, not: ?\t], min: 1)
check_ref = ignore(
string("// ref:")
|> concat(whitespace))
|> concat(path)
|> ignore(
whitespace
|> string("line:")
|> concat(whitespace))
|> ascii_string(@numbers, min: 1)
defparsec :check_ref, check_ref
end
| lib/zigler/parser/error.ex | 0.699152 | 0.431405 | error.ex | starcoder |
defmodule Pundit.DefaultPolicy do
@moduledoc """
Default access policies for a given type.
All of the functions here are named for actions in a [Phoenix controller](https://hexdocs.pm/phoenix/controllers.html#actions).
If you `use` this module, default implementations that return `false` will be
added to your module (default safe: nothing is permitted). All of them are overridable.
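## Example
A minimal sketch (the `Post` struct and its `author_id` field are
illustrative, not part of this library):
defmodule PostPolicy do
use Pundit.DefaultPolicy
# Authors may view their own posts; every other action stays denied.
def show?(%Post{author_id: id}, %{id: id}), do: true
def show?(_post, _user), do: false
end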
"""
@doc """
Returns true only if the user should be allowed to see an index (list) of the given things.
"""
@callback index?(thing :: struct() | module(), user :: term()) :: boolean()
@doc """
Returns true only if the user should be allowed to see the given thing.
"""
@callback show?(thing :: struct() | module(), user :: term()) :: boolean()
@doc """
Returns true only if the user should be allowed to create a new kind of thing.
"""
@callback create?(thing :: struct() | module(), user :: term()) :: boolean()
@doc """
Returns true only if the user should be allowed to see a form to create a new thing.
See [the page on Phoenix controllers](https://hexdocs.pm/phoenix/controllers.html#actions) for more details on the
purpose of this action.
"""
@callback new?(thing :: struct() | module(), user :: term()) :: boolean()
@doc """
Returns true only if the user should be allowed to update the attributes of a thing.
"""
@callback update?(thing :: struct() | module(), user :: term()) :: boolean()
@doc """
Returns true only if the user should be allowed to see a form for updating the thing.
See [the page on Phoenix controllers](https://hexdocs.pm/phoenix/controllers.html#actions) for more details on the
purpose of this action.
"""
@callback edit?(thing :: struct() | module(), user :: term()) :: boolean()
@doc """
Returns true only if the user should be allowed to delete a thing.
"""
@callback delete?(thing :: struct() | module(), user :: term()) :: boolean()
defmacro __using__(_) do
quote do
@behaviour Pundit.DefaultPolicy
def index?(_thing, _user), do: false
def show?(_thing, _user), do: false
def create?(_thing, _user), do: false
def new?(_thing, _user), do: false
def update?(_thing, _user), do: false
def edit?(_thing, _user), do: false
def delete?(_thing, _user), do: false
defoverridable index?: 2,
show?: 2,
create?: 2,
new?: 2,
update?: 2,
edit?: 2,
delete?: 2
end
end
end
| lib/pundit/default_policy.ex | 0.875361 | 0.444806 | default_policy.ex | starcoder |
defmodule Exdis.Int64 do
## ------------------------------------------------------------------
## Constant Definitions
## ------------------------------------------------------------------
@min -0x8000000000000000
@max +0x7FFFFFFFFFFFFFFF
@max_decimal_string_length 20
## ------------------------------------------------------------------
## Record and Type Definitions
## ------------------------------------------------------------------
@opaque t :: -0x8000000000000000..0x7FFFFFFFFFFFFFFF
## ------------------------------------------------------------------
## API Functions
## ------------------------------------------------------------------
# Adds `increment` to `integer`, failing when the result leaves the signed
# 64-bit range.
def add(integer, increment) do
case integer + increment do
sum when sum >= @min and sum <= @max ->
{:ok, sum}
_ when increment > 0 ->
{:error, :overflow}
_ ->
{:error, :underflow}
end
end
def decimal_string_length(integer) do
byte_size( to_decimal_string(integer) )
end
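# Parses a signed 64-bit integer from a decimal string, optionally requiring
# fixed trailing data (for example "\r\n" when the value arrives inside a
# RESP frame). The length guard rejects strings too long to be a valid int64
# before attempting to parse.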
def from_decimal_string(string, expected_trailing_data \\ "") do
case (
byte_size(string) < (@max_decimal_string_length + byte_size(expected_trailing_data))
and Integer.parse(string))
do
{integer, ^expected_trailing_data} when integer >= @min and integer <= @max ->
{:ok, integer}
{_integer, <<unexpected_trailing_data :: bytes>>} ->
{:error, {:unexpected_trailing_data, unexpected_trailing_data}}
:error ->
{:error, {:not_an_integer, string}}
false ->
{:error, {:string_too_large, byte_size(string)}}
end
end
def from_float(float) do
case trunc(float) do
integer when integer == float ->
{:ok, integer}
_ ->
{:error, :loss_of_precision}
end
end
def min(), do: @min
def max(), do: @max
def max_decimal_string_length(), do: @max_decimal_string_length
def new(integer) when is_integer(integer) and integer >= @min and integer <= @max do
integer
end
def to_decimal_string(integer) do
Integer.to_string(integer)
end
end
| lib/exdis/int64.ex | 0.569613 | 0.435511 | int64.ex | starcoder |
defmodule AppOptex do
alias AppOptex.{Worker, Client}
@moduledoc """
Client library for sending and reading AppOptics API measurements. To auth AppOptics make sure to set the `APPOPTICS_TOKEN` environment variable. This can also be overridden in the Application config.
"""
@doc """
Send one measurement with tags. The measurements are sent to AppOptics asynchronously.
* `name` - Name of the measurement
* `value` - Value of the measurement
* `tags` - A map of tags to send with the measurement. Cannot be empty.
## Examples
iex> AppOptex.measurement("my.metric", 10, %{my_tag: "value"})
:ok
"""
def measurement(name, value, tags) do
GenServer.cast(Worker, {:measurements, [%{name: name, value: value}], tags})
end
@doc """
Send one measurement with tags. The measurements are sent to AppOptics asynchronously.
* `measurement` - Map of the measurement data
* `tags` - A map of tags to send with the measurement. Cannot be empty.
## Examples
iex> AppOptex.measurement(%{name: "my.metric", value: 10}, %{my_tag: "value"})
:ok
"""
def measurement(measurement = %{name: _, value: _}, tags) when is_map(measurement) do
GenServer.cast(Worker, {:measurements, [measurement], tags})
end
@doc """
Send multiple measurements with tags. The measurements are sent to AppOptics asynchronously.
* `measurements` - a batch of metrics to send as a list of maps.
* `tags` - A map of tags to send with the measurement. Cannot be empty.
## Examples
iex> AppOptex.measurements([%{name: "my.metric", value: 1}, %{name: "my.other_metric", value: 5}], %{my_tag: "value"})
:ok
"""
def measurements(measurements, tags) do
GenServer.cast(Worker, {:measurements, measurements, tags})
end
@doc """
Retrieve multiple measurements with tags. The measurements are read from AppOptics synchronously.
- `metric_name` - the name of the metric you want measurements for.
- `resolution` - the resolution of the measurements in seconds.
- `params` - A map of parameters to restrict the result to possible values include:
- `start_time` - Unix time to start the search from. This parameter is optional if `duration` is specified.
- `end_time` - Unix time to end the search at. This parameter is optional and defaults to the current wall time.
- `duration` - How far back to look in time, measured in seconds. This parameter can be combined with `end_time` to set a `start_time` N seconds back in time. It is an error to set `start_time`, `end_time`, and `duration` together.
## Examples
iex> AppOptex.read_measurements("my.other_metric", 60, %{duration: 999999})
%{
"attributes" => %{"created_by_ua" => "hackney/1.15.1"},
"links" => [],
"name" => "my.other_metric",
"resolution" => 60,
"series" => [
%{
"measurements" => [%{"time" => 1554720060, "value" => 10.0}],
"tags" => %{"my_tag" => "value"}
}
]
}
"""
def read_measurements(metric_name, resolution, params) do
appoptics_url = Application.get_env(:app_optex, :appoptics_url)
token =
Application.get_env(:app_optex, :appoptics_token)
|> case do
{:system, env_var} -> System.get_env(env_var)
token -> token
end
Client.read_measurements(appoptics_url, token, metric_name, resolution, params)
end
@doc """
Set the global tags that will be applied to all measurements. These can be overridden by tags provided in `measurement/3` and `measurements/2`.
* `tags` - map of tags to set.
## Examples
iex> AppOptex.put_global_tags(%{my_tag: "value"})
:ok
"""
def put_global_tags(tags) when is_map(tags),
do: GenServer.cast(Worker, {:put_global_tags, tags})
@doc """
Get the global tags that will be applied to all measurements.
## Examples
iex> AppOptex.get_global_tags()
%{my_tag: "value"}
"""
def get_global_tags(),
do: GenServer.call(Worker, {:get_global_tags})
@doc """
Asynchronously add to queue of measurements to be sent to AppOptics later.
## Examples
iex> AppOptex.push_to_queue([%{name: "my.metric", value: 1}], %{test: true})
:ok
"""
def push_to_queue(measurements, tags),
do: GenServer.cast(Worker, {:push_to_queue, measurements, tags})
@doc """
Return the current contents of the measurements queue. The queue format is a list of tuples, each tuple contains a measurements list and a tags map.
## Examples
iex> AppOptex.read_queue
[{[%{name: "my.metric", value: 1}], %{test: true}}]
"""
def read_queue(),
do: GenServer.call(Worker, {:read_queue})
@doc """
Asynchronously send the contents of the queue to AppOptics and clear it.
## Examples
iex> AppOptex.flush_queue()
:ok
"""
def flush_queue(),
do: GenServer.cast(Worker, {:flush_queue})
end
| lib/app_optex.ex | 0.909972 | 0.459804 | app_optex.ex | starcoder |
defmodule K8s.Client do
@moduledoc """
An experimental k8s client.
Functions return `K8s.Client.Operation`s that represent kubernetes operations.
To run operations pass them to: `run/2`, `run/3`, or `run/4`.
When specifying kinds, the format should be either the literal Kubernetes kind name (e.g. `"ServiceAccount"`)
or the downcased version seen in kubectl (e.g. `"serviceaccount"`). A string or atom may be used.
## Examples
```elixir
"Deployment", "deployment", :Deployment, :deployment
"ServiceAccount", "serviceaccount", :ServiceAccount, :serviceaccount
"HorizontalPodAutoscaler", "horizontalpodautoscaler", :HorizontalPodAutoscaler, :horizontalpodautoscaler
```
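Building and running an operation might look like this (a sketch; `K8s.Conf.from_file/1` and `run/2` are assumed from this library's surrounding docs):
```elixir
conf = K8s.Conf.from_file("~/.kube/config")
operation = K8s.Client.get("v1", "Pod", namespace: "default", name: "nginx")
K8s.Client.run(operation, conf)
```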
"""
alias K8s.Conf
alias K8s.Client.{Operation, Route, Router}
@allow_http_body [:put, :patch, :post]
@type operation_or_error :: Operation.t() | {:error, binary()}
@type option :: {:name, String.t()} | {:namespace, binary() | :all}
@type options :: [option]
@type http_method :: :get | :put | :patch | :post | :head | :options | :delete
@type result :: :ok | {:ok, map()} | {:error, binary()}
@doc "Alias of `create/1`"
defdelegate post(resource), to: __MODULE__, as: :create
@doc "Alias of `replace/1`"
defdelegate update(resource), to: __MODULE__, as: :replace
@doc "Alias of `replace/1`"
defdelegate put(resource), to: __MODULE__, as: :replace
@doc """
Returns a `GET` operation for a resource given a manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Get will retrieve a specific resource object by name.
## Examples
iex> pod = %{
...> "apiVersion" => "v1",
...> "kind" => "Pod",
...> "metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
...> "spec" => %{"containers" => %{"image" => "nginx"}}
...> }
...> K8s.Client.get(pod)
%K8s.Client.Operation{
method: :get,
path: "/api/v1/namespaces/test/pods/nginx-pod"
}
"""
@spec get(map()) :: operation_or_error
def get(resource = %{}) do
path = Router.path_for(:get, resource)
operation_or_error(path, :get, resource)
end
@doc """
Returns a `GET` operation for a resource by version, kind, name, and optionally namespace.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Get will retrieve a specific resource object by name.
## Examples
iex> K8s.Client.get("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Client.Operation{
method: :get,
path: "/apis/apps/v1/namespaces/test/deployments/nginx"
}
iex> K8s.Client.get("apps/v1", :deployment, namespace: "test", name: "nginx")
%K8s.Client.Operation{
method: :get,
path: "/apis/apps/v1/namespaces/test/deployments/nginx"
}
"""
@spec get(binary, binary, options | nil) :: operation_or_error
def get(api_version, kind, opts \\ []) do
path = Router.path_for(:get, api_version, kind, opts)
operation_or_error(path, :get)
end
@doc """
Returns a `GET` operation to list all resources by version, kind, and namespace.
Given the atom `:all` as the namespace, the operation will list resources across all namespaces.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> List will retrieve all resource objects of a specific type within a namespace, and the results can be restricted to resources matching a selector query.
> List All Namespaces: Like List but retrieves resources across all namespaces.
## Examples
iex> K8s.Client.list("v1", "Pod", namespace: "default")
%K8s.Client.Operation{
method: :get,
path: "/api/v1/namespaces/default/pods"
}
iex> K8s.Client.list("apps/v1", "Deployment", namespace: :all)
%K8s.Client.Operation{
method: :get,
path: "/apis/apps/v1/deployments"
}
"""
@spec list(binary, binary, options | nil) :: operation_or_error
def list(api_version, kind, namespace: :all) do
path = Router.path_for(:list_all_namespaces, api_version, kind)
operation_or_error(path, :get)
end
def list(api_version, kind, namespace: namespace) do
path = Router.path_for(:list, api_version, kind, namespace: namespace)
operation_or_error(path, :get)
end
@doc """
Returns a `POST` operation to create the given resource.
## Examples
iex> deployment = %{
...> "apiVersion" => "apps/v1",
...> "kind" => "Deployment",
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> },
...> "name" => "nginx",
...> "namespace" => "test"
...> },
...> "spec" => %{
...> "replicas" => 2,
...> "selector" => %{
...> "matchLabels" => %{
...> "app" => "nginx"
...> }
...> },
...> "template" => %{
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> }
...> },
...> "spec" => %{
...> "containers" => %{
...> "image" => "nginx",
...> "name" => "nginx"
...> }
...> }
...> }
...> }
...> }
...> K8s.Client.create(deployment)
%K8s.Client.Operation{
method: :post,
path: "/apis/apps/v1/namespaces/test/deployments",
resource: %{
"apiVersion" => "apps/v1",
"kind" => "Deployment",
"metadata" => %{
"labels" => %{
"app" => "nginx"
},
"name" => "nginx",
"namespace" => "test"
},
"spec" => %{
"replicas" => 2,
"selector" => %{
"matchLabels" => %{
"app" => "nginx"
}
},
"template" => %{
"metadata" => %{
"labels" => %{
"app" => "nginx"
}
},
"spec" => %{
"containers" => %{
"image" => "nginx",
"name" => "nginx"
}
}
}
}
}
}
"""
@spec create(map()) :: operation_or_error
def create(
resource = %{
"apiVersion" => api_version,
"kind" => kind,
"metadata" => %{"namespace" => ns}
}
) do
path = Router.path_for(:post, api_version, kind, namespace: ns)
operation_or_error(path, :post, resource)
end
def create(resource = %{"apiVersion" => api_version, "kind" => kind}) do
path = Router.path_for(:post, api_version, kind)
operation_or_error(path, :post, resource)
end
@doc """
Returns a `PATCH` operation to patch the given resource.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Patch will apply a change to a specific field. How the change is merged is defined per field. Lists may either be replaced or merged. Merging lists will not preserve ordering.
> Patches will never cause optimistic locking failures, and the last write will win. Patches are recommended when the full state is not read before an update, or when failing on optimistic locking is undesirable. When patching complex types, arrays and maps, how the patch is applied is defined on a per-field basis and may either replace the field's current value, or merge the contents into the current value.
## Examples
iex> deployment = %{
...> "apiVersion" => "apps/v1",
...> "kind" => "Deployment",
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> },
...> "name" => "nginx",
...> "namespace" => "test"
...> },
...> "spec" => %{
...> "replicas" => 2,
...> "selector" => %{
...> "matchLabels" => %{
...> "app" => "nginx"
...> }
...> },
...> "template" => %{
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> }
...> },
...> "spec" => %{
...> "containers" => %{
...> "image" => "nginx",
...> "name" => "nginx"
...> }
...> }
...> }
...> }
...> }
...> K8s.Client.patch(deployment)
%K8s.Client.Operation{
method: :patch,
path: "/apis/apps/v1/namespaces/test/deployments/nginx",
resource: %{
"apiVersion" => "apps/v1",
"kind" => "Deployment",
"metadata" => %{
"labels" => %{
"app" => "nginx"
},
"name" => "nginx",
"namespace" => "test"
},
"spec" => %{
"replicas" => 2,
"selector" => %{
"matchLabels" => %{
"app" => "nginx"
}
},
"template" => %{
"metadata" => %{
"labels" => %{
"app" => "nginx"
}
},
"spec" => %{
"containers" => %{
"image" => "nginx",
"name" => "nginx"
}
}
}
}
}
}
"""
@spec patch(map()) :: operation_or_error
def patch(resource = %{}) do
path = Router.path_for(:patch, resource)
operation_or_error(path, :patch, resource)
end
@doc """
Returns a `PUT` operation to replace/update the given resource.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Replacing a resource object will update the resource by replacing the existing spec with the provided one. For read-then-write operations this is safe because an optimistic lock failure will occur if the resource was modified between the read and write. Note: The ResourceStatus will be ignored by the system and will not be updated. To update the status, one must invoke the specific status update operation.
> Note: Replacing a resource object may not result immediately in changes being propagated to downstream objects. For instance replacing a ConfigMap or Secret resource will not result in all Pods seeing the changes unless the Pods are restarted out of band.
## Examples
iex> deployment = %{
...> "apiVersion" => "apps/v1",
...> "kind" => "Deployment",
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> },
...> "name" => "nginx",
...> "namespace" => "test"
...> },
...> "spec" => %{
...> "replicas" => 2,
...> "selector" => %{
...> "matchLabels" => %{
...> "app" => "nginx"
...> }
...> },
...> "template" => %{
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> }
...> },
...> "spec" => %{
...> "containers" => %{
...> "image" => "nginx",
...> "name" => "nginx"
...> }
...> }
...> }
...> }
...> }
...> K8s.Client.replace(deployment)
%K8s.Client.Operation{
method: :put,
path: "/apis/apps/v1/namespaces/test/deployments/nginx",
resource: %{
"apiVersion" => "apps/v1",
"kind" => "Deployment",
"metadata" => %{
"labels" => %{
"app" => "nginx"
},
"name" => "nginx",
"namespace" => "test"
},
"spec" => %{
"replicas" => 2,
"selector" => %{
"matchLabels" => %{
"app" => "nginx"
}
},
"template" => %{
"metadata" => %{
"labels" => %{
"app" => "nginx"
}
},
"spec" => %{
"containers" => %{
"image" => "nginx",
"name" => "nginx"
}
}
}
}
}
}
"""
@spec replace(map()) :: operation_or_error
def replace(resource = %{}) do
path = Router.path_for(:put, resource)
operation_or_error(path, :put, resource)
end
@doc """
Returns a `DELETE` operation for a resource by manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Delete will delete a resource. Depending on the specific resource, child objects may or may not be garbage collected by the server. See notes on specific resource objects for details.
## Examples
iex> deployment = %{
...> "apiVersion" => "apps/v1",
...> "kind" => "Deployment",
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> },
...> "name" => "nginx",
...> "namespace" => "test"
...> },
...> "spec" => %{
...> "replicas" => 2,
...> "selector" => %{
...> "matchLabels" => %{
...> "app" => "nginx"
...> }
...> },
...> "template" => %{
...> "metadata" => %{
...> "labels" => %{
...> "app" => "nginx"
...> }
...> },
...> "spec" => %{
...> "containers" => %{
...> "image" => "nginx",
...> "name" => "nginx"
...> }
...> }
...> }
...> }
...> }
...> K8s.Client.delete(deployment)
%K8s.Client.Operation{
method: :delete,
path: "/apis/apps/v1/namespaces/test/deployments/nginx"
}
"""
@spec delete(map()) :: operation_or_error
def delete(resource = %{}) do
path = Router.path_for(:delete, resource)
operation_or_error(path, :delete, resource)
end
@doc """
Returns a `DELETE` operation for a resource by version, kind, name, and optionally namespace.
## Examples
iex> K8s.Client.delete("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Client.Operation{
method: :delete,
path: "/apis/apps/v1/namespaces/test/deployments/nginx"
}
"""
@spec delete(binary, binary, options | nil) :: operation_or_error
def delete(api_version, kind, opts) do
path = Router.path_for(:delete, api_version, kind, opts)
operation_or_error(path, :delete)
end
@doc """
Returns a `DELETE` collection operation for all instances of a cluster scoped resource kind.
## Examples
iex> K8s.Client.delete_all("extensions/v1beta1", "PodSecurityPolicy")
%K8s.Client.Operation{
method: :delete,
path: "/apis/extensions/v1beta1/podsecuritypolicies"
}
iex> K8s.Client.delete_all("storage.k8s.io/v1", "StorageClass")
%K8s.Client.Operation{
method: :delete,
path: "/apis/storage.k8s.io/v1/storageclasses"
}
"""
@spec delete_all(binary(), binary()) :: operation_or_error
def delete_all(api_version, kind) do
path = Router.path_for(:delete_collection, api_version, kind)
operation_or_error(path, :delete)
end
@doc """
Returns a `DELETE` collection operation for all instances of a resource kind in a specific namespace.
## Examples
iex> Client.delete_all("apps/v1beta1", "ControllerRevision", namespace: "default")
%K8s.Client.Operation{
method: :delete,
path: "/apis/apps/v1beta1/namespaces/default/controllerrevisions"
}
iex> Client.delete_all("apps/v1", "Deployment", namespace: "staging")
%K8s.Client.Operation{
method: :delete,
path: "/apis/apps/v1/namespaces/staging/deployments"
}
"""
@spec delete_all(binary(), binary(), namespace: binary()) :: operation_or_error
def delete_all(api_version, kind, namespace: namespace) do
path = Router.path_for(:delete_collection, api_version, kind, namespace: namespace)
operation_or_error(path, :delete)
end
@doc """
Returns a `GET` operation for a pod's logs given a manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace
## Examples
iex> pod = %{
...> "apiVersion" => "v1",
...> "kind" => "Pod",
...> "metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
...> "spec" => %{"containers" => %{"image" => "nginx"}}
...> }
...> K8s.Client.get_log(pod)
%K8s.Client.Operation{
method: :get,
path: "/api/v1/namespaces/test/pods/nginx-pod/log"
}
"""
@spec get_log(map()) :: operation_or_error
def get_log(resource = %{}) do
path = Router.path_for(:get_log, resource)
operation_or_error(path, :get, resource)
end
@doc """
Returns a `GET` operation for a pod's logs given a namespace and a pod name.
## Examples
iex> K8s.Client.get_log("v1", "Pod", namespace: "test", name: "nginx-pod")
%K8s.Client.Operation{
method: :get,
path: "/api/v1/namespaces/test/pods/nginx-pod/log"
}
"""
@spec get_log(binary, binary, options) :: operation_or_error
def get_log(api_version, kind, opts) do
path = Router.path_for(:get_log, api_version, kind, opts)
operation_or_error(path, :get)
end
@doc """
Returns a `GET` operation for a resource's status given a manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
## Examples
iex> pod = %{
...> "apiVersion" => "v1",
...> "kind" => "Pod",
...> "metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
...> "spec" => %{"containers" => %{"image" => "nginx"}}
...> }
...> K8s.Client.get_status(pod)
%K8s.Client.Operation{
method: :get,
path: "/api/v1/namespaces/test/pods/nginx-pod/status"
}
"""
@spec get_status(map()) :: operation_or_error
def get_status(resource = %{}) do
path = Router.path_for(:get_status, resource)
operation_or_error(path, :get, resource)
end
@doc """
Returns a `GET` operation for a resource's status by version, kind, name, and optionally namespace.
## Examples
iex> K8s.Client.get_status("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Client.Operation{
method: :get,
path: "/apis/apps/v1/namespaces/test/deployments/nginx/status"
}
"""
@spec get_status(binary, binary, options | nil) :: operation_or_error
def get_status(api_version, kind, opts \\ []) do
path = Router.path_for(:get_status, api_version, kind, opts)
operation_or_error(path, :get)
end
@doc """
Returns a `PATCH` operation for a resource's status given a manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
## Examples
iex> pod = %{
...> "apiVersion" => "v1",
...> "kind" => "Pod",
...> "metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
...> "spec" => %{"containers" => %{"image" => "nginx"}}
...> }
...> K8s.Client.patch_status(pod)
%K8s.Client.Operation{
method: :patch,
path: "/api/v1/namespaces/test/pods/nginx-pod/status",
resource: %{
"apiVersion" => "v1",
"kind" => "Pod",
"metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
"spec" => %{"containers" => %{"image" => "nginx"}}
}
}
"""
@spec patch_status(map()) :: operation_or_error
def patch_status(resource = %{}) do
path = Router.path_for(:patch_status, resource)
operation_or_error(path, :patch, resource)
end
@doc """
Returns a `PATCH` operation for a resource's status by version, kind, name, and optionally namespace.
## Examples
iex> K8s.Client.patch_status("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Client.Operation{
method: :patch,
path: "/apis/apps/v1/namespaces/test/deployments/nginx/status"
}
"""
@spec patch_status(binary, binary, options | nil) :: operation_or_error
def patch_status(api_version, kind, opts \\ []) do
path = Router.path_for(:patch_status, api_version, kind, opts)
operation_or_error(path, :patch)
end
@doc """
Returns a `PUT` operation for a resource's status given a manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
## Examples
iex> pod = %{
...> "apiVersion" => "v1",
...> "kind" => "Pod",
...> "metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
...> "spec" => %{"containers" => %{"image" => "nginx"}}
...> }
...> K8s.Client.put_status(pod)
%K8s.Client.Operation{
method: :put,
path: "/api/v1/namespaces/test/pods/nginx-pod/status",
resource: %{
"apiVersion" => "v1",
"kind" => "Pod",
"metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
"spec" => %{"containers" => %{"image" => "nginx"}}
}
}
"""
@spec put_status(map()) :: operation_or_error
def put_status(resource = %{}) do
path = Router.path_for(:put_status, resource)
operation_or_error(path, :put, resource)
end
@doc """
Returns a `PUT` operation for a resource's status by version, kind, name, and optionally namespace.
## Examples
iex> K8s.Client.put_status("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Client.Operation{
method: :put,
path: "/apis/apps/v1/namespaces/test/deployments/nginx/status"
}
"""
@spec put_status(binary, binary, options | nil) :: operation_or_error
def put_status(api_version, kind, opts \\ []) do
path = Router.path_for(:put_status, api_version, kind, opts)
operation_or_error(path, :put)
end
@doc """
Runs multiple operations asynchronously. Results are returned in the same order
as the given operations. Remaining operations are not canceled if one of them fails.
## Example
Get a list of pods, then map each one to an individual `GET` operation:
```elixir
# Get a config reference
conf = K8s.Conf.from_file "~/.kube/config"
# Get the pods
operation = K8s.Client.list("v1", "Pod", namespace: :all)
{:ok, %{"items" => pods}} = K8s.Client.run(operation, conf)
# Map each one to an individual `GET` operation.
operations = Enum.map(pods, fn(%{"metadata" => %{"name" => name, "namespace" => ns}}) ->
K8s.Client.get("v1", "Pod", namespace: ns, name: name)
end)
# Get the results asynchronously
results = K8s.Client.async(operations, conf)
```
"""
@spec async(list(Operation.t()), Conf.t()) :: list({:ok, struct} | {:error, struct})
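# Note: each task is awaited with Task.await/1's default timeout of 5 seconds.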
def async(operations, conf) do
operations
|> Enum.map(&Task.async(fn -> run(&1, conf) end))
|> Enum.map(&Task.await/1)
end
@doc """
Runs a `K8s.Client.Operation`.
## Examples
Running a list pods operation:
```elixir
conf = K8s.Conf.from_file "~/.kube/config"
operation = K8s.Client.list("v1", "Pod", namespace: :all)
{:ok, %{"items" => pods}} = K8s.Client.run(operation, conf)
```
Running a dry-run of a create deployment operation:
```elixir
conf = K8s.Conf.from_file "~/.kube/config"
deployment = %{
"apiVersion" => "apps/v1",
"kind" => "Deployment",
"metadata" => %{
"labels" => %{
"app" => "nginx"
},
"name" => "nginx",
"namespace" => "test"
},
"spec" => %{
"replicas" => 2,
"selector" => %{
"matchLabels" => %{
"app" => "nginx"
}
},
"template" => %{
"metadata" => %{
"labels" => %{
"app" => "nginx"
}
},
"spec" => %{
"containers" => %{
"image" => "nginx",
"name" => "nginx"
}
}
}
}
}
operation = K8s.Client.create(deployment)
# opts are passed through to HTTPoison as request options.
opts = [params: %{"dryRun" => "All"}]
:ok = K8s.Client.run(operation, conf, opts)
```
"""
@spec run(Operation.t(), Conf.t()) :: result
def run(request = %{}, config = %{}), do: run(request, config, [])
@doc """
See `run/2`
"""
@spec run(Operation.t(), Conf.t(), keyword()) :: result
def run(request = %{}, config = %{}, opts) when is_list(opts) do
request
|> build_http_req(config, request.resource, opts)
|> handle_response
end
@doc """
See `run/2`
"""
@spec run(Operation.t(), Conf.t(), map(), keyword() | nil) :: result
def run(request = %{}, config = %{}, body = %{}, opts \\ []) do
request
|> build_http_req(config, body, opts)
|> handle_response
end
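# Builds and performs the HTTP request: generates request options from the config,
# joins the cluster URL with the operation path, and JSON-encodes the body only for
# methods listed in @allow_http_body.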
@spec build_http_req(Operation.t(), Conf.t(), map(), keyword()) ::
{:ok, HTTPoison.Response.t() | HTTPoison.AsyncResponse.t()}
| {:error, HTTPoison.Error.t()}
defp build_http_req(request, config, body, opts) do
request_options = Conf.RequestOptions.generate(config)
url = Path.join(config.url, request.path)
http_headers = headers(request_options)
http_opts = Keyword.merge([ssl: request_options.ssl_options], opts)
case http_body(body, request.method) do
{:ok, http_body} ->
HTTPoison.request(request.method, url, http_body, http_headers, http_opts)
error ->
error
end
end
@spec http_body(any(), atom()) :: {:ok, binary} | {:error, binary}
defp http_body(body, _) when not is_map(body), do: {:ok, ""}
defp http_body(body = %{}, http_method) when http_method in @allow_http_body do
Jason.encode(body)
end
# Map bodies are dropped for methods that do not allow an HTTP body.
defp http_body(_body, _http_method), do: {:ok, ""}
@spec handle_response(
{:ok, HTTPoison.Response.t() | HTTPoison.AsyncResponse.t()}
| {:error, HTTPoison.Error.t()}
) :: :ok | {:ok, map()} | {:error, binary()}
defp handle_response(resp) do
case resp do
{:ok, %HTTPoison.Response{status_code: 200, body: body}} ->
{:ok, Jason.decode!(body)}
{:ok, %HTTPoison.Response{status_code: code}} when code in 201..299 ->
:ok
{:ok, %HTTPoison.Response{status_code: code, body: body}} when code >= 400 ->
{:error, "HTTP Error: #{code}; #{body}"}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, "HTTP Client Error: #{reason}"}
end
end
defp headers(ro = %Conf.RequestOptions{}) do
ro.headers ++ [{"Accept", "application/json"}, {"Content-Type", "application/json"}]
end
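# Wraps the router result in an Operation struct, passing through any
# {:error, msg} from path generation. The resource is attached only for
# methods that allow an HTTP body.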
@spec operation_or_error(binary, http_method, map | nil) :: operation_or_error
defp operation_or_error(path, method, resource \\ nil) do
operation_resource =
case method do
method when method in @allow_http_body -> resource
_ -> nil
end
case path do
{:error, msg} ->
{:error, msg}
path ->
%Operation{
path: path,
method: method,
resource: operation_resource
}
end
end
end
# ---- end of lib/k8s/client.ex ----
defmodule Scenic.Primitive.Group do
@moduledoc """
A container to hold other primitives.
Any styles placed on a group will be inherited by the primitives in the
group. Any transforms placed on a group will be multiplied into the transforms
in the primitives in the group.
## Data
`uids`
The data for a group is a list of internal uids for the primitives it contains.
You will not typically add these ids yourself. You should use the helper functions
with a callback to do that for you. See Usage below.
## Styles
The group is special in that it accepts all styles and transforms, even if they
are non-standard. These are then inherited by any primitives, including SceneRefs.
Any styles you place on the group itself will be inherited by the primitives
contained in the group. However, these styles will not be inherited by any
component in the group.
## Transforms
If you add a transform to a group, then everything in the group will also be
moved by that transform, including child components. This is a very handy way
to create some UI, then position, scale, or rotate it as needed without having
to adjust the inner elements.
## Usage
You should add/modify primitives via the helper functions in
[`Scenic.Primitives`](Scenic.Primitives.html#group/3)
```elixir
graph
|> group( fn(g) ->
g
|> rect( {200, 100}, fill: :blue )
|> text( "In a Group", fill: :yellow, translate: {20, 40} )
end,
translate: {100, 100},
font: :roboto
)
```
"""
use Scenic.Primitive
alias Scenic.Script
alias Scenic.Primitive
alias Scenic.Primitive.Style
# import IEx
@type t :: [pos_integer]
@type styles_t :: [:hidden | :scissor | atom]
@styles [:hidden, :scissor]
# ============================================================================
# data verification and serialization
@impl Primitive
@spec validate(ids :: [pos_integer]) ::
{:ok, ids :: [pos_integer]} | {:error, String.t()}
def validate(ids) when is_list(ids) do
case Enum.all?(ids, fn n -> is_integer(n) && n >= 0 end) do
true -> {:ok, ids}
false -> err_validation(ids)
end
end
def validate(data), do: err_validation(data)
defp err_validation(data) do
{
:error,
"""
#{IO.ANSI.red()}Invalid Group specification
Received: #{inspect(data)}
#{IO.ANSI.yellow()}
The data for a Group is a list of primitive ids.#{IO.ANSI.default_color()}
"""
}
end
# --------------------------------------------------------
@doc """
Returns a list of styles recognized by this primitive.
"""
@impl Primitive
@spec valid_styles() :: styles_t()
def valid_styles(), do: @styles
# --------------------------------------------------------
# compiling a group is a special case and is handled in Scenic.Graph.Compiler
@doc false
@impl Primitive
@spec compile(primitive :: Primitive.t(), styles :: Style.t()) :: Script.t()
def compile(%Primitive{module: __MODULE__}, _styles) do
raise "compiling a group is a special case and is handled in Scenic.Graph.Compiler"
end
# ============================================================================
# apis to manipulate the list of child ids
# ----------------------------------------------------------------------------
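# Inserts a child uid at the given index (per List.insert_at/3, -1 appends).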
def insert_at(%Primitive{module: __MODULE__, data: uid_list} = p, index, uid) do
Map.put(
p,
:data,
List.insert_at(uid_list, index, uid)
)
end
# ----------------------------------------------------------------------------
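# Removes the given uid from the group's list of children.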
def delete(%Primitive{module: __MODULE__, data: uid_list} = p, uid) do
Map.put(
p,
:data,
Enum.reject(uid_list, fn xid -> xid == uid end)
)
end
# ----------------------------------------------------------------------------
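# Shifts every child uid by the given offset.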
def increment(%Primitive{module: __MODULE__, data: uid_list} = p, offset) do
Map.put(
p,
:data,
Enum.map(uid_list, fn xid -> xid + offset end)
)
end
end
# ---- end of lib/scenic/primitive/group.ex ----
defmodule ExSieve.Node.Grouping do
@moduledoc false
defstruct conditions: nil, combinator: nil, groupings: []
@type t :: %__MODULE__{}
alias ExSieve.Node.{Grouping, Condition}
alias ExSieve.{Config, Utils}
@combinators ~w(or and)
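# Ransack-style query grammar: "m" is the combinator ("and"/"or"),
# "c" holds the conditions, and "g" holds nested groupings.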
@spec extract(%{binary => term}, atom, Config.t) :: t | {:error, :predicat_not_found | :value_is_empty}
def extract(%{"m" => m, "g" => g, "c" => conditions}, schema, config) when m in @combinators do
conditions
|> do_extract(schema, config, String.to_atom(m))
|> result(extract_groupings(g, schema, config))
end
def extract(%{"m" => m, "c" => conditions}, schema, config) when m in @combinators do
conditions |> do_extract(schema, config, String.to_atom(m)) |> result([])
end
def extract(%{"c" => conditions}, schema, config) do
conditions |> do_extract(schema, config) |> result([])
end
def extract(%{"m" => m} = conditions, schema, config) when m in @combinators do
conditions |> Map.delete("m") |> do_extract(schema, config, String.to_atom(m))
end
def extract(%{"g" => g}, schema, config) do
%Grouping{combinator: :and, conditions: []} |> result(extract_groupings(g, schema, config))
end
def extract(%{"g" => g, "m" => m}, schema, config) when m in @combinators do
%Grouping{combinator: String.to_atom(m), conditions: []} |> result(extract_groupings(g, schema, config))
end
def extract(params, schema, config), do: params |> do_extract(schema, config)
defp result({:error, reason}, _groupings), do: {:error, reason}
defp result(_grouping, {:error, reason}), do: {:error, reason}
defp result(grouping, groupings), do: %Grouping{grouping | groupings: groupings}
defp do_extract(params, schema, config, combinator \\ :and) do
case extract_conditions(params, schema, config) do
{:error, reason} -> {:error, reason}
conditions -> %Grouping{combinator: combinator, conditions: conditions}
end
end
defp extract_groupings(groupings, schema, config) do
groupings |> Enum.map(&extract(&1, schema, config)) |> Utils.get_error(config)
end
defp extract_conditions(params, schema, config) do
params |> Enum.map(&extract_condition(&1, schema)) |> Utils.get_error(config)
end
defp extract_condition({key, value}, schema), do: Condition.extract(key, value, schema)
end
# ---- end of lib/ex_sieve/node/grouping.ex ----
defmodule ExHal.Document do
@moduledoc """
A document is the representation of a single resource in HAL.
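## Example
A minimal sketch (the HAL string below is illustrative):
```
{:ok, doc} = ExHal.Document.parse(~s({"title": "hello", "_links": {"self": {"href": "http://example.com"}}}))
ExHal.Document.get_property(doc, "title")
# => "hello"
```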
"""
@opaque t :: %__MODULE__{}
alias ExHal.{Link, NsReg, Client}
defstruct properties: %{},
links: %{},
client: nil
@doc """
Returns a new `%ExHal.Document` representing the HAL document provided.
"""
def parse(hal_str, client \\ ExHal.client())
def parse(hal_str, client) when is_binary(hal_str) do
case Poison.Parser.parse(hal_str) do
{:ok, parsed} -> {:ok, from_parsed_hal(client, parsed)}
{:error, reason, _} -> {:error, reason}
r -> r
end
end
def parse(client, hal_str) do
parse(hal_str, client)
end
@doc """
Returns a new `%ExHal.Document` representing the HAL document provided.
"""
def parse!(hal_str, client \\ ExHal.client())
def parse!(hal_str, client) when is_binary(hal_str) do
{:ok, doc} = parse(client, hal_str)
doc
end
def parse!(client, hal_str) do
parse!(hal_str, client)
end
@doc """
Returns a string representation of this HAL document.
"""
def render!(doc) do
doc.properties
|> Map.merge(links_sections_to_json_map(doc))
|> Poison.encode!()
end
@spec from_parsed_hal(map) :: __MODULE__.t()
@spec from_parsed_hal(map, Client.t()) :: __MODULE__.t()
@spec from_parsed_hal(Client.t(), map) :: __MODULE__.t()
@doc """
Returns new ExHal.Document
"""
def from_parsed_hal(parsed_hal) do
from_parsed_hal(parsed_hal, ExHal.client())
end
def from_parsed_hal(parsed_hal, %ExHal.Client{} = client) do
%__MODULE__{
client: client,
properties: properties_in(parsed_hal),
links: links_in(client, parsed_hal)
}
end
def from_parsed_hal(client = %ExHal.Client{}, parsed_hal), do: from_parsed_hal(parsed_hal, client)
@doc """
Returns true iff the document contains at least one link with the specified rel.
"""
def has_link?(doc, rel) do
Map.has_key?(doc.links, rel)
end
@doc """
**Deprecated**
See to_json_map/1
"""
def to_json_hash(doc), do: to_json_map(doc)
@doc """
Returns a map that matches the shape of the intended JSON output.
"""
def to_json_map(doc) do
doc.properties
|> Map.merge(links_sections_to_json_map(doc))
end
@doc """
Returns `{:ok, <url of specified document>}` or `:error`.
"""
def url(a_doc, default_fn \\ fn _doc -> :error end) do
case ExHal.Locatable.url(a_doc) do
:error -> default_fn.(a_doc)
url -> url
end
end
# Access
@doc """
Fetches value of specified property or links whose `rel` matches
Returns `{:ok, <property value>}` if `name` identifies a property;
`{:ok, [%Link{}, ...]}` if `name` identifies a link;
`:error` otherwise
"""
def fetch(a_document, name) do
case get_lazy(a_document, name, fn -> :error end) do
:error -> :error
result -> {:ok, result}
end
end
@doc """
Returns the link or property of the specified name, or `default` if
neither is found.
"""
def get(a_doc, name, default \\ nil) do
get_lazy(a_doc, name, fn -> default end)
end
@doc """
Returns link or property of the specified name, or the result of `default_fun`
if neither is found.
"""
def get_lazy(a_doc, name, default_fun) do
get_property_lazy(a_doc, name, fn -> get_links_lazy(a_doc, name, default_fun) end)
end
@doc """
Returns property value when property exists or `default`
otherwise
"""
def get_property(a_doc, prop_name, default \\ nil) do
Map.get_lazy(a_doc.properties, prop_name, fn -> default end)
end
@doc """
Returns `<property value>` when property exists or result of `default_fun`
otherwise
"""
def get_property_lazy(a_doc, prop_name, default_fun) do
Map.get_lazy(a_doc.properties, prop_name, default_fun)
end
@doc """
Returns `[%Link{}...]` when the link exists, or `default` otherwise.
"""
def get_links(a_doc, link_name, default \\ []) do
Map.get(a_doc.links, link_name, default)
end
@doc """
Returns `[%Link{}...]` when link exists or result of `default_fun` otherwise.
"""
def get_links_lazy(a_doc, link_name, default_fun) do
Map.get_lazy(a_doc.links, link_name, default_fun)
end
# Modification
@doc """
Add or update a property to a Document.
Returns new ExHal.Document with the specified property set to the specified value.
"""
def put_property(doc, name, val) do
%{doc | properties: Map.put(doc.properties, name, val)}
end
@doc """
Add a link to a Document.
Returns new ExHal.Document with the specified link.
"""
def put_link(doc, rel, target, templated \\ false) do
new_rel_links =
Map.get(doc.links, rel, []) ++
[%ExHal.Link{rel: rel, href: target, templated: templated, name: nil}]
%{doc | links: Map.put(doc.links, rel, new_rel_links)}
end
defp links_sections_to_json_map(doc) do
{embedded, references} =
doc.links
|> Map.to_list()
|> Enum.flat_map(fn {_, v} -> v end)
|> Enum.split_with(&Link.embedded?(&1))
%{"_embedded" => render_links(embedded), "_links" => render_links(references)}
end
defp render_links(enum) do
enum
|> Enum.group_by(& &1.rel)
|> Map.to_list()
|> Enum.map(fn {rel, links} -> {rel, Enum.map(links, &Link.to_json_map(&1))} end)
|> Enum.map(fn {rel, fragments} -> {rel, unbox_single_fragments(fragments)} end)
|> Map.new()
end
defp properties_in(parsed_json) do
Map.drop(parsed_json, ["_links", "_embedded"])
end
defp unbox_single_fragments(fragments) do
case fragments do
[fragment] -> fragment
_ -> fragments
end
end
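# Builds the rel => [%Link{}] map: simple links from "_links" are merged with
# links derived from "_embedded" resources, then CURIEs are expanded against any
# declared namespaces.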
defp links_in(client, parsed_json) do
namespaces = NsReg.from_parsed_json(parsed_json)
embedded_links = embedded_links_in(client, parsed_json)
links = simple_links_in(parsed_json)
|> augment_simple_links_with_embedded_reprs(embedded_links)
|> backfill_missing_links(embedded_links)
|> expand_curies(namespaces)
Enum.group_by(links, fn a_link -> a_link.rel end)
end
defp augment_simple_links_with_embedded_reprs(links, embedded_links) do
links
|> Enum.map(fn link ->
case Enum.find(embedded_links, &(Link.equal?(&1, link))) do
nil -> link
embedded -> %{link | target: embedded.target}
end
end)
end
defp backfill_missing_links(links, embedded_links) do
embedded_links
|> Enum.reduce(links, fn embedded, links ->
case Enum.any?(links, &(Link.equal?(embedded, &1))) do
false -> [embedded | links]
_ -> links
end
end)
end
defp simple_links_in(parsed_json) do
case Map.fetch(parsed_json, "_links") do
{:ok, links} -> links_section_to_links(links)
_ -> []
end
end
defp links_section_to_links(links) do
Enum.flat_map(links, fn {rel, l} ->
List.wrap(l)
|> Enum.filter(& &1["href"])
|> Enum.map(&Link.from_links_entry(rel, &1))
end)
end
defp embedded_links_in(client, parsed_json) do
case Map.fetch(parsed_json, "_embedded") do
{:ok, links} -> embedded_section_to_links(client, links)
_ -> []
end
end
defp embedded_section_to_links(client, links) do
Enum.flat_map(links, fn {rel, l} ->
List.wrap(l)
|> Enum.map(&Link.from_embedded(rel, __MODULE__.from_parsed_hal(client, &1)))
end)
end
defp expand_curies(links, namespaces) do
Enum.flat_map(links, &Link.expand_curie(&1, namespaces))
end
end
defimpl ExHal.Locatable, for: ExHal.Document do
alias ExHal.Link
def url(a_doc) do
case ExHal.get_links_lazy(a_doc, "self", fn -> :error end) do
:error -> :error
[link | _] -> Link.target_url(link)
end
end
end
defimpl Poison.Encoder, for: ExHal.Document do
alias ExHal.Document
def encode(doc, options) do
Poison.Encoder.Map.encode(Document.to_json_map(doc), options)
end
end
# ---- end of lib/exhal/document.ex ----
defmodule Q do
@moduledoc """
Documentation for Q ( Elixir Quantum module ).
"""
require Math
@doc """
|0> qubit ... ( 1, 0 )
## Examples
iex> Q.q0()
%Array{array: [1, 0], shape: {2, nil}}
iex> Q.q0().array
[ 1, 0 ]
"""
def q0(), do: Numexy.new( [ 1, 0 ] )
@doc """
|1> qubit ... ( 0, 1 )
## Examples
iex> Q.q1()
%Array{array: [0, 1], shape: {2, nil}}
iex> Q.q1().array
[ 0, 1 ]
"""
def q1(), do: Numexy.new( [ 0, 1 ] )
@doc """
1 / sqrt( 2 )
## Examples
iex> Q.n07()
0.7071067811865475
"""
def n07(), do: 1 / Math.sqrt( 2 )
@doc """
Converts a number, number list, or number matrix to bits, snapping floating-point artifacts back to -1, 0, or 1.
## Examples
iex> Q.to_bit( 0.9999999999999998 )
1
iex> Q.to_bit( -0.9999999999999998 )
-1
iex> Q.to_bit( 0.0 )
0
iex> Q.to_bit( 0.7071067811865475 )
0.7071067811865475
iex> Q.to_bit( [ 0, 1 ] )
[ 0, 1 ]
iex> Q.to_bit( [ 0.9999999999999998, 0 ] )
[ 1, 0 ]
iex> Q.to_bit( [ 0.9999999999999998, 0, -0.9999999999999998, 0 ] )
[ 1, 0, -1, 0 ]
iex> Q.to_bit( [ [ 0.9999999999999998, 0 ], [ -0.9999999999999998, 0 ] ] )
[ [ 1, 0 ], [ -1, 0 ] ]
"""
def to_bit( 0.9999999999999998 ), do: 1
def to_bit( -0.9999999999999998 ), do: -1
def to_bit( 0.0 ), do: 0
def to_bit( value ) when is_list( value ) do
case value |> List.first |> is_list do
true -> value |> Enum.map( &( &1 |> Enum.map( fn y -> to_bit( y ) end ) ) )
false -> value |> Enum.map( &( to_bit( &1 ) ) )
end
end
def to_bit( %Array{ array: list, shape: _ } ), do: list |> to_bit |> Numexy.new
def to_bit( value ), do: value
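# Note: 0.9999999999999998 is exactly what the 1/sqrt(2) round trips above can
# produce (e.g. applying the Hadamard gate twice), so it is snapped back to ±1.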
@doc """
X gate.
## Examples
iex> Q.x( Q.q0() )
Q.q1()
iex> Q.x( Q.q1() )
Q.q0()
"""
def x( qubit ), do: Numexy.dot( xx(), qubit )
@doc """
for X gate 2x2 matrix ... ( ( 0, 1 ), ( 1, 0 ) )
"""
def xx(), do: Numexy.new [ [ 0, 1 ], [ 1, 0 ] ]
@doc """
Z gate.
## Examples
iex> Q.z( Q.q0() )
Q.q0()
iex> Q.z( Q.q1() )
-1 |> Numexy.mul( Q.q1() )
iex> Q.z( Numexy.new [ 0, 0, 0, 1 ] )
Q.tensordot( Q.q1(), -1 |> Numexy.mul( Q.q1() ), 0 )
iex> Q.z( Numexy.new [ 0, 0, 0, 1 ] )
Numexy.new [ 0, 0, 0, -1 ]
"""
def z( %Array{ array: _list, shape: { 2, nil } } = qubit ), do: Numexy.dot( z2x(), qubit )
def z( %Array{ array: _list, shape: { 4, nil } } = qubit ), do: Numexy.dot( z4x(), qubit )
@doc """
for Z gate 2x2 matrix ... ( ( 1, 0 ), ( 0, -1 ) )
"""
def z2x() do
Numexy.new [
[ 1, 0 ],
[ 0, -1 ]
]
end
@doc """
for Z gate 4x4 matrix ... ( ( 1, 0, 0, 0 ), ( 0, 1, 0, 0 ), ( 0, 0, 1, 0 ), ( 0, 0, 0, -1 ) )
"""
def z4x() do
Numexy.new [
[ 1, 0, 0, 0 ],
[ 0, 1, 0, 0 ],
[ 0, 0, 1, 0 ],
[ 0, 0, 0, -1 ],
]
end
@doc """
Hadamard gate.
## Examples
iex> Q.h( Q.q0() )
Numexy.add( Q.n07() |> Numexy.mul( Q.q0() ), Q.n07() |> Numexy.mul( Q.q1() ) )
iex> Q.h( Q.q0() )
Numexy.new [ Q.n07(), Q.n07() ]
iex> Q.h( Q.q1() )
Numexy.sub( Q.n07() |> Numexy.mul( Q.q0() ), Q.n07() |> Numexy.mul( Q.q1() ) )
iex> Q.h( Q.q1() )
Numexy.new [ Q.n07(), -Q.n07() ]
"""
def h( qubit ), do: Numexy.mul( hx(), 1 / Math.sqrt( 2 ) ) |> Numexy.dot( qubit ) |> to_bit
@doc """
for Hadamard gate matrix ... ( ( 1, 1 ), ( 1, -1 ) )
"""
def hx(), do: Numexy.new [ [ 1, 1 ], [ 1, -1 ] ]
@doc """
Controlled NOT gate.
## Examples
iex> Q.cnot( Q.q0(), Q.q0() ) # CNOT|00> = |00>
Numexy.new [ 1, 0, 0, 0 ]
iex> Q.cnot( Q.q0(), Q.q1() ) # CNOT|01> = |01>
Numexy.new [ 0, 1, 0, 0 ]
iex> Q.cnot( Q.q1(), Q.q0() ) # CNOT|10> = |11>
Numexy.new [ 0, 0, 0, 1 ]
iex> Q.cnot( Q.q1(), Q.q1() ) # CNOT|11> = |10>
Numexy.new [ 0, 0, 1, 0 ]
"""
def cnot( qubit1, qubit2 ), do: Numexy.dot( cx(), tensordot( qubit1, qubit2, 0 ) )
@doc """
for Controlled NOT gate 4x4 matrix ... ( ( 1, 0, 0, 0 ), ( 0, 1, 0, 0 ), ( 0, 0, 0, 1 ), ( 0, 0, 1, 0 ) )
"""
def cx() do
Numexy.new [
[ 1, 0, 0, 0 ],
[ 0, 1, 0, 0 ],
[ 0, 0, 0, 1 ],
[ 0, 0, 1, 0 ],
]
end
@doc """
Calculate tensor product.<br>
TODO: Later, transfer to Numexy github
## Examples
iex> Q.tensordot( Q.q0(), Q.q0(), 0 )
Numexy.new( [ 1, 0, 0, 0 ] )
iex> Q.tensordot( Q.q0(), Q.q1(), 0 )
Numexy.new( [ 0, 1, 0, 0 ] )
iex> Q.tensordot( Q.q1(), Q.q0(), 0 )
Numexy.new( [ 0, 0, 1, 0 ] )
iex> Q.tensordot( Q.q1(), Q.q1(), 0 )
Numexy.new( [ 0, 0, 0, 1 ] )
"""
def tensordot( %Array{ array: xm, shape: _xm_shape }, %Array{ array: ym, shape: _ym_shape }, _axes ) do
xv = List.flatten( xm )
yv = List.flatten( ym )
xv
|> Enum.map( fn x -> yv |> Enum.map( fn y -> x * y end ) end )
|> List.flatten
|> Numexy.new
end
end
# ---- end of lib/q.ex ----
defmodule Aino do
@moduledoc """
Aino, an experimental HTTP framework
To load Aino, add it to your supervision tree. `callback` and `port` are both required options.
```elixir
children = [
{Aino, [callback: Aino.Handler, port: 3000]}
]
```
The `callback` should be an `Aino.Handler`, which has a single `handle/1` function that
processes the request.
"""
@behaviour :elli_handler
require Logger
@doc false
def child_spec(opts) do
opts = [
callback: Aino,
callback_args: opts,
port: opts[:port]
]
%{
id: __MODULE__,
start: {:elli, :start_link, [opts]},
type: :worker,
restart: :permanent,
shutdown: 500
}
end
@impl true
def init(request, _args) do
case :elli_request.get_header("Upgrade", request) do
"websocket" ->
{:ok, :handover}
_ ->
:ignore
end
end
@impl true
def handle(request, options) do
try do
request
|> handle_request(options)
|> handle_response()
rescue
exception ->
Logger.error(Exception.format(:error, exception, __STACKTRACE__))
assigns = %{
exception: Exception.format(:error, exception, __STACKTRACE__)
}
{500, [{"Content-Type", "text/html"}], Aino.Exception.render(assigns)}
end
end
defp handle_request(request, options) do
callback = options[:callback]
token =
request
|> Aino.Request.from_record()
|> Aino.Token.from_request()
|> Map.put(:otp_app, options[:otp_app])
|> Map.put(:scheme, scheme(options))
|> Map.put(:host, options[:host])
|> Map.put(:port, options[:port])
|> Map.put(:default_assigns, %{})
|> Map.put(:environment, options[:environment])
case :elli_request.get_header("Upgrade", request) do
"websocket" ->
Aino.WebSocket.handle(token, callback)
Map.put(token, :handover, true)
_ ->
callback.handle(token)
end
end
defp handle_response(%{handover: true}) do
{:close, <<>>}
end
defp handle_response(token) do
required_keys = [:response_status, :response_headers, :response_body]
case Enum.all?(required_keys, fn key -> Map.has_key?(token, key) end) do
true ->
{token.response_status, token.response_headers, token.response_body}
false ->
missing_keys = required_keys -- Map.keys(token)
raise "Token is missing required keys - #{inspect(missing_keys)}"
end
end
defp scheme(options), do: options[:scheme] || :http
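# elli's :request_complete event data carries timing information in its fifth
# element; the timestamps are in native time units.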
@impl true
def handle_event(:request_complete, data, _args) do
{timings, _} = Enum.at(data, 4)
diff = timings[:request_end] - timings[:request_start]
microseconds = System.convert_time_unit(diff, :native, :microsecond)
if microseconds > 1_000 do
milliseconds = System.convert_time_unit(diff, :native, :millisecond)
Logger.info("Request complete in #{milliseconds}ms")
else
Logger.info("Request complete in #{microseconds}μs")
end
:ok
end
def handle_event(:request_error, data, _args) do
Logger.error("Internal server error, #{inspect(data)}")
:ok
end
def handle_event(:elli_startup, _data, opts) do
Logger.info("Aino started on #{scheme(opts)}://#{opts[:host]}:#{opts[:port]}")
:ok
end
def handle_event(_event, _data, _args) do
:ok
end
end
defmodule Aino.Exception do
@moduledoc false
# Compiles the error page into a function for calling in `Aino`
require EEx
EEx.function_from_file(:def, :render, "lib/aino/exception.html.eex", [:assigns])
end
# ---- end of lib/aino.ex ----
defmodule Collision.Polygon.Edge do
@moduledoc """
An edge; the connection between two vertices.
Represented as a point, the next point that its connected to, and the
length between them.
"""
defstruct point: nil, next: nil, length: nil
alias Collision.Polygon.Edge
alias Collision.Polygon.Vertex
alias Collision.Vector.Vector2
@type t :: %__MODULE__{point: Vertex.t | nil, next: Vertex.t | nil, length: number | nil}
@doc """
Build an edge from a pair of vertices.
Returns: Edge.t | {:error, String.t}
## Example
iex> Edge.from_vertex_pair({%Vertex{x: 0, y: 0}, %Vertex{x: 0, y: 4}})
%Edge{point: %Vertex{x: 0, y: 0}, next: %Vertex{x: 0, y: 4}, length: 4.0}
"""
@spec from_vertex_pair({Vertex.t, Vertex.t}) :: t | {:error, String.t}
def from_vertex_pair({point1, point1}), do: {:error, "Same point"}
def from_vertex_pair({point1, point2} = points) do
edge_length = calculate_length(points)
%Edge{point: point1, next: point2, length: edge_length}
end
@doc """
Calculate the distance between two vertices.
Returns: float
## Examples
iex> Edge.calculate_length({%Vertex{x: 0, y: 0}, %Vertex{x: 0, y: 4}})
4.0
iex> Edge.calculate_length({%Vertex{x: 3, y: 0}, %Vertex{x: 0, y: 4}})
5.0
"""
@spec calculate_length({%{x: number, y: number}, %{x: number, y: number}}) :: float
def calculate_length({%{x: x1, y: y1}, %{x: x2, y: y2}}) do
sum_of_squares = :math.pow(x2 - x1, 2) + :math.pow(y2 - y1, 2)
:math.sqrt(sum_of_squares)
end
@doc """
Calculate the angle between three vertices, A -> B -> C,
based on the edges AB and BC.
Returns: angle B, in radians
## Example
iex> a = Edge.from_vertex_pair({%Vertex{x: 4, y: 4}, %Vertex{x: 0, y: 4}})
iex> b = Edge.from_vertex_pair({%Vertex{x: 0, y: 4}, %Vertex{x: 0, y: 0}})
iex> Edge.calculate_angle(a, b)
:math.pi / 2
"""
@spec calculate_angle([Edge.t]) :: float | {:error, String.t}
def calculate_angle([edge1, edge2]) do
calculate_angle(edge1, edge2)
end
@spec calculate_angle(Edge.t, Edge.t) :: float | {:error, String.t}
def calculate_angle(
%Edge{point: %{x: _x1, y: _y1},
next: %{x: x2, y: y2}} = edge1,
%Edge{point: %{x: x2, y: y2},
next: %{x: _x3, y: _y3}} = edge2
) do
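# :math.atan2(cross, dot) yields the signed angle between the two edge vectors;
# the case below maps it onto a non-negative angle.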
vector_1 = Vector2.from_points(edge1.next, edge1.point)
vector_2 = Vector2.from_points(edge2.point, edge2.next)
cross = (vector_1.x * vector_2.y) - (vector_1.y * vector_2.x)
dot = Vector.dot_product(vector_1, vector_2)
angle = :math.atan2(cross, dot)
case angle do
a when a > 0 -> (2 * :math.pi) - a
_ -> abs(angle)
end
end
defimpl String.Chars, for: Edge do
@spec to_string(Edge.t) :: String.t
def to_string(%Edge{} = e) do
"#{e.point} -> #{e.next}"
end
end
end
# ---- end of lib/collision/polygon/edge.ex ----
defmodule Dmage.Range.Calculator do
@faces 6
@defence 3
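# Expected-value model: hits/2 returns the mean number of successes when rolling
# `dice` d6, each die succeeding on `eyes` of its 6 faces.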
def probable_damage_in_open([attacks, skill, damage_normal, damage_crit, save]) do
hits = attacks(attacks, skill)
saves = saves(@defence, save, 0)
resolve(hits, saves, {damage_normal, damage_crit})
|> Tuple.sum()
end
def probable_damage_in_cover([attacks, skill, damage_normal, damage_crit, save]) do
hits = attacks(attacks, skill)
saves = saves(@defence, save, 1)
resolve(hits, saves, {damage_normal, damage_crit})
|> Tuple.sum()
end
def hits(dice, _eyes) when dice < 0, do: error "dice cannot be negative"
def hits(_dice, eyes) when eyes < 0, do: error "eyes cannot be negative"
def hits(_dice, eyes) when eyes > @faces, do: error "eyes cannot exceed #{@faces}"
def hits(dice, eyes) do
dice * (eyes / @faces)
end
def attacks(dice, _skill) when dice < 0, do: error "dice cannot be negative"
def attacks(_dice, skill) when skill < 2, do: error "skill cannot be less than 2"
def attacks(_dice, skill) when skill > @faces, do: error "skill cannot exceed #{@faces}"
def attacks(dice, skill) do
normal = hits(dice, @faces - skill)
crit = hits(dice, 1)
{normal, crit}
end
def saves(dice, _save, _retained) when dice < 0, do: error "dice cannot be negative"
def saves(_dice, save, _retained) when save < 0, do: error "save cannot be negative"
def saves(_dice, save, _retained) when save > @faces, do: error "save cannot be greater than #{@faces}"
def saves(_dice, save, _retained) when save < 1, do: error "save cannot be less than 1"
def saves(_dice, _save, retained) when retained > @defence, do: error "retained cannot be greater than #{@defence}"
def saves(defence, save, retained) when retained > 0 do
{normal, crit} = saves(defence - retained, save, 0)
{normal + retained, crit}
end
def saves(defence, save, _retained) do
normal = hits(defence, @faces - save)
crit = hits(defence, 1)
{normal, crit}
end
def resolve({hits_normal, hits_crit}, {saves_normal, saves_crit}, {damage_normal, damage_crit}) do
damage_normal = damage(hits_normal - saves_normal, damage_normal)
damage_crit = damage(hits_crit - saves_crit, damage_crit)
{damage_normal, damage_crit}
end
def damage(hits, _damage) when hits <= 0, do: 0.0
def damage(_hits, damage) when damage <= 0, do: 0.0
def damage(hits, damage) do
hits * damage
|> Float.round(2)
end
defp error(msg) do
{:error, msg}
end
end
# ---- end of lib/dmage/range.ex ----
defmodule SimpleMqtt do
alias SimpleMqtt.Subscriptions
use GenServer
require Logger
@type package_identifier() :: 0x0001..0xFFFF | nil
@type topic() :: String.t()
@type topic_filter() :: String.t()
@type payload() :: binary() | nil
@moduledoc """
SimpleMqtt is a basic, single-node pub-sub implementation where publishers and subscribers use topics and topic filters
compatible with MQTT.
It cannot replace a real MQTT broker, but can be used in a simple IoT device, with multiple local sensors and actuators that have to communicate with each other.
"""
@doc """
Starts new Simple MQTT server and links it to the current process
"""
def start_link() do
GenServer.start_link(__MODULE__, nil)
end
@doc """
Subscribes the current process to the given list of topics. Each item in the list must be a valid MQTT filter.
## Examples
In the following example, the current process subscribes to two topic filters:
```
{:ok, pid} = SimpleMqtt.start_link()
:ok = SimpleMqtt.subscribe(pid, ["things/sensor_1/+", "things/sensor_2/+"])
```
If the process needs to monitor one more topic filter, it can call `subscribe` again. After this call, the current process
will be subscribed to three topic filters.
```
:ok = SimpleMqtt.subscribe(pid, ["things/sensor_3/+"])
```
"""
@spec subscribe(pid(), [topic_filter()]) :: :ok
def subscribe(pid, topics) do
Logger.debug("Subscribed process #{inspect(pid)} to topics #{inspect(topics)}")
GenServer.call(pid, {:subscribe, topics})
end
@doc """
Unsubscribes the current process from the given list of topics.
## Examples
In the following example, the current process starts the Simple MQTT server, subscribes to two topic filters,
and then unsubscribes from the second one. It will still receive messages published to a topic that matches the first
filter.
```
{:ok, pid} = SimpleMqtt.start_link()
:ok = SimpleMqtt.subscribe(pid, ["things/sensor_1/+", "things/sensor_2/+"])
:ok = SimpleMqtt.unsubscribe(pid, ["things/sensor_2/+"])
```
In the second example, the current process unsubscribes from all topics. It will no longer receive any messages.
```
{:ok, pid} = SimpleMqtt.start_link()
:ok = SimpleMqtt.subscribe(pid, ["things/sensor_1/+", "things/sensor_2/+"])
:ok = SimpleMqtt.unsubscribe(pid, :all)
```
"""
@spec unsubscribe(pid(), [topic_filter()] | :all) :: :ok
def unsubscribe(pid, topics) do
Logger.debug("Unsubscribed process #{inspect(pid)} from topics #{inspect(topics)}")
GenServer.call(pid, {:unsubscribe, topics})
end
@doc """
Publishes message to the given topic.
## Examples
```
{:ok, pid} = SimpleMqtt.start_link()
:ok = SimpleMqtt.publish(pid, "things/sensor_1/temperature", "34.5")
```
"""
@spec publish(pid(), topic(), payload()) :: :ok
def publish(pid, topic, payload) do
Logger.info("Publishing message to topic #{topic}")
GenServer.cast(pid, {:publish, topic, payload})
end
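# Matching subscribers receive each published message as a
# {:simple_mqtt, topic, payload} tuple (see handle_cast/2 below).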
@impl true
def init(_) do
{:ok, {Subscriptions.new(), %{}}}
end
@impl true
def handle_call({:subscribe, topics}, {from, _}, {subscriptions, monitors} = state) do
case Subscriptions.subscribe(subscriptions, from, topics) do
:error -> {:reply, :error, state}
new_subscriptions ->
# Monitor each subscriber only once, even across repeated subscribe calls.
new_monitors = Map.put_new_lazy(monitors, from, fn -> Process.monitor(from) end)
{:reply, :ok, {new_subscriptions, new_monitors}}
end
end
@impl true
def handle_call({:unsubscribe, topics}, {from, _}, {subscriptions, monitors} = state) do
case Subscriptions.unsubscribe(subscriptions, from, topics) do
:error -> {:reply, :error, state}
{:empty, new_subscriptions} ->
new_monitors = case Map.fetch(monitors, from) do
{:ok, monitor_ref} ->
Process.demonitor(monitor_ref)
Map.delete(monitors, from)
_ -> monitors
end
{:reply, :ok, {new_subscriptions, new_monitors}}
{:not_empty, new_subscriptions} ->
{:reply, :ok, {new_subscriptions, monitors}}
end
end
@impl true
def handle_cast({:publish, topic, payload}, {subscriptions, _} = state) do
case Subscriptions.list_matched(subscriptions, topic) do
:error -> {:noreply, state}
pids ->
for pid <- pids do
Logger.debug("Sending message published to topic #{topic} to subscriber #{inspect(pid)}")
send(pid, {:simple_mqtt, topic, payload})
end
{:noreply, state}
end
end
@impl true
def handle_info({:DOWN, _ref, :process, pid, _reason}, {subscriptions, monitors}) do
Logger.info("Subscriber #{inspect(pid)} exited. Removing its subscriptions")
new_monitors = Map.delete(monitors, pid)
case Subscriptions.unsubscribe(subscriptions, pid, :all) do
:error -> {:noreply, {subscriptions, new_monitors}}
{_, new_subscriptions} -> {:noreply, {new_subscriptions, new_monitors}}
end
end
end
# ---- end of lib/simple_mqtt.ex ----
defmodule AmqpDirector do
@moduledoc """
Documentation for AmqpDirector.
This module wraps the Erlang `amqp_director` library and is intended to be used from Elixir applications.
"""
require AmqpDirector.Queues
defmodule Definitions do
require Record
Record.defrecord(
:basic_deliver,
:"basic.deliver",
Record.extract(:"basic.deliver", from_lib: "amqp_client/include/amqp_client.hrl")
)
Record.defrecord(
:amqp_msg,
Record.extract(:amqp_msg, from_lib: "amqp_client/include/amqp_client.hrl")
)
Record.defrecord(
:p_basic,
:P_basic,
Record.extract(:P_basic, from_lib: "amqp_client/include/amqp_client.hrl")
)
@type basic_deliver ::
record(:basic_deliver,
consumer_tag: String.t(),
delivery_tag: String.t(),
redelivered: boolean(),
exchange: String.t(),
routing_key: String.t()
)
@type p_basic ::
record(:p_basic,
content_type: String.t(),
content_encoding: String.t(),
headers: amqp_table(),
delivery_mode: any,
priority: any,
correlation_id: String.t(),
reply_to: String.t(),
expiration: any,
message_id: any,
timestamp: any,
type: any,
user_id: any,
app_id: any,
cluster_id: any
)
@type amqp_msg :: record(:amqp_msg, props: p_basic(), payload: String.t())
# copied from rabbit_common rabbit_framing_amqp_0_9_1.erl
@type amqp_field_type() ::
:longstr
| :signedint
| :decimal
| :timestamp
| :unsignedbyte
| :unsignedshort
| :unsignedint
| :table
| :byte
| :double
| :float
| :long
| :short
| :bool
| :binary
| :void
| :array
@type amqp_value() ::
binary # longstr, binary
| integer() # signedint
| {non_neg_integer(), non_neg_integer()} # decimal
| amqp_table()
| amqp_array()
| byte() # byte
| float() # double
| integer() # long, short
| boolean() # bool
| binary() # binary
| :undefined # void
| non_neg_integer() # timestamp
@type amqp_table() :: [{binary, amqp_field_type(), amqp_value()}]
@type amqp_array() :: [{amqp_field_type(), amqp_value()}]
end
@typedoc "RabbitMQ broker connection options."
@type connection_option ::
{:host, String.t()}
| {:port, non_neg_integer}
| {:username, String.t()}
| {:password, String.t()}
| {:virtual_host, String.t()}
@typedoc "The content type of the AMQP message payload."
@type content_type :: String.t()
@typedoc "The type that a AMQP RPC Server handler function must return."
@type handler_return_type ::
{:reply, payload :: binary | {binary, Definitions.amqp_table()}, content_type}
| :reject
| :reject_no_requeue
| {:reject_dump_msg, String.t()}
| :ack
@typedoc """
The handler function type for the AMQP RPC Server. The handler can be of either arity 3 or arity 1. If the handler is of arity 3, `amqp_director` will
deconstruct the message and pass only the payload and content type to the handler. If the handler is of arity 1, it will be called
with a raw AMQP message record.
"""
@type handler ::
(payload :: binary, content_type, type :: String.t() -> handler_return_type)
| (raw_msg :: {basic_deliver :: Definitions.basic_deliver(), amqp_msg :: Definitions.amqp_msg()} -> handler_return_type)
@typedoc """
AMQP RPC Server configuration options.
* `:consume_queue` - Specifies the name of the queue on which the server will consume messages.
* `:consumer_tag` - Specifies the tag that the server will be identified when listening on the queue.
* `:queue_definitions` - A list of instructions for setting up the queues and exchanges. The RPC Server will call these during its
initialization. The instructions should be created using `exchange_declare/2`, `queue_declare/2` and `queue_bind/3`. E.g.
```
{:queue_definitions, [
AmqpDirector.exchange_declare("my_exchange", type: "topic"),
AmqpDirector.queue_declare("my_queue", exclusive: true),
AmqpDirector.queue_bind("my_queue", "my_exchange", "some.topic.*")
]}
```
* `:no_ack` - Specifies if the server should _NOT_ auto-acknowledge consumed messages. Defaults to `false`.
* `:qos` - Specifies the prefetch count on the consume queue. Defaults to `2`.
* `:reply_persistent` - Specifies the delivery mode for the AMQP replies. Setting this to `true` will make the broker log the
messages on disk. See AMQP specification for more information. Defaults to `false`
"""
@type server_option ::
{:consume_queue, String.t()}
| {:consumer_tag, String.t()}
| {:queue_definitions, list(queue_definition)}
| {:no_ack, boolean}
| {:qos, number}
| {:reply_persistent, boolean}
@typedoc """
AMQP RPC Client configuration options.
* `:app_id` - Specifies the identifier of the client.
* `:queue_definitions` - A list of instructions for setting up the queues and exchanges. The RPC Client will call these during its
initialization. The instructions should be created using `exchange_declare/2`, `queue_declare/2` and `queue_bind/3`. E.g.
```
{:queue_definitions, [
AmqpDirector.exchange_declare("my_exchange", type: "topic"),
AmqpDirector.queue_declare("my_queue", exclusive: true),
AmqpDirector.queue_bind("my_queue", "my_exchange", "some.topic.*")
]}
```
* `:reply_queue` - Allows naming for the reply queue. Defaults to empty name, making the RabbitMQ broker auto-generate the name.
* `:no_ack` - Specifies if the client should _NOT_ auto-acknowledge replies. Defaults to `false`.
* `:rabbitmq_direct_reply` - use pseudo-queue `amq.rabbitmq.reply-to` instead of setting up a new queue to consume from. Only applicable when connecting to a `rabbitmq` server. By default not present.
"""
@type client_option ::
{:app_id, String.t()}
| {:queue_definitions, list(queue_definition)}
| {:reply_queue, String.t()}
| {:no_ack, boolean}
| :rabbitmq_direct_reply
@typedoc """
AMQP RPC Pull Client configuration options.
* `:app_id` - Specifies the identifier of the client.
* `:queue_definitions` - A list of instructions for setting up the queues and exchanges. The RPC Client will call these during its
initialization. The instructions should be created using `exchange_declare/2`, `queue_declare/2` and `queue_bind/3`. E.g.
```
{:queue_definitions, [
AmqpDirector.exchange_declare("my_exchange", type: "topic"),
AmqpDirector.queue_declare("my_queue", exclusive: true),
AmqpDirector.queue_bind("my_queue", "my_exchange", "some.topic.*")
]}
```
"""
@type pull_client_option ::
{:app_id, String.t()}
| {:queue_definitions, list(queue_definition)}
@typedoc "Queue definition instructions."
@type queue_definition :: exchange_declare | queue_declare | queue_bind
@doc """
Creates a child specification for an AMQP RPC server.
This specification allows for RPC servers to be nested under any supervisor in the application using AmqpDirector. The RPC Server
will initialize the queues it is instructed to and will then consume messages on the queue specified. The handler function will
be called to handle each request. See `t:server_option/0` for configuration options and `t:handler/0` for the type spec
of the handler function.
The server handles reconnecting by itself.
"""
@spec server_child_spec(
atom,
handler,
list(connection_option),
non_neg_integer,
list(server_option)
) :: Supervisor.child_spec()
def server_child_spec(name, handler, connection_info, count, config) do
connection_info
|> Keyword.update!(:host, &String.to_charlist/1)
|> :amqp_director.parse_connection_parameters()
|> (fn connection ->
:amqp_director.server_child_spec(name, handler, connection, count, config)
end).()
|> old_spec_to_new
end
@doc """
Creates a child specification for an AMQP RPC client.
This specification allows for RPC clients to be nested under any supervisor in the application using AmqpDirector. The RPC
client can perform queue initialization. It will also create a reply queue to consume replies on. The client can then be used
using `AmqpDirector.Client` module API. See `t:client_option/0` for configuration options.
The client handles reconnecting by itself.
"""
@spec client_child_spec(atom, list(connection_option), list(client_option)) ::
Supervisor.child_spec()
def client_child_spec(name, connection_info, config) do
connection_info
|> Keyword.update!(:host, &String.to_charlist/1)
|> :amqp_director.parse_connection_parameters()
|> (fn connection -> :amqp_director.ad_client_child_spec(name, connection, config) end).()
|> old_spec_to_new
end
@doc """
Creates a child specification for an AMQP RPC pull client.
This specification allows for RPC clients to be nested under any supervisor in the application using AmqpDirector. The pull client
uses the Synchronous Pull (`#basic.get{}`) over AMQP. The client can then be used using `AmqpDirector.PullClient` module API.
See `t:pull_client_option/0` for configuration options.
The client handles reconnecting by itself.
"""
@spec pull_client_child_spec(atom, list(connection_option), list(pull_client_option)) ::
Supervisor.child_spec()
def pull_client_child_spec(name, connection_info, config) do
connection_info
|> Keyword.update!(:host, &String.to_charlist/1)
|> :amqp_director.parse_connection_parameters()
|> (fn connection -> :amqp_director.sp_client_child_spec(name, connection, config) end).()
|> old_spec_to_new
end
@typep queue_declare :: AmqpDirector.Queues.queue_declare()
@doc """
Declares a queue on the AMQP Broker.
This function is intended to be used within the `:queue_definitions` configuration parameter of a client or a server. See
`t:client_option/0` or `t:server_option/0` for details.
Available options are: `:passive`, `:durable`, `:exclusive`, `:auto_delete` and `:arguments`. See AMQP specification for details on
queue declaration.
"""
@spec queue_declare(String.t(), Keyword.t()) :: queue_declare
def queue_declare(name, params \\ []) do
passive = Access.get(params, :passive, false)
durable = Access.get(params, :durable, false)
exclusive = Access.get(params, :exclusive, false)
auto_delete = Access.get(params, :auto_delete, false)
arguments = Access.get(params, :arguments, [])
AmqpDirector.Queues.queue_declare(
queue: name,
passive: passive,
durable: durable,
exclusive: exclusive,
auto_delete: auto_delete,
arguments: arguments
)
end
@typep queue_bind :: AmqpDirector.Queues.queue_bind()
@doc """
Binds a queue to an exchange.
This function is intended to be used within the `:queue_definitions` configuration parameter of a client or a server. See
`t:client_option/0` or `t:server_option/0` for details. See AMQP specification for details of queue binding.
"""
@spec queue_bind(String.t(), String.t(), String.t()) :: queue_bind
def queue_bind(name, exchange, routing_key) do
AmqpDirector.Queues.queue_bind(queue: name, exchange: exchange, routing_key: routing_key)
end
@typep exchange_declare :: AmqpDirector.Queues.exchange_declare()
@doc """
Declares an exchange on the AMQP Broker.
This function is intended to be used within the `:queue_definitions` configuration parameter of a client or a server. See
`t:client_option/0` or `t:server_option/0` for details.
Available options are: `:passive`, `:durable`, `:auto_delete`, `:internal` and `:arguments`. See AMQP specification for details on exchange
declaration.
"""
@spec exchange_declare(String.t(), Keyword.t()) :: exchange_declare
def exchange_declare(name, params \\ []) do
type = Keyword.get(params, :type, "direct")
passive = Keyword.get(params, :passive, false)
durable = Keyword.get(params, :durable, false)
auto_delete = Keyword.get(params, :auto_delete, false)
internal = Keyword.get(params, :internal, false)
arguments = Keyword.get(params, :arguments, [])
AmqpDirector.Queues.exchange_declare(
exchange: name,
passive: passive,
durable: durable,
type: type,
auto_delete: auto_delete,
internal: internal,
arguments: arguments
)
end
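# Converts the old-style child spec tuple returned by the Erlang amqp_director
# library into the map form expected by Elixir supervisors.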
defp old_spec_to_new({name, start, restart, shutdown, type, modules}),
do: %{
id: name,
start: start,
restart: restart,
shutdown: shutdown,
type: type,
modules: modules
}
end
# ---- end of lib/amqp_director.ex ----
defmodule AWS.GuardDuty do
@moduledoc """
Amazon GuardDuty is a continuous security monitoring service that analyzes
and processes the following data sources: VPC Flow Logs, AWS CloudTrail
event logs, and DNS logs. It uses threat intelligence feeds (such as lists
of malicious IPs and domains) and machine learning to identify unexpected,
potentially unauthorized, and malicious activity within your AWS
environment. This can include issues like escalations of privileges, uses
of exposed credentials, or communication with malicious IPs, URLs, or
domains. For example, GuardDuty can detect compromised EC2 instances that
serve malware or mine bitcoin.
GuardDuty also monitors AWS account access behavior for signs of
compromise. Some examples of this are unauthorized infrastructure
deployments such as EC2 instances deployed in a Region that has never been
used, or unusual API calls like a password policy change to reduce password
strength.
GuardDuty informs you of the status of your AWS environment by producing
security findings that you can view in the GuardDuty console or through
Amazon CloudWatch events. For more information, see the * [Amazon GuardDuty
User
Guide](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html)
*.
"""
@doc """
Accepts the invitation to be monitored by a master GuardDuty account.
"""
def accept_invitation(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/master"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Archives GuardDuty findings that are specified by the list of finding IDs.
<note> Only the master account can archive findings. Member accounts don't
have permission to archive findings from their accounts.
</note>
"""
def archive_findings(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings/archive"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Creates a single Amazon GuardDuty detector. A detector is a resource that
represents the GuardDuty service. To start using GuardDuty, you must create
a detector in each Region where you enable the service. You can have only
one detector per account per Region. All data sources are enabled in a new
detector by default.
"""
def create_detector(client, input, options \\ []) do
path_ = "/detector"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Creates a filter using the specified finding criteria.
"""
def create_filter(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/filter"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Creates a new IPSet, which is called a trusted IP list in the console user
interface. An IPSet is a list of IP addresses that are trusted for secure
communication with AWS infrastructure and applications. GuardDuty doesn't
generate findings for IP addresses that are included in IPSets. Only users
from the master account can use this operation.
"""
def create_i_p_set(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/ipset"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Creates member accounts of the current AWS account by specifying a list of
AWS account IDs. This step is a prerequisite for managing the associated
member accounts either by invitation or through an organization.
When using `CreateMembers` as an organization's delegated administrator,
this action enables GuardDuty in the added member accounts, with the
exception of the organization master account, which must enable GuardDuty
prior to being added as a member.
If you are adding accounts by invitation, use this action after GuardDuty
has been enabled in potential member accounts and before using
[`InviteMembers`](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_InviteMembers.html).
"""
def create_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Creates a publishing destination to export findings to. The resource to
export findings to must exist before you use this operation.
"""
def create_publishing_destination(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/publishingDestination"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Generates example findings of types specified by the list of finding types.
If 'NULL' is specified for `findingTypes`, the API generates example
findings of all supported finding types.
"""
def create_sample_findings(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings/create"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Creates a new ThreatIntelSet. ThreatIntelSets consist of known malicious IP
addresses. GuardDuty generates findings based on ThreatIntelSets. Only
users of the master account can use this operation.
"""
def create_threat_intel_set(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/threatintelset"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Declines invitations sent to the current member account by AWS accounts
specified by their account IDs.
"""
def decline_invitations(client, input, options \\ []) do
path_ = "/invitation/decline"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Deletes an Amazon GuardDuty detector that is specified by the detector ID.
"""
def delete_detector(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 200)
end
@doc """
Deletes the filter specified by the filter name.
"""
def delete_filter(client, detector_id, filter_name, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/filter/#{URI.encode(filter_name)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 200)
end
@doc """
Deletes the IPSet specified by the `ipSetId`. IPSets are called trusted IP
lists in the console user interface.
"""
def delete_i_p_set(client, detector_id, ip_set_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/ipset/#{URI.encode(ip_set_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 200)
end
@doc """
Deletes invitations sent to the current member account by AWS accounts
specified by their account IDs.
"""
def delete_invitations(client, input, options \\ []) do
path_ = "/invitation/delete"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Deletes GuardDuty member accounts (to the current GuardDuty master account)
specified by the account IDs.
"""
def delete_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/delete"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Deletes the publishing definition with the specified `destinationId`.
"""
def delete_publishing_destination(client, destination_id, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/publishingDestination/#{URI.encode(destination_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 200)
end
@doc """
Deletes the ThreatIntelSet specified by the ThreatIntelSet ID.
"""
def delete_threat_intel_set(client, detector_id, threat_intel_set_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/threatintelset/#{URI.encode(threat_intel_set_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 200)
end
@doc """
Returns information about the account selected as the delegated
administrator for GuardDuty.
"""
def describe_organization_configuration(client, detector_id, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/admin"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Returns information about the publishing destination specified by the
provided `destinationId`.
"""
def describe_publishing_destination(client, destination_id, detector_id, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/publishingDestination/#{URI.encode(destination_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Disables an AWS account within the Organization as the GuardDuty delegated
administrator.
"""
def disable_organization_admin_account(client, input, options \\ []) do
path_ = "/admin/disable"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Disassociates the current GuardDuty member account from its master account.
"""
def disassociate_from_master_account(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/master/disassociate"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Disassociates GuardDuty member accounts (to the current GuardDuty master
account) specified by the account IDs.
"""
def disassociate_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/disassociate"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Enables an AWS account within the organization as the GuardDuty delegated
administrator.
"""
def enable_organization_admin_account(client, input, options \\ []) do
path_ = "/admin/enable"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves an Amazon GuardDuty detector specified by the detectorId.
"""
def get_detector(client, detector_id, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Returns the details of the filter specified by the filter name.
"""
def get_filter(client, detector_id, filter_name, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/filter/#{URI.encode(filter_name)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Describes Amazon GuardDuty findings specified by finding IDs.
"""
def get_findings(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings/get"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists Amazon GuardDuty findings statistics for the specified detector ID.
"""
def get_findings_statistics(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings/statistics"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves the IPSet specified by the `ipSetId`.
"""
def get_i_p_set(client, detector_id, ip_set_id, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/ipset/#{URI.encode(ip_set_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Returns the count of all GuardDuty membership invitations that were sent to
the current member account except the currently accepted invitation.
"""
def get_invitations_count(client, options \\ []) do
path_ = "/invitation/count"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Provides the details for the GuardDuty master account associated with the
current GuardDuty member account.
"""
def get_master_account(client, detector_id, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/master"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Describes which data sources are enabled for the member account's detector.
"""
def get_member_detectors(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/detector/get"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves GuardDuty member accounts (to the current GuardDuty master
account) specified by the account IDs.
"""
def get_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/get"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves the ThreatIntelSet that is specified by the ThreatIntelSet ID.
"""
def get_threat_intel_set(client, detector_id, threat_intel_set_id, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/threatintelset/#{URI.encode(threat_intel_set_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists Amazon GuardDuty usage statistics over the last 30 days for the
specified detector ID. For newly enabled detectors or data sources, the cost
returned includes only the usage so far under 30 days; this may differ
from the cost metrics in the console, which project usage over 30 days to
provide a monthly cost estimate. For more information, see [Understanding
How Usage Costs are
Calculated](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html#usage-calculations).
"""
def get_usage_statistics(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/usage/statistics"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Invites other AWS accounts (created as members of the current AWS account
by `CreateMembers`) to enable GuardDuty, and allows the current AWS account
to view and manage these accounts' GuardDuty findings on their behalf as
the master account.
"""
def invite_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/invite"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists detectorIds of all the existing Amazon GuardDuty detector resources.
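
## Example

A sketch of paging through results; the `"detectorIds"` and `"nextToken"`
response keys are assumptions from the GuardDuty API reference:

    {:ok, %{"detectorIds" => ids, "nextToken" => token}, _response} =
      AWS.GuardDuty.list_detectors(client, 50)

    {:ok, %{"detectorIds" => more_ids}, _response} =
      AWS.GuardDuty.list_detectors(client, 50, token)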
"""
def list_detectors(client, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/detector"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Returns a paginated list of the current filters.
"""
def list_filters(client, detector_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/filter"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists Amazon GuardDuty findings for the specified detector ID.
"""
def list_findings(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists the IPSets of the GuardDuty service specified by the detector ID. If
you use this operation from a member account, the IPSets returned are the
IPSets from the associated master account.
"""
def list_i_p_sets(client, detector_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/ipset"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists all GuardDuty membership invitations that were sent to the current
AWS account.
"""
def list_invitations(client, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/invitation"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists details about all member accounts for the current GuardDuty master
account.
"""
def list_members(client, detector_id, max_results \\ nil, next_token \\ nil, only_associated \\ nil, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member"
headers = []
query_ = []
query_ = if !is_nil(only_associated) do
[{"onlyAssociated", only_associated} | query_]
else
query_
end
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists the accounts configured as GuardDuty delegated administrators.
"""
def list_organization_admin_accounts(client, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/admin"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Returns a list of publishing destinations associated with the specified
`detectorId`.
"""
def list_publishing_destinations(client, detector_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/publishingDestination"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists tags for a resource. Tagging is currently supported for detectors,
finding filters, IP sets, and threat intel sets, with a limit of 50 tags
per resource. When invoked, this operation returns all assigned tags for a
given resource.
"""
def list_tags_for_resource(client, resource_arn, options \\ []) do
path_ = "/tags/#{URI.encode(resource_arn)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists the ThreatIntelSets of the GuardDuty service specified by the
detector ID. If you use this operation from a member account, the
ThreatIntelSets associated with the master account are returned.
"""
def list_threat_intel_sets(client, detector_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/threatintelset"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"nextToken", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"maxResults", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Turns on GuardDuty monitoring of the specified member accounts. Use this
operation to restart monitoring of accounts that you stopped monitoring
with the `StopMonitoringMembers` operation.
"""
def start_monitoring_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/start"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Stops GuardDuty monitoring for the specified member accounts. Use the
`StartMonitoringMembers` operation to restart monitoring for those
accounts.
"""
def stop_monitoring_members(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/stop"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Adds tags to a resource.
"""
def tag_resource(client, resource_arn, input, options \\ []) do
path_ = "/tags/#{URI.encode(resource_arn)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 204)
end
@doc """
Unarchives GuardDuty findings specified by the `findingIds`.
"""
def unarchive_findings(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings/unarchive"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Removes tags from a resource.
"""
def untag_resource(client, resource_arn, input, options \\ []) do
path_ = "/tags/#{URI.encode(resource_arn)}"
headers = []
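# Move the "TagKeys" input member into the "tagKeys" query string
# parameter, as the REST API expects the tag keys in the query.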
{query_, input} =
[
{"TagKeys", "tagKeys"},
]
|> AWS.Request.build_params(input)
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Updates the Amazon GuardDuty detector specified by the detectorId.
"""
def update_detector(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates the filter specified by the filter name.
"""
def update_filter(client, detector_id, filter_name, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/filter/#{URI.encode(filter_name)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Marks the specified GuardDuty findings as useful or not useful.
"""
def update_findings_feedback(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/findings/feedback"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates the IPSet specified by the IPSet ID.
"""
def update_i_p_set(client, detector_id, ip_set_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/ipset/#{URI.encode(ip_set_id)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates which data sources are enabled for the specified member accounts' detectors.
"""
def update_member_detectors(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/member/detector/update"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates the delegated administrator account with the values provided.
"""
def update_organization_configuration(client, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/admin"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates information about the publishing destination specified by the
`destinationId`.
"""
def update_publishing_destination(client, destination_id, detector_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/publishingDestination/#{URI.encode(destination_id)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates the ThreatIntelSet specified by the ThreatIntelSet ID.
"""
def update_threat_intel_set(client, detector_id, threat_intel_set_id, input, options \\ []) do
path_ = "/detector/#{URI.encode(detector_id)}/threatintelset/#{URI.encode(threat_intel_set_id)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@spec request(AWS.Client.t(), atom(), binary(), list(), list(), map() | nil, list(), pos_integer()) ::
{:ok, Poison.Parser.t(), HTTPoison.Response.t()}
| {:error, Poison.Parser.t()}
| {:error, HTTPoison.Error.t()}
defp request(client, method, path, query, headers, input, options, success_status_code) do
client = %{client | service: "guardduty"}
host = build_host("guardduty", client)
url = host
|> build_url(path, client)
|> add_query(query)
additional_headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}]
headers = AWS.Request.add_headers(additional_headers, headers)
payload = encode_payload(input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(method, url, payload, headers, options, success_status_code)
end
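# With no expected status code (nil), a 200 response with an empty body is
# returned as-is, 200/202/204 bodies are JSON-decoded, and any other
# response body is decoded as an error. Otherwise, the expected status code
# must match exactly.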
defp perform_request(method, url, payload, headers, options, nil) do
case HTTPoison.request(method, url, payload, headers, options) do
{:ok, %HTTPoison.Response{status_code: 200, body: ""} = response} ->
{:ok, response}
{:ok, %HTTPoison.Response{status_code: status_code, body: body} = response}
when status_code == 200 or status_code == 202 or status_code == 204 ->
{:ok, Poison.Parser.parse!(body, %{}), response}
{:ok, %HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body, %{})
{:error, error}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp perform_request(method, url, payload, headers, options, success_status_code) do
case HTTPoison.request(method, url, payload, headers, options) do
{:ok, %HTTPoison.Response{status_code: ^success_status_code, body: ""} = response} ->
{:ok, %{}, response}
{:ok, %HTTPoison.Response{status_code: ^success_status_code, body: body} = response} ->
{:ok, Poison.Parser.parse!(body, %{}), response}
{:ok, %HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body, %{})
{:error, error}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, path, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{path}"
end
defp add_query(url, []) do
url
end
defp add_query(url, query) do
querystring = AWS.Util.encode_query(query)
"#{url}?#{querystring}"
end
defp encode_payload(input) do
if input != nil, do: Poison.Encoder.encode(input, %{}), else: ""
end
end
# source: lib/aws/guard_duty.ex (starcoder)
use Croma
defmodule Antikythera.Websocket do
@moduledoc """
Behaviour module for websocket handlers.
Note the naming convention of the websocket-related modules; we use `Websocket`, and `WebSocket` is not allowed.
Websocket modules of gears must `use` this module as in the example below.
`use Antikythera.Websocket` implicitly invokes `use Antikythera.Controller`, for convenience in implementing the `connect/1` callback.
## Example
The following example simply echoes back messages from client:
defmodule MyGear.Websocket do
use Antikythera.Websocket
def init(_conn) do
{%{}, []}
end
def handle_client_message(state, _conn, frame) do
{state, [frame]}
end
def handle_server_message(state, _conn, _msg) do
{state, []}
end
end
## Name registration
Once a websocket connection is established, subsequent bidirectional communication is handled by a dedicated connection process.
To send websocket frames to the connected client,
you should first be able to send messages to the connection process when a particular event occurs somewhere in the cluster.
To this end antikythera provides a process registry mechanism which makes connection processes accessible by "name"s.
To register connection processes, call `Antikythera.Registry.Unique.register/2` and/or `Antikythera.Registry.Group.join/2`
in your `init/1` implementation.
Then, to notify events of connection processes, use `Antikythera.Registry.Unique.send_message/3` or
`Antikythera.Registry.Group.publish/3`.
Finally to send websocket message from a connection process to client, implement `handle_server_message/3` callback
so that it returns an appropriate websocket frame using the message.
See `Antikythera.Registry.Unique` and `Antikythera.Registry.Group` for more details on the registry.
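
For example (a sketch only; the registry key, the argument order of
`register/2`, and the frame shape are assumptions, see the registry
modules for the exact API):

    def init(conn) do
      Antikythera.Registry.Unique.register("user_" <> conn.assigns.user_id, conn)
      {%{}, []}
    end

    def handle_server_message(state, _conn, {:notify, text}) do
      {state, [{:text, text}]}
    end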
"""
alias Antikythera.Conn
alias Antikythera.Websocket.{Frame, FrameList}
@type state :: any
@type terminate_reason :: :normal | :stop | :timeout | :remote | {:remote, Frame.close_code, Frame.close_payload} | {:error, any}
@typedoc """
Type of return value of `init/1`, `handle_client_message/3` and `handle_server_message/3` callbacks.
The 1st element of the return value is used as the new state.
The 2nd element of the return value is sent to the client.
To close the connection, include a `:close` frame in the 2nd element of the return value.
Note that the remaining frames after the close frame will not be sent.
"""
@type callback_result :: {state, FrameList.t}
@doc """
Callback function to be used during websocket handshake request.
This callback is implemented in basically the same way as ordinary controller actions.
You can use plugs and controller helper functions.
The only difference is that on success this function returns an `Antikythera.Conn.t` without setting an HTTP status code.
This callback is responsible for authenticating/authorizing the client.
If the client is valid and it's OK to start websocket communication, the implementation of this callback must return the given `Antikythera.Conn.t`.
On the other hand, if the client is not allowed to open a websocket connection, this function must return an error as a usual HTTP response.
`use Antikythera.Websocket` generates a default implementation of this callback, which just returns the given `Antikythera.Conn.t`.
Note that you can use plugs without overriding the default.
"""
@callback connect(Conn.t) :: Conn.t
@doc """
Callback function to be called right after a connection is established.
This callback is responsible for:
1. initializing the process state (1st element of the return value)
2. sending initial messages to the client (2nd element of the return value)
3. registering the process to make it accessible from other processes in the system (see "Name registration" above)
"""
@callback init(Conn.t) :: callback_result
@doc """
Callback function to be called on receipt of a client message.
"""
@callback handle_client_message(state, Conn.t, Frame.t) :: callback_result
@doc """
Callback function to be called on receipt of a message from other process in the cluster.
"""
@callback handle_server_message(state, Conn.t, any) :: callback_result
@doc """
Callback function to clean up resources used by the websocket connection.
For typical use cases you don't need to implement this callback;
`Antikythera.Websocket` generates a default implementation (which does nothing) for you.
"""
@callback terminate(state, Conn.t, terminate_reason) :: any
defmacro __using__(_) do
quote do
expected = Mix.Project.config()[:app] |> Atom.to_string() |> Macro.camelize() |> Module.concat("Websocket")
if __MODULE__ != expected do
raise "invalid module name: expected=#{expected} actual=#{__MODULE__}"
end
@behaviour Antikythera.Websocket
use Antikythera.Controller
@impl true
def connect(conn), do: conn
@impl true
def terminate(_state, _conn, _reason), do: :ok
defoverridable [connect: 1, terminate: 3]
end
end
end
# source: lib/web/websocket.ex (starcoder)
defmodule Frankt do
@moduledoc """
Run client-side actions from the backend.
Frankt provides a thin layer over Phoenix channels which allows running client-side actions
from the backend. Since the logic of those actions lives in the backend, they can leverage all the
[`Elixir`][1] and `Phoenix` capabilities.
## Basic Usage
As explained before, Frankt channels are actually Phoenix channels which `use Frankt`. You can find
more information about setting up channels and wiring them into sockets in the `Phoenix.Channel`
docs.
Frankt channels implement the `Frankt` behaviour and therefore must export a `handlers/0`
function which returns a map containing the modules which will handle incoming actions. We call
those modules "_action handlers_". Action handlers would be the Frankt equivalent to Phoenix
controllers.
This example shows a very basic Frankt channel which allows any connection and registers a single
action handler.
defmodule MyApp.FranktChannel do
use Phoenix.Channel
use Frankt
def join(_topic, _payload, socket), do: {:ok, socket}
def handlers, do: %{"example_actions" => MyApp.FranktExampleActions}
end
When messages arrive at our channel, Frankt automatically checks if there is any matching action
handler registered and runs it.
This example shows a very basic action handler with a single action. Action handlers can run
business logic, render templates, push or broadcast messages into the channel, etc.
defmodule MyApp.FranktExampleActions do
import Phoenix.Channel
import MyApp.Router.Helpers
def redirect_to_home(_params, socket), do: push(socket, "redirect", %{target: "/"})
end
Frankt channels can also customize other advanced aspects such as i18n, plugs and error handlers.
## Advanced Usage
### Setting up i18n
Frankt can optionally use `Gettext` to internationalize rendered templates and messages just like
Phoenix controllers do. To set up the `Gettext` integration, your Frankt channel must implement
the `gettext/0` callback.
We can add the following line to our example Frankt channel:
def gettext, do: MyApp.Gettext
Now the action handlers registered in our Frankt channel will automatically use `MyApp.Gettext`
to internationalize texts.
To know which locale to use in the action handlers Frankt needs a `locale` assigned into the
socket. A great place to assign a locale to the socket would be a Frankt plug.
### Setting up plugs
Frankt channels can run certain modules to modify the socket before the action handler is executed.
Those modules are known as Frankt plugs because they are somewhat similar to our beloved `Plug`.
We can register plugs in our Frankt channel by implementing the `plugs` callback:
def plugs, do: [MyApp.FranktLocalePlug]
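
Plugs can also be registered together with options as `{module, opts}` tuples;
the options are passed as the second argument to the plug's `call/2` (the
option shown here is hypothetical):

    def plugs, do: [{MyApp.FranktLocalePlug, default_locale: "en"}]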
Frankt plugs implement the `Frankt.Plug` behaviour so they must export a `call/2`
function which returns a `Phoenix.Socket`.
The following example shows a very basic plug that could set up the locale to use in our action
handlers.
defmodule MyApp.FranktLocalePlug do
use Frankt.Plug
@impl true
def call(socket = %{assigns: assigns}, opts) do
assign(socket, :locale, assigns.current_user.locale)
end
end
Just like `Plug`, Frankt plugs are run sequentially and each one receives the socket returned by
the previous plug.
Frankt functionality is also implemented as plugs. You can take a look at them into the
`lib/frankt/plug` directory to see some examples.
### Handling errors
Frankt catches any errors that may happen while handling an incoming message. By default, those
errors are handled by pushing a `frankt-configuration-error` (in the case of
`Frankt.ConfigurationError`) or a `frankt-error` (in other cases) to the socket. Those messages
can be used to provide appropriate feedback in the client.
If you want to customize how errors are handled, you can implement the `handle_error/4` callback.
The `handle_error/4` callback receives the rescued error, the stacktrace, the socket, and the
params of the incoming message.
The following example shows a very basic error handler that redirects to the index in case of
any errors.
def handle_error(%Frankt.ConfigurationError{}, _stacktrace, socket, _params) do
push(socket, "redirect", %{target: "/"})
{:noreply, socket}
end
If you choose to implement a custom error handler for your Frankt channel, keep in mind that it
must return some of the values specified in `c:Phoenix.Channel.handle_in/3`.
[1]: https://hexdocs.pm/elixir/Kernel.html
"""
import Phoenix.Channel
alias Frankt.ConfigurationError
alias Frankt.Plug
require Logger
@callback handlers() :: %{required(String.t()) => module()}
@callback gettext() :: module()
@callback handle_error(
error :: Exception.t(),
stacktrace :: list(),
socket :: Phoenix.Socket.t(),
params :: map()
) ::
{:noreply, Phoenix.Socket.t()}
| {:reply, Phoenix.Channel.reply(), Phoenix.Socket.t()}
| {:stop, reason :: term, Phoenix.Socket.t()}
| {:stop, reason :: term, Phoenix.Channel.reply(), Phoenix.Socket.t()}
@callback plugs() :: list(module())
@pre_plugs [Plug.SetHandler, Plug.SetGettext]
@post_plugs [Plug.ExecuteAction]
defmacro __using__(_opts) do
quote do
@behaviour Frankt
def handle_in("frankt-action", params = %{"action" => action}, socket) do
socket
|> Frankt.__setup_action__(params, __MODULE__)
|> Frankt.__run_pipeline__()
rescue
error -> handle_error(error, System.stacktrace(), socket, params)
end
def handle_error(error, stacktrace, socket, params) do
Frankt.__handle_error__(error, stacktrace, socket, params)
end
def gettext, do: nil
def plugs, do: []
defoverridable Frankt
end
end
def __setup_action__(socket = %{private: private}, params = %{"action" => action}, module) do
setup_vars = %{
frankt_action: action,
frankt_module: module,
frankt_data: Map.get(params, "data", %{})
}
%{socket | private: Enum.into(setup_vars, private)}
end
def __run_pipeline__(socket = %{private: %{frankt_module: module}}) do
socket =
[@pre_plugs, module.plugs(), @post_plugs]
|> List.flatten()
|> Enum.reduce(socket, &__run_plug__/2)
{:noreply, socket}
end
def __run_plug__({module, opts}, socket), do: module.call(socket, opts)
def __run_plug__(module, socket), do: __run_plug__({module, nil}, socket)
@doc false
def __handle_error__(error, _stacktrace, socket, _params) do
message =
case error do
%ConfigurationError{} -> "frankt-configuration-error"
_ -> "frankt-error"
end
:error
|> Exception.format(error)
|> Logger.error()
push(socket, message, %{})
{:noreply, socket}
end
end
# source: lib/frankt.ex (starcoder)
defmodule Record.Extractor do
@moduledoc false
# Retrieve a record definition from an Erlang file using
# the same lookup as the *include* attribute from Erlang modules.
def retrieve(name, from: file) when is_binary(file) do
file = String.to_char_list!(file)
realfile =
case :code.where_is_file(file) do
:non_existing -> file
realfile -> realfile
end
retrieve_record(name, realfile)
end
# Retrieve a record definition from an Erlang file using
# the same lookup as the *include_lib* attribute from Erlang modules.
def retrieve(name, from_lib: file) when is_binary(file) do
[app|path] = :filename.split(String.to_char_list!(file))
case :code.lib_dir(list_to_atom(app)) do
{ :error, _ } ->
raise ArgumentError, message: "lib file #{file} could not be found"
libpath ->
retrieve_record name, :filename.join([libpath|path])
end
end
# Retrieve the record with the given name from the given file
defp retrieve_record(name, file) do
form = read_file(file)
records = retrieve_records(form)
if record = List.keyfind(records, name, 0) do
parse_record(record, form)
else
raise ArgumentError, message: "no record #{name} found at #{file}"
end
end
# Retrieve all existing records from the given abstract form.
defp retrieve_records(form) do
lc { :attribute, _, :record, record } inlist form, do: record
end
# Read a file and return its abstract syntax form that also
# includes record and other preprocessor modules. This is done
# by using Erlang's epp_dodger.
defp read_file(file) do
case :epp_dodger.quick_parse_file(file) do
{ :ok, form } ->
form
other ->
raise "error parsing file #{file}, got: #{inspect(other)}"
end
end
# Parse a tuple with name and fields and return a
# list of tuples where the first element is the field
# and the second is its default value.
defp parse_record({ _name, fields }, form) do
cons = List.foldr fields, { nil, 0 }, fn f, acc ->
{ :cons, 0, parse_field(f), acc }
end
eval_record(cons, form)
end
defp parse_field({ :typed_record_field, record_field, _type }) do
parse_field(record_field)
end
defp parse_field({ :record_field, _, key }) do
{ :tuple, 0, [key, {:atom, 0, :undefined}] }
end
defp parse_field({ :record_field, _, key, value }) do
{ :tuple, 0, [key, value] }
end
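# Evaluate the record definition by wrapping the field list in a dummy
# zero-arity function, expanding record syntax with :erl_expand_records,
# and evaluating the resulting expression with :erl_eval.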
defp eval_record(cons, form) do
form = form ++
[ { :function, 0, :hello, 0, [
{ :clause, 0, [], [], [ cons ] } ] } ]
{ :function, 0, :hello, 0, [
{ :clause, 0, [], [], [ record_ast ] } ] } = :erl_expand_records.module(form, []) |> List.last
{ :value, record, _ } = :erl_eval.expr(record_ast, [])
record
end
end
# source: lib/elixir/lib/record/extractor.ex (starcoder)
defmodule Kantele.Brain do
@moduledoc """
Load and parse brain data into behavior tree structs
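
## Example

A minimal sketch (the "town_crier" brain name is taken from the reference
comments below and depends on the data files):

    brains = Kantele.Brain.load_all() |> Kantele.Brain.process_all()
    brains["town_crier"]
    #=> %Kalevala.Brain{root: ...}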
"""
@brains_path "data/brains"
@doc """
Load brain data from the path
Defaults to `#{@brains_path}`
"""
def load_all(path \\ @brains_path) do
File.ls!(path)
|> Enum.filter(fn file ->
String.ends_with?(file, ".ucl")
end)
|> Enum.map(fn file ->
File.read!(Path.join(path, file))
end)
|> Enum.map(&Elias.parse/1)
|> Enum.flat_map(&merge_data/1)
|> Enum.into(%{})
end
defp merge_data(brain_data) do
Enum.map(brain_data.brains, fn {key, value} ->
{to_string(key), value}
end)
end
def process_all(brains) do
Enum.into(brains, %{}, fn {key, value} ->
{key, process(value, brains)}
end)
end
def process(brain, brains) when brain != nil do
%Kalevala.Brain{
root: parse_node(brain, brains)
}
end
def process(_, _brains) do
%Kalevala.Brain{
root: %Kalevala.Brain.NullNode{}
}
end
# A bare reference string, e.g. `brain = brains.town_crier`
defp parse_node("brains." <> key_path, brains) do
parse_node(brains[key_path], brains)
end
# A ref `{ ref = brains.town_crier }`
defp parse_node(%{ref: "brains." <> key_path}, brains) do
parse_node(brains[key_path], brains)
end
# Sequences
defp parse_node(%{type: "sequence", nodes: nodes}, brains) do
%Kalevala.Brain.Sequence{
nodes: Enum.map(nodes, &parse_node(&1, brains))
}
end
defp parse_node(%{type: "first", nodes: nodes}, brains) do
%Kalevala.Brain.FirstSelector{
nodes: Enum.map(nodes, &parse_node(&1, brains))
}
end
defp parse_node(%{type: "conditional", nodes: nodes}, brains) do
%Kalevala.Brain.ConditionalSelector{
nodes: Enum.map(nodes, &parse_node(&1, brains))
}
end
defp parse_node(condition = %{type: "conditions/" <> type}, brains),
do: parse_condition(type, condition, brains)
defp parse_node(action = %{type: "actions/" <> type}, brains),
do: parse_action(type, action, brains)
@doc """
Process a condition
"""
def parse_condition("message-match", %{data: data}, _brains) do
{:ok, regex} = Regex.compile(data.text, "i")
%Kalevala.Brain.Condition{
type: Kalevala.Brain.Conditions.MessageMatch,
data: %{
interested?: &Kantele.Character.SayEvent.interested?/1,
self_trigger: data.self_trigger == "true",
text: regex
}
}
end
def parse_condition("tell-match", %{data: data}, _brains) do
{:ok, regex} = Regex.compile(data.text, "i")
%Kalevala.Brain.Condition{
type: Kalevala.Brain.Conditions.MessageMatch,
data: %{
interested?: &Kantele.Character.TellEvent.interested?/1,
self_trigger: data.self_trigger == "true",
text: regex
}
}
end
def parse_condition("state-match", %{data: data}, _brains) do
%Kalevala.Brain.Condition{
type: Kalevala.Brain.Conditions.StateMatch,
data: data
}
end
def parse_condition("room-enter", %{data: data}, _brains) do
%Kalevala.Brain.Condition{
type: Kalevala.Brain.Conditions.EventMatch,
data: %{
self_trigger: data.self_trigger == "true",
topic: Kalevala.Event.Movement.Notice,
data: %{
direction: :to
}
}
}
end
def parse_condition("event-match", %{data: data}, _brains) do
%Kalevala.Brain.Condition{
type: Kalevala.Brain.Conditions.EventMatch,
data: %{
self_trigger: Map.get(data, :self_trigger, "false") == "true",
topic: data.topic,
data: Map.get(data, :data, %{})
}
}
end
@doc """
Process actions
"""
def parse_action("state-set", action, _brains) do
%Kalevala.Brain.StateSet{
data: action.data
}
end
def parse_action("say", action, _brains) do
%Kalevala.Brain.Action{
type: Kantele.Character.SayAction,
data: action.data,
delay: Map.get(action, :delay, 0)
}
end
def parse_action("emote", action, _brains) do
%Kalevala.Brain.Action{
type: Kantele.Character.EmoteAction,
data: action.data,
delay: Map.get(action, :delay, 0)
}
end
def parse_action("flee", action, _brains) do
%Kalevala.Brain.Action{
type: Kantele.Character.FleeAction,
data: %{},
delay: Map.get(action, :delay, 0)
}
end
def parse_action("wander", action, _brains) do
%Kalevala.Brain.Action{
type: Kantele.Character.WanderAction,
data: %{},
delay: Map.get(action, :delay, 0)
}
end
def parse_action("delay-event", action, _brains) do
%Kalevala.Brain.Action{
type: Kantele.Character.DelayEventAction,
data: action.data,
delay: Map.get(action, :delay, 0)
}
end
end
# source: example/lib/kantele/brain.ex (starcoder)
defmodule ExUnit.CaptureLog do
@moduledoc ~S"""
Functionality to capture logs for testing.
## Examples
defmodule AssertionTest do
use ExUnit.Case
import ExUnit.CaptureLog
require Logger
test "example" do
assert capture_log(fn ->
Logger.error("log msg")
end) =~ "log msg"
end
test "check multiple captures concurrently" do
fun = fn ->
for msg <- ["hello", "hi"] do
assert capture_log(fn -> Logger.error(msg) end) =~ msg
end
Logger.debug("testing")
end
assert capture_log(fun) =~ "hello"
assert capture_log(fun) =~ "testing"
end
end
"""
alias Logger.Backends.Console
@compile {:no_warn_undefined, Logger}
@doc """
Captures Logger messages generated when evaluating `fun`.
Returns the binary which is the captured output.
This function mutes the `:console` backend and captures any log
messages sent to Logger from the calling processes. It is possible
to ensure explicit log messages from other processes are captured
by waiting for their exit or monitor signal.
However, `capture_log` does not guarantee to capture log messages
that originate from processes spawned using a low-level `Kernel` spawn
function (for example, `Kernel.spawn/1`) when such processes exit with an
exception or a throw. Therefore, prefer using a `Task` or another OTP
process, which will send explicit logs before its exit or monitor signals
and will not cause VM-generated log messages.
Note that when `async` is set to `true`, messages from another
test might be captured. This is OK as long as you consider such cases in
your assertions.
It is possible to configure the level to capture with `:level`,
which will set the capturing level for the duration of the
capture. For instance, if the log level is set to `:error`,
any message with a lower level will be ignored.
The default level is `nil`, which will capture all messages.
The behaviour is undetermined if async tests change the Logger level.
The format, metadata and colors can be configured with `:format`,
`:metadata` and `:colors` respectively. These three options
default to the `:console` backend configuration parameters.
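
For example, to capture only messages at the `:error` level and above:

    assert capture_log([level: :error], fn ->
             Logger.debug("ignored")
             Logger.error("boom")
           end) =~ "boom"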
"""
@spec capture_log(keyword, (() -> any)) :: String.t()
def capture_log(opts \\ [], fun) do
opts = Keyword.put_new(opts, :level, nil)
{:ok, string_io} = StringIO.open("")
try do
:ok = add_capture(string_io, opts)
ref = ExUnit.CaptureServer.log_capture_on(self())
try do
fun.()
after
:ok = Logger.flush()
:ok = ExUnit.CaptureServer.log_capture_off(ref)
:ok = remove_capture(string_io)
end
:ok
catch
kind, reason ->
_ = StringIO.close(string_io)
:erlang.raise(kind, reason, __STACKTRACE__)
else
:ok ->
{:ok, content} = StringIO.close(string_io)
elem(content, 1)
end
end
defp add_capture(pid, opts) do
case :proc_lib.start(__MODULE__, :init_proxy, [pid, opts, self()]) do
:ok ->
:ok
:noproc ->
raise "cannot capture_log/2 because the :logger application was not started"
{:error, reason} ->
mfa = {ExUnit.CaptureLog, :add_capture, [pid, opts]}
exit({reason, mfa})
end
end
@doc false
def init_proxy(pid, opts, parent) do
case :gen_event.add_sup_handler(Logger, {Console, pid}, {Console, [device: pid] ++ opts}) do
:ok ->
ref = Process.monitor(parent)
:proc_lib.init_ack(:ok)
receive do
{:DOWN, ^ref, :process, ^parent, _reason} -> :ok
{:gen_event_EXIT, {Console, ^pid}, _reason} -> :ok
end
{:EXIT, reason} ->
:proc_lib.init_ack({:error, reason})
{:error, reason} ->
:proc_lib.init_ack({:error, reason})
end
catch
:exit, :noproc -> :proc_lib.init_ack(:noproc)
end
defp remove_capture(pid) do
case :gen_event.delete_handler(Logger, {Console, pid}, :ok) do
:ok ->
:ok
{:error, :module_not_found} = error ->
mfa = {ExUnit.CaptureLog, :remove_capture, [pid]}
exit({error, mfa})
end
end
end
# source: lib/ex_unit/lib/ex_unit/capture_log.ex (starcoder)
defmodule Timex.DateTime.Helpers do
@moduledoc false
alias Timex.{Types, Timezone, TimezoneInfo, AmbiguousDateTime, AmbiguousTimezoneInfo}
@type precision :: -1 | 0..6
@doc """
Constructs an empty DateTime, for internal use only
"""
def empty() do
%DateTime{
year: 0,
month: 1,
day: 1,
hour: 0,
minute: 0,
second: 0,
microsecond: {0, 0},
time_zone: nil,
zone_abbr: nil,
utc_offset: 0,
std_offset: 0
}
end
@doc """
Constructs a DateTime from an Erlang date or datetime tuple and a timezone.
Intended for internal use only.
"""
@spec construct(Types.date(), Types.valid_timezone()) ::
DateTime.t() | AmbiguousDateTime.t() | {:error, term}
@spec construct(Types.datetime(), Types.valid_timezone()) ::
DateTime.t() | AmbiguousDateTime.t() | {:error, term}
@spec construct(Types.microsecond_datetime(), Types.valid_timezone()) ::
DateTime.t() | AmbiguousDateTime.t() | {:error, term}
@spec construct(Types.microsecond_datetime(), precision, Types.valid_timezone()) ::
DateTime.t() | AmbiguousDateTime.t() | {:error, term}
def construct({_, _, _} = date, timezone) do
construct({date, {0, 0, 0, 0}}, 0, timezone)
end
def construct({{_, _, _} = date, {h, mm, s}}, timezone) do
construct({date, {h, mm, s, 0}}, 0, timezone)
end
def construct({{_, _, _} = date, {_, _, _, _} = time}, timezone) do
construct({date, time}, -1, timezone)
end
def construct({{_, _, _} = date, {_, _, _, _} = time}, precision, timezone) do
construct({date, time}, precision, timezone, :wall)
end
def construct({{y, m, d} = date, {h, mm, s, us}}, precision, timezone, utc_or_wall) do
seconds_from_zeroyear = :calendar.datetime_to_gregorian_seconds({date, {h, mm, s}})
case Timezone.name_of(timezone) do
{:error, _} = err ->
err
tzname ->
case Timezone.resolve(tzname, seconds_from_zeroyear, utc_or_wall) do
{:error, _} = err ->
err
%TimezoneInfo{} = tz ->
%DateTime{
:year => y,
:month => m,
:day => d,
:hour => h,
:minute => mm,
:second => s,
:microsecond => construct_microseconds(us, precision),
:time_zone => tz.full_name,
:zone_abbr => tz.abbreviation,
:utc_offset => tz.offset_utc,
:std_offset => tz.offset_std
}
%AmbiguousTimezoneInfo{before: b, after: a} ->
bd = %DateTime{
:year => y,
:month => m,
:day => d,
:hour => h,
:minute => mm,
:second => s,
:microsecond => construct_microseconds(us, precision),
:time_zone => b.full_name,
:zone_abbr => b.abbreviation,
:utc_offset => b.offset_utc,
:std_offset => b.offset_std
}
ad = %DateTime{
:year => y,
:month => m,
:day => d,
:hour => h,
:minute => mm,
:second => s,
:microsecond => construct_microseconds(us, precision),
:time_zone => a.full_name,
:zone_abbr => a.abbreviation,
:utc_offset => a.offset_utc,
:std_offset => a.offset_std
}
%AmbiguousDateTime{before: bd, after: ad}
end
end
end
def construct_microseconds({us, p}) when is_integer(us) and is_integer(p) do
construct_microseconds(us, p)
end
# Input precision of -1 means it should be recalculated based on the value
def construct_microseconds(0, -1), do: {0, 0}
def construct_microseconds(0, p), do: {0, p}
def construct_microseconds(n, -1), do: {n, precision(n)}
def construct_microseconds(n, p), do: {to_precision(n, p), p}
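# For example:
#   construct_microseconds(123_000, -1) #=> {123_000, 3} (precision inferred)
#   construct_microseconds(123_456, 3)  #=> {123_000, 3} (truncated to the given precision)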
def to_precision(0, _p), do: 0
def to_precision(us, p) do
case precision(us) do
detected_p when detected_p > p ->
# Convert to lower precision
pow = trunc(:math.pow(10, detected_p - p))
Integer.floor_div(us, pow) * pow
_detected_p ->
# Already correct precision or less precise
us
end
end
def precision(0), do: 0
def precision(n) when is_integer(n) do
ns = Integer.to_string(n)
n_width = byte_size(ns)
trimmed = byte_size(String.trim_trailing(ns, "0"))
new_p = 6 - (n_width - trimmed)
if new_p >= 6 do
6
else
new_p
end
end
end
# source: lib/datetime/helpers.ex (starcoder)
defmodule EventStore.Storage do
@moduledoc false
alias EventStore.Snapshots.SnapshotData
alias EventStore.Storage
alias EventStore.Storage.{
Appender,
CreateStream,
QueryStreamInfo,
Reader,
Snapshot,
Subscription
}
@doc """
Initialise the PostgreSQL database by creating the tables and indexes.
"""
def initialize_store!(conn, opts \\ []) do
Storage.Initializer.run!(conn, opts)
end
@doc """
Reset the PostgreSQL database by deleting all rows.
"""
def reset!(conn, opts \\ []) do
Storage.Initializer.reset!(conn, opts)
end
@doc """
Create a new event stream with the given unique identifier.
"""
def create_stream(conn, stream_uuid, opts \\ []) do
CreateStream.execute(conn, stream_uuid, opts)
end
@doc """
Append the given list of recorded events to storage.
"""
def append_to_stream(conn, stream_id, events, opts \\ []) do
Appender.append(conn, stream_id, events, opts)
end
@doc """
Link the existing event ids already present in a stream to the given stream.
"""
def link_to_stream(conn, stream_id, event_ids, opts \\ []) do
Appender.link(conn, stream_id, event_ids, opts)
end
@doc """
Read events for the given stream forward from the starting version; use zero
to read all events for the stream.
"""
def read_stream_forward(conn, stream_id, start_version, count, opts \\ []) do
Reader.read_forward(conn, stream_id, start_version, count, opts)
end
@doc """
Get the id and version of the stream with the given `stream_uuid`.
"""
def stream_info(conn, stream_uuid, opts \\ []) do
QueryStreamInfo.execute(conn, stream_uuid, opts)
end
@doc """
Create, or locate an existing, persistent subscription to a stream using a
unique name and starting position (event number or stream version).
"""
def subscribe_to_stream(conn, stream_uuid, subscription_name, start_from \\ nil, opts \\ [])
def subscribe_to_stream(conn, stream_uuid, subscription_name, start_from, opts) do
Subscription.subscribe_to_stream(conn, stream_uuid, subscription_name, start_from, opts)
end
@doc """
Acknowledge receipt of an event by its number, for a single subscription.
"""
def ack_last_seen_event(conn, stream_uuid, subscription_name, last_seen, opts \\ []) do
Subscription.ack_last_seen_event(conn, stream_uuid, subscription_name, last_seen, opts)
end
@doc """
Delete an existing named subscription to a stream.
"""
def delete_subscription(conn, stream_uuid, subscription_name, opts \\ []) do
Subscription.delete_subscription(conn, stream_uuid, subscription_name, opts)
end
@doc """
Get all known subscriptions, to any stream.
"""
def subscriptions(conn, opts \\ []) do
Subscription.subscriptions(conn, opts)
end
@doc """
Read a snapshot, if available, for a given source.
"""
def read_snapshot(conn, source_uuid, opts \\ []) do
Snapshot.read_snapshot(conn, source_uuid, opts)
end
@doc """
Record a snapshot of the data and metadata for a given source.
"""
def record_snapshot(conn, %SnapshotData{} = snapshot, opts \\ []) do
Snapshot.record_snapshot(conn, snapshot, opts)
end
@doc """
Delete an existing snapshot for a given source.
"""
def delete_snapshot(conn, source_uuid, opts \\ []) do
Snapshot.delete_snapshot(conn, source_uuid, opts)
end
end
# source: lib/event_store/storage.ex (starcoder)
defmodule Ecto.Reflections.HasOne do
@moduledoc """
The reflection record for a `has_one` association. Its fields are:
* `field` - The name of the association field on the model;
* `owner` - The model where the association was defined;
* `associated` - The model that is associated;
* `key` - The key on the `owner` model used for the association;
* `assoc_key` - The key on the `associated` model used for the association;
"""
defstruct [:field, :owner, :associated, :key, :assoc_key]
end
defmodule Ecto.Associations.HasOne do
@moduledoc false
defstruct [:loaded, :target, :name, :primary_key]
end
defmodule Ecto.Associations.HasOne.Proxy do
@moduledoc """
A has_one association.
## Create
A new struct of the associated model can be created with `struct/2`. The
created struct will have its foreign key set to the primary key of the parent
model.
defmodule Post do
use Ecto.Model
schema "posts" do
has_one :permalink, Permalink
end
end
post = put_primary_key(%Post{}, 42)
struct(post.permalink, []) #=> %Permalink{post_id: 42}
## Reflection
Any association module will generate the `__assoc__` function that can be
used for runtime introspection of the association.
* `__assoc__(:loaded, assoc)` - Returns the loaded entities or `:not_loaded`;
* `__assoc__(:loaded, value, assoc)` - Sets the loaded entities;
* `__assoc__(:target, assoc)` - Returns the model where the association was
defined;
* `__assoc__(:name, assoc)` - Returns the name of the association field on the
model;
* `__assoc__(:primary_key, assoc)` - Returns the primary key (used when
creating a model with `struct/2`);
* `__assoc__(:primary_key, value, assoc)` - Sets the primary key;
* `__assoc__(:new, name, target)` - Creates a new association with the given
name and target;
"""
@not_loaded :ECTO_NOT_LOADED
require Ecto.Associations
Ecto.Associations.defproxy(Ecto.Associations.HasOne)
@doc false
def __struct__(params \\ [], proxy(target: target, name: name, primary_key: pk_value)) do
refl = target.__schema__(:association, name)
fk = refl.assoc_key
struct(refl.associated, [{fk, pk_value}] ++ params)
end
@doc """
Returns the associated struct. Raises `AssociationNotLoadedError` if the
association is not loaded.
"""
def get(proxy(loaded: @not_loaded, target: target, name: name)) do
refl = target.__schema__(:association, name)
raise Ecto.AssociationNotLoadedError,
type: :has_one, owner: refl.owner, name: name
end
def get(proxy(loaded: loaded)) do
loaded
end
@doc """
Returns `true` if the association is loaded.
"""
def loaded?(proxy(loaded: @not_loaded)), do: false
def loaded?(_), do: true
@doc false
Enum.each [:loaded, :target, :name, :primary_key], fn field ->
def __assoc__(unquote(field), record) do
proxy([{unquote(field), var}]) = record
var
end
end
@doc false
Enum.each [:loaded, :primary_key], fn field ->
def __assoc__(unquote(field), value, record) do
proxy(record, [{unquote(field), value}])
end
end
def __assoc__(:new, name, target) do
proxy(name: name, target: target, loaded: @not_loaded)
end
end
defimpl Inspect, for: Ecto.Associations.HasOne do
import Inspect.Algebra
def inspect(%{name: name, target: target}, opts) do
refl = target.__schema__(:association, name)
associated = refl.associated
references = refl.key
foreign_key = refl.assoc_key
kw = [
name: name,
target: target,
associated: associated,
references: references,
foreign_key: foreign_key
]
concat ["#Ecto.Associations.HasOne<", Inspect.List.inspect(kw, opts), ">"]
end
end
# source: lib/ecto/associations/has_one.ex (starcoder)
defmodule RDF.NTriples.Encoder do
@moduledoc """
An encoder for N-Triples serializations of RDF.ex data structures.
As for all encoders of `RDF.Serialization.Format`s, you normally won't use these
functions directly, but via one of the `write_` functions on the `RDF.NTriples`
format module or the generic `RDF.Serialization` module.
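
## Example

A minimal sketch of the statement-level API (the example.com IRIs are
placeholders):

    RDF.NTriples.Encoder.statement(
      {RDF.iri("http://example.com/S"), RDF.iri("http://example.com/p"), RDF.iri("http://example.com/O")}
    )
    #=> "<http://example.com/S> <http://example.com/p> <http://example.com/O> .\\n"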
"""
use RDF.Serialization.Encoder
alias RDF.{Triple, Term, IRI, BlankNode, Literal, LangString, XSD}
@impl RDF.Serialization.Encoder
@spec encode(RDF.Data.t(), keyword) :: {:ok, String.t()} | {:error, any}
def encode(data, _opts \\ []) do
{:ok,
data
|> Enum.reduce([], &[statement(&1) | &2])
|> Enum.reverse()
|> Enum.join()}
end
@impl RDF.Serialization.Encoder
@spec stream(RDF.Data.t(), keyword) :: Enumerable.t()
def stream(data, opts \\ []) do
case Keyword.get(opts, :mode, :string) do
:string -> Stream.map(data, &statement(&1))
:iodata -> Stream.map(data, &iolist_statement(&1))
invalid -> raise "Invalid stream mode: #{invalid}"
end
end
@spec statement(Triple.t()) :: String.t()
def statement({subject, predicate, object}) do
"#{term(subject)} #{term(predicate)} #{term(object)} .\n"
end
@spec term(Term.t()) :: String.t()
def term(%IRI{} = iri) do
"<#{to_string(iri)}>"
end
def term(%Literal{literal: %LangString{} = lang_string}) do
~s["#{escape_string(lang_string.value)}"@#{lang_string.language}]
end
def term(%Literal{literal: %XSD.String{} = xsd_string}) do
~s["#{escape_string(xsd_string.value)}"]
end
def term(%Literal{} = literal) do
~s["#{Literal.lexical(literal)}"^^<#{to_string(Literal.datatype_id(literal))}>]
end
def term(%BlankNode{} = bnode) do
to_string(bnode)
end
def term({s, p, o}) do
"<< #{term(s)} #{term(p)} #{term(o)} >>"
end
@spec iolist_statement(Triple.t()) :: iolist
def iolist_statement({subject, predicate, object}) do
[iolist_term(subject), " ", iolist_term(predicate), " ", iolist_term(object), " .\n"]
end
@spec iolist_term(Term.t()) :: iodata
def iolist_term(%IRI{} = iri) do
["<", iri.value, ">"]
end
def iolist_term(%Literal{literal: %LangString{} = lang_string}) do
[~s["], escape_string(lang_string.value), ~s["@], lang_string.language]
end
def iolist_term(%Literal{literal: %XSD.String{} = xsd_string}) do
[~s["], escape_string(xsd_string.value), ~s["]]
end
def iolist_term(%Literal{} = literal) do
[~s["], Literal.lexical(literal), ~s["^^<], to_string(Literal.datatype_id(literal)), ">"]
end
def iolist_term(%BlankNode{} = bnode) do
to_string(bnode)
end
@doc false
def escape_string(string) do
string
|> String.replace("\\", "\\\\\\\\")
|> String.replace("\b", "\\b")
|> String.replace("\f", "\\f")
|> String.replace("\t", "\\t")
|> String.replace("\n", "\\n")
|> String.replace("\r", "\\r")
|> String.replace("\"", ~S[\"])
end
end
| lib/rdf/serializations/ntriples_encoder.ex | 0.81409 | 0.480966 | ntriples_encoder.ex | starcoder |
defmodule Oban.Worker do
@moduledoc """
Defines a behavior and macro to guide the creation of worker modules.
Worker modules do the work of processing a job. At a minimum they must define a `perform/2`
function, which will be called with an `args` map and the job struct.
## Defining Workers
Define a worker to process jobs in the `events` queue, retrying at most 10 times if a job fails,
and ensuring that duplicate jobs aren't enqueued within a 30 second period:
defmodule MyApp.Workers.Business do
use Oban.Worker, queue: "events", max_attempts: 10, unique: [period: 30]
@impl Worker
def perform(_args, %Oban.Job{attempt: attempt}) when attempt > 3 do
IO.inspect(attempt)
end
def perform(args, _job) do
IO.inspect(args)
end
end
The `perform/2` function receives an args map and an `Oban.Job` struct as arguments. This
allows workers to change the behavior of `perform/2` based on attributes of the Job, e.g. the
number of attempts or when it was inserted.
A job is considered complete if `perform/2` returns a non-error value, and it doesn't raise an
exception or have an unhandled exit.
Any of these return values or error events will fail the job:
* return `{:error, error}`
* return `:error`
* an unhandled exception
* an unhandled exit or throw
As an example of error tuple handling, this worker may return an error tuple when the value is
less than one:
defmodule MyApp.Workers.ErrorExample do
use Oban.Worker
@impl Worker
def perform(%{"value" => value}, _job) do
if value > 1 do
:ok
else
{:error, "invalid value given: " <> inspect(value)}
end
end
end
## Enqueuing Jobs
All workers implement a `new/2` function that converts an args map into a job changeset
suitable for inserting into the database for later execution:
%{in_the: "business", of_doing: "business"}
|> MyApp.Workers.Business.new()
|> Oban.insert()
The worker's defaults may be overridden by passing options:
%{vote_for: "none of the above"}
|> MyApp.Workers.Business.new(queue: "special", max_attempts: 5)
|> Oban.insert()
Uniqueness options may also be overridden by passing options:
%{expensive: "business"}
|> MyApp.Workers.Business.new(unique: [period: 120, fields: [:worker]])
|> Oban.insert()
Note that `unique` options aren't merged, they are overridden entirely.
See `Oban.Job` for all available options.
## Unique Jobs
The unique jobs feature lets you specify constraints to prevent enqueuing duplicate jobs.
Uniqueness is based on a combination of `args`, `queue`, `worker`, `state` and insertion time. It
is configured at the worker or job level using the following options:
* `:period` — The number of seconds until a job is no longer considered duplicate. You should
always specify a period.
* `:fields` — The fields to compare when evaluating uniqueness. The available fields are
`:args`, `:queue` and `:worker`, by default all three are used.
* `:states` — The job states that will be checked for duplicates. The available states are
`:available`, `:scheduled`, `:executing`, `:retryable` and `:completed`. By default all states
are checked, which prevents _any_ duplicates, even if the previous job has been completed.
For example, configure a worker to be unique across all fields and states for 60 seconds:
```elixir
use Oban.Worker, unique: [period: 60]
```
Configure the worker to be unique only by `:worker` and `:queue`:
```elixir
use Oban.Worker, unique: [fields: [:queue, :worker], period: 60]
```
Or, configure a worker to be unique until it has executed:
```elixir
use Oban.Worker, unique: [period: 300, states: [:available, :scheduled, :executing]]
```
### Stronger Guarantees
Oban's unique job support is built on a client side read/write cycle. That makes it subject to
duplicate writes if two transactions are started simultaneously. If you _absolutely must_ ensure
that a duplicate job isn't inserted then you will have to make use of unique constraints within
the database. `Oban.insert/2,4` will handle unique constraints safely through upsert support.
### Performance Note
If your application makes heavy use of unique jobs you may want to add indexes on the `args` and
`inserted_at` columns of the `oban_jobs` table. The other columns considered for uniqueness are
already covered by indexes.
## Customizing Backoff
When jobs fail they may be retried again in the future using a backoff algorithm. By default
the backoff is exponential with a fixed padding of 15 seconds. This may be too aggressive for
jobs that are resource intensive or need more time between retries. To make backoff scheduling
flexible a worker module may define a custom backoff function.
This worker defines a backoff function that delays retries using a variant of the historic
Resque/Sidekiq algorithm:
defmodule MyApp.SidekiqBackoffWorker do
use Oban.Worker
@impl Worker
def backoff(attempt) do
:math.pow(attempt, 4) + 15 + :rand.uniform(30) * attempt
end
@impl Worker
def perform(_args, _job) do
:do_business
end
end
Here are some alternative backoff strategies to consider:
* **constant** — delay by a fixed number of seconds, e.g. 1→15, 2→15, 3→15
* **linear** — delay for the same number of seconds as the current attempt, e.g. 1→1, 2→2, 3→3
* **squared** — delay by attempt number squared, e.g. 1→1, 2→4, 3→9
* **sidekiq** — delay by a base amount plus some jitter, e.g. 1→32, 2→61, 3→135
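As a hypothetical sketch (the `MyApp.LinearBackoffWorker` name is made up), the
**linear** strategy above could be implemented as:
defmodule MyApp.LinearBackoffWorker do
use Oban.Worker
@impl Worker
def backoff(attempt), do: attempt
@impl Worker
def perform(_args, _job), do: :ok
end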
"""
@moduledoc since: "0.1.0"
alias Oban.Job
@doc """
Build a job changeset for this worker with optional overrides.
See `Oban.Job.new/2` for the available options.
"""
@callback new(args :: Job.args(), opts :: [Job.option()]) :: Ecto.Changeset.t()
@doc """
Calculate the execution backoff, or the number of seconds to wait before retrying a failed job.
"""
@callback backoff(attempt :: pos_integer()) :: pos_integer()
@doc """
The `perform/2` function is called when the job is executed.
The value returned from `perform/2` is ignored, unless it returns an `{:error, reason}` tuple.
On an error return, or when `perform/2` raises an uncaught exception or throw, the error is
reported and the job is retried (provided there are attempts remaining).
"""
@callback perform(args :: Job.args(), job :: Job.t()) :: term()
@doc false
defmacro __using__(opts) do
quote location: :keep do
alias Oban.{Job, Worker}
@after_compile Worker
@behaviour Worker
@doc false
def __opts__ do
Keyword.put(unquote(opts), :worker, to_string(__MODULE__))
end
@impl Worker
def new(args, opts \\ []) when is_map(args) and is_list(opts) do
Job.new(args, Keyword.merge(__opts__(), opts, &Worker.resolve_opts/3))
end
@impl Worker
def backoff(attempt) when is_integer(attempt) do
Worker.default_backoff(attempt)
end
defoverridable Worker
end
end
@doc false
defmacro __after_compile__(%{module: module}, _) do
Enum.each(module.__opts__(), &validate_opt!/1)
end
@doc false
def resolve_opts(:unique, [_ | _] = opts_1, [_ | _] = opts_2) do
Keyword.merge(opts_1, opts_2)
end
def resolve_opts(_key, _opts, opts), do: opts
@doc false
@spec default_backoff(pos_integer(), non_neg_integer()) :: pos_integer()
def default_backoff(attempt, base_backoff \\ 15) when is_integer(attempt) do
trunc(:math.pow(2, attempt) + base_backoff)
end
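# For reference: default_backoff(1) #=> 17 and default_backoff(5) #=> 47,
# i.e. 2^attempt seconds plus the fixed 15 second padding described in the moduledoc.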
defp validate_opt!({:max_attempts, max_attempts}) do
unless is_integer(max_attempts) and max_attempts > 0 do
raise ArgumentError, "expected :max_attempts to be a positive integer"
end
end
defp validate_opt!({:queue, queue}) do
unless is_atom(queue) or is_binary(queue) do
raise ArgumentError, "expected :queue to be an atom or a binary, got: #{inspect(queue)}"
end
end
defp validate_opt!({:unique, unique}) do
unless is_list(unique) and Enum.all?(unique, &Job.valid_unique_opt?/1) do
raise ArgumentError, "unexpected unique options: #{inspect(unique)}"
end
end
defp validate_opt!({:worker, worker}) do
unless is_binary(worker) do
raise ArgumentError, "expected :worker to be a binary, got: #{inspect(worker)}"
end
end
defp validate_opt!(option) do
raise ArgumentError, "unknown option provided #{inspect(option)}"
end
end
| lib/oban/worker.ex | 0.919782 | 0.908252 | worker.ex | starcoder |
defmodule AWS.Route53Resolver do
@moduledoc """
When you create a VPC using Amazon VPC, you automatically get DNS resolution
within the VPC from Route 53 Resolver.
By default, Resolver answers DNS queries for VPC domain names such as domain
names for EC2 instances or Elastic Load Balancing load balancers. Resolver
performs recursive lookups against public name servers for all other domain
names.
You can also configure DNS resolution between your VPC and your network over a
Direct Connect or VPN connection:
## Forward DNS queries from resolvers on your network to Route 53 Resolver
DNS resolvers on your network can forward DNS queries to Resolver in a specified
VPC. This allows your DNS resolvers to easily resolve domain names for AWS
resources such as EC2 instances or records in a Route 53 private hosted zone.
For more information, see [How DNS Resolvers on Your Network Forward DNS Queries to Route 53
Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html#resolver-overview-forward-network-to-vpc)
in the *Amazon Route 53 Developer Guide*.
## Conditionally forward queries from a VPC to resolvers on your network
You can configure Resolver to forward queries that it receives from EC2
instances in your VPCs to DNS resolvers on your network. To forward selected
queries, you create Resolver rules that specify the domain names for the DNS
queries that you want to forward (such as example.com), and the IP addresses of
the DNS resolvers on your network that you want to forward the queries to. If a
query matches multiple rules (example.com, acme.example.com), Resolver chooses
the rule with the most specific match (acme.example.com) and forwards the query
to the IP addresses that you specified in that rule. For more information, see
[How Route 53 Resolver Forwards DNS Queries from Your VPCs to Your Network](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html#resolver-overview-forward-vpc-to-network)
in the *Amazon Route 53 Developer Guide*.
Like Amazon VPC, Resolver is Regional. In each Region where you have VPCs, you
can choose whether to forward queries from your VPCs to your network (outbound
queries), from your network to your VPCs (inbound queries), or both.
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: "Route53Resolver",
api_version: "2018-04-01",
content_type: "application/x-amz-json-1.1",
credential_scope: nil,
endpoint_prefix: "route53resolver",
global?: false,
protocol: "json",
service_id: "Route53Resolver",
signature_version: "v4",
signing_name: "route53resolver",
target_prefix: "Route53Resolver"
}
end
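# A hypothetical usage sketch (placeholder credentials and region;
# `AWS.Client.create/3` is assumed to be available from the aws-elixir client):
#
# client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
# {:ok, result, _response} =
# AWS.Route53Resolver.list_resolver_rules(client, %{"MaxResults" => 10})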
@doc """
Associates a `FirewallRuleGroup` with a VPC, to provide DNS filtering for the
VPC.
"""
def associate_firewall_rule_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "AssociateFirewallRuleGroup", input, options)
end
@doc """
Adds IP addresses to an inbound or an outbound Resolver endpoint.
If you want to add more than one IP address, submit one
`AssociateResolverEndpointIpAddress` request for each IP address.
To remove an IP address from an endpoint, see
[DisassociateResolverEndpointIpAddress](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DisassociateResolverEndpointIpAddress.html).
"""
def associate_resolver_endpoint_ip_address(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "AssociateResolverEndpointIpAddress", input, options)
end
@doc """
Associates an Amazon VPC with a specified query logging configuration.
Route 53 Resolver logs DNS queries that originate in all of the Amazon VPCs that
are associated with a specified query logging configuration. To associate more
than one VPC with a configuration, submit one `AssociateResolverQueryLogConfig`
request for each VPC.
The VPCs that you associate with a query logging configuration must be in the
same Region as the configuration.
To remove a VPC from a query logging configuration, see
[DisassociateResolverQueryLogConfig](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DisassociateResolverQueryLogConfig.html).
"""
def associate_resolver_query_log_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "AssociateResolverQueryLogConfig", input, options)
end
@doc """
Associates a Resolver rule with a VPC.
When you associate a rule with a VPC, Resolver forwards all DNS queries for the
domain name that is specified in the rule and that originate in the VPC. The
queries are forwarded to the IP addresses for the DNS resolvers that are
specified in the rule. For more information about rules, see
[CreateResolverRule](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_CreateResolverRule.html).
"""
def associate_resolver_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "AssociateResolverRule", input, options)
end
@doc """
Creates an empty firewall domain list for use in DNS Firewall rules.
You can populate the domains for the new list with a file, using
`ImportFirewallDomains`, or with domain strings, using `UpdateFirewallDomains`.
"""
def create_firewall_domain_list(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateFirewallDomainList", input, options)
end
@doc """
Creates a single DNS Firewall rule in the specified rule group, using the
specified domain list.
"""
def create_firewall_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateFirewallRule", input, options)
end
@doc """
Creates an empty DNS Firewall rule group for filtering DNS network traffic in a
VPC.
You can add rules to the new rule group by calling `CreateFirewallRule`.
"""
def create_firewall_rule_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateFirewallRuleGroup", input, options)
end
@doc """
Creates a Resolver endpoint.
There are two types of Resolver endpoints, inbound and outbound:
* An *inbound Resolver endpoint* forwards DNS queries to the DNS
service for a VPC from your network.
* An *outbound Resolver endpoint* forwards DNS queries from the DNS
service for a VPC to your network.
"""
def create_resolver_endpoint(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateResolverEndpoint", input, options)
end
@doc """
Creates a Resolver query logging configuration, which defines where you want
Resolver to save DNS query logs that originate in your VPCs.
Resolver can log queries only for VPCs that are in the same Region as the query
logging configuration.
To specify which VPCs you want to log queries for, you use
`AssociateResolverQueryLogConfig`. For more information, see
[AssociateResolverQueryLogConfig](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_AssociateResolverQueryLogConfig.html).
You can optionally use AWS Resource Access Manager (AWS RAM) to share a query
logging configuration with other AWS accounts. The other accounts can then
associate VPCs with the configuration. The query logs that Resolver creates for
a configuration include all DNS queries that originate in all VPCs that are
associated with the configuration.
"""
def create_resolver_query_log_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateResolverQueryLogConfig", input, options)
end
@doc """
For DNS queries that originate in your VPCs, specifies which Resolver endpoint
the queries pass through, one domain name that you want to forward to your
network, and the IP addresses of the DNS resolvers in your network.
"""
def create_resolver_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateResolverRule", input, options)
end
@doc """
Deletes the specified domain list.
"""
def delete_firewall_domain_list(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteFirewallDomainList", input, options)
end
@doc """
Deletes the specified firewall rule.
"""
def delete_firewall_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteFirewallRule", input, options)
end
@doc """
Deletes the specified firewall rule group.
"""
def delete_firewall_rule_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteFirewallRuleGroup", input, options)
end
@doc """
Deletes a Resolver endpoint.
The effect of deleting a Resolver endpoint depends on whether it's an inbound or
an outbound Resolver endpoint:
* **Inbound**: DNS queries from your network are no longer routed to
the DNS service for the specified VPC.
* **Outbound**: DNS queries from a VPC are no longer routed to your
network.
"""
def delete_resolver_endpoint(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteResolverEndpoint", input, options)
end
@doc """
Deletes a query logging configuration.
When you delete a configuration, Resolver stops logging DNS queries for all of
the Amazon VPCs that are associated with the configuration. This also applies if
the query logging configuration is shared with other AWS accounts, and the other
accounts have associated VPCs with the shared configuration.
Before you can delete a query logging configuration, you must first disassociate
all VPCs from the configuration. See
[DisassociateResolverQueryLogConfig](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DisassociateResolverQueryLogConfig.html).
If you used Resource Access Manager (RAM) to share a query logging configuration
with other accounts, you must stop sharing the configuration before you can
delete a configuration. The accounts that you shared the configuration with can
first disassociate VPCs that they associated with the configuration, but that's
not necessary. If you stop sharing the configuration, those VPCs are
automatically disassociated from the configuration.
"""
def delete_resolver_query_log_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteResolverQueryLogConfig", input, options)
end
@doc """
Deletes a Resolver rule.
Before you can delete a Resolver rule, you must disassociate it from all the
VPCs that you associated the Resolver rule with. For more information, see
[DisassociateResolverRule](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DisassociateResolverRule.html).
"""
def delete_resolver_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteResolverRule", input, options)
end
@doc """
Disassociates a `FirewallRuleGroup` from a VPC, to remove DNS filtering from the
VPC.
"""
def disassociate_firewall_rule_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DisassociateFirewallRuleGroup", input, options)
end
@doc """
Removes IP addresses from an inbound or an outbound Resolver endpoint.
If you want to remove more than one IP address, submit one
`DisassociateResolverEndpointIpAddress` request for each IP address.
To add an IP address to an endpoint, see
[AssociateResolverEndpointIpAddress](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_AssociateResolverEndpointIpAddress.html).
"""
def disassociate_resolver_endpoint_ip_address(%Client{} = client, input, options \\ []) do
Request.request_post(
client,
metadata(),
"DisassociateResolverEndpointIpAddress",
input,
options
)
end
@doc """
Disassociates a VPC from a query logging configuration.
Before you can delete a query logging configuration, you must first disassociate
all VPCs from the configuration. If you used AWS Resource Access Manager (AWS
RAM) to share a query logging configuration with other accounts, VPCs can be
disassociated from the configuration in the following ways:
* The accounts that you shared the configuration with can disassociate VPCs
from the configuration.
* You can stop sharing the configuration.
"""
def disassociate_resolver_query_log_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DisassociateResolverQueryLogConfig", input, options)
end
@doc """
Removes the association between a specified Resolver rule and a specified VPC.
If you disassociate a Resolver rule from a VPC, Resolver stops forwarding DNS
queries for the domain name that you specified in the Resolver rule.
"""
def disassociate_resolver_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DisassociateResolverRule", input, options)
end
@doc """
Retrieves the configuration of the firewall behavior provided by DNS Firewall
for a single VPC from Amazon Virtual Private Cloud (Amazon VPC).
"""
def get_firewall_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetFirewallConfig", input, options)
end
@doc """
Retrieves the specified firewall domain list.
"""
def get_firewall_domain_list(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetFirewallDomainList", input, options)
end
@doc """
Retrieves the specified firewall rule group.
"""
def get_firewall_rule_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetFirewallRuleGroup", input, options)
end
@doc """
Retrieves a firewall rule group association, which enables DNS filtering for a
VPC with one rule group.
A VPC can have more than one firewall rule group association, and a rule group
can be associated with more than one VPC.
"""
def get_firewall_rule_group_association(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetFirewallRuleGroupAssociation", input, options)
end
@doc """
Returns the AWS Identity and Access Management (AWS IAM) policy for sharing the
specified rule group.
You can use the policy to share the rule group using AWS Resource Access Manager
(AWS RAM).
"""
def get_firewall_rule_group_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetFirewallRuleGroupPolicy", input, options)
end
@doc """
Gets DNSSEC validation information for a specified resource.
"""
def get_resolver_dnssec_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverDnssecConfig", input, options)
end
@doc """
Gets information about a specified Resolver endpoint, such as whether it's an
inbound or an outbound Resolver endpoint, and the current status of the
endpoint.
"""
def get_resolver_endpoint(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverEndpoint", input, options)
end
@doc """
Gets information about a specified Resolver query logging configuration, such as
the number of VPCs that the configuration is logging queries for and the
location that logs are sent to.
"""
def get_resolver_query_log_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverQueryLogConfig", input, options)
end
@doc """
Gets information about a specified association between a Resolver query logging
configuration and an Amazon VPC.
When you associate a VPC with a query logging configuration, Resolver logs DNS
queries that originate in that VPC.
"""
def get_resolver_query_log_config_association(%Client{} = client, input, options \\ []) do
Request.request_post(
client,
metadata(),
"GetResolverQueryLogConfigAssociation",
input,
options
)
end
@doc """
Gets information about a query logging policy.
A query logging policy specifies the Resolver query logging operations and
resources that you want to allow another AWS account to be able to use.
"""
def get_resolver_query_log_config_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverQueryLogConfigPolicy", input, options)
end
@doc """
Gets information about a specified Resolver rule, such as the domain name that
the rule forwards DNS queries for and the ID of the outbound Resolver endpoint
that the rule is associated with.
"""
def get_resolver_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverRule", input, options)
end
@doc """
Gets information about an association between a specified Resolver rule and a
VPC.
You associate a Resolver rule and a VPC using
[AssociateResolverRule](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_AssociateResolverRule.html).
"""
def get_resolver_rule_association(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverRuleAssociation", input, options)
end
@doc """
Gets information about the Resolver rule policy for a specified rule.
A Resolver rule policy includes the rule that you want to share with another
account, the account that you want to share the rule with, and the Resolver
operations that you want to allow the account to use.
"""
def get_resolver_rule_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResolverRulePolicy", input, options)
end
@doc """
Imports domain names from a file into a domain list, for use in a DNS firewall
rule group.
Each domain specification in your domain list must satisfy the following
requirements:
* It can optionally start with `*` (asterisk).
* With the exception of the optional starting asterisk, it must only
contain the following characters: `A-Z`, `a-z`, `0-9`, `-` (hyphen).
* It must be from 1-255 characters in length.
"""
def import_firewall_domains(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ImportFirewallDomains", input, options)
end
@doc """
Retrieves the firewall configurations that you have defined.
DNS Firewall uses the configurations to manage firewall behavior for your VPCs.
A single call might return only a partial list of the configurations. For
information, see `MaxResults`.
"""
def list_firewall_configs(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListFirewallConfigs", input, options)
end
@doc """
Retrieves the firewall domain lists that you have defined.
For each firewall domain list, you can retrieve the domains that are defined for
a list by calling `ListFirewallDomains`.
A single call to this list operation might return only a partial list of the
domain lists. For information, see `MaxResults`.
"""
def list_firewall_domain_lists(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListFirewallDomainLists", input, options)
end
@doc """
Retrieves the domains that you have defined for the specified firewall domain
list.
A single call might return only a partial list of the domains. For information,
see `MaxResults`.
"""
def list_firewall_domains(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListFirewallDomains", input, options)
end
@doc """
Retrieves the firewall rule group associations that you have defined.
Each association enables DNS filtering for a VPC with one rule group.
A single call might return only a partial list of the associations. For
information, see `MaxResults`.
"""
def list_firewall_rule_group_associations(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListFirewallRuleGroupAssociations", input, options)
end
@doc """
Retrieves the minimal high-level information for the rule groups that you have
defined.
A single call might return only a partial list of the rule groups. For
information, see `MaxResults`.
"""
def list_firewall_rule_groups(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListFirewallRuleGroups", input, options)
end
@doc """
Retrieves the firewall rules that you have defined for the specified firewall
rule group.
DNS Firewall uses the rules in a rule group to filter DNS network traffic for a
VPC.
A single call might return only a partial list of the rules. For information,
see `MaxResults`.
"""
def list_firewall_rules(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListFirewallRules", input, options)
end
@doc """
Lists the configurations for DNSSEC validation that are associated with the
current AWS account.
"""
def list_resolver_dnssec_configs(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListResolverDnssecConfigs", input, options)
end
@doc """
Gets the IP addresses for a specified Resolver endpoint.
"""
def list_resolver_endpoint_ip_addresses(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListResolverEndpointIpAddresses", input, options)
end
@doc """
Lists all the Resolver endpoints that were created using the current AWS
account.
"""
def list_resolver_endpoints(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListResolverEndpoints", input, options)
end
@doc """
Lists information about associations between Amazon VPCs and query logging
configurations.
"""
def list_resolver_query_log_config_associations(%Client{} = client, input, options \\ []) do
Request.request_post(
client,
metadata(),
"ListResolverQueryLogConfigAssociations",
input,
options
)
end
@doc """
Lists information about the specified query logging configurations.
Each configuration defines where you want Resolver to save DNS query logs and
specifies the VPCs that you want to log queries for.
"""
def list_resolver_query_log_configs(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListResolverQueryLogConfigs", input, options)
end
@doc """
Lists the associations that were created between Resolver rules and VPCs using
the current AWS account.
"""
def list_resolver_rule_associations(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListResolverRuleAssociations", input, options)
end
@doc """
Lists the Resolver rules that were created using the current AWS account.
"""
def list_resolver_rules(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListResolverRules", input, options)
end
@doc """
Lists the tags that you associated with the specified resource.
"""
def list_tags_for_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTagsForResource", input, options)
end
@doc """
Attaches an AWS Identity and Access Management (AWS IAM) policy for sharing the
rule group.
You can use the policy to share the rule group using AWS Resource Access Manager
(AWS RAM).
"""
def put_firewall_rule_group_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutFirewallRuleGroupPolicy", input, options)
end
@doc """
Specifies an AWS account that you want to share a query logging configuration
with, the query logging configuration that you want to share, and the operations
that you want the account to be able to perform on the configuration.
"""
def put_resolver_query_log_config_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutResolverQueryLogConfigPolicy", input, options)
end
@doc """
Specifies a Resolver rule that you want to share with another account, the account
that you want to share the rule with, and the operations that you want the
account to be able to perform on the rule.
"""
def put_resolver_rule_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutResolverRulePolicy", input, options)
end
@doc """
Adds one or more tags to a specified resource.
"""
def tag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "TagResource", input, options)
end
@doc """
Removes one or more tags from a specified resource.
"""
def untag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UntagResource", input, options)
end
@doc """
Updates the configuration of the firewall behavior provided by DNS Firewall for
a single VPC from Amazon Virtual Private Cloud (Amazon VPC).
"""
def update_firewall_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateFirewallConfig", input, options)
end
@doc """
Updates the firewall domain list from an array of domain specifications.
"""
def update_firewall_domains(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateFirewallDomains", input, options)
end
@doc """
Updates the specified firewall rule.
"""
def update_firewall_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateFirewallRule", input, options)
end
@doc """
Changes the association of a `FirewallRuleGroup` with a VPC.
The association enables DNS filtering for the VPC.
"""
def update_firewall_rule_group_association(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateFirewallRuleGroupAssociation", input, options)
end
@doc """
Updates an existing DNSSEC validation configuration.
If there is no existing DNSSEC validation configuration, one is created.
"""
def update_resolver_dnssec_config(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateResolverDnssecConfig", input, options)
end
@doc """
Updates the name of an inbound or an outbound Resolver endpoint.
"""
def update_resolver_endpoint(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateResolverEndpoint", input, options)
end
@doc """
Updates settings for a specified Resolver rule.
`ResolverRuleId` is required, and all other parameters are optional. If you
don't specify a parameter, it retains its current value.
"""
def update_resolver_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateResolverRule", input, options)
end
end
| lib/aws/generated/route53_resolver.ex | 0.914362 | 0.428771 | route53_resolver.ex | starcoder |
defmodule K8s.Client do
@moduledoc """
Kubernetes API Client.
Functions return `K8s.Operation`s that represent kubernetes operations.
To run operations pass them to: `run/2`, or `run/3`
When specifying kinds, the format should be either the literal kubernetes kind name (e.g. `"ServiceAccount"`)
or the downcased version seen in kubectl (e.g. `"serviceaccount"`). A string or atom may be used.
## Examples
```elixir
"Deployment", "deployment", :Deployment, :deployment
"ServiceAccount", "serviceaccount", :ServiceAccount, :serviceaccount
"HorizontalPodAutoscaler", "horizontalpodautoscaler", :HorizontalPodAutoscaler, :horizontalpodautoscaler
```
`http_opts` to `K8s.Client.Runner` modules are `K8s.Client.HTTPProvider` HTTP options.
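A typical flow, sketched (assumes a kubeconfig at the given path):
{:ok, conn} = K8s.Conn.from_file("~/.kube/config")
operation = K8s.Client.list("v1", "Pod", namespace: "default")
{:ok, pods} = K8s.Client.run(conn, operation)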
"""
@type path_param :: {:name, String.t()} | {:namespace, binary() | :all}
@type path_params :: [path_param]
@mgmt_param_defaults %{
field_manager: "elixir",
force: true
}
alias K8s.Operation
alias K8s.Client.Runner.{Async, Base, Stream, Wait, Watch}
@doc "alias of `K8s.Client.Runner.Base.run/2`"
defdelegate run(conn, operation), to: Base
@doc "alias of `K8s.Client.Runner.Base.run/3`"
defdelegate run(conn, operation, http_opts), to: Base
@doc "alias of `K8s.Client.Runner.Async.run/3`"
defdelegate async(operations, conn), to: Async, as: :run
@doc "alias of `K8s.Client.Runner.Async.run/3`"
defdelegate async(operations, conn, http_opts), to: Async, as: :run
@doc "alias of `K8s.Client.Runner.Async.run/3`"
defdelegate parallel(operations, conn, http_opts), to: Async, as: :run
@doc "alias of `K8s.Client.Runner.Wait.run/3`"
defdelegate wait_until(conn, operation, wait_opts), to: Wait, as: :run
@doc "alias of `K8s.Client.Runner.Watch.run/3`"
defdelegate watch(conn, operation, http_opts), to: Watch, as: :run
@doc "alias of `K8s.Client.Runner.Watch.run/4`"
defdelegate watch(conn, operation, rv, http_opts), to: Watch, as: :run
@doc "alias of `K8s.Client.Runner.Watch.stream/2`"
defdelegate watch_and_stream(conn, operation), to: Watch, as: :stream
@doc "alias of `K8s.Client.Runner.Watch.stream/3`"
defdelegate watch_and_stream(conn, operation, http_opts), to: Watch, as: :stream
@doc "alias of `K8s.Client.Runner.Stream.run/2`"
defdelegate stream(conn, operation), to: Stream, as: :run
@spec stream(K8s.Conn.t(), K8s.Operation.t(), keyword) ::
{:error, K8s.Operation.Error.t()}
| {:ok,
({:cont, any} | {:halt, any} | {:suspend, any}, any ->
:badarg | {:halted, any} | {:suspended, any, (any -> any)})}
@doc "alias of `K8s.Client.Runner.Stream.run/3`"
defdelegate stream(conn, operation, http_opts), to: Stream, as: :run
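# Sketch: lazily stream a list across all namespaces (assumes `conn` is a `K8s.Conn`):
#
# op = K8s.Client.list("v1", "Pod", namespace: :all)
# {:ok, pod_stream} = K8s.Client.stream(conn, op)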
@doc """
Returns a `PATCH` operation to server-side-apply the given resource.
[K8s Docs](https://kubernetes.io/docs/reference/using-api/server-side-apply/):
## Examples
Apply a deployment with management parameters
iex> deployment = K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
...> K8s.Client.apply(deployment, field_manager: "my-operator", force: true)
%K8s.Operation{
method: :patch,
verb: :apply,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "test", name: "nginx"],
data: K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml"),
query_params: [fieldManager: "my-operator", force: true]
}
"""
@spec apply(map(), keyword()) :: Operation.t()
def apply(resource, mgmt_params \\ []) do
field_manager = Keyword.get(mgmt_params, :field_manager, @mgmt_param_defaults[:field_manager])
force = Keyword.get(mgmt_params, :force, @mgmt_param_defaults[:force])
Operation.build(:apply, resource, field_manager: field_manager, force: force)
end
@doc """
Returns a `PATCH` operation to server-side-apply the given subresource given a resource's details and a subresource map.
## Examples
Apply a status to a pod:
iex> pod_with_status_subresource = K8s.Resource.from_file!("test/support/manifests/nginx-pod.yaml") |> Map.put("status", %{"message" => "some message"})
...> K8s.Client.apply("v1", "pods/status", [namespace: "default", name: "nginx"], pod_with_status_subresource, field_manager: "my-operator", force: true)
%K8s.Operation{
method: :patch,
verb: :apply,
api_version: "v1",
name: "pods/status",
path_params: [namespace: "default", name: "nginx"],
data: K8s.Resource.from_file!("test/support/manifests/nginx-pod.yaml") |> Map.put("status", %{"message" => "some message"}),
query_params: [fieldManager: "my-operator", force: true]
}
"""
@spec apply(binary, binary | atom, Keyword.t(), map(), keyword()) :: Operation.t()
def apply(
api_version,
kind,
path_params,
subresource,
mgmt_params \\ []
) do
field_manager = Keyword.get(mgmt_params, :field_manager, @mgmt_param_defaults[:field_manager])
force = Keyword.get(mgmt_params, :force, @mgmt_param_defaults[:force])
Operation.build(:apply, api_version, kind, path_params, subresource,
field_manager: field_manager,
force: force
)
end
@doc """
Returns a `GET` operation for a resource given a Kubernetes manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Get will retrieve a specific resource object by name.
## Examples
Getting a pod
iex> pod = %{
...> "apiVersion" => "v1",
...> "kind" => "Pod",
...> "metadata" => %{"name" => "nginx-pod", "namespace" => "test"},
...> "spec" => %{"containers" => %{"image" => "nginx"}}
...> }
...> K8s.Client.get(pod)
%K8s.Operation{
method: :get,
verb: :get,
api_version: "v1",
name: "Pod",
path_params: [namespace: "test", name: "nginx-pod"],
}
"""
@spec get(map()) :: Operation.t()
def get(%{} = resource), do: Operation.build(:get, resource)
@doc """
Returns a `GET` operation for a resource by version, kind/resource type, name, and optionally namespace.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Get will retrieve a specific resource object by name.
## Examples
Get the nginx deployment in the default namespace:
iex> K8s.Client.get("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Operation{
method: :get,
verb: :get,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "test", name: "nginx"]
}
Get the nginx deployment in the default namespace by passing the kind as atom.
iex> K8s.Client.get("apps/v1", :deployment, namespace: "test", name: "nginx")
%K8s.Operation{
method: :get,
verb: :get,
api_version: "apps/v1",
name: :deployment,
path_params: [namespace: "test", name: "nginx"]}
Get the nginx deployment's status:
iex> K8s.Client.get("apps/v1", "deployments/status", namespace: "test", name: "nginx")
%K8s.Operation{
method: :get,
verb: :get,
api_version: "apps/v1",
name: "deployments/status",
path_params: [namespace: "test", name: "nginx"]}
Get the nginx deployment's scale:
iex> K8s.Client.get("v1", "deployments/scale", namespace: "test", name: "nginx")
%K8s.Operation{
method: :get,
verb: :get,
api_version: "v1",
name: "deployments/scale",
path_params: [namespace: "test", name: "nginx"]}
"""
@spec get(binary, binary | atom, path_params | nil) :: Operation.t()
def get(api_version, kind, path_params \\ []),
do: Operation.build(:get, api_version, kind, path_params)
@doc """
Returns a `GET` operation to list all resources by version, kind, and namespace.
Given the namespace `:all` as an atom, will perform a list across all namespaces.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> List will retrieve all resource objects of a specific type within a namespace, and the results can be restricted to resources matching a selector query.
> List All Namespaces: Like List but retrieves resources across all namespaces.
## Examples
iex> K8s.Client.list("v1", "Pod", namespace: "default")
%K8s.Operation{
method: :get,
verb: :list,
api_version: "v1",
name: "Pod",
path_params: [namespace: "default"]
}
iex> K8s.Client.list("apps/v1", "Deployment", namespace: :all)
%K8s.Operation{
method: :get,
verb: :list_all_namespaces,
api_version: "apps/v1",
name: "Deployment",
path_params: []
}
"""
@spec list(binary, binary | atom, path_params | nil) :: Operation.t()
def list(api_version, kind, path_params \\ [])
def list(api_version, kind, namespace: :all),
do: Operation.build(:list_all_namespaces, api_version, kind, [])
def list(api_version, kind, path_params),
do: Operation.build(:list, api_version, kind, path_params)
@doc """
Returns a `POST` `K8s.Operation` to create the given resource.
## Examples
iex> deployment = K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
...> K8s.Client.create(deployment)
%K8s.Operation{
method: :post,
path_params: [namespace: "test", name: "nginx"],
verb: :create,
api_version: "apps/v1",
name: "Deployment",
data: K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
}
"""
@spec create(map()) :: Operation.t()
def create(
%{
"apiVersion" => api_version,
"kind" => kind,
"metadata" => %{"namespace" => ns, "name" => name}
} = resource
) do
Operation.build(:create, api_version, kind, [namespace: ns, name: name], resource)
end
def create(
%{
"apiVersion" => api_version,
"kind" => kind,
"metadata" => %{"namespace" => ns, "generateName" => _}
} = resource
) do
Operation.build(:create, api_version, kind, [namespace: ns], resource)
end
# Support for creating resources that are cluster-scoped, like Namespaces.
def create(
%{"apiVersion" => api_version, "kind" => kind, "metadata" => %{"name" => name}} = resource
) do
Operation.build(:create, api_version, kind, [name: name], resource)
end
def create(
%{"apiVersion" => api_version, "kind" => kind, "metadata" => %{"generateName" => _}} =
resource
) do
Operation.build(:create, api_version, kind, [], resource)
end
@doc """
Returns a `POST` `K8s.Operation` to create the given subresource.
Used for creating subresources like `Scale` or `Eviction`.
## Examples
Evicting a pod
iex> eviction = K8s.Resource.from_file!("test/support/manifests/eviction-policy.yaml")
...> K8s.Client.create("v1", "pods/eviction", [namespace: "default", name: "nginx"], eviction)
%K8s.Operation{
api_version: "v1",
method: :post,
name: "pods/eviction",
path_params: [namespace: "default", name: "nginx"],
verb: :create,
data: K8s.Resource.from_file!("test/support/manifests/eviction-policy.yaml")
}
"""
@spec create(binary, binary | atom, Keyword.t(), map()) :: Operation.t()
def create(api_version, kind, path_params, subresource),
do: Operation.build(:create, api_version, kind, path_params, subresource)
@doc """
Returns a `POST` `K8s.Operation` to create the given subresource.
Used for creating subresources like `Scale` or `Eviction`.
## Examples
Evicting a pod
iex> pod = K8s.Resource.from_file!("test/support/manifests/nginx-pod.yaml")
...> eviction = K8s.Resource.from_file!("test/support/manifests/eviction-policy.yaml")
...> K8s.Client.create(pod, eviction)
%K8s.Operation{
api_version: "v1",
data: K8s.Resource.from_file!("test/support/manifests/eviction-policy.yaml"),
method: :post, name: {"Pod", "Eviction"},
path_params: [namespace: "default", name: "nginx"],
verb: :create
}
"""
@spec create(map(), map()) :: Operation.t()
def create(
%{
"apiVersion" => api_version,
"kind" => kind,
"metadata" => %{"namespace" => ns, "name" => name}
},
%{"kind" => subkind} = subresource
) do
Operation.build(
:create,
api_version,
{kind, subkind},
[namespace: ns, name: name],
subresource
)
end
# Support for creating resources that are cluster-scoped, like Namespaces.
def create(
%{"apiVersion" => api_version, "kind" => kind, "metadata" => %{"name" => name}},
%{"kind" => subkind} = subresource
) do
Operation.build(:create, api_version, {kind, subkind}, [name: name], subresource)
end
@doc """
Returns a `PATCH` operation to patch the given resource.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Patch will apply a change to a specific field. How the change is merged is defined per field. Lists may either be replaced or merged. Merging lists will not preserve ordering.
> Patches will never cause optimistic locking failures, and the last write will win. Patches are recommended when the full state is not read before an update, or when failing on optimistic locking is undesirable. When patching complex types, arrays and maps, how the patch is applied is defined on a per-field basis and may either replace the field's current value, or merge the contents into the current value.
## Examples
iex> deployment = K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
...> K8s.Client.patch(deployment)
%K8s.Operation{
method: :patch,
verb: :patch,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "test", name: "nginx"],
data: K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
}
"""
@spec patch(map()) :: Operation.t()
def patch(%{} = resource), do: Operation.build(:patch, resource)
@doc """
Returns a `PATCH` operation to patch the given subresource given a resource's details and a subresource map.
"""
@spec patch(binary, binary | atom, Keyword.t(), map()) :: Operation.t()
def patch(api_version, kind, path_params, subresource),
do: Operation.build(:patch, api_version, kind, path_params, subresource)
@doc """
Returns a `PATCH` operation to patch the given subresource given a resource map and a subresource map.
"""
@spec patch(map(), map()) :: Operation.t()
def patch(
%{
"apiVersion" => api_version,
"kind" => kind,
"metadata" => %{"namespace" => ns, "name" => name}
},
%{"kind" => subkind} = subresource
) do
Operation.build(
:patch,
api_version,
{kind, subkind},
[namespace: ns, name: name],
subresource
)
end
@doc """
Returns a `PUT` operation to replace/update the given resource.
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Replacing a resource object will update the resource by replacing the existing spec with the provided one. For read-then-write operations this is safe because an optimistic lock failure will occur if the resource was modified between the read and write. Note: The ResourceStatus will be ignored by the system and will not be updated. To update the status, one must invoke the specific status update operation.
> Note: Replacing a resource object may not result immediately in changes being propagated to downstream objects. For instance replacing a ConfigMap or Secret resource will not result in all Pods seeing the changes unless the Pods are restarted out of band.
## Examples
iex> deployment = K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
...> K8s.Client.update(deployment)
%K8s.Operation{
method: :put,
verb: :update,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "test", name: "nginx"],
data: K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
}
"""
@spec update(map()) :: Operation.t()
def update(%{} = resource), do: Operation.build(:update, resource)
@doc """
Returns a `PUT` operation to replace/update the given subresource given a resource's details and a subresource map.
Used for updating subresources like `Scale` or `Status`.
## Examples
Scaling a deployment
iex> scale = K8s.Resource.from_file!("test/support/manifests/scale-replicas.yaml")
...> K8s.Client.update("apps/v1", "deployments/scale", [namespace: "default", name: "nginx"], scale)
%K8s.Operation{
api_version: "apps/v1",
data: K8s.Resource.from_file!("test/support/manifests/scale-replicas.yaml"),
method: :put,
name: "deployments/scale",
path_params: [namespace: "default", name: "nginx"],
verb: :update
}
"""
@spec update(binary, binary | atom, Keyword.t(), map()) :: Operation.t()
def update(api_version, kind, path_params, subresource),
do: Operation.build(:update, api_version, kind, path_params, subresource)
@doc """
Returns a `PUT` operation to replace/update the given subresource given a resource map and a subresource map.
Used for updating subresources like `Scale` or `Status`.
## Examples
Scaling a deployment:
iex> deployment = K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
...> scale = K8s.Resource.from_file!("test/support/manifests/scale-replicas.yaml")
...> K8s.Client.update(deployment, scale)
%K8s.Operation{
api_version: "apps/v1",
method: :put,
path_params: [namespace: "test", name: "nginx"],
verb: :update,
data: K8s.Resource.from_file!("test/support/manifests/scale-replicas.yaml"),
name: {"Deployment", "Scale"}
}
"""
@spec update(map(), map()) :: Operation.t()
def update(
%{
"apiVersion" => api_version,
"kind" => kind,
"metadata" => %{"namespace" => ns, "name" => name}
},
%{"kind" => subkind} = subresource
) do
Operation.build(
:update,
api_version,
{kind, subkind},
[namespace: ns, name: name],
subresource
)
end
@doc """
Returns a `DELETE` operation for a resource by manifest. May be a partial manifest as long as it contains:
* apiVersion
* kind
* metadata.name
* metadata.namespace (if applicable)
[K8s Docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/):
> Delete will delete a resource. Depending on the specific resource, child objects may or may not be garbage collected by the server. See notes on specific resource objects for details.
## Examples
iex> deployment = K8s.Resource.from_file!("test/support/manifests/nginx-deployment.yaml")
...> K8s.Client.delete(deployment)
%K8s.Operation{
method: :delete,
verb: :delete,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "test", name: "nginx"]
}
"""
@spec delete(map()) :: Operation.t()
def delete(%{} = resource), do: Operation.build(:delete, resource)
@doc """
Returns a `DELETE` operation for a resource by version, kind, name, and optionally namespace.
## Examples
iex> K8s.Client.delete("apps/v1", "Deployment", namespace: "test", name: "nginx")
%K8s.Operation{
method: :delete,
verb: :delete,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "test", name: "nginx"]
}
"""
@spec delete(binary, binary | atom, path_params | nil) :: Operation.t()
def delete(api_version, kind, path_params),
do: Operation.build(:delete, api_version, kind, path_params)
@doc """
Returns a `DELETE` collection operation for all instances of a cluster scoped resource kind.
## Examples
iex> K8s.Client.delete_all("extensions/v1beta1", "PodSecurityPolicy")
%K8s.Operation{
method: :delete,
verb: :deletecollection,
api_version: "extensions/v1beta1",
name: "PodSecurityPolicy",
path_params: []
}
iex> K8s.Client.delete_all("storage.k8s.io/v1", "StorageClass")
%K8s.Operation{
method: :delete,
verb: :deletecollection,
api_version: "storage.k8s.io/v1",
name: "StorageClass",
path_params: []
}
"""
@spec delete_all(binary(), binary() | atom()) :: Operation.t()
def delete_all(api_version, kind) do
Operation.build(:deletecollection, api_version, kind, [])
end
@doc """
Returns a `DELETE` collection operation for all instances of a resource kind in a specific namespace.
## Examples
iex> K8s.Client.delete_all("apps/v1beta1", "ControllerRevision", namespace: "default")
%K8s.Operation{
method: :delete,
verb: :deletecollection,
api_version: "apps/v1beta1",
name: "ControllerRevision",
path_params: [namespace: "default"]
}
iex> K8s.Client.delete_all("apps/v1", "Deployment", namespace: "staging")
%K8s.Operation{
method: :delete,
verb: :deletecollection,
api_version: "apps/v1",
name: "Deployment",
path_params: [namespace: "staging"]
}
"""
@spec delete_all(binary(), binary() | atom(), namespace: binary()) :: Operation.t()
def delete_all(api_version, kind, namespace: namespace) do
Operation.build(:deletecollection, api_version, kind, namespace: namespace)
end
end
| lib/k8s/client.ex | 0.919728 | 0.653155 | client.ex | starcoder |
defmodule AWS.Sdb do
@moduledoc """
Amazon SimpleDB is a web service providing the core database functions of data
indexing and querying in the cloud.
By offloading the time and effort associated with building and operating a
web-scale database, SimpleDB provides developers the freedom to focus on
application development. A traditional, clustered relational database requires a
sizable upfront capital outlay, is complex to design, and often requires
extensive and repetitive database administration. Amazon SimpleDB is
dramatically simpler, requiring no schema, automatically indexing your data and
providing a simple API for storage and access. This approach eliminates the
administrative burden of data modeling, index maintenance, and performance
tuning. Developers gain access to this functionality within Amazon's proven
computing environment, are able to scale instantly, and pay only for what they
use.
Visit [http://aws.amazon.com/simpledb/](http://aws.amazon.com/simpledb/) for
more information.
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: nil,
api_version: "2009-04-15",
content_type: "application/x-www-form-urlencoded",
credential_scope: nil,
endpoint_prefix: "sdb",
global?: false,
protocol: "query",
service_id: nil,
signature_version: "v2",
signing_name: "sdb",
target_prefix: nil
}
end
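# A hypothetical usage sketch (placeholder credentials; the shape of `input`
# follows the SimpleDB query API and is assumed here):
#
# client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
# {:ok, _result, _response} =
# AWS.Sdb.batch_delete_attributes(client, %{"DomainName" => "my-domain", "Items" => []})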
@doc """
Performs multiple DeleteAttributes operations in a single call, which reduces
round trips and latencies.
This enables Amazon SimpleDB to optimize requests, which generally yields better
throughput.
If you specify BatchDeleteAttributes without attributes or values, all the
attributes for the item are deleted.
BatchDeleteAttributes is an idempotent operation; running it multiple times on
the same item or attribute doesn't result in an error.
The BatchDeleteAttributes operation succeeds or fails in its entirety. There are
no partial deletes. You can execute multiple BatchDeleteAttributes operations
and other operations in parallel. However, large numbers of concurrent
BatchDeleteAttributes calls can result in Service Unavailable (503) responses.
This operation is vulnerable to exceeding the maximum URL size when making a
REST request using the HTTP GET method.
This operation does not support conditions using Expected.X.Name,
Expected.X.Value, or Expected.X.Exists.
The following limitations are enforced for this operation:
* 1 MB request size
* 25 item limit per BatchDeleteAttributes operation
"""
def batch_delete_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchDeleteAttributes", input, options)
end
@doc """
The `BatchPutAttributes` operation creates or replaces attributes within one or
more items.
By using this operation, the client can perform multiple `PutAttributes`
operations with a single call. This yields savings in round trips and
latencies, enabling Amazon SimpleDB to optimize requests and generally produce
better throughput.
The client may specify the item name with the `Item.X.ItemName` parameter. The
client may specify new attributes using a combination of the
`Item.X.Attribute.Y.Name` and `Item.X.Attribute.Y.Value` parameters. The client
may specify the first attribute for the first item using the parameters
`Item.0.Attribute.0.Name` and `Item.0.Attribute.0.Value`, and for the second
attribute for the first item by the parameters `Item.0.Attribute.1.Name` and
`Item.0.Attribute.1.Value`, and so on.
Attributes are uniquely identified within an item by their name/value
combination. For example, a single item can have the attributes `{ "first_name",
"first_value" }` and `{ "first_name", "second_value" }`. However, it cannot have
two attribute instances where both the `Item.X.Attribute.Y.Name` and
`Item.X.Attribute.Y.Value` are the same.
Optionally, the requester can supply the `Replace` parameter for each individual
value. Setting this value to `true` will cause the new attribute values to
replace the existing attribute values. For example, if an item `I` has the
attributes `{ 'a', '1' }, { 'b', '2'}` and `{ 'b', '3' }` and the requester does
a BatchPutAttributes of `{'I', 'b', '4' }` with the Replace parameter set to
true, the final attributes of the item will be `{ 'a', '1' }` and `{ 'b', '4'
}`, replacing the previous values of the 'b' attribute with the new value.
You cannot specify an empty string as an item or as an attribute name. The
`BatchPutAttributes` operation succeeds or fails in its entirety. There are no
partial puts.
This operation is vulnerable to exceeding the maximum URL size when making a
REST request using the HTTP GET method. This operation does not support
conditions using `Expected.X.Name`, `Expected.X.Value`, or `Expected.X.Exists`.
You can execute multiple `BatchPutAttributes` operations and other operations in
parallel. However, large numbers of concurrent `BatchPutAttributes` calls can
result in Service Unavailable (503) responses.
The following limitations are enforced for this operation:
* 256 attribute name-value pairs per item
* 1 MB request size
* 1 billion attributes per domain
* 10 GB of total user data storage per domain
* 25 item limit per `BatchPutAttributes` operation
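A hypothetical call, assuming the query-protocol client serializes the map
keys verbatim as request parameters (the key layout follows the wire format
described above):

    client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
    input = %{
      "DomainName" => "MyDomain",
      "Item.0.ItemName" => "item1",
      "Item.0.Attribute.0.Name" => "color",
      "Item.0.Attribute.0.Value" => "blue",
      "Item.0.Attribute.0.Replace" => "true"
    }
    {:ok, _result, _response} = AWS.Sdb.batch_put_attributes(client, input)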
"""
def batch_put_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchPutAttributes", input, options)
end
@doc """
The `CreateDomain` operation creates a new domain.
The domain name should be unique among the domains associated with the Access
Key ID provided in the request. The `CreateDomain` operation may take 10 or more
seconds to complete.
CreateDomain is an idempotent operation; running it multiple times using the
same domain name will not result in an error response.
The client can create up to 100 domains per account.
If the client requires additional domains, go to [
http://aws.amazon.com/contact-us/simpledb-limit-request/](http://aws.amazon.com/contact-us/simpledb-limit-request/).
"""
def create_domain(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateDomain", input, options)
end
@doc """
Deletes one or more attributes associated with an item.
If all attributes of the item are deleted, the item is deleted.
If `DeleteAttributes` is called without being passed any attributes or values
specified, all the attributes for the item are deleted.
`DeleteAttributes` is an idempotent operation; running it multiple times on the
same item or attribute does not result in an error response.
Because Amazon SimpleDB makes multiple copies of item data and uses an eventual
consistency update model, performing a `GetAttributes` or `Select` operation
(read) immediately after a `DeleteAttributes` or `PutAttributes` operation
(write) might not return updated item data.
"""
def delete_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteAttributes", input, options)
end
@doc """
The `DeleteDomain` operation deletes a domain.
Any items (and their attributes) in the domain are deleted as well. The
`DeleteDomain` operation might take 10 or more seconds to complete.
Running `DeleteDomain` on a domain that does not exist or running the function
multiple times using the same domain name will not result in an error response.
"""
def delete_domain(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteDomain", input, options)
end
@doc """
Returns information about the domain, including when the domain was created, the
number of items and attributes in the domain, and the size of the attribute
names and values.
"""
def domain_metadata(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DomainMetadata", input, options)
end
@doc """
Returns all of the attributes associated with the specified item.
Optionally, the attributes returned can be limited to one or more attributes by
specifying an attribute name parameter.
If the item does not exist on the replica that was accessed for this operation,
an empty set is returned. The system does not return an error as it cannot
guarantee the item does not exist on other replicas.
If GetAttributes is called without being passed any attribute names, all the
attributes for the item are returned.
"""
def get_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetAttributes", input, options)
end
@doc """
The `ListDomains` operation lists all domains associated with the Access Key ID.
It returns domain names up to the limit set by `MaxNumberOfDomains`. A
`NextToken` is returned if there are more than `MaxNumberOfDomains` domains.
Calling `ListDomains` successive times with the `NextToken` provided by the
operation returns up to `MaxNumberOfDomains` more domain names with each
successive operation call.
"""
def list_domains(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListDomains", input, options)
end
@doc """
The PutAttributes operation creates or replaces attributes in an item.
The client may specify new attributes using a combination of the
`Attribute.X.Name` and `Attribute.X.Value` parameters. The client specifies the
first attribute by the parameters `Attribute.0.Name` and `Attribute.0.Value`,
the second attribute by the parameters `Attribute.1.Name` and
`Attribute.1.Value`, and so on.
Attributes are uniquely identified in an item by their name/value combination.
For example, a single item can have the attributes `{ "first_name",
"first_value" }` and `{ "first_name", second_value" }`. However, it cannot have
two attribute instances where both the `Attribute.X.Name` and
`Attribute.X.Value` are the same.
Optionally, the requestor can supply the `Replace` parameter for each individual
attribute. Setting this value to `true` causes the new attribute value to
replace the existing attribute value(s). For example, if an item has the
attributes `{ 'a', '1' }`, `{ 'b', '2'}` and `{ 'b', '3' }` and the requestor
calls `PutAttributes` using the attributes `{ 'b', '4' }` with the `Replace`
parameter set to true, the final attributes of the item are changed to `{ 'a',
'1' }` and `{ 'b', '4' }`, which replaces the previous values of the 'b'
attribute with the new value.
Using `PutAttributes` to replace attribute values that do not exist will not
result in an error response.
You cannot specify an empty string as an attribute name.
Because Amazon SimpleDB makes multiple copies of client data and uses an
eventual consistency update model, an immediate `GetAttributes` or `Select`
operation (read) immediately after a `PutAttributes` or `DeleteAttributes`
operation (write) might not return the updated data.
The following limitations are enforced for this operation:
* 256 total attribute name-value pairs per item
* One billion attributes per domain
* 10 GB of total user data storage per domain
"""
def put_attributes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutAttributes", input, options)
end
@doc """
The `Select` operation returns a set of attributes for `ItemNames` that match
the select expression.
`Select` is similar to the standard SQL SELECT statement.
The total size of the response cannot exceed 1 MB in total size. Amazon SimpleDB
automatically adjusts the number of items returned per page to enforce this
limit. For example, if the client asks to retrieve 2500 items, but each
individual item is 10 kB in size, the system returns 100 items and an
appropriate `NextToken` so the client can access the next page of results.
For information on how to construct select expressions, see Using Select to
Create Amazon SimpleDB Queries in the Developer Guide.
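A hypothetical call, assuming the client serializes the map key verbatim as
the `SelectExpression` request parameter:

    input = %{"SelectExpression" => "select * from MyDomain where color = 'blue'"}
    {:ok, _result, _response} = AWS.Sdb.select(client, input)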
"""
def select(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "Select", input, options)
end
end
|
lib/aws/generated/sdb.ex
|
defmodule GSS.Client do
@moduledoc """
Model of Client abstraction
This process is a Producer for this GenStage pipeline.
"""
use GenStage
require Logger
defmodule RequestParams do
@type t :: %__MODULE__{
method: atom(),
url: binary(),
body: HTTPoison.body(),
headers: HTTPoison.headers(),
options: Keyword.t()
}
defstruct [method: nil, url: nil, body: "", headers: [], options: []]
end
@type event :: {:request, GenStage.from(), RequestParams.t}
@type partition :: :write | :read
@spec start_link() :: GenServer.on_start()
def start_link do
GenStage.start_link(__MODULE__, :ok, name: __MODULE__)
end
@doc ~S"""
Issues an HTTP request with the given method to the given url.
This function is usually used indirectly by `get/3`, `post/4`, `put/4`, etc.
Args:
* `method` - HTTP method as an atom (`:get`, `:head`, `:post`, `:put`,
`:delete`, etc.)
* `url` - target url as a binary string or char list
* `body` - request body. See more below
* `headers` - HTTP headers as an orddict (e.g., `[{"Accept", "application/json"}]`)
* `options` - Keyword list of options
Body:
* binary, char list or an iolist
* `{:form, [{K, V}, ...]}` - send a form url encoded
* `{:file, "/path/to/file"}` - send a file
* `{:stream, enumerable}` - lazily send a stream of binaries/charlists
Options:
* `:result_timeout` - receive result timeout, in milliseconds. Defaults to 1 hour
* `:timeout` - timeout to establish a connection, in milliseconds. Default is 8000
* `:recv_timeout` - timeout used when receiving data from a connection, in milliseconds. Default is 5000
* `:proxy` - a proxy to be used for the request; it can be a regular url
or a `{Host, Port}` tuple
* `:proxy_auth` - proxy authentication `{User, Password}` tuple
* `:ssl` - SSL options supported by the `ssl` erlang module
* `:follow_redirect` - a boolean that causes redirects to be followed
* `:max_redirect` - an integer denoting the maximum number of redirects to follow
* `:params` - an enumerable consisting of two-item tuples that will be appended to the url as query string parameters
Timeouts can be an integer or `:infinity`
This function returns `{:ok, response}` or `{:ok, async_response}` if the
request is successful, `{:error, reason}` otherwise.
## Examples
request(:post, "https://my.website.com", "{\"foo\": 3}", [{"Accept", "application/json"}])
"""
@spec request(atom, binary, HTTPoison.body, HTTPoison.headers, Keyword.t) :: {:ok, HTTPoison.Response.t} | {:error, binary} | no_return
def request(method, url, body \\ "", headers \\ [], options \\ []) do
result_timeout = options[:result_timeout] || :timer.seconds(60*60)
request = %RequestParams{
method: method,
url: url,
body: body,
headers: headers,
options: options
}
GenStage.call(__MODULE__, {:request, request}, result_timeout)
end
@doc ~S"""
Starts a task with the request that must be awaited on.
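## Examples
    task = GSS.Client.request_async(:post, "https://example.com", ~s({"foo": 3}))
    {:ok, response} = Task.await(task)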
"""
@spec request_async(atom, binary, HTTPoison.body, HTTPoison.headers, Keyword.t) :: Task.t
def request_async(method, url, body \\ "", headers \\ [], options \\ []) do
Task.async(GSS.Client, :request, [method, url, body, headers, options])
end
## Callbacks
def init(:ok) do
dispatcher = {GenStage.PartitionDispatcher, partitions: [:write, :read], hash: &dispatcher_hash/1}
{:producer, :queue.new(), dispatcher: dispatcher}
end
@doc ~S"""
Divides requests into two partitions: `:read` and `:write`.
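For example, a `:get` request is routed to the `:read` partition:
    from = {self(), make_ref()}
    request = %RequestParams{method: :get, url: "https://example.com"}
    dispatcher_hash({:request, from, request})
    #=> {{:request, from, request}, :read}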
"""
@spec dispatcher_hash(event) :: {event, partition()}
def dispatcher_hash({:request, _from, request} = event) do
case request.method do
:get -> {event, :read}
_ -> {event, :write}
end
end
@doc ~S"""
Adds an event to the queue
"""
def handle_call({:request, request}, from, queue) do
updated_queue = :queue.in({:request, from, request}, queue)
{:noreply, [], updated_queue}
end
@doc ~S"""
Gives events for the next stage to process when requested
"""
def handle_demand(demand, queue) when demand > 0 do
{events, updated_queue} = take_from_queue(queue, demand, [])
{:noreply, Enum.reverse(events), updated_queue}
end
# take demand events from the queue
defp take_from_queue(queue, 0, events) do
{events, queue}
end
defp take_from_queue(queue, demand, events) do
case :queue.out(queue) do
{{:value, {kind, from, event}}, queue} ->
take_from_queue(queue, demand - 1, [{kind, from, event} | events])
{:empty, queue} ->
take_from_queue(queue, 0, events)
end
end
end
|
lib/elixir_google_spreadsheets/client.ex
|
defmodule ElixirRigidPhysics.Geometry.Plane do
@moduledoc """
Functions for creating and interacting with [planes](https://en.wikipedia.org/wiki/Plane_(geometry)).
Planes are all points `{x,y,z}` that fulfill the equation `ax + by + cz + d = 0`.
We represent planes in the form: `{a,b,c,d}`, where `{a,b,c}` is the normal of the plane, and `d` is `-dot(norm, point)`.
This is known as [Hessian normal form](http://mathworld.wolfram.com/HessianNormalForm.html).
"""
require Record
Record.defrecord(:plane, a: 0.0, b: 0.0, c: 0.0, d: 0.0)
@type plane :: record(:plane, a: number, b: number, c: number, d: number)
alias Graphmath.Vec3
@doc """
Creates a plane given its normal and a point on the plane (for finding d).
## Examples
iex> # test basic plane creation
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> Plane.create( {0.0, 1.0, 0.0}, {0.0, 0.0, 0.0})
{:plane, 0.0, 1.0, 0.0, 0.0}
iex> # test basic plane creation
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> Plane.create( {0.0, 1.0, 0.0}, {0.0, 1.0, 0.0})
{:plane, 0.0, 1.0, 0.0, -1.0}
"""
@spec create(Vec3.vec3(), Vec3.vec3()) :: plane
def create({nx, ny, nz} = _normal, {px, py, pz} = _point) do
d = -(nx * px + ny * py + nz * pz)
plane(a: nx, b: ny, c: nz, d: d)
end
@verysmol 1.0e-12
@doc """
Checks the distance from a point to the plane. Returns positive values if in front of plane, negative behind,
and zero if the point is coplanar.
## Examples
iex> # Test in front
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,1,0}, {0,0,0})
iex> Plane.distance_to_point(p, {0,5,0})
5.0
iex> # Test behind
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {1,0,0}, {0,0,0})
iex> Plane.distance_to_point(p, {-3,5,3})
-3.0
iex> # Test coplanar
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> sqrt_third = :math.sqrt(1/3)
iex> n = {sqrt_third, sqrt_third, sqrt_third}
iex> p = Plane.create( n, {0,1,0})
iex> t = Graphmath.Vec3.cross(n, {0,1,0})
iex> Plane.distance_to_point(p, Graphmath.Vec3.add({0,1,0}, t))
0.0
"""
@spec distance_to_point(plane, Vec3.vec3()) :: float
def distance_to_point(plane(a: a, b: b, c: c, d: d) = _plane, {px, py, pz} = _point) do
1.0 * (a * px + b * py + c * pz + d)
end
@doc """
Projects a point on to a plane.
## Examples
iex> # Project coplanar point
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,1,0}, {0,4.0,0})
iex> Plane.project_point_to_plane(p, {24.0,4.0,55.0})
{24.0, 4.0, 55.0}
iex> # Project front point
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,0,1.0}, {0.0,0,3.0})
iex> Plane.project_point_to_plane(p, {44.0,22.0, 43.0})
{44.0, 22.0, 3.0}
iex> # Project behind point
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,0,1.0}, {0.0,0,3.0})
iex> Plane.project_point_to_plane(p, {44.0,22.0, -43.0})
{44.0, 22.0, 3.0}
"""
@spec project_point_to_plane(plane, Vec3.vec3()) :: Vec3.vec3()
def project_point_to_plane(plane(a: a, b: b, c: c, d: d) = _plane, {px, py, pz} = point) do
distance = 1.0 * (a * px + b * py + c * pz + d)
{a, b, c}
|> Vec3.scale( -distance)
|> Vec3.add(point)
end
@doc """
Clips a point to exist in the positive half-space of a plane.
## Examples
iex> # Clip coplanar point
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,0,1.0}, {0.0,0,3.0})
iex> point = {24.0,4.0,3.0}
iex> Plane.clip_point(p, point)
{24.0, 4.0, 3.0}
iex> # Clip front point
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,0,1.0}, {0.0,0,3.0})
iex> point = {24.0,4.0,55.0}
iex> Plane.clip_point(p, point)
{24.0, 4.0, 55.0}
iex> # Clip behind point
iex> require ElixirRigidPhysics.Geometry.Plane, as: Plane
iex> p = Plane.create( {0,0,1.0}, {0.0,0,3.0})
iex> point = {24.0,4.0,-55.0}
iex> Plane.clip_point(p, point)
{24.0, 4.0, 3.0}
"""
@spec clip_point(plane, Vec3.vec3()) :: Vec3.vec3()
def clip_point(plane(a: a, b: b, c: c, d: d), {px, py, pz} = point) do
distance = 1.0 * (a * px + b * py + c * pz + d)
if distance >= @verysmol do
point
else
# we're behind it, must project onto plane.
{a, b, c}
|> Vec3.scale(-distance)
|> Vec3.add(point)
end
end
end
|
lib/geometry/plane.ex
|
defmodule Day4 do
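  @moduledoc """
  Solves an Advent of Code "Repose Record"-style puzzle: parse timestamped
  guard-shift logs, find the guard who sleeps the most and their sleepiest
  minute (part 1), and the guard/minute pair most frequently asleep (part 2).

  A sketch of usage, assuming `day4_input.txt` holds the puzzle input:

      {id, minute} = "day4_input.txt" |> Day4.from_file() |> Day4.most_asleep()
      id * minute
  """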
def from_file(path) do
File.stream!(path)
|> Enum.sort
|> Enum.map(&parse_row/1)
end
def parse_row(row) do
[date, hour, minute, action] = Regex.run(~r{\[(\d\d\d\d-\d\d-\d\d) (\d\d):(\d\d)\] (.*)}, row, capture: :all_but_first)
{date, hour, String.to_integer(minute), parse_action(action)}
end
def parse_action("wakes up"), do: :awake
def parse_action("falls asleep"), do: :asleep
def parse_action("Guard #" <> rest) do
[id, _, _] = String.split(rest)
String.to_integer(id)
end
def most_asleep(list) do
accumulated_minutes = Enum.reduce(list, Map.new, fn x, acc -> sum_minutes(x, acc) end)
{most_asleep_id, _} =
  accumulated_minutes
  |> Map.to_list()
  |> Enum.filter(fn {k, _} -> is_number(k) end)
  |> Enum.reduce(fn {id, minutes}, {a_id, a_minutes} ->
    if minutes > a_minutes, do: {id, minutes}, else: {a_id, a_minutes}
  end)
{most_asleep_minute, _} =
  accumulated_minutes
  |> Map.to_list()
  |> Enum.filter(fn {k, _} -> is_tuple(k) end)
  |> Enum.filter(fn {{id, _}, _} -> id == most_asleep_id end)
  |> Enum.reduce({0, 0}, fn {{_, minute}, minutes}, {_, a_minutes} = acc ->
    if minutes > a_minutes, do: {minute, minutes}, else: acc
  end)
{most_asleep_id, most_asleep_minute}
end
def sum_minutes({_, _, _, id}, acc) when is_number(id), do: Map.put(acc, :current, id)
def sum_minutes({_, _, minute, :asleep}, %{:current => _} = acc), do: Map.put(acc, :asleep, minute)
def sum_minutes({_, _, minute, :awake}, %{:current => _} = acc), do: add_minutes(acc, Map.get(acc, :asleep), minute - 1)
def add_minutes(acc, start, stop) do
id = Map.get(acc, :current)
start..stop
|> Enum.reduce(acc, fn minute, map ->
  map
  |> Map.update({id, minute}, 1, &(&1 + 1))
  |> Map.update(id, 1, &(&1 + 1))
end)
end
def most_asleep_on_minute(list) do
accumulated_minutes = Enum.reduce(list, Map.new, fn x, acc -> sum_minutes(x, acc) end)
{id, most_asleep_minute, _} =
  accumulated_minutes
  |> Map.to_list()
  |> Enum.filter(fn {k, _} -> is_tuple(k) end)
  |> Enum.reduce({0, 0, 0}, fn {{id, minute}, minutes}, {_, _, a_minutes} = acc ->
    if minutes > a_minutes, do: {id, minute, minutes}, else: acc
  end)
{id, most_asleep_minute}
end
def solution do
IO.puts("#{from_file("day4_input.txt") |> most_asleep |> (fn {id, minute} -> id * minute end).()}")
IO.puts("#{from_file("day4_input.txt") |> most_asleep_on_minute |> (fn {id, minute} -> id * minute end).()}")
end
end
|
lib/day4.ex
|
defmodule Cocktail.Builder.String do
@moduledoc """
Build human readable strings from schedules.
This module exposes functions for building human readable string
representations of schedules. It currently only represents the recurrence rules
of a schedule, and doesn't indicate the start time, duration, nor any
recurrence times or exception times. This is mainly useful for quick glances
at schedules in IEx sessions (because it's used for the `inspect`
implementation) and for simple doctests.
"""
alias Cocktail.{Rule, Schedule}
alias Cocktail.Validation.{Day, HourOfDay, Interval, MinuteOfHour, SecondOfMinute}
# These are the keys represented in the string representation of a schedule.
@represented_keys [:interval, :day, :hour_of_day, :minute_of_hour, :second_of_minute]
@typep represented_keys :: :interval | :day | :hour_of_day | :minute_of_hour | :second_of_minute
@doc """
Builds a human readable string representation of a `t:Cocktail.Schedule.t/0`.
## Examples
iex> alias Cocktail.Schedule
...> schedule = Schedule.new(~N[2017-01-01 06:00:00])
...> schedule = Schedule.add_recurrence_rule(schedule, :daily, interval: 2, hours: [10, 12])
...> build(schedule)
"Every 2 days on the 10th and 12th hours of the day"
"""
@spec build(Schedule.t()) :: String.t()
def build(%Schedule{recurrence_rules: recurrence_rules}) do
recurrence_rules
|> Enum.map(&build_rule/1)
|> Enum.join(" / ")
end
@doc false
@spec build_rule(Rule.t()) :: String.t()
def build_rule(%Rule{validations: validations_map}) do
for key <- @represented_keys, validation = validations_map[key], !is_nil(validation) do
build_validation_part(key, validation)
end
|> Enum.join(" ")
end
@spec build_validation_part(represented_keys(), Cocktail.Validation.t()) :: String.t()
defp build_validation_part(:interval, %Interval{interval: interval, type: type}), do: build_interval(type, interval)
defp build_validation_part(:day, %Day{days: days}), do: days |> build_days()
defp build_validation_part(:hour_of_day, %HourOfDay{hours: hours}), do: hours |> build_hours()
defp build_validation_part(:minute_of_hour, %MinuteOfHour{minutes: minutes}), do: minutes |> build_minutes()
defp build_validation_part(:second_of_minute, %SecondOfMinute{seconds: seconds}), do: seconds |> build_seconds()
# intervals
@spec build_interval(Cocktail.frequency(), pos_integer) :: String.t()
defp build_interval(:secondly, 1), do: "Secondly"
defp build_interval(:secondly, n), do: "Every #{n} seconds"
defp build_interval(:minutely, 1), do: "Minutely"
defp build_interval(:minutely, n), do: "Every #{n} minutes"
defp build_interval(:hourly, 1), do: "Hourly"
defp build_interval(:hourly, n), do: "Every #{n} hours"
defp build_interval(:daily, 1), do: "Daily"
defp build_interval(:daily, n), do: "Every #{n} days"
defp build_interval(:weekly, 1), do: "Weekly"
defp build_interval(:weekly, n), do: "Every #{n} weeks"
# "day" validation
@spec build_days([Cocktail.day_number()]) :: String.t()
defp build_days(days) do
days
|> Enum.sort()
|> build_days_sentence()
end
@spec build_days_sentence([Cocktail.day_number()]) :: String.t()
defp build_days_sentence([0, 6]), do: "on Weekends"
defp build_days_sentence([1, 2, 3, 4, 5]), do: "on Weekdays"
defp build_days_sentence(days), do: "on " <> (days |> Enum.map(&on_days/1) |> sentence)
@spec on_days(Cocktail.day_number()) :: String.t()
defp on_days(0), do: "Sundays"
defp on_days(1), do: "Mondays"
defp on_days(2), do: "Tuesdays"
defp on_days(3), do: "Wednesdays"
defp on_days(4), do: "Thursdays"
defp on_days(5), do: "Fridays"
defp on_days(6), do: "Saturdays"
# "hour of day" validation
@spec build_hours([Cocktail.hour_number()]) :: String.t()
defp build_hours(hours) do
hours
|> Enum.sort()
|> build_hours_sentence()
end
@spec build_hours_sentence([Cocktail.hour_number()]) :: String.t()
defp build_hours_sentence([hour]), do: "on the #{ordinalize(hour)} hour of the day"
defp build_hours_sentence(hours),
do: "on the " <> (hours |> Enum.map(&ordinalize/1) |> sentence()) <> " hours of the day"
# "minute of hour" validation
@spec build_minutes([Cocktail.minute_number()]) :: String.t()
defp build_minutes(minutes) do
minutes
|> Enum.sort()
|> build_minutes_sentence()
end
@spec build_minutes_sentence([Cocktail.minute_number()]) :: String.t()
defp build_minutes_sentence([minute]), do: "on the #{ordinalize(minute)} minute of the hour"
defp build_minutes_sentence(minutes),
do: "on the " <> (minutes |> Enum.map(&ordinalize/1) |> sentence()) <> " minutes of the hour"
# "second of minute" validation
@spec build_seconds([Cocktail.second_number()]) :: String.t()
defp build_seconds(seconds) do
seconds
|> Enum.sort()
|> build_seconds_sentence()
end
@spec build_seconds_sentence([Cocktail.second_number()]) :: String.t()
defp build_seconds_sentence([second]), do: "on the #{ordinalize(second)} second of the minute"
defp build_seconds_sentence(seconds) do
"on the " <> (seconds |> Enum.map(&ordinalize/1) |> sentence()) <> " seconds of the minute"
end
# utils
@spec sentence([String.t()]) :: String.t()
defp sentence([first, second]), do: "#{first} and #{second}"
defp sentence(words) do
{words, [last]} = Enum.split(words, -1)
first_half = words |> Enum.join(", ")
"#{first_half} and #{last}"
end
@spec ordinalize(integer) :: String.t()
defp ordinalize(n) when rem(n, 100) in 4..20, do: "#{n}th"
defp ordinalize(n) do
case rem(n, 10) do
1 -> "#{n}st"
2 -> "#{n}nd"
3 -> "#{n}rd"
_ -> "#{n}th"
end
end
end
|
lib/cocktail/builder/string.ex
|
defmodule Tensorflow.AttrValue.ListValue do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
s: [binary],
i: [integer],
f: [float | :infinity | :negative_infinity | :nan],
b: [boolean],
type: [Tensorflow.DataType.t()],
shape: [Tensorflow.TensorShapeProto.t()],
tensor: [Tensorflow.TensorProto.t()],
func: [Tensorflow.NameAttrList.t()]
}
defstruct [:s, :i, :f, :b, :type, :shape, :tensor, :func]
field(:s, 2, repeated: true, type: :bytes)
field(:i, 3, repeated: true, type: :int64, packed: true)
field(:f, 4, repeated: true, type: :float, packed: true)
field(:b, 5, repeated: true, type: :bool, packed: true)
field(:type, 6,
repeated: true,
type: Tensorflow.DataType,
enum: true,
packed: true
)
field(:shape, 7, repeated: true, type: Tensorflow.TensorShapeProto)
field(:tensor, 8, repeated: true, type: Tensorflow.TensorProto)
field(:func, 9, repeated: true, type: Tensorflow.NameAttrList)
end
defmodule Tensorflow.AttrValue do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
value: {atom, any}
}
defstruct [:value]
oneof(:value, 0)
field(:s, 2, type: :bytes, oneof: 0)
field(:i, 3, type: :int64, oneof: 0)
field(:f, 4, type: :float, oneof: 0)
field(:b, 5, type: :bool, oneof: 0)
field(:type, 6, type: Tensorflow.DataType, enum: true, oneof: 0)
field(:shape, 7, type: Tensorflow.TensorShapeProto, oneof: 0)
field(:tensor, 8, type: Tensorflow.TensorProto, oneof: 0)
field(:list, 1, type: Tensorflow.AttrValue.ListValue, oneof: 0)
field(:func, 10, type: Tensorflow.NameAttrList, oneof: 0)
field(:placeholder, 9, type: :string, oneof: 0)
end
defmodule Tensorflow.NameAttrList.AttrEntry do
@moduledoc false
use Protobuf, map: true, syntax: :proto3
@type t :: %__MODULE__{
key: String.t(),
value: Tensorflow.AttrValue.t() | nil
}
defstruct [:key, :value]
field(:key, 1, type: :string)
field(:value, 2, type: Tensorflow.AttrValue)
end
defmodule Tensorflow.NameAttrList do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
name: String.t(),
attr: %{String.t() => Tensorflow.AttrValue.t() | nil}
}
defstruct [:name, :attr]
field(:name, 1, type: :string)
field(:attr, 2,
repeated: true,
type: Tensorflow.NameAttrList.AttrEntry,
map: true
)
end
|
lib/tensorflow/core/framework/attr_value.pb.ex
|
defmodule Juvet.Superintendent do
@moduledoc """
Process that acts as the brains around processes within Juvet.
It starts the `Juvet.BotFactory` process only if the configuration
is valid.
It delegates calls around the bot processes to the bot factory supervisor.
"""
use GenServer
defmodule State do
defstruct factory_supervisor: nil, config: %{}
end
# Client API
@doc """
Starts a `Superintendent` process linked to the current process
with the configuration specified.
"""
def start_link(config) do
GenServer.start_link(__MODULE__, config, name: __MODULE__)
end
@doc """
Connects a `Juvet.Bot` process with the Slack platform and the given parameters.
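## Example
A sketch with an illustrative team id:
```
Juvet.Superintendent.connect_bot(bot, :slack, %{team_id: "T12345"})
```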
"""
def connect_bot(bot, :slack, parameters = %{team_id: _team_id}) do
GenServer.cast(__MODULE__, {:connect_bot, bot, :slack, parameters})
end
@doc """
Creates a `Juvet.Bot` process with the specified name under the
`Juvet.FactorySupervisor`.
## Example
```
{:ok, bot} = Juvet.Superintendent.create_bot("MyBot")
```
"""
def create_bot(name) do
GenServer.call(__MODULE__, {:create_bot, name})
end
def find_bot(name) do
GenServer.call(__MODULE__, {:find_bot, name})
end
@doc """
Returns the current state of the `Superintendent` process.
## Example
```
state = Juvet.Superintendent.get_state()
```
"""
def get_state, do: GenServer.call(__MODULE__, :get_state)
# Server Callbacks
@doc false
def init(config) do
if Juvet.Config.valid?(config) do
send(self(), :start_factory_supervisor)
end
{:ok, %State{config: config}}
end
@doc false
def handle_call(
{:create_bot, name},
_from,
state = %{factory_supervisor: factory_supervisor, config: config}
) do
reply =
Juvet.FactorySupervisor.add_bot(factory_supervisor, config[:bot], name)
{:reply, reply, state}
end
@doc false
def handle_call(
{:find_bot, name},
_from,
state
) do
reply =
case String.to_atom(name) |> Process.whereis() do
nil -> {:error, "Bot named '#{name}' not found"}
pid -> {:ok, pid}
end
{:reply, reply, state}
end
@doc false
def handle_call(:get_state, _from, state) do
  # Reply with a plain map of the state; the struct remains the process state.
  {:reply, Map.from_struct(state), state}
end
@doc false
def handle_cast(
{:connect_bot, bot, platform, parameters},
state = %{config: config}
) do
bot_module = Juvet.Config.bot(config)
bot_module.connect(bot, platform, parameters)
{:noreply, state}
end
@doc false
def handle_info(:start_factory_supervisor, state) do
{:ok, factory_supervisor} =
Supervisor.start_child(
Juvet.BotFactory,
Supervisor.child_spec({Juvet.FactorySupervisor, [[]]},
restart: :temporary
)
)
{:noreply, %{state | factory_supervisor: factory_supervisor}}
end
end
|
lib/juvet/superintendent.ex
|
defmodule Instream.Series do
@moduledoc """
## Series Definition
Series definitions can be used to have a fixed structured usable for
reading and writing data to an InfluxDB server:
defmodule MySeries.CPULoad do
use Instream.Series
series do
measurement "cpu_load"
tag :host, default: "www"
tag :core
field :value, default: 100
field :value_desc
end
end
The macros `tag/2` and `field/2` both accept a keyword tuple with a
`:default` entry. This value will be pre-assigned when using the data
struct with all other fields or tags being set to `nil`.
### Structs
Each of your series definitions will register three separate structs.
Based on the aforementioned `MySeries.CPULoad` you will have access
to the following structs:
%MySeries.CPULoad{
fields: %MySeries.CPULoad.Fields{value: 100, value_desc: nil},
tags: %MySeries.CPULoad.Tags{host: "www", core: nil},
timestamp: nil
}
`:timestamp` is expected to be either a
unix nanosecond or an RFC3339 timestamp.
### Compile-Time Series Validation
Defining a series triggers a validation function during compilation.
This validation for example prevents the usage of a field and tag sharing
the same name. Some internal keys like `:time` will also raise an
`ArgumentError` during compilation.
You can deactivate this compile time validation by passing
`skip_validation: true` in your series module:
defmodule MySeries.ConflictButAccepted do
use Instream.Series, skip_validation: true
series do
tag :conflict
field :conflict
end
end
Validations performed:
- having `use Instream.Series` requires also calling `series do .. end`
- a measurement must be defined
- at least one field must be defined
- fields and tags must not share a name
- the names `:_field`, `:_measurement`, and `:time` are not allowed to be
used for fields or tags
## Reading Series Points (Hydration)
Whenever you want to convert a plain map or a query result into a specific
series you can use the built-in hydration methods:
MySeries.from_map(%{
timestamp: 1_234_567_890,
some_tag: "hydrate",
some_field: 123
})
~S(SELECT * FROM "my_measurement")
|> MyConnection.query()
|> MySeries.from_result()
The timestamp itself is kept "as is" for integer values, timestamps in
RFC3339 format (e.g. `"1970-01-01T01:00:00.000+01:00"`) will be converted
to `:nanosecond` integer values.
Please be aware that when using an `OTP` release prior to `21.0` the time
will be truncated to `:microsecond` precision due to
`:calendar.rfc3339_to_system_time/2` not being available and
`DateTime.from_iso8601/1` only supporting microseconds.
## Writing Series Points
You can then use your series module to assemble a data point (one at a time)
for writing:
data = %MySeries{}
data = %{data | fields: %{data.fields | value: 17}}
data = %{data | tags: %{data.tags | bar: "bar", foo: "foo"}}
And then write one or many at once:
MyConnection.write(data)
MyConnection.write([point_1, point_2, point_3])
If you want to pass an explicit timestamp you can use the key `:timestamp`:
data = %MySeries{}
data = %{data | timestamp: 1_439_587_926_000_000_000}
The timestamp is (by default) expected to be a nanosecond unix timestamp.
To use different precision (for all points in this write operation!) you can
change this value by modifying your write call:
data = %MySeries{}
data = %{data | timestamp: 1_439_587_926}
MyConnection.write(data, precision: :second)
Supported precision types are:
- `:hour`
- `:minute`
- `:second`
- `:millisecond`
- `:microsecond`
- `:nanosecond`
- `:rfc3339`
Please be aware that the UDP protocol writer (`Instream.Writer.UDP`) does
not support custom timestamp precisions. All UDP timestamps are implicitly
expected to already be at nanosecond precision.
"""
alias Instream.Series.Hydrator
alias Instream.Series.Validator
defmacro __using__(opts) do
quote location: :keep do
unless unquote(opts[:skip_validation]) do
@after_compile unquote(__MODULE__)
end
import unquote(__MODULE__), only: [series: 1]
end
end
defmacro __after_compile__(%{module: module}, _bytecode) do
Validator.proper_series?(module)
end
@doc """
Defines the series.
"""
defmacro series(do: block) do
quote location: :keep do
@behaviour unquote(__MODULE__)
@measurement nil
Module.register_attribute(__MODULE__, :fields_raw, accumulate: true)
Module.register_attribute(__MODULE__, :tags_raw, accumulate: true)
try do
# scoped import
import unquote(__MODULE__)
unquote(block)
after
:ok
end
@fields_struct Enum.sort(@fields_raw, &unquote(__MODULE__).__sort_fields__/2)
@tags_struct Enum.sort(@tags_raw, &unquote(__MODULE__).__sort_tags__/2)
def __meta__(:fields), do: Keyword.keys(@fields_struct)
def __meta__(:measurement), do: @measurement
def __meta__(:tags), do: Keyword.keys(@tags_struct)
Module.eval_quoted(__ENV__, [
unquote(__MODULE__).__struct_fields__(@fields_struct),
unquote(__MODULE__).__struct_tags__(@tags_struct)
])
Module.eval_quoted(__ENV__, [
unquote(__MODULE__).__struct__(__MODULE__)
])
def from_map(data), do: Hydrator.from_map(__MODULE__, data)
def from_result(data), do: Hydrator.from_result(__MODULE__, data)
end
end
@doc """
Provides additional metadata for a series.
## Available information
- `:fields`: the fields in the series
- `:measurement`: the measurement of the series
- `:tags`: the available tags defining the series
"""
@callback __meta__(:fields | :measurement | :tags) :: any
@doc """
Creates a series dataset from any given map.
Keys not defined in the series are silently dropped.
"""
@callback from_map(map) :: struct
@doc """
Creates a list of series datasets from a query result.
Keys not defined in the series are silently dropped.
"""
@callback from_result(map | [map]) :: [struct]
@doc """
Defines a field in the series.
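For example, inside a `series do ... end` block (as in the module overview):

    field :value, default: 100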
"""
defmacro field(name, opts \\ []) do
quote do
@fields_raw {unquote(name), unquote(opts[:default])}
end
end
@doc """
Defines the measurement of the series.
"""
defmacro measurement(name) do
quote do
@measurement unquote(name)
end
end
@doc """
Defines a tag in the series.
"""
defmacro tag(name, opts \\ []) do
quote do
@tags_raw {unquote(name), unquote(opts[:default])}
end
end
@doc false
def __sort_fields__({left, _}, {right, _}), do: left < right
@doc false
def __sort_tags__({left, _}, {right, _}), do: left < right
@doc false
def __struct__(series) do
quote do
@type t :: %unquote(series){
fields: %unquote(series).Fields{},
tags: %unquote(series).Tags{},
timestamp: non_neg_integer | binary | nil
}
defstruct fields: %unquote(series).Fields{},
tags: %unquote(series).Tags{},
timestamp: nil
end
end
@doc false
def __struct_fields__(fields) do
quote do
defmodule Fields do
@moduledoc false
defstruct unquote(Macro.escape(fields))
end
end
end
@doc false
def __struct_tags__(tags) do
quote do
defmodule Tags do
@moduledoc false
defstruct unquote(Macro.escape(tags))
end
end
end
end
|
lib/instream/series.ex
|
defmodule ExPixBRCode.Payments.DynamicPixLoader do
@moduledoc """
Load either a :dynamic_payment_immediate or a :dynamic_payment_with_due_date from a url.
Dynamic payments have a URL inside their text representation which we should use to
validate the certificate chain and signature and fill a Pix payment model.
"""
alias ExPixBRCode.Changesets
alias ExPixBRCode.JWS
alias ExPixBRCode.JWS.Models.{JWKS, JWSHeaders}
alias ExPixBRCode.Payments.Models.{DynamicImmediatePixPayment, DynamicPixPaymentWithDueDate}
@valid_query_params [:cod_mun, :dpp]
defguardp is_success(status) when status >= 200 and status < 300
@doc """
Given a `t:Tesla.Client.t/0` and a PIX payment URL, it loads its details after validation.
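A minimal sketch (the middleware stack and URL are illustrative):

    client = Tesla.client([Tesla.Middleware.FollowRedirects])
    {:ok, payment} =
      ExPixBRCode.Payments.DynamicPixLoader.load_pix(
        client,
        "https://pix.example.com/cobv/abc123"
      )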
"""
@spec load_pix(Tesla.Client.t(), String.t(), Keyword.t()) ::
        {:ok, DynamicImmediatePixPayment.t() | DynamicPixPaymentWithDueDate.t()}
        | {:error, atom()}
def load_pix(client, url, opts \\ []) do
query_params = extract_query_params(opts)
case Tesla.get(client, url, query: query_params) do
{:ok, %{status: status} = env} when is_success(status) ->
do_process_jws(client, url, env.body, opts)
{:ok, _} ->
{:error, :http_status_not_success}
{:error, _} = err ->
err
end
end
defp do_process_jws(client, url, jws, opts) do
with {:ok, header_claims} <- Joken.peek_header(jws),
{:ok, header_claims} <-
Changesets.cast_and_apply(JWSHeaders, header_claims),
{:ok, jwks_storage} <- fetch_jwks_storage(client, header_claims, opts),
:ok <- verify_certificate(jwks_storage.certificate),
:ok <- verify_alg(jwks_storage.jwk, header_claims.alg),
{:ok, payload} <-
Joken.verify(jws, build_signer(jwks_storage.jwk, header_claims.alg)),
type <- type_from_url(url),
{:ok, pix} <- Changesets.cast_and_apply(type, payload) do
{:ok, pix}
end
end
defp type_from_url(url) do
url
|> URI.parse()
|> Map.get(:path)
|> Path.split()
|> Enum.member?("cobv")
|> if do
DynamicPixPaymentWithDueDate
else
DynamicImmediatePixPayment
end
end
defp build_signer(jwk, alg) do
%Joken.Signer{
alg: alg,
jwk: jwk,
jws: JOSE.JWS.from_map(%{"alg" => alg})
}
end
defp verify_alg(%{kty: {:jose_jwk_kty_ec, _}}, alg)
when alg in ["ES256", "ES384", "ES512"],
do: :ok
defp verify_alg(%{kty: {:jose_jwk_kty_rsa, _}}, alg)
when alg in ["PS256", "PS384", "PS512", "RS256", "RS384", "RS512"],
do: :ok
defp verify_alg(_jwk, _alg) do
{:error, :invalid_token_signing_algorithm}
end
defp verify_certificate(certificate) do
{:Validity, not_before, not_after} = X509.Certificate.validity(certificate)
not_before_check = DateTime.compare(DateTime.utc_now(), X509.DateTime.to_datetime(not_before))
not_after_check = DateTime.compare(DateTime.utc_now(), X509.DateTime.to_datetime(not_after))
cond do
not_before_check not in [:gt, :eq] -> {:error, :certificate_not_yet_valid}
not_after_check not in [:lt, :eq] -> {:error, :certificate_expired}
true -> :ok
end
end
defp fetch_jwks_storage(client, header_claims, opts) do
case JWS.jwks_storage_by_jws_headers(header_claims) do
nil ->
try_fetching_signers(client, header_claims, opts)
storage_item ->
{:ok, storage_item}
end
end
defp try_fetching_signers(client, header_claims, opts) do
case Tesla.get(client, header_claims.jku) do
{:ok, %{status: status} = env} when is_success(status) ->
process_jwks(env.body, header_claims, opts)
{:ok, _} ->
{:error, :http_status_not_success}
{:error, _} = err ->
err
end
end
defp process_jwks(jwks, header_claims, opts) when is_binary(jwks) do
case Jason.decode(jwks) do
{:ok, jwks} when is_map(jwks) -> process_jwks(jwks, header_claims, opts)
{:error, _} = err -> err
{:ok, _} -> {:error, :invalid_jwks_contents}
end
end
defp process_jwks(jwks, header_claims, opts) when is_map(jwks) do
with {:ok, jwks} <- Changesets.cast_and_apply(JWKS, jwks),
:ok <- JWS.process_keys(jwks.keys, header_claims.jku, opts),
storage_item when not is_nil(storage_item) <-
JWS.jwks_storage_by_jws_headers(header_claims) do
{:ok, storage_item}
else
nil -> {:error, :key_not_found_in_jku}
err -> err
end
end
defp extract_query_params(opts) do
opts
|> Enum.filter(fn {opt, value} -> opt in @valid_query_params and not is_nil(value) end)
|> Enum.map(fn
{:cod_mun, value} -> {:codMun, value}
{:dpp, value} -> {:DPP, value}
opt -> opt
end)
end
end
|
lib/ex_pix_brcode/payments/dynamic_pix_loader.ex
|
defmodule SpadesGame.GameUI do
@moduledoc """
One level on top of Game.
"""
alias SpadesGame.{Card, Game, GameOptions, GameUI, GameUISeat}
@derive Jason.Encoder
defstruct [:game, :game_name, :options, :created_at, :status, :seats, :when_seats_full]
use Accessible
@type t :: %GameUI{
game: Game.t(),
game_name: String.t(),
options: GameOptions.t(),
created_at: DateTime.t(),
status: :staging | :playing | :done,
seats: %{
west: GameUISeat.t(),
north: GameUISeat.t(),
east: GameUISeat.t(),
south: GameUISeat.t()
},
when_seats_full: nil | DateTime.t()
}
@spec new(String.t(), GameOptions.t()) :: GameUI.t()
def new(game_name, %GameOptions{} = options) do
game = Game.new(game_name, options)
%GameUI{
game: game,
game_name: game_name,
options: options,
created_at: DateTime.utc_now(),
status: :staging,
seats: %{
west: GameUISeat.new_blank(),
north: GameUISeat.new_blank(),
east: GameUISeat.new_blank(),
south: GameUISeat.new_blank()
}
}
end
@doc """
censor_hands/1: Return a version of GameUI with all hands hidden.
"""
@spec censor_hands(GameUI.t()) :: GameUI.t()
def censor_hands(gameui) do
gameui
|> put_in([:game, :east, :hand], [])
|> put_in([:game, :north, :hand], [])
|> put_in([:game, :south, :hand], [])
|> put_in([:game, :west, :hand], [])
end
@doc """
bid/3: User bids `bid_amount` tricks.
"""
@spec bid(GameUI.t(), number | :bot, number) :: GameUI.t()
def bid(game_ui, user_id, bid_amount) do
seat = user_id_to_seat(game_ui, user_id)
if seat == nil do
game_ui
else
case Game.bid(game_ui.game, seat, bid_amount) do
{:ok, new_game} ->
%{game_ui | game: new_game}
|> checks
{:error, _msg} ->
game_ui
end
end
end
@doc """
play/3: A player puts a card on the table. (Moves from hand to trick.)
"""
@spec play(GameUI.t(), number | :bot, Card.t()) :: GameUI.t()
def play(game_ui, user_id, card) do
seat = user_id_to_seat(game_ui, user_id)
if seat == nil do
game_ui
else
case Game.play(game_ui.game, seat, card) do
{:ok, new_game} ->
%{game_ui | game: new_game}
|> checks
{:error, _msg} ->
game_ui
end
end
end
@doc """
user_id_to_seat/2: Which seat is this user sitting in?
If :bot, check if the active turn seat belongs to a bot, return that seat if so.
"""
@spec user_id_to_seat(GameUI.t(), number | :bot) :: nil | :west | :east | :north | :south
def user_id_to_seat(%GameUI{game: %Game{turn: turn}} = game_ui, :bot) do
if bot_turn?(game_ui), do: turn, else: nil
end
def user_id_to_seat(game_ui, user_id) do
game_ui.seats
|> Map.new(fn {k, %GameUISeat{} = v} -> {v.sitting, k} end)
|> Map.delete(nil)
|> Map.get(user_id)
end
@doc """
checks/1: Applies checks to GameUI and returns an updated copy.
Generally, we append these "checks" to all outputs.
These are all derived state updates. If something
needs to fire off a timer or something, it will be here.
It's always safe to call this function.
"""
@spec checks(GameUI.t()) :: GameUI.t()
def checks(gameui) do
gameui
|> check_full_seats
|> check_status_advance
|> check_game
end
@doc """
sit/3: User is attempting to sit in a seat.
Let them do it if no one is in the seat, and they are not
in any other seats. Otherwise return the game unchanged.
--> sit(gameui, userid, which_seat)
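For example, a hypothetical user id 42 taking the north seat:

    gameui = GameUI.sit(gameui, 42, "north")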
"""
@spec sit(GameUI.t(), integer, String.t()) :: GameUI.t()
def sit(gameui, userid, "north"), do: do_sit(gameui, userid, :north)
def sit(gameui, userid, "south"), do: do_sit(gameui, userid, :south)
def sit(gameui, userid, "east"), do: do_sit(gameui, userid, :east)
def sit(gameui, userid, "west"), do: do_sit(gameui, userid, :west)
def sit(gameui, _userid, _), do: gameui |> checks
@spec do_sit(GameUI.t(), integer, :north | :south | :east | :west) :: GameUI.t()
defp do_sit(gameui, userid, which) do
if sit_allowed?(gameui, userid, which) do
seat = gameui.seats[which] |> GameUISeat.sit(userid)
seats = gameui.seats |> Map.put(which, seat)
%GameUI{gameui | seats: seats}
|> checks
else
gameui
|> checks
end
end
# Is this user allowed to sit in this seat?
@spec sit_allowed?(GameUI.t(), integer, :north | :south | :east | :west) :: boolean
defp sit_allowed?(gameui, userid, which) do
!already_sitting?(gameui, userid) && seat_empty?(gameui, which)
end
# Is this user sitting in a seat?
@spec already_sitting?(GameUI.t(), integer) :: boolean
defp already_sitting?(gameui, userid) do
gameui.seats
|> Map.values()
|> Enum.map(fn %GameUISeat{} = seat -> seat.sitting end)
|> Enum.member?(userid)
end
# Is this seat empty?
@spec seat_empty?(GameUI.t(), :north | :south | :east | :west) :: boolean
defp seat_empty?(gameui, which), do: gameui.seats[which].sitting == nil
@doc """
leave/2: Userid just left the table. If they were seated, mark
their seat as vacant.
"""
@spec leave(GameUI.t(), integer) :: GameUI.t()
def leave(gameui, userid) do
seats =
for {k, v} <- gameui.seats,
into: %{},
do: {k, if(v.sitting == userid, do: GameUISeat.new_blank(), else: v)}
%{gameui | seats: seats}
|> checks
end
@doc """
check_full_seats/1
When the last person sits down and all of the seats are full, put a timestamp
on ".when_seats_full".
If there is a timestamp set, and someone just stood up, clear the timestamp.
"""
@spec check_full_seats(GameUI.t()) :: GameUI.t()
def check_full_seats(%GameUI{} = gameui) do
cond do
everyone_sitting?(gameui) and gameui.when_seats_full == nil ->
%{gameui | when_seats_full: DateTime.utc_now()}
not everyone_sitting?(gameui) and gameui.when_seats_full != nil ->
%{gameui | when_seats_full: nil}
true ->
gameui
end
end
@doc """
check_game/1:
Run the series of checks on the Game object.
Similar to GameUI's checks(), but running on the embedded
game_ui.game object/level instead.
"""
@spec check_game(GameUI.t()) :: GameUI.t()
def check_game(%GameUI{} = game_ui) do
{:ok, game} = Game.checks(game_ui.game)
%GameUI{game_ui | game: game}
end
@doc """
check_status_advance/1: Move a game's status when appropriate.
:staging -> :playing -> :done
"""
@spec check_status_advance(GameUI.t()) :: GameUI.t()
def check_status_advance(%GameUI{status: :staging} = gameui) do
if everyone_sitting?(gameui) and seat_full_countdown_finished?(gameui) do
%{gameui | status: :playing}
else
gameui
end
end
# This doesn't seem to work
def check_status_advance(%GameUI{status: :playing, game: %Game{winner: winner}} = gameui)
when not is_nil(winner) do
%{gameui | status: :done}
end
def check_status_advance(gameui) do
gameui
end
@doc """
everyone_sitting?/1:
Does each seat have a person sitting in it?
"""
@spec everyone_sitting?(GameUI.t()) :: boolean
def everyone_sitting?(gameui) do
[:north, :west, :south, :east]
|> Enum.reduce(true, fn seat, acc ->
acc and gameui.seats[seat].sitting != nil
end)
end
@doc """
trick_full?/1:
Does the game's current trick have one card for each player?
"""
@spec trick_full?(GameUI.t()) :: boolean
def trick_full?(game_ui) do
Game.trick_full?(game_ui.game)
end
@doc """
seat_full_countdown_finished?/1
Is the "when_seats_full" timestamp at least 10 seconds old?
"""
@spec seat_full_countdown_finished?(GameUI.t()) :: boolean
def seat_full_countdown_finished?(%GameUI{when_seats_full: nil}) do
false
end
def seat_full_countdown_finished?(%GameUI{when_seats_full: when_seats_full}) do
time_elapsed = DateTime.diff(DateTime.utc_now(), when_seats_full, :millisecond)
# 10 seconds
time_elapsed >= 10 * 1000
end
@doc """
rewind_countdown_devtest/1:
If a "when_seats_full" timestamp is set, rewind it to be
10 minutes ago. Also run check_for_trick_winner. Used in
dev and testing for instant trick advance only.
"""
@spec rewind_countdown_devtest(GameUI.t()) :: GameUI.t()
def rewind_countdown_devtest(%GameUI{when_seats_full: when_seats_full} = game_ui) do
if when_seats_full == nil do
game_ui
|> checks
else
ten_mins_in_seconds = 60 * 10
nt = DateTime.add(when_seats_full, -1 * ten_mins_in_seconds, :second)
%GameUI{game_ui | when_seats_full: nt}
|> checks
end
end
@spec rewind_trickfull_devtest(GameUI.t()) :: GameUI.t()
def rewind_trickfull_devtest(game_ui) do
%GameUI{game_ui | game: Game.rewind_trickfull_devtest(game_ui.game)}
|> checks
end
@doc """
invite_bots/1: Invite bots to sit on the remaining seats.
"""
@spec invite_bots(GameUI.t()) :: GameUI.t()
def invite_bots(game_ui) do
game_ui
|> map_seats(fn seat ->
GameUISeat.bot_sit_if_empty(seat)
end)
end
@doc """
bots_leave/1: Bots have left the table (server terminated).
"""
@spec bots_leave(GameUI.t()) :: GameUI.t()
def bots_leave(game_ui) do
game_ui
|> map_seats(fn seat ->
GameUISeat.bot_leave_if_sitting(seat)
end)
end
@doc """
map_seats/2: Apply a 1 arity function to all seats
should probably only be used internally
"""
@spec map_seats(GameUI.t(), (GameUISeat.t() -> GameUISeat.t())) :: GameUI.t()
def map_seats(game_ui, f) do
seats =
game_ui.seats
|> Enum.map(fn {where, seat} -> {where, f.(seat)} end)
|> Enum.into(%{})
%GameUI{game_ui | seats: seats}
|> checks
end
@doc """
bot_turn?/1 : Is it currently a bot's turn?
"""
@spec bot_turn?(GameUI.t()) :: boolean
def bot_turn?(%GameUI{game: %Game{winner: winner}}) when winner != nil, do: false
def bot_turn?(%GameUI{game: %Game{turn: nil}}), do: false
def bot_turn?(%GameUI{game: %Game{turn: turn}, seats: seats}) do
seats
|> Map.get(turn)
|> GameUISeat.is_bot?()
end
end
|
backend/lib/spades_game/ui/game_ui.ex
|
defmodule Services.Search.TokenManager do
@moduledoc """
Manages and refreshes access tokens which have a defined expiry period.
"""
require Logger
defmodule TokenDefinition do
@moduledoc """
Represents a token with an expiry period, along with a method
of re-acquiring the token once it expires
"""
@enforce_keys [:name, :expiry_seconds, :generator]
defstruct [
:name,
:expiry_seconds,
:current_value,
:generator,
:timer
]
@typedoc """
A unique identifier for the token
"""
@type name :: atom
@typedoc """
The number of seconds elapsed before this token expires
"""
@type expiry_seconds :: integer
@typedoc """
The current value of the token
"""
@type current_value :: String.t()
@typedoc """
A function which acquires a new token, to be called
after `expiry_seconds` seconds elapse
"""
@type generator :: (() -> String.t())
@typedoc """
A reference to the timer process which will respond
once this token expires
"""
@type timer :: reference()
@type t :: %__MODULE__{
name: name,
expiry_seconds: expiry_seconds,
current_value: current_value,
generator: generator,
timer: timer
}
end
use GenServer
@doc """
Starts the token manager
"""
def start_link(opts) do
GenServer.start_link(__MODULE__, %{}, opts)
end
@doc """
Defines a new managed token. The token is regenerated after `expiry_seconds` passes
using `generator`.
An initial value is optional. Not providing an `initial` value (or providing `nil`)
means that `generator` will be used to obtain the first value.
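A sketch of a token refreshed hourly (`MyAuth.fetch_token!/0` is an
illustrative generator):

    {:ok, manager} = Services.Search.TokenManager.start_link(name: MyManager)
    token =
      Services.Search.TokenManager.define(manager, :search_api, 3_600, fn ->
        MyAuth.fetch_token!()
      end)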
"""
@spec define(atom | pid, atom, integer, (() -> String.t()), String.t() | nil) :: String.t()
def define(manager, name, expiry_seconds, generator, initial \\ nil) do
GenServer.call(manager, {:define, name, expiry_seconds, generator, initial})
end
@doc """
Defines a temporary managed token with value `token`. The token cannot
be regenerated, and will be deleted automatically after `expiry_seconds`
passes.
"""
@spec define_temporary(atom | pid, atom, integer, String.t()) :: String.t()
def define_temporary(manager, name, expiry_seconds, token) do
GenServer.call(manager, {:define_temporary, name, expiry_seconds, token})
end
@doc """
Obtains the token value defined under a name. Returns `:error` if
no token is defined.
"""
@spec token?(atom | pid, atom) :: String.t() | :error
def token?(manager, name) do
GenServer.call(manager, {:lookup, name})
end
@doc """
Undefines a token
"""
@spec undefine(atom | pid, atom) :: :ok
def undefine(manager, name) do
GenServer.call(manager, {:delete, name})
end
## Handlers
def init(registry) do
{:ok, registry}
end
def handle_call({:define, name, expiry_seconds, generator, initial}, _from, registry) do
definition = define_internal(name, expiry_seconds, generator, initial)
{:reply, definition.current_value, Map.put(registry, name, definition)}
end
def handle_call({:define_temporary, name, expiry_seconds, token}, _from, registry) do
definition = define_internal(name, expiry_seconds, token)
{:reply, definition.current_value, Map.put(registry, name, definition)}
end
def handle_call({:lookup, name}, _from, registry) do
case Map.get(registry, name) do
nil -> {:reply, :error, registry}
definition -> {:reply, definition.current_value, registry}
end
end
def handle_call({:delete, name}, _from, registry) do
case Map.get(registry, name) do
nil -> {:reply, :ok, registry}
_ -> {:reply, :ok, Map.delete(registry, name)}
end
end
def handle_info({:expired, name}, registry) do
Logger.debug(fn -> "Token #{name} expired." end)
case expire(Map.get(registry, name)) do
nil -> {:noreply, Map.delete(registry, name)}
definition -> {:noreply, Map.put(registry, name, definition)}
end
end
defp define_internal(name, expiry_seconds, current_value) do
%TokenDefinition{
name: name,
expiry_seconds: expiry_seconds,
current_value: current_value,
generator: nil,
timer: start_timer(name, expiry_seconds)
}
end
defp define_internal(name, expiry_seconds, generator, nil) do
%TokenDefinition{
name: name,
expiry_seconds: expiry_seconds,
generator: generator,
current_value: generator.(),
timer: start_timer(name, expiry_seconds)
}
end
defp define_internal(name, expiry_seconds, generator, current_value) do
%TokenDefinition{
name: name,
expiry_seconds: expiry_seconds,
current_value: current_value,
generator: generator,
timer: start_timer(name, expiry_seconds)
}
end
defp expire(nil) do
nil
end
defp expire(%TokenDefinition{generator: nil}) do
nil
end
defp expire(
%TokenDefinition{generator: generator, name: name, expiry_seconds: expiry} = definition
) do
%{definition | current_value: generator.(), timer: start_timer(name, expiry)}
end
defp start_timer(name, expiry_seconds) do
# Process.send_after/3 expects milliseconds, so convert the expiry.
Process.send_after(self(), expiry_message(name), :timer.seconds(expiry_seconds))
end
defp expiry_message(name) do
{:expired, name}
end
end
|
apps/services/lib/services/search/token_manager.ex
|
defmodule Day12 do
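  @moduledoc """
  Solves an Advent of Code "Rain Risk"-style puzzle: steer a ship with
  `N`/`S`/`E`/`W`/`L`/`R`/`F` commands, directly in part 1 and via a waypoint
  in part 2, returning the final Manhattan distance.

  A sketch with the example commands from that puzzle:

      Day12.part1(["F10", "N3", "F7", "R90", "F11"])
      #=> 25
  """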
def part1(input) do
ship = {0, 0}
direction = 1
parse(input)
|> Enum.reduce({ship, direction}, &execute_part1/2)
|> distance
end
def part2(input) do
ship = {0, 0}
waypoint = {10, 1}
parse(input)
|> Enum.reduce({ship, waypoint}, &execute_part2/2)
|> distance
end
defp execute_part1({command, amount} = cmd, {location, direction}) do
case command do
:F -> {forward(location, direction, amount), direction}
:L -> {location, turn_right(direction, -amount)}
:R -> {location, turn_right(direction, amount)}
_direction -> {execute_direction(cmd, location), direction}
end
end
defp execute_part2({command, amount} = cmd, {ship, waypoint}) do
case command do
:F ->
offset = vec_sub(waypoint, ship)
ship = vec_add(ship, vec_mul(amount, offset))
waypoint = vec_add(ship, offset)
{ship, waypoint}
:L ->
{ship, rot_around(waypoint, amount, ship)}
:R ->
{ship, rot_around(waypoint, -amount, ship)}
_ ->
{ship, execute_direction(cmd, waypoint)}
end
end
defp execute_direction({command, amount}, location) do
case command do
:N -> forward(location, 0, amount)
:E -> forward(location, 1, amount)
:S -> forward(location, 2, amount)
:W -> forward(location, 3, amount)
end
end
defp forward({x, y}, direction, amount) do
case direction do
0 -> {x, y + amount}
2 -> {x, y - amount}
1 -> {x + amount, y}
3 -> {x - amount, y}
end
end
defp vec_add({x1, y1}, {x2, y2}), do: {x1 + x2, y1 + y2}
defp vec_mul(scalar, {x, y}), do: {scalar * x, scalar * y}
defp vec_sub({x1, y1}, {x2, y2}), do: {x1 - x2, y1 - y2}
defp rot_around(point, amount, around) do
point
|> vec_sub(around)
|> rot_ccw(amount)
|> vec_add(around)
end
defp rot_ccw(point, amount) do
# Normalize the angle to 0..3 quarter turns; Integer.mod/2 handles negative
# angles (right turns arrive here negated) as well as 0 and 360 degrees.
steps = Integer.mod(div(amount, 90), 4)
Enum.reduce(List.duplicate(:turn, steps), point, fn _, point ->
rot90_ccw(point)
end)
end
defp rot90_ccw({x, y}), do: {-y, x}
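# Headings are encoded 0=N, 1=E, 2=S, 3=W (see forward/3); each 90° of a right
# turn advances the heading by one step, modulo 4.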
defp turn_right(direction, amount) do
rem(4 + direction + div(amount, 90), 4)
end
defp distance({{x, y}, _}) do
abs(x) + abs(y)
end
def parse(input) do
Enum.map(input, fn line ->
{command, amount} = String.split_at(line, 1)
{String.to_atom(command), String.to_integer(amount)}
end)
end
end
# File: day12/lib/day12.ex
defmodule FlowAssertions.Define.BodyParts do
import ExUnit.Assertions
alias ExUnit.AssertionError
import FlowAssertions.Define.Defchain
alias FlowAssertions.Messages
@moduledoc """
Functions helpful in the construction of a new assertion.
Mostly, they give you more control over what's shown in a failing test by letting
you set `ExUnit.AssertionError` values like `:left` and `:right`.
All such functions take a string first argument. That's shorthand
for setting the `:message` field.
"""
@doc """
Like `ExUnit.Assertions.flunk/1` but the second argument is used to set `AssertionError` keys.
```
elaborate_flunk("the value is wrong", left: value_to_check)
```
Warning: as far as I know, the structure of `ExUnit.AssertionError` is not
guaranteed to be stable.
See also `elaborate_assert/3`.
"""
def elaborate_flunk(message, opts) do
try do
flunk message
rescue
ex in AssertionError ->
annotated =
Enum.reduce(opts, ex, fn {k, v}, acc -> Map.put(acc, k, v) end)
reraise annotated, __STACKTRACE__
end
end
@doc """
Like `ExUnit.Assertions.assert/2` but the third argument is used to set `AssertionError` keys.
```
elaborate_assert(
left =~ right,
"Regular expression didn't match",
left: left, right: right)
```
Warning: as far as I know, the structure of `ExUnit.AssertionError` is not
guaranteed to be stable.
See also `elaborate_assert_equal/4`.
"""
defchain elaborate_assert(value, message, opts) do
if !value, do: elaborate_flunk(message, opts)
end
@doc """
`elaborate_assert/3`, except the value is expected to be falsy.
"""
defchain elaborate_refute(value, message, opts),
do: elaborate_assert(!value, message, opts)
@doc """
This replicates the diagnostic output from `assert a == b`, except for the
code snippet that's reported.
The user will see a failing test containing:
Assertion with == failed
code: assert_same_map(new, old, ignoring: [:stable])
left: ...
right: ...
... instead of the assertion that actually failed, something like this:
Assertion with == failed
code: assert Map.drop(new, fields_to_ignore) == Map.drop(old, fields_to_ignore)
left: ...
right: ...
"""
defchain elaborate_assert_equal(left, right) do
elaborate_assert(left == right,
Messages.stock_equality,
left: left, right: right,
expr: AssertionError.no_value)
end
# ----------------------------------------------------------------------------
@doc """
Flunk test if it checks structure fields that don't exist.
It doesn't make sense to write an assertion that checks a field that
a structure can't contain. If a user tries, this function will object with a message like:
```
Test error: there is no key `:b` in a `MyApp.Struct`
```
Notes:
* It's safe to call on non-struct values.
* It returns its first argument.
"""
defchain struct_must_have_key!(struct, key) when is_struct(struct) do
elaborate_assert(
Map.has_key?(struct, key),
Messages.required_key_missing(key, struct),
left: struct |> Map.from_struct |> Map.keys)
end
def struct_must_have_key!(x, _), do: x
@doc"""
Same as `struct_must_have_key!/2` but checks multiple keys.
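A minimal sketch (the struct and keys are illustrative):
```
struct_must_have_keys!(%MyApp.Struct{}, [:a, :b])
```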
"""
defchain struct_must_have_keys!(struct, keys) when is_struct(struct) do
for key <- keys, do: struct_must_have_key!(struct, key)
end
def struct_must_have_keys!(x, _), do: x
# ----------------------------------------------------------------------------
@doc ~S"""
Run a function, perhaps generating an assertion error. If so, use the
keyword arguments to replace or update values in the error.
**Replacement:**
adjust_assertion_error(fn ->
MiscA.assert_good_enough(Map.get(kvs, key), expected)
end,
message: "Field `#{inspect key}` has the wrong value",
expr: AssertionError.no_value)
Setting the `expr` field to `AssertionError.no_value` has the handy effect of
making the reporting machinery report the code of the assertion the user called,
rather than the nested assertion that generated the error.
**Update:**
adjust_assertion_error(fn ->
MiscA.assert_good_enough(Map.get(kvs, key), expected)
end,
expr: fn expr -> [expr, "..."] end) # indicate something missing.
See also `adjust_assertion_message/2`
"""
def adjust_assertion_error(f, replacements) do
try do
f.()
rescue
ex in AssertionError ->
Enum.reduce(replacements, ex, fn {key, value}, acc ->
if is_function(value),
do: Map.update!(acc, key, value),
else: Map.put(acc, key, value)
end)
|> reraise(__STACKTRACE__)
end
end
@doc ~S"""
Run a function, perhaps generating an assertion error. If so, call the second function, passing the current assertion message as its argument. The result is installed as the new assertion message.
adjust_assertion_message(
fn -> flunk "message" end,
fn message -> "#{message} and #{message}" end)
See also `adjust_assertion_error/2`.
"""
def adjust_assertion_message(asserter, adjuster) do
try do
asserter.()
rescue
ex in AssertionError ->
Map.put(ex, :message, adjuster.(ex.message))
|> reraise(__STACKTRACE__)
end
end
end
# File: lib/define/body_parts.ex
defmodule Ecto.Adapters.Mnesia.Query do
@moduledoc false
import Ecto.Adapters.Mnesia.Table,
only: [
record_field_index: 2
]
alias Ecto.Adapters.Mnesia
alias Ecto.Query.QueryExpr
require Qlc
defstruct type: nil,
codepath: nil,
sources: nil,
query: nil,
sort: nil,
answers: nil,
new_record: nil
@type t :: %__MODULE__{
codepath: :qlc | :read,
type: :all | :update_all | :delete_all,
sources: Keyword.t(),
query: (params :: list() -> query_handle :: :qlc.query_handle()),
sort: (query_handle :: :qlc.query_handle() -> query_handle :: :qlc.query_handle()),
answers: (query_handle :: :qlc.query_handle(), context :: Keyword.t() -> list(tuple())),
new_record: (tuple(), list() -> tuple())
}
alias Ecto.Query.SelectExpr
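# Dispatch: a single `where: x.id == ^value` takes the cheap :mnesia.read
# codepath; a single equality on an indexed field takes :index_read; every
# other query shape is compiled to QLC.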
@spec from_ecto_query(type :: atom(), ecto_query :: Ecto.Query.t()) :: mnesia_query :: t()
def from_ecto_query(type, ecto_query) do
cond do
is_simple_ecto_where_expr?(ecto_query) and match_simple_where_expr?(ecto_query, :id) ->
do_from_ecto_query(type, ecto_query, :read)
is_simple_ecto_where_expr?(ecto_query) and
match_simple_where_expr?(ecto_query, :non_id_field) and index_exists?(ecto_query) ->
do_from_ecto_query(type, ecto_query, :index_read)
true ->
do_from_ecto_query(type, ecto_query)
end
end
defp is_simple_ecto_where_expr?(%Ecto.Query{
select: %SelectExpr{expr: {:&, [], [0]}},
wheres: [where],
order_bys: []
}) do
match?(
%Ecto.Query.BooleanExpr{
op: :and,
params: nil,
subqueries: []
},
where
)
end
defp is_simple_ecto_where_expr?(_), do: false
defp match_simple_where_expr?(%{wheres: [%{expr: expr}]}, :id) do
match?(
{:==, [], [{{:., [], [{:&, [], [0]}, :id]}, [], []}, {:^, [], [0]}]},
expr
)
end
defp match_simple_where_expr?(%{wheres: [%{expr: expr}]}, :non_id_field) do
match?(
{:==, [], [{{:., [], [{:&, [], [0]}, field]}, [], []}, {:^, [], [0]}]}
when field != :id,
expr
)
end
defp index_exists?(%Ecto.Query{
wheres: [where],
sources: sources
}) do
field = get_field(where.expr)
[{tab, _schema}] = sources(sources)
index_exists?(tab, field)
end
defp get_field({:==, [], [{{:., [], [{:&, [], [0]}, field]}, [], []}, {:^, [], [0]}]}) do
field
end
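# Mnesia reports indexes as record positions: position 1 is the record tag, so
# a 0-based attribute index maps to position `index + 2`.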
defp index_exists?(table, field) when is_atom(field) and is_atom(table) do
attrs = :mnesia.table_info(table, :attributes)
field_pos = Enum.find_index(attrs, &(&1 == field))
index = :mnesia.table_info(table, :index)
(field_pos + 2) in index
end
@spec do_from_ecto_query(type :: atom(), ecto_query :: Ecto.Query.t(), atom()) :: mnesia_query :: t()
defp do_from_ecto_query(
type,
%Ecto.Query{
select: select,
joins: [] = joins,
sources: sources,
wheres: wheres,
updates: [],
order_bys: [],
limit: nil,
offset: nil
},
codepath
)
when codepath in [:read, :index_read] do
sources = sources(sources)
[{table, _schema}] = sources
queryfn = Mnesia.Read.query(select, joins, sources, wheres)
sort = fn queryfn_result ->
fields_in_correct_order = for {{_, _, [_, field]}, _, _} <- select.fields, do: field
attributes = :mnesia.table_info(table, :attributes)
queryfn_result
|> Enum.map(&Enum.zip(attributes, &1))
|> Enum.map(fn kv_list ->
Enum.sort_by(kv_list, fn {k, _} ->
Enum.find_index(fields_in_correct_order, &(&1 == k))
end)
end)
|> Enum.map(fn kv_list -> Enum.map(kv_list, fn {_, v} -> v end) end)
end
%Mnesia.Query{
type: type,
query: queryfn,
sort: sort,
sources: sources,
codepath: :read
}
end
@spec do_from_ecto_query(type :: atom(), ecto_query :: Ecto.Query.t()) :: mnesia_query :: t()
defp do_from_ecto_query(
type,
%Ecto.Query{
select: select,
joins: joins,
sources: sources,
wheres: wheres,
updates: updates,
order_bys: order_bys,
limit: limit,
offset: offset
}
) do
sources = sources(sources)
query = Mnesia.Qlc.query(select, joins, sources).(wheres)
sort = Mnesia.Qlc.sort(order_bys, select, sources)
answers = Mnesia.Qlc.answers(limit, offset)
new_record = new_record(Enum.at(sources, 0), updates)
%Mnesia.Query{
type: type,
sources: sources,
query: query,
sort: sort,
answers: answers,
new_record: new_record,
codepath: :qlc
}
end
defp sources(sources) do
sources
|> Tuple.to_list()
|> Enum.map(fn {table_name, schema, _} ->
{String.to_atom(table_name), schema}
end)
end
defp new_record({table_name, schema}, updates) do
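# Builds a closure that applies the `set: [...]` bindings of an update query to
# a raw record (a list of field values), prepends the schema module and turns
# the result back into a tuple.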
fn record, params ->
case updates do
[%QueryExpr{expr: [set: replacements]}] ->
replacements
|> Enum.reduce(record, fn {field, {:^, [], [param_index]}}, record ->
record_field_index = record_field_index(field, table_name)
value = Enum.at(params, param_index)
List.replace_at(record, record_field_index, value)
end)
|> List.insert_at(0, schema)
|> List.to_tuple()
_ ->
record
end
end
end
end
# File: lib/ecto/adapters/mnesia/query.ex
defmodule Matrex do
@moduledoc """
Performs fast operations on matrices using native C code and CBLAS library.
## Access behaviour
Access behaviour is partly implemented for Matrex, so you can do:
```elixir
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> m[2][3]
7.0
```
Or even:
```elixir
iex> m[1..2]
#Matrex[2×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
└ ┘
```
There are also several shortcuts for getting dimensions of matrix:
```elixir
iex> m[:rows]
3
iex> m[:size]
{3, 3}
```
calculating maximum value of the whole matrix:
```elixir
iex> m[:max]
9.0
```
or just one of its rows:
```elixir
iex> m[2][:max]
7.0
```
calculating one-based index of the maximum element for the whole matrix:
```elixir
iex> m[:argmax]
8
```
and a row:
```elixir
iex> m[2][:argmax]
3
```
## Inspect protocol
Matrex implements `Inspect` and looks nice in your console:

## Math operators overloading
`Matrex.Operators` module redefines `Kernel` math operators (+, -, *, /, <|>) and
defines some convenience functions, so you can write calculation code in a more natural way.
It should be used with great caution. We suggest using it only inside specific functions
and only for increased readability, because `Matrex` module functions, especially
ones that do two or more operations in one call, are 2-3 times faster.
### Example
```elixir
def lr_cost_fun_ops(%Matrex{} = theta, {%Matrex{} = x, %Matrex{} = y, lambda} = _params)
when is_number(lambda) do
# Turn off original operators
import Kernel, except: [-: 1, +: 2, -: 2, *: 2, /: 2, <|>: 2]
import Matrex.Operators
import Matrex
m = y[:rows]
h = sigmoid(x * theta)
l = ones(size(theta)) |> set(1, 1, 0.0)
j = (-t(y) * log(h) - t(1 - y) * log(1 - h) + lambda / 2 * t(l) * pow2(theta)) / m
grad = (t(x) * (h - y) + (theta <|> l) * lambda) / m
{scalar(j), grad}
end
```
The same function, coded with module function calls (2.5 times faster):
```elixir
def lr_cost_fun(%Matrex{} = theta, {%Matrex{} = x, %Matrex{} = y, lambda} = _params)
when is_number(lambda) do
m = y[:rows]
h = Matrex.dot_and_apply(x, theta, :sigmoid)
l = Matrex.ones(theta[:rows], theta[:cols]) |> Matrex.set(1, 1, 0)
regularization =
Matrex.dot_tn(l, Matrex.square(theta))
|> Matrex.scalar()
|> Kernel.*(lambda / (2 * m))
j =
y
|> Matrex.dot_tn(Matrex.apply(h, :log), -1)
|> Matrex.subtract(
Matrex.dot_tn(
Matrex.subtract(1, y),
Matrex.apply(Matrex.subtract(1, h), :log)
)
)
|> Matrex.scalar()
|> (fn
:nan -> :nan
x -> x / m + regularization
end).()
grad =
x
|> Matrex.dot_tn(Matrex.subtract(h, y))
|> Matrex.add(Matrex.multiply(theta, l), 1.0, lambda)
|> Matrex.divide(m)
{j, grad}
end
```
## Enumerable protocol
Matrex implements `Enumerable`, so, all kinds of `Enum` functions are applicable:
```elixir
iex> Enum.member?(m, 2.0)
true
iex> Enum.count(m)
9
iex> Enum.sum(m)
45
```
For functions that exist both in `Enum` and in `Matrex`, prefer the `Matrex`
version, because it is usually much, much faster. For example, on a 1 000 × 1 000 matrix, `Matrex.sum/1`
and `Matrex.to_list/1` are 438 and 41 times faster, respectively, than their `Enum` counterparts.
## Saving and loading matrix
You can save/load matrix with native binary file format (extra fast)
and CSV (slow, especially on large matrices).
Matrex CSV format is compatible with GNU Octave CSV output,
so you can use it to exchange data between two systems.
### Example
```elixir
iex> Matrex.random(5) |> Matrex.save("rand.mtx")
:ok
iex> Matrex.load("rand.mtx")
#Matrex[5×5]
┌ ┐
│ 0.05624 0.78819 0.29995 0.25654 0.94082 │
│ 0.50225 0.22923 0.31941 0.3329 0.78058 │
│ 0.81769 0.66448 0.97414 0.08146 0.21654 │
│ 0.33411 0.59648 0.24786 0.27596 0.09082 │
│ 0.18673 0.18699 0.79753 0.08101 0.47516 │
└ ┘
iex> Matrex.magic(5) |> Matrex.divide(Matrex.eye(5)) |> Matrex.save("nan.csv")
:ok
iex> Matrex.load("nan.csv")
#Matrex[5×5]
┌ ┐
│ 16.0 ∞ ∞ ∞ ∞ │
│ ∞ 4.0 ∞ ∞ ∞ │
│ ∞ ∞ 12.0 ∞ ∞ │
│ ∞ ∞ ∞ 25.0 ∞ │
│ ∞ ∞ ∞ ∞ 8.0 │
└ ┘
```
## NaN and Infinity
Float special values, like `:nan` and `:inf` live well inside matrices,
can be loaded from and saved to files.
But when read into Elixir they are converted to `:nan`, `:inf` and `:neg_inf` atoms,
because BEAM does not accept special values as valid floats.
```elixir
iex> m = Matrex.eye(3)
#Matrex[3×3]
┌ ┐
│ 1.0 0.0 0.0 │
│ 0.0 1.0 0.0 │
│ 0.0 0.0 1.0 │
└ ┘
iex> n = Matrex.divide(m, Matrex.zeros(3))
#Matrex[3×3]
┌ ┐
│ ∞ NaN NaN │
│ NaN ∞ NaN │
│ NaN NaN ∞ │
└ ┘
iex> n[1][1]
:inf
iex> n[1][2]
:nan
```
"""
alias Matrex.NIFs
import Matrex.Guards
@enforce_keys [:data]
defstruct [:data]
@type element :: number | :nan | :inf | :neg_inf
@type index :: pos_integer
@type matrex :: %Matrex{data: binary}
@type t :: matrex
# Size of matrix element (float) in bytes
@element_size 4
# Float special values in binary form
@not_a_number <<0, 0, 192, 255>>
@positive_infinity <<0, 0, 128, 127>>
@negative_infinity <<0, 0, 128, 255>>
@compile {:inline,
add: 2,
argmax: 1,
at: 3,
binary_to_float: 1,
column_to_list: 2,
contains?: 2,
divide: 2,
dot: 2,
dot_and_add: 3,
dot_nt: 2,
dot_tn: 2,
forward_substitute: 2,
cholesky: 1,
eye: 1,
diagonal: 1,
element_to_string: 1,
fill: 3,
fill: 2,
first: 1,
fetch: 2,
float_to_binary: 1,
max: 1,
multiply: 2,
ones: 2,
ones: 1,
parse_float: 1,
random: 2,
random: 1,
reshape: 3,
row_to_list: 2,
row: 2,
set: 4,
size: 1,
square: 1,
subtract: 2,
subtract_inverse: 2,
sum: 1,
to_list: 1,
to_list_of_lists: 1,
to_row: 1,
to_column: 1,
transpose: 1,
update: 4,
zeros: 2,
zeros: 1}
@behaviour Access
@impl Access
def fetch(matrex, key)
# Horizontal vector
def fetch(matrex_data(1, _, _) = matrex, key)
when is_integer(key) and key > 0,
do: {:ok, at(matrex, 1, key)}
# Vertical vector
def fetch(matrex_data(_, 1, _) = matrex, key)
when is_integer(key) and key > 0,
do: {:ok, at(matrex, key, 1)}
# Return a row
def fetch(matrex, key)
when is_integer(key) and key > 0,
do: {:ok, row(matrex, key)}
# Slice on horizontal vector
def fetch(matrex_data(1, columns, data), a..b)
when b > a and a > 0 and b <= columns do
data = binary_part(data, (a - 1) * @element_size, (b - a + 1) * @element_size)
{:ok, matrex_data(1, b - a + 1, data)}
end
def fetch(matrex_data(rows, columns, data), a..b)
when b > a and a > 0 and b <= rows do
data =
binary_part(data, (a - 1) * columns * @element_size, (b - a + 1) * columns * @element_size)
{:ok, matrex_data(b - a + 1, columns, data)}
end
def fetch(matrex_data(rows, _, _), :rows), do: {:ok, rows}
def fetch(matrex_data(_, cols, _), :cols), do: {:ok, cols}
def fetch(matrex_data(_, cols, _), :columns), do: {:ok, cols}
def fetch(matrex_data(rows, cols, _), :size), do: {:ok, {rows, cols}}
def fetch(matrex, :sum), do: {:ok, sum(matrex)}
def fetch(matrex, :max), do: {:ok, max(matrex)}
def fetch(matrex, :min), do: {:ok, min(matrex)}
def fetch(matrex, :argmax), do: {:ok, argmax(matrex)}
def get(%Matrex{} = matrex, key, default) do
case fetch(matrex, key) do
{:ok, value} -> value
:error -> default
end
end
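# Popping an integer key removes that row: returns the row as a 1×n matrix
# together with the remaining matrix. Any other key leaves the matrix intact.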
@impl Access
def pop(matrex_data(rows, columns, body), row)
when is_integer(row) and row >= 1 and row <= rows do
get =
matrex_data(
1,
columns,
binary_part(body, (row - 1) * columns * @element_size, columns * @element_size)
)
update =
matrex_data(
rows - 1,
columns,
binary_part(body, 0, (row - 1) * columns * @element_size) <>
binary_part(body, row * columns * @element_size, (rows - row) * columns * @element_size)
)
{get, update}
end
def pop(%Matrex{} = matrex, _), do: {nil, matrex}
# To silence warnings
@impl Access
def get_and_update(%Matrex{}, _row, _fun), do: raise("not implemented")
defimpl Inspect do
@doc false
def inspect(%Matrex{} = matrex, opts) do
columns =
case opts.width do
:infinity -> 80
width -> width
end
Matrex.Inspect.do_inspect(matrex, columns, 21)
end
end
defimpl Enumerable do
# Matrix element size in bytes
@element_size 4
import Matrex.Guards
@doc false
def count(matrex_data(rows, cols, _data)), do: {:ok, rows * cols}
@doc false
def member?(%Matrex{} = matrex, element), do: {:ok, Matrex.contains?(matrex, element)}
@doc false
def slice(matrex_data(rows, cols, body)) do
{:ok, rows * cols,
fn start, length ->
Matrex.binary_to_list(binary_part(body, start * @element_size, length * @element_size))
end}
end
@doc false
def reduce(matrex_data(_rows, _cols, body), acc, fun) do
reduce_each(body, acc, fun)
end
defp reduce_each(_, {:halt, acc}, _fun), do: {:halted, acc}
defp reduce_each(matrix, {:suspend, acc}, fun),
do: {:suspended, acc, &reduce_each(matrix, &1, fun)}
defp reduce_each(<<elem::binary-@element_size, rest::binary>>, {:cont, acc}, fun),
do: reduce_each(rest, fun.(Matrex.binary_to_float(elem), acc), fun)
defp reduce_each(<<>>, {:cont, acc}, _fun), do: {:done, acc}
end
@doc """
Adds scalar to matrix.
See `Matrex.add/4` for details.
"""
@spec add(matrex, number) :: matrex
@spec add(number, matrex) :: matrex
def add(%Matrex{data: matrix} = _a, b) when is_number(b),
do: %Matrex{data: NIFs.add_scalar(matrix, b)}
def add(a, %Matrex{data: matrix} = _b) when is_number(a),
do: %Matrex{data: NIFs.add_scalar(matrix, a)}
@doc """
Adds two matrices element-wise. NIF.
Can optionally scale any of the two matrices.
C = αA + βB
Raises `ErlangError` if matrices' sizes do not match.
## Examples
iex> Matrex.add(Matrex.new([[1,2,3],[4,5,6]]), Matrex.new([[7,8,9],[10,11,12]]))
#Matrex[2×3]
┌ ┐
│ 8.0 10.0 12.0 │
│ 14.0 16.0 18.0 │
└ ┘
Adding with scalar:
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.add(m, 1)
#Matrex[3×3]
┌ ┐
│ 9.0 2.0 7.0 │
│ 4.0 6.0 8.0 │
│ 5.0 10.0 3.0 │
└ ┘
With scaling each matrix:
iex> Matrex.add(Matrex.new("1 2 3; 4 5 6"), Matrex.new("3 2 1; 6 5 4"), 2.0, 3.0)
#Matrex[2×3]
┌ ┐
│ 11.0 10.0 9.0 │
│ 26.0 25.0 24.0 │
└ ┘
"""
@spec add(matrex, matrex, number, number) :: matrex
def add(
%Matrex{
data:
<<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32,
_data1::binary>> = first
},
%Matrex{
data:
<<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32,
_data2::binary>> = second
},
alpha \\ 1.0,
beta \\ 1.0
)
when is_number(alpha) and is_number(beta),
do: %Matrex{data: NIFs.add(first, second, alpha, beta)}
@doc """
Applies given function to each element of the matrix and returns the matrex of results. NIF.
If second argument is an atom, then applies C language math function.
## Example
iex> Matrex.magic(5) |> Matrex.apply(:cos)
#Matrex[5×5]
┌ ┐
│-0.95766-0.53283 0.28366 0.7539 0.13674 │
│-0.99996-0.65364 0.96017 0.90745 0.40808 │
│-0.98999-0.83907 0.84385 0.9887-0.54773 │
│-0.91113 0.00443 0.66032 0.9912-0.41615 │
│-0.75969-0.27516 0.42418 0.5403 -0.1455 │
└ ┘
The following math functions from C <math.h> are supported, and also a sigmoid function:
```elixir
:exp, :exp2, :sigmoid, :expm1, :log, :log2, :sqrt, :cbrt, :ceil, :floor, :truncate, :round,
:abs, :sin, :cos, :tan, :asin, :acos, :atan, :sinh, :cosh, :tanh, :asinh, :acosh, :atanh,
:erf, :erfc, :tgamma, :lgamma
```
If second argument is a function that takes one argument,
then this function receives the element of the matrix.
## Example
iex> Matrex.magic(5) |> Matrex.apply(&:math.cos/1)
#Matrex[5×5]
┌ ┐
│-0.95766-0.53283 0.28366 0.7539 0.13674 │
│-0.99996-0.65364 0.96017 0.90745 0.40808 │
│-0.98999-0.83907 0.84385 0.9887-0.54773 │
│-0.91113 0.00443 0.66032 0.9912-0.41615 │
│-0.75969-0.27516 0.42418 0.5403 -0.1455 │
└ ┘
If second argument is a function that takes two arguments,
then this function receives the element of the matrix and its one-based index.
## Example
iex> Matrex.ones(5) |> Matrex.apply(fn val, index -> val + index end)
#Matrex[5×5]
┌ ┐
│ 2.0 3.0 4.0 5.0 6.0 │
│ 7.0 8.0 9.0 10.0 11.0 │
│ 12.0 13.0 14.0 15.0 16.0 │
│ 17.0 18.0 19.0 20.0 21.0 │
│ 22.0 23.0 24.0 25.0 26.0 │
└ ┘
If second argument is a function that takes three arguments,
then this function receives the element of the matrix, its one-based row index
and its one-based column index.
## Example
iex> Matrex.ones(5) |> Matrex.apply(fn val, row, col -> val + row + col end)
#Matrex[5×5]
┌ ┐
│ 3.0 4.0 5.0 6.0 7.0 │
│ 4.0 5.0 6.0 7.0 8.0 │
│ 5.0 6.0 7.0 8.0 9.0 │
│ 6.0 7.0 8.0 9.0 10.0 │
│ 7.0 8.0 9.0 10.0 11.0 │
└ ┘
"""
@math_functions [
:exp,
:exp2,
:sigmoid,
:expm1,
:log,
:log2,
:sqrt,
:cbrt,
:ceil,
:floor,
:truncate,
:round,
:abs,
:sin,
:cos,
:tan,
:asin,
:acos,
:atan,
:sinh,
:cosh,
:tanh,
:asinh,
:acosh,
:atanh,
:erf,
:erfc,
:tgamma,
:lgamma
]
@spec apply(
matrex,
atom
| (element -> element)
| (element, index -> element)
| (element, index, index -> element)
) :: matrex
def apply(%Matrex{data: data} = _matrix, function_atom)
when function_atom in @math_functions do
# A parallel variant (NIFs.apply_parallel_math/2) exists for large matrices,
# but is currently disabled; the sequential NIF is always used.
%Matrex{data: NIFs.apply_math(data, function_atom)}
end
def apply(
%Matrex{
data:
<<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32,
data::binary>>
},
function
)
when is_function(function, 1) do
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
%Matrex{data: apply_on_matrix(data, function, initial)}
end
def apply(matrex_data(rows, columns, data), function)
when is_function(function, 2) do
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
size = rows * columns
%Matrex{data: apply_on_matrix(data, function, 1, size, initial)}
end
def apply(matrex_data(rows, columns, data), function)
when is_function(function, 3) do
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
%Matrex{data: apply_on_matrix(data, function, 1, 1, columns, initial)}
end
defp apply_on_matrix(<<>>, _, accumulator), do: accumulator
defp apply_on_matrix(<<value::float-little-32, rest::binary>>, function, accumulator) do
new_value = function.(value)
apply_on_matrix(rest, function, <<accumulator::binary, new_value::float-little-32>>)
end
defp apply_on_matrix(<<>>, _, _, _, accumulator), do: accumulator
defp apply_on_matrix(
<<value::float-little-32, rest::binary>>,
function,
index,
size,
accumulator
) do
new_value = function.(value, index)
apply_on_matrix(
rest,
function,
index + 1,
size,
<<accumulator::binary, new_value::float-little-32>>
)
end
defp apply_on_matrix(<<>>, _, _, _, _, accumulator), do: accumulator
defp apply_on_matrix(
<<value::float-little-32, rest::binary>>,
function,
row_index,
column_index,
columns,
accumulator
) do
new_value = function.(value, row_index, column_index)
new_accumulator = <<accumulator::binary, new_value::float-little-32>>
case column_index < columns do
true ->
apply_on_matrix(rest, function, row_index, column_index + 1, columns, new_accumulator)
false ->
apply_on_matrix(rest, function, row_index + 1, 1, columns, new_accumulator)
end
end
@doc """
Applies function to elements of two matrices and returns matrix of function results.
Matrices must be of the same size.
## Example
iex(11)> Matrex.apply(Matrex.random(5), Matrex.random(5), fn x1, x2 -> min(x1, x2) end)
#Matrex[5×5]
┌ ┐
│ 0.02025 0.15055 0.69177 0.08159 0.07237 │
│ 0.03252 0.14805 0.03627 0.1733 0.58721 │
│ 0.10865 0.49192 0.12166 0.0573 0.66522 │
│ 0.13642 0.23838 0.14403 0.57151 0.12359 │
│ 0.12877 0.12745 0.10933 0.27281 0.35957 │
└ ┘
"""
@spec apply(matrex, matrex, (element, element -> element)) :: matrex
def apply(matrex_data(rows, columns, data1), matrex_data(rows, columns, data2), function)
when is_function(function, 2) do
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
%Matrex{data: apply_on_matrices(data1, data2, function, initial)}
end
defp apply_on_matrices(<<>>, <<>>, _, accumulator), do: accumulator
defp apply_on_matrices(
<<first_value::float-little-32, first_rest::binary>>,
<<second_value::float-little-32, second_rest::binary>>,
function,
accumulator
)
when is_function(function, 2) do
new_value = function.(first_value, second_value)
new_accumulator = <<accumulator::binary, new_value::float-little-32>>
apply_on_matrices(first_rest, second_rest, function, new_accumulator)
end
@doc """
Returns one-based index of the biggest element. NIF.
There is also `matrex[:argmax]` shortcut for this function.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.argmax(m)
8
"""
@spec argmax(matrex) :: index
def argmax(%Matrex{data: data}), do: NIFs.argmax(data) + 1
@doc """
Get element of a matrix at given one-based (row, column) position.
Negative or out of bound indices will raise an exception.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.at(m, 3, 2)
9.0
You can use `Access` behaviour square brackets for the same purpose,
but it will be slower:
iex> m[3][2]
9.0
"""
@spec at(matrex, index, index) :: element
def at(matrex_data(rows, columns, data), row, col)
when is_integer(row) and is_integer(col) do
if row < 1 or row > rows, do: raise(ArgumentError, "row position out of range: #{row}")
if col < 1 or col > columns, do: raise(ArgumentError, "column position out of range: #{col}")
data
|> binary_part(((row - 1) * columns + (col - 1)) * @element_size, @element_size)
|> binary_to_float()
end
@doc false
@spec binary_to_float(<<_::32>>) :: element | :nan | :inf | :neg_inf
def binary_to_float(@not_a_number), do: :nan
def binary_to_float(@positive_infinity), do: :inf
def binary_to_float(@negative_infinity), do: :neg_inf
def binary_to_float(<<val::float-little-32>>), do: val
@doc false
@spec binary_to_list(<<_::_*32>>) :: [element]
def binary_to_list(<<elem::binary-@element_size, rest::binary>>),
do: [binary_to_float(elem) | binary_to_list(rest)]
def binary_to_list(<<>>), do: []
@doc """
Get column of matrix as matrix (vector) in matrex form. One-based.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.column(m, 2)
#Matrex[3×1]
┌ ┐
│ 1.0 │
│ 5.0 │
│ 9.0 │
└ ┘
"""
@spec column(matrex, index) :: matrex
def column(matrex_data(rows, columns, data), col)
when is_integer(col) and col > 0 and col <= columns do
column = <<rows::unsigned-integer-little-32, 1::unsigned-integer-little-32>>
data =
Enum.map(0..(rows - 1), fn row ->
binary_part(data, (row * columns + (col - 1)) * @element_size, @element_size)
end)
%Matrex{data: IO.iodata_to_binary([column | data])}
end
@doc """
Get column of matrix as list of floats. One-based, NIF.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.column_to_list(m, 3)
[6.0, 7.0, 2.0]
"""
@spec column_to_list(matrex, index) :: [element]
def column_to_list(%Matrex{data: matrix}, column) when is_integer(column) and column > 0,
do: NIFs.column_to_list(matrix, column - 1)
@doc """
Concatenate list of matrices along columns.
The number of rows must be equal.
## Example
iex> Matrex.concat([Matrex.fill(2, 0), Matrex.fill(2, 1), Matrex.fill(2, 2)])
#Matrex[2×6]
┌ ┐
│ 0.0 0.0 1.0 1.0 2.0 2.0 │
│ 0.0 0.0 1.0 1.0 2.0 2.0 │
└ ┘
"""
@spec concat([matrex]) :: matrex
def concat([%Matrex{} | _] = list_of_ma), do: Enum.reduce(list_of_ma, &Matrex.concat(&2, &1))
@doc """
Concatenate two matrices along rows or columns. NIF.
The number of rows or columns must be equal.
## Examples
iex> m1 = Matrex.new([[1, 2, 3], [4, 5, 6]])
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> m2 = Matrex.new([[7, 8, 9], [10, 11, 12]])
#Matrex[2×3]
┌ ┐
│ 7.0 8.0 9.0 │
│ 10.0 11.0 12.0 │
└ ┘
iex> Matrex.concat(m1, m2)
#Matrex[2×6]
┌ ┐
│ 1.0 2.0 3.0 7.0 8.0 9.0 │
│ 4.0 5.0 6.0 10.0 11.0 12.0 │
└ ┘
iex> Matrex.concat(m1, m2, :rows)
#Matrex[4×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
│ 7.0 8.0 9.0 │
│ 10.0 11.0 12.0 │
└ ┘
"""
@spec concat(matrex, matrex, :columns | :rows) :: matrex
def concat(matrex1, matrex2, type \\ :columns)
def concat(
%Matrex{
data:
<<
rows1::unsigned-integer-little-32,
_rest1::binary
>> = first
},
%Matrex{
data:
<<
rows2::unsigned-integer-little-32,
_rest2::binary
>> = second
},
:columns
)
when rows1 == rows2,
do: %Matrex{data: Matrex.NIFs.concat_columns(first, second)}
def concat(matrex_data(rows1, columns, data1), matrex_data(rows2, columns, data2), :rows) do
matrex_data(rows1 + rows2, columns, data1 <> data2)
end
def concat(matrex_data(rows1, columns1, _data1), matrex_data(rows2, columns2, _data2), type) do
raise(
ArgumentError,
"Cannot concat: #{rows1}×#{columns1} does not fit with #{rows2}×#{columns2} along #{type}."
)
end
@doc """
Checks if given element exists in the matrix.
## Example
iex> m = Matrex.new("1 NaN 3; Inf 10 23")
#Matrex[2×3]
┌ ┐
│ 1.0 NaN 3.0 │
│ ∞ 10.0 23.0 │
└ ┘
iex> Matrex.contains?(m, 1.0)
true
iex> Matrex.contains?(m, :nan)
true
iex> Matrex.contains?(m, 9)
false
"""
@spec contains?(matrex, element) :: boolean
def contains?(%Matrex{} = matrex, value), do: find(matrex, value) != nil
@doc """
Divides two matrices element-wise, or a matrix by a scalar, or a scalar by a matrix. NIF.
Raises `ErlangError` if matrices' sizes do not match.
## Examples
iex> Matrex.new([[10, 20, 25], [8, 9, 4]])
...> |> Matrex.divide(Matrex.new([[5, 10, 5], [4, 3, 4]]))
#Matrex[2×3]
┌ ┐
│ 2.0 2.0 5.0 │
│ 2.0 3.0 1.0 │
└ ┘
iex> Matrex.new([[10, 20, 25], [8, 9, 4]])
...> |> Matrex.divide(2)
#Matrex[2×3]
┌ ┐
│ 5.0 10.0 12.5 │
│ 4.0 4.5 2.0 │
└ ┘
iex> Matrex.divide(100, Matrex.new([[10, 20, 25], [8, 16, 4]]))
#Matrex[2×3]
┌ ┐
│ 10.0 5.0 4.0 │
│ 12.5 6.25 25.0 │
└ ┘
"""
@spec divide(matrex, matrex) :: matrex
@spec divide(matrex, number) :: matrex
@spec divide(number, matrex) :: matrex
def divide(%Matrex{data: dividend} = _dividend, %Matrex{data: divisor} = _divisor),
do: %Matrex{data: NIFs.divide(dividend, divisor)}
def divide(%Matrex{data: matrix}, scalar) when is_number(scalar),
do: %Matrex{data: NIFs.divide_by_scalar(matrix, scalar)}
def divide(scalar, %Matrex{data: matrix}) when is_number(scalar),
do: %Matrex{data: NIFs.divide_scalar(scalar, matrix)}
@doc """
Matrix multiplication. NIF, via `cblas_sgemm()`.
Number of columns of the first matrix must be equal to the number of rows of the second matrix.
Raises `ErlangError` if matrices' sizes do not match.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.dot(Matrex.new([[1, 2], [3, 4], [5, 6]]))
#Matrex[2×2]
┌ ┐
│ 22.0 28.0 │
│ 49.0 64.0 │
└ ┘
"""
@spec dot(matrex, matrex) :: matrex
def dot(
matrex_data(_rows1, columns1, _data1, first),
matrex_data(rows2, _columns2, _data2, second)
)
when columns1 == rows2,
do: %Matrex{data: NIFs.dot(first, second)}
@doc """
Matrix inner product of two "vector" matrices (i.e. rows == 1 and columns >= 1),
via `NIFs.dot_nt/2`. Both vectors must have the same number of columns, and the
result is a 1×1 matrix. For example, the inner product of [[1, 2, 3]] and
[[4, 5, 6]] is a 1×1 matrix holding 1*4 + 2*5 + 3*6 = 32.0.
Raises `ErlangError` if the sizes do not match.
"""
@spec inner_dot(matrex, matrex) :: matrex
def inner_dot(
vector_data(columns1, _data1, first),
vector_data(columns2, _data2, second)
)
when columns1 == columns2,
do: %Matrex{data: NIFs.dot_nt(first, second)}
@doc """
Matrix multiplication with addition of third matrix. NIF, via `cblas_sgemm()`.
Raises `ErlangError` if matrices' sizes do not match.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.dot_and_add(Matrex.new([[1, 2], [3, 4], [5, 6]]), Matrex.new([[1, 2], [3, 4]]))
#Matrex[2×2]
┌ ┐
│ 23.0 30.0 │
│ 52.0 68.0 │
└ ┘
"""
@spec dot_and_add(matrex, matrex, matrex) :: matrex
def dot_and_add(
matrex_data(_rows1, columns1, _data1, first),
matrex_data(rows2, _columns2, _data2, second),
%Matrex{data: third}
)
when columns1 == rows2,
do: %Matrex{data: NIFs.dot_and_add(first, second, third)}
@doc """
Computes dot product of two matrices, then applies math function to each element
of the resulting matrix.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.dot_and_apply(Matrex.new([[1, 2], [3, 4], [5, 6]]), :sqrt)
#Matrex[2×2]
┌ ┐
│ 4.69042 5.2915 │
│ 7.0 8.0 │
└ ┘
"""
@spec dot_and_apply(matrex, matrex, atom) :: matrex
def dot_and_apply(
matrex_data(_rows1, columns1, _data1, first),
matrex_data(rows2, _columns2, _data2, second),
function
)
when columns1 == rows2 and function in @math_functions,
do: %Matrex{data: NIFs.dot_and_apply(first, second, function)}
@doc """
Matrix multiplication where the second matrix needs to be transposed. NIF, via `cblas_sgemm()`.
Raises `ErlangError` if matrices' sizes do not match.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.dot_nt(Matrex.new([[1, 3, 5], [2, 4, 6]]))
#Matrex[2×2]
┌ ┐
│ 22.0 28.0 │
│ 49.0 64.0 │
└ ┘
"""
@spec dot_nt(matrex, matrex) :: matrex
def dot_nt(
matrex_data(_rows1, columns1, _data1, first),
matrex_data(_rows2, columns2, _data2, second)
)
when columns1 == columns2,
do: %Matrex{data: NIFs.dot_nt(first, second)}
@doc """
Matrix dot multiplication where the first matrix needs to be transposed. NIF, via `cblas_sgemm()`.
The result is multiplied by scalar `alpha`.
Raises `ErlangError` if matrices' sizes do not match.
## Example
iex> Matrex.new([[1, 4], [2, 5], [3, 6]]) |>
...> Matrex.dot_tn(Matrex.new([[1, 2], [3, 4], [5, 6]]))
#Matrex[2×2]
┌ ┐
│ 22.0 28.0 │
│ 49.0 64.0 │
└ ┘
"""
@spec dot_tn(matrex, matrex, number) :: matrex
def dot_tn(
matrex_data(rows1, _columns1, _data1, first),
matrex_data(rows2, _columns2, _data2, second),
alpha \\ 1.0
)
when rows1 == rows2 and is_number(alpha),
do: %Matrex{data: NIFs.dot_tn(first, second, alpha)}
@doc """
Cholesky decomposition of a matrix. NIF, via a naive implementation.
The matrix must be symmetric and positive definite.
Raises `ErlangError` if the matrix is not square.
## Example
iex> Matrex.new([[3, 4, 3], [4, 8, 6], [3, 6, 9]]) |>
...> Matrex.cholesky()
#Matrex[3×3]
┌ ┐
│ 1.73205 0.0 0.0 │
│ 2.3094 1.63299 0.0 │
│ 1.73205 1.22474 2.12132 │
└ ┘
"""
@spec cholesky(matrex) :: matrex
def cholesky(matrex_data(rows1, columns1, _data1, first))
when rows1 == columns1,
do: %Matrex{data: NIFs.cholesky(first)}
@doc """
Matrix forward substitution. NIF, via naive C implementation.
The first matrix must be square while the
number of columns of the first matrix must
equal the number of rows of the second.
Raises `ErlangError` if matrices' sizes do not match.
## Example
iex> Matrex.forward_substitute(
...> Matrex.new([[3, 4], [4, 8]]) |> Matrex.cholesky(),
...> Matrex.new([[1],[2]]))
#Matrex[2×1]
┌ ┐
│ 0.57735 │
│ 0.40825 │
└ ┘
"""
@spec forward_substitute(matrex, matrex) :: matrex
def forward_substitute(
matrex_data(rows1, columns1, _data1, first),
matrex_data(rows2, columns2, _data2, second)
)
when rows1 == columns1 and rows1 == rows2 and columns2 == 1,
do: %Matrex{data: NIFs.forward_substitute(first, second)}
@doc """
Create eye (identity) square matrix of given size.
## Examples
iex> Matrex.eye(3)
#Matrex[3×3]
┌ ┐
│ 1.0 0.0 0.0 │
│ 0.0 1.0 0.0 │
│ 0.0 0.0 1.0 │
└ ┘
iex> Matrex.eye(3, 2.95)
#Matrex[3×3]
┌ ┐
│ 2.95 0.0 0.0 │
│ 0.0 2.95 0.0 │
│ 0.0 0.0 2.95 │
└ ┘
"""
@spec eye(index, element) :: matrex
def eye(size, value \\ 1.0) when is_integer(size) and is_number(value),
do: %Matrex{data: NIFs.eye(size, value)}
@doc """
Create new matrix with only diagonal elements from a given matrix.
## Examples
iex> Matrex.eye(3) |> Matrex.diagonal()
#Matrex[1×3]
┌ ┐
│ 1.0 1.0 1.0 │
└ ┘
"""
@spec diagonal(matrex) :: matrex
def diagonal(matrix),
do: %Matrex{data: NIFs.diagonal(matrix.data)}
@doc """
Create matrix filled with given value. NIF.
## Example
iex> Matrex.fill(4, 3, 55)
#Matrex[4×3]
┌ ┐
│ 55.0 55.0 55.0 │
│ 55.0 55.0 55.0 │
│ 55.0 55.0 55.0 │
│ 55.0 55.0 55.0 │
└ ┘
"""
@spec fill(index, index, element) :: matrex
def fill(rows, cols, value)
when (is_integer(rows) and is_integer(cols) and is_number(value)) or is_atom(value),
do: %Matrex{data: NIFs.fill(rows, cols, float_to_binary(value))}
@doc """
Create square matrix filled with given value. Inlined.
## Example
iex> Matrex.fill(3, 55)
#Matrex[3×3]
┌ ┐
│ 55.0 55.0 55.0 │
│ 55.0 55.0 55.0 │
│ 55.0 55.0 55.0 │
└ ┘
"""
@spec fill(index, element) :: matrex
def fill(size, value), do: fill(size, size, value)
@doc """
Find the position of the first occurrence of the given value in the matrix. NIF.
Returns a {row, column} tuple, or nil if nothing was found. One-based.
## Example
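Using the 3×3 magic square from `magic/1`, where 9.0 sits at row 3, column 2:
iex> Matrex.magic(3) |> Matrex.find(9.0)
{3, 2}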
"""
@spec find(matrex, element) :: {index, index} | nil
def find(%Matrex{data: data}, value) when is_number(value) or value in [:nan, :inf, :neg_inf],
do: NIFs.find(data, float_to_binary(value))
@doc """
Return first element of a matrix.
## Example
iex> Matrex.new([[6,5,4],[3,2,1]]) |> Matrex.first()
6.0
"""
@spec first(matrex) :: element
def first(matrex_data(_rows, _columns, <<element::binary-@element_size, _::binary>>)),
do: binary_to_float(element)
@doc """
Prints monochrome or color heatmap of the matrix to the console.
Supports 8-color, 256-color and 16-million-color terminals. Monochrome on the 256-color palette is the default.
`type` can be `:mono8`, `:color8`, `:mono256`, `:color256`, `:mono24bit` and `:color24bit`.
Special float values, like infinity and not-a-number are marked with contrast colors on the map.
## Options
* `:at` — positions heatmap at the specified `{row, col}` position inside terminal.
* `:title` — sets the title of the heatmap.
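A minimal call sketch (the matrix and the option values here are illustrative):
```elixir
Matrex.random(20) |> Matrex.heatmap(:color256, title: "random noise")
```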
## Examples
<img src="https://raw.githubusercontent.com/versilov/matrex/master/docs/mnist8.png" width="200px" />
<img src="https://raw.githubusercontent.com/versilov/matrex/master/docs/mnist_sum.png" width="200px" />
<img src="https://raw.githubusercontent.com/versilov/matrex/master/docs/magic_square.png" width="200px" />
<img src="https://raw.githubusercontent.com/versilov/matrex/master/docs/twin_peaks.png" width="220px" />
<img src="https://raw.githubusercontent.com/versilov/matrex/master/docs/neurons_mono.png" width="233px" />
<img src="https://raw.githubusercontent.com/versilov/matrex/master/docs/logistic_regression.gif" width="180px" />
"""
@spec heatmap(
matrex,
:mono8 | :color8 | :mono256 | :color256 | :mono24bit | :color24bit,
keyword
) :: matrex
defdelegate heatmap(matrex, type \\ :mono256, opts \\ []), to: Matrex.Inspect
@doc """
An alias for `eye/1`.
"""
@spec identity(index) :: matrex
defdelegate identity(size), to: __MODULE__, as: :eye
@doc """
Returns list of all rows of a matrix as single-row matrices.
## Example
iex> m = Matrex.reshape(1..6, 3, 2)
#Matrex[3×2]
┌ ┐
│ 1.0 2.0 │
│ 3.0 4.0 │
│ 5.0 6.0 │
└ ┘
iex> Matrex.list_of_rows(m)
[#Matrex[1×2]
┌ ┐
│ 1.0 2.0 │
└ ┘,
#Matrex[1×2]
┌ ┐
│ 3.0 4.0 │
└ ┘,
#Matrex[1×2]
┌ ┐
│ 5.0 6.0 │
└ ┘]
"""
@spec list_of_rows(matrex) :: [matrex]
def list_of_rows(matrex_data(rows, columns, matrix)) do
do_list_rows(matrix, rows, columns)
end
@doc """
Returns range of rows of a matrix as list of 1-row matrices.
## Example
iex> m = Matrex.reshape(1..12, 6, 2)
#Matrex[6×2]
┌ ┐
│ 1.0 2.0 │
│ 3.0 4.0 │
│ 5.0 6.0 │
│ 7.0 8.0 │
│ 9.0 10.0 │
│ 11.0 12.0 │
└ ┘
iex> Matrex.list_of_rows(m, 2..4)
[#Matrex[1×2]
┌ ┐
│ 3.0 4.0 │
└ ┘,
#Matrex[1×2]
┌ ┐
│ 5.0 6.0 │
└ ┘,
#Matrex[1×2]
┌ ┐
│ 7.0 8.0 │
└ ┘]
"""
@spec list_of_rows(matrex, Range.t()) :: [matrex]
def list_of_rows(matrex_data(rows, columns, matrix), from..to)
when from <= to and to <= rows do
part =
binary_part(
matrix,
(from - 1) * columns * @element_size,
(to - from + 1) * columns * @element_size
)
do_list_rows(part, to - from + 1, columns)
end
defp do_list_rows(<<>>, 0, _), do: []
defp do_list_rows(<<rows::binary>>, row_num, columns) do
[
matrex_data(1, columns, binary_part(rows, 0, columns * @element_size))
| do_list_rows(
binary_part(rows, columns * @element_size, (row_num - 1) * columns * @element_size),
row_num - 1,
columns
)
]
end
@doc """
Load matrex from file.
CSV (`.csv`), binary (`.mtx`) and IDX (`.idx`) formats are supported, optionally gzip-compressed (`.gz`).
## Example
iex> Matrex.load("test/matrex.csv")
#Matrex[5×4]
┌ ┐
│ 0.0 4.8e-4-0.00517-0.01552 │
│-0.01616-0.01622 -0.0161-0.00574 │
│ 6.8e-4 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 0.0 │
└ ┘
"""
@spec load(binary) :: matrex
def load(file_name) when is_binary(file_name) do
cond do
:filename.extension(file_name) == ".gz" ->
File.read!(file_name)
|> :zlib.gunzip()
|> do_load(String.split(file_name, ".") |> Enum.at(-2) |> String.to_existing_atom())
:filename.extension(file_name) == ".csv" ->
do_load(File.read!(file_name), :csv)
:filename.extension(file_name) == ".mtx" ->
do_load(File.read!(file_name), :mtx)
:filename.extension(file_name) == ".idx" ->
do_load(File.read!(file_name), :idx)
true ->
raise "Unknown file format: #{file_name}"
end
end
@spec load(binary, :idx | :csv | :mtx) :: matrex
def load(file_name, format) when format in [:idx, :mtx, :csv],
do: do_load(File.read!(file_name), format)
defp do_load(data, :csv), do: new(data)
defp do_load(data, :mtx), do: %Matrex{data: data}
defp do_load(data, :idx), do: %Matrex{data: Matrex.IDX.load(data)}
@doc """
Creates "magic" n*n matrix, where sums of all dimensions are equal.
## Example
iex> Matrex.magic(5)
#Matrex[5×5]
┌ ┐
│ 16.0 23.0 5.0 7.0 14.0 │
│ 22.0 4.0 6.0 13.0 20.0 │
│ 3.0 10.0 12.0 19.0 21.0 │
│ 9.0 11.0 18.0 25.0 2.0 │
│ 15.0 17.0 24.0 1.0 8.0 │
└ ┘
"""
@spec magic(index) :: matrex
def magic(n) when is_integer(n), do: Matrex.MagicSquare.new(n) |> new()
@doc false
# Shortcut to get functions list outside in Matrex.Operators module.
def math_functions_list(), do: @math_functions
@doc """
Maximum element in a matrix. NIF.
## Example
iex> m = Matrex.magic(5)
#Matrex[5×5]
┌ ┐
│ 16.0 23.0 5.0 7.0 14.0 │
│ 22.0 4.0 6.0 13.0 20.0 │
│ 3.0 10.0 12.0 19.0 21.0 │
│ 9.0 11.0 18.0 25.0 2.0 │
│ 15.0 17.0 24.0 1.0 8.0 │
└ ┘
iex> Matrex.max(m)
25.0
iex> Matrex.reshape([1, 2, :inf, 4, 5, 6], 2, 3) |> Matrex.max()
:inf
"""
@spec max(matrex) :: element
def max(%Matrex{data: matrix}), do: NIFs.max(matrix)
@doc """
Returns maximum finite element of a matrex. NIF.
Used on matrices which may contain infinite values.
## Example
iex> Matrex.reshape([1, 2, :inf, 3, :nan, 5], 3, 2) |> Matrex.max_finite()
5.0
"""
@spec max_finite(matrex) :: float
def max_finite(%Matrex{data: matrix}), do: NIFs.max_finite(matrix)
@doc """
Minimum element in a matrix. NIF.
## Example
iex> m = Matrex.magic(5)
#Matrex[5×5]
┌ ┐
│ 16.0 23.0 5.0 7.0 14.0 │
│ 22.0 4.0 6.0 13.0 20.0 │
│ 3.0 10.0 12.0 19.0 21.0 │
│ 9.0 11.0 18.0 25.0 2.0 │
│ 15.0 17.0 24.0 1.0 8.0 │
└ ┘
iex> Matrex.min(m)
1.0
iex> Matrex.reshape([1, 2, :neg_inf, 4, 5, 6], 2, 3) |> Matrex.min()
:neg_inf
"""
@spec min(matrex) :: element
def min(%Matrex{data: matrix}), do: NIFs.min(matrix)
@doc """
Returns minimum finite element of a matrex. NIF.
Used on matrices which may contain infinite values.
## Example
iex> Matrex.reshape([1, 2, :neg_inf, 3, 4, 5], 3, 2) |> Matrex.min_finite()
1.0
"""
@spec min_finite(matrex) :: float
def min_finite(%Matrex{data: matrix}), do: NIFs.min_finite(matrix)
@doc """
Element-wise multiplication of two matrices, or of a matrix and a scalar. NIF.
Raises `ErlangError` if matrices' sizes do not match.
## Examples
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.multiply(Matrex.new([[5, 2, 1], [3, 4, 6]]))
#Matrex[2×3]
┌ ┐
│ 5.0 4.0 3.0 │
│ 12.0 20.0 36.0 │
└ ┘
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |> Matrex.multiply(2)
#Matrex[2×3]
┌ ┐
│ 2.0 4.0 6.0 │
│ 8.0 10.0 12.0 │
└ ┘
"""
@spec multiply(matrex, matrex) :: matrex
@spec multiply(matrex, number) :: matrex
@spec multiply(number, matrex) :: matrex
def multiply(%Matrex{data: first}, %Matrex{data: second}),
do: %Matrex{data: NIFs.multiply(first, second)}
def multiply(%Matrex{data: matrix}, scalar) when is_number(scalar),
do: %Matrex{data: NIFs.multiply_with_scalar(matrix, scalar)}
def multiply(scalar, %Matrex{data: matrix}) when is_number(scalar),
do: %Matrex{data: NIFs.multiply_with_scalar(matrix, scalar)}
@doc """
Negates each element of the matrix. NIF.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |> Matrex.neg()
#Matrex[2×3]
┌ ┐
│ -1.0 -2.0 -3.0 │
│ -4.0 -5.0 -6.0 │
└ ┘
"""
@spec neg(matrex) :: matrex
def neg(%Matrex{data: matrix}), do: %Matrex{data: NIFs.neg(matrix)}
@doc """
Creates a new matrix with values provided by the given function.
If the function accepts two arguments, the one-based row and column indices of each element are passed to it.
## Examples
iex> Matrex.new(3, 3, fn -> :rand.uniform() end)
#Matrex[3×3]
┌ ┐
│ 0.45643 0.91533 0.25332 │
│ 0.29095 0.21241 0.9776 │
│ 0.42451 0.05422 0.92863 │
└ ┘
iex> Matrex.new(3, 3, fn row, col -> row*col end)
#Matrex[3×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 2.0 4.0 6.0 │
│ 3.0 6.0 9.0 │
└ ┘
"""
@spec new(index, index, (() -> element)) :: matrex
@spec new(index, index, (index, index -> element)) :: matrex
def new(rows, columns, function) when is_function(function, 0) do
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
new_matrix_from_function(rows * columns, function, initial)
end
def new(rows, columns, function) when is_function(function, 2) do
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
size = rows * columns
new_matrix_from_function(size, rows, columns, function, initial)
end
@doc """
Creates a new 1-row matrix (aka vector) from the given list.
## Examples
iex> [1,2,3] |> Matrex.from_list()
#Matrex[1×3]
┌ ┐
│ 1.0 2.0 3.0 │
└ ┘
"""
def from_list(lst) when is_list(lst) do
new([lst])
end
@spec float_to_binary(element | :nan | :inf | :neg_inf) :: binary
defp float_to_binary(val) when is_number(val), do: <<val::float-little-32>>
defp float_to_binary(:nan), do: @not_a_number
defp float_to_binary(:inf), do: @positive_infinity
defp float_to_binary(:neg_inf), do: @negative_infinity
defp float_to_binary(unknown_val),
do: raise(ArgumentError, message: "Unknown matrix element value: #{unknown_val}")
@doc """
Creates a new matrix from a list of lists or a text representation (compatible with MATLAB/Octave).
List of lists can contain other matrices, which are concatenated in one.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]])
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.new([[Matrex.fill(2, 1.0), Matrex.fill(2, 3, 2.0)],
...> [Matrex.fill(1, 2, 3.0), Matrex.fill(1, 3, 4.0)]])
#Matrex[3×5]
┌ ┐
│ 1.0 1.0 2.0 2.0 2.0 │
│ 1.0 1.0 2.0 2.0 2.0 │
│ 3.0 3.0 4.0 4.0 4.0 │
└ ┘
iex> Matrex.new("1;0;1;0;1")
#Matrex[5×1]
┌ ┐
│ 1.0 │
│ 0.0 │
│ 1.0 │
│ 0.0 │
│ 1.0 │
└ ┘
iex> Matrex.new(\"\"\"
...> 1.0 0.1 0.6 1.1
...> 1.0 0.2 0.7 1.2
...> 1.0 NaN 0.8 1.3
...> Inf 0.4 0.9 1.4
...> 1.0 0.5 NegInf 1.5
...> \"\"\")
#Matrex[5×4]
┌ ┐
│ 1.0 0.1 0.6 1.1 │
│ 1.0 0.2 0.7 1.2 │
│ 1.0 NaN 0.8 1.3 │
│ ∞ 0.4 0.9 1.4 │
│ 1.0 0.5 -∞ 1.5 │
└ ┘
"""
@spec new([[element]] | [[matrex]] | binary) :: matrex
def new(
[
[
%Matrex{} | _
]
| _
] = lol_of_ma
) do
lol_of_ma
|> Enum.map(&Matrex.concat/1)
|> Enum.reduce(&Matrex.concat(&2, &1, :rows))
end
def new([first_list | _] = lol_or_binary) when is_list(first_list) do
rows = length(lol_or_binary)
columns = length(first_list)
initial = <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
%Matrex{
data:
Enum.reduce(lol_or_binary, initial, fn list, accumulator ->
accumulator <>
Enum.reduce(list, <<>>, fn element, partial ->
<<partial::binary, float_to_binary(element)::binary>>
end)
end)
}
end
def new(text) when is_binary(text) do
text
|> String.split(["\n", ";"], trim: true)
|> Enum.map(fn line ->
line
|> String.split(["\s", ","], trim: true)
|> Enum.map(fn f -> parse_float(f) end)
end)
|> new()
end
@spec parse_float(binary) :: element | :nan | :inf | :neg_inf
defp parse_float("NaN"), do: :nan
defp parse_float("Inf"), do: :inf
defp parse_float("+Inf"), do: :inf
defp parse_float("-Inf"), do: :neg_inf
defp parse_float("NegInf"), do: :neg_inf
defp parse_float(string) do
case Float.parse(string) do
{value, _rem} -> value
:error -> raise ArgumentError, message: "Unparseable matrix element value: #{string}"
end
end
defp new_matrix_from_function(0, _, accumulator), do: %Matrex{data: accumulator}
defp new_matrix_from_function(size, function, accumulator),
do:
new_matrix_from_function(
size - 1,
function,
<<accumulator::binary, function.()::float-little-32>>
)
defp new_matrix_from_function(0, _, _, _, accumulator), do: %Matrex{data: accumulator}
defp new_matrix_from_function(size, rows, columns, function, accumulator) do
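# `size` counts down from rows * columns to 1; recover the current zero-based
# {row, col} position from how many elements are still left to generate.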
{row, col} =
if rem(size, columns) == 0 do
{rows - div(size, columns), 0}
else
{rows - 1 - div(size, columns), columns - rem(size, columns)}
end
new_accumulator = <<accumulator::binary, function.(row + 1, col + 1)::float-little-32>>
new_matrix_from_function(size - 1, rows, columns, function, new_accumulator)
end
@doc """
Bring all values of matrix into [0, 1] range. NIF.
Here 0 corresponds to the minimum value of the matrix, and 1 to the maximum.
## Example
iex> m = Matrex.reshape(1..9, 3, 3)
#Matrex[3×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
│ 7.0 8.0 9.0 │
└ ┘
iex> Matrex.normalize(m)
#Matrex[3×3]
┌ ┐
│ 0.0 0.125 0.25 │
│ 0.375 0.5 0.625 │
│ 0.75 0.875 1.0 │
└ ┘
"""
@spec normalize(matrex) :: matrex
def normalize(%Matrex{data: data}), do: %Matrex{data: NIFs.normalize(data)}
@doc """
Create matrix filled with ones.
## Example
iex> Matrex.ones(2, 3)
#Matrex[2×3]
┌ ┐
│ 1.0 1.0 1.0 │
│ 1.0 1.0 1.0 │
└ ┘
"""
@spec ones(index, index) :: matrex
def ones(rows, cols) when is_integer(rows) and is_integer(cols), do: fill(rows, cols, 1)
@doc """
Create a square matrix of ones, or consume the output of the `size/1` function.
## Examples
iex> Matrex.ones(3)
#Matrex[3×3]
┌ ┐
│ 1.0 1.0 1.0 │
│ 1.0 1.0 1.0 │
│ 1.0 1.0 1.0 │
└ ┘
iex> m = Matrex.new("1 2 3; 4 5 6")
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.ones(Matrex.size(m))
#Matrex[2×3]
┌ ┐
│ 1.0 1.0 1.0 │
│ 1.0 1.0 1.0 │
└ ┘
"""
@spec ones(index) :: matrex
@spec ones({index, index}) :: matrex
def ones({rows, cols}), do: ones(rows, cols)
def ones(size) when is_integer(size), do: fill(size, 1)
@doc """
Prints matrix to the console.
Accepted options:
* `:rows` — number of rows of matrix to show. Defaults to 21
* `:columns` — number of columns of matrix to show. Defaults to the maximum number of columns
that fit into the current terminal width.
Returns the matrix itself, so can be used in pipes.
## Example
iex> Matrex.print(m, rows: 5, columns: 3)
#Matrex[20×20]
┌ ┐
│ 1.0 399.0 … 20.0 │
│ 380.0 22.0 … 361.0 │
│ 360.0 42.0 … 341.0 │
│ ⋮ ⋮ … ⋮ │
│ 40.0 362.0 … 21.0 │
│ 381.0 19.0 … 400.0 │
└ ┘
"""
@spec print(matrex, Keyword.t()) :: matrex
def print(%Matrex{} = matrex, opts \\ [rows: 21]) do
{:ok, terminal_columns} = :io.columns()
columns =
case Keyword.get(opts, :columns) do
nil -> terminal_columns
cols -> cols * 8 + 10
end
matrex
|> Matrex.Inspect.do_inspect(columns, Keyword.get(opts, :rows, 21))
|> IO.puts()
matrex
end
@doc """
Create matrix of random floats in [0, 1] range. NIF.
C language RNG is seeded on NIF library load with `srandom(time(NULL) + clock())`.
## Example
iex> Matrex.random(4,3)
#Matrex[4×3]
┌ ┐
│ 0.32994 0.28736 0.88012 │
│ 0.51782 0.68608 0.29976 │
│ 0.52953 0.9071 0.26743 │
│ 0.82189 0.59311 0.8451 │
└ ┘
"""
@spec random(index, index) :: matrex
def random(rows, columns) when is_integer(rows) and is_integer(columns),
do: %Matrex{data: NIFs.random(rows, columns)}
@doc """
Create square matrix of random floats.
See `random/2` for details.
## Example
iex> Matrex.random(3)
#Matrex[3×3]
┌ ┐
│ 0.66438 0.31026 0.98602 │
│ 0.82127 0.04701 0.13278 │
│ 0.96935 0.70772 0.98738 │
└ ┘
"""
@spec random(index) :: matrex
def random(size) when is_integer(size), do: random(size, size)
@doc """
Resize matrix by scaling its dimensions with `scale`. NIF.
## Examples
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex(3)> Matrex.resize(m, 2)
#Matrex[6×6]
┌ ┐
│ 8.0 8.0 1.0 1.0 6.0 6.0 │
│ 8.0 8.0 1.0 1.0 6.0 6.0 │
│ 3.0 3.0 5.0 5.0 7.0 7.0 │
│ 3.0 3.0 5.0 5.0 7.0 7.0 │
│ 4.0 4.0 9.0 9.0 2.0 2.0 │
│ 4.0 4.0 9.0 9.0 2.0 2.0 │
└ ┘
iex(4)> m = Matrex.magic(5)
#Matrex[5×5]
┌ ┐
│ 16.0 23.0 5.0 7.0 14.0 │
│ 22.0 4.0 6.0 13.0 20.0 │
│ 3.0 10.0 12.0 19.0 21.0 │
│ 9.0 11.0 18.0 25.0 2.0 │
│ 15.0 17.0 24.0 1.0 8.0 │
└ ┘
iex(5)> Matrex.resize(m, 0.5)
#Matrex[3×3]
┌ ┐
│ 16.0 23.0 7.0 │
│ 22.0 4.0 13.0 │
│ 9.0 11.0 25.0 │
└ ┘
"""
@spec resize(matrex, number, :nearest | :bilinear) :: matrex
def resize(matrex, scale, method \\ :nearest)
def resize(%Matrex{} = matrex, 1, _), do: matrex
def resize(%Matrex{data: data}, scale, :nearest) when is_number(scale) and scale > 0,
do: %Matrex{data: NIFs.resize(data, scale)}
@doc """
Reshapes list of values into a matrix of given size or changes the shape of existing matrix.
Takes a list, or anything that `Enum.to_list/1` can convert to a list.
Can take a list of matrices and concatenate them into one big matrix.
Raises `ArgumentError` if list size and given shape do not match.
## Example
iex> [1, 2, 3, 4, 5, 6] |> Matrex.reshape(2, 3)
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.reshape([Matrex.zeros(2), Matrex.ones(2),
...> Matrex.fill(3, 2, 2.0), Matrex.fill(3, 2, 3.0)], 2, 2)
#Matrex[5×4]
┌ ┐
│ 0.0 0.0 1.0 1.0 │
│ 0.0 0.0 1.0 1.0 │
│ 2.0 2.0 3.0 3.0 │
│ 2.0 2.0 3.0 3.0 │
│ 2.0 2.0 3.0 3.0 │
└ ┘
iex> Matrex.reshape(1..6, 2, 3)
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.new("1 2 3; 4 5 6") |> Matrex.reshape(3, 2)
#Matrex[3×2]
┌ ┐
│ 1.0 2.0 │
│ 3.0 4.0 │
│ 5.0 6.0 │
└ ┘
"""
def reshape([], _, _), do: raise(ArgumentError)
@spec reshape([matrex], index, index) :: matrex
def reshape([%Matrex{} | _] = enum, _rows, columns) do
enum
|> Enum.chunk_every(columns)
|> new()
end
@spec reshape([element], index, index) :: matrex
def reshape([_ | _] = list, rows, columns),
do: %Matrex{
data:
do_reshape(
<<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>,
list,
rows,
columns
)
}
@spec reshape(matrex, index, index) :: matrex
def reshape(
matrex_data(rows, columns, _matrix),
new_rows,
new_columns
)
when rows * columns != new_rows * new_columns,
do:
raise(
ArgumentError,
message:
"Cannot reshape: #{rows}×#{columns} does not fit into #{new_rows}×#{new_columns}."
)
def reshape(
matrex_data(rows, columns, _matrix) = matrex,
rows,
columns
),
# No need to reshape.
do: matrex
def reshape(
matrex_data(_rows, _columns, matrix),
new_rows,
new_columns
),
do: matrex_data(new_rows, new_columns, matrix)
@spec reshape(Range.t(), index, index) :: matrex
def reshape(a..b, rows, cols) when b - a + 1 != rows * cols,
do:
raise(
ArgumentError,
message: "range #{a}..#{b} cannot be reshaped into #{rows}×#{cols} matrix."
)
def reshape(a..b, rows, cols), do: %Matrex{data: NIFs.from_range(a, b, rows, cols)}
@spec reshape(Enumerable.t(), index, index) :: matrex
def reshape(input, rows, columns), do: input |> Enum.to_list() |> reshape(rows, columns)
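# do_reshape/4 builds the NIF binary directly: an 8-byte header of
# <<rows::unsigned-integer-little-32, columns::unsigned-integer-little-32>>
# followed by each element encoded as a 4-byte float, appended element by
# element while the row/column counters below track how many values remain.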
defp do_reshape(data, [], 1, 0), do: data
defp do_reshape(_data, [], _, _),
do: raise(ArgumentError, message: "Not enough elements for this shape")
defp do_reshape(_data, [_ | _], 1, 0),
do: raise(ArgumentError, message: "Too many elements for this shape")
# Another row is ready, restart counters
defp do_reshape(
<<_rows::unsigned-integer-little-32, columns::unsigned-integer-little-32, _::binary>> =
data,
list,
row,
0
),
do: do_reshape(data, list, row - 1, columns)
defp do_reshape(
<<_rows::unsigned-integer-little-32, _columns::unsigned-integer-little-32, _::binary>> =
data,
[elem | tail],
row,
column
) do
do_reshape(<<data::binary, float_to_binary(elem)::binary-4>>, tail, row, column - 1)
end
@doc """
Return matrix row as list by one-based index.
## Example
iex> m = Matrex.magic(5)
#Matrex[5×5]
┌ ┐
│ 16.0 23.0 5.0 7.0 14.0 │
│ 22.0 4.0 6.0 13.0 20.0 │
│ 3.0 10.0 12.0 19.0 21.0 │
│ 9.0 11.0 18.0 25.0 2.0 │
│ 15.0 17.0 24.0 1.0 8.0 │
└ ┘
iex> Matrex.row_to_list(m, 3)
[3.0, 10.0, 12.0, 19.0, 21.0]
"""
@spec row_to_list(matrex, index) :: [element]
def row_to_list(%Matrex{data: matrix}, row) when is_integer(row) and row > 0,
do: NIFs.row_to_list(matrix, row - 1)
@doc """
Get row of matrix as matrix (vector) in matrex form. One-based.
You can use shorter `matrex[n]` syntax for the same result.
## Example
iex> m = Matrex.magic(5)
#Matrex[5×5]
┌ ┐
│ 16.0 23.0 5.0 7.0 14.0 │
│ 22.0 4.0 6.0 13.0 20.0 │
│ 3.0 10.0 12.0 19.0 21.0 │
│ 9.0 11.0 18.0 25.0 2.0 │
│ 15.0 17.0 24.0 1.0 8.0 │
└ ┘
iex> Matrex.row(m, 4)
#Matrex[1×5]
┌ ┐
│ 9.0 11.0 18.0 25.0 2.0 │
└ ┘
iex> m[4]
#Matrex[1×5]
┌ ┐
│ 9.0 11.0 18.0 25.0 2.0 │
└ ┘
"""
@spec row(matrex, index) :: matrex
def row(matrex_data(rows, columns, data), row)
when is_integer(row) and row > 0 and row <= rows do
matrex_data(
1,
columns,
binary_part(data, (row - 1) * columns * @element_size, columns * @element_size)
)
end
@doc """
Saves matrex into a file.
Binary (.mtx) and CSV formats are currently supported.
The format is determined by the filename extension.
## Example
iex> Matrex.random(5) |> Matrex.save("r.mtx")
:ok
"""
@spec save(matrex, binary) :: :ok | :error
def save(
%Matrex{
data: matrix
},
file_name
)
when is_binary(file_name) do
cond do
:filename.extension(file_name) == ".mtx" ->
File.write!(file_name, matrix)
:filename.extension(file_name) == ".csv" ->
csv =
matrix
|> NIFs.to_list_of_lists()
|> Enum.reduce("", fn row_list, acc ->
acc <>
Enum.reduce(row_list, "", fn elem, line ->
line <> element_to_string(elem) <> ","
end) <> "\n"
end)
File.write!(file_name, csv)
true ->
raise "Unknown file format: #{file_name}"
end
end
@doc false
@spec element_to_string(element) :: binary
# Save zero values without fraction part to save space
def element_to_string(0.0), do: "0"
def element_to_string(val) when is_float(val), do: Float.to_string(val)
def element_to_string(:nan), do: "NaN"
def element_to_string(:inf), do: "Inf"
def element_to_string(:neg_inf), do: "-Inf"
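# For example: element_to_string(0.0) == "0", element_to_string(1.5) == "1.5",
# element_to_string(:nan) == "NaN", element_to_string(:neg_inf) == "-Inf".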
@doc """
Convert a one-element matrix to a scalar value.
Unlike `first/1`, it raises `FunctionClauseError`
if the matrix contains more than one element.
## Example
iex> Matrex.new([[1.234]]) |> Matrex.scalar()
1.234
iex> Matrex.new([[0]]) |> Matrex.divide(0) |> Matrex.scalar()
:nan
iex> Matrex.new([[1.234, 5.678]]) |> Matrex.scalar()
** (FunctionClauseError) no function clause matching in Matrex.scalar/1
"""
@spec scalar(matrex) :: element
def scalar(%Matrex{
data: <<1::unsigned-integer-little-32, 1::unsigned-integer-little-32, elem::binary-4>>
}),
do: binary_to_float(elem)
@doc """
Set element of matrix at the specified position (one-based) to new value.
## Example
iex> m = Matrex.ones(3)
#Matrex[3×3]
┌ ┐
│ 1.0 1.0 1.0 │
│ 1.0 1.0 1.0 │
│ 1.0 1.0 1.0 │
└ ┘
iex> m = Matrex.set(m, 2, 2, 0)
#Matrex[3×3]
┌ ┐
│ 1.0 1.0 1.0 │
│ 1.0 0.0 1.0 │
│ 1.0 1.0 1.0 │
└ ┘
iex> m = Matrex.set(m, 3, 2, :neg_inf)
#Matrex[3×3]
┌ ┐
│ 1.0 1.0 1.0 │
│ 1.0 0.0 1.0 │
│ 1.0 -∞ 1.0 │
└ ┘
"""
@spec set(matrex, index, index, element) :: matrex
def set(matrex_data(rows, cols, _rest, matrix), row, column, value)
when (is_number(value) or value in [:nan, :inf, :neg_inf]) and row > 0 and column > 0 and
row <= rows and column <= cols,
do: %Matrex{data: NIFs.set(matrix, row - 1, column - 1, float_to_binary(value))}
@doc """
Set column of a matrix to the values from the given 1-column matrix. NIF.
## Example
iex> m = Matrex.reshape(1..6, 3, 2)
#Matrex[3×2]
┌ ┐
│ 1.0 2.0 │
│ 3.0 4.0 │
│ 5.0 6.0 │
└ ┘
iex> Matrex.set_column(m, 2, Matrex.new("7; 8; 9"))
#Matrex[3×2]
┌ ┐
│ 1.0 7.0 │
│ 3.0 8.0 │
│ 5.0 9.0 │
└ ┘
"""
@spec set_column(matrex, index, matrex) :: matrex
def set_column(
matrex_data(rows, columns, _rest1, matrix),
column,
matrex_data(rows, 1, _rest2, column_matrix)
)
when column in 1..columns,
do: %Matrex{data: NIFs.set_column(matrix, column - 1, column_matrix)}
@doc """
Return size of matrix as `{rows, cols}`
## Example
iex> m = Matrex.random(2,3)
#Matrex[2×3]
┌ ┐
│ 0.69745 0.23668 0.36376 │
│ 0.63423 0.29651 0.22844 │
└ ┘
iex> Matrex.size(m)
{2, 3}
"""
@spec size(matrex) :: {index, index}
def size(matrex_data(rows, cols, _)), do: {rows, cols}
@doc """
Produces element-wise squared matrix. NIF through `multiply/2`.
## Example
iex> m = Matrex.new("1 2 3; 4 5 6")
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.square(m)
#Matrex[2×3]
┌ ┐
│ 1.0 4.0 9.0 │
│ 16.0 25.0 36.0 │
└ ┘
"""
@spec square(matrex) :: matrex
def square(%Matrex{data: matrix}), do: %Matrex{data: Matrex.NIFs.multiply(matrix, matrix)}
@doc """
Raises each element of the matrix to the power of `exponent`, element-wise. NIF through `power/2`.
## Example
iex> m = Matrex.new("1 2 3; 4 5 6")
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.pow(m, 2)
#Matrex[2×3]
┌ ┐
│ 1.0 4.0 9.0 │
│ 16.0 25.0 36.0 │
└ ┘
"""
@spec pow(matrex, number) :: matrex
def pow(%Matrex{data: matrix}, exponent), do: %Matrex{data: Matrex.NIFs.power(exponent, matrix)}
@doc """
Returns submatrix for a given matrix. NIF.
Row and column ranges are inclusive and one-based.
## Example
iex> m = Matrex.new("1 2 3; 4 5 6; 7 8 9")
#Matrex[3×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
│ 7.0 8.0 9.0 │
└ ┘
iex> Matrex.submatrix(m, 2..3, 2..3)
#Matrex[2×2]
┌ ┐
│ 5.0 6.0 │
│ 8.0 9.0 │
└ ┘
"""
@spec submatrix(matrex, Range.t(), Range.t()) :: matrex
def submatrix(matrex_data(rows, cols, _rest, data), row_from..row_to, col_from..col_to)
when row_from in 1..rows and row_to in row_from..rows and col_from in 1..cols and
col_to in col_from..cols,
do: %Matrex{data: NIFs.submatrix(data, row_from - 1, row_to - 1, col_from - 1, col_to - 1)}
def submatrix(%Matrex{} = matrex, rows, cols) do
raise(
RuntimeError,
"Submatrix position out of range or malformed: position is " <>
"(#{Kernel.inspect(rows)}, #{Kernel.inspect(cols)}), source size is " <>
"(#{Kernel.inspect(1..matrex[:rows])}, #{Kernel.inspect(1..matrex[:columns])})"
)
end
@doc """
Subtracts two matrices element-wise, or subtracts a scalar and a matrix in either order. NIF.
Raises `ErlangError` if matrices' sizes do not match.
## Examples
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.subtract(Matrex.new([[5, 2, 1], [3, 4, 6]]))
#Matrex[2×3]
┌ ┐
│ -4.0 0.0 2.0 │
│ 1.0 1.0 0.0 │
└ ┘
iex> Matrex.subtract(1, Matrex.new([[1, 2, 3], [4, 5, 6]]))
#Matrex[2×3]
┌ ┐
│ 0.0 -1.0 -2.0 │
│ -3.0 -4.0 -5.0 │
└ ┘
"""
@spec subtract(matrex | number, matrex | number) :: matrex
def subtract(%Matrex{data: first}, %Matrex{data: second}),
do: %Matrex{data: NIFs.subtract(first, second)}
def subtract(scalar, %Matrex{data: matrix}) when is_number(scalar),
do: %Matrex{data: NIFs.subtract_from_scalar(scalar, matrix)}
def subtract(%Matrex{data: matrix}, scalar) when is_number(scalar),
do: %Matrex{data: NIFs.add_scalar(matrix, -scalar)}
@doc """
Subtracts the first argument from the second, element-wise (the reverse of `subtract/2`). Inlined.
Raises `ErlangError` if matrices' sizes do not match.
## Example
iex> Matrex.new([[1, 2, 3], [4, 5, 6]]) |>
...> Matrex.subtract_inverse(Matrex.new([[5, 2, 1], [3, 4, 6]]))
#Matrex[2×3]
┌ ┐
│ 4.0 0.0 -2.0 │
│ -1.0 -1.0 0.0 │
└ ┘
iex> Matrex.eye(3) |> Matrex.subtract_inverse(1)
#Matrex[3×3]
┌ ┐
│ 0.0 1.0 1.0 │
│ 1.0 0.0 1.0 │
│ 1.0 1.0 0.0 │
└ ┘
"""
@spec subtract_inverse(matrex | number, matrex | number) :: matrex
def subtract_inverse(%Matrex{} = first, %Matrex{} = second), do: subtract(second, first)
def subtract_inverse(%Matrex{} = first, scalar) when is_number(scalar),
do: subtract(scalar, first)
def subtract_inverse(scalar, %Matrex{} = second) when is_number(scalar),
do: subtract(second, scalar)
@doc """
Sums all elements. NIF.
Can return special float values as atoms.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.sum(m)
45.0
iex> m = Matrex.new("1 Inf; 2 3")
#Matrex[2×2]
┌ ┐
│ 1.0 ∞ │
│ 2.0 3.0 │
└ ┘
iex> sum(m)
:inf
"""
@spec sum(matrex) :: element
def sum(%Matrex{data: matrix}), do: NIFs.sum(matrix)
@doc """
Trace of matrix (sum of all diagonal elements). Elixir.
Can return special float values as atoms.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.trace(m)
15.0
iex> m = Matrex.new("Inf 1; 2 3")
#Matrex[2×2]
┌ ┐
│ ∞ 1.0 │
│ 2.0 3.0 │
└ ┘
iex> trace(m)
:inf
"""
@spec trace(matrex) :: element
def trace(%Matrex{data: matrix}), do: NIFs.diagonal(matrix) |> NIFs.sum()
@doc """
Converts to flat list. NIF.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.to_list(m)
[8.0, 1.0, 6.0, 3.0, 5.0, 7.0, 4.0, 9.0, 2.0]
"""
@spec to_list(matrex) :: list(element)
def to_list(%Matrex{data: matrix}), do: NIFs.to_list(matrix)
@doc """
Converts to list of lists. NIF.
## Examples
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.to_list_of_lists(m)
[[8.0, 1.0, 6.0], [3.0, 5.0, 7.0], [4.0, 9.0, 2.0]]
iex> r = Matrex.divide(Matrex.eye(3), Matrex.zeros(3))
#Matrex[3×3]
┌ ┐
│ ∞ NaN NaN │
│ NaN ∞ NaN │
│ NaN NaN ∞ │
└ ┘
iex> Matrex.to_list_of_lists(r)
[[:inf, :nan, :nan], [:nan, :inf, :nan], [:nan, :nan, :inf]]
"""
@spec to_list_of_lists(matrex) :: list(list(element))
def to_list_of_lists(%Matrex{data: matrix}), do: NIFs.to_list_of_lists(matrix)
@doc """
Convert any matrix m×n to a column matrix (m*n)×1.
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.to_column(m)
#Matrex[9×1]
┌      ┐
│  8.0 │
│  1.0 │
│  6.0 │
│  3.0 │
│  5.0 │
│  7.0 │
│  4.0 │
│  9.0 │
│  2.0 │
└      ┘
"""
@spec to_column(matrex) :: matrex
def to_column(matrex_data(_rows, 1, _rest) = m), do: m
def to_column(matrex_data(rows, columns, _rest) = m), do: reshape(m, rows * columns, 1)
@doc """
Convert any matrix m×n to a row matrix 1×(m*n).
## Example
iex> m = Matrex.magic(3)
#Matrex[3×3]
┌ ┐
│ 8.0 1.0 6.0 │
│ 3.0 5.0 7.0 │
│ 4.0 9.0 2.0 │
└ ┘
iex> Matrex.to_row(m)
#Matrex[1×9]
┌ ┐
│ 8.0 1.0 6.0 3.0 5.0 7.0 4.0 9.0 2.0 │
└ ┘
"""
@spec to_row(matrex) :: matrex
def to_row(matrex_data(1, _columns, _rest) = m), do: m
def to_row(matrex_data(rows, columns, _rest) = m), do: reshape(m, 1, rows * columns)
@doc """
Transposes a matrix. NIF.
## Example
iex> m = Matrex.new([[1,2,3],[4,5,6]])
#Matrex[2×3]
┌ ┐
│ 1.0 2.0 3.0 │
│ 4.0 5.0 6.0 │
└ ┘
iex> Matrex.transpose(m)
#Matrex[3×2]
┌ ┐
│ 1.0 4.0 │
│ 2.0 5.0 │
│ 3.0 6.0 │
└ ┘
"""
@spec transpose(matrex) :: matrex
# Vectors are transposed by simply reshaping
def transpose(matrex_data(1, columns, _rest) = m), do: reshape(m, columns, 1)
def transpose(matrex_data(rows, 1, _rest) = m), do: reshape(m, 1, rows)
def transpose(%Matrex{data: matrix}), do: %Matrex{data: NIFs.transpose(matrix)}
@doc """
Updates the element at the given position in the matrix with the given function.
The function is invoked with the current element value.
## Example
iex> m = Matrex.reshape(1..6, 3, 2)
#Matrex[3×2]
┌ ┐
│ 1.0 2.0 │
│ 3.0 4.0 │
│ 5.0 6.0 │
└ ┘
iex> Matrex.update(m, 2, 2, fn x -> x * x end)
#Matrex[3×2]
┌ ┐
│ 1.0 2.0 │
│ 3.0 16.0 │
│ 5.0 6.0 │
└ ┘
"""
@spec update(matrex, index, index, (element -> element)) :: matrex
def update(matrex_data(rows, columns, _data), row, col, _fun)
when not inside_matrex(row, col, rows, columns),
do:
raise(
ArgumentError,
message: "Position (#{row}, #{col}) is out of matrex [#{rows}×#{columns}]"
)
def update(matrex_data(_rows, columns, data, matrix), row, col, fun)
when is_function(fun, 1) do
new_value =
data
|> binary_part(((row - 1) * columns + (col - 1)) * @element_size, @element_size)
|> binary_to_float()
|> fun.()
|> float_to_binary()
%Matrex{data: NIFs.set(matrix, row - 1, col - 1, new_value)}
end
@doc """
Create matrix of zeros of the specified size. NIF, using `memset()`.
Faster than `fill(rows, cols, 0)`.
## Example
iex> Matrex.zeros(4,3)
#Matrex[4×3]
┌ ┐
│ 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 │
└ ┘
"""
@spec zeros(index, index) :: matrex
def zeros(rows, cols) when is_integer(rows) and is_integer(cols),
do: %Matrex{data: NIFs.zeros(rows, cols)}
@doc """
Create square matrix of size `size` rows × `size` columns, filled with zeros. Inlined.
## Example
iex> Matrex.zeros(3)
#Matrex[3×3]
┌ ┐
│ 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 │
│ 0.0 0.0 0.0 │
└ ┘
"""
@spec zeros(index | {index, index}) :: matrex
def zeros({rows, cols}), do: zeros(rows, cols)
def zeros(size), do: zeros(size, size)
end
|
lib/matrex.ex
| 0.916352
| 0.883789
|
matrex.ex
|
starcoder
|
defmodule Vnu.Message do
@moduledoc """
A message is a unit of information returned by the Checker.
See [its documentation](https://github.com/validator/validator/wiki/Output-%C2%BB-JSON#media-type) for detailed, up-to-date information about its output format.
## Fields
- `:type` - One of `:error`, `:info`, or `:non_document_error`. Info messages can either be general information or warnings, see `:sub_type`.
Non-document errors signify errors with the Checker server itself, and are treated internally by this library as if the validation could not be run at all.
- `:sub_type` - For messages of type `:error` it could be `nil` or `:fatal`. For messages of type `:info`, it could be `nil` or `:warning`.
- `:message` - The detailed description of the issue.
- `:extract` - The snippet of the document that the message is about.
- `:first_line`, `:last_line`, `:first_column`, `:last_column` - The position of the part of the document the message is about relative to the whole document.
Lines and columns are numbered from 1.
- `:hilite_start`, `:hilite_length` - Indicate the start and length of the substring of the `:extract` that the message is roughly about.
The characters are numbered from 0.
"""
defstruct([
:type,
:sub_type,
:message,
:extract,
:offset,
:first_line,
:first_column,
:last_line,
:last_column,
:hilite_start,
:hilite_length
])
@type t :: %__MODULE__{
type: :error | :info | :non_document_error,
sub_type: :warning | :fatal | :io | :schema | :internal | nil,
message: String.t() | nil,
extract: String.t() | nil,
offset: integer() | nil,
first_line: integer() | nil,
first_column: integer() | nil,
last_line: integer() | nil,
last_column: integer() | nil,
hilite_start: integer() | nil,
hilite_length: integer() | nil
}
@doc false
def from_http_response(map) do
message = %__MODULE__{
type: get_type(map),
sub_type: get_sub_type(map),
message: get_string(map, "message"),
extract: get_string(map, "extract"),
offset: get_integer(map, "offset"),
first_line: get_integer(map, "firstLine"),
first_column: get_integer(map, "firstColumn"),
last_line: get_integer(map, "lastLine"),
last_column: get_integer(map, "lastColumn"),
hilite_start: get_integer(map, "hiliteStart"),
hilite_length: get_integer(map, "hiliteLength")
}
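# The Checker may omit "firstLine" when the message spans a single line;
# backfill it from "lastLine" so both ends of the range are populated.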
if message.last_line && !message.first_line do
%{message | first_line: message.last_line}
else
message
end
end
defp get_type(map) do
case Map.get(map, "type") do
"error" -> :error
"info" -> :info
"non-document-error" -> :non_document_error
end
end
defp get_sub_type(map) do
case Map.get(map, "subType") do
"warning" -> :warning
"fatal" -> :fatal
"io" -> :io
"schema" -> :schema
"internal" -> :internal
_ -> nil
end
end
defp get_string(map, key) do
case Map.get(map, key) do
string when is_bitstring(string) -> string
_ -> nil
end
end
defp get_integer(map, key) do
case Map.get(map, key) do
integer when is_integer(integer) -> integer
_ -> nil
end
end
end
|
lib/vnu/message.ex
| 0.793546
| 0.595346
|
message.ex
|
starcoder
|
defmodule Hui.URL do
@moduledoc """
Struct and utilities for working with Solr URLs and parameters.
Use the module `t:Hui.URL.t/0` struct to specify
Solr core or collection URLs with request handlers.
### Hui URL endpoints
```
# binary
url = "http://localhost:8983/solr/collection"
Hui.search(url, q: "loch")
# key referring to config setting
url = :library
Hui.search(url, q: "edinburgh", rows: 10)
# Hui.URL struct
url = %Hui.URL{url: "http://localhost:8983/solr/collection", handler: "suggest"}
Hui.search(url, suggest: true, "suggest.dictionary": "mySuggester", "suggest.q": "el")
```
`t:Hui.URL.t/0` struct also enables HTTP headers and options, e.g. [HTTPoison options](https://hexdocs.pm/httpoison/HTTPoison.html#request/5)
to be specified in keyword lists. HTTPoison options provide further controls for a request, e.g. `timeout`, `recv_timeout`,
`max_redirect`, `params` etc.
```
# setting up a header and a 10s receiving connection timeout
url = %Hui.URL{url: "..", headers: [{"accept", "application/json"}], options: [recv_timeout: 10000]}
Hui.search(url, q: "solr rocks")
```
"""
defstruct [:url, handler: "select", headers: [], options: []]
@type headers :: [{binary(), binary()}]
@type options :: Keyword.t()
@typedoc """
Struct for a Solr endpoint with a request handler and any associated HTTP headers and options.
## Example
```
%Hui.URL{handler: "suggest", url: "http://localhost:8983/solr/collection"}
```
- `url`: typical endpoint including the core or collection name. This may also be a load balancer
endpoint fronting several Solr upstreams.
- `handler`: name of a Solr request handler that processes requests.
- `headers`: HTTP headers.
- `options`: e.g. [HTTPoison options](https://hexdocs.pm/httpoison/HTTPoison.html#request/5).
"""
@type t :: %__MODULE__{url: nil | binary, handler: nil | binary, headers: nil | headers, options: nil | options}
@doc """
Returns a configured default Solr endpoint as `t:Hui.URL.t/0` struct.
```
Hui.URL.default_url!
%Hui.URL{handler: "select", url: "http://localhost:8983/solr/gettingstarted", headers: [{"accept", "application/json"}], options: [recv_timeout: 10000]}
```
The default endpoint can be specified in application configuration as below:
```
config :hui, :default,
url: "http://localhost:8983/solr/gettingstarted",
handler: "select", # optional
headers: [{"accept", "application/json"}],
options: [recv_timeout: 10000]
```
"""
@spec default_url! :: t | nil
def default_url! do
{status, default_url} = configured_url(:default)
case status do
:ok -> default_url
:error -> nil
end
end
@doc """
Retrieve url configuration as `t:Hui.URL.t/0` struct.
## Example
iex> Hui.URL.configured_url(:suggester)
{:ok, %Hui.URL{handler: "suggest", url: "http://localhost:8983/solr/collection"}}
The above retrieves the following endpoint configuration e.g. from `config.exs`:
```
config :hui, :suggester,
url: "http://localhost:8983/solr/collection",
handler: "suggest"
```
"""
# TODO: refactor this function, use module attributes in this module or in "Hui" to retrieve config settings
@spec configured_url(atom) :: {:ok, t} | {:error, Hui.Error.t()}
def configured_url(config_key) do
config = Application.get_env(:hui, config_key) || []
url = config[:url]
handler = config[:handler]
headers = config[:headers] || []
options = config[:options] || []
case {url, handler} do
{nil, _} -> {:error, %Hui.Error{reason: :nxdomain}}
{_, nil} -> {:ok, %Hui.URL{url: url, headers: headers, options: options}}
{_, _} -> {:ok, %Hui.URL{url: url, handler: handler, headers: headers, options: options}}
end
end
@doc "Returns the string representation (URL path) of the given `t:Hui.URL.t/0` struct."
@spec to_string(t) :: binary
defdelegate to_string(uri), to: String.Chars.Hui.URL
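# For example, given the String.Chars implementation below:
#
#     iex> to_string(%Hui.URL{url: "http://localhost:8983/solr/collection", handler: "suggest"})
#     "http://localhost:8983/solr/collection/suggest"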
end
# implement `to_string` for %Hui.URL{} in Elixir generally via the String.Chars protocol
defimpl String.Chars, for: Hui.URL do
def to_string(%Hui.URL{url: url, handler: handler}), do: [url, "/", handler] |> IO.iodata_to_binary()
end
|
lib/hui/url.ex
| 0.846086
| 0.70524
|
url.ex
|
starcoder
|
defmodule Calcinator.Alembic.Document do
@moduledoc """
`Alembic.Document.t` for errors added by `Calcinator` on top of `Alembic.Error`
"""
alias Alembic.Document
alias Calcinator.Alembic.Error
@doc """
Retort returned a 500 JSONAPI error inside a 422 JSONRPC error.
"""
@spec bad_gateway() :: Document.t()
def bad_gateway do
Error.bad_gateway()
|> Error.to_document()
end
@doc """
Converts an error `reason` that isn't in a standard format (such as those from the backing store) to a
500 Internal Server Error JSONAPI error, with `id` set to the same `id` used in `Logger.error`, so that
`reason` itself can remain private, limiting implementation disclosures that could lead to security issues.
## Log Messages
```
id=UUIDv4 reason=inspect(reason)
```
"""
@spec error_reason(term) :: Document.t()
def error_reason(reason) do
reason
|> Error.error_reason()
|> Error.to_document()
end
@doc """
The current resource or action is forbidden to the authenticated user
"""
@spec forbidden :: Document.t()
def forbidden do
Error.forbidden()
|> Error.to_document()
end
@doc """
504 Gateway Timeout JSONAPI error document.
"""
@spec gateway_timeout :: Document.t()
def gateway_timeout do
Error.gateway_timeout()
|> Error.to_document()
end
@doc """
Puts 404 Resource Not Found JSONAPI error with `parameter` as the source parameter.
"""
@spec not_found(String.t()) :: Document.t()
def not_found(parameter) do
parameter
|> Error.not_found()
|> Error.to_document()
end
@doc """
500 Internal Server Error JSONAPI error document with an error titled `"Ownership Error"`.
"""
@spec ownership_error :: Document.t()
def ownership_error do
Error.ownership_error()
|> Error.to_document()
end
@doc """
Puts 422 Unprocessable Entity JSONAPI error document with an error titled `"Sandbox Access Disallowed"`.
"""
@spec sandbox_access_disallowed :: Document.t()
def sandbox_access_disallowed do
Error.sandbox_access_disallowed()
|> Error.to_document()
end
@doc """
Puts 422 Unprocessable Entity JSONAPI error document with an error titled `"Child missing"`.
"""
@spec sandbox_token_missing :: Document.t()
def sandbox_token_missing do
Error.sandbox_token_missing()
|> Error.to_document()
end
end
|
lib/calcinator/alembic/document.ex
| 0.848628
| 0.554169
|
document.ex
|
starcoder
|
defmodule SchedEx.Runner do
@moduledoc false
use GenServer
@doc """
Main point of entry into this module. Starts and returns a process which will
run the given function per the specified delay definition (either an integer in
units derived from a TimeScale, or a `Crontab.CronExpression`).
"""
def run(func, delay_definition, opts) when is_function(func) do
GenServer.start_link(__MODULE__, {func, delay_definition, opts}, Keyword.take(opts, [:name]))
end
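# A minimal usage sketch (options handled by this module include :name,
# :start_time, :repeat, :time_scale and :timezone):
#
#     {:ok, pid} = SchedEx.Runner.run(fn -> IO.puts("tick") end, 1_000, repeat: true)
#     SchedEx.Runner.stats(pid)
#     SchedEx.Runner.cancel(pid)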
@doc """
Returns stats for the given process.
"""
def stats(pid) when is_pid(pid) do
GenServer.call(pid, :stats)
end
def stats(_token) do
{:error, "Not a statable token"}
end
@doc """
Cancels future invocation of the given process. If it has already been invoked, does nothing.
"""
def cancel(pid) when is_pid(pid) do
:shutdown = send(pid, :shutdown)
:ok
end
def cancel(_token) do
{:error, "Not a cancellable token"}
end
# Server API
def init({func, delay_definition, opts}) do
Process.flag(:trap_exit, true)
start_time = Keyword.get(opts, :start_time, DateTime.utc_now())
case schedule_next(start_time, delay_definition, opts) do
{%DateTime{} = next_time, quantized_next_time} ->
stats = %SchedEx.Stats{}
{:ok,
%{
func: func,
delay_definition: delay_definition,
scheduled_at: next_time,
quantized_scheduled_at: quantized_next_time,
stats: stats,
opts: opts
}}
{:error, _} ->
:ignore
end
end
def handle_call(:stats, _from, %{stats: stats} = state) do
{:reply, stats, state}
end
def handle_info(
:run,
%{
func: func,
delay_definition: delay_definition,
scheduled_at: this_time,
quantized_scheduled_at: quantized_this_time,
stats: stats,
opts: opts
} = state
) do
start_time = DateTime.utc_now()
if is_function(func, 1) do
func.(this_time)
else
func.()
end
end_time = DateTime.utc_now()
stats = SchedEx.Stats.update(stats, this_time, quantized_this_time, start_time, end_time)
if Keyword.get(opts, :repeat, false) do
case schedule_next(this_time, delay_definition, opts) do
{%DateTime{} = next_time, quantized_next_time} ->
{:noreply,
%{
state
| scheduled_at: next_time,
quantized_scheduled_at: quantized_next_time,
stats: stats
}}
_ ->
{:stop, :normal, %{state | stats: stats}}
end
else
{:stop, :normal, %{state | stats: stats}}
end
end
def handle_info(:shutdown, state) do
{:stop, :normal, state}
end
defp schedule_next(%DateTime{} = from, delay, opts) when is_integer(delay) do
time_scale = Keyword.get(opts, :time_scale, SchedEx.IdentityTimeScale)
delay = round(delay / time_scale.speedup())
next = Timex.shift(from, milliseconds: delay)
now = DateTime.utc_now()
delay = max(DateTime.diff(next, now, :millisecond), 0)
Process.send_after(self(), :run, delay)
{next, Timex.shift(now, milliseconds: delay)}
end
defp schedule_next(_from, %Crontab.CronExpression{} = crontab, opts) do
time_scale = Keyword.get(opts, :time_scale, SchedEx.IdentityTimeScale)
timezone = Keyword.get(opts, :timezone, "UTC")
now = time_scale.now(timezone)
case Crontab.Scheduler.get_next_run_date(crontab, DateTime.to_naive(now)) do
{:ok, naive_next} ->
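# A DST transition can make the naive wall-clock time ambiguous; in that
# case the later of the two possible instants is chosen below.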
next =
case Timex.to_datetime(naive_next, timezone) do
%Timex.AmbiguousDateTime{after: later_time} -> later_time
time -> time
end
delay = round(max(DateTime.diff(next, now, :millisecond) / time_scale.speedup(), 0))
Process.send_after(self(), :run, delay)
{next, Timex.shift(DateTime.utc_now(), milliseconds: delay)}
{:error, _} = error ->
error
end
end
end
|
lib/sched_ex/runner.ex
| 0.784938
| 0.466359
|
runner.ex
|
starcoder
|
defmodule DelveExamples do
@moduledoc false
use Boundary, check: [in: true, out: false]
alias Delve.{Planner, Resolver}
# {:ok, {g, a}} = DelveExamples.test_ast_alt()
def test_ast_alt do
extra_resolvers = [
resolver({Complex, :r40}, [], [{Complex, :c}], &lookup/3),
resolver({Complex, :r41}, [], [{Complex, :q}], &lookup/3),
resolver({Complex, :r42}, [], [{Complex, :t}], &lookup/3),
resolver({Complex, :r43}, [], [{Complex, :u}], &lookup/3),
resolver({Complex, :r44}, [], [{Complex, :ae}], &lookup/3)
]
ast = EQL.to_ast([{Complex, :p}])
complex_resolvers()
|> :lists.reverse(extra_resolvers)
|> Delve.Graph.new()
|> Planner.walk_ast(ast, Planner.new([]))
end
# {:ok, {g, a}} = DelveExamples.test_ast()
def test_ast do
planner =
Planner.new([
{Complex, :c},
{Complex, :q},
{Complex, :t},
{Complex, :u},
{Complex, :ae}
])
ast =
EQL.to_ast([
{Complex, :p},
{Complex, :m},
# {Complex, :ae},
{Complex, :n}
])
DelveExamples.complex_resolvers()
|> Delve.Graph.new()
|> Planner.walk_ast(ast, planner)
end
# {_, g, a} = DelveExamples.test_attr()
def test_attr do
planner = Planner.new(complex_source())
DelveExamples.complex_resolvers()
|> Delve.Graph.new()
|> Planner.walk_attr(complex_attr(), Planner.reset(planner, complex_attr()))
end
def complex_source do
[
{Complex, :c},
{Complex, :q},
{Complex, :t},
{Complex, :u},
{Complex, :ae}
]
end
def complex_attr, do: {Complex, :p}
@spec complex_resolvers() :: [Resolver.t()]
def complex_resolvers do
[
resolver({Complex, :r1}, [{Complex, :a}], [{Complex, :b}], &lookup/3),
resolver({Complex, :r2}, [{Complex, :c}], [{Complex, :d}], &lookup/3),
resolver({Complex, :r3}, [{Complex, :c}], [{Complex, :e}], &lookup/3),
resolver({Complex, :r4}, [{Complex, :e}], [{Complex, :l}], &lookup/3),
resolver({Complex, :r5}, [{Complex, :l}], [{Complex, :m}], &lookup/3),
resolver({Complex, :r6}, [{Complex, :l}], [{Complex, :n}], &lookup/3),
resolver({Complex, :r7}, [{Complex, :n}], [{Complex, :o}], &lookup/3),
resolver({Complex, :r8}, [{Complex, :m}], [{Complex, :p}], &lookup/3),
resolver({Complex, :r9}, [{Complex, :o}], [{Complex, :p}], &lookup/3),
resolver({Complex, :r10}, [{Complex, :g}], [{Complex, :k}], &lookup/3),
resolver({Complex, :r11}, [{Complex, :h}], [{Complex, :g}], &lookup/3),
resolver({Complex, :r12}, [{Complex, :i}], [{Complex, :h}], &lookup/3),
resolver({Complex, :r13}, [{Complex, :j}], [{Complex, :i}], &lookup/3),
resolver({Complex, :r14}, [{Complex, :g}], [{Complex, :j}], &lookup/3),
resolver({Complex, :r15}, [{Complex, :b}, {Complex, :d}], [{Complex, :f}], &lookup/3),
resolver({Complex, :r16}, [{Complex, :q}], [{Complex, :r}], &lookup/3),
resolver({Complex, :r17}, [{Complex, :t}], [{Complex, :v}], &lookup/3),
resolver({Complex, :r18}, [{Complex, :u}], [{Complex, :v}], &lookup/3),
resolver({Complex, :r19}, [{Complex, :v}], [{Complex, :w}], &lookup_and/3),
resolver({Complex, :r20}, [{Complex, :r}, {Complex, :w}], [{Complex, :s}], &lookup/3),
resolver({Complex, :r21}, [{Complex, :s}], [{Complex, :y}], &lookup/3),
resolver({Complex, :r22}, [{Complex, :y}], [{Complex, :z}], &lookup/3),
resolver({Complex, :r23}, [{Complex, :z}], [{Complex, :o}], &lookup/3),
resolver({Complex, :r24}, [{Complex, :aa}], [{Complex, :ab}], &lookup/3),
resolver({Complex, :r25}, [{Complex, :ab}], [{Complex, :z}], &lookup/3),
resolver({Complex, :r26}, [{Complex, :ac}], [{Complex, :y}], &lookup/3),
resolver({Complex, :r27}, [{Complex, :ad}], [{Complex, :ac}], &lookup/3),
resolver({Complex, :r28}, [{Complex, :ae}], [{Complex, :ad}], &lookup/3),
resolver({Complex, :r29}, [{Complex, :ae}], [{Complex, :af}], &lookup/3),
resolver({Complex, :r30}, [{Complex, :af}], [{Complex, :ab}], &lookup/3),
resolver({Complex, :r31}, [{Complex, :ad}], [{Complex, :ab}], &lookup/3),
resolver({Complex, :r32}, [{Complex, :f}], [{Complex, :k}], &lookup/3),
resolver({Complex, :r33}, [{Complex, :k}], [{Complex, :p}], &lookup/3)
]
end
@available %{
{Complex, :c} => 3,
{Complex, :q} => 17,
{Complex, :t} => 20,
{Complex, :u} => 21,
{Complex, :ae} => 31
}
def available_data, do: @available
@complex_database %{
{Complex, :a} => %{},
{Complex, :b} => %{},
{Complex, :c} => %{3 => %{{Complex, :e} => 5}},
{Complex, :d} => %{},
{Complex, :e} => %{5 => %{{Complex, :l} => 12}},
{Complex, :f} => %{},
{Complex, :g} => %{},
{Complex, :h} => %{},
{Complex, :i} => %{},
{Complex, :j} => %{},
{Complex, :k} => %{},
{Complex, :l} => %{12 => %{{Complex, :n} => 14, {Complex, :m} => 13}},
{Complex, :m} => %{13 => %{{Complex, :p} => 16}},
{Complex, :n} => %{14 => %{{Complex, :o} => 15}},
{Complex, :o} => %{15 => %{{Complex, :p} => 16}},
{Complex, :p} => %{16 => :ANSWER!},
{Complex, :q} => %{17 => %{{Complex, :r} => 18}},
{Complex, :r} => %{18 => %{{Complex, :s} => {:and, 8}}},
{Complex, :s} => %{19 => %{{Complex, :y} => 25}},
{Complex, :t} => %{20 => %{{Complex, :v} => 22}},
{Complex, :u} => %{21 => %{{Complex, :v} => 22}},
{Complex, :v} => %{22 => %{{Complex, :w} => 23}},
{Complex, :w} => %{23 => %{{Complex, :s} => {:and, 11}}},
{Complex, :x} => %{24 => %{}},
{Complex, :y} => %{25 => %{{Complex, :y} => 26}},
{Complex, :z} => %{26 => %{{Complex, :o} => 15}},
{Complex, :aa} => %{27 => %{}},
{Complex, :ab} => %{28 => %{{Complex, :z} => 26}},
{Complex, :ac} => %{29 => %{{Complex, :y} => 25}},
{Complex, :ad} => %{30 => %{{Complex, :ab} => 28, {Complex, :ac} => 29}},
{Complex, :ae} => %{31 => %{{Complex, :ad} => 30, {Complex, :af} => 32}},
{Complex, :af} => %{32 => %{{Complex, :ab} => 28}}
}
def lookup(input, _env, output) do
[key | _] = Map.keys(input)
@complex_database
|> Map.get(key)
|> Map.get(Map.get(input, key))
|> Enum.filter(fn {k, _} -> k in output end)
|> Enum.into(%{})
end
def lookup_and(input, _env, output) do
input
|> Enum.map(fn {k, v} -> get_in(@complex_database, [k, v]) end)
|> Enum.map(&Enum.filter(&1, fn {k, _} -> k in output end))
|> Enum.concat()
|> Enum.group_by(&elem(&1, 0), &elem(&1, 1))
|> Enum.map(fn {k, vs} ->
{k, Enum.reduce(vs, 0, fn {:and, x}, acc -> acc + x end)}
end)
|> Enum.into(%{})
end
@spec nested_resolvers() :: [Resolver.t()]
def nested_resolvers do
[
resolver({Nested, :r1}, [{Nested, :a}], [
{Nested, :b},
{Nested, :c},
%{{Nested, :d} => [{Nested, :e}]}
]),
resolver({Nested, :r2}, [{Nested, :e}], [
{Nested, :f},
{Nested, :g},
%{{Nested, :h} => [{Nested, :i}]}
]),
resolver({Nested, :r3}, [{Nested, :i}], [
{Nested, :j},
{Nested, :k},
%{{Nested, :l} => [{Nested, :m}]}
])
]
end
@spec resolver(Resolver.id(), Resolver.input(), Resolver.output(), (... -> term) | nil) ::
Resolver.t()
def resolver(id, input, output, resolve \\ nil)
def resolver(id, input, output, nil) do
Resolver.new(id, input, output, &default_resolver/2)
end
def resolver(id, input, output, resolve) when is_function(resolve, 2) do
Resolver.new(id, input, output, resolve)
end
def resolver(id, input, output, resolve) when is_function(resolve, 3) do
Resolver.new(id, input, output, &resolve.(&1, &2, output))
end
def default_resolver(_inputs, _env) do
%{}
end
end
|
test/support/delve_examples.ex
| 0.709724
| 0.4881
|
delve_examples.ex
|
starcoder
|
defmodule Oli.Delivery.Paywall.Payment do
use Ecto.Schema
import Ecto.Changeset
@moduledoc """
Modeling of a payment.
Payments can be one of two types: direct or deferred. A direct payment is a
payment made by a student through a system supported payment provider (e.g. Stripe or Cashnet).
A deferred payment is a payment record that can be created by the system but not "applied" to
any enrollment at the time of creation. In this deferred case, the payment code is made available
to a third-party bookstore to be sold to a student. The student then redeems the code in
this system (which then "applies" the payment to the enrollment).
The "code" attribute is a random number, guaranteed to be unique, that is non-ordered and thus
not "guessable" by a malicious actor. Convenience routines for expressing this code as a
human readable string of the form: "XV7-JKR4", where the integer (big int) codes are expressed
in Crockford Base 32 a variant of a standard base 32 encoding that substitutes potentially confusing
characters in an aim towards maximizing human readability.
"""
schema "payments" do
field :type, Ecto.Enum, values: [:direct, :deferred], default: :direct
field :code, :integer
field :generation_date, :utc_datetime
field :application_date, :utc_datetime
field :amount, Money.Ecto.Map.Type
field :provider_type, Ecto.Enum, values: [:stripe, :cashnet], default: :stripe
field :provider_id, :string
field :provider_payload, :map
field :pending_user_id, :integer
field :pending_section_id, :integer
belongs_to :section, Oli.Delivery.Sections.Section
belongs_to :enrollment, Oli.Delivery.Sections.Enrollment
timestamps(type: :utc_datetime)
end
@doc false
def changeset(section, attrs) do
section
|> cast(attrs, [
:type,
:code,
:generation_date,
:application_date,
:amount,
:provider_type,
:provider_id,
:provider_payload,
:pending_user_id,
:pending_section_id,
:section_id,
:enrollment_id
])
|> validate_required([:type, :generation_date, :amount, :section_id])
end
def to_human_readable(code) do
Base32Crockford.encode(code, partitions: 2)
end
def from_human_readable(human_readable_code) do
Base32Crockford.decode(human_readable_code)
end
end
|
lib/oli/delivery/paywall/payment.ex
| 0.636466
| 0.431824
|
payment.ex
|
starcoder
|
defmodule RobotSimulator do
@directions [:north, :east, :south, :west]
defguard is_position(x, y) when is_integer(x) and is_integer(y)
defguard is_direction(direction) when direction in @directions
@doc """
Create a Robot Simulator given an initial direction and position.
Valid directions are: `:north`, `:east`, `:south`, `:west`
"""
@spec create(direction :: atom, position :: {integer, integer}) :: any
def create(direction \\ :north, position \\ {0, 0})
def create(direction, {x, y}) when is_direction(direction) and is_position(x, y) do
%{direction: direction, position: {x, y}}
end
def create(direction, _) when not is_direction(direction) do
{:error, "invalid direction"}
end
def create(_, _) do
{:error, "invalid position"}
end
@doc """
Simulate the robot's movement given a string of instructions.
Valid instructions are: "R" (turn right), "L", (turn left), and "A" (advance)
"""
@spec simulate(robot :: any, instructions :: String.t()) :: any
def simulate(robot, instructions) do
instructions
|> String.graphemes()
|> Enum.reduce_while(robot, fn instruction, robot ->
case instruction do
"L" -> {:cont, turn(robot, :left)}
"R" -> {:cont, turn(robot, :right)}
"A" -> {:cont, advance(robot)}
_ -> {:halt, {:error, "invalid instruction"}}
end
end)
end
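# For example, starting at the default :north / {0, 0}:
#
#     iex> RobotSimulator.create() |> RobotSimulator.simulate("RAALAL")
#     %{direction: :west, position: {2, 1}}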
defp turn(robot, direction) when is_atom(direction) do
%{
robot
| direction:
(Enum.find_index(@directions, fn x -> x == direction(robot) end) +
case direction do
:left -> -1
:right -> 1
end)
|> Integer.mod(4)
|> (&Enum.at(@directions, &1)).()
}
end
defp advance(robot) do
{x, y} = position(robot)
%{
robot
| position:
case direction(robot) do
:north ->
{x, y + 1}
:south ->
{x, y - 1}
:east ->
{x + 1, y}
:west ->
{x - 1, y}
end
}
end
@doc """
Return the robot's direction.
Valid directions are: `:north`, `:east`, `:south`, `:west`
"""
@spec direction(robot :: any) :: atom
def direction(%{direction: dir}), do: dir
@doc """
Return the robot's position.
"""
@spec position(robot :: any) :: {integer, integer}
def position(%{position: pos}), do: pos
end
|
elixir/robot-simulator/lib/robot_simulator.ex
| 0.88715
| 0.830216
|
robot_simulator.ex
|
starcoder
|
defmodule Grax.RDF.Access do
@moduledoc !"""
This encapsulates the access functions to the RDF data.
It is intended to become an adapter to different types of data sources.
"""
alias RDF.{Description, Graph, Query}
alias Grax.Schema.LinkProperty
alias Grax.InvalidResourceTypeError
def description(graph, id) do
Graph.description(graph, id) || Description.new(id)
end
def objects(_graph, description, property_iri)
def objects(graph, description, {:inverse, property_iri}) do
inverse_values(graph, description.subject, property_iri)
end
def objects(_graph, description, property_iri) do
Description.get(description, property_iri)
end
def filtered_objects(graph, description, property_schema) do
case LinkProperty.value_type(property_schema) do
%{} = class_mapping when not is_struct(class_mapping) ->
graph
|> objects(description, property_schema.iri)
|> Enum.reduce_while({:ok, []}, fn object, {:ok, objects} ->
description = description(graph, object)
case determine_schema(
description[RDF.type()],
class_mapping,
property_schema.on_type_mismatch
) do
{:ok, nil} -> {:cont, {:ok, objects}}
{:ok, _} -> {:cont, {:ok, [object | objects]}}
{:error, _} = error -> {:halt, error}
end
end)
_ ->
{:ok, objects(graph, description, property_schema.iri)}
end
end
defp inverse_values(graph, subject, property) do
{:object?, property, subject}
|> Query.execute!(graph)
|> case do
[] -> nil
results -> Enum.map(results, &Map.fetch!(&1, :object))
end
end
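# determine_schema/3 picks the Grax schema for a resource from its RDF types.
# For example (hypothetical schemas): with a class mapping of
# %{RDF.iri(EX.Admin) => Admin, nil => User}, a resource typed EX.Admin
# resolves to Admin; an untyped resource falls back to the nil entry (User);
# without a nil entry, on_type_mismatch (:ignore | :error) decides, and
# multiple matching types are always an error.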
def determine_schema(types, class_mapping, on_type_mismatch) do
types
|> List.wrap()
|> Enum.reduce([], fn class, candidates ->
case class_mapping[class] do
nil -> candidates
schema -> [schema | candidates]
end
end)
|> do_determine_schema(types, class_mapping, on_type_mismatch)
end
defp do_determine_schema([schema], _, _, _), do: {:ok, schema}
defp do_determine_schema([], types, class_mapping, on_type_mismatch) do
case class_mapping[nil] do
nil ->
case on_type_mismatch do
:ignore ->
{:ok, nil}
:error ->
{:error, InvalidResourceTypeError.exception(type: :no_match, resource_types: types)}
end
schema ->
{:ok, schema}
end
end
defp do_determine_schema(_, types, _, _) do
{:error, InvalidResourceTypeError.exception(type: :multiple_matches, resource_types: types)}
end
end
|
lib/grax/rdf/access.ex
| 0.781831
| 0.533458
|
access.ex
|
starcoder
|
defmodule BencheeDsl.Benchmark do
@moduledoc """
Helpers for defining a benchmark with the DSL.
This module must be used to define and configure a benchmark.
"""
alias BencheeDsl.Server
@keys [
:config,
:description,
:dir,
:module,
:title
]
@type keys ::
:config
| :description
| :dir
| :module
| :title
@type t :: %__MODULE__{
config: keyword(),
description: String.t(),
dir: String.t(),
module: module(),
title: String.t()
}
defstruct @keys
@doc """
Creates a new `Benchmark` struct.
"""
@spec new(keyword()) :: t()
def new(data), do: struct!(__MODULE__, data)
@doc """
Updates a `benchmark` struct by the given `key` or `path`.
"""
@spec update(t(), keys() | list(atom()), (any() -> any())) :: t()
def update(benchmark, key, fun) when key in @keys do
Map.update!(benchmark, key, fun)
end
def update(benchmark, [key | path], fun) when key in @keys do
Map.update!(benchmark, key, fn data ->
update_in(data, path, fun)
end)
end
defmacro __using__(_opts) do
quote do
import BencheeDsl.Benchmark
Module.register_attribute(__MODULE__, :title, persist: true)
Module.register_attribute(__MODULE__, :description, persist: true)
Module.register_attribute(__MODULE__, :__dir__, persist: true)
Module.register_attribute(__MODULE__, :__file__, persist: true)
Module.put_attribute(__MODULE__, :__dir__, __DIR__)
Module.put_attribute(__MODULE__, :__file__, __ENV__.file)
end
end
@doc """
Defines a `setup` callback to be run before the benchmark starts.
"""
defmacro setup(body) do
quote do
def setup, do: unquote(body)
end
end
@doc """
Defines a callback that runs once the benchmark exits.
"""
defmacro on_exit(fun) do
quote do
Server.register(:on_exit, __MODULE__, unquote(fun))
end
end
@doc """
Defines a function or `map` to set up the inputs for the benchmark. If `inputs`
has a `do` block, a `map` is expected to be returned.
defmacro inputs(do: inputs) do
quote do
def inputs, do: unquote(inputs)
end
end
defmacro inputs(inputs) do
quote do
def inputs, do: unquote(inputs)
end
end
@doc """
Configures the benchmark.
"""
defmacro config(config) do
quote do
Server.register(:config, __MODULE__, unquote(config))
end
end
@doc """
This macro defines a function for the benchmark.
"""
defmacro job({fun_name, _, nil}, do: body) do
quote do
Server.register(:job, __MODULE__, unquote(fun_name))
def job(unquote(fun_name)) do
fn -> unquote(body) end
end
end
end
defmacro job({fun_name, _, [var]}, do: body), do: quote_job(fun_name, var, body)
defmacro job(fun_name, var, do: body)
when is_binary(fun_name),
do: quote_job(fun_name, var, body)
defmacro job(fun_name, do: body) do
quote do
Server.register(:job, __MODULE__, unquote(fun_name))
def job(unquote(fun_name)) do
fn -> unquote(body) end
end
end
end
defp quote_job(fun_name, var, body) do
quote do
Server.register(:job, __MODULE__, unquote(fun_name))
def job(unquote(fun_name)) do
fn unquote(var) -> unquote(body) end
end
end
end
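# Typical usage inside a benchmark module (a sketch; job names and bodies
# are arbitrary):
#
#     job "flat_map" do
#       Enum.flat_map(1..100, fn i -> [i, i * i] end)
#     end
#
#     job "map.flatten", input do
#       input |> Enum.map(fn i -> [i, i * i] end) |> List.flatten()
#     end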
@doc """
Adds a formatter to the benchmark.
"""
defmacro formatter(module, opts) do
quote do
Server.register(:formatter, __MODULE__, {unquote(module), unquote(opts)})
end
end
end
|
lib/benchee_dsl/benchmark.ex
| 0.804675
| 0.568506
|
benchmark.ex
|
starcoder
|
defmodule EntropyString.Error do
@moduledoc """
Errors raised when defining an EntropyString module with invalid options
"""
defexception message: "EntropyString error"
end
defmodule EntropyString do
alias EntropyString.CharSet
@moduledoc """
Efficiently generate cryptographically strong random strings of specified entropy from various
character sets.
## Example
Ten thousand potential hexidecimal strings with a 1 in 10 million chance of repeat
bits = EntropyString.bits(10000, 10000000)
EntropyString.random(bits, :charset16)
"9e9b34d6f69ea"
"""
@doc false
defmacro __using__(opts) do
quote do
import EntropyString
import CharSet
bitLen = unquote(opts)[:bits]
total = unquote(opts)[:total]
risk = unquote(opts)[:risk]
bits =
cond do
is_number(bitLen) ->
bitLen
is_number(total) and is_number(risk) ->
EntropyString.bits(total, risk)
true ->
128
end
@entropy_string_bits bits
charset =
case unquote(opts)[:charset] do
nil ->
CharSet.charset32()
:charset64 ->
CharSet.charset64()
:charset32 ->
CharSet.charset32()
:charset16 ->
CharSet.charset16()
:charset8 ->
CharSet.charset8()
:charset4 ->
CharSet.charset4()
:charset2 ->
CharSet.charset2()
charset when is_binary(charset) ->
case validate(charset) do
true -> charset
{_, reason} -> raise EntropyString.Error, message: reason
end
charset ->
raise EntropyString.Error, message: "Invalid predefined charset: #{charset}"
end
@entropy_string_charset charset
@before_compile EntropyString
end
end
@doc false
defmacro __before_compile__(_env) do
quote do
@doc """
Default entropy bits for random strings
"""
def bits, do: @entropy_string_bits
@doc """
Module **_EntropyString.CharSet_**
"""
def charset, do: @entropy_string_charset
@doc """
Random string using module **_charset_** with a 1 in a million chance of repeat in
30 strings.
## Example
MyModule.small()
"nGrqnt"
"""
def small, do: small(@entropy_string_charset)
@doc """
Random string using module **_charset_** with a 1 in a billion chance of repeat for a million
potential strings.
## Example
MyModulue.medium()
"nndQjL7FLR9pDd"
"""
def medium, do: medium(@entropy_string_charset)
@doc """
Random string using module **_charset_** with a 1 in a trillion chance of repeat for a billion
potential strings.
## Example
MyModule.large()
"NqJLbG8htr4t64TQmRDB"
"""
def large, do: large(@entropy_string_charset)
@doc """
Random string using module **_charset_** suitable for 128-bit OWASP Session ID
## Example
MyModule.session()
"6pLfLgfL8MgTn7tQDN8tqPFR4b"
"""
def session, do: session(@entropy_string_charset)
@doc """
Random string using module **_charset_** with 256 bits of entropy.
## Example
MyModule.token()
"zHZ278Pv_GaOsmRYdBIR5uO8Tt0OWSESZbVuQye6grt"
"""
def token, do: token(@entropy_string_charset)
@doc """
Random string of entropy **_bits_** using module **_charset_**
- **_bits_** - entropy bits for string
- non-negative integer
- predefined atom
- Defaults to module **_bits_**
Returns string of at least entropy **_bits_** using module characters; or
- `{:error, "Negative entropy"}` if **_bits_** is negative.
- `{:error, reason}` if `EntropyString.CharSet.validate(charset)` is not `true`.
Since the generated random strings carry an entropy that is a multiple of the bits per character
of the module charset, the returned entropy is the minimum that equals or exceeds the specified
**_bits_**.
## Example
A million potential strings (assuming :charset32 characters) with a 1 in a billion chance
of a repeat
bits = EntropyString.bits(1.0e6, 1.0e9)
MyModule.random(bits)
"NbMbLrj9fBbQP6"
MyModule.random(:session)
"CeElDdo7HnNDuiWwlFPPq0"
"""
def random(bits \\ @entropy_string_bits), do: random(bits, @entropy_string_charset)
@doc """
Random string of module entropy **_bits_** and **_charset_**
## Example
Define a module for 10 billion strings with a 1 in a decillion chance of a repeat
defmodule Rare, do: use EntropyString, total: 1.0e10, risk: 1.0e33
Rare.string()
"H2Mp8MPT7F3Pp2bmHm"
Define a module for strings with 122 bits of entropy
defmodule MyId, do: use EntropyString, bits: 122, charset: charset64
MyId.string()
"aj2_kMH64P2QDRBlOkz7Z"
"""
@since "1.3"
def string(), do: random(@entropy_string_bits, @entropy_string_charset)
@doc """
Module characters
"""
@since "1.3"
def chars(), do: @entropy_string_charset
end
end
## -----------------------------------------------------------------------------------------------
## bits/2
## -----------------------------------------------------------------------------------------------
@doc """
Bits of entropy required for **_total_** number of strings with a given **_risk_**
- **_total_** - potential number of strings
- **_risk_** - risk of repeat in **_total_** strings
## Example
Bits of entropy for **_30_** strings with a **_1 in a million_** chance of repeat
iex> import EntropyString, only: [bits: 2]
iex> bits = bits(30, 1000000)
iex> round(bits)
29
"""
def bits(0, _), do: 0
def bits(_, 0), do: 0
def bits(total, _) when total < 0, do: NaN
def bits(_, risk) when risk < 0, do: NaN
def bits(total, risk) when is_number(total) and is_number(risk) do
n =
cond do
total < 1000 ->
:math.log2(total) + :math.log2(total - 1)
true ->
2 * :math.log2(total)
end
n + :math.log2(risk) - 1
end
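# The general case is a birthday-bound approximation: for n potential strings
# and a 1 in r risk of repeat, the required entropy is roughly
# 2 * log2(n) + log2(r) - 1 bits; for n < 1000 the more exact
# log2(n) + log2(n - 1) term is used instead of 2 * log2(n).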
def bits(_, _), do: NaN
## -----------------------------------------------------------------------------------------------
## small/1
## -----------------------------------------------------------------------------------------------
@doc """
Random string using **_charset_** characters with a 1 in a million chance of repeat in 30 strings.
Default **_CharSet_** is `charset32`.
## Example
EntropyString.small()
"nGrqnt"
EntropyString.small(:charset16)
"7bc250e5"
"""
def small(charset \\ :charset32)
def small(charset) when is_atom(charset) do
random(bits_from_atom(:small), charset_from_atom(charset))
end
def small(charset), do: random(bits_from_atom(:small), charset)
## -----------------------------------------------------------------------------------------------
## medium/1
## -----------------------------------------------------------------------------------------------
@doc """
Random string using **_charset_** characters with a 1 in a billion chance of repeat for a million
potential strings.
Default **_CharSet_** is `charset32`.
## Example
EntropyString.medium()
"nndQjL7FLR9pDd"
EntropyString.medium(:charset16)
"b95d23b299eeb9bbe6"
"""
def medium(charset \\ :charset32)
def medium(charset) when is_atom(charset) do
random(bits_from_atom(:medium), charset_from_atom(charset))
end
def medium(charset), do: random(bits_from_atom(:medium), charset)
## -----------------------------------------------------------------------------------------------
## large/1
## -----------------------------------------------------------------------------------------------
@doc """
Random string using **_charset_** characters with a 1 in a trillion chance of repeat for a billion
potential strings.
Default **_CharSet_** is `charset32`.
## Example
EntropyString.large()
"NqJLbG8htr4t64TQmRDB"
EntropyString.large(:charset16)
"f6c4d04cef266a5c3a7950f90"
"""
def large(charset \\ :charset32)
def large(charset) when is_atom(charset) do
random(bits_from_atom(:large), charset_from_atom(charset))
end
def large(charset), do: random(bits_from_atom(:large), charset)
## -----------------------------------------------------------------------------------------------
## session/1
## -----------------------------------------------------------------------------------------------
@doc """
Random string using **_charset_** characters suitable for 128-bit OWASP Session ID
Default **_CharSet_** is `charset32`.
## Example
EntropyString.session()
"6pLfLgfL8MgTn7tQDN8tqPFR4b"
EntropyString.session(:charset64)
"VzhprMROlM6Iy2Pk1IRCqR"
"""
def session(charset \\ :charset32)
def session(charset) when is_atom(charset) do
random(bits_from_atom(:session), charset_from_atom(charset))
end
def session(charset), do: random(bits_from_atom(:session), charset)
## -----------------------------------------------------------------------------------------------
## token/1
## -----------------------------------------------------------------------------------------------
@doc """
Random string using **_charset_** characters with 256 bits of entropy.
Default **_CharSet_** is the base 64 URL and file system safe character set.
## Example
EntropyString.token()
"<KEY>"
EntropyString.token(:charset32)
"<KEY>"
"""
def token(charset \\ CharSet.charset64())
def token(charset) when is_atom(charset) do
random(bits_from_atom(:token), charset_from_atom(charset))
end
def token(charset), do: random(bits_from_atom(:token), charset)
## -----------------------------------------------------------------------------------------------
## random/2
## -----------------------------------------------------------------------------------------------
@doc """
Random string of entropy **_bits_** using **_charset_** characters
- **_bits_** - entropy bits for string
- non-negative integer
- predefined atom
- **_charset_** - CharSet to use
- `EntropyString.CharSet`
- predefined atom
- Valid `String` representing the characters for the `EntropyString.CharSet`
Returns string of at least entropy **_bits_** using characters from **_charset_**; or
- `{:error, "Negative entropy"}` if **_bits_** is negative.
- `{:error, reason}` if `EntropyString.CharSet.validate(charset)` is not `true`.
Since the generated random strings carry an entropy that is a multiple of the bits per character
for **_charset_**, the returned entropy is the minimum that equals or exceeds the specified
**_bits_**.
## Examples
A million potential base32 strings with a 1 in a billion chance of a repeat
bits = EntropyString.bits(1.0e6, 1.0e9)
EntropyString.random(bits)
"NbMbLrj9fBbQP6"
A million potential hex strings with a 1 in a billion chance of a repeat
EntropyString.random(bits, :charset16)
"0746ae8fbaa2fb4d36"
A random session ID using URL and File System safe characters
EntropyString.random(:session, :charset64)
"txSdE3qBK2etQtLyCFNHGD"
"""
def random(bits \\ 128, charset \\ :charset32)
## -----------------------------------------------------------------------------------------------
## Invalid bits
## -----------------------------------------------------------------------------------------------
def random(bits, _charset) when bits < 0, do: {:error, "Negative entropy"}
def random(bits, charset) when is_atom(bits), do: random(bits_from_atom(bits), charset)
def random(bits, charset) when is_atom(charset), do: random(bits, charset_from_atom(charset))
def random(bits, charset) do
with_charset(charset, fn ->
byteCount = CharSet.bytes_needed(bits, charset)
bytes = :crypto.strong_rand_bytes(byteCount)
_random_string_bytes(bits, charset, bytes)
end)
end
## -----------------------------------------------------------------------------------------------
## random/3
## -----------------------------------------------------------------------------------------------
@doc """
Random string of entropy **_bits_** using **_charset_** characters and specified **_bytes_**
- **_bits_** - entropy bits
- non-negative integer
- predefined atom
- **_charset_** - CharSet to use
- `EntropyString.CharSet`
- predefined atom
- Valid `String` representing the characters for the `EntropyString.CharSet`
- **_bytes_** - Bytes to use
Returns random string of at least entropy **_bits_**; or
- `{:error, "Negative entropy"}` if **_bits_** is negative.
- `{:error, reason}` if `EntropyString.CharSet.validate(charset)` is not `true`.
- `{:error, reason}` if `validate_byte_count(bits, charset, bytes)` is not `true`.
Since the generated random strings carry an entropy that is a multiple of the bits per character
for **_charset_**, the returned entropy is the minimum that equals or exceeds the specified
**_bits_**.
## Example
30 potential random hex strings with a 1 in a million chance of a repeat
iex> bits = EntropyString.bits(30, 1000000)
iex> bytes = <<14, 201, 32, 143>>
iex> EntropyString.random(bits, :charset16, bytes)
"0ec9208f"
Use `EntropyString.CharSet.bytes_needed(bits, charset)` to determine how many **_bytes_** are
actually needed.
"""
def random(bits, charset, bytes) when is_atom(bits) do
random(bits_from_atom(bits), charset, bytes)
end
def random(bits, charset, bytes) when is_atom(charset) do
random(bits, charset_from_atom(charset), bytes)
end
def random(bits, charset, bytes) do
with_charset(charset, fn ->
case validate_byte_count(bits, charset, bytes) do
true -> _random_string_bytes(bits, charset, bytes)
error -> error
end
end)
end
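# Characters are generated from the highest slice index down and prepended,
# so the finished string reads in slice order without a final reverse pass.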
defp _random_string_bytes(bits, charset, bytes) do
bitsPerChar = CharSet.bits_per_char(charset)
ndxFn = ndx_fn(charset)
charCount = trunc(Float.ceil(bits / bitsPerChar))
_random_string_count(charCount, ndxFn, charset, bytes, <<>>)
end
defp _random_string_count(0, _, _, _, chars), do: chars
defp _random_string_count(charCount, ndxFn, charset, bytes, chars) do
slice = charCount - 1
ndx = ndxFn.(slice, bytes)
char = :binary.part(charset, ndx, 1)
_random_string_count(slice, ndxFn, charset, bytes, <<char::binary, chars::binary>>)
end
## -----------------------------------------------------------------------------------------------
## validate_byte_count/3
## -----------------------------------------------------------------------------------------------
@doc """
Validate number of **_bytes_** is sufficient to generate random strings with entropy **_bits_**
using **_charset_**
- **_bits_** - entropy bits for random string
- **_charset_** - characters in use
- **_bytes_** - bytes to validate
### Validations
- **_bytes_** count must be sufficient to generate entropy **_bits_** string from **_charset_**
Use `EntropyString.CharSet.bytes_needed(bits, charset)` to determine how many **_bytes_** are
needed
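## Example
Four bytes should suffice for 32 bits of charset16 entropy (4 bits per character,
so `bytes_needed` reports 4 bytes), while three bytes should not:
iex> EntropyString.validate_byte_count(32, EntropyString.CharSet.charset16(), <<14, 201, 32, 143>>)
true
iex> EntropyString.validate_byte_count(32, EntropyString.CharSet.charset16(), <<14, 201, 32>>)
{:error, "Insufficient bytes: need 4 and got 3"}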
"""
def validate_byte_count(bits, charset, bytes) when is_binary(bytes) do
need = CharSet.bytes_needed(bits, charset)
got = byte_size(bytes)
case need <= got do
true ->
true
_ ->
reason = :io_lib.format("Insufficient bytes: need ~p and got ~p", [need, got])
{:error, :binary.list_to_bin(reason)}
end
end
## -----------------------------------------------------------------------------------------------
## ndx_fn/1
## Return function to pull charset bits_per_char bits at position slice of bytes
## -----------------------------------------------------------------------------------------------
defp ndx_fn(charset) do
bitsPerChar = CharSet.bits_per_char(charset)
fn slice, bytes ->
offset = slice * bitsPerChar
<<_skip::size(offset), ndx::size(bitsPerChar), _rest::bits>> = bytes
ndx
end
end
## -----------------------------------------------------------------------------------------------
## with_charset/1
## For pre-defined CharSet, skip charset validation
## -----------------------------------------------------------------------------------------------
defp with_charset(charset, doFn) do
# Pre-defined charset does not require validation
case is_predefined_charset(charset) do
true ->
doFn.()
_ ->
case CharSet.validate(charset) do
true -> doFn.()
error -> error
end
end
end
defp is_predefined_charset(:charset2), do: true
defp is_predefined_charset(:charset4), do: true
defp is_predefined_charset(:charset8), do: true
defp is_predefined_charset(:charset16), do: true
defp is_predefined_charset(:charset32), do: true
defp is_predefined_charset(:charset64), do: true
defp is_predefined_charset(charset) do
charset == CharSet.charset64() or charset == CharSet.charset32() or
charset == CharSet.charset16() or charset == CharSet.charset8() or
charset == CharSet.charset4() or charset == CharSet.charset2()
end
## -----------------------------------------------------------------------------------------------
## Convert bits atom to bits integer
## -----------------------------------------------------------------------------------------------
defp bits_from_atom(:small), do: 29
defp bits_from_atom(:medium), do: 69
defp bits_from_atom(:large), do: 99
defp bits_from_atom(:session), do: 128
defp bits_from_atom(:token), do: 256
## -----------------------------------------------------------------------------------------------
## Convert charset atom to EntropyString.CharSet
## -----------------------------------------------------------------------------------------------
defp charset_from_atom(:charset2), do: CharSet.charset2()
defp charset_from_atom(:charset4), do: CharSet.charset4()
defp charset_from_atom(:charset8), do: CharSet.charset8()
defp charset_from_atom(:charset16), do: CharSet.charset16()
defp charset_from_atom(:charset32), do: CharSet.charset32()
defp charset_from_atom(:charset64), do: CharSet.charset64()
end
|
lib/entropy_string.ex
|
defmodule Mongo.OrderedBulk do
@moduledoc """
An **ordered** bulk collects the bulk operations in memory. When the ordered bulk is written to the database, the order
of the operations is preserved. Operations of the same type are grouped, but only if they were inserted one after the other.
## Example
```
alias Mongo.OrderedBulk
alias Mongo.BulkWrite
bulk = "bulk"
|> OrderedBulk.new()
|> OrderedBulk.insert_one(%{name: "Greta"})
|> OrderedBulk.insert_one(%{name: "Tom"})
|> OrderedBulk.insert_one(%{name: "Waldo"})
|> OrderedBulk.update_one(%{name: "Greta"}, %{"$set": %{kind: "dog"}})
|> OrderedBulk.update_one(%{name: "Tom"}, %{"$set": %{kind: "dog"}})
|> OrderedBulk.update_one(%{name: "Waldo"}, %{"$set": %{kind: "dog"}})
|> OrderedBulk.update_many(%{kind: "dog"}, %{"$set": %{kind: "cat"}})
|> OrderedBulk.delete_one(%{kind: "cat"})
|> OrderedBulk.delete_one(%{kind: "cat"})
|> OrderedBulk.delete_one(%{kind: "cat"})
BulkWrite.write(:mongo, bulk, w: 1)
```
This example would not work with an unordered bulk, because the `OrderedBulk.update_many` would be executed too early.
To reduce the memory usage the ordered bulk can be used with streams as well.
## Example
```
alias Mongo.OrderedBulk
1..1000
|> Stream.map(fn
1 -> BulkOps.get_insert_one(%{count: 1})
1000 -> BulkOps.get_delete_one(%{count: 999})
i -> BulkOps.get_update_one(%{count: i - 1}, %{"$set": %{count: i}})
end)
|> OrderedBulk.write(:mongo, "bulk", 25)
|> Stream.run()
```
Of course, this example is a bit silly. It creates a sequence of update operations that only works in the correct order.
"""
alias Mongo.OrderedBulk
alias Mongo.BulkWrite
import Mongo.BulkOps
@type t :: %__MODULE__{
coll: String.t,
ops: [BulkOps.bulk_op]
}
defstruct coll: nil, ops: []
@doc """
Creates an empty ordered bulk for a collection.
Example:
```
Mongo.orderedBulk.new("bulk")
%Mongo.OrderedBulk{coll: "bulk", ops: []}
```
"""
@spec new(String.t) :: OrderedBulk.t
def new(coll) do
%OrderedBulk{coll: coll}
end
@doc """
Returns true if the bulk is empty, that is, it contains no insert, update, or delete operations
"""
def empty?(%OrderedBulk{ops: []}) do
true
end
def empty?(_other) do
false
end
@doc """
Appends a bulk operation to the ordered bulk.
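Example (a minimal sketch, assuming an existing empty bulk named `bulk`):
```
Mongo.BulkOps.get_insert_one(%{name: "Waldo"}) |> Mongo.OrderedBulk.push(bulk)
%Mongo.OrderedBulk{coll: "bulk", ops: [insert: %{name: "Waldo"}]}
```
Note that operations are prepended, so `ops` holds them in reverse insertion order.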
"""
@spec push(BulkOps.bulk_op, OrderedBulk.t) :: OrderedBulk.t
def push(op, %OrderedBulk{ops: rest} = bulk) do
%OrderedBulk{bulk | ops: [op | rest] }
end
@doc """
Appends an insert operation.
Example:
```
Mongo.OrderedBulk.insert_one(bulk, %{name: "Waldo"})
%Mongo.OrderedBulk{coll: "bulk", ops: [insert: %{name: "Waldo"}]}
```
"""
@spec insert_one(OrderedBulk.t, BSON.document) :: OrderedBulk.t
def insert_one(%OrderedBulk{} = bulk, doc) do
get_insert_one(doc) |> push(bulk)
end
@doc """
Appends a delete operation with `:limit = 1`.
Example:
```
Mongo.OrderedBulk.delete_one(bulk, %{name: "Waldo"})
%Mongo.OrderedBulk{coll: "bulk", ops: [delete: {%{name: "Waldo"}, [limit: 1]}]}
```
"""
@spec delete_one(OrderedBulk.t, BSON.document) :: OrderedBulk.t
def delete_one(%OrderedBulk{} = bulk, doc) do
get_delete_one(doc) |> push(bulk)
end
@doc """
Appends a delete operation with `:limit = 0`.
Example:
```
Mongo.OrderedBulk.delete_many(bulk, %{name: "Waldo"})
%Mongo.OrderedBulk{coll: "bulk", ops: [delete: {%{name: "Waldo"}, [limit: 0]}]}
```
"""
@spec delete_many(OrderedBulk.t, BSON.document) :: OrderedBulk.t
def delete_many(%OrderedBulk{} = bulk, doc) do
get_delete_many(doc) |> push(bulk)
end
@doc """
Appends a replace operation with `:multi = false`.
Example:
```
Mongo.OrderedBulk.replace_one(bulk, %{name: "Waldo"}, %{name: "Greta", kind: "dog"})
%Mongo.OrderedBulk{
coll: "bulk",
ops: [
update: {%{name: "Waldo"}, %{kind: "dog", name: "Greta"}, [multi: false]}
]
}
```
"""
@spec replace_one(OrderedBulk.t, BSON.document, BSON.document, Keyword.t) :: OrderedBulk.t
def replace_one(%OrderedBulk{} = bulk, filter, replacement, opts \\ []) do
get_replace_one(filter, replacement, opts) |> push(bulk)
end
@doc """
Appends an update operation with `:multi = false`.
Example:
```
Mongo.OrderedBulk.update_one(bulk, %{name: "Waldo"}, %{"$set": %{name: "Greta", kind: "dog"}})
%Mongo.OrderedBulk{
coll: "bulk",
ops: [
update: {%{name: "Waldo"}, %{"$set": %{kind: "dog", name: "Greta"}},
[multi: false]}
]
}
```
"""
@spec update_one(OrderedBulk.t, BSON.document, BSON.document, Keyword.t) :: OrderedBulk.t
def update_one(%OrderedBulk{} = bulk, filter, update, opts \\ []) do
get_update_one(filter, update, opts) |> push(bulk)
end
@doc """
Appends an update operation with `:multi = true`.
Example:
```
Mongo.OrderedBulk.update_many(bulk, %{name: "Waldo"}, %{"$set": %{name: "Greta", kind: "dog"}})
%Mongo.OrderedBulk{
coll: "bulk",
ops: [
update: {%{name: "Waldo"}, %{"$set": %{kind: "dog", name: "Greta"}},
[multi: true]}
]
}
```
"""
@spec update_many(OrderedBulk.t, BSON.document, BSON.document, Keyword.t) :: OrderedBulk.t
def update_many(%OrderedBulk{} = bulk, filter, update, opts \\ []) do
get_update_many(filter, update, opts) |> push(bulk)
end
@doc """
Returns a stream that writes the bulk operations to the database in chunks. The `limit` specifies the number
of operations held in memory while processing the stream inputs.
The inputs of the stream should be `Mongo.BulkOps.bulk_op`. See `Mongo.BulkOps`
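Example (a sketch, assuming a running topology named `:mongo`):
```
1..100
|> Stream.map(fn i -> Mongo.BulkOps.get_insert_one(%{count: i}) end)
|> Mongo.OrderedBulk.write(:mongo, "bulk", 25)
|> Stream.run()
```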
"""
@spec write(Enumerable.t(), GenServer.server, String.t, non_neg_integer, Keyword.t ) :: Enumerable.t()
def write(enum, top, coll, limit \\ 1000, opts \\ [])
def write(enum, top, coll, limit, opts) when limit > 1 do
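# The accumulator is `{bulk, remaining}`: ops are pushed until `remaining`
# reaches 0, then the filled bulk is written via BulkWrite and a fresh one starts.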
Stream.chunk_while(enum,
{new(coll), limit - 1},
fn
op, {bulk, 0} -> {:cont, BulkWrite.write(top, push(op, bulk), opts), {new(coll), limit - 1}}
op, {bulk, l} -> {:cont, {push(op, bulk), l - 1}}
end,
fn
{bulk, _} ->
case empty?(bulk) do
true ->
{:cont, bulk}
false ->
{:cont, BulkWrite.write(top, bulk, opts), {new(coll), limit - 1}}
end
end)
end
def write(_enum, _top, _coll, limit, _opts) when limit <= 1 do
raise(ArgumentError, "limit must be greater than 1, got: #{limit}")
end
end
|
lib/mongo/ordered_bulk.ex
|
defmodule Site.TripPlan.Query do
alias TripPlan.{Itinerary, NamedPosition}
defstruct [
:from,
:to,
:itineraries,
errors: MapSet.new(),
time: :unknown,
wheelchair_accessible?: false
]
@type position_error :: TripPlan.Geocode.error() | :same_address
@type position :: NamedPosition.t() | {:error, position_error} | nil
@type t :: %__MODULE__{
from: position,
to: position,
time: :unknown | Site.TripPlan.DateTime.date_time(),
errors: MapSet.t(atom),
wheelchair_accessible?: boolean,
itineraries: TripPlan.Api.t() | nil
}
@spec from_query(map, Keyword.t()) :: t
def from_query(params, date_opts) do
opts = get_query_options(params)
%__MODULE__{}
|> Site.TripPlan.DateTime.validate(params, date_opts)
|> Site.TripPlan.Location.validate(params)
|> include_options(opts)
|> maybe_fetch_itineraries(opts)
end
@spec get_query_options(map) :: keyword()
def get_query_options(params) do
%{}
|> set_default_options
|> Map.merge(params)
|> opts_from_query
end
@spec maybe_fetch_itineraries(t, Keyword.t()) :: t
defp maybe_fetch_itineraries(
%__MODULE__{
to: %NamedPosition{},
from: %NamedPosition{}
} = query,
opts
) do
if Enum.empty?(query.errors) do
query
|> fetch_itineraries([query.time | opts])
|> parse_itinerary_result(query)
else
query
end
end
defp maybe_fetch_itineraries(%__MODULE__{} = query, _opts) do
query
end
@spec fetch_itineraries(t, Keyword.t()) :: TripPlan.Api.t()
defp fetch_itineraries(
%__MODULE__{from: %NamedPosition{} = from, to: %NamedPosition{} = to},
opts
) do
pid = self()
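# When accessible trips were not explicitly requested, fetch the regular and
# wheelchair-accessible plans concurrently and merge them, so accessible
# itineraries are still represented in the results.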
if Keyword.get(opts, :wheelchair_accessible?) do
TripPlan.plan(from, to, opts)
else
accessible_opts = Keyword.put(opts, :wheelchair_accessible?, true)
[mixed_results, accessible_results] =
Util.async_with_timeout(
[
fn -> TripPlan.plan(from, to, opts, pid) end,
fn -> TripPlan.plan(from, to, accessible_opts, pid) end
],
{:error, :timeout},
__MODULE__
)
dedup_itineraries(mixed_results, accessible_results)
end
end
@spec parse_itinerary_result(TripPlan.Api.t(), t) :: t
defp parse_itinerary_result({:ok, _} = result, %__MODULE__{} = query) do
%{query | itineraries: result}
end
defp parse_itinerary_result({:error, error}, %__MODULE__{} = query) do
query
|> Map.put(:itineraries, {:error, error})
|> Map.put(:errors, MapSet.put(query.errors, error))
end
@spec dedup_itineraries(TripPlan.Api.t(), TripPlan.Api.t()) :: TripPlan.Api.t()
defp dedup_itineraries({:error, _status} = response, {:error, _accessible_response}),
do: response
defp dedup_itineraries(unknown, {:error, _response}), do: unknown
defp dedup_itineraries({:error, _response}, {:ok, _itineraries} = accessible), do: accessible
defp dedup_itineraries({:ok, unknown}, {:ok, accessible}) do
merged =
Site.TripPlan.Merge.merge_itineraries(
accessible,
unknown
)
{:ok, merged}
end
defp set_default_options(params) do
params
|> default_optimize_for
|> default_mode
end
def default_optimize_for(params) do
Map.put(params, "optimize_for", "best_route")
end
def default_mode(params) do
Map.put(params, "modes", %{
"bus" => "true",
"commuter_rail" => "true",
"ferry" => "true",
"subway" => "true"
})
end
defp include_options(%__MODULE__{} = query, opts) do
%{query | wheelchair_accessible?: opts[:wheelchair_accessible?] == true}
end
@spec opts_from_query(map, Keyword.t()) :: Keyword.t()
def opts_from_query(query, opts \\ [])
def opts_from_query(%{"optimize_for" => val} = query, opts) do
# We have seen some rare sentry errors where the page anchor can
# get appended to the optimize_for value, so we preemptively
# strip it here.
val =
val
|> String.split("#")
|> List.first()
|> optimize_for(opts)
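# `val` now holds the accumulated options keyword list, not the raw string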
opts_from_query(Map.delete(query, "optimize_for"), val)
end
def opts_from_query(%{"modes" => modes} = query, opts) do
opts_from_query(
Map.delete(query, "modes"),
get_mode_opts(modes, opts)
)
end
def opts_from_query(_, opts) do
opts
end
@spec get_mode_opts(map, Keyword.t()) :: Keyword.t()
def get_mode_opts(%{} = modes, opts) do
active_modes = Enum.reduce(modes, [], &get_active_modes/2)
Keyword.put(opts, :mode, active_modes)
end
@spec get_active_modes({String.t(), String.t()}, Keyword.t()) :: Keyword.t()
defp get_active_modes({"subway", "true"}, acc) do
["TRAM", "SUBWAY" | acc]
end
defp get_active_modes({"commuter_rail", "true"}, acc) do
["RAIL" | acc]
end
defp get_active_modes({"bus", "true"}, acc) do
["BUS" | acc]
end
defp get_active_modes({"ferry", "true"}, acc) do
["FERRY" | acc]
end
defp get_active_modes({_, "false"}, acc) do
acc
end
@spec optimize_for(String.t(), Keyword.t()) :: Keyword.t()
defp optimize_for("best_route", opts) do
opts
end
defp optimize_for("accessibility", opts) do
Keyword.put(opts, :wheelchair_accessible?, true)
end
defp optimize_for("fewest_transfers", opts) do
Keyword.put(opts, :optimize_for, :fewest_transfers)
end
defp optimize_for("less_walking", opts) do
Keyword.put(opts, :optimize_for, :less_walking)
end
@doc "Determines if the given query contains any itineraries"
@spec itineraries?(t | nil) :: boolean
def itineraries?(%__MODULE__{itineraries: {:ok, itineraries}}) do
!Enum.empty?(itineraries)
end
def itineraries?(_query), do: false
@spec get_itineraries(t) :: [Itinerary.t()]
def get_itineraries(%__MODULE__{itineraries: {:ok, itineraries}}) do
itineraries
end
def get_itineraries(%__MODULE__{itineraries: {:error, _}}) do
[]
end
def get_itineraries(%__MODULE__{itineraries: nil}) do
[]
end
@doc "Returns the name of the location for a given query"
@spec location_name(t, :from | :to) :: String.t()
def location_name(%__MODULE__{} = query, key) when key in [:from, :to] do
case Map.get(query, key) do
%NamedPosition{name: name} -> name
_ -> nil
end
end
end
|
apps/site/lib/site/trip_plan/query.ex
|
defmodule Certbot.Provider.Acme do
@moduledoc """
Certificate provider for the Acme protocol
When a request is made for a hostname, the provider will look into the
certificate store (`Certbot.CertificateStore`) to see whether it has a
certificate for that hostname.
If so, it will return the certificate.
If not, it will try to request a certificate using the acme client. This is done
by retrieving an authorization, which has challenges. We need to prove to the acme
server that we own the hostname.
One of these challenges can be done over http. We use this one to prove ownership.
The challenge is stored in the challenge store (`Certbot.Acme.ChallengeStore`),
then the Acme server is asked to verify the challenge. The `Certbot.Acme.Plug`
verifies the challenge by using the store.
Next step is to build a Certificate Signing Request (`csr`) and send this to
the Acme server. In the response there will be a url where the signed certificate
can be retrieved from the Acme server.
The downloaded certificate is used for serving the request, and is also stored
in the certificate store for subsequent requests.
## Example
```
use Certbot.Provider.Acme,
acme_client: YourApp.Certbot,
certificate_store: Certbot.CertificateStore.Default,
challenge_store: Certbot.ChallengeStore.Default
```
For the options that can be given to the `use` macro, see `Certbot.Provider.Acme.Config`
"""
defmodule Config do
@moduledoc """
Configuration for the `Certbot.Provider.Acme` certificate provider.
- `:acme_client` -- Client implementing `use Certbot`, e.g. `Myapp.Certbot`
- `:certificate_store` -- Module used to store certificates,
- `:challenge_store` -- Module used to store challenges,
- `:logger` -- Module to log events, defaults to `Certbot.Logger`,
- `:key_algorithm` -- Algorithm used to generate keys for certificates,
defaults to `{:ec, :secp384r1}`. Can also be e.g. `{:rsa, 2048}`
"""
defstruct [:certificate_store, :challenge_store, :acme_client, :logger, :key_algorithm]
def new(opts \\ []) do
%__MODULE__{
acme_client: Keyword.fetch!(opts, :acme_client),
certificate_store: Keyword.fetch!(opts, :certificate_store),
challenge_store: opts[:challenge_store],
logger: opts[:logger] || Certbot.Logger,
key_algorithm: opts[:key_algorithm] || {:ec, :secp384r1}
}
end
end
defmacro __using__(opts) do
quote location: :keep do
@defaults unquote(opts)
@behaviour Certbot.Provider
alias Certbot.Provider.Acme
def get_by_hostname(hostname, opts \\ []) do
opts = Keyword.merge(@defaults, opts)
Acme.get_by_hostname(hostname, opts)
end
end
end
alias Certbot.Acme.Authorization
alias Certbot.Provider.Acme.Config
def get_by_hostname(hostname, opts) do
config = Config.new(opts)
config.logger.log(:info, "Checking store for certificate for #{hostname}")
case config.certificate_store.find_certificate(hostname) do
%Certbot.Certificate{} = certificate ->
serial = Certbot.Certificate.hex_serial(certificate)
config.logger.log(:info, "Found certificate (#{serial}) for #{hostname} in store")
certificate
_ ->
config.logger.log(
:info,
"No certificate found in store, requesting certificate for #{hostname}"
)
case authorize_hostname(hostname, config) do
{:ok, certificate} ->
serial = Certbot.Certificate.hex_serial(certificate)
config.logger.log(
:info,
"Retrieved certificate (#{serial}) for #{hostname}, storing it"
)
config.certificate_store.insert(hostname, certificate)
certificate
{:error, error} ->
config.logger.log(:error, inspect(error))
error
end
end
end
defp authorize_hostname(hostname, config) do
case config.acme_client.authorize(hostname) do
{:ok, authorization} ->
challenge = Authorization.fetch_challenge(authorization, "http-01")
config.logger.log(:info, "Storing challenge in store for #{hostname}")
config.challenge_store.insert(challenge)
check_challenge(challenge, hostname, config)
{:error, error} ->
{:error, error}
end
end
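# Poll the ACME server until it reports the http-01 challenge as "valid";
# note this recurses indefinitely while the challenge stays pending.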
defp check_challenge(challenge, hostname, config) do
config.logger.log(:info, "Checking challenge #{challenge.uri} for #{hostname}")
case config.acme_client.respond_challenge(challenge) do
{:ok, %{status: "valid"}} ->
get_certificate(hostname, config)
# should validate edge cases here
# the 10ms is completely arbitrary...
{:ok, _challenge_response} ->
Process.sleep(10)
check_challenge(challenge, hostname, config)
{:error, error} ->
config.logger.log(:error, inspect(error))
# Propagate the error tuple so callers of `authorize_hostname/2` can match on it
{:error, error}
end
end
defp get_certificate(hostname, config) do
key = Certbot.SSL.generate_key(config.key_algorithm)
algorithm = elem(key, 0)
csr = Certbot.SSL.generate_csr(key, %{common_name: hostname})
der_key = Certbot.SSL.convert_private_key_to_der(key)
with {:ok, url} <- config.acme_client.new_certificate(csr),
{:ok, certificate} <- config.acme_client.get_certificate(url) do
certificate = Certbot.Certificate.build(certificate, {algorithm, der_key})
{:ok, certificate}
else
error -> error
end
end
end
|
lib/certbot/provider/acme.ex
|
defmodule Ecto.ULID do
@moduledoc """
An Ecto type for ULID strings.
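## Example
A minimal schema sketch (module, table, and field names are illustrative;
assumes Ecto 3.5+ for parameterized type support):
```
defmodule MyApp.Event do
use Ecto.Schema
@primary_key {:id, Ecto.ULID, autogenerate: true}
schema "events" do
field :ref, Ecto.ULID, variant: :b64
end
end
```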
"""
@default_params %{variant: :b32}
# replace with `use Ecto.ParameterizedType` after Ecto 3.2.0 is required
@behaviour Ecto.ParameterizedType
# and remove both of these functions
def embed_as(_, _params), do: :self
def equal?(term1, term2, _params), do: dump(term1) == dump(term2)
@doc """
The underlying schema type.
"""
def type(_params \\ @default_params), do: :uuid
@doc """
Casts a string to ULID.
"""
def cast(value, params \\ @default_params)
def cast(<<_::bytes-size(26)>> = value, _params) do
# Crockford Base32 encoded string
if valid?(value) do
{:ok, value}
else
:error
end
end
def cast(<<_::bytes-size(22)>> = value, _params) do
# Lexicographic Base64 encoded string
if valid64?(value) do
{:ok, value}
else
:error
end
end
def cast(<<_::bytes-size(20)>> = value, _params) do
# Firebase-Push-Key Base64 encoded string
if valid64?(value) do
{:ok, value}
else
:error
end
end
def cast(_, _params), do: :error
@doc """
Same as `cast/2` but raises `Ecto.CastError` on invalid arguments.
"""
def cast!(value, params \\ @default_params) do
case cast(value, params) do
{:ok, ulid} -> ulid
:error -> raise Ecto.CastError, type: __MODULE__, value: value
end
end
@doc """
Converts a Crockford Base32 encoded string or
Lexicographic Base64 encoded string or Firebase-Push-Key Base64 encoded string
into a binary ULID.
"""
def dump(encoded)
def dump(<<_::bytes-size(26)>> = encoded), do: decode(encoded)
def dump(<<_::bytes-size(22)>> = encoded), do: decode64(encoded)
def dump(<<_::bytes-size(20)>> = encoded), do: decode64(encoded)
def dump(_), do: :error
@doc false
def dump(encoded, _dumper, _params), do: dump(encoded)
@doc """
Converts a binary ULID into an encoded string (defaults to Crockford Base32 encoding).
Variants:
* `:b32`: Crockford Base32 encoding (default)
* `:b64`: Lexicographic Base64 encoding
* `:push`: Firebase Push-Key Base64 encoding
Arguments:
* `bytes`: A binary ULID.
* `variant`: :b32 (default), :b64 (Base64), or :push (Firebase Push-Key).
"""
def load(bytes, variant \\ :b32)
def load(<<_::unsigned-size(128)>> = bytes, :b32), do: encode(bytes)
def load(<<_::unsigned-size(128)>> = bytes, :b64), do: encode64(bytes)
def load(<<ts::bits-size(48), _::bits-size(8), rand::bits-size(72)>> = _bytes, :push), do: encode64(<<ts::binary, rand::binary>>)
def load(_, _variant), do: :error
@doc false
def load(bytes, _loader, %{variant: variant}), do: load(bytes, variant)
def load(_, _loader, _params), do: :error
@doc false
def init(opts) do
case Keyword.get(opts, :variant, :b32) do
v when v in [:b32, :b64, :push] -> %{variant: v}
_ -> raise "Ecto.ULID variant must be one of [:b32, :b64, :push]"
end
end
@doc false
def autogenerate(%{variant: variant} = _params), do: generate(variant)
@doc """
Generates a string encoded ULID (defaults to Crockford Base32 encoding).
If a value is provided for `timestamp`, the generated ULID will be for the provided timestamp.
Otherwise, a ULID will be generated for the current time.
Variants:
* `:b32`: Crockford Base32 encoding (default)
* `:b64`: Lexicographic Base64 encoding
* `:push`: Firebase Push-Key Base64 encoding
Arguments:
* `variant`: :b32 (default), :b64 (Base64), or :push (Firebase Push-Key).
* `timestamp`: A Unix timestamp with millisecond precision.
"""
def generate(variant \\ :b32, timestamp \\ System.system_time(:millisecond))
def generate(:b32, timestamp) do
{:ok, ulid} = encode(bingenerate(timestamp))
ulid
end
def generate(:b64, timestamp) do
{:ok, ulid} = encode64(bingenerate(timestamp))
ulid
end
def generate(:push, timestamp) do
<<ts::bits-size(48), _::bits-size(8), rand::bits-size(72)>> = bingenerate(timestamp)
{:ok, ulid} = encode64(<<ts::binary, rand::binary>>)
ulid
end
def generate(timestamp, _) when is_integer(timestamp) do
{:ok, ulid} = encode(bingenerate(timestamp))
ulid
end
@doc """
Generates a binary ULID.
If a value is provided for `timestamp`, the generated ULID will be for the provided timestamp.
Otherwise, a ULID will be generated for the current time.
Arguments:
* `timestamp`: A Unix timestamp with millisecond precision.
"""
def bingenerate(timestamp \\ System.system_time(:millisecond)) do
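# Per the ULID spec: a 48-bit millisecond timestamp followed by 80 bits of randomness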
<<timestamp::unsigned-size(48), :crypto.strong_rand_bytes(10)::binary>>
end
defp encode(<< b1::3, b2::5, b3::5, b4::5, b5::5, b6::5, b7::5, b8::5, b9::5, b10::5, b11::5, b12::5, b13::5,
b14::5, b15::5, b16::5, b17::5, b18::5, b19::5, b20::5, b21::5, b22::5, b23::5, b24::5, b25::5, b26::5>>) do
<<e(b1), e(b2), e(b3), e(b4), e(b5), e(b6), e(b7), e(b8), e(b9), e(b10), e(b11), e(b12), e(b13),
e(b14), e(b15), e(b16), e(b17), e(b18), e(b19), e(b20), e(b21), e(b22), e(b23), e(b24), e(b25), e(b26)>>
catch
:error -> :error
else
encoded -> {:ok, encoded}
end
defp encode(_), do: :error
defp encode64(<< b1::2, b2::6, b3::6, b4::6, b5::6, b6::6, b7::6, b8::6, b9::6, b10::6, b11::6, b12::6, b13::6,
b14::6, b15::6, b16::6, b17::6, b18::6, b19::6, b20::6, b21::6, b22::6>>) do
<<e64(b1), e64(b2), e64(b3), e64(b4), e64(b5), e64(b6), e64(b7), e64(b8), e64(b9), e64(b10), e64(b11), e64(b12), e64(b13),
e64(b14), e64(b15), e64(b16), e64(b17), e64(b18), e64(b19), e64(b20), e64(b21), e64(b22)>>
catch
:error -> :error
else
encoded -> {:ok, encoded}
end
defp encode64(<< b1::6, b2::6, b3::6, b4::6, b5::6, b6::6, b7::6, b8::6, b9::6, b10::6, b11::6, b12::6, b13::6,
b14::6, b15::6, b16::6, b17::6, b18::6, b19::6, b20::6>>) do
<<e64(b1), e64(b2), e64(b3), e64(b4), e64(b5), e64(b6), e64(b7), e64(b8), e64(b9), e64(b10), e64(b11), e64(b12), e64(b13),
e64(b14), e64(b15), e64(b16), e64(b17), e64(b18), e64(b19), e64(b20)>>
catch
:error -> :error
else
encoded -> {:ok, encoded}
end
defp encode64(_), do: :error
@compile {:inline, e: 1, e64: 1}
defp e(0), do: ?0
defp e(1), do: ?1
defp e(2), do: ?2
defp e(3), do: ?3
defp e(4), do: ?4
defp e(5), do: ?5
defp e(6), do: ?6
defp e(7), do: ?7
defp e(8), do: ?8
defp e(9), do: ?9
defp e(10), do: ?A
defp e(11), do: ?B
defp e(12), do: ?C
defp e(13), do: ?D
defp e(14), do: ?E
defp e(15), do: ?F
defp e(16), do: ?G
defp e(17), do: ?H
defp e(18), do: ?J
defp e(19), do: ?K
defp e(20), do: ?M
defp e(21), do: ?N
defp e(22), do: ?P
defp e(23), do: ?Q
defp e(24), do: ?R
defp e(25), do: ?S
defp e(26), do: ?T
defp e(27), do: ?V
defp e(28), do: ?W
defp e(29), do: ?X
defp e(30), do: ?Y
defp e(31), do: ?Z
defp e64(0), do: ?-
defp e64(1), do: ?0
defp e64(2), do: ?1
defp e64(3), do: ?2
defp e64(4), do: ?3
defp e64(5), do: ?4
defp e64(6), do: ?5
defp e64(7), do: ?6
defp e64(8), do: ?7
defp e64(9), do: ?8
defp e64(10), do: ?9
defp e64(11), do: ?A
defp e64(12), do: ?B
defp e64(13), do: ?C
defp e64(14), do: ?D
defp e64(15), do: ?E
defp e64(16), do: ?F
defp e64(17), do: ?G
defp e64(18), do: ?H
defp e64(19), do: ?I
defp e64(20), do: ?J
defp e64(21), do: ?K
defp e64(22), do: ?L
defp e64(23), do: ?M
defp e64(24), do: ?N
defp e64(25), do: ?O
defp e64(26), do: ?P
defp e64(27), do: ?Q
defp e64(28), do: ?R
defp e64(29), do: ?S
defp e64(30), do: ?T
defp e64(31), do: ?U
defp e64(32), do: ?V
defp e64(33), do: ?W
defp e64(34), do: ?X
defp e64(35), do: ?Y
defp e64(36), do: ?Z
defp e64(37), do: ?_
defp e64(38), do: ?a
defp e64(39), do: ?b
defp e64(40), do: ?c
defp e64(41), do: ?d
defp e64(42), do: ?e
defp e64(43), do: ?f
defp e64(44), do: ?g
defp e64(45), do: ?h
defp e64(46), do: ?i
defp e64(47), do: ?j
defp e64(48), do: ?k
defp e64(49), do: ?l
defp e64(50), do: ?m
defp e64(51), do: ?n
defp e64(52), do: ?o
defp e64(53), do: ?p
defp e64(54), do: ?q
defp e64(55), do: ?r
defp e64(56), do: ?s
defp e64(57), do: ?t
defp e64(58), do: ?u
defp e64(59), do: ?v
defp e64(60), do: ?w
defp e64(61), do: ?x
defp e64(62), do: ?y
defp e64(63), do: ?z
defp decode(<< c1::8, c2::8, c3::8, c4::8, c5::8, c6::8, c7::8, c8::8, c9::8, c10::8, c11::8, c12::8, c13::8,
c14::8, c15::8, c16::8, c17::8, c18::8, c19::8, c20::8, c21::8, c22::8, c23::8, c24::8, c25::8, c26::8>>) do
<< d(c1)::3, d(c2)::5, d(c3)::5, d(c4)::5, d(c5)::5, d(c6)::5, d(c7)::5, d(c8)::5, d(c9)::5, d(c10)::5, d(c11)::5, d(c12)::5, d(c13)::5,
d(c14)::5, d(c15)::5, d(c16)::5, d(c17)::5, d(c18)::5, d(c19)::5, d(c20)::5, d(c21)::5, d(c22)::5, d(c23)::5, d(c24)::5, d(c25)::5, d(c26)::5>>
catch
:error -> :error
else
decoded -> {:ok, decoded}
end
defp decode(_), do: :error
defp decode64(<< c1::8, c2::8, c3::8, c4::8, c5::8, c6::8, c7::8, c8::8, c9::8, c10::8, c11::8, c12::8, c13::8,
c14::8, c15::8, c16::8, c17::8, c18::8, c19::8, c20::8, c21::8, c22::8>>) do
<< d64(c1)::2, d64(c2)::6, d64(c3)::6, d64(c4)::6, d64(c5)::6, d64(c6)::6, d64(c7)::6, d64(c8)::6, d64(c9)::6, d64(c10)::6, d64(c11)::6, d64(c12)::6, d64(c13)::6,
d64(c14)::6, d64(c15)::6, d64(c16)::6, d64(c17)::6, d64(c18)::6, d64(c19)::6, d64(c20)::6, d64(c21)::6, d64(c22)::6>>
catch
:error -> :error
else
decoded -> {:ok, decoded}
end
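# The 20-char push variant dropped the byte after the 48-bit timestamp, so a
# zero byte is reinserted here to rebuild a full 128-bit ULID.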
defp decode64(<< c1::8, c2::8, c3::8, c4::8, c5::8, c6::8, c7::8, c8::8, c9::8, c10::8, c11::8, c12::8, c13::8,
c14::8, c15::8, c16::8, c17::8, c18::8, c19::8, c20::8>>) do
<< d64(c1)::6, d64(c2)::6, d64(c3)::6, d64(c4)::6, d64(c5)::6, d64(c6)::6, d64(c7)::6, d64(c8)::6, 0::unsigned-size(8), d64(c9)::6, d64(c10)::6, d64(c11)::6, d64(c12)::6, d64(c13)::6,
d64(c14)::6, d64(c15)::6, d64(c16)::6, d64(c17)::6, d64(c18)::6, d64(c19)::6, d64(c20)::6>>
catch
:error -> :error
else
decoded -> {:ok, decoded}
end
defp decode64(_), do: :error
@compile {:inline, d: 1, d64: 1}
defp d(?0), do: 0
defp d(?1), do: 1
defp d(?2), do: 2
defp d(?3), do: 3
defp d(?4), do: 4
defp d(?5), do: 5
defp d(?6), do: 6
defp d(?7), do: 7
defp d(?8), do: 8
defp d(?9), do: 9
defp d(?A), do: 10
defp d(?B), do: 11
defp d(?C), do: 12
defp d(?D), do: 13
defp d(?E), do: 14
defp d(?F), do: 15
defp d(?G), do: 16
defp d(?H), do: 17
defp d(?J), do: 18
defp d(?K), do: 19
defp d(?M), do: 20
defp d(?N), do: 21
defp d(?P), do: 22
defp d(?Q), do: 23
defp d(?R), do: 24
defp d(?S), do: 25
defp d(?T), do: 26
defp d(?V), do: 27
defp d(?W), do: 28
defp d(?X), do: 29
defp d(?Y), do: 30
defp d(?Z), do: 31
defp d(_), do: throw :error
defp d64(?-), do: 0
defp d64(?0), do: 1
defp d64(?1), do: 2
defp d64(?2), do: 3
defp d64(?3), do: 4
defp d64(?4), do: 5
defp d64(?5), do: 6
defp d64(?6), do: 7
defp d64(?7), do: 8
defp d64(?8), do: 9
defp d64(?9), do: 10
defp d64(?A), do: 11
defp d64(?B), do: 12
defp d64(?C), do: 13
defp d64(?D), do: 14
defp d64(?E), do: 15
defp d64(?F), do: 16
defp d64(?G), do: 17
defp d64(?H), do: 18
defp d64(?I), do: 19
defp d64(?J), do: 20
defp d64(?K), do: 21
defp d64(?L), do: 22
defp d64(?M), do: 23
defp d64(?N), do: 24
defp d64(?O), do: 25
defp d64(?P), do: 26
defp d64(?Q), do: 27
defp d64(?R), do: 28
defp d64(?S), do: 29
defp d64(?T), do: 30
defp d64(?U), do: 31
defp d64(?V), do: 32
defp d64(?W), do: 33
defp d64(?X), do: 34
defp d64(?Y), do: 35
defp d64(?Z), do: 36
defp d64(?_), do: 37
defp d64(?a), do: 38
defp d64(?b), do: 39
defp d64(?c), do: 40
defp d64(?d), do: 41
defp d64(?e), do: 42
defp d64(?f), do: 43
defp d64(?g), do: 44
defp d64(?h), do: 45
defp d64(?i), do: 46
defp d64(?j), do: 47
defp d64(?k), do: 48
defp d64(?l), do: 49
defp d64(?m), do: 50
defp d64(?n), do: 51
defp d64(?o), do: 52
defp d64(?p), do: 53
defp d64(?q), do: 54
defp d64(?r), do: 55
defp d64(?s), do: 56
defp d64(?t), do: 57
defp d64(?u), do: 58
defp d64(?v), do: 59
defp d64(?w), do: 60
defp d64(?x), do: 61
defp d64(?y), do: 62
defp d64(?z), do: 63
defp d64(_), do: throw :error
defp valid?(<< c1::8, c2::8, c3::8, c4::8, c5::8, c6::8, c7::8, c8::8, c9::8, c10::8, c11::8, c12::8, c13::8,
c14::8, c15::8, c16::8, c17::8, c18::8, c19::8, c20::8, c21::8, c22::8, c23::8, c24::8, c25::8, c26::8>>) do
c1 in [?0, ?1, ?2, ?3, ?4, ?5, ?6, ?7] &&
v(c2) && v(c3) && v(c4) && v(c5) && v(c6) && v(c7) && v(c8) && v(c9) && v(c10) && v(c11) && v(c12) && v(c13) &&
v(c14) && v(c15) && v(c16) && v(c17) && v(c18) && v(c19) && v(c20) && v(c21) && v(c22) && v(c23) && v(c24) && v(c25) && v(c26)
end
defp valid?(_), do: false
defp valid64?(<< c1::8, c2::8, c3::8, c4::8, c5::8, c6::8, c7::8, c8::8, c9::8, c10::8, c11::8, c12::8, c13::8,
c14::8, c15::8, c16::8, c17::8, c18::8, c19::8, c20::8, c21::8, c22::8>>) do
v64(c1) && v64(c2) && v64(c3) && v64(c4) && v64(c5) && v64(c6) && v64(c7) && v64(c8) && v64(c9) && v64(c10) && v64(c11) && v64(c12) && v64(c13) &&
v64(c14) && v64(c15) && v64(c16) && v64(c17) && v64(c18) && v64(c19) && v64(c20) && v64(c21) && v64(c22)
end
defp valid64?(<< c1::8, c2::8, c3::8, c4::8, c5::8, c6::8, c7::8, c8::8, c9::8, c10::8, c11::8, c12::8, c13::8,
c14::8, c15::8, c16::8, c17::8, c18::8, c19::8, c20::8>>) do
v64(c1) && v64(c2) && v64(c3) && v64(c4) && v64(c5) && v64(c6) && v64(c7) && v64(c8) && v64(c9) && v64(c10) && v64(c11) && v64(c12) && v64(c13) &&
v64(c14) && v64(c15) && v64(c16) && v64(c17) && v64(c18) && v64(c19) && v64(c20)
end
defp valid64?(_), do: false
@compile {:inline, v: 1, v64: 1}
defp v(?0), do: true
defp v(?1), do: true
defp v(?2), do: true
defp v(?3), do: true
defp v(?4), do: true
defp v(?5), do: true
defp v(?6), do: true
defp v(?7), do: true
defp v(?8), do: true
defp v(?9), do: true
defp v(?A), do: true
defp v(?B), do: true
defp v(?C), do: true
defp v(?D), do: true
defp v(?E), do: true
defp v(?F), do: true
defp v(?G), do: true
defp v(?H), do: true
defp v(?J), do: true
defp v(?K), do: true
defp v(?M), do: true
defp v(?N), do: true
defp v(?P), do: true
defp v(?Q), do: true
defp v(?R), do: true
defp v(?S), do: true
defp v(?T), do: true
defp v(?V), do: true
defp v(?W), do: true
defp v(?X), do: true
defp v(?Y), do: true
defp v(?Z), do: true
defp v(_), do: false
defp v64(?-), do: true
defp v64(?0), do: true
defp v64(?1), do: true
defp v64(?2), do: true
defp v64(?3), do: true
defp v64(?4), do: true
defp v64(?5), do: true
defp v64(?6), do: true
defp v64(?7), do: true
defp v64(?8), do: true
defp v64(?9), do: true
defp v64(?A), do: true
defp v64(?B), do: true
defp v64(?C), do: true
defp v64(?D), do: true
defp v64(?E), do: true
defp v64(?F), do: true
defp v64(?G), do: true
defp v64(?H), do: true
defp v64(?I), do: true
defp v64(?J), do: true
defp v64(?K), do: true
defp v64(?L), do: true
defp v64(?M), do: true
defp v64(?N), do: true
defp v64(?O), do: true
defp v64(?P), do: true
defp v64(?Q), do: true
defp v64(?R), do: true
defp v64(?S), do: true
defp v64(?T), do: true
defp v64(?U), do: true
defp v64(?V), do: true
defp v64(?W), do: true
defp v64(?X), do: true
defp v64(?Y), do: true
defp v64(?Z), do: true
defp v64(?_), do: true
defp v64(?a), do: true
defp v64(?b), do: true
defp v64(?c), do: true
defp v64(?d), do: true
defp v64(?e), do: true
defp v64(?f), do: true
defp v64(?g), do: true
defp v64(?h), do: true
defp v64(?i), do: true
defp v64(?j), do: true
defp v64(?k), do: true
defp v64(?l), do: true
defp v64(?m), do: true
defp v64(?n), do: true
defp v64(?o), do: true
defp v64(?p), do: true
defp v64(?q), do: true
defp v64(?r), do: true
defp v64(?s), do: true
defp v64(?t), do: true
defp v64(?u), do: true
defp v64(?v), do: true
defp v64(?w), do: true
defp v64(?x), do: true
defp v64(?y), do: true
defp v64(?z), do: true
defp v64(_), do: false
end
|
lib/ecto/ulid.ex
|
defmodule Rmc.RaceState do
@moduledoc """
This agent holds state about the race.
The state is a map.
The :racers key holds all the data about racers: (TODO)
- motion
- setup
- status
- lap
- participant
- telemetry
"""
alias Rmc.FOne2018.{Lap, Laps}
use Agent
@name __MODULE__
def start_link(opts \\ []) do
opts = Keyword.put_new(opts, :name, @name)
Agent.start_link(fn -> %{} end, opts)
end
def get(name \\ @name) do
Agent.get(name, fn x -> x end)
end
def get_session_time(name \\ @name) do
Agent.get(name, fn
%{packet_header: %{session_time: time}} -> time
_ -> nil
end)
end
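# Like Map.update/4, except that when `key` is absent the stored value is
# `func.(initial)` rather than `initial` itself.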
defp true_map_update(map, key, initial, func) do
if Map.has_key?(map, key) do
Map.update(map, key, initial, func)
else
Map.put(map, key, func.(initial))
end
end
@doc """
merge_racer calculates the sector three time when a new lap starts and merges new data into the existing map
"""
def merge_racer(%{current_lap_num: old_lap} = old, %{current_lap_num: now_lap} = now)
when old_lap != now_lap do
sector_three_time =
Map.get(now, :last_lap_time, 0) -
(Map.get(old, :sector_one_time, 0) + Map.get(old, :sector_two_time, 0))
old
|> Map.merge(now)
|> Map.put(:sector_three_time, sector_three_time)
|> true_map_update(
:laps,
[],
&List.insert_at(&1, 0, [old.current_lap_num, now.last_lap_time])
)
end
def merge_racer(old, now), do: Map.merge(old, now)
@doc """
merge_racers steps through each of the maps in the given list and runs merge_racer
"""
def merge_racers([], now), do: now
def merge_racers([old | rest_old], [now | rest_now]),
do: [merge_racer(old, now) | merge_racers(rest_old, rest_now)]
@doc """
merge_state takes a state and a list of key value pairs to merge in
Some key value pairs get diverted to be merged into the :racers key, which is a list of maps
"""
def merge_state(state, %{} = update), do: merge_state(state, Map.to_list(update))
def merge_state(acc, []), do: acc
def merge_state(acc, [{key, value} | rest])
when key in [:laps, :participants, :motions, :car_setups, :statuses, :telemetries] do
acc
|> Map.put(:racers, merge_racers(Map.get(acc, :racers, []), value))
|> merge_state(rest)
end
def merge_state(acc, [{key, value} | rest]) do
acc
|> Map.put(key, value)
|> merge_state(rest)
end
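# `front_locations` holds `{session_time, total_distance}` samples for the car
# ahead, newest first. The gap is the time elapsed since that car passed the
# current car's present distance.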
def find_gap({time, distance}, front_locations) do
case Enum.find(front_locations, fn {_time, dist} -> dist < distance end) do
{front_time, _dist} -> time - front_time
_ -> nil
end
end
def find_gap_to_front(%{car_position: 1}, _racers), do: nil
def find_gap_to_front(%{car_position: p, time_distance: [first | _rest]}, racers) do
case Enum.find(racers, fn %{car_position: cp} -> cp == p - 1 end) do
%{time_distance: locations} -> find_gap(first, locations)
_ -> nil
end
end
def find_gap_to_front(_, _racers), do: nil
def calculate_gaps(state) do
Map.update(state, :racers, [], fn racers ->
Enum.map(racers, fn racer ->
Map.put(racer, :gap, find_gap_to_front(racer, racers))
end)
end)
end
def add_time_distance(state, %Laps{packet_header: %{session_time: time}}) do
Map.update(state, :racers, [], fn racers ->
Enum.map(racers, fn racer ->
true_map_update(
racer,
:time_distance,
[],
&List.insert_at(&1, 0, {time, Map.get(racer, :total_distance)})
)
end)
end)
end
def add_time_distance(state, _), do: state
def log_gaps(%{racers: racers} = state) do
Enum.each(racers, fn %{gap: gap} -> IO.inspect(gap) end)
state
end
def put(update, name \\ @name) do
Agent.update(name, fn state ->
state
|> merge_state(update)
|> add_time_distance(update)
|> calculate_gaps
end)
end
def get_session(name \\ @name) do
fields = [
:total_laps,
:track_temperature,
:air_temperature,
:weather,
:session_type,
:track_id
]
Agent.get(name, &Map.take(&1, fields))
end
def get_timing(name \\ @name) do
fields = [
:car_position,
:gap,
:laps,
:tyre_compound,
:best_lap_time,
:last_lap_time,
:sector_one_time,
:sector_two_time,
:sector_three_time,
:name,
:race_number
]
Agent.get(name, fn state ->
state
|> Map.get(:racers, [])
|> Enum.map(&Map.take(&1, fields))
end)
end
end
|
lib/rmc/race_state.ex
|
defmodule Augur.Twilio do
@moduledoc """
Twilio service for Augur
Sends texts via Twilio messaging API
Configuration required:
- `account_sid`: Find this on the Twilio Dashboard
- `auth_token`: Find this on the Twilio Dashboard
```
config = %Augur.Twilio{account_sid: "account_sid", auth_token: "auth_token"}
Augur.Service.send_text(config, "from", "to", "Hello!")
```
"""
@enforce_keys [:account_sid, :auth_token]
defstruct [:account_sid, :auth_token, cache: %Augur.Cache{}, finch_name: Augur.Twilio]
defmodule Exception do
@moduledoc """
Exception for Twilio
If Twilio returns something not 2XX, this exception is generated
`body` and `status` are copied directly from the response from Twilio.
`code` and `reason` are from Twilio if provided; Twilio has its own internal
error codes that help debug the problem that was encountered.
"""
defexception [:body, :code, :reason, :status]
@impl true
def message(struct) do
"""
Twilio failed an API request
Error Code: #{struct.code} - #{struct.reason}
Status: #{struct.status}
Body:
#{inspect(struct.body, pretty: true)}
"""
end
end
defimpl Augur.Service do
alias Augur.Twilio
@twilio_base_url "https://api.twilio.com/2010-04-01/Accounts/"
def send_text(config, from, to, message) do
basic_auth = "#{config.account_sid}:#{config.auth_token}" |> Base.encode64()
api_url = "#{@twilio_base_url}#{config.account_sid}/Messages.json"
req_headers = [
{"Authorization", "Basic #{basic_auth}"},
{"Content-Type", "application/x-www-form-urlencoded"}
]
req_body =
URI.encode_query(%{
"Body" => message,
"From" => from,
"To" => to
})
request = Finch.build(:post, api_url, req_headers, req_body)
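# Twilio responds 201 Created when the message is accepted for delivery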
case Finch.request(request, config.finch_name) do
{:ok, %{status: 201}} ->
:ok
{:ok, %{body: body, status: 400}} ->
body = Jason.decode!(body)
exception = %Twilio.Exception{
code: body["code"],
body: body,
reason: body["message"],
status: 400
}
{:error, exception}
{:ok, %{body: body, status: status}} ->
body = Jason.decode!(body)
exception = %Twilio.Exception{
body: body,
reason: "Unknown error",
status: status
}
{:error, exception}
{:error, exception} ->
# Propagate transport-level errors from Finch (e.g. connection failures)
{:error, exception}
end
end
end
end
|
lib/augur/twilio.ex
|
defmodule Serum.Plugins.RssGenerator do
@moduledoc """
A Serum plugin that creates an RSS feed
## Using the Plugin
# serum.exs:
%{
server_root: "https://example.io",
plugins: [
{Serum.Plugins.RssGenerator, only: :prod}
]
}
"""
@behaviour Serum.Plugin
serum_ver = Version.parse!(Mix.Project.config()[:version])
serum_req = "~> #{serum_ver.major}.#{serum_ver.minor}"
require EEx
alias Serum.GlobalBindings
alias Serum.Page
alias Serum.Post
def name, do: "Create RSS feed for humans"
def version, do: "1.2.0"
def elixir, do: ">= 1.8.0"
def serum, do: unquote(serum_req)
def description do
"Create an RSS feed so that humans can read fresh new posts."
end
def implements, do: [build_succeeded: 3]
def build_succeeded(_src, dest, args) do
{pages, posts} = get_items(args[:for])
dest
|> create_file(pages, posts)
|> Serum.File.write()
|> case do
{:ok, _} -> :ok
{:error, _} = error -> error
end
end
@spec get_items(term()) :: {[Page.t()], [Post.t()]}
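# Normalizes the plugin's `for:` argument: defaults to [:posts], wraps a bare
# atom in a list, then pulls the requested pages/posts from GlobalBindings.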
defp get_items(arg)
defp get_items(nil), do: get_items([:posts])
defp get_items(arg) when not is_list(arg), do: get_items([arg])
defp get_items(arg) do
pages = if :pages in arg, do: GlobalBindings.get(:all_pages), else: []
posts = if :posts in arg, do: GlobalBindings.get(:all_posts), else: []
{pages, posts}
end
rss_path =
:serum
|> :code.priv_dir()
|> Path.join("build_resources")
|> Path.join("rss.xml.eex")
EEx.function_from_file(:defp, :rss_xml, rss_path, [
:pages,
:posts,
:transformer,
:bindings
])
@spec create_file(binary(), [Page.t()], [Post.t()]) :: Serum.File.t()
defp create_file(dest, pages, posts) do
%Serum.File{
dest: Path.join(dest, "rss.xml"),
out_data: rss_xml(pages, posts, &to_rfc822_format/1, bindings())
}
end
defp to_rfc822_format(datetime) do
# RFC 822-style date, e.g. "10 Mar 21 22:43:37 UTC"
# (date format reference: https://www.w3.org/TR/NOTE-datetime)
# Assumes Timex is available as a dependency.
Timex.format!(datetime, "%d %b %y %T %Z", :strftime)
end
defp bindings do
:site
|> GlobalBindings.get()
end
end
|
lib/serum/plugins/rss_generator.ex
|
defmodule KafkaEx.ConsumerGroup do
@moduledoc """
A process that manages membership in a Kafka consumer group.
Consumers in a consumer group coordinate with each other through a Kafka
broker to distribute the work of consuming one or several topics without any
overlap. This is facilitated by the
[Kafka client-side assignment protocol](https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal).
Any time group membership changes (a member joins or leaves the group), a
Kafka broker initiates group synchronization by asking one of the group
members (the leader elected by the broker) to provide partition assignments
for the whole group. KafkaEx uses a round robin partition assignment
algorithm by default. This can be overridden by passing a callback function
in the `:partition_assignment_callback` option. See
`KafkaEx.ConsumerGroup.PartitionAssignment` for details on partition
assignment functions.
A `KafkaEx.ConsumerGroup` process is responsible for:
1. Maintaining membership in a Kafka consumer group.
2. Determining partition assignments if elected as the group leader.
3. Launching and terminating `KafkaEx.GenConsumer` processes based on its
assigned partitions.
To use a `KafkaEx.ConsumerGroup`, a developer must define a module that
implements the `KafkaEx.GenConsumer` behaviour and start a
`KafkaEx.ConsumerGroup` configured to use that module.
The api versions of some of the underlying messages can be specified in the
`:api_versions` option. Note that these will be ignored (api version 0 used)
unless you have `kafka_version: "kayrock"` set in the KafkaEx application
config. The following versions can be specified:
* `:fetch` - Fetch requests - use v2+ for newer versions of Kafka
* `:offset_fetch` - Offset fetch requests - use v1+ for offsets stored in
Kafka (as opposed to zookeeper)
* `:offset_commit` - Offset commit requests - use v1+ to store offsets in
Kafka (as opposed to zookeeper)
## Example
Suppose we want to consume from a topic called `"example_topic"` with a
consumer group named `"example_group"` and we have a `KafkaEx.GenConsumer`
implementation called `ExampleGenConsumer` (see the `KafkaEx.GenConsumer`
documentation). We could start a consumer group in our application's
supervision tree as follows:
```
defmodule MyApp do
use Application
def start(_type, _args) do
consumer_group_opts = [
# setting for the ConsumerGroup
heartbeat_interval: 1_000,
# this setting will be forwarded to the GenConsumer
commit_interval: 1_000
]
gen_consumer_impl = ExampleGenConsumer
consumer_group_name = "example_group"
topic_names = ["example_topic"]
children = [
# ... other children
%{
id: KafkaEx.ConsumerGroup,
start: {
KafkaEx.ConsumerGroup,
:start_link,
[gen_consumer_impl, consumer_group_name, topic_names, consumer_group_opts]
}
}
# ... other children
]
Supervisor.start_link(children, strategy: :one_for_one)
end
end
```
**Note** It is not necessary for the Elixir nodes in a consumer group to be
connected (i.e., using distributed Erlang methods). The coordination of
group consumers is mediated by the broker.
See `start_link/4` for configuration details.
"""
use Supervisor
alias KafkaEx.ConsumerGroup.PartitionAssignment
alias KafkaEx.GenConsumer
@typedoc """
Option values used when starting a consumer group
* `:heartbeat_interval` - How frequently, in milliseconds, to send heartbeats
to the broker. This impacts how quickly we will process partition
changes as consumers start/stop. Default: 5000 (5 seconds).
* `:session_timeout` - Consumer group session timeout in milliseconds.
Default: 30000 (30 seconds). See below.
* `:session_timeout_padding` - Timeout padding for consumer group options.
Default: 10000 (10 seconds). See below.
* Any of `t:KafkaEx.GenConsumer.option/0`,
which will be passed on to consumers
* `:gen_server_opts` - `t:GenServer.options/0` passed on to the manager
GenServer
* `:name` - Name for the consumer group supervisor
* `:max_restarts`, `:max_seconds` - Supervisor restart policy parameters
* `:partition_assignment_callback` - See
`t:KafkaEx.ConsumerGroup.PartitionAssignment.callback/0`
* `:uris` - See `KafkaEx.create_worker/2`
Note `:session_timeout` is registered with the broker and determines how long
before the broker will de-register a consumer from which it has not heard a
heartbeat. This value must be between the broker cluster's configured values
for `group.min.session.timeout.ms` and `group.max.session.timeout.ms` (6000
and 30000 by default). See
[https://kafka.apache.org/documentation/#configuration](https://kafka.apache.org/documentation/#configuration).
You may need to adjust `session_timeout_padding` on high-latency clusters to
avoid timing out when joining/syncing consumer groups.
"""
@type option ::
KafkaEx.GenConsumer.option()
| {:heartbeat_interval, pos_integer}
| {:session_timeout, pos_integer}
| {:session_timeout_padding, pos_integer}
| {:partition_assignment_callback, PartitionAssignment.callback()}
| {:gen_server_opts, GenServer.options()}
| {:name, Supervisor.name()}
| {:max_restarts, non_neg_integer}
| {:max_seconds, non_neg_integer}
| {:uris, KafkaEx.uri()}
@type options :: [option]
@doc """
Starts a consumer group process tree process linked to the current process.
This can be used to start a `KafkaEx.ConsumerGroup` as part of a supervision
tree.
`consumer_module` is
- a module that implements the `KafkaEx.GenConsumer`
behaviour.
- a tuple of `{gen_consumer_module, consumer_module}` can substitute another
`GenServer` implementation for `KafkaEx.GenConsumer`. When a single module
is passed it is transformed to `{KafkaEx.GenConsumer, consumer_module}`.
`group_name` is the name of the consumer group.
`topics` is a list of topics that the consumer group should consume from.
`opts` can be composed of options for the supervisor as well as for the
`KafkaEx.GenConsumer` processes that will be spawned by the supervisor. See
`t:option/0` for details.
*Note* When starting a consumer group with multiple topics, you should
propagate this configuration change to your consumers. If you add a topic to
an existing consumer group from a single consumer, it may take a long time
to propagate depending on the leader election process.
### Return Values
This function has the same return values as `Supervisor.start_link/3`.
"""
@spec start_link(module | {module, module}, binary, [binary], options) ::
Supervisor.on_start()
def start_link(consumer_module, group_name, topics, opts \\ [])
def start_link(consumer_module, group_name, topics, opts)
when is_atom(consumer_module) do
start_link({KafkaEx.GenConsumer, consumer_module}, group_name, topics, opts)
end
def start_link(
{gen_consumer_module, consumer_module},
group_name,
topics,
opts
) do
{supervisor_opts, module_opts} =
Keyword.split(opts, [:name, :strategy, :max_restarts, :max_seconds])
Supervisor.start_link(
__MODULE__,
{{gen_consumer_module, consumer_module}, group_name, topics, module_opts},
supervisor_opts
)
end
@doc """
Returns the generation id of the consumer group.
The generation id is provided by the broker on sync. Returns `nil` if
queried before the initial sync has completed.
"""
@spec generation_id(Supervisor.supervisor(), timeout) :: integer | nil
def generation_id(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :generation_id, timeout)
end
@doc """
Returns the consumer group member id
The id is assigned by the broker. Returns `nil` if queried before the
initial sync has completed.
"""
@spec member_id(Supervisor.supervisor(), timeout) :: binary | nil
def member_id(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :member_id, timeout)
end
@doc """
Returns the member id of the consumer group's leader
This is provided by the broker on sync. Returns `nil` if queried before the
initial sync has completed
"""
@spec leader_id(Supervisor.supervisor(), timeout) :: binary | nil
def leader_id(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :leader_id, timeout)
end
@doc """
Returns true if this consumer is the leader of the consumer group
Leaders are elected by the broker and are responsible for assigning
partitions. Returns false if queried before the initial sync has completed.
"""
@spec leader?(Supervisor.supervisor(), timeout) :: boolean
def leader?(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :am_leader, timeout)
end
@doc """
Returns a list of topic and partition assignments for which this consumer is
responsible.
These are assigned by the leader and communicated by the broker on sync.
"""
@spec assignments(Supervisor.supervisor(), timeout) :: [
{topic :: binary, partition_id :: non_neg_integer}
]
def assignments(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :assignments, timeout)
end
@doc """
Returns the pid of the `KafkaEx.GenConsumer.Supervisor` that supervises this
member's consumers.
Returns `nil` if called before the initial sync.
"""
@spec consumer_supervisor_pid(Supervisor.supervisor(), timeout) :: nil | pid
def consumer_supervisor_pid(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :consumer_supervisor_pid, timeout)
end
@doc """
Returns the pids of consumer processes
"""
@spec consumer_pids(Supervisor.supervisor()) :: [pid]
def consumer_pids(supervisor_pid) do
supervisor_pid
|> consumer_supervisor_pid
|> GenConsumer.Supervisor.child_pids()
end
@doc """
Returns the name of the consumer group
"""
@spec group_name(Supervisor.supervisor(), timeout) :: binary
def group_name(supervisor_pid, timeout \\ 5000) do
call_manager(supervisor_pid, :group_name, timeout)
end
@doc """
Returns a map from `{topic, partition_id}` to consumer pid
"""
@spec partition_consumer_map(Supervisor.supervisor()) :: %{
{topic :: binary, partition_id :: non_neg_integer} => pid
}
def partition_consumer_map(supervisor_pid) do
supervisor_pid
|> consumer_pids
|> Enum.into(%{}, fn pid ->
{GenConsumer.partition(pid), pid}
end)
end
@doc """
Returns true if at least one child consumer process is alive
"""
@spec active?(Supervisor.supervisor(), timeout) :: boolean
def active?(supervisor_pid, timeout \\ 5000) do
consumer_supervisor = consumer_supervisor_pid(supervisor_pid, timeout)
if consumer_supervisor && Process.alive?(consumer_supervisor) do
GenConsumer.Supervisor.active?(consumer_supervisor)
else
false
end
end
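# Introspection sketch (assumes `pid` is a running consumer group supervisor;
# return values shown are illustrative only):
#
#     KafkaEx.ConsumerGroup.generation_id(pid) # => 1
#     KafkaEx.ConsumerGroup.leader?(pid)       # => true | false
#     KafkaEx.ConsumerGroup.assignments(pid)   # => [{"topic_a", 0}, ...]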
@doc """
Returns the pid of the `KafkaEx.ConsumerGroup.Manager` process for the
given consumer group supervisor.
Intended for introspection usage only.
"""
@spec get_manager_pid(Supervisor.supervisor()) :: pid
def get_manager_pid(supervisor_pid) do
{_, pid, _, _} =
Enum.find(
Supervisor.which_children(supervisor_pid),
fn
{KafkaEx.ConsumerGroup.Manager, _, _, _} -> true
{_, _, _, _} -> false
end
)
pid
end
# used by ConsumerGroup.Manager to set partition assignments
@doc false
def start_consumer(
pid,
{gen_consumer_module, consumer_module},
group_name,
assignments,
opts
) do
child =
supervisor(
KafkaEx.GenConsumer.Supervisor,
[{gen_consumer_module, consumer_module}, group_name, assignments, opts],
id: :consumer
)
case Supervisor.start_child(pid, child) do
{:ok, consumer_pid} -> {:ok, consumer_pid}
{:ok, consumer_pid, _info} -> {:ok, consumer_pid}
end
end
# used by ConsumerGroup to pause consumption during rebalance
@doc false
def stop_consumer(pid) do
case Supervisor.terminate_child(pid, :consumer) do
:ok ->
Supervisor.delete_child(pid, :consumer)
{:error, :not_found} ->
:ok
end
end
@doc false
def init({{gen_consumer_module, consumer_module}, group_name, topics, opts}) do
opts = Keyword.put(opts, :supervisor_pid, self())
children = [
worker(
KafkaEx.ConsumerGroup.Manager,
[{gen_consumer_module, consumer_module}, group_name, topics, opts]
)
]
Supervisor.init(children,
strategy: :one_for_all,
max_restarts: 0,
max_seconds: 1
)
end
defp call_manager(supervisor_pid, call, timeout) do
supervisor_pid
|> get_manager_pid
|> GenServer.call(call, timeout)
end
end
|
lib/kafka_ex/consumer_group.ex
| 0.938379
| 0.845751
|
consumer_group.ex
|
starcoder
|
defmodule ElixirRPG.World do
use GenServer
alias ElixirRPG.World
alias ElixirRPG.Entity
alias ElixirRPG.Entity.EntityStore
require Logger
@initial_state %World.Data{target_tick_rate: 15, last_tick: nil}
def start_link(name, live_view_frontend \\ nil) when is_atom(name) do
GenServer.start_link(__MODULE__, [args: {name, live_view_frontend}], name: name)
end
def add_system(world, system) when is_pid(world) and is_atom(system) do
GenServer.cast(world, {:add_system, system})
end
def add_entity(world, type) when is_pid(world) and is_atom(type) do
GenServer.cast(world, {:add_entity, type})
end
def pause(world) when is_pid(world) do
GenServer.cast(world, :pause)
end
def resume(world) when is_pid(world) do
GenServer.cast(world, :resume)
end
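# Usage sketch (MovementSystem and :goblin are hypothetical; systems and entity
# types are defined elsewhere in the application):
#
#     {:ok, world} = ElixirRPG.World.start_link(:overworld)
#     ElixirRPG.World.add_system(world, MovementSystem)
#     ElixirRPG.World.add_entity(world, :goblin)
#     ElixirRPG.World.pause(world)
#     ElixirRPG.World.resume(world)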
@impl GenServer
def init(args: args) do
state = @initial_state
{world_name, liveview_pid} = args
clock_ref = World.Clock.start_tick(state.target_tick_rate, self())
curr_time = :os.system_time(:millisecond)
{:ok,
%World.Data{
state
| name: world_name,
frontend: liveview_pid,
clock: clock_ref,
last_tick: curr_time
}}
end
@impl GenServer
def handle_cast(:pause, current_state) do
Logger.info("PAUSE WORLD: #{current_state.name}")
{:noreply, %{current_state | playing: false}}
end
def handle_cast(:resume, current_state) do
Logger.info("RESUME WORLD: #{current_state.name}")
{:noreply, %{current_state | playing: true}}
end
def handle_cast({:add_system, system}, current_state) do
{:noreply, %World.Data{current_state | systems: [system | current_state.systems]}}
end
def handle_cast({:remove_entity, entity}, current_state) do
# Exit reason 0 is neither :normal nor :kill, so a non-trapping entity
# process is terminated by this signal.
Process.exit(entity, 0)
{:noreply, current_state}
end
def handle_cast({:add_entity, entity_type}, current_state) do
Entity.create_entity(entity_type, current_state.name)
{:noreply, current_state}
end
@impl GenServer
def handle_call(message, from, current_state) do
Logger.warn("Unkown message type #{inspect(message)}, from #{inspect(from)}")
{:reply, :ok, current_state}
end
@impl GenServer
def handle_info(:tick, current_state) do
curr_time = :os.system_time(:millisecond)
last_tick_time = current_state.last_tick
delta_time = (curr_time - last_tick_time) / 1000
if current_state.playing do
Enum.each(current_state.systems, fn system ->
ents =
system.wants()
|> EntityStore.get_entities_with(current_state.name)
system.__tick(ents, current_state.name, current_state.frontend, delta_time)
end)
end
update_frontend_world_state(current_state)
flush_frontend_backbuffer(current_state)
{:noreply, %World.Data{current_state | last_tick: curr_time}}
end
def update_frontend_world_state(state) do
send(
state.frontend,
{:_force_update,
fn socket ->
Phoenix.LiveView.assign(socket, :world_data, state)
end}
)
end
def flush_frontend_backbuffer(state) do
send(state.frontend, :_flush_backbuffer)
end
end
|
lib/world/world.ex
| 0.627723
| 0.409398
|
world.ex
|
starcoder
|
defmodule TinyColor.HSL do
@moduledoc """
Represents a color in the form of hue, saturation, lightness, and an optional alpha channel
"""
defstruct hue: 0.0, saturation: 0.0, lightness: 0.0, alpha: 1.0
import TinyColor.Normalize
@doc ~S"""
Returns a string representation of this color: "hsl(...)" when alpha == 1.0, otherwise "hsla(...)" (also available explicitly via the :hsla type argument).
## Examples
iex> TinyColor.HSL.to_string(%TinyColor.HSL{hue: 128.0, saturation: 47.0, lightness: 50.0})
"hsl(128, 47%, 50%)"
iex> TinyColor.HSL.to_string(%TinyColor.HSL{hue: 128.0, saturation: 47.0, lightness: 50.0, alpha: 0.5})
"hsla(128, 47%, 50%, 0.5)"
iex> TinyColor.HSL.to_string(%TinyColor.HSL{hue: 128.0, saturation: 47.0, lightness: 50.0}, :hsla)
"hsla(128, 47%, 50%, 1.0)"
iex> TinyColor.HSL.to_string(%TinyColor.HSL{hue: 128.0, saturation: 47.0, lightness: 50.0, alpha: 0.5}, :hsla)
"hsla(128, 47%, 50%, 0.5)"
"""
def to_string(struct, type \\ nil)
def to_string(%__MODULE__{hue: h, saturation: s, lightness: l, alpha: alpha}, :hsla) do
"hsla(#{round(h)}, #{round(s)}%, #{round(l)}%, #{Float.round(alpha, 4)})"
end
def to_string(%__MODULE__{hue: h, saturation: s, lightness: l, alpha: 1.0}, _) do
"hsl(#{round(h)}, #{round(s)}%, #{round(l)}%)"
end
def to_string(%__MODULE__{} = struct, _) do
to_string(struct, :hsla)
end
def new(hue, saturation, lightness, alpha \\ 1.0) do
%__MODULE__{
hue: cast(hue, :hue),
saturation: cast(saturation, :saturation),
lightness: cast(lightness, :lightness),
alpha: cast(alpha, :alpha)
}
end
def percentages(%TinyColor.HSL{hue: h, saturation: s, lightness: l, alpha: a}) do
{
h / 360,
s / 100,
l / 100,
a
}
end
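# Example (assuming cast/2 from TinyColor.Normalize leaves in-range values
# unchanged; string output matches the doctests above):
#
#     color = TinyColor.HSL.new(128, 47, 50)
#     TinyColor.HSL.to_string(color)   # => "hsl(128, 47%, 50%)"
#     TinyColor.HSL.percentages(color) # => {0.355..., 0.47, 0.5, 1.0}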
defimpl String.Chars do
def to_string(struct) do
TinyColor.HSL.to_string(struct)
end
end
defimpl Jason.Encoder do
def encode(value, opts) do
Jason.Encode.string(TinyColor.HSL.to_string(value), opts)
end
end
defimpl Phoenix.HTML.Safe do
def to_iodata(value), do: to_string(value)
end
end
|
lib/tiny_color/spaces/hsl.ex
| 0.893244
| 0.643567
|
hsl.ex
|
starcoder
|
defmodule FloUI.Tabs do
alias Scenic.Graph
alias Scenic.Primitive
@moduledoc ~S"""
## Usage in SnapFramework
The following example uses FloUI.Tabs and Grid to lay the tabs out. Iterates over the @tabs assign to render each tab.
@module.get_tab_width runs FontMetrics on the label to get the width.
``` elixir
<%= graph font_size: 20 %>
<%= component FloUI.Tabs, {@active_tab, @tabs}, id: :tabs do %>
<%= component FloUI.Grid, %{
start_xy: {0, 0},
max_xy: {@module.get_tabs_width(@tabs), 40}
} do %>
<%= for {{label, cmp}, i} <- Enum.with_index(@tabs) do %>
<%= component FloUI.Tab,
{label, cmp},
selected?: i == 0,
id: :"#{label}",
width: @module.get_tab_width(label),
height: 40
%>
<% end %>
<% end %>
<% end %>
```
"""
use SnapFramework.Component,
name: :tabs,
template: "lib/tabs/tabs.eex",
controller: FloUI.TabsController,
assigns: [
active_tab: nil,
active_pid: nil,
tabs: nil
],
opts: []
defcomponent(:tabs, :any)
use_effect([assigns: [active_tab: :any]],
run: [:on_tab_change]
)
def setup(%{assigns: %{data: {active_tab, tabs}}} = scene) do
scene |> assign(active_tab: active_tab, tabs: tabs)
end
def process_info({:tab_pid, pid}, scene) do
{:noreply, assign(scene, active_pid: pid)}
end
def process_event(
{:select_tab, cmp},
pid,
%{assigns: %{active_tab: active_tab, active_pid: active_pid}} = scene
)
when cmp != active_tab do
GenServer.call(active_pid, {:put, false})
scene =
scene
|> assign(active_tab: cmp, active_pid: pid)
{:cont, {:select_tab, cmp}, scene}
end
def process_event({:select_tab, _cmp}, _pid, scene) do
{:noreply, scene}
end
def process_event(event, _, scene) do
{:cont, event, scene}
end
end
|
lib/tabs/tabs.ex
| 0.801081
| 0.70724
|
tabs.ex
|
starcoder
|
defmodule Rummage.Ecto.CustomHook.KeysetPaginate do
@moduledoc """
`Rummage.Ecto.CustomHook.KeysetPaginate` is an example of a Custom Hook that
comes with `Rummage.Ecto`.
This module uses `keyset` pagination to add a pagination query expression
on top of a given `Ecto.Queryable`.
For more information on Keyset Pagination, check this
[article](http://use-the-index-luke.com/no-offset)
NOTE: This module doesn't return a list of entries, but an `Ecto.Query.t`.
This module `uses` `Rummage.Ecto.Hook`.
_____________________________________________________________________________
# ABOUT:
## Arguments:
This Hook expects a `queryable` (an `Ecto.Queryable`) and
`paginate_params` (a `Map`). The map should be in the format:
`%{per_page: 10, page: 1, last_seen_pk: 10, pk: :id}`
Details:
* `per_page`: Specifies the entries in each page.
* `page`: Specifies the `page` number.
* `last_seen_pk`: Specifies the primary key value of the last seen entry;
this hook uses this value instead of an offset.
* `pk`: Specifies what's the `primary_key` for the entries being paginated.
Cannot be `nil`
For example, if we want to paginate products (primary_key = :id), we would
do the following:
```elixir
Rummage.Ecto.CustomHook.KeysetPaginate.run(Product,
%{per_page: 10, page: 1, last_seen_pk: 10, pk: :id})
```
## When to Use KeysetPaginate?
- Keyset Pagination is mainly here to make pagination faster for complex
pages. It is recommended that you use `Rummage.Ecto.Hook.Paginate` for a
simple pagination operation, as this module makes a lot of assumptions and
applies its own ordering on top of the given query.
NOTE: __It is not recommended to use this with the native sort hook__
_____________________________________________________________________________
# ASSUMPTIONS/NOTES:
* This Hook assumes that the queried `Ecto.Schema` has a `primary_key`.
* This Hook also orders the query by ascending `primary_key`
_____________________________________________________________________________
# USAGE
```elixir
alias Rummage.Ecto.CustomHook.KeysetPaginate
queryable = KeysetPaginate.run(Parent,
%{per_page: 10, page: 1, last_seen_pk: 10, pk: :id})
```
This module can be used by overriding the default module. This can be done
in the following ways:
In the `Rummage.Ecto` call:
```elixir
Rummage.Ecto.rummage(queryable, rummage,
paginate: Rummage.Ecto.CustomHook.KeysetPaginate)
```
OR
Globally for all models in `config.exs`:
```elixir
config :my_app,
Rummage.Ecto,
paginate: Rummage.Ecto.CustomHook.KeysetPaginate
```
OR
When `using` Rummage.Ecto with an `Ecto.Schema`:
```elixir
defmodule MySchema do
use Rummage.Ecto, repo: SomeRepo,
paginate: Rummage.Ecto.CustomHook.KeysetPaginate
end
```
"""
use Rummage.Ecto.Hook
import Ecto.Query
@expected_keys ~w(per_page page last_seen_pk pk)a
@err_msg "Error in params, No values given for keys: "
@per_page 10
@doc """
This is the callback implementation of `Rummage.Ecto.Hook.run/2`.
Builds a paginate `Ecto.Query.t` on top of a given `Ecto.Query.t` variable
with given `params`.
Besides an `Ecto.Query.t`, an `Ecto.Schema` module can also be passed, as it
implements `Ecto.Queryable`.
Params is a `Map` which is expected to have the keys `#{Enum.join(@expected_keys, ", ")}`.
If an expected key isn't given, a `Runtime Error` is raised.
## Examples
When an empty map is passed as `params`:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> KeysetPaginate.run(Parent, %{})
** (RuntimeError) Error in params, No values given for keys: per_page, page, last_seen_pk, pk
When a non-empty map is passed as `params`, but with a missing key:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> KeysetPaginate.run(Parent, %{per_page: 10})
** (RuntimeError) Error in params, No values given for keys: page, last_seen_pk, pk
When a valid map of params is passed with an `Ecto.Schema` module:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> params = %{per_page: 10, page: 1, last_seen_pk: 0, pk: :id}
iex> KeysetPaginate.run(Rummage.Ecto.Product, params)
#Ecto.Query<from p in Rummage.Ecto.Product, where: p.id > ^0, limit: ^10>
When the `queryable` passed is an `Ecto.Query` variable:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> import Ecto.Query
iex> queryable = from u in "products"
#Ecto.Query<from p in "products">
iex> params = %{per_page: 10, page: 1, last_seen_pk: 0, pk: :id}
iex> KeysetPaginate.run(queryable, params)
#Ecto.Query<from p in "products", where: p.id > ^0, limit: ^10>
More examples:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> import Ecto.Query
iex> params = %{per_page: 5, page: 5, last_seen_pk: 25, pk: :id}
iex> queryable = from u in "products"
#Ecto.Query<from p in "products">
iex> KeysetPaginate.run(queryable, params)
#Ecto.Query<from p in "products", where: p.id > ^25, limit: ^5>
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> import Ecto.Query
iex> params = %{per_page: 5, page: 1, last_seen_pk: 0, pk: :some_id}
iex> queryable = from u in "products"
#Ecto.Query<from p in "products">
iex> KeysetPaginate.run(queryable, params)
#Ecto.Query<from p in "products", where: p.some_id > ^0, limit: ^5>
"""
@spec run(Ecto.Query.t(), map()) :: Ecto.Query.t()
def run(queryable, paginate_params) do
:ok = validate_params(paginate_params)
handle_paginate(queryable, paginate_params)
end
# Helper function which handles addition of paginated query on top of
# the sent queryable variable
defp handle_paginate(queryable, paginate_params) do
per_page = Map.get(paginate_params, :per_page)
last_seen_pk = Map.get(paginate_params, :last_seen_pk)
pk = Map.get(paginate_params, :pk)
queryable
|> where([p1, ...], field(p1, ^pk) > ^last_seen_pk)
|> limit(^per_page)
end
# Helper function that validates the list of params based on
# @expected_keys list
defp validate_params(params) do
key_validations = Enum.map(@expected_keys, &Map.fetch(params, &1))
case Enum.filter(key_validations, & &1 == :error) do
[] -> :ok
_ -> raise @err_msg <> missing_keys(key_validations)
end
end
# Helper function used to build error message using missing keys
defp missing_keys(key_validations) do
key_validations
|> Enum.with_index()
|> Enum.filter(fn {v, _i} -> v == :error end)
|> Enum.map(fn {_v, i} -> Enum.at(@expected_keys, i) end)
|> Enum.map(&to_string/1)
|> Enum.join(", ")
end
@doc """
Callback implementation for `Rummage.Ecto.Hook.format_params/2`.
This function takes an `Ecto.Query.t` or `queryable`, `paginate_params` which
will be passed to the `run/2` function, but also takes a list of options,
`opts`.
The function expects `opts` to include a `repo` key which points to the
`Ecto.Repo` which will be used to calculate the `total_count` and `max_page`
for this paginate hook module.
## Examples
When a `repo` isn't passed in `opts` it gives an error:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> alias Rummage.Ecto.Category
iex> KeysetPaginate.format_params(Category, %{per_page: 1, page: 1}, [])
** (RuntimeError) Expected key `repo` in `opts`, got []
When the given `paginate_params` are incomplete, defaults are used to populate the params:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> alias Rummage.Ecto.Category
iex> Ecto.Adapters.SQL.Sandbox.checkout(Rummage.Ecto.Repo)
iex> KeysetPaginate.format_params(Category, %{}, [repo: Rummage.Ecto.Repo])
%{max_page: 0, page: 1, per_page: 10, total_count: 0, pk: :id,
last_seen_pk: 0}
When `paginate_params` and `opts` given are valid:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> alias Rummage.Ecto.Category
iex> paginate_params = %{
...> per_page: 1,
...> page: 1
...> }
iex> repo = Rummage.Ecto.Repo
iex> Ecto.Adapters.SQL.Sandbox.checkout(repo)
iex> KeysetPaginate.format_params(Category, paginate_params, [repo: repo])
%{max_page: 0, last_seen_pk: 0, page: 1,
per_page: 1, total_count: 0, pk: :id}
When `paginate_params` and `opts` given are valid:
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> alias Rummage.Ecto.Category
iex> paginate_params = %{
...> per_page: 1,
...> page: 1
...> }
iex> repo = Rummage.Ecto.Repo
iex> Ecto.Adapters.SQL.Sandbox.checkout(repo)
iex> repo.insert!(%Category{name: "name"})
iex> repo.insert!(%Category{name: "name2"})
iex> KeysetPaginate.format_params(Category, paginate_params, [repo: repo])
%{max_page: 2, last_seen_pk: 0, page: 1,
per_page: 1, total_count: 2, pk: :id}
When `paginate_params` and `opts` given are valid and when the `queryable`
passed has a `primary_key` defaulted to `id`.
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> alias Rummage.Ecto.Category
iex> paginate_params = %{
...> per_page: 1,
...> page: 1
...> }
iex> repo = Rummage.Ecto.Repo
iex> Ecto.Adapters.SQL.Sandbox.checkout(repo)
iex> repo.insert!(%Category{name: "name"})
iex> repo.insert!(%Category{name: "name2"})
iex> KeysetPaginate.format_params(Category, paginate_params, [repo: repo])
%{max_page: 2, last_seen_pk: 0, page: 1,
per_page: 1, total_count: 2, pk: :id}
When `paginate_params` and `opts` given are valid and when the `queryable`
passed has a custom `primary_key`.
iex> alias Rummage.Ecto.CustomHook.KeysetPaginate
iex> alias Rummage.Ecto.Product
iex> paginate_params = %{
...> per_page: 1,
...> page: 2
...> }
iex> repo = Rummage.Ecto.Repo
iex> Ecto.Adapters.SQL.Sandbox.checkout(repo)
iex> repo.insert!(%Product{internal_code: "100"})
iex> repo.insert!(%Product{internal_code: "101"})
iex> KeysetPaginate.format_params(Product, paginate_params, [repo: repo])
%{max_page: 2, last_seen_pk: 1, page: 2,
per_page: 1, total_count: 2, pk: :internal_code}
"""
@spec format_params(Ecto.Query.t(), map(), keyword()) :: map()
def format_params(queryable, paginate_params, opts) do
paginate_params = populate_params(queryable, paginate_params, opts)
case Keyword.get(opts, :repo) do
nil -> raise "Expected key `repo` in `opts`, got #{inspect(opts)}"
repo -> get_params(queryable, paginate_params, repo)
end
end
# Helper function that populate the list of params based on
# @expected_keys list
defp populate_params(queryable, params, opts) do
params = params
|> Map.put_new(:per_page, Keyword.get(opts, :per_page, @per_page))
|> Map.put_new(:pk, pk(queryable))
|> Map.put_new(:page, 1)
Map.put_new(params, :last_seen_pk, get_last_seen(params))
end
# Helper function which gets the default last_seen_pk from
# page and per_page
defp get_last_seen(params) do
Map.get(params, :per_page) * (Map.get(params, :page) - 1)
end
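# Worked example: with per_page: 10 and page: 3, the default is
# 10 * (3 - 1) = 20, i.e. the query resumes after the 20th primary key.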
# Helper function which gets formatted list of params including
# page, per_page, total_count and max_page keys
defp get_params(queryable, paginate_params, repo) do
per_page = Map.get(paginate_params, :per_page)
total_count = get_total_count(queryable, repo)
max_page = total_count
|> (& &1 / per_page).()
|> Float.ceil()
|> trunc()
%{page: Map.get(paginate_params, :page), pk: Map.get(paginate_params, :pk),
last_seen_pk: Map.get(paginate_params, :last_seen_pk),
per_page: per_page, total_count: total_count, max_page: max_page}
end
# Helper function which gets total count of a queryable based on
# the given repo.
# This excludes operations such as select, preload and order_by
# to make the query more efficient
defp get_total_count(queryable, repo) do
queryable
|> exclude(:select)
|> exclude(:preload)
|> exclude(:order_by)
|> get_count(repo, pk(queryable))
end
# This function gets count of a query and repo passed.
# A primary key must be passed and it just counts
# the distinct primary keys
defp get_count(query, repo, pk) do
query = select(query, [s], count(field(s, ^pk), :distinct))
hd(apply(repo, :all, [query]))
end
# Helper function which returns the primary key associated with a
# Queryable.
defp pk(queryable) do
schema = is_map(queryable) && elem(queryable.from, 1) || queryable
case schema.__schema__(:primary_key) do
[] -> nil
list -> hd(list)
end
end
end
|
lib/rummage_ecto/custom_hooks/keyset_paginate.ex
| 0.819063
| 0.879302
|
keyset_paginate.ex
|
starcoder
|
defmodule Alfred.Result do
@moduledoc """
Represents a result to be displayed in an Alfred search list.
Every result is required to have a title and a subtitle. Beyond this, there are many optional
attributes that are helpful in various scenarios:
* `:arg` *(recommended)* — Text that is passed to connected output actions in workflows
* `:autocomplete` *(recommended)* — Text which is populated into Alfred's search field if
the user autocompletes the result
* `:quicklookurl` — URL which will be visible if the user uses the Quick Look feature
* `:uid` — Used to track an item across invocations so that Alfred can do its frecency sorting
* `:valid` — When `false` it means that the result cannot be selected
**See:** [Script Filter JSON Format](https://www.alfredapp.com/help/workflows/inputs/script-filter/json/)
"""
alias Alfred.ResultList
@type t :: %__MODULE__{
arg: String.t(),
autocomplete: String.t(),
quicklookurl: String.t(),
title: String.t(),
subtitle: String.t(),
uid: String.t(),
valid: boolean
}
defstruct [:title, :subtitle, :arg, :autocomplete, :quicklookurl, :uid, :valid]
@doc """
Creates a new generic result.
## Examples
Basic result:
```
iex> Alfred.Result.new("title", "subtitle")
%Alfred.Result{subtitle: "subtitle", title: "title"}
```
Result with some optional attributes:
```
iex> Alfred.Result.new("title", "subtitle", arg: "output", valid: false, uid: "test")
%Alfred.Result{arg: "output", subtitle: "subtitle", title: "title", uid: "test", valid: false}
```
"""
@spec new(String.t(), String.t(), Keyword.t()) :: t
def new(title, subtitle, options \\ [])
def new(nil, _, _), do: raise(ArgumentError, "Result title is required")
def new(_, nil, _), do: raise(ArgumentError, "Result subtitle is required")
def new(title, subtitle, options) do
ensure_not_blank(title, :title)
ensure_not_blank(subtitle, :subtitle)
add_options(%__MODULE__{title: title, subtitle: subtitle}, options)
end
@doc """
Creates a new URL result.
## Examples
Basic URL result:
iex> Alfred.Result.new_url("title", "http://www.example.com")
%Alfred.Result{arg: "http://www.example.com", autocomplete: "title",
quicklookurl: "http://www.example.com", subtitle: "http://www.example.com", title: "title",
uid: "http://www.example.com", valid: nil}
"""
@spec new_url(String.t(), String.t()) :: t
def new_url(title, url),
do: new(title, url, arg: url, autocomplete: title, quicklookurl: url, uid: url)
@doc """
Converts the results to the [expected JSON output format](https://www.alfredapp.com/help/workflows/inputs/script-filter/json/).
"""
@spec to_json(t) :: String.t()
def to_json(results) do
results
|> ResultList.new()
|> ResultList.to_json()
end
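# Sketch (ResultList.new/1 is assumed to accept a single result or a list; the
# exact JSON shape is determined by Alfred.ResultList, so the output shown is
# approximate):
#
#     Alfred.Result.new("title", "subtitle") |> Alfred.Result.to_json()
#     # => ~s({"items":[{"title":"title","subtitle":"subtitle"}]})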
defp add_options(struct, []), do: struct
defp add_options(struct, [{:uid, value} | rest]),
do: add_options(add_uid_option(struct, value), rest)
defp add_options(struct, [{:valid, value} | rest]),
do: add_options(add_valid_option(struct, value), rest)
defp add_options(struct, [{key, value} | rest]),
do: add_options(Map.put(struct, key, value), rest)
defp add_uid_option(struct, value) when is_binary(value), do: Map.put(struct, :uid, value)
defp add_uid_option(_, _), do: raise(ArgumentError, "uid must be a string value")
defp add_valid_option(struct, true), do: struct
defp add_valid_option(struct, false), do: Map.put(struct, :valid, false)
defp add_valid_option(_, _), do: raise(ArgumentError, "valid must be either true or false")
defp ensure_not_blank(text, arg_name) do
case String.trim(text) do
"" -> raise ArgumentError, "#{arg_name} cannot be blank"
_ -> nil
end
end
end
|
lib/alfred/result.ex
| 0.901707
| 0.890913
|
result.ex
|
starcoder
|
defmodule Baud do
@moduledoc """
Serial port module.
```elixir
tty = case :os.type() do
{:unix, :darwin} -> "cu.usbserial-FTYHQD9MA"
{:unix, :linux} -> "ttyUSB0"
{:win32, :nt} -> "COM5"
end
#Try this with a loopback
{:ok, pid} = Baud.start_link([device: tty])
Baud.write pid, "01234\\n56789\\n98765\\n43210"
{:ok, "01234\\n"} = Baud.readln pid
{:ok, "56789\\n"} = Baud.readln pid
{:ok, "98765\\n"} = Baud.readln pid
{:to, "43210"} = Baud.readln pid
Baud.write pid, "01234\\r56789\\r98765\\r43210"
{:ok, "01234\\r"} = Baud.readch pid, 0x0d
{:ok, "56789\\r"} = Baud.readch pid, 0x0d
{:ok, "98765\\r"} = Baud.readch pid, 0x0d
{:to, "43210"} = Baud.readch pid, 0x0d
Baud.write pid, "01234\\n56789\\n98765\\n43210"
{:ok, "01234\\n"} = Baud.readn pid, 6
{:ok, "56789\\n"} = Baud.readn pid, 6
{:ok, "98765\\n"} = Baud.readn pid, 6
{:to, "43210"} = Baud.readn pid, 6
Baud.write pid, "01234\\n"
Baud.write pid, "56789\\n"
Baud.write pid, "98765\\n"
Baud.write pid, "43210"
:timer.sleep 100
{:ok, "01234\\n56789\\n98765\\n43210"} = Baud.readall pid
```
"""
@doc """
Starts the serial server.
`params` *must* be a keyword list; it is merged with the following defaults:
```elixir
[
device: nil, #serial port name: "COM1", "ttyUSB0", "cu.usbserial-FTYHQD9MA"
speed: 9600, #either 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200
#win32 adds 14400, 128000, 256000
config: "8N1", #either "8N1", "7E1", "7O1"
]
```
`opts` is optional and is passed verbatim to GenServer.
Returns `{:ok, pid}`.
## Example
```
Baud.start_link([device: "COM8"])
```
"""
def start_link(params, opts \\ []) do
Agent.start_link(fn -> init(params) end, opts)
end
@sleep 1
@to 400
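# @sleep is the polling interval (ms) used while waiting for more bytes;
# @to is the default operation timeout (ms).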
@doc """
Stops the serial server.
Returns `:ok`.
"""
def stop(pid) do
Agent.get(pid, fn {nid, _} ->
:ok = Sniff.close nid
end, @to)
Agent.stop(pid)
end
@doc """
Writes `data` to the serial port.
Returns `:ok`.
"""
def write(pid, data, timeout \\ @to) do
Agent.get(pid, fn {nid, _} ->
:ok = Sniff.write nid, data
end, timeout)
end
@doc """
Reads all available data.
Returns `{:ok, data}`.
"""
def readall(pid, timeout \\ @to) do
Agent.get_and_update(pid, fn {nid, buf} ->
{:ok, data} = Sniff.read nid
all = buf <> data
{{:ok, all}, {nid, <<>>}}
end, timeout)
end
@doc """
Reads `count` bytes.
Returns `{:ok, data} | {:to, partial}`.
"""
def readn(pid, count, timeout \\ @to) do
Agent.get_and_update(pid, fn {nid, buf} ->
now = now()
size = byte_size(buf)
dl = now + timeout
{res, head, tail} = read_n(nid, [buf], size, count, dl)
{{res, head}, {nid, tail}}
end, 2*timeout)
end
@doc """
Reads until 'nl' is received.
Returns `{:ok, line} | {:to, partial}`.
"""
def readln(pid, timeout \\ @to) do
Agent.get_and_update(pid, fn {nid, buf} ->
now = now()
ch = ?\n # newline, codepoint 10
index = index(buf, ch)
size = byte_size(buf)
dl = now + timeout
{res, head, tail} = read_ch(nid, [buf], index, size, ch, dl)
{{res, head}, {nid, tail}}
end, 2*timeout)
end
@doc """
Reads until 'ch' is received.
Returns `{:ok, data} | {:to, partial}`.
"""
def readch(pid, ch, timeout \\ @to) do
Agent.get_and_update(pid, fn {nid, buf} ->
now = now()
index = index(buf, ch)
size = byte_size(buf)
dl = now + timeout
{res, head, tail} = read_ch(nid, [buf], index, size, ch, dl)
{{res, head}, {nid, tail}}
end, 2*timeout)
end
defp init(params) do
device = Keyword.fetch!(params, :device)
speed = Keyword.get(params, :speed, 9600)
config = Keyword.get(params, :config, "8N1")
{:ok, nid} = Sniff.open device, speed, config
{nid, <<>>}
end
defp read_ch(nid, iol, index, size, ch, dl) do
case index >= 0 do
true -> split_i iol, index
false ->
{:ok, data} = Sniff.read nid
case data do
<<>> ->
:timer.sleep @sleep
now = now()
case now > dl do
true -> {:to, all(iol), <<>>}
false -> read_ch(nid, iol, -1, size, ch, dl)
end
_ ->
case index(data, ch) do
-1 -> read_ch(nid, [data | iol], -1,
size + byte_size(data), ch, dl)
index -> read_ch(nid, [data | iol], size + index,
size + byte_size(data), ch, dl)
end
end
end
end
defp read_n(nid, iol, size, count, dl) do
case size >= count do
true -> split_c iol, count
false ->
{:ok, data} = Sniff.read nid
case data do
<<>> ->
:timer.sleep @sleep
now = now()
case now > dl do
true -> {:to, all(iol), <<>>}
false -> read_n(nid, iol, size, count, dl)
end
_ -> read_n(nid, [data | iol], size + byte_size(data),
count, dl)
end
end
end
defp now(), do: :os.system_time :millisecond
#defp now(), do: :erlang.monotonic_time :milli_seconds
defp index(bin, ch) do
case :binary.match(bin, <<ch>>) do
:nomatch -> -1
{index, _} -> index
end
end
defp all(bin) when is_binary(bin) do
bin
end
defp all(list) when is_list(list) do
reversed = Enum.reverse list
:erlang.iolist_to_binary(reversed)
end
defp split_i(bin, index) when is_binary(bin) do
head = :binary.part(bin, {0, index + 1})
tail = :binary.part(bin, {index + 1, byte_size(bin) - index - 1})
{:ok, head, tail}
end
defp split_i(list, index) when is_list(list) do
reversed = Enum.reverse list
bin = :erlang.iolist_to_binary(reversed)
split_i(bin, index)
end
defp split_c(bin, count) when is_binary(bin) do
<<head::bytes-size(count), tail::binary>> = bin
{:ok, head, tail}
end
defp split_c(list, count) when is_list(list) do
reversed = Enum.reverse list
bin = :erlang.iolist_to_binary(reversed)
split_c(bin, count)
end
end
|
lib/Baud.ex
| 0.768733
| 0.571348
|
Baud.ex
|
starcoder
|
defmodule EctoFixtures.Conditioners.Associations do
import EctoFixtures.Conditioners.PrimaryKey, only: [generate_key_value: 3]
def process(data, path) do
table_path = path |> Enum.take(2)
model = get_in(data, table_path ++ [:model])
Enum.reduce model.__schema__(:associations), data, fn(association_name, data) ->
if get_in(data, path ++ [:data, association_name]) do
case model.__schema__(:association, association_name) do
%Ecto.Association.Has{} = association ->
has_association(data, path, association)
%Ecto.Association.HasThrough{} = association ->
has_through_association(data, path, association)
%Ecto.Association.BelongsTo{} = association ->
belongs_to_association(data, path, association)
end
else
data
end
end
end
defp has_association(data, path, %{cardinality: :one} = association) do
data = put_in(data, path ++ [:data, association.field], get_in(data, path ++ [:data, association.field]) |> List.wrap)
has_association(data, path, struct(association, %{cardinality: :many}))
end
defp has_association(data, path, %{cardinality: :many} = association) do
%{field: field, owner_key: owner_key, related_key: related_key} = association
Enum.reduce(get_in(data, path ++ [:data, field]), data, fn(association_expr, data) ->
{ data, association_path } = get_path(data, path, association_expr)
data = generate_key_value(data, path, owner_key)
owner_key_value = get_in(data, path ++ [:data, owner_key])
put_in(data, association_path ++ [:data, related_key], owner_key_value)
|> EctoFixtures.Conditioners.DAG.add_vertex(association_path, get_in(data, [:__DAG__]))
|> EctoFixtures.Conditioners.DAG.add_edge(path, association_path)
end)
|> delete_in(path ++ [:data, field])
end
defp has_through_association(data, path, %{cardinality: :one} = association) do
data = put_in(data, path ++ [:data, association.field], get_in(data, path ++ [:data, association.field]) |> List.wrap)
has_through_association(data, path, struct(association, %{cardinality: :many}))
end
defp has_through_association(data, [source, table_name, :rows, _row_name] = path, %{cardinality: :many} = association) do
%{owner: owner, field: field, through: [through_association_name, inverse_association_name]} = association
through_association = owner.__schema__(:association, through_association_name)
%{owner_key: through_owner_key, related_key: through_related_key, related: through_related} = through_association
inverse_association = through_related.__schema__(:association, inverse_association_name)
%{field: inverse_field, owner_key: inverse_owner_key, related_key: inverse_related_key} = inverse_association
Enum.reduce(get_in(data, path ++ [:data, field]), data, fn(association_expr, data) ->
{ data, inverse_association_path } = get_path(data, path, association_expr)
data = generate_key_value(data, inverse_association_path, inverse_related_key)
through_schema_source =
through_related.__schema__(:source)
|> String.to_atom()
through_row_name =
Enum.join(path, "-") <> ":" <> Enum.join(inverse_association_path, "-")
|> String.to_atom
through_data = %{
source => %{
through_schema_source => %{
model: through_related,
repo: get_in(data, [source, table_name, :repo]),
rows: %{
through_row_name => %{
data: %{
inverse_owner_key => get_in(data, inverse_association_path ++ [:data, inverse_related_key]),
through_related_key => get_in(data, path ++ [:data, through_owner_key])
}
}
}
}
}
}
through_association_path = [source, through_schema_source, :rows, through_row_name]
EctoFixtures.Utils.deep_merge(data, through_data)
|> delete_in(inverse_association_path ++ [:data, inverse_field])
|> EctoFixtures.Conditioners.DAG.add_vertex(inverse_association_path, get_in(data, [:__DAG__]))
|> EctoFixtures.Conditioners.DAG.add_vertex(through_association_path, get_in(data, [:__DAG__]))
|> EctoFixtures.Conditioners.DAG.add_edge(path, through_association_path)
|> EctoFixtures.Conditioners.DAG.add_edge(inverse_association_path, through_association_path)
end)
|> delete_in(path ++ [:data, field])
end
defp belongs_to_association(data, path, association) do
%{field: field, owner_key: owner_key, related_key: related_key} = association
{data, association_path} = get_path(data, path, get_in(data, path ++ [:data, field]))
data = generate_key_value(data, association_path, related_key)
related_key_value = get_in(data, association_path ++ [:data, related_key])
data
|> put_in(path ++ [:data, owner_key], related_key_value)
|> delete_in(path ++ [:data, field])
|> EctoFixtures.Conditioners.DAG.add_vertex(association_path, get_in(data, [:__DAG__]))
|> EctoFixtures.Conditioners.DAG.add_edge(association_path, path)
end
defp get_path(data, path, {{:., _, [{{:., _, [{:fixtures, _, [file_path]}, inverse_table_name]}, _, _}, inverse_row_name]}, _, _}) do
inverse_source = "test/fixtures/#{file_path}.exs"
inverse_source_atom = String.to_atom(inverse_source)
[source, _table_name, :rows, _row_name] = path
inverse_data =
inverse_source
|> EctoFixtures.read()
|> EctoFixtures.parse()
|> filter_by([inverse_source_atom, inverse_table_name, :rows, inverse_row_name])
|> Map.put(:__DAG__, get_in(data, [:__DAG__]))
|> EctoFixtures.Conditioner.process(source: source)
{ EctoFixtures.Utils.deep_merge(data, inverse_data),
[inverse_source_atom, inverse_table_name, :rows, inverse_row_name] }
end
defp get_path(data, path, {{:., _, [{inverse_table_name, _, _}, inverse_row_name]}, _, _}) do
source = List.first(path)
{ data, [source, inverse_table_name, :rows, inverse_row_name] }
end
defp filter_by(data, [file_path, table_name, :rows, row_name] = path) do
filtered_row = get_in(data, path)
filtered_data =
data
|> get_in([file_path, table_name])
|> put_in([:rows], %{row_name => filtered_row})
%{file_path => %{table_name => filtered_data}}
end
defp delete_in(data, path) do
{path, [target]} = Enum.split(path, length(path) - 1)
put_in(data, path, Map.delete(get_in(data, path), target))
end
end
|
lib/ecto/fixtures/conditioners/associations.ex
| 0.618089
| 0.512205
|
associations.ex
|
starcoder
|
defmodule Day11 do
@spec part2(integer(), integer(), integer()) :: any()
def part2(w \\ 300, h \\ 300, serial \\ 9810) do
grid = gen_grid(w, h, serial)
# Brute-forced it with inspect... a 13x13 square (the 13th pass)
# gives the optimal power level
for x1 <- 1..14,
do: {x1, get_max_power(w, h, grid, x1, x1)}
end
defp get_max_power(w, h, grid, x1, y1) do
result =
for x <- 1..(w - x1 + 1),
y <- 1..(h - y1 + 1),
do: {{x, y}, total_power(grid, {x, y}, x1, y1)},
into: %{}
result
|> Map.to_list()
|> Enum.max_by(&elem(&1, 1))
end
def part1(w \\ 300, h \\ 300, serial \\ 9810, x1 \\ 3, y1 \\ 3) do
grid = gen_grid(w, h, serial)
result =
for x <- 1..(w - x1 + 1),
y <- 1..(h - y1 + 1),
do: {{x, y}, total_power(grid, {x, y}, x1, y1)},
into: %{}
result
|> Map.to_list()
|> Enum.max_by(&elem(&1, 1))
end
defp total_power(grid, {x, y}, x1, y1) do
Enum.reduce(x..(x + x1 - 1), 0, fn i, acc ->
total =
Enum.reduce(y..(y + y1 - 1), 0, fn j, count ->
count + Map.get(grid, {i, j})
end)
total + acc
end)
end
defp gen_grid(w, h, serial) do
for x <- 1..w,
y <- 1..h,
do: {{x, y}, power_level(x, y, serial)},
into: %{}
end
defp power_level(x, y, serial) do
rack_id = x + 10
rack_id
|> Kernel.*(y)
|> Kernel.+(serial)
|> Kernel.*(rack_id)
|> rem(1000)
|> div(100)
|> Kernel.-(5)
end
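# Worked example (from the Advent of Code 2018 day 11 statement):
# power_level(3, 5, 8) -> rack_id = 13; 13 * 5 = 65; 65 + 8 = 73; 73 * 13 = 949;
# hundreds digit is 9; 9 - 5 = 4.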
def part3(input) do
serial = String.to_integer(input)
t = :ets.new(:t, [])
:ets.insert(t, {:best, {0, 0, 0, -10000}})
build_table(t, serial)
Enum.each(1..30, fn s ->
Enum.each(1..(300 - s), fn x ->
Enum.each(1..(300 - s), fn y ->
power =
value_at(t, x, y) - value_at(t, x - s, y) - value_at(t, x, y - s) +
value_at(t, x - s, y - s)
[{:best, {_, _, _, p}}] = :ets.lookup(t, :best)
if power > p do
:ets.insert(t, {:best, {x - s + 1, y - s + 1, s, power}})
end
end)
end)
end)
[{:best, {x, y, size, power}}] = :ets.lookup(t, :best)
"#{x},#{y},#{size} @ #{power}"
end
defp value_at(t, x, y) do
case :ets.lookup(t, {x, y}) do
[] -> 0
[{_, v}] -> v
end
end
defp build_table(t, serial) do
Enum.each(1..300, fn x ->
Enum.each(1..300, fn y ->
:ets.insert(t, {
{x, y},
power_level(x, y, serial) + value_at(t, x - 1, y) + value_at(t, x, y - 1) -
value_at(t, x - 1, y - 1)
})
end)
end)
end
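# build_table/2 fills a summed-area table: entry {x, y} holds the sum of power
# levels over the rectangle (1, 1)..(x, y). part3/1 then reads the total of any
# s-by-s square ending at {x, y} in O(1) via inclusion-exclusion:
#
#     sum = T(x, y) - T(x - s, y) - T(x, y - s) + T(x - s, y - s)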
end
|
lib/day11.ex
| 0.544922
| 0.608187
|
day11.ex
|
starcoder
|
defmodule ZenMonitor.Metrics do
@moduledoc """
Metrics helper for monitoring the ZenMonitor system.
"""
alias Instruments.Probe
@doc """
Registers various probes for the ZenMonitor System.
- ERTS message_queue_len for the `ZenMonitor.Local` and `ZenMonitor.Proxy` processes.
- Internal Batch Queue length for `ZenMonitor.Local` (dispatches to be delivered)
- ETS table size for References (number of monitors)
- ETS table size for Subscribers (number of monitored local processes * interested remotes)
"""
@spec register() :: :ok
def register do
Probe.define!(
"zen_monitor.local.message_queue_len",
:gauge,
mfa: {__MODULE__, :message_queue_len, [ZenMonitor.Local]}
)
Probe.define!(
"zen_monitor.proxy.message_queue_len",
:gauge,
mfa: {__MODULE__, :message_queue_len, [ZenMonitor.Proxy]}
)
Probe.define!(
"zen_monitor.local.batch_length",
:gauge,
mfa: {ZenMonitor.Local, :batch_length, []}
)
Probe.define!(
"zen_monitor.local.ets.references.size",
:gauge,
mfa: {__MODULE__, :table_size, [ZenMonitor.Local.Tables.references()]}
)
Probe.define!(
"zen_monitor.proxy.ets.subscribers.size",
:gauge,
mfa: {__MODULE__, :table_size, [ZenMonitor.Proxy.Tables.subscribers()]}
)
:ok
end
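# Usage sketch (hypothetical application module; call once at start-up):
#
#     def start(_type, _args) do
#       :ok = ZenMonitor.Metrics.register()
#       # ... start the supervision tree ...
#     end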
@doc """
Given a pid or a registered name, this will return the message_queue_len as reported by
`Process.info/2`
"""
@spec message_queue_len(target :: nil | pid() | atom()) :: nil | integer()
def message_queue_len(nil), do: nil
def message_queue_len(target) when is_pid(target) do
case Process.info(target, :message_queue_len) do
{:message_queue_len, len} -> len
_ -> nil
end
end
def message_queue_len(target) when is_atom(target) do
target
|> Process.whereis()
|> message_queue_len()
end
@doc """
Given a table identifier, returns the size as reported by `:ets.info/2`
"""
@spec table_size(:ets.tid()) :: nil | integer()
def table_size(tid) do
case :ets.info(tid, :size) do
:undefined -> nil
size -> size
end
end
end
|
lib/zen_monitor/metrics.ex
| 0.743913
| 0.425277
|
metrics.ex
|
starcoder
|
defmodule Nerves.UART do
@moduledoc """
Fake out Nerves.UART, mostly for unit testing.
"""
use GenServer
@enforce_keys [:port, :client]
defstruct port: nil, client: nil, reactions: [], written: []
@opaque t :: %__MODULE__{port: String.t,
client: pid,
reactions: list({String.t | Regex.t, String.t}),
written: [String.t]}
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, [], opts)
end
def init(_) do
{:ok, nil}
end
@doc """
Mimics the real `Nerves.UART.open`. In `opts`, `active: true` is required; no
other options are supported.
"""
@spec open(pid, String.t, [active: true]) :: :ok | {:error, :active_only_supported}
def open(pid, port, opts) do
case Keyword.get(opts, :active) do
true ->
GenServer.call(pid, {:open, port})
:ok
_ -> {:error, :active_only_supported}
end
end
@doc """
Mimics the real `Nerves.UART.write`. What is written, will be stored and retrieved via `written/1`.
"""
@spec write(pid, String.t) :: :ok | {:error, :ebadf}
def write(pid, text) do
GenServer.call(pid, {:write, text})
end
@doc """
Everything written so far via `write/2`
"""
@spec written(pid) :: list(String.t) | {:error, :ebadf}
def written(pid) do
GenServer.call(pid, :written)
end
@doc """
Mimics receiving from the serial port. Sends a message to the client process
(the one that opened this GenServer) in the form of `{:nerves_uart, port, msg}`,
where `port` is the serial port set in `open/3`.
"""
@spec pretend_to_receive(pid, String.t) :: :ok | {:error, :ebadf}
def pretend_to_receive(pid, msg) do
GenServer.call(pid, {:pretend_to_receive, msg})
end
@doc """
When the next write matching `match` occurs, respond as if `reaction_message`
had just been received, mimicking command/response over the wire. The client
process is sent a message in the form of `{:nerves_uart, port, msg}`.
Note that reactions are ordered and one-time.
"""
@spec react_to_next_matching_write(pid, String.t | Regex.t, String.t) :: :ok | {:error, :ebadf}
def react_to_next_matching_write(pid, match, reaction_message) do
GenServer.cast(pid, {:react_to_next_matching_write, match, reaction_message})
end
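# Test-flow sketch (port name and messages are hypothetical):
#
#     {:ok, uart} = Nerves.UART.start_link()
#     :ok = Nerves.UART.open(uart, "ttyUSB0", active: true)
#     :ok = Nerves.UART.react_to_next_matching_write(uart, "AT", "OK\r\n")
#     :ok = Nerves.UART.write(uart, "AT")
#     # the opening process now receives {:nerves_uart, "ttyUSB0", "OK\r\n"}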
@doc """
Removes reactions set with `react_to_next_matching_write/3` and also clears everything recorded via `write/2`.
"""
@spec reset(pid) :: :ok
def reset(pid) do
GenServer.cast(pid, :reset)
end
def handle_call({:open, port}, {from, _}, _) do
{:reply, :ok, %__MODULE__{port: port, client: from}}
end
def handle_call(_, _, nil), do: {:reply, {:error, :ebadf}, nil}
def handle_call({:write, text}, _, s = %{written: written,
reactions: [{react_match, react_message} | react_tail]}) do
if text =~ react_match do
send_to_client(react_message, s)
{:reply, :ok, %{s | written: [text | written], reactions: react_tail}}
else
{:reply, :ok, %{s | written: [text | written]}}
end
end
def handle_call({:write, text}, _, s = %{written: written}) do
{:reply, :ok, %{s | written: [text | written]}}
end
def handle_call(:written, _, s = %{written: written}), do: {:reply, Enum.reverse(written), s}
def handle_call({:pretend_to_receive, msg}, _from, s) do
send_to_client(msg, s)
{:reply, :ok, s}
end
def handle_cast({:react_to_next_matching_write, match, reaction_message}, s = %{reactions: reactions}) do
{:noreply, %{s | reactions: reactions ++ [{match, reaction_message}]}}
end
def handle_cast(:reset, s), do: {:noreply, %{s | reactions: [], written: []}}
defp send_to_client(msg, %{port: port, client: client}), do: send(client, {:nerves_uart, port, msg})
end
|
lib/nerves/uart.ex
| 0.777469
| 0.41742
|
uart.ex
|
starcoder
|
defmodule Bundesbank do
@moduledoc ~S"""
A collection of German bank data including BIC, bank codes, PAN and other useful information, based on the [Bundesbank Data Set](https://www.bundesbank.de/de/aufgaben/unbarer-zahlungsverkehr/serviceangebot/bankleitzahlen/download-bankleitzahlen-602592)
**The current data set is valid until March 3rd, 2021**
"""
@doc """
Returns all banks.
"""
def all do
bundesbank()
end
@doc """
Returns one bank given its code
## Examples
iex> %Bundesbank.Bank{bank_name: bank_name} = Bundesbank.get(50010060)
iex> bank_name
"Postbank Ndl Deutsche Bank"
"""
def get(code) do
[bank] = filter_by(:code, code)
bank
end
@doc """
Filters banks by given key.
Returns a list of `Bundesbank.Bank` structs
Possible keys:
```
[:code, :property, :description, :postal_code, :city, :bank_name, :pan, :bic, :mark_of_conformity, :record_number, :change_code, :delete_code, :emulation_code]
```
## Examples
iex> Bundesbank.filter_by(:bic, "GENODED1KDB")
[%Bundesbank.Bank{bank_name: "KD-Bank Berlin", bic: "GENODED1KDB", change_code: "U", city: "Berlin", code: "10061006", delete_code: "0", description: "Bank für Kirche und Diakonie - KD-Bank Gf Sonder-BLZ", emulation_code: "00000000", mark_of_conformity: "09", pan: "", postal_code: "10117", property: "1", record_number: "055270" }]
iex> Bundesbank.filter_by(:code, "20050000")
[%Bundesbank.Bank{bank_name: "Hamburg Commercial Bank", bic: "HSHNDEHHXXX", change_code: "U", city: "Hamburg", code: "20050000", delete_code: "0", description: "Hamburg Commercial Bank, ehemals HSH Nordbank Hamburg", emulation_code: "00000000", mark_of_conformity: "C5", pan: "52000", postal_code: "20095", property: "1", record_number: "011954"}]
iex> Bundesbank.filter_by(:city, "Berlin") |> Enum.count()
101
"""
def filter_by(key, value) do
Enum.filter(bundesbank(), fn bank ->
Map.get(bank, key)
|> normalize(value, key)
end)
end
defp normalize(attribute, value, key),
do: normalize(attribute, key) == normalize(value, key)
defp normalize(value, key) when is_integer(value),
do: value |> Integer.to_string() |> normalize(key)
# All binary values are normalized the same way (downcase, strip whitespace,
# pad to BIC length) so stored attributes and search values compare consistently.
defp normalize(value, _key) when is_binary(value),
do: value |> String.downcase() |> String.replace(~r/\s+/, "") |> String.pad_trailing(11, "x")
defp normalize(value, _), do: value
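# Worked example: normalize("GENODED1KDB", :bic) and normalize(" genoded1kdb ", :bic)
# both yield "genoded1kdb" (already 11 chars, so no padding), which makes lookups
# case- and whitespace-insensitive.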
@doc """
Checks if a bank for specific key and value exists.
Returns boolean
## Examples
iex> Bundesbank.exists?(:city, "New York")
false
iex> Bundesbank.exists?(:city, "Berlin")
true
"""
def exists?(key, value) do
filter_by(key, value) |> length > 0
end
# Load banks from csv file once on compile time
@bundesbank Bundesbank.Loader.load()
defp bundesbank do
@bundesbank
end
end
|
lib/bundesbank.ex
| 0.838548
| 0.880592
|
bundesbank.ex
|
starcoder
|
defmodule Expline.Spline do
alias Expline.Matrix
alias Expline.Vector
@closeness_threshold 1.0e-15
@moduledoc """
`Expline.Spline` is the module defining the struct and functions for
constructing cubic splines and interpolating values along them. It is the core
module and data structure for the library.
## Mathematics
Expline uses information from the
["Spline interpolation"](https://en.wikipedia.org/wiki/Spline_interpolation#Algorithm_to_find_the_interpolating_cubic_spline),
["Cholesky decomposition"](https://en.wikipedia.org/wiki/Cholesky_decomposition#The_Cholesky.E2.80.93Banachiewicz_and_Cholesky.E2.80.93Crout_algorithms),
and ["Triangular matrix"](https://en.wikipedia.org/wiki/Triangular_matrix#Forward_and_back_substitution)
Wikipedia pages.
Spline interpolation requires the curvatures at the spline's given points.
To determine the curvatures, a system of linear equations needs to be solved.
There are a number of ways cubic splines can extrapolate values beyond their
minimum and maximum. Currently, the only way Expline implements is "Natural
Spline". Others, such as "Clamped Spline" and "Not-A-Knot", may be
implemented depending on feedback from users.
For "Natural Splines", a case of splines where the ends of the spline have no
curvature, and are therefore can be linearly extrapolated, the matrix `X` in
the system `Xk = y` (where the solution `k` is the vector of curvatures for
the spline) is by definition Hermitian and positive-definite, a conclusion one
can make by analyzing the equations and enforcing certain conditions by
manipulating the input parameters. This allows us to generate the components
necessary for solving for the curvatures in a more specialized fashion, and in
the author's experience, a fairly simple fashion.
Cholesky decomposition is a special type of Matrix decomposition that applies
to Hermitian, positive-definite matrices. Its runtime is half of LU
decomposition, another popular matrix decomposition.
With the Cholesky decomposition, the procedure described
in the first paragraph of [the "Applications" section](https://en.wikipedia.org/wiki/Cholesky_decomposition#Applications)
of the "Cholesky decomposition" Wikipedia page is used to find the
curvatures.
## Performance
Given `n` points, the algorithm for building a spline is `O(n^3)` and
interpolating a value is `O(n)`.
"""
@typedoc "The Expline's internal representation of a cubic spline"
@opaque t :: %__MODULE__{
min: independent_value(),
max: independent_value(),
ranges: :ordsets.ordset(range()),
points: %{required(independent_value()) => dependent_value()},
derivatives: %{required(independent_value()) => curvature()}
}
@typedoc """
The method by which a spline's end conditions are evaluated.
This can have a significant effect on the runtime for the spline creation.
"Natural Spline", denoted by the `:natural_spline` extrapolation method, uses
a particular shape of system of linear equations be solved that is more
performant to solce than a more general system. Currently, it is also the
only extrapolation method implemented.
"""
@type extrapolation_method :: :natural_spline
@enforce_keys [:min, :max, :ranges, :points, :derivatives, :extrapolation_method]
defstruct [:min, :max, :ranges, :points, :derivatives, :extrapolation_method]
@typedoc """
The type used to denote a value that is independent and may be used to
interpolate a dependent value from `interpolate/2`, reference a point, or
determine the extrema of a spline or its ranges.
"""
@type independent_value() :: float()
@typedoc """
The type used to denote a value that depends on another, whether it is by
relation to an independent value in `t:point/0` or returned by
`interpolate/2`.
"""
@type dependent_value() :: float()
@typedoc """
The type used to denote the rate of change in the curvature of the function at a
given point.
"""
@type curvature() :: float()
@typedoc """
The type used to initialize a spline and returned from `Expline.interpolate/2`
and `Expline.interpolate/3`.
"""
@type point() :: {independent_value(), dependent_value()}
@typedoc """
The type used to denote the open range within which a point is interpolated.
"""
@type range() :: {independent_value(), independent_value()}
@typedoc """
The errors that can arise from improper input to `from_points/1`.
`:too_few_points` occurs when less than 3 points are used to create the
spline. If this error occurs, providing more points should mitigate the
issue. If you want to interpolate values between two points,
[Linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation)
may be of more interest, but is not in the interest of this library.
`{:range_too_small, point(), point()}` occurs when the distance between the
`t:independent_value/0` in each of at least two points are too close for the
virtual machine to determine a curvature between them. If this error occurs,
increasing the resolution between the provided points or dropping points that
are too close before input are both straightforward examples of mitigation
strategies.
"""
@type creation_error() ::
:too_few_points
| {:range_too_small, point(), point()}
@typedoc """
The errors that can arise from improper input to `interpolate/2`.
`:corrupt_extrema` occurs when the minimum/maximum is not greater/less than a
point that does not fall into a range in the spline.
`:corrupt_spline` occurs when the spline's internal information does not
conform to various mathematical invariants.
For either of these values, a
[bug report](https://github.com/isaacsanders/expline/issues)
may be needed, as in normal operation, neither should occur.
"""
@type interpolation_error() ::
:corrupt_extrema
| :corrupt_spline
@doc """
Create a spline from a list of floating point pairs (tuples).
This function bootstraps a spline and prepares it for use in the
`interpolate/2` function.
If fewer than 3 points are passed to the function, the function will
short-circuit and return `{:error, :too_few_points}`. This is a mathematical
constraint. A cubic spline cannot be built on fewer than 3 points.
Due to the constraints of Erlang and Elixir, there is a limit on how close
points can be. If points are too close to each other, this leads to a
situation where an error similar to dividing by zero occurs. In light of
this, there is a check that occurs and short-circuits the construction of the
spline. It returns `{:error, {:range_too_small, p1, p2}}` where `p1` and `p2`
are two `t:point/0`s that are too close together. If this error is
encountered, mitigating it is use-case specific. In order to find the points
that are too close, the following snippet may prove useful:
```
points
|> Enum.group_by(fn ({x, y}) -> Float.round(x, 15) end)
```
`Float.round/2` has a maximum precision of 15; while floating point numbers
appear to have higher precision, with representations carrying many more
digits, being able to reliably reconcile nearby points is what matters here.
If this becomes an issue, a smaller closeness threshold may be adopted, but
until then, the threshold for determining points that are too close is 1.0e-15,
the smallest value that cannot be rounded by `Float.round/2`.
## Examples
# A point that is too close
iex> Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}, {5.0e-324, 0.0}])
{:error, {:range_too_small, {0.0, 0.0}, {5.0e-324, 0.0}}}
# Not enough points
iex> Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}])
{:error, :too_few_points}
# Well-spaced, enough points
iex> Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}, {2.0, 2.0}])
{:ok, %Expline.Spline{derivatives: %{0.0 => 0.9999999999999999,
1.0 => 0.9999999999999999,
2.0 => 1.0000000000000002},
extrapolation_method: :natural_spline,
max: 2.0, min: 0.0,
points: %{0.0 => 0.0, 1.0 => 1.0, 2.0 => 2.0},
ranges: [{0.0, 1.0}, {1.0, 2.0}]}}
"""
@spec from_points(list(point())) ::
{:ok, t()}
| {:error, creation_error()}
def from_points(list_of_points) when length(list_of_points) >= 3 do
points = Map.new(list_of_points)
xs = Map.keys(points)
{min, max} = Enum.min_max(xs)
ranges = make_ranges(xs)
case Enum.find(ranges, &range_too_small?/1) do
{x1, x2} ->
y1 = Map.get(points, x1)
y2 = Map.get(points, x2)
{:error, {:range_too_small, {x1, y1}, {x2, y2}}}
nil ->
derivatives = make_derivatives(points)
spline = %__MODULE__{
min: min,
max: max,
ranges: :ordsets.from_list(ranges),
points: points,
derivatives: derivatives,
extrapolation_method: :natural_spline
}
{:ok, spline}
end
end
def from_points(list_of_points) when length(list_of_points) < 3,
do: {:error, :too_few_points}
@spec make_ranges(list(independent_value())) :: list(range())
defp make_ranges(xs) do
xs
|> Enum.sort()
|> Enum.chunk_every(2, 1, :discard)
|> Enum.map(&List.to_tuple/1)
end
@spec range_too_small?(range()) :: boolean()
defp range_too_small?({x1, x2}) do
abs(x1 - x2) <= @closeness_threshold
end
@doc """
Interpolate a value from the spline.
In regular usage, when the function is given a float and a spline, it will
return a tuple of `{:ok, float}` corresponding to the interpolated value of
dependent variable from the given value of the independent variable and the
spline.
If any of the invariants of the spline's internal representation are not
satisfied, then `{:error, :corrupt_spline}` will be returned. If this
happens, please report it, as that would be a sign that there is an issue with
the underlying data structures or algorithms used in the library.
## Examples
iex> with {:ok, spline} <- Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}, {2.0, 2.0}]),
...> do: Expline.Spline.interpolate(spline, -0.5)
{:ok, -0.49999999999999994}
iex> with {:ok, spline} <- Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}, {2.0, 2.0}]),
...> do: Expline.Spline.interpolate(spline, 0.5)
{:ok, 0.5}
iex> with {:ok, spline} <- Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}, {2.0, 2.0}]),
...> do: Expline.Spline.interpolate(spline, 1.5)
{:ok, 1.5}
iex> with {:ok, spline} <- Expline.Spline.from_points([{0.0, 0.0}, {1.0, 1.0}, {2.0, 2.0}]),
...> do: Expline.Spline.interpolate(spline, 2.5)
{:ok, 2.5}
"""
@spec interpolate(t(), independent_value()) ::
{:ok, dependent_value()}
| {:error, interpolation_error()}
| {:error, :corrupt_spline}
def interpolate(%__MODULE__{} = spline, x) when is_float(x) do
with :error <- Map.fetch(spline.points, x) do
case :ordsets.filter(fn {x1, x2} -> x1 < x and x < x2 end, spline.ranges) do
[{_x1, _x2} = range] ->
do_interpolate(spline, range, x)
[] ->
extrapolate(spline, x)
_ranges ->
{:error, :corrupt_spline}
end
end
end
@spec do_interpolate(t(), range(), independent_value()) :: {:ok, dependent_value()}
defp do_interpolate(%__MODULE__{} = spline, {x1, x2}, x) do
y1 = Map.get(spline.points, x1)
y2 = Map.get(spline.points, x2)
k1 = Map.get(spline.derivatives, x1)
k2 = Map.get(spline.derivatives, x2)
# Described by equations (1), (2), (3), and (4) on
# https://en.wikipedia.org/wiki/Spline_interpolation
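    # t normalizes x to [0, 1] within the range; a and b fold the endpoint
    # derivatives k1 and k2 into the cubic so it matches both endpoint values
    # and endpoint slopes.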
t = (x - x1) / (x2 - x1)
a = k1 * (x2 - x1) - (y2 - y1)
b = -k2 * (x2 - x1) + (y2 - y1)
y = (1 - t) * y1 + t * y2 + t * (1 - t) * (a * (1 - t) + b * t)
{:ok, y}
end
@spec extrapolate(t(), independent_value()) ::
{:ok, dependent_value()}
| {:error, :corrupt_extrema}
defp extrapolate(spline, x) do
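    # Natural-spline extrapolation is linear: extend from the nearest boundary
    # point using the derivative stored for that boundary.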
cond do
spline.min > x ->
min_curvature = Map.get(spline.derivatives, spline.min)
min_y = Map.get(spline.points, spline.min)
y = (x - spline.min) * min_curvature + min_y
{:ok, y}
spline.max < x ->
max_curvature = Map.get(spline.derivatives, spline.max)
max_y = Map.get(spline.points, spline.max)
y = (x - spline.max) * max_curvature + max_y
{:ok, y}
true ->
{:error, :corrupt_extrema}
end
end
@spec make_derivatives(%{required(independent_value()) => dependent_value()}) :: %{
required(independent_value()) => curvature()
}
defp make_derivatives(points) do
n = map_size(points) - 1
xs =
points
|> Map.keys()
|> Enum.sort()
[x0, x1] = Enum.take(xs, 2)
[y0, y1] = Enum.take(xs, 2) |> Enum.map(&Map.get(points, &1))
[xn_1, xn] = Enum.drop(xs, n - 1)
[yn_1, yn] = Enum.drop(xs, n - 1) |> Enum.map(&Map.get(points, &1))
# Described by equations (15), (16), and (17) on
# https://en.wikipedia.org/wiki/Spline_interpolation
system_of_eqns =
Expline.Matrix.construct(n + 1, n + 2, fn
# first row
0, 0 ->
2 / (x1 - x0)
0, 1 ->
1 / (x1 - x0)
# =
0, j when j == n + 1 ->
3.0 * ((y1 - y0) / :math.pow(x1 - x0, 2))
# last row
^n, j when j == n - 1 ->
1 / (xn - xn_1)
^n, ^n ->
2 / (xn - xn_1)
# =
^n, j when j == n + 1 ->
3.0 * ((yn - yn_1) / :math.pow(xn - xn_1, 2))
# middle rows
i, j when j == i - 1 ->
[xi_1, xi] = Enum.map(-1..0, fn offset -> Enum.at(xs, i + offset) end)
1.0 / (xi - xi_1)
i, i ->
[xi_1, xi, xi1] = Enum.map(-1..1, fn offset -> Enum.at(xs, i + offset) end)
2.0 * (1.0 / (xi - xi_1) + 1.0 / (xi1 - xi))
i, j when j == i + 1 ->
[xi, xi1] = Enum.map(0..1, fn offset -> Enum.at(xs, i + offset) end)
1.0 / (xi1 - xi)
# =
i, j when j == n + 1 ->
[xi_1, xi, xi1] = Enum.map(-1..1, fn offset -> Enum.at(xs, i + offset) end)
[yi_1, yi, yi1] =
[xi_1, xi, xi1]
|> Enum.map(&Map.get(points, &1))
3.0 *
((yi - yi_1) / :math.pow(xi - xi_1, 2) +
(yi1 - yi) / :math.pow(xi1 - xi, 2))
# empty terms
_i, _j ->
0.0
end)
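    # The resulting system is symmetric positive-definite, which is why it can
    # be solved with a Cholesky decomposition followed by forward and backward
    # substitution.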
with {:ok, {matrix, vector}} <- Matrix.disaugment(system_of_eqns),
{:ok, l} <- Matrix.cholesky_decomposition(matrix),
{:ok, y} <- Matrix.forward_substitution(l, vector),
{:ok, derivative_vector} <- l |> Matrix.transpose() |> Matrix.backward_substitution(y) do
Enum.zip(xs, Vector.to_list(derivative_vector)) |> Map.new()
end
end
end
|
lib/expline/spline.ex
| 0.945109
| 0.982641
|
spline.ex
|
starcoder
|
defmodule Cartel.HTTP do
@moduledoc false
alias Cartel.HTTP.{Request, Response}
@type t :: %__MODULE__{conn: Mint.HTTP.t() | nil}
@type scheme :: :http | :https
@type status :: integer()
@type host :: String.t()
@type method :: String.t()
@type url :: String.t()
@type body :: iodata() | nil
@type header :: {String.t(), String.t()}
@type headers :: [header()]
@typedoc """
HTTP request options
Available options are
- `follow_redirects`: If true, when an HTTP redirect is received a new request is made to the redirect URL, else the redirect is returned. Defaults to `true`
- `max_redirects`: Maximum number of redirects to follow, defaults to `10`
- `request_timeout`: Timeout for the request, defaults to `20 seconds`
- `query_params`: Enumerable containing key-value query parameters to add to the url
Defaults can be changed by setting values in the app configuration:
```elixir
config :cartel, :http,
max_redirects: 4,
request_timeout: 5_000
```
"""
@type options :: [
request_timeout: integer(),
max_redirects: integer(),
follow_redirects: boolean(),
query_params: Enum.t()
]
defstruct conn: nil
@doc """
Establish an HTTP connection
Returns the connection structure to use for subsequent requests.
"""
@spec connect(url, options) :: {:ok, t()} | {:error, term}
def connect(url, options \\ []) do
with %URI{scheme: scheme, host: host, port: port} <- URI.parse(url),
{:ok, conn} <-
scheme
|> String.downcase()
|> String.to_existing_atom()
|> Mint.HTTP.connect(host, port, options),
do: {:ok, %__MODULE__{conn: conn}}
end
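  # Illustrative usage (a sketch; the host is hypothetical):
  #
  #     {:ok, conn} = Cartel.HTTP.connect("https://api.example.com")
  #     {:ok, conn, response} = Cartel.HTTP.request(conn, "GET", "https://api.example.com/ping")
  #     :ok = Cartel.HTTP.close(conn)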
@doc """
Close an HTTP connection
"""
@spec close(t()) :: :ok | {:error, term}
def close(%{conn: conn}) do
with {:ok, _conn} <- Mint.HTTP.close(conn), do: :ok
end
@doc """
Performs an HTTP request
Returns the connection structure to use for subsequent requests.
"""
@spec request(t(), Request.t()) :: {:ok, t(), Response.t()} | {:error, term}
def request(connection, %Request{
method: method,
url: url,
headers: headers,
body: body,
options: options
}) do
request(connection, method, url, body, headers, options)
end
@doc """
Performs an HTTP request
Returns the connection structure to use for subsequent requests.
"""
@spec request(t, method, url, body, headers, options) ::
{:ok, t(), Response.t()} | {:error, term}
def request(connection, method, url, body \\ nil, headers \\ [], options \\ [])
def request(%__MODULE__{conn: nil}, method, url, body, headers, options) do
with {:ok, connection} <- connect(url, options),
do: request(connection, method, url, body, headers, options)
end
def request(%__MODULE__{conn: conn} = connection, method, url, body, headers, options) do
follow_redirects = get_option(options, :follow_redirects, true)
with %URI{path: path, query: query} <- URI.parse(url),
{:ok, conn, request_ref} <-
Mint.HTTP.request(
conn,
method,
process_request_url(path, query, options),
headers,
body
),
{:ok, conn, response} when conn != :error and not follow_redirects <-
receive_msg(conn, %Response{}, request_ref, options) do
{:ok, %{connection | conn: conn}, response}
else
{:ok, conn, %Response{status: status, headers: response_headers} = response} ->
case Enum.find(response_headers, fn {header, _value} -> header == "location" end) do
{_header, redirect_url} when follow_redirects and (status >= 300 and status < 400) ->
max_redirects = get_option(options, :max_redirects, 10)
redirect(
%{connection | conn: conn},
method,
URI.parse(url),
URI.parse(redirect_url),
body,
headers,
options,
max_redirects
)
_ ->
{:ok, %{connection | conn: conn}, response}
end
{:error, %Mint.TransportError{reason: reason}} ->
{:error, reason}
{:error, reason} ->
{:error, reason}
error ->
error
end
end
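  # Redirect handling below: a redirect URL without a scheme inherits the original
  # scheme; a redirect to a different scheme, host, or port opens a fresh
  # connection; otherwise the current connection is reused. `max_redirects` is
  # decremented on every hop, and exhausting it yields {:error, :too_many_redirects}.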
defp redirect(
_connection,
_method,
_original_url,
_redirect_url,
_body,
_headers,
_options,
max_redirects
)
when max_redirects == 0 do
{:error, :too_many_redirects}
end
defp redirect(
connection,
method,
%URI{scheme: original_scheme} = original_url,
%URI{scheme: redirect_scheme} = redirect_url,
body,
headers,
options,
max_redirects
)
when is_nil(redirect_scheme) do
redirect(
connection,
method,
original_url,
%{redirect_url | scheme: original_scheme},
body,
headers,
options,
max_redirects - 1
)
end
defp redirect(
_connection,
method,
%URI{scheme: original_scheme, host: original_host, port: original_port},
%URI{scheme: redirect_scheme, host: redirect_host, port: redirect_port} = redirect_url,
body,
headers,
options,
max_redirects
)
when redirect_scheme != original_scheme or
redirect_host != original_host or
redirect_port != original_port do
options = put_option(options, :max_redirects, max_redirects - 1)
request(%__MODULE__{}, method, URI.to_string(redirect_url), body, headers, options)
end
defp redirect(
connection,
method,
_original_url,
redirect_url,
body,
headers,
options,
max_redirects
) do
options = put_option(options, :max_redirects, max_redirects - 1)
request(connection, method, URI.to_string(redirect_url), body, headers, options)
end
defp receive_msg(conn, response, request_ref, options) do
socket = Mint.HTTP.get_socket(conn)
timeout = get_option(options, :request_timeout, 20_000)
receive do
{tag, ^socket, _data} = msg when tag in [:tcp, :ssl] ->
handle_msg(conn, request_ref, msg, response, options)
{tag, ^socket} = msg when tag in [:tcp_closed, :ssl_closed] ->
handle_msg(conn, request_ref, msg, response, options)
{tag, ^socket, _reason} = msg when tag in [:tcp_error, :ssl_error] ->
handle_msg(conn, request_ref, msg, response, options)
after
timeout ->
{:error, :timeout}
end
end
defp handle_msg(conn, request_ref, msg, response, options) do
with {:ok, conn, responses} <- Mint.HTTP.stream(conn, msg),
{:ok, conn, {response, true}} <-
handle_responses(conn, response, responses, request_ref) do
{:ok, conn, response}
else
:unknown ->
receive_msg(conn, response, request_ref, options)
{:error, _, %{reason: reason}, _} ->
{:error, reason}
{:ok, conn, {response, false}} ->
receive_msg(conn, response, request_ref, options)
end
end
defp handle_responses(conn, response, responses, request_ref) do
{response, complete} =
responses
|> Enum.reduce({response, false}, fn
{:status, ^request_ref, v}, {response, complete} ->
{%Response{response | status: v}, complete}
{:data, ^request_ref, v}, {%Response{body: body} = response, complete} ->
{%Response{response | body: [v | body]}, complete}
{:headers, ^request_ref, v}, {response, complete} ->
{%Response{response | headers: v}, complete}
{:done, ^request_ref}, {%Response{body: body} = response, _complete} ->
{%Response{response | body: Enum.reverse(body)}, true}
end)
{:ok, conn, {response, complete}}
end
defp get_option(options, option, default) do
default_value =
:cartel
|> Application.get_env(:http, [])
|> Keyword.get(option, default)
Keyword.get(options, option, default_value)
end
defp put_option(options, option, value) do
Keyword.put(options, option, value)
end
defp process_request_url(nil = _path, query, options),
do: process_request_url("/", query, options)
defp process_request_url(path, nil = _query, options),
do: process_request_url(path, "", options)
defp process_request_url(path, query, options) do
  query_params =
    options
    |> Keyword.get(:query_params, [])
    |> encode_query()
  # Join the existing query string and the extra params, skipping empty parts
  # so the URL does not end up with dangling "?" or "&" separators.
  case Enum.reject([query, query_params], &(&1 == "")) do
    [] -> path
    parts -> path <> "?" <> Enum.join(parts, "&")
  end
end
end
defp encode_query([]), do: ""
defp encode_query(%{}), do: ""
defp encode_query(query_params) do
query_params
|> URI.encode_query()
end
end
|
lib/cartel/http.ex
| 0.886623
| 0.600891
|
http.ex
|
starcoder
|
defmodule Grapex.Model.Operations do
require Axon
@doc """
Analyzes the provided parameters and, depending on the results, runs model testing on either the test subset or the validation subset of the corpus
"""
@spec test_or_validate({Grapex.Init, Axon, Map}) :: tuple # , list) :: tuple
def test_or_validate({%Grapex.Init{validate: should_run_validation, task: task, reverse: reverse, tester: tester} = params, model, model_state}) do # , opts \\ []) do
# IO.puts "Reverse: #{reverse}"
# reverse = Keyword.get(opts, :reverse, false)
case task do
:link_prediction ->
case should_run_validation do
true -> tester.validate({params, model, model_state}, reverse: reverse) # implement reverse option
false -> tester.test({params, model, model_state}, reverse: reverse)
end
_ -> raise "Task #{task} is not supported"
end
end
@doc """
Saves the trained model to an external file in an ONNX-compatible format
"""
def save({%Grapex.Init{output_path: output_path, remove: remove, is_imported: is_imported, verbose: verbose} = params, model, model_state}) do
case is_imported do
true ->
case verbose do
true -> IO.puts "The model was not saved because it was initialized from pre-trained tensors"
_ -> {:ok, nil}
end
_ ->
case remove do
true ->
case verbose do
true -> IO.puts "Trained model was not saved because the appropriate flag was provided"
_ -> {:ok, nil}
end
_ ->
File.mkdir_p!(Path.dirname(output_path))
model
|> AxonOnnx.Serialize.__export__(model_state, filename: output_path)
case verbose do
true -> IO.puts "Trained model is saved as #{output_path}"
_ -> {:ok, nil}
end
end
end
{params, model, model_state}
end
@doc """
Loads a model from an external file
"""
def load(%Grapex.Init{import_path: import_path} = params) do
[params | Tuple.to_list(AxonOnnx.Deserialize.__import__(import_path))]
|> List.to_tuple
end
@doc """
Analyzes the passed parameters object and, depending on the analysis results, either loads a trained model from an external file or trains it from scratch.
"""
@spec train_or_import(Grapex.Init) :: tuple
def train_or_import(%Grapex.Init{import_path: import_path, verbose: verbose, trainer: trainer} = params) do
IO.puts "Training model..."
if verbose do
IO.puts "Supported computational platforms:"
IO.inspect EXLA.NIF.get_supported_platforms()
IO.puts "Gpu client:"
IO.inspect EXLA.NIF.get_gpu_client(1.0, 0)
end
# IO.puts "Import path:"
# IO.puts import_path
case import_path do
nil ->
# trainer = Grapex.Init.get_trainer(params)
trainer.train(params)
_ ->
{params, model, state} = load(params)
{Grapex.Init.set_is_imported(params, true), model, state}
end
end
end
|
lib/grapex/models/operations.ex
| 0.693058
| 0.484441
|
operations.ex
|
starcoder
|
defmodule PolicrMiniBot.Disposable do
@moduledoc """
A service that guarantees one-time processing.
This module enforces a one-shot status for tasks that must only be invoked once in a
concurrent environment. When a guarantee is added a second time for the same key, it
returns either a processing or a done status, so it is possible to know before executing
a task whether it has already run. Statuses are cleaned up automatically once processing
completes or times out.
"""
use GenServer
@type unix_datetime :: integer
@type status_type :: :processing | :done
@type status_value :: {:processing, unix_datetime} | :done
@type second :: integer
def start_link(_opts) do
GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
end
@impl true
def init(state) do
schedule_clean()
{:ok, state}
end
@type key :: integer | binary
@doc """
Adds a one-time processing guarantee.
If the given key does not exist, its status is set to `{:processing, expired_unix}`
and `:ok` is returned, where `expired_unix` is an expiration timestamp computed from
the current time plus the timeout and is used by the cleanup task. Otherwise
`{:repeat, status}` is returned, where `status` is the already existing status.
*Note*: a timeout does not mean cleanup happens exactly on time, because the cleanup
task polls on a fixed interval.
"""
@spec processing(key, second) :: :ok | {:repeat, status_type}
def processing(key, timeout \\ 5) do
value = {:processing, now_unix() + timeout}
case GenServer.call(__MODULE__, {:get_and_put_new_status, key, value}) do
nil -> :ok
{:processing, _expired_unix} -> {:repeat, :processing}
:done -> {:repeat, :done}
end
end
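  # Illustrative usage (the key is hypothetical):
  #
  #     case PolicrMiniBot.Disposable.processing("verify:12345") do
  #       :ok ->
  #         # ... perform the task exactly once ...
  #         PolicrMiniBot.Disposable.done("verify:12345")
  #       {:repeat, _status} ->
  #         :ignored
  #     end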
@doc """
Marks a one-time processing guarantee as done.
Sets the status of the given key directly to `:done`. This status is eventually
deleted by the cleanup task; as with the `timeout` parameter of
`PolicrMiniBot.Disposable.processing/2`, cleanup is not immediate.
"""
@spec done(key) :: :ok
def done(key) do
GenServer.cast(__MODULE__, {:set_status, key, :done})
end
@impl true
def handle_call({:get_status, key}, _from, state) do
{:reply, Map.get(state, key), state}
end
# Get the status and put a new one only if none exists (existing statuses are not updated).
@impl true
def handle_call({:get_and_put_new_status, key, new_status}, _from, state) do
status = Map.get(state, key)
state =
if status == nil do
Map.put(state, key, new_status)
else
state
end
{:reply, status, state}
end
@impl true
def handle_cast({:set_status, key, status}, state) do
{:noreply, Map.put(state, key, status)}
end
# Run the cleanup task
@impl true
def handle_info(:clean, state) do
now_unix = now_unix()
state = state |> Enum.filter(&retain?(&1, now_unix)) |> Enum.into(%{})
schedule_clean()
{:noreply, state}
end
@spec now_unix() :: unix_datetime
defp now_unix(), do: DateTime.to_unix(DateTime.utc_now())
@spec retain?({key, status_value}, unix_datetime) :: boolean
defp retain?({_key, :done}, _now_unix), do: false
defp retain?({_key, {:processing, expired_unix}}, now_unix) when is_integer(expired_unix),
do: now_unix < expired_unix
# Clean up once per minute (this interval could be passed as an option when starting the GenServer)
@clean_sleep_time 1000 * 60
@spec schedule_clean() :: :ok
defp schedule_clean() do
Process.send_after(__MODULE__, :clean, @clean_sleep_time)
:ok
end
end
|
lib/policr_mini_bot/disposable.ex
| 0.551574
| 0.511412
|
disposable.ex
|
starcoder
|
defmodule V3Api.Cache do
@moduledoc """
Cache HTTP responses from the V3 API.
Static data such as schedules and stops do not change frequently. However,
we do want to check in with the API periodically to make sure we have the
most recent data. This module stores the previous HTTP responses, and can
return them if the server says that the data is unchanged.
"""
use GenServer
require Logger
alias HTTPoison.Response
@type url :: String.t()
@type params :: Enumerable.t()
def start_link(opts \\ []) do
opts = Keyword.put_new(opts, :name, __MODULE__)
GenServer.start_link(__MODULE__, opts, opts)
end
@doc """
Given a URL, parameters, and an HTTP response:
- If the HTTP response is a 304 Not Modified, return the previously cached response
- If the HTTP response is a 200, 400, or 404, cache it and return the response
- If the HTTP response is anything else, try to return a cached response, otherwise return the response as-is
"""
@spec cache_response(url, params, Response.t()) ::
{:ok, Response.t()} | {:error, :no_cached_response}
def cache_response(name \\ __MODULE__, url, params, response)
def cache_response(name, url, params, %{status_code: 304}) do
lookup_cached_response(name, url, params)
rescue
ArgumentError ->
{:error, :no_cached_response}
end
def cache_response(name, url, params, %{status_code: status_code} = response)
when status_code in [200, 400, 404] do
key = {url, params}
last_modified = header(response, "last-modified")
true = :ets.insert(name, {key, last_modified, response, now()})
{:ok, response}
end
def cache_response(name, url, params, response) do
lookup_cached_response(name, url, params)
rescue
ArgumentError ->
{:ok, response}
end
defp lookup_cached_response(name, url, params) do
key = {url, params}
element = :ets.lookup_element(name, key, 3)
:ets.update_element(name, key, {4, now()})
{:ok, element}
end
@doc """
Return a list of cache headers for the given URL/parameters.
"""
@spec cache_headers(url, params) :: [{String.t(), String.t()}]
def cache_headers(name \\ __MODULE__, url, params) do
last_modified = :ets.lookup_element(name, {url, params}, 2)
[{"if-modified-since", last_modified}]
rescue
ArgumentError ->
[]
end
defp header(%{headers: headers}, header) do
case Enum.find(headers, &(String.downcase(elem(&1, 0)) == header)) do
{_, value} -> value
nil -> nil
end
end
@doc "Expire the least-recently-used cache items"
@spec expire!() :: :ok
def expire!(name \\ __MODULE__) do
GenServer.call(name, :expire!)
end
@impl GenServer
def init(opts) do
name = Keyword.fetch!(opts, :name)
^name =
:ets.new(name, [
:set,
:named_table,
:public,
{:read_concurrency, true},
{:write_concurrency, true}
])
timeout = Keyword.get(opts, :timeout, 60_000)
Process.send_after(self(), :expire, timeout)
size = Keyword.get(opts, :size, Application.get_env(:v3_api, :cache_size))
{:ok, %{name: name, size: size, timeout: timeout}}
end
@impl GenServer
def handle_call(:expire!, _from, state) do
:ok = do_expire(state)
{:reply, :ok, state}
end
@impl GenServer
def handle_info(:expire, state) do
:ok = do_expire(state)
Process.send_after(self(), :expire, state.timeout)
{:noreply, state}
end
defp do_expire(%{name: name, size: size}) do
current_size = :ets.info(name, :size)
_ =
Logger.info(fn ->
"#{name} report - size=#{current_size} max_size=#{size} memory=#{:ets.info(name, :memory)}"
end)
if current_size > size do
# keep half of the cache, so that we don't bounce around clearing the
# cache each minute
keep = div(size, 2)
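      # Rows are matched out as [last_used, key] lists; sorting descending puts the
      # most-recently-used entries first, so everything past the first `keep`
      # entries gets deleted.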
name
|> :ets.match({:"$2", :_, :_, :"$1"})
|> Enum.sort(&>=/2)
|> Enum.drop(keep)
|> Enum.each(fn [_lru, key] -> :ets.delete(name, key) end)
else
:ok
end
end
defp now do
System.monotonic_time()
end
end
|
apps/v3_api/lib/cache.ex
| 0.779867
| 0.402099
|
cache.ex
|
starcoder
|
defmodule ExMatchers.Include do
@moduledoc false
import ExUnit.Assertions
import ExMatchers.Custom
defprotocol IncludeMatcher do
def to_match(value, key)
def to_match(value, key, expected_value)
def to_not_match(value, key)
def to_not_match(value, key, expected_value)
end
defimpl IncludeMatcher, for: BitString do
def to_match(value, substring) do
assert String.contains?(value, substring)
end
def to_match(value, substring, expected_value) do
flunk "Includes not supported from #{substring} in #{value} with #{expected_value}"
end
def to_not_match(value, substring) do
refute String.contains?(value, substring)
end
def to_not_match(value, substring, expected_value) do
flunk "Includes not supported from #{substring} in #{value} with #{expected_value}"
end
end
defimpl IncludeMatcher, for: List do
def to_match([{_k, _v} | _t] = value, key) do
assert Keyword.has_key?(value, key)
end
def to_match(list, element) do
assert Enum.member?(list, element)
end
def to_match([{_k, _v} | _t] = value, keys, expected_value) when is_list(keys) do
assert get_in(value, keys) == expected_value
end
def to_match([{_k, _v} | _t] = value, key, expected_value) do
assert value[key] == expected_value
end
def to_match(list, element, expected_value) do
flunk "Includes not supported from #{element} in #{list} with #{expected_value}"
end
def to_not_match([{_k, _v} | _t] = value, key) do
refute Keyword.has_key?(value, key)
end
def to_not_match(list, element) do
refute Enum.member?(list, element)
end
def to_not_match([{_k, _v} | _t] = value, keys, expected_value) when is_list(keys) do
refute get_in(value, keys) == expected_value
end
def to_not_match([{_k, _v} | _t] = value, key, expected_value) do
refute value[key] == expected_value
end
def to_not_match(list, element, expected_value) do
flunk "Includes not supported from #{element} in #{list} with #{expected_value}"
end
end
defimpl IncludeMatcher, for: Range do
def to_match(range, element) do
assert Enum.member?(range, element)
end
def to_match(range, element, expected_value) do
flunk "Includes not supported from #{element} in #{range} with #{expected_value}"
end
def to_not_match(range, element) do
refute Enum.member?(range, element)
end
def to_not_match(range, element, expected_value) do
flunk "Includes not supported from #{element} in #{range} with #{expected_value}"
end
end
defimpl IncludeMatcher, for: Tuple do
def to_match(tuple, element) do
assert Tuple.to_list(tuple) |> Enum.member?(element)
end
def to_match(tuple, element, expected_value) do
flunk "Includes not supported from #{element} in #{tuple} with #{expected_value}"
end
def to_not_match(tuple, element) do
refute Tuple.to_list(tuple) |> Enum.member?(element)
end
def to_not_match(tuple, element, expected_value) do
flunk "Includes not supported from #{element} in #{tuple} with #{expected_value}"
end
end
defimpl IncludeMatcher, for: Map do
def to_match(value, key) do
assert Map.has_key?(value, key)
end
def to_match(value, keys, expected_value) when is_list(keys) do
assert get_in(value, keys) == expected_value
end
def to_match(value, key, expected_value) do
assert value[key] == expected_value
end
def to_not_match(value, key) do
refute Map.has_key?(value, key)
end
def to_not_match(value, keys, expected_value) when is_list(keys) do
refute get_in(value, keys) == expected_value
end
def to_not_match(value, key, expected_value) do
refute value[key] == expected_value
end
end
defmatcher include(key), with: value, matcher: IncludeMatcher
end
|
lib/ex_matchers/include.ex
| 0.775392
| 0.810254
|
include.ex
|
starcoder
|
defmodule Exmorph.Time do
@doc """
Takes a string representing time and returns the integer
value for that time in nanoseconds.
## Examples
iex> Exmorph.Time.from_string("100000ms")
1.0e11
iex> Exmorph.Time.from_string("10s")
10000000000
iex> Exmorph.Time.from_string("3min")
180000000000
iex> Exmorph.Time.from_string("1hr")
3600000000000
"""
def from_string("infinity"), do: :infinity
def from_string(value) when is_bitstring(value) do
if Regex.match?(~r/((?:\d*\.)?\d+)(ms|s|m|h)/, value) do
parse_time(value)
|> to_nano
else
raise "Cannot parse duration #{value}."
end
end
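  # Note: the regex only names the short units (ms|s|m|h), but longer suffixes such
  # as "min" and "hr" still pass because their first character matches; parse_unit/1
  # then resolves the full suffix left over by Integer.parse/Float.parse.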
@doc """
Returns the current system time in nanoseconds.
"""
def now() do
:os.system_time(:nano_seconds)
end
@doc """
Takes a tuple with a duration as the first element and a unit
of time as the second. Returns the duration converted to
nanoseconds.
## Examples
iex> Exmorph.Time.to_nano({8_888, :milli_seconds})
8.888e9
iex> Exmorph.Time.to_nano({88, :seconds})
88000000000
iex> Exmorph.Time.to_nano({64, :minutes})
3840000000000
iex> Exmorph.Time.to_nano({4, :hours})
14400000000000
"""
def to_nano({time, :milli_seconds}) do
(time / 1_000) * 1_000_000_000
end
def to_nano({time, :seconds}) do
time * 1_000_000_000
end
def to_nano({time, :minutes}) do
time * 60 * 1_000_000_000
end
def to_nano({time, :hours}) do
time * 3600 * 1_000_000_000
end
def to_nano({time, _}), do: time
defp parse_time(value) when is_bitstring(value) do
cond do
String.contains?(value, ".") ->
{result, unit} = Float.parse(value)
{result, parse_unit(unit)}
true ->
{result, unit} = Integer.parse(value)
{result, parse_unit(unit)}
end
end
defp parse_unit(unit) do
case unit do
"ms" -> :milli_seconds
"msec" -> :milli_seconds
"s" -> :seconds
"sec" -> :seconds
"m" -> :minutes
"min" -> :minutes
"h" -> :hours
"hr" -> :hours
_ -> :unknown
end
end
end
|
lib/time.ex
| 0.837952
| 0.449332
|
time.ex
|
starcoder
|
defmodule LocationService do
@moduledoc """
Interacts with a service to perform geocoding, reverse geocoding and place lookups.
"""
use RepoCache, ttl: :timer.hours(24)
require Logger
@type result ::
{:ok, nonempty_list(LocationService.Address.t())}
| {:error, :zero_results | :internal_error}
@doc "Uses either AWS Location Service or Google Maps Place API to perform a
geocode lookup, selecting based on config value.
Caches the result using the input address as key."
@spec geocode(String.t()) :: result
def geocode(address) when is_binary(address) do
geocode_service = active_service(:geocode)
_ =
Logger.info(fn ->
"#{__MODULE__} geocode active_service=#{geocode_service} address=#{address}"
end)
cache(address, fn address ->
case geocode_service do
:aws -> AWSLocation.geocode(address)
_ -> GoogleMaps.Geocode.geocode(address)
end
end)
end
@doc "Uses either AWS Location Service or Google Maps Place API to perform a
geocode lookup, selecting based on config value.
Caches the result using the input address as key."
@spec reverse_geocode(number, number) :: result
def reverse_geocode(latitude, longitude) when is_float(latitude) and is_float(longitude) do
reverse_geocode_service = active_service(:reverse_geocode)
_ =
Logger.info(fn ->
"#{__MODULE__} reverse_geocode active_service=#{reverse_geocode_service} lat=#{latitude} lon=#{
longitude
}"
end)
cache({latitude, longitude}, fn {latitude, longitude} ->
case reverse_geocode_service do
:aws -> AWSLocation.reverse_geocode(latitude, longitude)
_ -> GoogleMaps.Geocode.reverse_geocode(latitude, longitude)
end
end)
end
@doc "Uses either AWS Location Service or Google Maps Place API to do
autocompletion, selecting based on config value."
@spec autocomplete(String.t(), number, String.t() | nil) :: LocationService.Suggestion.result()
def autocomplete(search, limit, token) do
autocomplete_service = active_service(:autocomplete)
_ =
Logger.info(fn ->
"#{__MODULE__} autocomplete active_service=#{autocomplete_service} search=#{search} limit=#{
limit
}"
end)
cache({search, limit}, fn {search, limit} ->
case autocomplete_service do
:aws -> AWSLocation.autocomplete(search, limit)
_ -> LocationService.Wrappers.google_autocomplete(search, limit, token)
end
end)
end
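  # Reads a {:system, env_var, default} tuple from the :location_service config;
  # when the environment variable is set, its value (as an atom) overrides the
  # default service.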
def active_service(key) do
{:system, env_var, default} = Application.get_env(:location_service, key)
if value = System.get_env(env_var), do: String.to_atom(value), else: default
end
end
|
apps/location_service/lib/location_service.ex
| 0.856962
| 0.637412
|
location_service.ex
|
starcoder
|
defmodule Medium do
@moduledoc """
Medium is an Elixir library that provides an interface to interact
with the Medium API.
## Installation
1. Add Medium to your list of dependencies in `mix.exs`:
def deps do
[{:medium_client, "~> #{Medium.Mixfile.project[:version]}"}]
end
2. Ensure Medium is started before your application:
def application do
[applications: [:medium_client]]
end
## Authorship and License
Medium is copyright 2016 <NAME> (http://roperzh.com)
Medium is released under the
[MIT License]
(https://github.com/roperzh/medium-sdk-elixir/blob/master/LICENSE).
"""
use Tesla
alias Medium.Helpers.Query
plug Medium.Middlewares.Codec
plug Tesla.Middleware.JSON
adapter Tesla.Adapter.Hackney
@base_url "https://api.medium.com/v1"
@auth_url "https://medium.com/m/oauth/authorize?"
@doc """
Generate an authorization URL
Parameters for the url must be provided as a map, with the following
required keys:
- `client_id`, String
- `scope`, List of String elements
- `state`, String
- `response_type`, String
- `redirect_uri`, String
For more details, please check the official [documentation]
(https://github.com/Medium/medium-api-docs#21-browser-based-authentication)
"""
def authorize_url(query) do
@auth_url <> Query.encode(query)
end
@doc """
Request for a token
Parameters for the url must be provided as a map, with the following
required keys:
- `code`, String generated with the url provided via `authorize_url/1`
- `client_id`, String
- `client_secret`, String
- `grant_type`, String
- `redirect_uri`, String
For more details, please check the official [documentation]
(https://github.com/Medium/medium-api-docs#21-browser-based-authentication)
"""
def get_token(query, url \\ @base_url) do
post url <> "/tokens", query
end
@doc """
Generate a client which will use a defined `token` for future requests
Optionally the client accepts a custom base url to be used for all request
this can be useful for future api versions and testing.
## Examples
client = Medium.client("my-access-token")
test_client = Medium.client("my-access-token", "http://localhost")
"""
def client(token, url \\ @base_url) do
Tesla.build_client [
{Tesla.Middleware.BaseUrl, url},
{Tesla.Middleware.Headers, %{"Authorization" => "Bearer #{token}"}}
]
end
@doc """
Returns details of the user who has granted permission to the application.
_Please check the official
[documentation](https://github.com/Medium/medium-api-docs#31-users)_
## Examples
user_info = Medium.client("token") |> Medium.me
user_info.username # => "roperzh"
user_info.image_url # => "https://images.medium.com/0*fkfQT7TlUGGyI.png"
"""
def me(client) do
get client, "/me"
end
@doc """
Returns a full list of publications that the user is related to in some way.
_Please check the official [documentation]
(https://github.com/Medium/medium-api-docs#listing-the-users-publications)_
## Examples
publications = Medium.client("token") |> Medium.publications("user_id")
"""
def publications(client, user_id) do
get client, "/users/#{user_id}/publications"
end
@doc """
Returns a list of contributors for a given publication.
_Please check the official [documentation]
(https://github.com/Medium/medium-api-docs#listing-the-users-publications)_
## Examples
contributors =
token
|> Medium.client
|> Medium.publications("publication_id")
"""
def contributors(client, publication_id) do
get client, "/publications/#{publication_id}/contributors"
end
@doc """
Creates a post on the authenticated user’s profile.
_Please check the official
[documentation](https://github.com/Medium/medium-api-docs#creating-a-post)_
## Examples
resp = Medium.client("token") |> Medium.publish("user_id", publication)
"""
def publish(client, author_id, publication) do
post client, "/users/#{author_id}/posts", publication
end
@doc """
This API allows creating a post and associating it with a
publication on Medium.
_Please check the official [documentation]
(http://github.com/Medium/medium-api-docs#creating-a-post-under-a-publication)_
## Examples
resp =
token
|> Medium.client
|> Medium.publish_comment("publication_id", publication_data)
"""
def publish_comment(client, publication_id, publication) do
post client, "/publications/#{publication_id}/posts", publication
end
end
|
lib/medium.ex
| 0.79909
| 0.47457
|
medium.ex
|
starcoder
|
defmodule Farmbot.Repo.Snapshot do
@moduledoc false
alias Farmbot.Repo.Snapshot
defmodule Diff do
@moduledoc false
defstruct [
additions: [],
deletions: [],
updates: [],
]
end
defstruct [data: [], hash: nil]
def diff(%Snapshot{} = old, %Snapshot{} = new) do
struct(Diff, [
additions: calculate_additions(old.data, new.data),
deletions: calculate_deletions(old.data, new.data),
updates: calculate_updates(old.data, new.data)
])
end
def diff(%Snapshot{} = data) do
struct(Diff, [
additions: calculate_additions([], data.data),
deletions: calculate_deletions([], data.data),
updates: []
])
end
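  # Illustrative diff (the %Point{} schema struct is hypothetical):
  #
  #     old = %Snapshot{data: [%Point{id: 1, x: 0}]}
  #     new = %Snapshot{data: [%Point{id: 1, x: 9}, %Point{id: 2, x: 0}]}
  #     diff(old, new)
  #     #=> %Diff{additions: [%Point{id: 2, x: 0}], deletions: [], updates: [%Point{id: 1, x: 9}]}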
defp calculate_additions(old, new) do
Enum.reduce(new, [], fn(new_object, acc) ->
maybe_old_object = Enum.find(old, fn(old_object) ->
is_correct_mod? = old_object.__struct__ == new_object.__struct__
is_correct_id? = old_object.id == new_object.id
is_correct_mod? && is_correct_id?
end)
if maybe_old_object do
acc
else
[new_object | acc]
end
end)
end
# We need all the items that are not in `new`, but were in `old`
defp calculate_deletions(old, new) do
Enum.reduce(old, [], fn(old_object, acc) ->
maybe_new_object = Enum.find(new, fn(new_object) ->
is_correct_mod? = old_object.__struct__ == new_object.__struct__
is_correct_id? = old_object.id == new_object.id
is_correct_mod? && is_correct_id?
end)
if maybe_new_object do
acc
else
[old_object | acc]
end
end)
end
# We need all items that weren't added, or deleted.
defp calculate_updates(old, new) do
index = fn(%{__struct__: mod, id: id} = data) ->
{{mod, id}, data}
end
old_index = Map.new(old, index)
new_index = Map.new(new, index)
a = Map.take(new_index, Map.keys(old_index))
Enum.reduce(a, [], fn({key, val}, acc) ->
if old_index[key] != val do
[val | acc]
else
acc
end
end)
end
def md5(%Snapshot{data: data} = snapshot) do
  hashes = Enum.map(data, &:crypto.hash(:md5, inspect(&1)))
  hash = :crypto.hash(:md5, hashes) |> Base.encode16()
  %{snapshot | hash: hash}
end
defimpl Inspect, for: Snapshot do
def inspect(%Snapshot{data: []}, _) do
"#Snapshot<[NULL]>"
end
def inspect(%Snapshot{hash: hash}, _) when is_binary(hash) do
"#Snapshot<#{hash}>"
end
end
end
|
lib/farmbot/repo/snapshot.ex
| 0.521471
| 0.448487
|
snapshot.ex
|
starcoder
|
defmodule Pulsar do
@moduledoc """
This is the client API for Pulsar.
Pulsar manages a simple text-mode dashboard of jobs.
Jobs can be updated at any time; updates appear *in place*.
When a job is updated, it will briefly be repainted in bold and/or bright text,
then be redrawn in standard text.
This is to draw attention to changes.
Completed jobs bubble up above any incomplete jobs.
Jobs may have a status, which drives font color. Normal jobs are in white.
Jobs with status `:ok` are in green.
Jobs with status `:error` are in red.
Note that the actual colors are driven by the configuration of your terminal.
Pulsar has no way to determine if other output is occurring.
Care should be taken that logging is redirected to a file.
Pulsar is appropriate for generally short-lived applications, such as command-line tools,
which can ensure that output, including logging, is directed away from the console.
"""
@app_name Pulsar.DashboardServer
@doc """
Creates a new job using the local server.
Returns a job tuple that may be passed to the other functions.
"""
def new_job() do
request_new_job(@app_name)
end
@doc """
Creates a new job using a remote server, from the `node` parameter.
"""
def new_job(node) do
request_new_job({@app_name, node})
end
@doc """
Given a previously created job, updates the message for the job.
This will cause the job's line in the dashboard to update, and will briefly be
highlighted.
Returns the job.
"""
def message(job, message) do
{process, jobid} = job
GenServer.cast(process, {:update, jobid, message})
job
end
@doc """
Completes a previously created job. No further updates to the job
should be sent.
Returns the job.
"""
def complete(job) do
  {process, jobid} = job
  GenServer.cast(process, {:complete, jobid})
  job
end
@doc """
Updates the status of the job.
`status` should be `:normal`, `:ok`, or `:error`.
Returns the job.
"""
def status(job, status) do
{process, jobid} = job
GenServer.cast(process, {:status, jobid, status})
job
end
@doc """
Pauses the dashboard.
The dashboard will clear itself when paused.
Console output can then be written.
Returns :ok, after the dashboard is cleared.
To restore normal behavior to the dashboard, invoke `resume/0`.
"""
def pause() do
GenServer.call(@app_name, :pause)
end
@doc """
Pauses the dashboard on the indicated node.
"""
def pause(node) do
GenServer.call({@app_name, node}, :pause)
end
@doc """
Resumes the dashboard after a `pause/0`.
"""
def resume() do
GenServer.cast(@app_name, :resume)
end
@doc """
Resumes the dashboard after a `pause/1`.
"""
def resume(node) do
GenServer.cast({@app_name, node}, :resume)
end
@doc """
Sets the prefix for the job; this immediately precedes the message.
Generally, the prefix provides a job with a title.
There is no separator between the prefix and the message; a prefix
typically ends with ": " or "- ".
Returns the job.
"""
def prefix(job, prefix) do
{process, jobid} = job
GenServer.cast(process, {:prefix, jobid, prefix})
job
end
defp request_new_job(server) do
process = GenServer.whereis(server)
{process, GenServer.call(process, :job)}
end
end
|
lib/pulsar.ex
| 0.846101
| 0.637581
|
pulsar.ex
|
starcoder
|
defmodule Ecto.ERD.Document do
@moduledoc false
alias Ecto.ERD.{Node, Field, Edge}
defstruct [:edges, :nodes, :clusters]
def map_nodes(%__MODULE__{nodes: nodes, edges: edges, clusters: []}, map_node_callback)
when is_function(map_node_callback, 1) do
{nodes, removed_nodes} =
Enum.flat_map_reduce(nodes, [], fn node, removed_nodes ->
case map_node_callback.(node) do
nil -> {[], [node | removed_nodes]}
node -> {[node], removed_nodes}
end
end)
clusters = Enum.group_by(nodes, & &1.cluster)
edges =
Enum.reject(edges, fn edge ->
Enum.any?(removed_nodes, fn node -> Edge.connected_with_node?(edge, node) end)
end)
{nodes, clusters} = Map.pop(clusters, nil)
%__MODULE__{
nodes: List.wrap(nodes),
clusters: clusters,
edges: edges
}
end
def new(modules) do
data =
modules
|> Enum.flat_map(fn module ->
association_components =
:associations
|> module.__schema__()
|> Enum.flat_map(fn assoc_field ->
from_reflection(module.__schema__(:association, assoc_field))
end)
embed_components =
:embeds
|> module.__schema__()
|> Enum.flat_map(fn embed_field ->
from_reflection(module.__schema__(:embed, embed_field))
end)
[Node.from_schema_module(module) | embed_components ++ association_components]
end)
|> Enum.group_by(fn
%Edge{} -> :edges
%Node{} -> :nodes
end)
%__MODULE__{
# multiple nodes could be generated by multiple schemas which use the same table in many to many relation
nodes: Enum.uniq(Map.get(data, :nodes, [])),
edges: merge_edges_with_same_direction(Map.get(data, :edges, [])),
clusters: []
}
end
defp merge_edges_with_same_direction(edges) do
edges
|> Enum.group_by(fn %Edge{from: from, to: to} -> {from, to} end)
|> Enum.map(fn {_direction, edges} -> Enum.reduce(edges, &Edge.merge/2) end)
end
defp from_reflection(%Ecto.Embedded{} = embedded) do
[
Edge.new(%{
from: {embedded.owner.__schema__(:source), embedded.owner, {:field, embedded.field}},
to: {embedded.related.__schema__(:source), embedded.related, {:header, :schema_module}},
assoc_types: [has: embedded.cardinality]
})
]
end
defp from_reflection(%Ecto.Association.BelongsTo{
owner: owner,
owner_key: owner_key,
related: related,
related_key: related_key
}) do
related_source = related.__schema__(:source)
owner_source = owner.__schema__(:source)
[
Edge.new(%{
from: {related_source, related, {:field, related_key}},
to: {owner_source, owner, {:field, owner_key}},
assoc_types: [:belongs_to]
})
]
end
defp from_reflection(%Ecto.Association.Has{
owner: owner,
owner_key: owner_key,
related: related,
related_key: related_key,
cardinality: cardinality
}) do
related_source = related.__schema__(:source)
owner_source = owner.__schema__(:source)
[
Edge.new(%{
from: {owner_source, owner, {:field, owner_key}},
to: {related_source, related, {:field, related_key}},
assoc_types: [has: cardinality]
})
]
end
defp from_reflection(%Ecto.Association.ManyToMany{
join_through: join_through,
owner: owner,
related: related,
join_keys: [{join_source_owner_fk, owner_pk}, {join_source_related_fk, related_pk}]
}) do
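    # A schemaless `join_through` (a plain source string) needs a synthetic node
    # carrying just the two foreign-key fields; a join schema module produces its
    # own node elsewhere, so in that case only the two has-many edges into the
    # join source are emitted.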
{join_module, join_source} =
case join_through do
value when is_atom(value) -> {value, value.__schema__(:source)}
value when is_binary(value) -> {nil, value}
end
nodes =
case join_module do
nil ->
fields = [
Field.new(join_source_owner_fk, owner.__schema__(:type, owner_pk)),
Field.new(join_source_related_fk, related.__schema__(:type, related_pk))
]
[Node.from_schemaless_join_source(join_source, fields)]
_join_module ->
[]
end
nodes ++
[
Edge.new(%{
from: {owner.__schema__(:source), owner, {:field, owner_pk}},
to: {join_source, join_module, {:field, join_source_owner_fk}},
assoc_types: [has: :many]
}),
Edge.new(%{
from: {related.__schema__(:source), related, {:field, related_pk}},
to: {join_source, join_module, {:field, join_source_related_fk}},
assoc_types: [has: :many]
})
]
end
defp from_reflection(%Ecto.Association.HasThrough{}) do
[]
end
end
|
lib/ecto/erd/document.ex
| 0.766119
| 0.424949
|
document.ex
|
starcoder
|
defprotocol Calendar.ContainsNaiveDateTime do
@doc """
Returns a Calendar.NaiveDateTime struct for the provided data
"""
def ndt_struct(data)
end
defmodule Calendar.NaiveDateTime do
require Calendar.DateTime.Format
@moduledoc """
NaiveDateTime represents a "naive time": a point in time without
a specified time zone.
"""
@doc """
Like `from_erl/2`, but returns the result directly without a tag.
Will raise if the date is invalid. Only use this if you are sure the date is valid.
## Examples
iex> from_erl!({{2014, 9, 26}, {17, 10, 20}})
%NaiveDateTime{day: 26, hour: 17, minute: 10, month: 9, second: 20, year: 2014}
iex from_erl!({{2014, 99, 99}, {17, 10, 20}})
# this will throw a MatchError
"""
def from_erl!(erl_date_time, microsecond \\ {0, 0}) do
{:ok, result} = from_erl(erl_date_time, microsecond)
result
end
@doc """
Takes an Erlang-style date-time tuple.
If the datetime is valid it returns a tuple with a tag and a naive DateTime.
Naive in this context means that it does not have any timezone data.
## Examples
iex>from_erl({{2014, 9, 26}, {17, 10, 20}})
{:ok, %NaiveDateTime{day: 26, hour: 17, minute: 10, month: 9, second: 20, year: 2014} }
iex>from_erl({{2014, 9, 26}, {17, 10, 20}}, 321321)
{:ok, %NaiveDateTime{day: 26, hour: 17, minute: 10, month: 9, second: 20, year: 2014, microsecond: {321321, 6}} }
# Invalid date
iex>from_erl({{2014, 99, 99}, {17, 10, 20}})
{:error, :invalid_datetime}
# Invalid time
iex>from_erl({{2014, 9, 26}, {17, 70, 20}})
{:error, :invalid_datetime}
"""
def from_erl(dt, microsecond \\ {0, 0})
def from_erl({{year, month, day}, {hour, min, sec}}, microsecond) when is_integer(microsecond) do
from_erl({{year, month, day}, {hour, min, sec}}, {microsecond, 6})
end
def from_erl({{year, month, day}, {hour, min, sec}}, microsecond) do
if validate_erl_datetime {{year, month, day}, {hour, min, sec}} do
{:ok, %NaiveDateTime{year: year, month: month, day: day, hour: hour, minute: min, second: sec, microsecond: microsecond}}
else
{:error, :invalid_datetime}
end
end
defp validate_erl_datetime({date, time}) do
{time_tag, _ } = Calendar.Time.from_erl(time)
:calendar.valid_date(date) && time_tag == :ok
end
@doc """
Takes a NaiveDateTime struct and returns an erlang style datetime tuple.
## Examples
iex> from_erl!({{2014, 10, 15}, {2, 37, 22}}) |> to_erl
{{2014, 10, 15}, {2, 37, 22}}
"""
def to_erl(%NaiveDateTime{year: year, month: month, day: day, hour: hour, minute: min, second: sec}) do
{{year, month, day}, {hour, min, sec}}
end
def to_erl(ndt) do
ndt |> contained_ndt |> to_erl
end
@doc """
Takes a NaiveDateTime struct and returns an Ecto style datetime tuple. This is
like an erlang style tuple, but with microseconds added as an additional
element in the time part of the tuple.
If the datetime has its microsecond field set to nil, 0 will be used for microsecond.
## Examples
iex> from_erl!({{2014,10,15},{2,37,22}}, {999999, 6}) |> Calendar.NaiveDateTime.to_micro_erl
{{2014, 10, 15}, {2, 37, 22, 999999}}
iex> from_erl!({{2014,10,15},{2,37,22}}, {0, 0}) |> Calendar.NaiveDateTime.to_micro_erl
{{2014, 10, 15}, {2, 37, 22, 0}}
"""
def to_micro_erl(%NaiveDateTime{year: year, month: month, day: day, hour: hour, minute: min, second: sec, microsecond: {0, _}}) do
{{year, month, day}, {hour, min, sec, 0}}
end
def to_micro_erl(%NaiveDateTime{year: year, month: month, day: day, hour: hour, minute: min, second: sec, microsecond: {microsecond, _}}) do
{{year, month, day}, {hour, min, sec, microsecond}}
end
def to_micro_erl(ndt) do
ndt |> contained_ndt |> to_micro_erl
end
@doc """
Takes a NaiveDateTime struct and returns a Date struct representing the date part
of the provided NaiveDateTime.
iex> from_erl!({{2014,10,15},{2,37,22}}) |> Calendar.NaiveDateTime.to_date
%Date{day: 15, month: 10, year: 2014}
"""
def to_date(ndt) do
ndt = ndt |> contained_ndt
%Date{year: ndt.year, month: ndt.month, day: ndt.day}
end
@doc """
Takes a NaiveDateTime struct and returns a Time struct representing the time part
of the provided NaiveDateTime.
iex> from_erl!({{2014,10,15},{2,37,22}}) |> Calendar.NaiveDateTime.to_time
%Time{microsecond: {0, 0}, hour: 2, minute: 37, second: 22}
"""
def to_time(ndt) do
ndt = ndt |> contained_ndt
%Time{hour: ndt.hour, minute: ndt.minute, second: ndt.second, microsecond: ndt.microsecond}
end
@doc """
For turning NaiveDateTime structs to into a DateTime.
Takes a NaiveDateTime and a timezone name. If timezone is valid, returns a tuple with an :ok and DateTime.
iex> from_erl!({{2014,10,15},{2,37,22}}) |> Calendar.NaiveDateTime.to_date_time("UTC")
{:ok, %DateTime{zone_abbr: "UTC", day: 15, microsecond: {0, 0}, hour: 2, minute: 37, month: 10, second: 22, std_offset: 0, time_zone: "UTC", utc_offset: 0, year: 2014}}
"""
def to_date_time(ndt, timezone) do
ndt = ndt |> contained_ndt
Calendar.DateTime.from_erl(to_erl(ndt), timezone, ndt.microsecond)
end
@doc """
Promote to DateTime with UTC time zone. Should only be used if you
are sure that the provided argument is in UTC.
Takes a NaiveDateTime. Returns a DateTime.
iex> from_erl!({{2014,10,15},{2,37,22}}) |> Calendar.NaiveDateTime.to_date_time_utc
%DateTime{zone_abbr: "UTC", day: 15, microsecond: {0, 0}, hour: 2, minute: 37, month: 10, second: 22, std_offset: 0, time_zone: "Etc/UTC", utc_offset: 0, year: 2014}
"""
def to_date_time_utc(ndt) do
ndt = ndt |> contained_ndt
{:ok, dt} = to_date_time(ndt, "Etc/UTC")
dt
end
@doc """
Create new NaiveDateTime struct based on a date and a time.
## Examples
iex> from_date_and_time({2016, 1, 8}, {14, 10, 55})
{:ok, %NaiveDateTime{day: 8, microsecond: {0, 0}, hour: 14, minute: 10, month: 1, second: 55, year: 2016}}
iex> from_date_and_time(Calendar.Date.Parse.iso8601!("2016-01-08"), {14, 10, 55})
{:ok, %NaiveDateTime{day: 8, microsecond: {0, 0}, hour: 14, minute: 10, month: 1, second: 55, year: 2016}}
"""
def from_date_and_time(date_container, time_container) do
contained_time = Calendar.ContainsTime.time_struct(time_container)
from_erl({Calendar.Date.to_erl(date_container), Calendar.Time.to_erl(contained_time)}, contained_time.microsecond)
end
@doc """
Like `from_date_and_time/2` but returns the result untagged.
Raises in case of an error.
## Example
iex> from_date_and_time!({2016, 1, 8}, {14, 10, 55})
%NaiveDateTime{day: 8, microsecond: {0, 0}, hour: 14, minute: 10, month: 1, second: 55, year: 2016}
"""
def from_date_and_time!(date_container, time_container) do
{:ok, result} = from_date_and_time(date_container, time_container)
result
end
@doc """
If you have a naive datetime and you know the offset, promote it to a
UTC DateTime.
## Examples
# A naive datetime at 2:37:22 with a 3600 second offset will return
# a UTC DateTime with the same date, but at 1:37:22
iex> with_offset_to_datetime_utc {{2014,10,15},{2,37,22}}, 3600
{:ok, %DateTime{zone_abbr: "UTC", day: 15, microsecond: {0, 0}, hour: 1, minute: 37, month: 10, second: 22, std_offset: 0, time_zone: "Etc/UTC", utc_offset: 0, year: 2014} }
iex> with_offset_to_datetime_utc {{2014,10,15},{2,37,22}}, 999_999_999_999_999_999_999_999_999
{:error, nil}
"""
def with_offset_to_datetime_utc(ndt, total_utc_offset) do
ndt = ndt |> contained_ndt
{tag, advanced_ndt} = ndt |> advance(total_utc_offset*-1)
case tag do
:ok -> to_date_time(advanced_ndt, "Etc/UTC")
_ -> {:error, nil}
end
end
@doc """
Takes a NaiveDateTime and an integer.
Returns the `naive_date_time` advanced by the number
of seconds found in the `seconds` argument.
If `seconds` is negative, the time is moved back.
## Examples
# Advance 2 seconds
iex> from_erl!({{2014,10,2},{0,29,10}}, 123456) |> add(2)
{:ok, %NaiveDateTime{day: 2, hour: 0, minute: 29, month: 10,
second: 12, microsecond: {123456, 6},
year: 2014}}
"""
def add(ndt, seconds), do: advance(ndt, seconds)
@doc """
Like `add/2`, but instead of returning a tuple with :ok and the result,
the result is returned untagged. Will raise an error in case
no correct result can be found based on the arguments.
## Examples
# Advance 2 seconds
iex> from_erl!({{2014,10,2},{0,29,10}}, 123456) |> add!(2)
%NaiveDateTime{day: 2, hour: 0, minute: 29, month: 10,
second: 12, microsecond: {123456, 6},
year: 2014}
"""
def add!(ndt, seconds), do: advance!(ndt, seconds)
def subtract(ndt, seconds), do: add(ndt, -1 * seconds)
def subtract!(ndt, seconds), do: add!(ndt, -1 * seconds)
@doc """
Deprecated version of `add/2`
"""
def advance(ndt, seconds) do
try do
ndt = ndt |> contained_ndt
greg_secs = ndt |> gregorian_seconds
advanced =
  (greg_secs + seconds)
  |> from_gregorian_seconds!(ndt.microsecond)
{:ok, advanced}
rescue
FunctionClauseError ->
{:error, :function_clause_error}
end
end
@doc """
Deprecated version of `add!/2`
"""
def advance!(ndt, seconds) do
ndt = ndt |> contained_ndt
{:ok, result} = advance(ndt, seconds)
result
end
@doc """
Takes a NaiveDateTime and returns an integer of gregorian seconds starting with
year 0. This is done via the Erlang calendar module.
## Examples
iex> from_erl!({{2014,9,26},{17,10,20}}) |> gregorian_seconds
63578970620
"""
def gregorian_seconds(ndt) do
ndt
|> contained_ndt
|> to_erl
|> :calendar.datetime_to_gregorian_seconds
end
@doc """
The difference between two naive datetimes. In seconds and microseconds.
Returns tuple with {:ok, seconds, microseconds, :before or :after or :same_time}
If the first argument is later (e.g. greater) the second, the result will be positive.
In case of a negative result the second element (seconds) will be negative. This is always
the case if both of the arguments have the microseconds as nil or 0. But if the difference
is less than a second and the result is negative, then the microseconds will be negative.
## Examples
# The first NaiveDateTime is 40 seconds after the second NaiveDateTime
iex> diff({{2014,10,2},{0,29,50}}, {{2014,10,2},{0,29,10}})
{:ok, 40, 0, :after}
# The first NaiveDateTime is 40 seconds before the second NaiveDateTime
iex> diff({{2014,10,2},{0,29,10}}, {{2014,10,2},{0,29,50}})
{:ok, -40, 0, :before}
iex> diff(from_erl!({{2014,10,2},{0,29,10}},{999999, 6}), from_erl!({{2014,10,2},{0,29,50}}))
{:ok, -39, 1, :before}
iex> diff(from_erl!({{2014,10,2},{0,29,10}},{999999, 6}), from_erl!({{2014,10,2},{0,29,11}}))
{:ok, 0, -1, :before}
iex> diff(from_erl!({{2014,10,2},{0,29,11}}), from_erl!({{2014,10,2},{0,29,10}},{999999, 6}))
{:ok, 0, 1, :after}
iex> diff(from_erl!({{2014,10,2},{0,29,11}}), from_erl!({{2014,10,2},{0,29,11}}))
{:ok, 0, 0, :same_time}
"""
def diff(%NaiveDateTime{} = first_dt, %NaiveDateTime{} = second_dt) do
first_dt_utc = first_dt |> to_date_time_utc
second_dt_utc = second_dt |> to_date_time_utc
Calendar.DateTime.diff(first_dt_utc, second_dt_utc)
end
def diff(ndt1, ndt2) do
diff(contained_ndt(ndt1), contained_ndt(ndt2))
end
@doc """
Takes two `NaiveDateTime`s and returns true if the first
one is greater than the second. Otherwise false. Greater than
means that it is later than the second datetime.
## Examples
iex> {{2014,1,1}, {10,10,10}} |> after?({{1999, 1, 1}, {11, 11, 11}})
true
iex> {{2014,1,1}, {10,10,10}} |> after?({{2020, 1, 1}, {11, 11, 11}})
false
iex> {{2014,1,1}, {10,10,10}} |> after?({{2014, 1, 1}, {10, 10, 10}})
false
"""
def after?(ndt1, ndt2) do
{_, _, _, comparison} = diff(ndt1, ndt2)
comparison == :after
end
@doc """
Takes two `NaiveDateTime`s and returns true if the first
one is less than the second. Otherwise false. Less than
means that it is earlier than the second datetime.
## Examples
iex> {{2014,1,1}, {10,10,10}} |> before?({{1999, 1, 1}, {11, 11, 11}})
false
iex> {{2014,1,1}, {10,10,10}} |> before?({{2020, 1, 1}, {11, 11, 11}})
true
iex> {{2014,1,1}, {10,10,10}} |> before?({{2014, 1, 1}, {10, 10, 10}})
false
"""
def before?(ndt1, ndt2) do
{_, _, _, comparison} = diff(ndt1, ndt2)
comparison == :before
end
@doc """
Takes two `NaiveDateTime`s and returns true if the first
is equal to the second one.
In this context equal means that they happen at the same time.
## Examples
iex> {{2014,1,1}, {10,10,10}} |> same_time?({{1999, 1, 1}, {11, 11, 11}})
false
iex> {{2014,1,1}, {10,10,10}} |> same_time?({{2020, 1, 1}, {11, 11, 11}})
false
iex> {{2014,1,1}, {10,10,10}} |> same_time?({{2014, 1, 1}, {10, 10, 10}})
true
"""
def same_time?(ndt1, ndt2) do
{_, _, _, comparison} = diff(ndt1, ndt2)
comparison == :same_time
end
defp from_gregorian_seconds!(gregorian_seconds, microsecond) do
gregorian_seconds
|> :calendar.gregorian_seconds_to_datetime()
|> from_erl!(microsecond)
end
defp contained_ndt(ndt_container) do
Calendar.ContainsNaiveDateTime.ndt_struct(ndt_container)
end
end
defimpl Calendar.ContainsNaiveDateTime, for: NaiveDateTime do
def ndt_struct(data), do: data
end
defimpl Calendar.ContainsNaiveDateTime, for: Calendar.DateTime do
def ndt_struct(data), do: data |> Calendar.DateTime.to_naive
end
defimpl Calendar.ContainsNaiveDateTime, for: Tuple do
def ndt_struct({{year, month, day}, {hour, min, sec}}) do
NaiveDateTime.from_erl!({{year, month, day}, {hour, min, sec}})
end
def ndt_struct({{year, month, day}, {hour, min, sec, microsecond}}) do
Calendar.NaiveDateTime.from_erl!({{year, month, day}, {hour, min, sec}}, microsecond)
end
end
defimpl Calendar.ContainsNaiveDateTime, for: DateTime do
def ndt_struct(%{calendar: Calendar.ISO}=data), do: %NaiveDateTime{day: data.day, month: data.month, year: data.year, hour: data.hour, minute: data.minute, second: data.second, microsecond: data.microsecond}
end
#defimpl Calendar.ContainsNaiveDateTime, for: NaiveDateTime do
# def ndt_struct(%{calendar: Calendar.ISO}=data), do: %NaiveDateTime{day: data.day, month: data.month, year: data.year, hour: data.hour, minute: data.minute, second: data.second, microsecond: data.microsecond}
#end
|
lib/calendar/naive_date_time.ex
| 0.896601
| 0.480601
|
naive_date_time.ex
|
starcoder
|
defmodule Matrax do
@moduledoc """
A matrix library in pure Elixir based on `atomics`.
[Erlang atomics documentation](http://erlang.org/doc/man/atomics.html)
Key features:
- **concurrent accessibility**: atomics are mutable and can be accessed from multiple processes
- **access-path-only transformations**: transformations like transpose change only the access path, so the same matrix can be worked on in multiple states by different processes at the same time
- **fast accessibility**: operations like `get/2` and `put/3` are very fast and based only on pure Elixir
## Examples
iex> matrax = Matrax.new(100, 100) # 100 x 100 matrix
iex> matrax |> Matrax.put({0, 0}, 10) # put 10 at position {0, 0}
iex> matrax |> Matrax.get({0, 0})
10
iex> matrax |> Matrax.add({0, 0}, 80)
iex> matrax |> Matrax.get({0, 0})
90
## Enumerable protocol
`Matrax` implements the Enumerable protocol, so all Enum functions can be used:
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.put({0, 0}, 8)
iex> matrax |> Enum.max()
8
iex> matrax |> Enum.member?(7)
false
"""
@compile {:inline,
position_to_index: 2,
do_position_to_index: 4,
index_to_position: 2,
do_index_to_position: 2,
count: 1,
put: 3,
get: 2}
@keys [:atomics, :rows, :columns, :min, :max, :signed, :changes]
@enforce_keys @keys
defstruct @keys
@type t :: %__MODULE__{
atomics: reference,
rows: pos_integer,
columns: pos_integer,
min: integer,
max: pos_integer,
signed: boolean,
changes: list
}
@type position :: {row :: non_neg_integer, col :: non_neg_integer}
@doc """
Converts an integer `list_of_lists` to a new `%Matrax{}` struct.
Same as `new/2` without options.
## Examples
iex> matrax = %Matrax{rows: 2, columns: 3} = Matrax.new([[1,2,3], [4, 5, 6]])
iex> matrax |> Matrax.to_list_of_lists
[[1,2,3], [4, 5, 6]]
"""
@spec new(list(list)) :: t
def new(list_of_lists) do
new(list_of_lists, [])
end
@doc """
Converts a `list_of_lists` to a new `%Matrax{}` struct.
## Options
* `:signed` - (boolean) to have signed or unsigned 64bit integers. Defaults to `true`.
## Examples
iex> matrax = %Matrax{rows: 2, columns: 3} = Matrax.new([[1,2,3], [4, 5, 6]], signed: false)
iex> matrax |> Matrax.to_list_of_lists
[[1,2,3], [4, 5, 6]]
iex> matrax |> Matrax.count
6
"""
@spec new(list(list), list) :: t
def new([first_list | _] = list_of_lists, options)
when is_list(list_of_lists) and is_list(options) do
rows = length(list_of_lists)
columns = length(first_list)
signed = Keyword.get(options, :signed, true)
atomics = :atomics.new(rows * columns, signed: signed)
list_of_lists
|> List.flatten()
|> Enum.reduce(1, fn value, index ->
:atomics.put(atomics, index, value)
index + 1
end)
%{min: min, max: max} = :atomics.info(atomics)
%Matrax{
atomics: atomics,
rows: rows,
columns: columns,
min: min,
max: max,
signed: signed,
changes: []
}
end
@doc """
Returns a new `%Matrax{}` struct with the given `rows` and `columns` size.
## Options
* `:seed_fun` - (function) a function to seed all positions. See `apply/2` for further information.
* `:signed` - (boolean) to have signed or unsigned 64bit integers. Defaults to `true`.
## Examples
Matrax.new(10, 5) # 10 x 5 matrix
Matrax.new(10, 5, signed: false) # unsigned integers
Matrax.new(10, 5, seed_fun: fn _, {row, col} -> row * col end) # seed values
"""
@spec new(pos_integer, pos_integer, list) :: t
def new(rows, columns, options \\ []) when is_integer(rows) and is_integer(columns) do
seed_fun = Keyword.get(options, :seed_fun, nil)
signed = Keyword.get(options, :signed, true)
atomics = :atomics.new(rows * columns, signed: signed)
%{min: min, max: max} = :atomics.info(atomics)
matrax = %Matrax{
atomics: atomics,
rows: rows,
columns: columns,
min: min,
max: max,
signed: signed,
changes: []
}
if seed_fun do
Matrax.apply(matrax, seed_fun)
end
matrax
end
@doc """
Create identity square matrix of given `size`.
## Examples
iex> Matrax.identity(5) |> Matrax.to_list_of_lists
[
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]
]
"""
@spec identity(pos_integer) :: t
def identity(size) when is_integer(size) and size > 0 do
new(
size,
size,
seed_fun: fn
_, {same, same} -> 1
_, {_, _} -> 0
end
)
end
@doc """
Returns a position tuple for the given atomics `index`.
Indices of atomics are 1-based.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> Matrax.index_to_position(matrax, 1)
{0, 0}
iex> Matrax.index_to_position(matrax, 10)
{0, 9}
"""
@spec index_to_position(t, pos_integer) :: position
def index_to_position(%Matrax{rows: rows, columns: columns}, index)
when is_integer(index) and index <= rows * columns do
do_index_to_position(columns, index)
end
defp do_index_to_position(columns, index) do
index = index - 1
{div(index, columns), rem(index, columns)}
end
@doc """
Returns atomics index corresponding to the position
tuple in the given `%Matrax{}` struct.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.position_to_index({1, 1})
12
iex> matrax |> Matrax.position_to_index({0, 4})
5
"""
@spec position_to_index(t, position) :: pos_integer
def position_to_index(%Matrax{rows: rows, columns: columns, changes: changes}, position) do
do_position_to_index(changes, rows, columns, position)
end
defp do_position_to_index([], rows, columns, {row, col})
when row >= 0 and row < rows and col >= 0 and col < columns do
row * columns + col + 1
end
defp do_position_to_index([:transpose | changes_tl], rows, columns, {row, col}) do
do_position_to_index(changes_tl, columns, rows, {col, row})
end
defp do_position_to_index(
[{:reshape, {old_rows, old_columns}} | changes_tl],
rows,
columns,
{row, col}
) do
current_index = do_position_to_index([], rows, columns, {row, col})
old_position = do_index_to_position(old_columns, current_index)
do_position_to_index(changes_tl, old_rows, old_columns, old_position)
end
defp do_position_to_index(
[
{:submatrix, {old_rows, old_columns}, row_from.._row_to, col_from.._col_to}
| changes_tl
],
_,
_,
{row, col}
) do
do_position_to_index(changes_tl, old_rows, old_columns, {row + row_from, col + col_from})
end
defp do_position_to_index(
[{:diagonal, {old_rows, old_columns}} | changes_tl],
1,
_,
{0, col}
) do
do_position_to_index(changes_tl, old_rows, old_columns, {col, col})
end
defp do_position_to_index([:flip_lr | changes_tl], rows, columns, {row, col}) do
do_position_to_index(changes_tl, rows, columns, {row, columns - 1 - col})
end
defp do_position_to_index([:flip_ud | changes_tl], rows, columns, {row, col}) do
do_position_to_index(changes_tl, rows, columns, {rows - 1 - row, col})
end
defp do_position_to_index(
[{:row, {old_rows, _old_columns}, row_index} | changes_tl],
1,
columns,
{0, col}
) do
do_position_to_index(changes_tl, old_rows, columns, {row_index, col})
end
defp do_position_to_index(
[{:column, {_old_rows, old_columns}, col_index} | changes_tl],
rows,
1,
{row, 0}
) do
do_position_to_index(changes_tl, rows, old_columns, {row, col_index})
end
defp do_position_to_index(
[{:drop_row, dropped_row_index} | changes_tl],
rows,
columns,
{row, col}
) do
row =
if row >= dropped_row_index do
row + 1
else
row
end
do_position_to_index(changes_tl, rows + 1, columns, {row, col})
end
defp do_position_to_index(
[{:drop_column, dropped_column_index} | changes_tl],
rows,
columns,
{row, col}
) do
col =
if col >= dropped_column_index do
col + 1
else
col
end
do_position_to_index(changes_tl, rows, columns + 1, {row, col})
end
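# Worked example of the access-path mechanism (added commentary): for a
# 2 x 3 matrix that was transposed, `changes == [:transpose]`, so position
# {2, 1} in the 3 x 2 view is rewritten to {1, 2} in the original 2 x 3
# layout before the flat atomics index is computed:
#
#     do_position_to_index([:transpose], 3, 2, {2, 1})
#     #=> do_position_to_index([], 2, 3, {1, 2})
#     #=> 1 * 3 + 2 + 1 = 6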
@doc """
Returns value at `position` from the given matrax.
## Examples
iex> matrax = Matrax.new(10, 10, seed_fun: fn _ -> 3 end)
iex> matrax |> Matrax.get({0, 5})
3
"""
@spec get(t, position) :: integer
def get(%Matrax{atomics: atomics} = matrax, position) do
index = position_to_index(matrax, position)
:atomics.get(atomics, index)
end
@doc """
Puts `value` into `matrax` at `position`.
Returns `:ok`
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.put({1, 3}, 5)
:ok
"""
@spec put(t, position, integer) :: :ok
def put(%Matrax{atomics: atomics} = matrax, position, value) when is_integer(value) do
index = position_to_index(matrax, position)
:atomics.put(atomics, index, value)
end
@doc """
Adds `incr` to atomic at `position`.
Returns `:ok`.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.add({0, 0}, 2)
:ok
iex> matrax |> Matrax.add({0, 0}, 2)
:ok
iex> matrax |> Matrax.get({0, 0})
4
"""
@spec add(t, position, integer) :: :ok
def add(%Matrax{atomics: atomics} = matrax, position, incr) when is_integer(incr) do
index = position_to_index(matrax, position)
:atomics.add(atomics, index, incr)
end
@doc """
Adds a list of matrices to `matrax`.
Size (rows, columns) of matrices must match.
Returns `:ok`.
## Examples
iex> matrax = Matrax.new(5, 5)
iex> matrax7 = Matrax.new(5, 5, seed_fun: fn _ -> 7 end)
iex> matrax |> Matrax.get({0, 0})
0
iex> matrax |> Matrax.add([matrax7, matrax7])
iex> matrax |> Matrax.get({0, 0})
14
iex> matrax |> Matrax.add([matrax7])
iex> matrax |> Matrax.get({0, 0})
21
"""
@spec add(t, list(t) | []) :: :ok
def add(%Matrax{}, []) do
:ok
end
def add(%Matrax{rows: rows, columns: columns} = matrax, [
%Matrax{rows: rows, columns: columns} = head | tail
]) do
for row <- 0..(rows - 1), col <- 0..(columns - 1) do
add(
matrax,
{row, col},
get(head, {row, col})
)
end
add(matrax, tail)
end
@doc """
Atomic addition and return of the result.
Adds `incr` to atomic at `position` and returns result.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.add_get({0, 0}, 2)
2
iex> matrax |> Matrax.add_get({0, 0}, 2)
4
"""
@spec add_get(t, position, integer) :: integer
def add_get(%Matrax{atomics: atomics} = matrax, position, incr) when is_integer(incr) do
index = position_to_index(matrax, position)
:atomics.add_get(atomics, index, incr)
end
@doc """
Subtracts `decr` from atomic at `position`.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.sub({0, 0}, 1)
:ok
iex> matrax |> Matrax.sub({0, 0}, 1)
:ok
iex> matrax |> Matrax.get({0, 0})
-2
"""
@spec sub(t, position, integer) :: :ok
def sub(%Matrax{atomics: atomics} = matrax, position, decr) when is_integer(decr) do
index = position_to_index(matrax, position)
:atomics.sub(atomics, index, decr)
end
@doc """
Subtracts a list of matrices from `matrax`.
Size (rows, columns) of matrices must match.
Returns `:ok`.
## Examples
iex> matrax = Matrax.new(5, 5)
iex> matrax7 = Matrax.new(5, 5, seed_fun: fn _ -> 7 end)
iex> matrax |> Matrax.get({0, 0})
0
iex> matrax |> Matrax.sub([matrax7, matrax7])
iex> matrax |> Matrax.get({0, 0})
-14
iex> matrax |> Matrax.sub([matrax7])
iex> matrax |> Matrax.get({0, 0})
-21
"""
@spec sub(t, list(t) | []) :: :ok
def sub(%Matrax{}, []) do
:ok
end
def sub(%Matrax{rows: rows, columns: columns} = matrax, [
%Matrax{rows: rows, columns: columns} = head | tail
]) do
for row <- 0..(rows - 1), col <- 0..(columns - 1) do
sub(
matrax,
{row, col},
get(head, {row, col})
)
end
sub(matrax, tail)
end
@doc """
Atomic subtraction and return of the result.
Subtracts `decr` from atomic at `position` and returns result.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.sub_get({0, 0}, 2)
-2
iex> matrax |> Matrax.sub_get({0, 0}, 2)
-4
"""
@spec sub_get(t, position, integer) :: integer
def sub_get(%Matrax{atomics: atomics} = matrax, position, decr) when is_integer(decr) do
index = position_to_index(matrax, position)
:atomics.sub_get(atomics, index, decr)
end
@doc """
Atomically compares the value at `position` with `expected`,
and if they are equal, sets the value at `position` to `desired`.
Returns `:ok` if `desired` was written.
Otherwise returns the actual value at `position`, which did not equal `expected`.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.compare_exchange({0, 0}, 0, -10)
:ok
iex> matrax |> Matrax.compare_exchange({0, 0}, 3, 10)
-10
"""
@spec compare_exchange(t, position, integer, integer) :: :ok | integer
def compare_exchange(%Matrax{atomics: atomics} = matrax, position, expected, desired)
when is_integer(expected) and is_integer(desired) do
index = position_to_index(matrax, position)
:atomics.compare_exchange(atomics, index, expected, desired)
end
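# Illustrative sketch (not part of the original API; the name
# `put_max_example` is hypothetical): a lock-free "store the maximum"
# loop built on `compare_exchange/4`, using the standard CAS retry idiom.
def put_max_example(%Matrax{} = matrax, position, value) when is_integer(value) do
expected = get(matrax, position)
if value > expected do
case compare_exchange(matrax, position, expected, value) do
:ok ->
:ok
_actual ->
# Another process won the race; retry against the fresh value.
put_max_example(matrax, position, value)
end
else
:ok
end
end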
@doc """
Atomically replaces value at `position` with `value` and
returns the value it had before.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.exchange({0, 0}, -10)
0
iex> matrax |> Matrax.exchange({0, 0}, -15)
-10
"""
@spec exchange(t, position, integer) :: integer
def exchange(%Matrax{atomics: atomics} = matrax, position, value)
when is_integer(value) do
index = position_to_index(matrax, position)
:atomics.exchange(atomics, index, value)
end
@doc """
Returns count of values (rows * columns).
## Examples
iex> matrax = Matrax.new(5, 5)
iex> Matrax.count(matrax)
25
"""
@spec count(t) :: pos_integer
def count(%Matrax{rows: rows, columns: columns}) do
rows * columns
end
@doc """
Returns smallest integer in `matrax`.
## Examples
iex> matrax = Matrax.new(10, 10, seed_fun: fn _ -> 7 end)
iex> matrax |> Matrax.min()
7
"""
@spec min(t) :: integer
def min(%Matrax{} = matrax) do
{min_value, _position} = do_argmin(matrax)
min_value
end
@doc """
Returns largest integer in `matrax`.
## Examples
iex> matrax = Matrax.new(10, 10, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.max()
81
iex> Matrax.new(5, 5) |> Matrax.max()
0
"""
@spec max(t) :: integer
def max(%Matrax{} = matrax) do
{max_value, _position} = do_argmax(matrax)
max_value
end
@doc """
Returns sum of integers in `matrax`.
## Examples
iex> matrax = Matrax.new(10, 10, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.sum()
2025
iex> Matrax.new(5, 5, seed_fun: fn _ -> 1 end) |> Matrax.sum()
25
"""
@spec sum(t) :: integer
def sum(%Matrax{} = matrax) do
last_index = count(matrax)
do_sum(matrax, last_index, 0)
end
defp do_sum(_, 0, acc), do: acc
defp do_sum(matrax, index, acc) do
position = index_to_position(matrax, index)
do_sum(matrax, index - 1, acc + get(matrax, position))
end
@doc """
Applies the given `fun` function to all elements of `matrax`.
If arity of `fun` is 1 it receives the integer as single argument.
If arity of `fun` is 2 it receives the integer as first and
position tuple as the second argument.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.apply(fn int -> int + 2 end)
iex> matrax |> Matrax.get({0, 0})
2
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.apply(fn _int, {row, col} -> row * col end)
iex> matrax |> Matrax.get({9, 9})
81
"""
@spec apply(t, (integer -> integer) | (integer, position -> integer)) :: :ok
def apply(%Matrax{} = matrax, fun) when is_function(fun, 1) or is_function(fun, 2) do
fun_arity = Function.info(fun)[:arity]
do_apply(matrax, count(matrax), fun_arity, fun)
end
defp do_apply(_, 0, _, _), do: :ok
defp do_apply(%Matrax{} = matrax, index, fun_arity, fun) do
position = index_to_position(matrax, index)
value =
case fun_arity do
1 -> fun.(get(matrax, position))
2 -> fun.(get(matrax, position), position)
end
put(matrax, position, value)
do_apply(matrax, index - 1, fun_arity, fun)
end
@doc """
Converts `%Matrax{}` to a flat list.
## Examples
iex> matrax = Matrax.new(3, 3, seed_fun: fn _, {row, col} -> row * col end)
iex> Matrax.to_list(matrax)
[0, 0, 0, 0, 1, 2, 0, 2, 4]
"""
@spec to_list(t) :: list(integer)
def to_list(%Matrax{rows: rows, columns: columns} = matrax) do
for row <- 0..(rows - 1), col <- 0..(columns - 1) do
get(matrax, {row, col})
end
end
@doc """
Converts `%Matrax{}` to list of lists.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> Matrax.to_list_of_lists(matrax)
[
[0, 0, 0, 0, 0],
[0, 1, 2, 3, 4],
[0, 2, 4, 6, 8],
[0, 3, 6, 9, 12],
[0, 4, 8, 12, 16]
]
"""
@spec to_list_of_lists(t) :: list(list(integer))
def to_list_of_lists(%Matrax{rows: rows, columns: columns} = matrax) do
for row <- 0..(rows - 1) do
for col <- 0..(columns - 1) do
get(matrax, {row, col})
end
end
end
@doc """
Converts row at given row index of `%Matrax{}` to list.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.row_to_list(2)
[0, 2, 4, 6, 8]
"""
@spec row_to_list(t, non_neg_integer) :: list(integer)
def row_to_list(%Matrax{rows: rows, columns: columns} = matrax, row)
when row in 0..(rows - 1) do
for col <- 0..(columns - 1) do
get(matrax, {row, col})
end
end
@doc """
Converts column at given column index of `%Matrax{}` to list.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.column_to_list(2)
[0, 2, 4, 6, 8]
"""
@spec column_to_list(t, non_neg_integer) :: list(integer)
def column_to_list(%Matrax{rows: rows, columns: columns} = matrax, col)
when col in 0..(columns - 1) do
for row <- 0..(rows - 1) do
get(matrax, {row, col})
end
end
@doc """
Only modifies the struct, it doesn't move or mutate data.
Reduces matrix to only one row at given `row` index.
After `row/2` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, _col} -> row end)
iex> matrax |> Matrax.row(4) |> Matrax.to_list_of_lists
[[4, 4, 4, 4, 4]]
"""
@spec row(t, non_neg_integer) :: t
def row(%Matrax{rows: rows, columns: columns, changes: changes} = matrax, row)
when row in 0..(rows - 1) do
%Matrax{matrax | rows: 1, changes: [{:row, {rows, columns}, row} | changes]}
end
@doc """
Only modifies the struct, it doesn't move or mutate data.
Reduces matrix to only one column at given `column` index.
After `column/2` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {_row, col} -> col end)
iex> matrax |> Matrax.column(4) |> Matrax.to_list_of_lists
[[4], [4], [4], [4], [4]]
"""
@spec column(t, non_neg_integer) :: t
def column(%Matrax{rows: rows, columns: columns, changes: changes} = matrax, column)
when column in 0..(columns - 1) do
%Matrax{matrax | columns: 1, changes: [{:column, {rows, columns}, column} | changes]}
end
@doc """
Checks if `value` exists within `matrax`.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.member?(6)
true
iex> matrax |> Matrax.member?(100)
false
"""
@spec member?(t, integer) :: boolean
def member?(%Matrax{} = matrax, value) when is_integer(value) do
!!find(matrax, value)
end
@doc """
Returns a `%Matrax{}` struct with a new atomics reference
and positional values identical to the given `matrax`.
The returned copy is always `changes: []` so this
can be used to finish the access-path only changes
by the `transpose/1`, `submatrix/3`, `reshape/3` functions.
## Examples
iex> matrax = Matrax.new(10, 10)
iex> matrax |> Matrax.put({0, 0}, -9)
iex> matrax2 = Matrax.copy(matrax)
iex> Matrax.get(matrax2, {0, 0})
-9
"""
@spec copy(t) :: t
def copy(%Matrax{atomics: atomics, changes: changes, signed: signed, columns: columns} = matrax) do
size = count(matrax)
new_atomics_ref = :atomics.new(size, signed: signed)
case changes do
[] -> do_copy(size, atomics, new_atomics_ref)
[_ | _] -> do_copy(size, matrax, new_atomics_ref, columns)
end
%Matrax{matrax | atomics: new_atomics_ref, changes: []}
end
defp do_copy(0, _, _) do
:done
end
defp do_copy(index, atomics, new_atomics_ref) do
:atomics.put(new_atomics_ref, index, :atomics.get(atomics, index))
do_copy(index - 1, atomics, new_atomics_ref)
end
defp do_copy(0, _, _, _) do
:done
end
defp do_copy(index, matrax, new_atomics_ref, columns) do
value = get(matrax, {div(index - 1, columns), rem(index - 1, columns)})
:atomics.put(new_atomics_ref, index, value)
do_copy(index - 1, matrax, new_atomics_ref, columns)
end
@doc """
Only modifies the struct, it doesn't move or mutate data.
After `transpose/1` the access path to positions
will be modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(7, 4, seed_fun: fn _, {row, col} -> row + col end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 1, 2, 3],
[1, 2, 3, 4],
[2, 3, 4, 5],
[3, 4, 5, 6],
[4, 5, 6, 7],
[5, 6, 7, 8],
[6, 7, 8, 9]
]
iex> matrax |> Matrax.transpose() |> Matrax.to_list_of_lists()
[
[0, 1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6, 7],
[2, 3, 4, 5, 6, 7, 8],
[3, 4, 5, 6, 7, 8, 9]
]
"""
@spec transpose(t) :: t
def transpose(
%Matrax{rows: rows, columns: columns, changes: [:transpose | changes_tl]} = matrax
) do
%Matrax{
matrax
| rows: columns,
columns: rows,
changes: changes_tl
}
end
def transpose(%Matrax{rows: rows, columns: columns, changes: changes} = matrax) do
%Matrax{
matrax
| rows: columns,
columns: rows,
changes: [:transpose | changes]
}
end
@doc """
Only modifies the struct, it doesn't move or mutate data.
After `diagonal/1` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.identity(5)
iex> matrax |> Matrax.diagonal() |> Matrax.to_list_of_lists
[[1, 1, 1, 1, 1]]
"""
@spec diagonal(t) :: t
def diagonal(%Matrax{rows: rows, columns: columns, changes: changes} = matrax) do
%Matrax{
matrax
| rows: 1,
columns: rows,
changes: [{:diagonal, {rows, columns}} | changes]
}
end
@doc """
Only modifies the struct, it doesn't move or mutate data.
Ranges are inclusive.
After `submatrix/3` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(7, 4, seed_fun: fn _, {row, col} -> row + col end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 1, 2, 3],
[1, 2, 3, 4],
[2, 3, 4, 5],
[3, 4, 5, 6],
[4, 5, 6, 7],
[5, 6, 7, 8],
[6, 7, 8, 9]
]
iex> matrax |> Matrax.submatrix(5..6, 1..3) |> Matrax.to_list_of_lists()
[
[6, 7, 8],
[7, 8, 9]
]
"""
@spec submatrix(t, Range.t(), Range.t()) :: t
def submatrix(
%Matrax{rows: rows, columns: columns, changes: changes} = matrax,
row_from..row_to = row_range,
col_from..col_to = col_range
)
when row_from in 0..(rows - 1) and row_to in row_from..(rows - 1) and
col_from in 0..(columns - 1) and col_to in col_from..(columns - 1) do
submatrix_rows = row_to + 1 - row_from
submatrix_columns = col_to + 1 - col_from
%Matrax{
matrax
| rows: submatrix_rows,
columns: submatrix_columns,
changes: [{:submatrix, {rows, columns}, row_range, col_range} | changes]
}
end
@doc """
Returns position tuple of biggest value.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.argmax()
{4, 4}
iex> matrax = Matrax.new(5, 5) # all zeros
iex> matrax |> Matrax.argmax()
{0, 0}
"""
@spec argmax(t) :: position
def argmax(%Matrax{} = matrax) do
{_, position} = do_argmax(matrax)
position
end
defp do_argmax(matrax) do
acc = {get(matrax, {0, 0}), {0, 0}}
do_argmax(matrax, 1, count(matrax), acc)
end
defp do_argmax(_, same, same, acc) do
acc
end
defp do_argmax(matrax, index, size, {acc_value, _acc_position} = acc) do
next_index = index + 1
position = index_to_position(matrax, next_index)
value_at_index = get(matrax, position)
do_argmax(
matrax,
next_index,
size,
case Kernel.max(acc_value, value_at_index) do
^acc_value -> acc
_else -> {value_at_index, position}
end
)
end
@doc """
Returns position tuple of smallest value.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.argmin()
{0, 0}
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> -(row * col) end)
iex> matrax |> Matrax.argmin()
{4, 4}
"""
@spec argmin(t) :: position
def argmin(%Matrax{} = matrax) do
{_, position} = do_argmin(matrax)
position
end
defp do_argmin(matrax) do
acc = {get(matrax, {0, 0}), {0, 0}}
do_argmin(matrax, 1, count(matrax), acc)
end
defp do_argmin(_, same, same, acc) do
acc
end
defp do_argmin(matrax, index, size, {acc_value, _acc_position} = acc) do
next_index = index + 1
position = index_to_position(matrax, next_index)
value_at_index = get(matrax, position)
do_argmin(
matrax,
next_index,
size,
case Kernel.min(acc_value, value_at_index) do
^acc_value -> acc
_else -> {value_at_index, position}
end
)
end
@doc """
Reshapes `matrax` to the given `rows` & `cols`.
After `reshape/3` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(4, 3, seed_fun: fn _, {_row, col} -> col end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
iex> matrax |> Matrax.reshape(2, 6) |> Matrax.to_list_of_lists()
[
[0, 1, 2, 0, 1, 2],
[0, 1, 2, 0, 1, 2]
]
"""
@spec reshape(t, pos_integer, pos_integer) :: t
def reshape(
%Matrax{changes: [{:reshape, {rows, columns}} | changes_tl]} = matrax,
desired_rows,
desired_columns
) do
reshape(
%Matrax{matrax | rows: rows, columns: columns, changes: changes_tl},
desired_rows,
desired_columns
)
end
def reshape(
%Matrax{rows: rows, columns: columns, changes: changes} = matrax,
desired_rows,
desired_columns
)
when rows * columns == desired_rows * desired_columns do
%Matrax{
matrax
| rows: desired_rows,
columns: desired_columns,
changes: [{:reshape, {rows, columns}} | changes]
}
end
@doc """
Returns the position of the first occurrence of the given `value`
or `nil` if nothing was found.
## Examples
iex> Matrax.new(5, 5) |> Matrax.find(0)
{0, 0}
iex> matrax = Matrax.new(5, 5, seed_fun: fn _, {row, col} -> row * col end)
iex> matrax |> Matrax.find(16)
{4, 4}
iex> matrax |> Matrax.find(42)
nil
"""
@spec find(t, integer) :: position | nil
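# Note (added commentary): `min`/`max` come from `:atomics.info/1` and are the
# representable bounds of the 64-bit array, so values outside that range can
# be rejected without scanning the matrix.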
def find(%Matrax{min: min, max: max} = matrax, value) when is_integer(value) do
case value do
v when v < min or v > max ->
nil
_else ->
do_find(matrax, 1, count(matrax) + 1, value)
end
end
defp do_find(_, same, same, _) do
nil
end
defp do_find(matrax, index, one_over_last_index, value) do
position = index_to_position(matrax, index)
case get(matrax, position) do
^value -> position
_else -> do_find(matrax, index + 1, one_over_last_index, value)
end
end
@doc """
Flip columns of matrix in the left-right direction (vertical axis).
After `flip_lr/1` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(3, 4, seed_fun: fn _, {_row, col} -> col end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 1, 2, 3],
[0, 1, 2, 3],
[0, 1, 2, 3]
]
iex> matrax |> Matrax.flip_lr() |> Matrax.to_list_of_lists()
[
[3, 2, 1, 0],
[3, 2, 1, 0],
[3, 2, 1, 0]
]
"""
@spec flip_lr(t) :: t
def flip_lr(%Matrax{changes: [:flip_lr | changes_tl]} = matrax) do
%Matrax{matrax | changes: changes_tl}
end
def flip_lr(%Matrax{changes: changes} = matrax) do
%Matrax{matrax | changes: [:flip_lr | changes]}
end
@doc """
Flip rows of matrix in the up-down direction (horizontal axis).
After `flip_ud/1` the access path to positions will be
modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(3, 4, seed_fun: fn _, {row, _col} -> row end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 2, 2]
]
iex> matrax |> Matrax.flip_ud() |> Matrax.to_list_of_lists()
[
[2, 2, 2, 2],
[1, 1, 1, 1],
[0, 0, 0, 0]
]
"""
@spec flip_ud(t) :: t
def flip_ud(%Matrax{changes: [:flip_ud | changes_tl]} = matrax) do
%Matrax{matrax | changes: changes_tl}
end
def flip_ud(%Matrax{changes: changes} = matrax) do
%Matrax{matrax | changes: [:flip_ud | changes]}
end
@doc """
Trace of matrix (sum of all diagonal elements).
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _ -> 1 end)
iex> matrax |> Matrax.trace()
5
"""
@spec trace(t) :: integer()
def trace(%Matrax{} = matrax) do
matrax
|> diagonal()
|> sum()
end
@doc """
Set row of a matrix at `row_index` to the values from the given 1-row matrix.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _ -> 1 end)
iex> row_matrax = Matrax.new(1, 5, seed_fun: fn _ -> 3 end)
iex> Matrax.set_row(matrax, 2, row_matrax) |> Matrax.to_list_of_lists
[
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
[3, 3, 3, 3, 3],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1]
]
"""
@spec set_row(t, non_neg_integer, t) :: t
def set_row(
%Matrax{columns: columns} = matrax,
row_index,
%Matrax{columns: columns, rows: 1} = row_matrax
) do
matrax
|> row(row_index)
|> Matrax.apply(fn _, position ->
get(row_matrax, position)
end)
matrax
end
@doc """
Set column of a matrix at `column_index` to the values from the given 1-column matrix.
## Examples
iex> matrax = Matrax.new(5, 5, seed_fun: fn _ -> 1 end)
iex> column_matrax = Matrax.new(5, 1, seed_fun: fn _ -> 3 end)
iex> Matrax.set_column(matrax, 2, column_matrax) |> Matrax.to_list_of_lists
[
[1, 1, 3, 1, 1],
[1, 1, 3, 1, 1],
[1, 1, 3, 1, 1],
[1, 1, 3, 1, 1],
[1, 1, 3, 1, 1]
]
"""
@spec set_column(t, non_neg_integer, t) :: t
def set_column(
%Matrax{rows: rows} = matrax,
column_index,
%Matrax{rows: rows, columns: 1} = column_matrax
) do
matrax
|> column(column_index)
|> Matrax.apply(fn _, position ->
get(column_matrax, position)
end)
matrax
end
@doc """
Clears all changes made to `%Matrax{}` struct by
setting the `:changes` key to `[]` and reverting its modifications
to `:rows` & `:columns`.
Clears access path only modifications like `transpose/1` but not
modifications to integer values in the `:atomics`.
## Examples
iex> matrax = Matrax.identity(3)
iex> matrax |> Matrax.to_list_of_lists()
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]
]
iex> matrax = matrax |> Matrax.diagonal()
iex> matrax |> Matrax.apply(fn _ -> 8 end)
iex> matrax |> Matrax.to_list_of_lists()
[[8, 8, 8]]
iex> matrax = matrax |> Matrax.column(0)
iex> matrax |> Matrax.to_list_of_lists()
[[8]]
iex> matrax = matrax |> Matrax.clear_changes()
iex> matrax |> Matrax.to_list_of_lists()
[
[8, 0, 0],
[0, 8, 0],
[0, 0, 8]
]
"""
@spec clear_changes(t) :: t
def clear_changes(%Matrax{} = matrax) do
do_clear_changes(matrax)
end
defp do_clear_changes(%Matrax{changes: []} = matrax) do
matrax
end
defp do_clear_changes(matrax) do
do_clear_changes(matrax |> clear_last_change())
end
@doc """
Clears last change made to `%Matrax{}` struct by removing
the head of `:changes` key and reverting its modifications
to `:rows` & `:columns`.
Clears access path only modifications like `transpose/1` but not
modifications to integer values in the `:atomics`.
## Examples
iex> matrax = Matrax.identity(3)
iex> matrax |> Matrax.to_list_of_lists()
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]
]
iex> matrax = matrax |> Matrax.diagonal()
iex> matrax |> Matrax.apply(fn _ -> 8 end)
iex> matrax |> Matrax.to_list_of_lists()
[[8, 8, 8]]
iex> matrax = matrax |> Matrax.clear_last_change()
iex> matrax |> Matrax.to_list_of_lists()
[
[8, 0, 0],
[0, 8, 0],
[0, 0, 8]
]
"""
@spec clear_last_change(t) :: t
def clear_last_change(%Matrax{changes: []} = matrax) do
matrax
end
def clear_last_change(%Matrax{changes: [change | changes_tl]} = matrax) when is_atom(change) do
%Matrax{matrax | changes: changes_tl}
end
def clear_last_change(%Matrax{changes: [change | changes_tl]} = matrax) when is_tuple(change) do
{rows, columns} = elem(change, 1)
%Matrax{matrax | rows: rows, columns: columns, changes: changes_tl}
end
@doc """
Drops row of matrix at given `row_index`.
Only modifies the struct, it doesn't move or mutate data.
After `drop_row/2` the access path to positions
will be modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(5, 4, seed_fun: fn _, {row, _col} -> row end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3],
[4, 4, 4, 4]
]
iex> matrax |> Matrax.drop_row(1) |> Matrax.to_list_of_lists()
[
[0, 0, 0, 0],
[2, 2, 2, 2],
[3, 3, 3, 3],
[4, 4, 4, 4]
]
"""
@spec drop_row(t, non_neg_integer) :: t
def drop_row(%Matrax{rows: rows, changes: changes} = matrax, row_index)
when rows > 1 and row_index >= 0 and row_index < rows do
%Matrax{matrax | rows: rows - 1, changes: [{:drop_row, row_index} | changes]}
end
@doc """
Drops column of matrix at given `column_index`.
Only modifies the struct, it doesn't move or mutate data.
After `drop_column/2` the access path to positions
will be modified during execution.
If you want to get a new `:atomics` with modified data
use the `copy/1` function which applies the `:changes`.
## Examples
iex> matrax = Matrax.new(4, 5, seed_fun: fn _, {_row, col} -> col end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]
]
iex> matrax |> Matrax.drop_column(1) |> Matrax.to_list_of_lists()
[
[0, 2, 3, 4],
[0, 2, 3, 4],
[0, 2, 3, 4],
[0, 2, 3, 4]
]
"""
@spec drop_column(t, non_neg_integer) :: t
def drop_column(%Matrax{columns: columns, changes: changes} = matrax, column_index)
when columns > 1 and column_index >= 0 and column_index < columns do
%Matrax{matrax | columns: columns - 1, changes: [{:drop_column, column_index} | changes]}
end
@doc """
Concatenates a list of `%Matrax{}` matrices.
Returns a new `%Matrax{}` struct with a new atomics reference containing all values
of matrices from `list`.
## Options
* `:signed` - (boolean) to have signed or unsigned 64bit integers in the new matrix. Defaults to `true`.
## Examples
iex> matrax = Matrax.new(3, 3, seed_fun: fn _, {_row, col} -> col end)
iex> matrax |> Matrax.to_list_of_lists()
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
iex> Matrax.concat([matrax, matrax], :rows) |> Matrax.to_list_of_lists()
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
iex> Matrax.concat([matrax, matrax], :columns) |> Matrax.to_list_of_lists()
[
[0, 1, 2, 0, 1, 2],
[0, 1, 2, 0, 1, 2],
[0, 1, 2, 0, 1, 2]
]
"""
@spec concat(nonempty_list(t), :rows | :columns, list) :: t | no_return
def concat([%Matrax{rows: rows, columns: columns} | _] = list, concat_type, options \\ [])
when is_list(list) and length(list) > 0 do
can_concat? =
case concat_type do
:columns ->
list |> Enum.all?(&(&1.rows == rows))
:rows ->
list |> Enum.all?(&(&1.columns == columns))
end
if not can_concat? do
raise ArgumentError,
"When concatenating by #{inspect(concat_type)} all matrices should " <>
"have the same number of #{if(concat_type == :row, do: "columns", else: "rows")}"
end
signed = Keyword.get(options, :signed, true)
size =
list
|> Enum.map(&count/1)
|> Enum.sum()
atomics = :atomics.new(size, signed: signed)
%{min: min, max: max} = :atomics.info(atomics)
{rows, columns} =
case concat_type do
:rows ->
{round(size / columns), columns}
:columns ->
{rows, round(size / rows)}
end
matrax =
%Matrax{
atomics: atomics,
rows: rows,
columns: columns,
signed: signed,
min: min,
max: max,
changes: []
}
do_concat(list, matrax, 0, 0, concat_type)
matrax
end
defp do_concat([], _, _, _, _), do: :done
defp do_concat([%Matrax{rows: rows} | tail], matrax, target_index, source_index, :rows) when source_index == rows do
do_concat(tail, matrax, target_index, 0, :rows)
end
defp do_concat([%Matrax{columns: columns} | tail], matrax, target_index, source_index, :columns) when source_index == columns do
do_concat(tail, matrax, target_index, 0, :columns)
end
defp do_concat([head | _] = list, matrax, target_index, source_index, concat_type) do
case concat_type do
:rows ->
set_row(matrax, target_index, head |> row(source_index))
:columns ->
set_column(matrax, target_index, head |> column(source_index))
end
do_concat(list, matrax, target_index + 1, source_index + 1, concat_type)
end
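# Trace (added commentary): concatenating two 3 x 3 matrices by :rows copies
# source rows 0..2 of the first matrix into target rows 0..2, then source
# rows 0..2 of the second into target rows 3..5; `target_index` keeps growing
# across matrices while `source_index` resets to 0 for each one.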
defimpl Enumerable do
@moduledoc false
alias Matrax
def count(%Matrax{} = matrax) do
{:ok, Matrax.count(matrax)}
end
def member?(%Matrax{} = matrax, int) do
{:ok, Matrax.member?(matrax, int)}
end
def slice(%Matrax{} = matrax) do
{
:ok,
Matrax.count(matrax),
fn start, length ->
do_slice(matrax, start + 1, length)
end
}
end
defp do_slice(_, _, 0), do: []
defp do_slice(matrax, index, length) do
position = Matrax.index_to_position(matrax, index)
[Matrax.get(matrax, position) | do_slice(matrax, index + 1, length - 1)]
end
def reduce(%Matrax{} = matrax, acc, fun) do
do_reduce({matrax, 0, Matrax.count(matrax)}, acc, fun)
end
defp do_reduce(_, {:halt, acc}, _fun), do: {:halted, acc}
defp do_reduce(tuple, {:suspend, acc}, fun), do: {:suspended, acc, &do_reduce(tuple, &1, fun)}
defp do_reduce({_, same, same}, {:cont, acc}, _fun), do: {:done, acc}
defp do_reduce({matrax, index, count}, {:cont, acc}, fun) do
position = Matrax.index_to_position(matrax, index + 1)
do_reduce(
{matrax, index + 1, count},
fun.(Matrax.get(matrax, position), acc),
fun
)
end
end
end
|
lib/matrax.ex
| 0.937797
| 0.758958
|
matrax.ex
|
starcoder
|
defmodule AWS.FSx do
@moduledoc """
Amazon FSx is a fully managed service that makes it easy for storage and
application administrators to launch and use shared file storage.
"""
@doc """
Cancels an existing Amazon FSx for Lustre data repository task if that task is
in either the `PENDING` or `EXECUTING` state.
When you cancel a task, Amazon FSx does the following.
* Any files that FSx has already exported are not reverted.
* FSx continues to export any files that are "in-flight" when the
cancel operation is received.
* FSx does not export any files that have not yet been exported.
"""
def cancel_data_repository_task(client, input, options \\ []) do
request(client, "CancelDataRepositoryTask", input, options)
end
@doc """
Creates a backup of an existing Amazon FSx file system.
Creating regular backups for your file system is a best practice, enabling you
to restore a file system from a backup if an issue arises with the original file
system.
For Amazon FSx for Lustre file systems, you can create a backup only for file
systems with the following configuration:
* a Persistent deployment type
* is *not* linked to a data repository.
For more information about backing up Amazon FSx for Lustre file systems, see
[Working with FSx for Lustre backups](https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-backups-fsx.html).
For more information about backing up Amazon FSx for Windows file systems, see
[Working with FSx for Windows backups](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-backups.html).
If a backup with the specified client request token exists, and the parameters
match, this operation returns the description of the existing backup. If a
backup with the specified client request token exists, and the parameters don't
match, this operation returns `IncompatibleParameterError`. If a backup with the
specified client request token doesn't exist, `CreateBackup` does the following:
* Creates a new Amazon FSx backup with an assigned ID, and an
initial lifecycle state of `CREATING`.
* Returns the description of the backup.
By using the idempotent operation, you can retry a `CreateBackup` operation
without the risk of creating an extra backup. This approach can be useful when
an initial call fails in a way that makes it unclear whether a backup was
created. If you use the same client request token and the initial call created a
backup, the operation returns a successful result because all the parameters are
the same.
The `CreateBackup` operation returns while the backup's lifecycle state is still
`CREATING`. You can check the backup creation status by calling the
`DescribeBackups` operation, which returns the backup state along with other
information.
"""
def create_backup(client, input, options \\ []) do
request(client, "CreateBackup", input, options)
end
@doc """
Creates an Amazon FSx for Lustre data repository task.
You use data repository tasks to perform bulk operations between your Amazon FSx
file system and its linked data repository. An example of a data repository task
is exporting any data and metadata changes, including POSIX metadata, to files,
directories, and symbolic links (symlinks) from your FSx file system to its
linked data repository. A `CreateDataRepositoryTask` operation will fail if a
data repository is not linked to the FSx file system. To learn more about data
repository tasks, see [Using Data Repository Tasks](https://docs.aws.amazon.com/fsx/latest/LustreGuide/data-repository-tasks.html).
To learn more about linking a data repository to your file system, see [Setting the Export
Prefix](https://docs.aws.amazon.com/fsx/latest/LustreGuide/export-data-repository.html#export-prefix).
"""
def create_data_repository_task(client, input, options \\ []) do
request(client, "CreateDataRepositoryTask", input, options)
end
@doc """
Creates a new, empty Amazon FSx file system.
If a file system with the specified client request token exists and the
parameters match, `CreateFileSystem` returns the description of the existing
file system. If a file system with the specified client request token exists
and the parameters don't match, this call returns `IncompatibleParameterError`.
If a file system with the specified client request token doesn't exist,
`CreateFileSystem` does the following:
* Creates a new, empty Amazon FSx file system with an assigned ID,
and an initial lifecycle state of `CREATING`.
* Returns the description of the file system.
This operation requires a client request token in the request that Amazon FSx
uses to ensure idempotent creation. This means that calling the operation
multiple times with the same client request token has no effect. By using the
idempotent operation, you can retry a `CreateFileSystem` operation without the
risk of creating an extra file system. This approach can be useful when an
initial call fails in a way that makes it unclear whether a file system was
created. Examples are if a transport level timeout occurred, or your connection
was reset. If you use the same client request token and the initial call created
a file system, the client receives success as long as the parameters are the
same.
The `CreateFileSystem` call returns while the file system's lifecycle state is
still `CREATING`. You can check the file-system creation status by calling the
`DescribeFileSystems` operation, which returns the file system state along with
other information.
"""
def create_file_system(client, input, options \\ []) do
request(client, "CreateFileSystem", input, options)
end
@doc """
Creates a new Amazon FSx file system from an existing Amazon FSx backup.
If a file system with the specified client request token exists and the
parameters match, this operation returns the description of the file system. If
a file system with the specified client request token exists and the parameters
don't match, this call returns `IncompatibleParameterError`. If a file system
with the specified client request token doesn't exist, this operation does the
following:
* Creates a new Amazon FSx file system from backup with an assigned
ID, and an initial lifecycle state of `CREATING`.
* Returns the description of the file system.
Parameters like Active Directory, default share name, automatic backup, and
backup settings default to the parameters of the file system that was backed up,
unless overridden. You can explicitly supply other settings.
By using the idempotent operation, you can retry a `CreateFileSystemFromBackup`
call without the risk of creating an extra file system. This approach can be
useful when an initial call fails in a way that makes it unclear whether a file
system was created. Examples are if a transport level timeout occurred, or your
connection was reset. If you use the same client request token and the initial
call created a file system, the client receives success as long as the
parameters are the same.
The `CreateFileSystemFromBackup` call returns while the file system's lifecycle
state is still `CREATING`. You can check the file-system creation status by
calling the `DescribeFileSystems` operation, which returns the file system state
along with other information.
"""
def create_file_system_from_backup(client, input, options \\ []) do
request(client, "CreateFileSystemFromBackup", input, options)
end
@doc """
Deletes an Amazon FSx backup, deleting its contents.
After deletion, the backup no longer exists, and its data is gone.
The `DeleteBackup` call returns instantly. The backup will not show up in later
`DescribeBackups` calls.
The data in a deleted backup is also deleted and can't be recovered by any
means.
"""
def delete_backup(client, input, options \\ []) do
request(client, "DeleteBackup", input, options)
end
@doc """
Deletes a file system, deleting its contents.
After deletion, the file system no longer exists, and its data is gone. Any
existing automatic backups will also be deleted.
By default, when you delete an Amazon FSx for Windows File Server file system, a
final backup is created upon deletion. This final backup is not subject to the
file system's retention policy, and must be manually deleted.
The `DeleteFileSystem` action returns while the file system has the `DELETING`
status. You can check the file system deletion status by calling the
`DescribeFileSystems` action, which returns a list of file systems in your
account. If you pass the file system ID for a deleted file system, the
`DescribeFileSystems` returns a `FileSystemNotFound` error.
Deleting an Amazon FSx for Lustre file system will fail with a 400 BadRequest if
a data repository task is in a `PENDING` or `EXECUTING` state.
The data in a deleted file system is also deleted and can't be recovered by any
means.
"""
def delete_file_system(client, input, options \\ []) do
request(client, "DeleteFileSystem", input, options)
end
@doc """
Returns the description of specific Amazon FSx backups, if a `BackupIds` value
is provided for that backup.
Otherwise, it returns all backups owned by your AWS account in the AWS Region of
the endpoint that you're calling.
When retrieving all backups, you can optionally specify the `MaxResults`
parameter to limit the number of backups in a response. If more backups remain,
Amazon FSx returns a `NextToken` value in the response. In this case, send a
later request with the `NextToken` request parameter set to the value of
`NextToken` from the last response.
This action is used in an iterative process to retrieve a list of your backups.
`DescribeBackups` is called first without a `NextToken` value. Then the action
continues to be called with the `NextToken` parameter set to the value of the
last `NextToken` value until a response has no `NextToken`.
When using this action, keep the following in mind:
* The implementation might return fewer than `MaxResults` backup
descriptions while still including a `NextToken` value.
* The order of backups returned in the response of one
`DescribeBackups` call and the order of backups returned across the responses of
a multi-call iteration is unspecified.
"""
def describe_backups(client, input, options \\ []) do
request(client, "DescribeBackups", input, options)
end
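# Illustrative pagination sketch (added commentary, not part of the generated
# module; the "Backups" and "NextToken" response keys are assumed from the
# API shape described above):
#
#     def all_backups(client, input \\ %{}) do
#       {:ok, %{"Backups" => backups} = body, _response} =
#         AWS.FSx.describe_backups(client, input)
#
#       case body do
#         %{"NextToken" => token} ->
#           backups ++ all_backups(client, Map.put(input, "NextToken", token))
#
#         _ ->
#           backups
#       end
#     end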
@doc """
Returns the description of specific Amazon FSx for Lustre data repository tasks,
if one or more `TaskIds` values are provided in the request, or if filters are
used in the request.
You can use filters to narrow the response to include just tasks for specific
file systems, or tasks in a specific lifecycle state. Otherwise, it returns all
data repository tasks owned by your AWS account in the AWS Region of the
endpoint that you're calling.
When retrieving all tasks, you can paginate the response by using the optional
`MaxResults` parameter to limit the number of tasks returned in a response. If
more tasks remain, Amazon FSx returns a `NextToken` value in the response. In
this case, send a later request with the `NextToken` request parameter set to
the value of `NextToken` from the last response.
"""
def describe_data_repository_tasks(client, input, options \\ []) do
request(client, "DescribeDataRepositoryTasks", input, options)
end
@doc """
Returns the description of specific Amazon FSx file systems, if a
`FileSystemIds` value is provided for that file system.
Otherwise, it returns descriptions of all file systems owned by your AWS account
in the AWS Region of the endpoint that you're calling.
When retrieving all file system descriptions, you can optionally specify the
`MaxResults` parameter to limit the number of descriptions in a response. If
more file system descriptions remain, Amazon FSx returns a `NextToken` value in
the response. In this case, send a later request with the `NextToken` request
parameter set to the value of `NextToken` from the last response.
This action is used in an iterative process to retrieve a list of your file
system descriptions. `DescribeFileSystems` is called first without a
`NextToken` value. Then the action continues to be called with the `NextToken`
parameter set to the value of the last `NextToken` value until a response has no
`NextToken`.
When using this action, keep the following in mind:
* The implementation might return fewer than `MaxResults` file
system descriptions while still including a `NextToken` value.
* The order of file systems returned in the response of one
`DescribeFileSystems` call and the order of file systems returned across the
responses of a multi-call iteration is unspecified.
"""
def describe_file_systems(client, input, options \\ []) do
request(client, "DescribeFileSystems", input, options)
end
@doc """
Lists tags for Amazon FSx file systems and, in the case of Amazon FSx for
Windows File Server, backups.
When retrieving all tags, you can optionally specify the `MaxResults` parameter
to limit the number of tags in a response. If more tags remain, Amazon FSx
returns a `NextToken` value in the response. In this case, send a later request
with the `NextToken` request parameter set to the value of `NextToken` from the
last response.
This action is used in an iterative process to retrieve a list of your tags.
`ListTagsForResource` is called first without a `NextToken` value. Then the
action continues to be called with the `NextToken` parameter set to the value of
the last `NextToken` value until a response has no `NextToken`.
When using this action, keep the following in mind:
* The implementation might return fewer than `MaxResults` tags
while still including a `NextToken` value.
* The order of tags returned in the response of one
`ListTagsForResource` call and the order of tags returned across the responses
of a multi-call iteration is unspecified.
"""
def list_tags_for_resource(client, input, options \\ []) do
request(client, "ListTagsForResource", input, options)
end
@doc """
Tags an Amazon FSx resource.
"""
def tag_resource(client, input, options \\ []) do
request(client, "TagResource", input, options)
end
@doc """
This action removes a tag from an Amazon FSx resource.
"""
def untag_resource(client, input, options \\ []) do
request(client, "UntagResource", input, options)
end
@doc """
Use this operation to update the configuration of an existing Amazon FSx file
system.
You can update multiple properties in a single request.
For Amazon FSx for Windows File Server file systems, you can update the
following properties:
* AutomaticBackupRetentionDays
* DailyAutomaticBackupStartTime
* SelfManagedActiveDirectoryConfiguration
* StorageCapacity
* ThroughputCapacity
* WeeklyMaintenanceStartTime
For Amazon FSx for Lustre file systems, you can update the following properties:
* AutoImportPolicy
* AutomaticBackupRetentionDays
* DailyAutomaticBackupStartTime
* WeeklyMaintenanceStartTime
"""
def update_file_system(client, input, options \\ []) do
request(client, "UpdateFileSystem", input, options)
end
@spec request(AWS.Client.t(), binary(), map(), list()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, action, input, options) do
client = %{client | service: "fsx"}
host = build_host("fsx", client)
url = build_url(host, client)
headers = [
{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "AWSSimbaAPIService_v20180301.#{action}"}
]
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
post(client, url, payload, headers, options)
end
defp post(client, url, payload, headers, options) do
case AWS.Client.request(client, :post, url, payload, headers, options) do
{:ok, %{status_code: 200, body: body} = response} ->
body = if body != "", do: decode!(client, body)
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
defp encode!(client, payload) do
AWS.Client.encode!(client, payload, :json)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
|
lib/aws/generated/fsx.ex
| 0.890384
| 0.620593
|
fsx.ex
|
starcoder
|
defmodule EarmarkParser do
@type ast_meta :: map()
@type ast_tag :: binary()
@type ast_attribute_name :: binary()
@type ast_attribute_value :: binary()
@type ast_attribute :: {ast_attribute_name(), ast_attribute_value()}
@type ast_attributes :: list(ast_attribute())
@type ast_tuple :: {ast_tag(), ast_attributes(), ast(), ast_meta()}
@type ast_node :: binary() | ast_tuple()
@type ast :: list(ast_node())
@moduledoc """
### API
#### EarmarkParser.as_ast
This is the structure of the result of `as_ast`.
{:ok, ast, []} = EarmarkParser.as_ast(markdown)
{:ok, ast, deprecation_messages} = EarmarkParser.as_ast(markdown)
{:error, ast, error_messages} = EarmarkParser.as_ast(markdown)
For examples see the function documentation below.
#### Options
Options can be passed into `as_ast/2` according to the documentation of `EarmarkParser.Options`.
{status, ast, errors} = EarmarkParser.as_ast(markdown, options)
## Supports
Standard [Gruber markdown][gruber].
[gruber]: <http://daringfireball.net/projects/markdown/syntax>
## Extensions
### Links
#### Links supported by default
##### Oneline HTML Link tags
iex(1)> EarmarkParser.as_ast(~s{<a href="href">link</a>})
{:ok, [{"a", [{"href", "href"}], ["link"], %{verbatim: true}}], []}
##### Markdown links
New style ...
iex(2)> EarmarkParser.as_ast(~s{[title](destination)})
{:ok, [{"p", [], [{"a", [{"href", "destination"}], ["title"], %{}}], %{}}], []}
and old style
iex(3)> EarmarkParser.as_ast("[foo]: /url \\"title\\"\\n\\n[foo]\\n")
{:ok, [{"p", [], [{"a", [{"href", "/url"}, {"title", "title"}], ["foo"], %{}}], %{}}], []}
#### Autolinks
iex(4)> EarmarkParser.as_ast("<https://elixir-lang.com>")
{:ok, [{"p", [], [{"a", [{"href", "https://elixir-lang.com"}], ["https://elixir-lang.com"], %{}}], %{}}], []}
#### Additional link parsing via options
#### Pure links
**N.B.** that the `pure_links` option is `true` by default
iex(5)> EarmarkParser.as_ast("https://github.com")
{:ok, [{"p", [], [{"a", [{"href", "https://github.com"}], ["https://github.com"], %{}}], %{}}], []}
But can be deactivated
iex(6)> EarmarkParser.as_ast("https://github.com", pure_links: false)
{:ok, [{"p", [], ["https://github.com"], %{}}], []}
#### Wikilinks...
are disabled by default
iex(7)> EarmarkParser.as_ast("[[page]]")
{:ok, [{"p", [], ["[[page]]"], %{}}], []}
and can be enabled
iex(8)> EarmarkParser.as_ast("[[page]]", wikilinks: true)
{:ok, [{"p", [], [{"a", [{"href", "page"}], ["page"], %{wikilink: true}}], %{}}], []}
### Github Flavored Markdown
GFM is supported by default; however, as GFM is a moving target and not all GFM extensions make sense in a general context, EarmarkParser does not support all of it. Here is a list of what is supported:
#### Strike Through
iex(9)> EarmarkParser.as_ast("~~hello~~")
{:ok, [{"p", [], [{"del", [], ["hello"], %{}}], %{}}], []}
#### Syntax Highlighting
All backquoted or fenced code blocks with a language string are rendered with the given
language as a _class_ attribute of the _code_ tag.
For example:
iex(10)> [
...(10)> "```elixir",
...(10)> " @tag :hello",
...(10)> "```"
...(10)> ] |> EarmarkParser.as_ast()
{:ok, [{"pre", [], [{"code", [{"class", "elixir"}], [" @tag :hello"], %{}}], %{}}], []}
will be rendered as shown in the doctest above.
If you want to integrate with a syntax highlighter with different conventions you can add more classes by specifying prefixes that will be
put before the language string.
Prism.js for example needs a class `language-elixir`. In order to achieve that goal you can add `language-`
as a `code_class_prefix` to `EarmarkParser.Options`.
In the following example we want more than one additional class, so we add more prefixes.
iex(11)> [
...(11)> "```elixir",
...(11)> " @tag :hello",
...(11)> "```"
...(11)> ] |> EarmarkParser.as_ast(%EarmarkParser.Options{code_class_prefix: "lang- language-"})
{:ok, [{"pre", [], [{"code", [{"class", "elixir lang-elixir language-elixir"}], [" @tag :hello"], %{}}], %{}}], []}
#### Tables
Are supported as long as they are preceded by an empty line.
State | Abbrev | Capital
----: | :----: | -------
Texas | TX | Austin
Maine | ME | Augusta
Tables may have leading and trailing vertical bars on each line
| State | Abbrev | Capital |
| ----: | :----: | ------- |
| Texas | TX | Austin |
| Maine | ME | Augusta |
Tables need not have headers, in which case all column alignments
default to left.
| Texas | TX | Austin |
| Maine | ME | Augusta |
Currently we assume there are always spaces around interior vertical bars
unless there are exterior bars.
However, in order to be more GFM compatible, the `gfm_tables: true` option
can be used to interpret only interior vertical bars as a table if a separation
line is given; therefore
Language|Rating
--------|------
Elixir | awesome
is a table (iff `gfm_tables: true`) while
Language|Rating
Elixir | awesome
never is.
#### HTML Blocks
HTML is not parsed recursively or detected in all conditions right now, though GFM compliance
is a goal.
But for now the following holds:
An HTML block, defined by a tag starting a line and the same tag starting a
different line, is parsed as one HTML AST node, marked with `%{verbatim: true}`
E.g.
iex(12)> lines = [ "<div><span>", "some</span><text>", "</div>more text" ]
...(12)> EarmarkParser.as_ast(lines)
{:ok, [{"div", [], ["<span>", "some</span><text>"], %{verbatim: true}}, "more text"], []}
And a line starting with an opening tag and ending with the corresponding closing tag is parsed in similar
fashion
iex(13)> EarmarkParser.as_ast(["<span class=\\"superspan\\">spaniel</span>"])
{:ok, [{"span", [{"class", "superspan"}], ["spaniel"], %{verbatim: true}}], []}
What is HTML?
We differ from strict GFM by allowing **all** tags, not only HTML5 tags. This holds for one-liners...
iex(14)> {:ok, ast, []} = EarmarkParser.as_ast(["<stupid />", "<not>better</not>"])
...(14)> ast
[
{"stupid", [], [], %{verbatim: true}},
{"not", [], ["better"], %{verbatim: true}}]
and for multi line blocks
iex(15)> {:ok, ast, []} = EarmarkParser.as_ast([ "<hello>", "world", "</hello>"])
...(15)> ast
[{"hello", [], ["world"], %{verbatim: true}}]
#### HTML Comments
Are recognized if they start a line (after whitespace) and are parsed until the next `-->` is found;
all text after that `-->` is ignored.
E.g.
iex(16)> EarmarkParser.as_ast(" <!-- Comment\\ncomment line\\ncomment --> text -->\\nafter")
{:ok, [{:comment, [], [" Comment", "comment line", "comment "], %{comment: true}}, {"p", [], ["after"], %{}}], []}
### Adding Attributes with the IAL extension
#### To block elements
HTML attributes can be added to any block-level element. We use
the Kramdown syntax: add the line `{:` _attrs_ `}` following the block.
_attrs_ can be one or more of:
* `.className`
* `#id`
* name=value, name="value", or name='value'
For example:
# Warning
{: .red}
Do not turn off the engine
if you are at altitude.
{: .boxed #warning spellcheck="true"}
#### To links or images
It is possible to add IAL attributes to generated links or images in the following
format.
iex(17)> markdown = "[link](url) {: .classy}"
...(17)> EarmarkParser.as_ast(markdown)
{ :ok, [{"p", [], [{"a", [{"class", "classy"}, {"href", "url"}], ["link"], %{}}], %{}}], []}
For both cases, malformed attributes are ignored and warnings are issued.
iex(18)> [ "Some text", "{:hello}" ] |> Enum.join("\\n") |> EarmarkParser.as_ast()
{:error, [{"p", [], ["Some text"], %{}}], [{:warning, 2,"Illegal attributes [\\"hello\\"] ignored in IAL"}]}
It is possible to escape the IAL in both forms if necessary
iex(19)> markdown = "[link](url)\\\\{: .classy}"
...(19)> EarmarkParser.as_ast(markdown)
{:ok, [{"p", [], [{"a", [{"href", "url"}], ["link"], %{}}, "{: .classy}"], %{}}], []}
This of course is not necessary in code blocks or text lines
containing an IAL-like string, as in the following example
iex(20)> markdown = "hello {:world}"
...(20)> EarmarkParser.as_ast(markdown)
{:ok, [{"p", [], ["hello {:world}"], %{}}], []}
## Limitations
* Block-level HTML is correctly handled only if each HTML
tag appears on its own line. So
<div>
<div>
hello
</div>
</div>
will work. However, the following won't:
<div>
hello</div>
* John Gruber's tests contain an ambiguity when it comes to
lines that might be the start of a list inside paragraphs.
One test says that
This is the text
* of a paragraph
that I wrote
is a single paragraph. The "*" is not significant. However, another
test has
* A list item
* and another
and expects this to be a nested list. But, in reality, the second could just
be the continuation of a paragraph.
I've chosen always to use the second interpretation—a line that looks like
a list item will always be a list item.
* Rendering of block and inline elements.
Block or void HTML elements that are at the absolute beginning of a line end
the preceding paragraph.
Thus,
mypara
<hr />
Becomes
<p>mypara</p>
<hr />
While
mypara
<hr />
will be transformed into
<p>mypara
<hr /></p>
## Timeouts
By default, that is, if the `timeout` option is not set, EarmarkParser uses parallel mapping as implemented in `EarmarkParser.pmap/2`,
which uses `Task.await` with its default timeout of 5000ms.
In rare cases that might not be enough.
By indicating a longer `timeout` option in milliseconds EarmarkParser will use parallel mapping as implemented in `EarmarkParser.pmap/3`,
which will pass `timeout` to `Task.await`.
In both cases one can override the mapper function with either the `mapper` option (used if and only if `timeout` is nil) or the
`mapper_with_timeout` function (used otherwise).
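For example, for a very large document one might raise the timeout like this (a sketch; `very_long_lines` stands for your input):
EarmarkParser.as_ast(very_long_lines, timeout: 20_000)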
"""
alias EarmarkParser.Error
alias EarmarkParser.Options
import EarmarkParser.Message, only: [sort_messages: 1]
@doc """
iex(21)> markdown = "My `code` is **best**"
...(21)> {:ok, ast, []} = EarmarkParser.as_ast(markdown)
...(21)> ast
[{"p", [], ["My ", {"code", [{"class", "inline"}], ["code"], %{}}, " is ", {"strong", [], ["best"], %{}}], %{}}]
iex(22)> markdown = "```elixir\\nIO.puts 42\\n```"
...(22)> {:ok, ast, []} = EarmarkParser.as_ast(markdown, code_class_prefix: "lang-")
...(22)> ast
[{"pre", [], [{"code", [{"class", "elixir lang-elixir"}], ["IO.puts 42"], %{}}], %{}}]
**Rationale**:
The AST is exposed in the spirit of [Floki's](https://hex.pm/packages/floki).
"""
def as_ast(lines, options \\ %Options{})
def as_ast(lines, %Options{}=options) do
context = _as_ast(lines, options)
messages = sort_messages(context)
status =
case Enum.any?(messages, fn {severity, _, _} ->
severity == :error || severity == :warning
end) do
true -> :error
_ -> :ok
end
{status, context.value, messages}
end
def as_ast(lines, options) when is_list(options) do
as_ast(lines, struct(Options, options))
end
def as_ast(lines, options) when is_map(options) do
as_ast(lines, struct(Options, options |> Map.delete(:__struct__) |> Enum.into([])))
end
defp _as_ast(lines, options) do
{blocks, context} = EarmarkParser.Parser.parse_markdown(lines, options)
EarmarkParser.AstRenderer.render(blocks, context)
end
@doc """
Accesses current hex version of the `EarmarkParser` application. Convenience for
`iex` usage.
"""
def version() do
with {:ok, version} = :application.get_key(:earmark_parser, :vsn),
do: to_string(version)
end
@default_timeout_in_ms 5000
@doc false
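# Maps `func` over `collection` in parallel Tasks, raising an
# `EarmarkParser.Error` if a task dies or does not finish within `timeout`.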
def pmap(collection, func, timeout \\ @default_timeout_in_ms) do
collection
|> Enum.map(fn item -> Task.async(fn -> func.(item) end) end)
|> Task.yield_many(timeout)
|> Enum.map(&_join_pmap_results_or_raise(&1, timeout))
end
defp _join_pmap_results_or_raise(yield_tuples, timeout)
defp _join_pmap_results_or_raise({_task, {:ok, result}}, _timeout), do: result
defp _join_pmap_results_or_raise({task, {:error, reason}}, _timeout),
do: raise(Error, "#{inspect(task)} has died with reason #{inspect(reason)}")
defp _join_pmap_results_or_raise({task, nil}, timeout),
do:
raise(
Error,
"#{inspect(task)} has not responded within the set timeout of #{timeout}ms, consider increasing it"
)
end
# SPDX-License-Identifier: Apache-2.0
|
lib/earmark_parser.ex
| 0.751192
| 0.60711
|
earmark_parser.ex
|
starcoder
|
defmodule BioMonitor.SensorManager do
@moduledoc """
Wrapper module around all serial communication with
all sensors using a SerialMonitor instance.
"""
alias BioMonitor.SerialMonitor
# Name used for the arduino board.
@arduino_gs ArduinoGenServer
@doc """
Helper to expose arduino's serial monitor identifier.
"""
def arduino_gs_id, do: @arduino_gs
@doc """
Adds all sensors specified in the config file to the
SerialMonitor.
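A hypothetical config entry in the expected shape (port, speed and
command strings are assumptions for illustration):
# hypothetical values
config :bio_monitor, BioMonitor.SensorManager,
  arduino: [port: "/dev/ttyACM0", speed: 9600, sensors: [temp: "GT", ph: "GP"]]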
"""
def start_sensors do
with sensor_specs = Application.get_env(
:bio_monitor,
BioMonitor.SensorManager
),
false <- sensor_specs == nil,
arduino_spec <- process_specs(sensor_specs[:arduino]),
:ok <- SerialMonitor.set_port(
@arduino_gs,
arduino_spec.port,
arduino_spec.speed
)
do
# Register sensors here.
SerialMonitor.add_sensors(@arduino_gs, arduino_spec[:sensors])
{:ok, "Sensor ready"}
else
{:error, _} ->
{:error, "There was an error connecting to the board."}
_ ->
{:error, "Error processing the system configuration."}
end
end
@doc """
Fetch the pH value from the sensor.
"""
def get_ph do
case get_readings() do
{:ok, readings} -> {:ok, readings[:ph]}
_ -> {:error, "Error al obtener el Ph."}
end
end
@doc """
Sets the offset of the pH sensor for calibration.
"""
def calibratePh(type) do
case send_and_read(:ph, "CP #{type}") do
{:ok, _result} -> :ok
{:error, message, _description} -> {:error, message}
end
end
@doc """
Turns on the acid pump to drop acid.
"""
def pump_acid() do
case send_and_read(:ph, "SP 0 1:185,0:15,2:100,0") do
{:ok, _result} -> :ok
{:error, message, _description} -> {:error, message}
end
end
@doc """
Turns on the base pump to drop base.
"""
def pump_base() do
case send_and_read(:ph, "SP 1 1:255,0:15,2:100,0") do
{:ok, _result} -> :ok
{:error, message, _description} -> {:error, message}
end
end
@doc """
Pumps any substance the operator has set up through the third pump
for the given time interval.
"""
def pump_trigger(for_seconds) do
case send_and_read(:ph, "SP 2 1:#{for_seconds},0") do
{:ok, _result} -> :ok
{:error, message, _description} -> {:error, message}
end
end
@doc """
Pushes acid through the pump.
"""
def push_acid() do
## TODO: change these values to the real ones
case send_and_read(:ph, "SP 0 1:4000,0") do
{:ok, _result} -> :ok
{:error, message, _description} -> {:error, message}
end
end
@doc """
Pushes base through the pump.
"""
def push_base() do
## TODO: change these values to the real ones
case send_and_read(:ph, "SP 1 1:4000,0") do
{:ok, _result} -> :ok
{:error, message, _description} -> {:error, message}
end
end
@doc """
Get the status of each sensor
"""
def get_sensors_status() do
case send_and_read(:temp, "GS") do
{:ok, result} -> parse_sensors_status(result)
{:error, message, _description} -> {:error, message}
end
end
@doc """
Fetches all readings from the SerialMonitor and parses them.
"""
def get_readings do
with {:ok, arduino_readings} <- SerialMonitor.get_readings(@arduino_gs),
{:ok, temp} <- parse_reading(arduino_readings[:temp]),
{:ok, ph} <- parse_reading(arduino_readings[:ph])
do
IO.puts "~~~~~~~~~~~~~~~~~~~~~"
IO.puts "~~Temp is: #{temp}~~~"
IO.puts "~~Ph is: #{ph}~~~~~~~"
IO.puts "~~~~~~~~~~~~~~~~~~~~~"
{:ok, %{temp: temp, ph: ph}}
else
{:error, message} ->
{:error, "Hubo un error al obtener las lecturas: #{message}"}
_ ->
{:error, "Error inesperado, por favor revise la conexión con la placa."}
end
end
@doc """
Sends a command to a specific sensor.
`sensor` should be one of the previously registered sensors.
Example: `send_command(:temp, "GT")`
returns:
* {:ok, result}
* {:error, message}
"""
def send_command(sensor, command) do
with {:ok, gs_name} <- gs_name_for_sensor(sensor),
{:ok, result} <- SerialMonitor.send_command(gs_name, command)
do
{:ok, result}
else
{:error, message} ->
{:error, "Error al enviar instrucción.", message}
:error ->
{:error, "Ningún sensor concuerda con el puerto."}
end
end
@doc """
Sends a command to a specific sensor and reads the response.
`sensor` should be one of the previously registered sensors.
Example: `send_and_read(:temp, "GT")`
returns:
* {:ok, result}
* {:error, message}
"""
def send_and_read(sensor, command) do
with {:ok, gs_name} <- gs_name_for_sensor(sensor),
{:ok, result} <- SerialMonitor.send_and_read(gs_name, command)
do
{:ok, result}
else
{:error, message} ->
{:error, "Error al enviar el comando para el sensor #{sensor}.", message}
_ ->
{:error, "No hay ninguún sensor conectado para #{sensor}"}
end
end
# Processes the keyword list returned from the config file into a
# map to send to the SerialMonitor, with the following format:
#   %{
#     port: "dummy port",
#     sensors: [temp: "GT", ph: "GP"],
#     speed: 9600
#   }
defp process_specs(sensor_spec) do
%{
port: sensor_spec[:port],
speed: sensor_spec[:speed],
sensors: sensor_spec[:sensors]
}
end
defp parse_reading(reading) do
case reading do
nil -> {:error, "Could not get the reading"}
"ERROR" -> {:error, "Internal board error"}
{:error, message} -> {:error, message}
reading -> case Float.parse(reading) do
{parsed_reading, _} -> {:ok, parsed_reading}
_ -> {:error, "Hubo un error al conectarse con la placa"}
end
end
end
defp parse_sensors_status(response) do
case response do
"ERROR" -> {:error, "Error interno de la placa"}
response ->
strings = String.split(response, ",")
|> Enum.map(fn val ->
String.split(val, ":")
end)
{
:ok,
%{
pumps: strings |> Enum.at(0) |> Enum.at(1),
ph: strings |> Enum.at(1) |> Enum.at(1),
temp: strings |> Enum.at(2) |> Enum.at(1)
}
}
end
end
defp gs_name_for_sensor(sensor) do
case sensor do
:temp -> {:ok, @arduino_gs}
:ph -> {:ok, @arduino_gs}
:density -> {:ok, @arduino_gs}
_ -> :error
end
end
end
|
lib/bio_monitor/sensor_manager.ex
| 0.590425
| 0.410993
|
sensor_manager.ex
|
starcoder
|
defmodule Swarm.Distribution.Strategy do
@moduledoc """
This module implements the interface for custom distribution strategies.
The default strategy used by Swarm is a consistent hash ring implemented
via the `libring` library.
Custom strategies are expected to return a data structure or pid which will be
passed along to any functions which need to manipulate the current distribution state.
This can be either a plain data structure (as is the case with the libring-based strategy),
or a pid which your strategy module then uses to call a process in your own supervision tree.
For efficiency reasons, it is highly recommended to use plain data structures rather than a
process for storing the distribution state, because a process has the potential to become a
bottleneck otherwise. However, this is really up to the needs of your situation; just know
that you can go either way.
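As an illustration only, a minimal strategy that keeps its state in a plain
list and hashes keys over it could look like this (not the built-in ring
implementation):
defmodule MyApp.ListStrategy do
  use Swarm.Distribution.Strategy
  # Illustrative only: a naive hash-over-list strategy, ignoring weights.
  def create(), do: []
  def add_node(nodes, node), do: Enum.uniq([node | nodes])
  def add_node(nodes, node, _weight), do: add_node(nodes, node)
  def add_nodes(nodes, new_nodes) do
    Enum.reduce(new_nodes, nodes, fn
      {node, _weight}, acc -> add_node(acc, node)
      node, acc -> add_node(acc, node)
    end)
  end
  def remove_node(nodes, node), do: List.delete(nodes, node)
  def key_to_node([], _key), do: :undefined
  def key_to_node(nodes, key), do: Enum.at(nodes, :erlang.phash2(key, length(nodes)))
end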
"""
alias Swarm.Distribution.Ring, as: RingStrategy
defmacro __using__(_) do
quote do
@behaviour Swarm.Distribution.Strategy
end
end
@type reason :: String.t()
@type strategy :: term
@type weight :: pos_integer
@type nodelist :: [node() | {node(), weight}]
@type key :: term
@type t :: strategy
@callback create() :: strategy | {:error, reason}
@callback add_node(strategy, node) :: strategy | {:error, reason}
@callback add_node(strategy, node, weight) :: strategy | {:error, reason}
@callback add_nodes(strategy, nodelist) :: strategy | {:error, reason}
@callback remove_node(strategy, node) :: strategy | {:error, reason}
@callback key_to_node(strategy, key) :: node() | :undefined
def create(), do: strategy_module().create()
def create(node), do: strategy_module().add_node(create(), node)
@doc """
Adds a node to the state of the current distribution strategy.
"""
def add_node(strategy, node) do
strategy_module().add_node(strategy, node)
end
@doc """
Adds a node to the state of the current distribution strategy,
and give it a specific weighting relative to other nodes.
"""
def add_node(strategy, node, weight) do
strategy_module().add_node(strategy, node, weight)
end
@doc """
Adds a list of nodes to the state of the current distribution strategy.
The node list can be composed of both node names (atoms) or tuples containing
a node name and a weight for that node.
"""
def add_nodes(strategy, nodes) do
strategy_module().add_nodes(strategy, nodes)
end
@doc """
Removes a node from the state of the current distribution strategy.
"""
def remove_node(strategy, node) do
strategy_module().remove_node(strategy, node)
end
@doc """
Maps a key to a specific node via the current distribution strategy.
"""
def key_to_node(strategy, node) do
strategy_module().key_to_node(strategy, node)
end
defp strategy_module(), do: Application.get_env(:swarm, :distribution_strategy, RingStrategy)
end
|
lib/swarm/distribution/strategy.ex
| 0.849301
| 0.559922
|
strategy.ex
|
starcoder
|
defmodule Ecto.Type do
@moduledoc """
Defines functions and the `Ecto.Type` behaviour for implementing
custom types.
A custom type expects 4 functions to be implemented, all documented
and described below. We also provide two examples of how custom
types can be used in Ecto to augment existing types or to provide
your own types.
## Example
Imagine you want to support your id field to be looked up as a
permalink. For example, you want the following query to work:
permalink = "10-how-to-be-productive-with-elixir"
from p in Post, where: p.id == ^permalink
If `id` is an integer field, Ecto will fail in the query above
because it cannot cast the string to an integer. By using a
custom type, we can provide special casting behaviour while
still keeping the underlying Ecto type the same:
defmodule Permalink do
@behaviour Ecto.Type
def type, do: :integer
# Provide our own casting rules.
def cast(string) when is_binary(string) do
case Integer.parse(string) do
{int, _} -> {:ok, int}
:error -> :error
end
end
# We should still accept integers
def cast(integer) when is_integer(integer), do: {:ok, integer}
# Everything else is a failure though
def cast(_), do: :error
# When loading data from the database, we are guaranteed to
# receive an integer (as databases are strict) and we will
# just return it to be stored in the schema struct.
def load(integer) when is_integer(integer), do: {:ok, integer}
# When dumping data to the database, we *expect* an integer
# but any value could be inserted into the struct, so we need
# to guard against them.
def dump(integer) when is_integer(integer), do: {:ok, integer}
def dump(_), do: :error
end
Now we can use our new field above as our primary key type in schemas:
defmodule Post do
use Ecto.Schema
@primary_key {:id, Permalink, autogenerate: true}
schema "posts" do
...
end
end
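With this in place, casting follows our rules for both forms (a sketch of
the expected results):
Ecto.Type.cast(Permalink, "10-how-to-be-productive-with-elixir")
#=> {:ok, 10}
Ecto.Type.cast(Permalink, 13)
#=> {:ok, 13}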
"""
import Kernel, except: [match?: 2]
@typedoc "An Ecto type, primitive or custom."
@type t :: primitive | custom
@typedoc "Primitive Ecto types (handled by Ecto)."
@type primitive :: base | composite
@typedoc "Custom types are represented by user-defined modules."
@type custom :: atom
@typep base :: :integer | :float | :boolean | :string | :map |
:binary | :decimal | :id | :binary_id |
:utc_datetime | :naive_datetime | :date | :time | :any
@typep composite :: {:array, t} | {:map, t} | {:embed, Ecto.Embedded.t} | {:in, t}
@base ~w(integer float boolean string binary decimal datetime utc_datetime naive_datetime date time id binary_id map any)a
@composite ~w(array map in embed)a
@doc """
Returns the underlying schema type for the custom type.
For example, if you want to provide your own date
structures, the type function should return `:date`.
Note this function is not required to return Ecto primitive
types, the type is only required to be known by the adapter.
"""
@callback type :: t
@doc """
Casts the given input to the custom type.
This callback is called on external input and can return any type,
as long as the `dump/1` function is able to convert the returned
value back into an Ecto native type. There are two situations where
this callback is called:
1. When casting values by `Ecto.Changeset`
2. When passing arguments to `Ecto.Query`
"""
@callback cast(term) :: {:ok, term} | :error
@doc """
Loads the given term into a custom type.
This callback is called when loading data from the database and
receives an Ecto native type. It can return any type, as long as
the `dump/1` function is able to convert the returned value back
into an Ecto native type.
"""
@callback load(term) :: {:ok, term} | :error
@doc """
Dumps the given term into an Ecto native type.
This callback is called with any term that was stored in the struct
and it needs to validate it and convert it to an Ecto native type.
"""
@callback dump(term) :: {:ok, term} | :error
## Functions
@doc """
Checks if we have a primitive type.
iex> primitive?(:string)
true
iex> primitive?(Another)
false
iex> primitive?({:array, :string})
true
iex> primitive?({:array, Another})
true
"""
@spec primitive?(t) :: boolean
def primitive?({composite, _}) when composite in @composite, do: true
def primitive?(base) when base in @base, do: true
def primitive?(_), do: false
@doc """
Checks if the given atom can be used as composite type.
iex> composite?(:array)
true
iex> composite?(:string)
false
"""
@spec composite?(atom) :: boolean
def composite?(atom), do: atom in @composite
@doc """
Checks if the given atom can be used as base type.
iex> base?(:string)
true
iex> base?(:array)
false
iex> base?(Custom)
false
"""
@spec base?(atom) :: boolean
def base?(atom), do: atom in @base
@doc """
Retrieves the underlying schema type for the given, possibly custom, type.
iex> type(:string)
:string
iex> type(Ecto.UUID)
:uuid
iex> type({:array, :string})
{:array, :string}
iex> type({:array, Ecto.UUID})
{:array, :uuid}
iex> type({:map, Ecto.UUID})
{:map, :uuid}
"""
@spec type(t) :: t
def type(type)
def type({:array, type}), do: {:array, type(type)}
def type({:map, type}), do: {:map, type(type)}
def type(type) do
if primitive?(type) do
type
else
type.type
end
end
@doc """
Checks if a given type matches with a primitive type
that can be found in queries.
iex> match?(:string, :any)
true
iex> match?(:any, :string)
true
iex> match?(:string, :string)
true
iex> match?({:array, :string}, {:array, :any})
true
iex> match?(Ecto.UUID, :uuid)
true
iex> match?(Ecto.UUID, :string)
false
"""
@spec match?(t, primitive) :: boolean
def match?(schema_type, query_type) do
if primitive?(schema_type) do
do_match?(schema_type, query_type)
else
do_match?(schema_type.type, query_type)
end
end
defp do_match?(_left, :any), do: true
defp do_match?(:any, _right), do: true
defp do_match?({outer, left}, {outer, right}), do: match?(left, right)
defp do_match?({:array, :any}, {:embed, %{cardinality: :many}}), do: true
defp do_match?(:decimal, type) when type in [:float, :integer], do: true
defp do_match?(:binary_id, :binary), do: true
defp do_match?(:id, :integer), do: true
defp do_match?(type, type), do: true
defp do_match?(_, _), do: false
@doc """
Dumps a value to the given type.
Opposite to casting, dumping requires the returned value
to be a valid Ecto type, as it will be sent to the
underlying data store.
iex> dump(:string, nil)
{:ok, nil}
iex> dump(:string, "foo")
{:ok, "foo"}
iex> dump(:integer, 1)
{:ok, 1}
iex> dump(:integer, "10")
:error
iex> dump(:binary, "foo")
{:ok, "foo"}
iex> dump(:binary, 1)
:error
iex> dump({:array, :integer}, [1, 2, 3])
{:ok, [1, 2, 3]}
iex> dump({:array, :integer}, [1, "2", 3])
:error
iex> dump({:array, :binary}, ["1", "2", "3"])
{:ok, ["1", "2", "3"]}
A `dumper` function may be given for handling recursive types.
"""
@spec dump(t, term, (t, term -> {:ok, term} | :error)) :: {:ok, term} | :error
def dump(type, value, dumper \\ &dump/2)
def dump(_type, nil, _dumper) do
{:ok, nil}
end
def dump(:binary_id, value, _dumper) when is_binary(value) do
{:ok, value}
end
def dump(:any, value, _dumper) do
Ecto.DataType.dump(value)
end
def dump({:embed, embed}, value, dumper) do
dump_embed(embed, value, dumper)
end
def dump({:array, type}, value, dumper) when is_list(value) do
array(value, type, dumper, [])
end
def dump({:map, type}, value, dumper) when is_map(value) do
map(Map.to_list(value), type, dumper, %{})
end
def dump({:in, type}, value, dumper) do
case dump({:array, type}, value, dumper) do
{:ok, v} -> {:ok, {:in, v}}
:error -> :error
end
end
def dump(:decimal, term, _dumper) when is_number(term) do
{:ok, Decimal.new(term)}
end
def dump(:date, term, _dumper) do
dump_date(term)
end
def dump(:time, term, _dumper) do
dump_time(term)
end
def dump(:naive_datetime, term, _dumper) do
dump_naive_datetime(term)
end
def dump(:utc_datetime, term, _dumper) do
dump_utc_datetime(term)
end
def dump(type, value, _dumper) do
cond do
not primitive?(type) ->
type.dump(value)
of_base_type?(type, value) ->
{:ok, value}
true ->
:error
end
end
defp dump_embed(%{cardinality: :one, related: schema, field: field},
value, fun) when is_map(value) do
{:ok, dump_embed(field, schema, value, schema.__schema__(:types), fun)}
end
defp dump_embed(%{cardinality: :many, related: schema, field: field},
value, fun) when is_list(value) do
types = schema.__schema__(:types)
{:ok, Enum.map(value, &dump_embed(field, schema, &1, types, fun))}
end
defp dump_embed(_embed, _value, _fun) do
:error
end
defp dump_embed(_field, schema, %{__struct__: schema} = struct, types, dumper) do
Enum.reduce(types, %{}, fn {field, type}, acc ->
value = Map.get(struct, field)
case dumper.(type, value) do
{:ok, value} -> Map.put(acc, field, value)
:error -> raise ArgumentError, "cannot dump `#{inspect value}` as type #{inspect type}"
end
end)
end
defp dump_embed(field, _schema, value, _types, _fun) do
raise ArgumentError, "cannot dump embed `#{field}`, invalid value: #{inspect value}"
end
@doc """
Loads a value with the given type.
iex> load(:string, nil)
{:ok, nil}
iex> load(:string, "foo")
{:ok, "foo"}
iex> load(:integer, 1)
{:ok, 1}
iex> load(:integer, "10")
:error
A `loader` function may be given for handling recursive types.
"""
@spec load(t, term, (t, term -> {:ok, term} | :error)) :: {:ok, term} | :error
def load(type, value, loader \\ &load/2)
def load({:embed, embed}, value, loader) do
load_embed(embed, value, loader)
end
def load(_type, nil, _loader), do: {:ok, nil}
def load(:binary_id, value, _loader) when is_binary(value) do
{:ok, value}
end
def load({:array, type}, value, loader) when is_list(value) do
array(value, type, loader, [])
end
def load({:map, type}, value, loader) when is_map(value) do
map(Map.to_list(value), type, loader, %{})
end
def load(:date, term, _loader) do
load_date(term)
end
def load(:time, term, _loader) do
load_time(term)
end
def load(:naive_datetime, term, _loader) do
load_naive_datetime(term)
end
def load(:utc_datetime, term, _loader) do
load_utc_datetime(term)
end
def load(type, value, _loader) do
cond do
not primitive?(type) ->
type.load(value)
of_base_type?(type, value) ->
{:ok, value}
true ->
:error
end
end
defp load_embed(%{cardinality: :one}, nil, _fun), do: {:ok, nil}
defp load_embed(%{cardinality: :one, related: schema, field: field},
value, fun) when is_map(value) do
{:ok, load_embed(field, schema, value, fun)}
end
defp load_embed(%{cardinality: :many}, nil, _fun), do: {:ok, []}
defp load_embed(%{cardinality: :many, related: schema, field: field},
value, fun) when is_list(value) do
{:ok, Enum.map(value, &load_embed(field, schema, &1, fun))}
end
defp load_embed(_embed, _value, _fun) do
:error
end
defp load_embed(_field, schema, value, loader) when is_map(value) do
Ecto.Schema.__load__(schema, nil, nil, nil, value, loader)
end
defp load_embed(field, _schema, value, _fun) do
raise ArgumentError, "cannot load embed `#{field}`, invalid value: #{inspect value}"
end
@doc """
Casts a value to the given type.
`cast/2` is used by the finder queries and changesets
to cast outside values to specific types.
Note that nil can be cast to all primitive types as data
stores allow nil to be set on any column.
iex> cast(:any, "whatever")
{:ok, "whatever"}
iex> cast(:any, nil)
{:ok, nil}
iex> cast(:string, nil)
{:ok, nil}
iex> cast(:integer, 1)
{:ok, 1}
iex> cast(:integer, "1")
{:ok, 1}
iex> cast(:integer, "1.0")
:error
iex> cast(:id, 1)
{:ok, 1}
iex> cast(:id, "1")
{:ok, 1}
iex> cast(:id, "1.0")
:error
iex> cast(:float, 1.0)
{:ok, 1.0}
iex> cast(:float, 1)
{:ok, 1.0}
iex> cast(:float, "1")
{:ok, 1.0}
iex> cast(:float, "1.0")
{:ok, 1.0}
iex> cast(:float, "1-foo")
:error
iex> cast(:boolean, true)
{:ok, true}
iex> cast(:boolean, false)
{:ok, false}
iex> cast(:boolean, "1")
{:ok, true}
iex> cast(:boolean, "0")
{:ok, false}
iex> cast(:boolean, "whatever")
:error
iex> cast(:string, "beef")
{:ok, "beef"}
iex> cast(:binary, "beef")
{:ok, "beef"}
iex> cast(:decimal, Decimal.new(1.0))
{:ok, Decimal.new(1.0)}
iex> cast(:decimal, Decimal.new("1.0"))
{:ok, Decimal.new(1.0)}
iex> cast({:array, :integer}, [1, 2, 3])
{:ok, [1, 2, 3]}
iex> cast({:array, :integer}, ["1", "2", "3"])
{:ok, [1, 2, 3]}
iex> cast({:array, :string}, [1, 2, 3])
:error
iex> cast(:string, [1, 2, 3])
:error
"""
@spec cast(t, term) :: {:ok, term} | :error
def cast({:embed, type}, value) do
cast_embed(type, value)
end
def cast(_type, nil), do: {:ok, nil}
def cast(:binary_id, value) when is_binary(value) do
{:ok, value}
end
def cast({:array, type}, term) when is_list(term) do
array(term, type, &cast/2, [])
end
def cast({:map, type}, term) when is_map(term) do
map(Map.to_list(term), type, &cast/2, %{})
end
def cast({:in, type}, term) when is_list(term) do
array(term, type, &cast/2, [])
end
def cast(:float, term) when is_binary(term) do
case Float.parse(term) do
{float, ""} -> {:ok, float}
_ -> :error
end
end
def cast(:float, term) when is_integer(term), do: {:ok, term + 0.0}
def cast(:boolean, term) when term in ~w(true 1), do: {:ok, true}
def cast(:boolean, term) when term in ~w(false 0), do: {:ok, false}
def cast(:decimal, term) when is_binary(term) do
Decimal.parse(term)
end
def cast(:decimal, term) when is_number(term) do
{:ok, Decimal.new(term)}
end
def cast(:date, term) do
cast_date(term)
end
def cast(:time, term) do
cast_time(term)
end
def cast(:naive_datetime, term) do
cast_naive_datetime(term)
end
def cast(:utc_datetime, term) do
cast_utc_datetime(term)
end
def cast(type, term) when type in [:id, :integer] and is_binary(term) do
case Integer.parse(term) do
{int, ""} -> {:ok, int}
_ -> :error
end
end
def cast(type, term) do
cond do
not primitive?(type) ->
type.cast(term)
of_base_type?(type, term) ->
{:ok, term}
true ->
:error
end
end
defp cast_embed(%{cardinality: :one}, nil), do: {:ok, nil}
defp cast_embed(%{cardinality: :one, related: schema}, %{__struct__: schema} = struct) do
{:ok, struct}
end
defp cast_embed(%{cardinality: :many}, nil), do: {:ok, []}
defp cast_embed(%{cardinality: :many, related: schema}, value) when is_list(value) do
if Enum.all?(value, &Kernel.match?(%{__struct__: ^schema}, &1)) do
{:ok, value}
else
:error
end
end
defp cast_embed(_embed, _value) do
:error
end
## Adapter related
@doc false
def adapter_load(_adapter, type, nil) do
load(type, nil)
end
def adapter_load(adapter, type, value) do
if of_base_type?(type, value) do
{:ok, value}
else
do_adapter_load(adapter.loaders(type(type), type), {:ok, value}, adapter)
end
end
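# `adapter.loaders/2` returns a pipeline mixing functions and types: each
# function is applied to the value directly, while each type is loaded
# recursively, threading the `{:ok, value}` accumulator through the list.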
defp do_adapter_load(_, :error, _adapter),
do: :error
defp do_adapter_load([fun|t], {:ok, value}, adapter) when is_function(fun),
do: do_adapter_load(t, fun.(value), adapter)
defp do_adapter_load([type|t], {:ok, value}, adapter),
do: do_adapter_load(t, load(type, value, &adapter_load(adapter, &1, &2)), adapter)
defp do_adapter_load([], {:ok, _} = acc, _adapter),
do: acc
@doc false
def adapter_dump(_adapter, type, nil),
do: dump(type, nil)
def adapter_dump(adapter, type, value),
do: do_adapter_dump(adapter.dumpers(type(type), type), {:ok, value}, adapter)
defp do_adapter_dump(_, :error, _adapter),
do: :error
defp do_adapter_dump([fun|t], {:ok, value}, adapter) when is_function(fun),
do: do_adapter_dump(t, fun.(value), adapter)
defp do_adapter_dump([type|t], {:ok, value}, adapter),
do: do_adapter_dump(t, dump(type, value, &adapter_dump(adapter, &1, &2)), adapter)
defp do_adapter_dump([], {:ok, _} = acc, _adapter),
do: acc
## Date
defp cast_date(binary) when is_binary(binary) do
case Date.from_iso8601(binary) do
{:ok, _} = ok -> ok
{:error, _} -> :error
end
end
defp cast_date(%{__struct__: _} = struct),
do: {:ok, struct}
defp cast_date(%{"year" => empty, "month" => empty, "day" => empty}) when empty in ["", nil],
do: {:ok, nil}
defp cast_date(%{year: empty, month: empty, day: empty}) when empty in ["", nil],
do: {:ok, nil}
defp cast_date(%{"year" => year, "month" => month, "day" => day}),
do: cast_date(to_i(year), to_i(month), to_i(day))
defp cast_date(%{year: year, month: month, day: day}),
do: cast_date(to_i(year), to_i(month), to_i(day))
defp cast_date(_),
do: :error
defp cast_date(year, month, day) when is_integer(year) and is_integer(month) and is_integer(day) do
case Date.new(year, month, day) do
{:ok, _} = ok -> ok
{:error, _} -> :error
end
end
defp cast_date(_, _, _),
do: :error
defp dump_date(%Date{year: year, month: month, day: day}),
do: {:ok, {year, month, day}}
defp dump_date(%{__struct__: _} = struct),
do: Ecto.DataType.dump(struct)
defp dump_date(_),
do: :error
defp load_date({year, month, day}),
do: {:ok, %Date{year: year, month: month, day: day}}
defp load_date(_),
do: :error
## Time
defp cast_time(binary) when is_binary(binary) do
case Time.from_iso8601(binary) do
{:ok, _} = ok -> ok
{:error, _} -> :error
end
end
defp cast_time(%{__struct__: _} = struct),
do: {:ok, struct}
defp cast_time(%{"hour" => empty, "minute" => empty}) when empty in ["", nil],
do: {:ok, nil}
defp cast_time(%{hour: empty, minute: empty}) when empty in ["", nil],
do: {:ok, nil}
defp cast_time(%{"hour" => hour, "minute" => minute} = map),
do: cast_time(to_i(hour), to_i(minute), to_i(map["second"]), to_i(map["microsecond"]))
defp cast_time(%{hour: hour, minute: minute} = map),
do: cast_time(to_i(hour), to_i(minute), to_i(map[:second]), to_i(map[:microsecond]))
defp cast_time(_),
do: :error
defp cast_time(hour, minute, sec, usec)
when is_integer(hour) and is_integer(minute) and
(is_integer(sec) or is_nil(sec)) and (is_integer(usec) or is_nil(usec)) do
case Time.new(hour, minute, sec || 0, usec || {0, 0}) do
{:ok, _} = ok -> ok
{:error, _} -> :error
end
end
defp cast_time(_, _, _, _) do
:error
end
defp dump_time(%Time{hour: hour, minute: minute, second: second, microsecond: {microsecond, _}}),
do: {:ok, {hour, minute, second, microsecond}}
defp dump_time(%{__struct__: _} = struct),
do: Ecto.DataType.dump(struct)
defp dump_time(_),
do: :error
defp load_time({hour, minute, second, microsecond}),
do: {:ok, %Time{hour: hour, minute: minute, second: second, microsecond: {microsecond, 6}}}
defp load_time({hour, minute, second}),
do: {:ok, %Time{hour: hour, minute: minute, second: second}}
defp load_time(_),
do: :error
## Naive datetime
defp cast_naive_datetime(binary) when is_binary(binary) do
case NaiveDateTime.from_iso8601(binary) do
{:ok, _} = ok -> ok
{:error, _} -> :error
end
end
defp cast_naive_datetime(%{__struct__: _} = struct),
do: {:ok, struct}
defp cast_naive_datetime(%{"year" => empty, "month" => empty, "day" => empty,
"hour" => empty, "minute" => empty}) when empty in ["", nil],
do: {:ok, nil}
defp cast_naive_datetime(%{year: empty, month: empty, day: empty,
hour: empty, minute: empty}) when empty in ["", nil],
do: {:ok, nil}
defp cast_naive_datetime(%{"year" => year, "month" => month, "day" => day, "hour" => hour, "minute" => min} = map),
do: cast_naive_datetime(to_i(year), to_i(month), to_i(day),
to_i(hour), to_i(min), to_i(map["second"]), to_i(map["microsecond"]))
defp cast_naive_datetime(%{year: year, month: month, day: day, hour: hour, minute: min} = map),
do: cast_naive_datetime(to_i(year), to_i(month), to_i(day),
to_i(hour), to_i(min), to_i(map[:second]), to_i(map[:microsecond]))
defp cast_naive_datetime(_),
do: :error
defp cast_naive_datetime(year, month, day, hour, minute, sec, usec)
when is_integer(year) and is_integer(month) and is_integer(day) and
is_integer(hour) and is_integer(minute) and
(is_integer(sec) or is_nil(sec)) and (is_integer(usec) or is_nil(usec)) do
case NaiveDateTime.new(year, month, day, hour, minute, sec || 0, usec || {0, 0}) do
{:ok, _} = ok -> ok
{:error, _} -> :error
end
end
defp cast_naive_datetime(_, _, _, _, _, _, _) do
:error
end
defp dump_naive_datetime(%NaiveDateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second, microsecond: {microsecond, _}}),
do: {:ok, {{year, month, day}, {hour, minute, second, microsecond}}}
defp dump_naive_datetime(%{__struct__: _} = struct),
do: Ecto.DataType.dump(struct)
defp dump_naive_datetime(_),
do: :error
defp load_naive_datetime({{year, month, day}, {hour, minute, second, microsecond}}),
do: {:ok, %NaiveDateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second, microsecond: {microsecond, 6}}}
defp load_naive_datetime({{year, month, day}, {hour, minute, second}}),
do: {:ok, %NaiveDateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second}}
defp load_naive_datetime(_),
do: :error
## UTC datetime
defp cast_utc_datetime(value) do
case cast_naive_datetime(value) do
{:ok, %NaiveDateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second, microsecond: microsecond}} ->
{:ok, %DateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second, microsecond: microsecond,
std_offset: 0, utc_offset: 0, zone_abbr: "UTC", time_zone: "Etc/UTC"}}
{:ok, _} = ok ->
ok
:error ->
:error
end
end
defp dump_utc_datetime(%DateTime{year: year, month: month, day: day, time_zone: "Etc/UTC",
hour: hour, minute: minute, second: second, microsecond: {microsecond, _}}),
do: {:ok, {{year, month, day}, {hour, minute, second, microsecond}}}
defp dump_utc_datetime(%{__struct__: _} = struct),
do: Ecto.DataType.dump(struct)
defp dump_utc_datetime(_),
do: :error
defp load_utc_datetime({{year, month, day}, {hour, minute, second, microsecond}}),
do: {:ok, %DateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second, microsecond: {microsecond, 6},
std_offset: 0, utc_offset: 0, zone_abbr: "UTC", time_zone: "Etc/UTC"}}
defp load_utc_datetime({{year, month, day}, {hour, minute, second}}),
do: {:ok, %DateTime{year: year, month: month, day: day,
hour: hour, minute: minute, second: second,
std_offset: 0, utc_offset: 0, zone_abbr: "UTC", time_zone: "Etc/UTC"}}
defp load_utc_datetime(_),
do: :error
## Helpers
# Checks if a value is of the given primitive type.
defp of_base_type?(:any, _), do: true
defp of_base_type?(:id, term), do: is_integer(term)
defp of_base_type?(:float, term), do: is_float(term)
defp of_base_type?(:integer, term), do: is_integer(term)
defp of_base_type?(:boolean, term), do: is_boolean(term)
defp of_base_type?(:binary, term), do: is_binary(term)
defp of_base_type?(:string, term), do: is_binary(term)
defp of_base_type?(:map, term), do: is_map(term) and not Map.has_key?(term, :__struct__)
defp of_base_type?(:decimal, value), do: Kernel.match?(%{__struct__: Decimal}, value)
defp of_base_type?(_, _), do: false
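# Applies `fun` to every element, short-circuiting on the first `:error`.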
defp array([h|t], type, fun, acc) do
case fun.(type, h) do
{:ok, h} -> array(t, type, fun, [h|acc])
:error -> :error
end
end
defp array([], _type, _fun, acc) do
{:ok, Enum.reverse(acc)}
end
defp map([{key, value} | t], type, fun, acc) do
case fun.(type, value) do
{:ok, value} -> map(t, type, fun, Map.put(acc, key, value))
:error -> :error
end
end
defp map([], _type, _fun, acc) do
{:ok, acc}
end
defp map(_, _, _, _), do: :error
defp to_i(nil), do: nil
defp to_i(int) when is_integer(int), do: int
defp to_i(bin) when is_binary(bin) do
case Integer.parse(bin) do
{int, ""} -> int
_ -> nil
end
end
end
|
data/web/deps/ecto/lib/ecto/type.ex
| 0.904427
| 0.641493
|
type.ex
|
starcoder
|
defmodule Crutches.Format.List do
alias Crutches.Option
@moduledoc ~S"""
Formatting helper functions for lists.
This module contains various helper functions that should be of use to you
when writing user interfaces or other parts of your application that have
to deal with list formatting.
Simply call the desired function with any relevant options that you may need.
"""
@doc ~S"""
Converts the list to a comma-separated sentence where the last element is
joined by the connector word.
You can pass the following options to change the default behavior. If you
pass an option key that doesn't exist in the list below, it will raise an
<tt>ArgumentError</tt>.
## Options
* <tt>:words_connector</tt> - The sign or word used to join the elements
in arrays with two or more elements (default: ", ").
* <tt>:two_words_connector</tt> - The sign or word used to join the elements
in arrays with two elements (default: " and ").
* <tt>:last_word_connector</tt> - The sign or word used to join the last element
in arrays with three or more elements (default: ", and ").
* <tt>:locale</tt> - If +i18n+ is available, you can set a locale and use
the connector options defined on the 'support.array' namespace in the
corresponding dictionary file.
## Examples
iex> List.as_sentence([])
""
iex> List.as_sentence(["one"])
"one"
iex> List.as_sentence(["one", "two"])
"one and two"
iex> List.as_sentence(["one", "two", "three"])
"one, two, and three"
iex> List.as_sentence(["one", "two"], passing: "invalid option")
** (ArgumentError) invalid key passing
iex> List.as_sentence(["one", "two"], two_words_connector: "-")
"one-two"
iex> List.as_sentence(["one", "two", "three"], words_connector: " or ", last_word_connector: " or at least ")
"one or two or at least three"
"""
@as_sentence [
valid: ~w(words_connector two_words_connector last_word_connector)a,
defaults: [
words_connector: ", ",
two_words_connector: " and ",
last_word_connector: ", and "
]
]
@spec as_sentence(list(any)) :: String.t
def as_sentence(words, opts \\ @as_sentence[:defaults])
def as_sentence([], _), do: ""
def as_sentence([word], _), do: "#{word}"
def as_sentence([first, last], opts) do
opts = Option.combine!(opts, @as_sentence)
first <> opts[:two_words_connector] <> last
end
def as_sentence(words, opts) when is_list(words) do
opts = Option.combine!(opts, @as_sentence)
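# `Crutches.List.shorten/1` drops the last element, so we join the leading
# words first and then attach the final word with the :last_word_connector.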
init =
case Crutches.List.shorten(words) do
{:ok, shortened_list} -> Enum.join(shortened_list, opts[:words_connector])
_ -> []
end
last = List.last(words)
init <> opts[:last_word_connector] <> last
end
end
|
lib/crutches/format/list.ex
| 0.816662
| 0.519217
|
list.ex
|
starcoder
|
defmodule LivebookCLI.Server do
@moduledoc false
@behaviour LivebookCLI.Task
@external_resource "README.md"
[_, environment_variables, _] =
"README.md"
|> File.read!()
|> String.split("<!-- Environment variables -->")
@environment_variables String.trim(environment_variables)
@impl true
def usage() do
"""
Usage: livebook server [options] [open-command]
An optional open-command can be given as an argument. It will open
up a browser window according to these rules:
* If the open-command is "new", the browser window will point
to a new notebook
* If the open-command is a URL, the notebook at the given URL
will be imported
* If the open-command is a directory, the browser window will point
to the home page with the directory selected
* If the open-command is a notebook file, the browser window will point
to the opened notebook
The open-command runs after the server is started. If a server is
already running, the browser window will point to the server
currently running.
## Available options
--cookie Sets a cookie for the app distributed node
--data-path The directory to store Livebook configuration,
defaults to "livebook" under the default user data directory
--default-runtime Sets the runtime type that is used by default when none is started
explicitly for the given notebook, defaults to standalone
Supported options:
* standalone - Elixir standalone
* mix[:PATH][:FLAGS] - Mix standalone
* attached:NODE:COOKIE - Attached
* embedded - Embedded
--home The home path for the Livebook instance
--ip The ip address to start the web application on, defaults to 127.0.0.1
Must be a valid IPv4 or IPv6 address
--name Set a name for the app distributed node
--no-token Disable token authentication, enabled by default
If LIVEBOOK_PASSWORD is set, it takes precedence over token auth
-p, --port The port to start the web application on, defaults to 8080
--sname Set a short name for the app distributed node
The --help option can be given to print this notice.
## Environment variables
#{@environment_variables}
## Examples
Starts a server:
livebook server
Starts a server and opens up a browser at a new notebook:
livebook server new
Starts a server and imports the notebook at the given URL:
livebook server https://example.com/my-notebook.livemd
"""
end
@impl true
def call(args) do
{opts, extra_args} = args_to_options(args)
config_entries = opts_to_config(opts, [])
put_config_entries(config_entries)
case Livebook.Config.port() do
0 ->
# When a random port is configured, we can assume no collision
start_server(extra_args)
port ->
base_url = "http://localhost:#{port}"
case check_endpoint_availability(base_url) do
:livebook_running ->
IO.puts("Livebook already running on #{base_url}")
open_from_args(base_url, extra_args)
:taken ->
print_error(
"Another application is already running on port #{port}." <>
" Either ensure this port is free or specify a different port using the --port option"
)
:available ->
start_server(extra_args)
end
end
end
defp start_server(extra_args) do
# We configure the endpoint with `server: true`,
# so it's gonna start listening
case Application.ensure_all_started(:livebook) do
{:ok, _} ->
open_from_args(LivebookWeb.Endpoint.access_url(), extra_args)
Process.sleep(:infinity)
{:error, error} ->
print_error("Livebook failed to start with reason: #{inspect(error)}")
end
end
# Takes a list of {app, key, value} config entries
# and overrides the current applications' configuration accordingly.
# Multiple values for the same key are deeply merged (provided they are keyword lists).
defp put_config_entries(config_entries) do
config_entries
|> Enum.reduce([], fn {app, key, value}, acc ->
acc = Keyword.put_new_lazy(acc, app, fn -> Application.get_all_env(app) end)
Config.Reader.merge(acc, [{app, [{key, value}]}])
end)
|> Application.put_all_env(persistent: true)
end
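# For example (illustrative values), two entries under the same key are
# deep-merged, so both settings survive:
#
#   put_config_entries([
#     {:livebook, LivebookWeb.Endpoint, http: [port: 4001]},
#     {:livebook, LivebookWeb.Endpoint, http: [ip: {127, 0, 0, 1}]}
#   ])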
defp check_endpoint_availability(base_url) do
Application.ensure_all_started(:inets)
health_url = set_path(base_url, "/public/health")
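# A Livebook health response means Livebook already owns the port; any other
# successful response means another app took it; a request error means the
# port is free.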
case Livebook.Utils.HTTP.request(:get, health_url) do
{:ok, status, _headers, body} ->
with 200 <- status,
{:ok, body} <- Jason.decode(body),
%{"application" => "livebook"} <- body do
:livebook_running
else
_ -> :taken
end
{:error, _error} ->
:available
end
end
defp open_from_args(_base_url, []) do
:ok
end
defp open_from_args(base_url, ["new"]) do
base_url
|> set_path("/explore/notebooks/new")
|> Livebook.Utils.browser_open()
end
defp open_from_args(base_url, [url_or_file_or_dir]) do
url = URI.parse(url_or_file_or_dir)
path = Path.expand(url_or_file_or_dir)
cond do
url.scheme in ~w(http https file) ->
base_url
|> Livebook.Utils.notebook_import_url(url_or_file_or_dir)
|> Livebook.Utils.browser_open()
File.regular?(path) ->
base_url
|> Livebook.Utils.notebook_open_url(url_or_file_or_dir)
|> Livebook.Utils.browser_open()
File.dir?(path) ->
base_url
|> update_query(%{"path" => path})
|> Livebook.Utils.browser_open()
true ->
Livebook.Utils.browser_open(base_url)
end
end
defp open_from_args(_base_url, _extra_args) do
print_error(
"Too many arguments entered. Ensure only one argument is used to specify the file path and all other arguments are preceded by the relevant switch"
)
end
@switches [
data_path: :string,
cookie: :string,
default_runtime: :string,
ip: :string,
name: :string,
port: :integer,
home: :string,
sname: :string,
token: :boolean
]
@aliases [
p: :port
]
defp args_to_options(args) do
{opts, extra_args} = OptionParser.parse!(args, strict: @switches, aliases: @aliases)
validate_options!(opts)
{opts, extra_args}
end
defp validate_options!(opts) do
if Keyword.has_key?(opts, :name) and Keyword.has_key?(opts, :sname) do
raise "the provided --sname and --name options are mutually exclusive, please specify only one of them"
end
end
defp opts_to_config([], config), do: config
defp opts_to_config([{:token, false} | opts], config) do
if Livebook.Config.auth_mode() == :token do
opts_to_config(opts, [{:livebook, :authentication_mode, :disabled} | config])
else
opts_to_config(opts, config)
end
end
defp opts_to_config([{:port, port} | opts], config) do
opts_to_config(opts, [{:livebook, LivebookWeb.Endpoint, http: [port: port]} | config])
end
defp opts_to_config([{:ip, ip} | opts], config) do
ip = Livebook.Config.ip!("--ip", ip)
opts_to_config(opts, [{:livebook, LivebookWeb.Endpoint, http: [ip: ip]} | config])
end
defp opts_to_config([{:home, home} | opts], config) do
home = Livebook.Config.writable_dir!("--home", home)
opts_to_config(opts, [{:livebook, :home, home} | config])
end
defp opts_to_config([{:sname, sname} | opts], config) do
sname = String.to_atom(sname)
opts_to_config(opts, [{:livebook, :node, {:shortnames, sname}} | config])
end
defp opts_to_config([{:name, name} | opts], config) do
name = String.to_atom(name)
opts_to_config(opts, [{:livebook, :node, {:longnames, name}} | config])
end
defp opts_to_config([{:cookie, cookie} | opts], config) do
cookie = String.to_atom(cookie)
opts_to_config(opts, [{:livebook, :cookie, cookie} | config])
end
defp opts_to_config([{:default_runtime, default_runtime} | opts], config) do
default_runtime = Livebook.Config.default_runtime!("--default-runtime", default_runtime)
opts_to_config(opts, [{:livebook, :default_runtime, default_runtime} | config])
end
defp opts_to_config([{:data_path, path} | opts], config) do
data_path = Livebook.Config.writable_dir!("--data-path", path)
opts_to_config(opts, [{:livebook, :data_path, data_path} | config])
end
defp opts_to_config([_opt | opts], config), do: opts_to_config(opts, config)
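# For example (illustrative values), the args ["--port", "4001", "--sname", "livebook"]
# end up as config entries like (order aside):
#
#   [{:livebook, :node, {:shortnames, :livebook}},
#    {:livebook, LivebookWeb.Endpoint, http: [port: 4001]}]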
defp set_path(url, path) do
url
|> URI.parse()
|> Map.put(:path, path)
|> URI.to_string()
end
defp update_query(url, params) do
url
|> URI.parse()
|> Livebook.Utils.append_query(URI.encode_query(params))
|> URI.to_string()
end
defp print_error(message) do
IO.ANSI.format([:red, message]) |> IO.puts()
end
end
|
lib/livebook_cli/server.ex
| 0.757615
| 0.402216
|
server.ex
|
starcoder
|
defmodule Harnais.Runner.Cridar do
@moduledoc ~S"""
The *cridar* manages a test call.
Each test call specification is used to create a *cridar*.
See `Harnais.Runner` for the overview.
## Cridar State
A *cridar* has the following fields:
| Key | Aliases |
| :---------------- | -------------------: |
| `:module` | *:m, :d, :mod, :test_mod, :test_module* |
| `:fun` | *:f, :function, :test_fun, :test_function* |
| `:args` | *:a, :test_args* |
| `:rest_args` | *:ra, :test_rest_args* |
The default for all fields is *the unset value* (`Plymio.Fontais.Guard.the_unset_value/0`).
### Cridar Field: `:module`
The `:module` holds the name of the module to be used in an MFA apply (`Kernel.apply/3`).
### Cridar Field: `:fun`
The `:fun` can hold either an atom to be used in an MFA apply, or a function.
### Cridar Field: `:args`
If set, the `:args` holds *all* of the arguments for a call.
### Cridar Field: `:rest_args`
If set, the `:rest_args` is intended to be used together with other arguments.
"""
require Plymio.Codi, as: CODI
require Plymio.Fontais.Option
require Harnais.Runner.Utility.Macro, as: HUM
alias Harnais.Utility, as: HUU
use Plymio.Fontais.Attribute
use Plymio.Codi.Attribute
use Harnais.Runner.Attribute
@codi_opts [
{@plymio_codi_key_vekil, Harnais.Runner.Codi.__vekil__()}
]
import Harnais.Error,
only: [
new_error_result: 1
],
warn: false
import Plymio.Fontais.Guard,
only: [
is_value_set: 1
]
import Plymio.Fontais.Option,
only: [
opts_create_aliases_dict: 1,
opts_canonical_keys: 2
]
@harnais_runner_cridar_kvs_aliases [
@harnais_runner_alias_cridar_module,
@harnais_runner_alias_cridar_fun,
@harnais_runner_alias_cridar_args,
@harnais_runner_alias_cridar_rest_args
]
@harnais_runner_cridar_keys_aliases @harnais_runner_cridar_kvs_aliases |> Keyword.keys()
@harnais_runner_cridar_dict_aliases @harnais_runner_cridar_kvs_aliases
|> opts_create_aliases_dict
@doc false
def update_canonical_opts(opts, dict \\ @harnais_runner_cridar_dict_aliases) do
opts |> opts_canonical_keys(dict)
end
@doc false
def update_sorted_canonical_opts(opts, dict \\ @harnais_runner_cridar_dict_aliases) do
with {:ok, opts} <- opts |> update_canonical_opts(dict) do
{:ok, opts |> HUU.opts_sort_keys(@harnais_runner_cridar_keys_aliases)}
else
{:error, %{__exception__: true}} = result -> result
end
end
@harnais_runner_cridar_defstruct @harnais_runner_cridar_kvs_aliases
|> Enum.map(fn {k, _v} ->
{k, @plymio_fontais_the_unset_value}
end)
defstruct @harnais_runner_cridar_defstruct
@type t :: %__MODULE__{}
@type opts :: Harnais.opts()
@type kv :: Harnais.kv()
@type error :: Harnais.error()
@doc false
HUM.def_struct_get()
@doc false
HUM.def_struct_fetch()
@doc false
HUM.def_struct_put()
@doc false
HUM.def_struct_delete()
struct_accessor_spec_default = %{funs: [:get, :fetch, :put, :maybe_put]}
struct_accessor_specs =
@harnais_runner_cridar_defstruct
|> Enum.map(fn {name, _} -> {name, struct_accessor_spec_default} end)
[
specs: struct_accessor_specs,
namer: fn name, fun -> ["cridar_", name, "_", fun] |> Enum.join() |> String.to_atom() end
]
|> HUM.custom_struct_accessors()
[
# updates
{@plymio_codi_pattern_proxy_put,
[
state_def_new_since: quote(do: @since("0.1.0")),
state_def_new_since!: quote(do: @since("0.1.0")),
state_def_update_since: quote(do: @since("0.1.0")),
state_def_update_since!: quote(do: @since("0.1.0"))
]},
{@plymio_codi_pattern_proxy_fetch,
[
{@plymio_codi_key_proxy_name, :state_def_update},
{@plymio_codi_key_forms_edit,
[
{@plymio_fontais_key_rename_funs,
[update_canonical_opts: :update_sorted_canonical_opts]}
]}
]},
{@plymio_codi_pattern_proxy_fetch,
[
:state_def_new,
:state_def_new!,
:state_def_update!,
:state_defp_update_field_header
]},
{@plymio_codi_pattern_proxy_fetch,
[
{@plymio_codi_key_proxy_name, :state_defp_update_proxy_field_atom},
{@plymio_fontais_key_forms_edit,
[{@plymio_fontais_key_rename_atoms, [proxy_field: @harnais_runner_key_cridar_module]}]}
]},
{@plymio_codi_pattern_proxy_fetch,
[
{@plymio_codi_key_proxy_name, :state_defp_update_proxy_field_passthru},
{@plymio_fontais_key_forms_edit,
[{@plymio_fontais_key_rename_atoms, [proxy_field: @harnais_runner_key_cridar_fun]}]}
]},
{@plymio_codi_pattern_proxy_fetch,
[
{@plymio_codi_key_proxy_name, :state_defp_update_proxy_field_normalise_list},
{@plymio_fontais_key_forms_edit,
[{@plymio_fontais_key_rename_atoms, [proxy_field: @harnais_runner_key_cridar_args]}]}
]},
{@plymio_codi_pattern_proxy_fetch,
[
{@plymio_codi_key_proxy_name, :state_defp_update_proxy_field_normalise_list},
{@plymio_fontais_key_forms_edit,
[{@plymio_fontais_key_rename_atoms, [proxy_field: @harnais_runner_key_cridar_rest_args]}]}
]},
{@plymio_codi_pattern_proxy_fetch, [:state_defp_update_field_unknown]}
]
|> CODI.reify_codi(@codi_opts)
@harnais_runner_cridar_defstruct_updaters @harnais_runner_cridar_defstruct
for {name, _} <- @harnais_runner_cridar_defstruct_updaters do
update_fun = "update_#{name}" |> String.to_atom()
@doc false
def unquote(update_fun)(%__MODULE__{} = state, value) do
state |> update([{unquote(name), value}])
end
end
@doc ~S"""
`call/2` takes a *cridar* and an optional *test value* and performs the test call,
returning `{:ok, {answer, cridar}}` or `{:error, error}`.
## Examples
An MFA apply. Note the `:args` are "listified" (`List.wrap/1`):
iex> {:ok, {answer, %CRIDAR{}}} = [
...> mod: Map, fun: :get, args: [%{a: 42}, :a]
...> ] |> CRIDAR.new! |> CRIDAR.call
...> answer
42
Another MFA apply, but with `:rest_args` set. The "listified" `:rest_args` are prepended
with the 2nd argument (the *test value*):
iex> {:ok, {answer, %CRIDAR{}}} = [
...> m: Map, f: :get, rest_args: [:a]
...> ] |> CRIDAR.new! |> CRIDAR.call(%{a: 42})
...> answer
42
The `:fun` is an arity 0 function so the `:args`,
`:rest_args` and *test value* are ignored:
iex> {:ok, {value, %CRIDAR{}}} = [
...> f: fn -> 123 end, ra: [:a]
...> ] |> CRIDAR.new! |> CRIDAR.call(%{a: 42})
...> value
123
An arity 1 function is called just with the *test value*:
iex> {:ok, {value, %CRIDAR{}}} = [
...> fun: fn value -> value |> Map.fetch(:b) end,
...> args: :will_be_ignored, rest_args: :will_be_ignored
...> ] |> CRIDAR.new! |> CRIDAR.call(%{b: 222})
...> value
{:ok, 222}
When `:fun` is any other arity, and the `:args` is set, it is
called with the "vanilla" `:args`:
iex> {:ok, {answer, %CRIDAR{}}} = [
...> fun: fn _x,y,_z -> y end,
...> a: [1,2,3], ra: :this_is_rest_args
...> ] |> CRIDAR.new! |> CRIDAR.call(%{b: 2})
...> answer
2
When `:fun` is any other arity, `:args` is not set but `:rest_args` is, it is
called (`Kernel.apply/2`) with the *test value* and "listified" `:rest_args`:
iex> {:ok, {answer, %CRIDAR{}}} = [
...> fun: fn _p,_q,_r,s -> s end,
...> rest_args: [:a,:b,:c]
...> ] |> CRIDAR.new! |> CRIDAR.call(:any_answer)
...> answer
:c
When `:fun` is any other arity, and neither `:args` nor `:rest_args`
is set, an error will be returned:
iex> {:error, error} = [
...> f: fn _p,_q,_r,s -> s end,
...> ] |> CRIDAR.new! |> CRIDAR.call(:any_value)
...> error |> Exception.message |> String.starts_with?("cridar invalid")
true
"""
@since "0.1.0"
@spec call(t, any) :: {:ok, {any, t}} | {:error, error}
def call(cridar, test_value \\ nil)
def call(
%__MODULE__{
@harnais_runner_key_cridar_fun => fun,
@harnais_runner_key_cridar_args => args,
@harnais_runner_key_cridar_rest_args => rest_args
} = cridar,
test_value
)
when is_function(fun) do
try do
fun
|> case do
fun when is_function(fun, 0) ->
{:ok, {apply(fun, []), cridar}}
fun when is_function(fun, 1) ->
{:ok, {apply(fun, [test_value]), cridar}}
fun ->
cond do
is_value_set(args) ->
{:ok, {apply(fun, args), cridar}}
is_value_set(rest_args) ->
{:ok, {apply(fun, [test_value | rest_args]), cridar}}
true ->
new_error_result(m: "cridar invalid", v: cridar)
end
end
catch
error -> {:error, error}
end
end
def call(
%__MODULE__{
@harnais_runner_key_cridar_module => mod,
@harnais_runner_key_cridar_fun => fun,
@harnais_runner_key_cridar_args => args
} = cridar,
_test_value
)
when is_value_set(args) do
try do
{:ok, {apply(mod, fun, args), cridar}}
rescue
error -> {:error, error}
end
end
def call(
%__MODULE__{
@harnais_runner_key_cridar_module => mod,
@harnais_runner_key_cridar_fun => fun,
@harnais_runner_key_cridar_rest_args => rest_args
} = cridar,
test_value
)
when is_value_set(rest_args) do
try do
{:ok, {apply(mod, fun, [test_value | rest_args |> List.wrap()]), cridar}}
rescue
error -> {:error, error}
end
end
def call(cridar, _test_value) do
new_error_result(m: "cridar invalid", v: cridar)
end
end
defimpl Inspect, for: Harnais.Runner.Cridar do
use Harnais.Runner.Attribute
import Plymio.Fontais.Guard,
only: [
is_value_unset: 1
]
def inspect(
%Harnais.Runner.Cridar{
@harnais_runner_key_cridar_module => cridar_mod,
@harnais_runner_key_cridar_fun => cridar_fun,
@harnais_runner_key_cridar_args => cridar_args,
@harnais_runner_key_cridar_rest_args => cridar_rest_args
},
_opts
) do
cridar_mod_telltale =
cridar_mod
|> case do
x when is_value_unset(x) -> nil
x -> "M=#{inspect(x)}"
end
cridar_fun_telltale =
cridar_fun
|> case do
x when is_value_unset(x) -> nil
x when is_function(x, 1) -> "F=FUN/1"
x when is_function(x, 2) -> "F=FUN/2"
x -> "F=#{inspect(x)}"
end
cridar_args_telltale =
cridar_args
|> case do
x when is_value_unset(x) -> nil
_ -> "+A"
end
cridar_rest_args_telltale =
cridar_rest_args
|> case do
x when is_value_unset(x) -> nil
x -> "RA=#{inspect(x)}"
end
cridar_telltale =
[
cridar_mod_telltale,
cridar_fun_telltale,
cridar_args_telltale,
cridar_rest_args_telltale
]
|> Enum.reject(&is_nil/1)
|> Enum.join("; ")
"CRIDAR(#{cridar_telltale})"
end
end
|
lib/harnais/runner/cridar/cridar.ex
| 0.797596
| 0.472927
|
cridar.ex
|
starcoder
|
defmodule Machinery.Transitions do
@moduledoc """
This is a GenServer that controls the transitions for a struct
using a set of helper functions from `Machinery.Transition`.
It's meant to be run by a supervisor.
"""
use GenServer
alias Machinery.Transition
  @not_declared_error "Transition to this state isn't declared."
def init(args) do
{:ok, args}
end
@doc false
def start_link(opts) do
GenServer.start_link(__MODULE__, :ok, opts)
end
@doc false
def handle_call({:test, struct, state_machine_module, next_state}, _from, states) do
initial_state = state_machine_module._machinery_initial_state()
transitions = state_machine_module._machinery_transitions()
# Getting current state of the struct or falling back to the
# first declared state on the struct model.
current_state = case Map.get(struct, state_machine_module._field()) do
nil -> initial_state
current_state -> current_state
end
    # Checking declared transitions and guard functions before
    # replying whether the transition would be allowed.
declared_transition? = Transition.declared_transition?(transitions, current_state, next_state)
guarded_transition? = Transition.guarded_transition?(state_machine_module, struct, next_state)
response = cond do
!declared_transition? ->
false
guarded_transition? ->
false
true ->
true
end
{:reply, response, states}
end
@doc false
def handle_call({:run, struct, state_machine_module, next_state}, _from, states) do
initial_state = state_machine_module._machinery_initial_state()
transitions = state_machine_module._machinery_transitions()
# Getting current state of the struct or falling back to the
# first declared state on the struct model.
current_state = case Map.get(struct, state_machine_module._field()) do
nil -> initial_state
current_state -> current_state
end
    # Checking declared transitions and guard functions before
    # actually updating the struct and returning the tuple.
declared_transition? = Transition.declared_transition?(transitions, current_state, next_state)
guarded_transition? = Transition.guarded_transition?(state_machine_module, struct, next_state)
response = cond do
!declared_transition? ->
        {:error, @not_declared_error}
guarded_transition? ->
guarded_transition?
true ->
struct = struct
|> Transition.before_callbacks(next_state, state_machine_module)
|> Transition.persist_struct(next_state, state_machine_module)
|> Transition.log_transition(next_state, state_machine_module)
|> Transition.after_callbacks(next_state, state_machine_module)
{:ok, struct}
end
{:reply, response, states}
end
end
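# Hedged usage sketch: in the real library this GenServer is driven by the
# public Machinery API rather than called directly. The state machine
# module and struct below are hypothetical:
#
#     {:ok, pid} = Machinery.Transitions.start_link(name: MyApp.Transitions)
#
#     # :test replies true/false without running callbacks or persisting:
#     GenServer.call(pid, {:test, %Order{state: "created"}, OrderMachine, "paid"})
#
#     # :run executes callbacks and persists, replying {:ok, struct} or
#     # {:error, reason}:
#     GenServer.call(pid, {:run, %Order{state: "created"}, OrderMachine, "paid"})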
# Source file: lib/machinery/transitions.ex
defmodule D8 do
@moduledoc """
--- Day 8: Handheld Halting ---
Your flight to the major airline hub reaches cruising altitude without incident. While you consider checking the in-flight menu for one of those drinks that come with a little umbrella, you are interrupted by the kid sitting next to you.
Their handheld game console won't turn on! They ask if you can take a look.
You narrow the problem down to a strange infinite loop in the boot code (your puzzle input) of the device. You should be able to fix it, but first you need to be able to run the code in isolation.
The boot code is represented as a text file with one instruction per line of text. Each instruction consists of an operation (acc, jmp, or nop) and an argument (a signed number like +4 or -20).
acc increases or decreases a single global value called the accumulator by the value given in the argument. For example, acc +7 would increase the accumulator by 7. The accumulator starts at 0. After an acc instruction, the instruction immediately below it is executed next.
jmp jumps to a new instruction relative to itself. The next instruction to execute is found using the argument as an offset from the jmp instruction; for example, jmp +2 would skip the next instruction, jmp +1 would continue to the instruction immediately below it, and jmp -20 would cause the instruction 20 lines above to be executed next.
nop stands for No OPeration - it does nothing. The instruction immediately below it is executed next.
Run your copy of the boot code. Immediately before any instruction is executed a second time, what value is in the accumulator?
--- Part Two ---
After some careful analysis, you believe that exactly one instruction is corrupted.
Somewhere in the program, either a jmp is supposed to be a nop, or a nop is supposed to be a jmp. (No acc instructions were harmed in the corruption of this boot code.)
The program is supposed to terminate by attempting to execute an instruction immediately after the last instruction in the file. By changing exactly one jmp or nop, you can repair the boot code and make it terminate correctly.
Fix the program so that it terminates normally by changing exactly one jmp (to nop) or nop (to jmp). What is the value of the accumulator after the program terminates?
"""
@behaviour Day
def execute(program, i, acc, visited) do
if MapSet.member?(visited, i) do
{:error, acc}
else
case Map.get(program, i, :end) do
{:nop, _} -> execute(program, i + 1, acc, MapSet.put(visited, i))
{:acc, x} -> execute(program, i + 1, acc + x, MapSet.put(visited, i))
{:jmp, x} -> execute(program, i + x, acc, MapSet.put(visited, i))
:end -> {:ok, acc}
end
end
end
def evaluate(program), do: execute(program, 0, 0, MapSet.new())
def interpret(input) do
input
|> Enum.with_index()
|> Enum.map(fn {line, i} ->
{instruction, relative_index} =
case line do
"nop " <> ri -> {:nop, ri}
"jmp " <> ri -> {:jmp, ri}
"acc " <> ri -> {:acc, ri}
end
{i, {instruction, Utils.to_int(relative_index)}}
end)
|> Map.new()
end
def solve(input) do
program = interpret(input)
{:error, part_1} = evaluate(program)
part_2 =
program
|> Enum.flat_map(fn
{_, {:acc, _}} -> []
{_, {:nop, 0}} -> []
{i, {:nop, ri}} -> [Map.put(program, i, {:jmp, ri})]
{i, {:jmp, ri}} -> [Map.put(program, i, {:nop, ri})]
end)
|> Enum.find_value(fn p ->
case evaluate(p) do
{:error, _} -> false
{:ok, acc} -> acc
end
end)
{part_1, part_2}
end
end
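# Usage sketch against the example program from the puzzle text above
# (assumes Utils.to_int/1 parses signed integers such as "+4" or "-20"):
#
#     input = [
#       "nop +0", "acc +1", "jmp +4", "acc +3", "jmp -3",
#       "acc -99", "acc +1", "jmp -4", "acc +6"
#     ]
#
#     D8.solve(input)
#     #=> {5, 8}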
# Source file: lib/days/08.ex