code stringlengths 5 1.03M | repo_name stringlengths 5 90 | path stringlengths 4 158 | license stringclasses 15 values | size int64 5 1.03M | n_ast_errors int64 0 53.9k | ast_max_depth int64 2 4.17k | n_whitespaces int64 0 365k | n_ast_nodes int64 3 317k | n_ast_terminals int64 1 171k | n_ast_nonterminals int64 1 146k | loc int64 -1 37.3k | cycloplexity int64 -1 1.31k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
import Data.Ratio
#include "../Prelude.hs"
lit0'Rational = 0 :: Rational
rational = (Data.Ratio.%) :: Integer -> Integer -> Rational
show'Rational = show :: Rational -> String
add'Rational = (Prelude.+) :: Rational -> Rational -> Rational
subtract'Rational = (Prelude.-) :: Rational -> Rational -> Rational
| batterseapower/chsc | examples/imaginary/Bernouilli.hs | bsd-3-clause | 312 | 0 | 6 | 48 | 88 | 53 | 35 | 6 | 1 |
module Options () where
import Prelude ()
import Prelude.Compat
import Data.Aeson.Types
opts :: Options
opts = defaultOptions
{ sumEncoding = ObjectWithSingleField
}
| tolysz/prepare-ghcjs | spec-lts8/aeson/benchmarks/Options.hs | bsd-3-clause | 183 | 0 | 6 | 39 | 42 | 27 | 15 | 7 | 1 |
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
{-# OPTIONS_GHC -Wno-redundant-constraints -Wno-name-shadowing #-}
-----------------------------------------------------------------------------
-- |
-- Module : GHC.Compact
-- Copyright : (c) The University of Glasgow 2001-2009
-- (c) Giovanni Campagna <gcampagn@cs.stanford.edu> 2014
-- License : BSD-style (see the file LICENSE)
--
-- Maintainer : libraries@haskell.org
-- Stability : unstable
-- Portability : non-portable (GHC Extensions)
--
-- This module provides a data structure, called a 'Compact', for
-- holding immutable, fully evaluated data in a consecutive block of memory.
-- Compact regions are good for two things:
--
-- 1. Data in a compact region is not traversed during GC; any
-- incoming pointer to a compact region keeps the entire region
-- live. Thus, if you put a long-lived data structure in a compact
-- region, you may save a lot of cycles during major collections,
-- since you will no longer be (uselessly) retraversing this
-- data structure.
--
-- 2. Because the data is stored contiguously, you can easily
-- dump the memory to disk and/or send it over the network.
-- For applications that are not bandwidth bound (GHC's heap
-- representation can be as much of a x4 expansion over a
-- binary serialization), this can lead to substantial speedups.
--
-- For example, suppose you have a function @loadBigStruct :: IO BigStruct@,
-- which loads a large data structure from the file system. You can "compact"
-- the structure with the following code:
--
-- @
-- do r <- 'compact' =<< loadBigStruct
-- let x = 'getCompact' r :: BigStruct
-- -- Do things with x
-- @
--
-- Note that 'compact' will not preserve internal sharing; use
-- 'compactWithSharing' (which is 10x slower) if you have cycles and/or
-- must preserve sharing. The 'Compact' pointer @r@ can be used
-- to add more data to a compact region; see 'compactAdd' or
-- 'compactAddWithSharing'.
--
-- The implementation of compact regions is described by:
--
-- * Edward Z. Yang, Giovanni Campagna, Ömer Ağacan, Ahmed El-Hassany, Abhishek
-- Kulkarni, Ryan Newton. \"/Efficient communication and Collection with Compact
-- Normal Forms/\". In Proceedings of the 20th ACM SIGPLAN International
-- Conference on Functional Programming. September 2015. <http://ezyang.com/compact.html>
--
-- This library is supported by GHC 8.2 and later.
module GHC.Compact (
-- * The Compact type
Compact(..),
-- * Compacting data
compact,
compactWithSharing,
compactAdd,
compactAddWithSharing,
-- * Inspecting a Compact
getCompact,
inCompact,
isCompact,
compactSize,
-- * Other utilities
compactResize,
-- * Internal operations
mkCompact,
compactSized,
) where
import Control.Concurrent.MVar
import GHC.Prim
import GHC.Types
-- | A 'Compact' contains fully evaluated, pure, immutable data.
--
-- 'Compact' serves two purposes:
--
-- * Data stored in a 'Compact' has no garbage collection overhead.
-- The garbage collector considers the whole 'Compact' to be alive
-- if there is a reference to any object within it.
--
-- * A 'Compact' can be serialized, stored, and deserialized again.
-- The serialized data can only be deserialized by the exact binary
-- that created it, but it can be stored indefinitely before
-- deserialization.
--
-- Compacts are self-contained, so compacting data involves copying
-- it; if you have data that lives in two 'Compact's, each will have a
-- separate copy of the data.
--
-- The cost of compaction is fully evaluating the data + copying it. However,
-- because 'compact' does not stop-the-world, retaining internal sharing during
-- the compaction process is very costly. The user can choose whether to
-- 'compact' or 'compactWithSharing'.
--
-- When you have a @'Compact' a@, you can get a pointer to the actual object
-- in the region using 'getCompact'. The 'Compact' type
-- serves as handle on the region itself; you can use this handle
-- to add data to a specific 'Compact' with 'compactAdd' or
-- 'compactAddWithSharing' (giving you a new handle which corresponds
-- to the same compact region, but points to the newly added object
-- in the region). At the moment, due to technical reasons,
-- it's not possible to get the @'Compact' a@ if you only have an @a@,
-- so make sure you hold on to the handle as necessary.
--
-- Data in a compact doesn't ever move, so compacting data is also a
-- way to pin arbitrary data structures in memory.
--
-- There are some limitations on what can be compacted:
--
-- * Functions. Compaction only applies to data.
--
-- * Pinned 'ByteArray#' objects cannot be compacted. This is for a
-- good reason: the memory is pinned so that it can be referenced by
-- address (the address might be stored in a C data structure, for
-- example), so we can't make a copy of it to store in the 'Compact'.
--
-- * Objects with mutable pointer fields (e.g. 'Data.IORef.IORef',
-- 'GHC.Array.MutableArray') also cannot be compacted, because subsequent
-- mutation would destroy the property that a compact is self-contained.
--
-- If compaction encounters any of the above, a 'Control.Exception.CompactionFailed'
-- exception will be thrown by the compaction operation.
--
data Compact a = Compact Compact# a (MVar ())
-- we can *read* from a Compact without taking a lock, but only
-- one thread can be writing to the compact at any given time.
-- The MVar here is to enforce mutual exclusion among writers.
-- Note: the MVar protects the Compact# only, not the pure value 'a'
-- | Make a new 'Compact' object, given a pointer to the true
-- underlying region. You must uphold the invariant that @a@ lives
-- in the compact region.
--
mkCompact
:: Compact# -> a -> State# RealWorld -> (# State# RealWorld, Compact a #)
mkCompact compact# a s =
case unIO (newMVar ()) s of { (# s1, lock #) ->
(# s1, Compact compact# a lock #) }
where
unIO (IO a) = a
-- | Transfer @a@ into a new compact region, with a preallocated size (in
-- bytes), possibly preserving sharing or not. If you know how big the data
-- structure in question is, you can save time by picking an appropriate block
-- size for the compact region.
--
compactSized
:: Int -- ^ Size of the compact region, in bytes
-> Bool -- ^ Whether to retain internal sharing
-> a
-> IO (Compact a)
compactSized (I# size) share a = IO $ \s0 ->
case compactNew# (int2Word# size) s0 of { (# s1, compact# #) ->
case compactAddPrim compact# a s1 of { (# s2, pk #) ->
mkCompact compact# pk s2 }}
where
compactAddPrim
| share = compactAddWithSharing#
| otherwise = compactAdd#
-- | Retrieve a direct pointer to the value pointed at by a 'Compact' reference.
-- If you have used 'compactAdd', there may be multiple 'Compact' references
-- into the same compact region. Upholds the property:
--
-- > inCompact c (getCompact c) == True
--
getCompact :: Compact a -> a
getCompact (Compact _ obj _) = obj
-- | Compact a value. /O(size of unshared data)/
--
-- If the structure contains any internal sharing, the shared data
-- will be duplicated during the compaction process. This will
-- not terminate if the structure contains cycles (use 'compactWithSharing'
-- instead).
--
-- The object in question must not contain any functions or data with mutable
-- pointers; if it does, 'compact' will raise an exception. In the future, we
-- may add a type class which will help statically check if this is the case or
-- not.
--
compact :: a -> IO (Compact a)
compact = compactSized 31268 False
-- | Compact a value, retaining any internal sharing and
-- cycles. /O(size of data)/
--
-- This is typically about 10x slower than 'compact', because it works
-- by maintaining a hash table mapping uncompacted objects to
-- compacted objects.
--
-- The object in question must not contain any functions or data with mutable
-- pointers; if it does, 'compact' will raise an exception. In the future, we
-- may add a type class which will help statically check if this is the case or
-- not.
--
compactWithSharing :: a -> IO (Compact a)
compactWithSharing = compactSized 31268 True
-- | Add a value to an existing 'Compact'. This will help you avoid
-- copying when the value contains pointers into the compact region,
-- but remember that after compaction this value will only be deallocated
-- with the entire compact region.
--
-- Behaves exactly like 'compact' with respect to sharing and what data
-- it accepts.
--
compactAdd :: Compact b -> a -> IO (Compact a)
compactAdd (Compact compact# _ lock) a = withMVar lock $ \_ -> IO $ \s ->
case compactAdd# compact# a s of { (# s1, pk #) ->
(# s1, Compact compact# pk lock #) }
-- | Add a value to an existing 'Compact', like 'compactAdd',
-- but behaving exactly like 'compactWithSharing' with respect to sharing and
-- what data it accepts.
--
compactAddWithSharing :: Compact b -> a -> IO (Compact a)
compactAddWithSharing (Compact compact# _ lock) a =
withMVar lock $ \_ -> IO $ \s ->
case compactAddWithSharing# compact# a s of { (# s1, pk #) ->
(# s1, Compact compact# pk lock #) }
-- | Check if the second argument is inside the passed 'Compact'.
--
inCompact :: Compact b -> a -> IO Bool
inCompact (Compact buffer _ _) !val =
IO (\s -> case compactContains# buffer val s of
(# s', v #) -> (# s', isTrue# v #) )
-- | Check if the argument is in any 'Compact'. If true, the value in question
-- is also fully evaluated, since any value in a compact region must
-- be fully evaluated.
--
isCompact :: a -> IO Bool
isCompact !val =
IO (\s -> case compactContainsAny# val s of
(# s', v #) -> (# s', isTrue# v #) )
-- | Returns the size in bytes of the compact region.
--
compactSize :: Compact a -> IO Word
compactSize (Compact buffer _ lock) = withMVar lock $ \_ -> IO $ \s0 ->
case compactSize# buffer s0 of (# s1, sz #) -> (# s1, W# sz #)
-- | __Experimental__ This function doesn't actually resize a compact
-- region; rather, it changes the default block size which we allocate
-- when the current block runs out of space, and also appends a block
-- to the compact region.
--
compactResize :: Compact a -> Word -> IO ()
compactResize (Compact oldBuffer _ lock) (W# new_size) =
withMVar lock $ \_ -> IO $ \s ->
case compactResize# oldBuffer new_size s of
s' -> (# s', () #)
| sdiehl/ghc | libraries/ghc-compact/GHC/Compact.hs | bsd-3-clause | 10,499 | 0 | 13 | 2,070 | 1,110 | 661 | 449 | 71 | 1 |
module GUI.SaveAs (saveAsPDF, saveAsPNG) where
-- Imports for ThreadScope
import GUI.Timeline.Render (renderTraces, renderYScaleArea)
import GUI.Timeline.Render.Constants
import GUI.Timeline.Ticks (renderXScaleArea)
import GUI.Types
import Events.HECs
-- Imports for GTK
import Graphics.UI.Gtk hiding (rectangle)
import Graphics.Rendering.Cairo
( Render
, Operator(..)
, Format(..)
, rectangle
, getOperator
, setOperator
, fill
, translate
, liftIO
, withPDFSurface
, renderWith
, withImageSurface
, surfaceWriteToPNG
)
saveAs :: HECs -> ViewParameters -> Double -> DrawingArea
-> (Int, Int, Render ())
saveAs hecs params' @ViewParameters{xScaleAreaHeight, width,
height = oldHeight {-, histogramHeight-}}
yScaleAreaWidth yScaleArea =
let histTotalHeight = histXScaleHeight -- + histogramHeight
params@ViewParameters{height} =
params'{ viewTraces = viewTraces params' -- ++ [TraceHistogram]
, height = oldHeight + histTotalHeight + tracePad
}
w = ceiling yScaleAreaWidth + width
h = xScaleAreaHeight + height
drawTraces = renderTraces params hecs (Rectangle 0 0 width height)
drawXScale = renderXScaleArea params hecs
drawYScale = renderYScaleArea params hecs yScaleArea
-- Functions renderTraces and renderXScaleArea draw to the left of 0
-- which is not seen in the normal mode, but would be seen in export,
-- so it has to be cleared before renderYScaleArea is written on top:
clearLeftArea = do
rectangle 0 0 yScaleAreaWidth (fromIntegral h)
op <- getOperator
setOperator OperatorClear
fill
setOperator op
drawAll = do
translate yScaleAreaWidth (fromIntegral xScaleAreaHeight)
drawTraces
translate 0 (- fromIntegral xScaleAreaHeight)
drawXScale
translate (-yScaleAreaWidth) 0
clearLeftArea
translate 0 (fromIntegral xScaleAreaHeight)
drawYScale
in (w, h, drawAll)
saveAsPDF :: FilePath -> HECs -> ViewParameters -> DrawingArea -> IO ()
saveAsPDF filename hecs params yScaleArea = do
(xoffset, _) <- liftIO $ widgetGetSize yScaleArea
let (w', h', drawAll) = saveAs hecs params (fromIntegral xoffset) yScaleArea
withPDFSurface filename (fromIntegral w') (fromIntegral h') $ \surface ->
renderWith surface drawAll
saveAsPNG :: FilePath -> HECs -> ViewParameters -> DrawingArea -> IO ()
saveAsPNG filename hecs params yScaleArea = do
(xoffset, _) <- liftIO $ widgetGetSize yScaleArea
let (w', h', drawAll) = saveAs hecs params (fromIntegral xoffset) yScaleArea
withImageSurface FormatARGB32 w' h' $ \surface -> do
renderWith surface drawAll
surfaceWriteToPNG surface filename
| ml9951/ThreadScope | GUI/SaveAs.hs | bsd-3-clause | 2,796 | 2 | 14 | 646 | 703 | 370 | 333 | -1 | -1 |
module Bot.Test.TestUtil (
testGroupGenerator,
testCase,
(@?=),
captureStdOut
) where
import Bot.Util ( Text )
import Control.Exception ( bracket )
import Data.Text.Lazy.IO ( hGetContents )
import GHC.IO.Handle ( hDuplicate, hDuplicateTo)
import System.Directory ( removeFile )
import System.IO ( Handle, IOMode(ReadWriteMode), withFile,
stdout, hFlush,
hSeek, SeekMode(AbsoluteSeek) )
import Test.Framework.TH ( testGroupGenerator )
import Test.Framework.Providers.HUnit ( testCase ) -- Used by TH
import Test.HUnit ( (@?=) )
captureStdOut :: IO a -> ((a, Text) -> IO b) -> IO b
captureStdOut action processor = do
withTempFile $ \file -> do
a <- redirect stdout file action
hSeek file AbsoluteSeek 0
t <- hGetContents file
processor (a, t)
withTempFile :: (Handle -> IO a) -> IO a
withTempFile action = do
let tempFilename = "redirected_output"
a <- withFile tempFilename ReadWriteMode action
removeFile tempFilename
return a
redirect :: Handle -> Handle -> IO a -> IO a
redirect source destination action = bracket before after (const action)
where
before :: IO Handle
before = do
hFlush source
sourceDup <- hDuplicate source
hDuplicateTo destination source
return sourceDup
after :: Handle -> IO ()
after sourceDup = do
hFlush destination
hDuplicateTo sourceDup source | andregr/bot | test/src/Bot/Test/TestUtil.hs | bsd-3-clause | 1,587 | 0 | 12 | 500 | 448 | 235 | 213 | 41 | 1 |
{-# LANGUAGE ScopedTypeVariables, BangPatterns #-}
import Timing
import Vectorised
import System.IO
import Foreign.Storable
import Foreign.Marshal.Alloc
import Data.Array.Parallel
import System.Environment
import qualified Data.Vector as V
import qualified Vector as V
import qualified Flow as F
import Control.Exception (evaluate)
import qualified Data.Vector.Unboxed as U
import qualified Data.Array.Parallel.Unlifted as P
import Data.Array.Parallel.PArray as PA
main :: IO ()
main
= do args <- getArgs
case args of
[alg, reps, fileName] -> run alg (read reps) fileName
_ -> usage
usage
= putStr $ unlines
[ "usage: smvm <alg> <reps> <file>"
, " alg one of " ++ show ["vectorised", "vector", "flow" ] ]
-- Vectorised Nested Data Parallel Version ------------------------------------
run "vectorised" reps fileName
= do (matrix, vector) <- loadPA fileName
matrix `seq` return ()
vector `seq` return ()
-- Multiply sparse matrix by the dense vector
(vResult, tElapsed)
<- time
$ let loop n
= do -- Fake dependency on 'n' to prevent this being
-- floated out of the loop.
let !result = smvmPA n matrix vector
PA.nf result `seq` return ()
if n <= 1
then return result
else loop (n - 1)
in do
final <- loop reps
return final
-- Print how long it took.
putStr $ prettyTime tElapsed
-- Print some info about the test setup.
putStrLn $ "vector length = " ++ show (U.length (PA.toUArray vector))
-- Print checksum of resulting vector.
putStrLn $ "result sum = " ++ show (U.sum (PA.toUArray vResult))
-- Sequential version using Data.Vector ---------------------------------------
run "vector" reps fileName
= do (segd, uaMatrix, uaVector) <- loadUArr fileName
let vMatrix = U.fromList $ P.toList uaMatrix
let vVector = U.fromList $ P.toList uaVector
let matrix
= V.force
$ V.map U.force
$ V.zipWith
(\start len -> U.slice start len vMatrix)
(U.convert $ P.indicesSegd segd)
(U.convert $ P.lengthsSegd segd)
let vector = U.fromList $ U.toList uaVector
matrix `seq` return ()
vector `seq` return ()
-- Multiply sparse matrix by the dense vector
(vResult, tElapsed)
<- time
$ let loop n
= do -- Fake dependency on 'n' to prevent this being
-- floated out of the loop.
let !result = U.force $ V.smvm n matrix vector
if n <= 1
then return result
else loop (n - 1)
in do
final <- loop reps
return final
-- Print how long it took.
putStr $ prettyTime tElapsed
-- Print some info about the test setup.
putStrLn $ "vector length = " ++ show (U.length vector)
-- Print checksum of resulting vector.
putStrLn $ "result sum = " ++ show (U.sum vResult)
-- Sequential version using Repa Flows ----------------------------------------
run "flow" reps fileName
= do (segd, uaMatrix, uaVector) <- loadUArr fileName
let vRowLens = U.convert $ P.lengthsSegd segd
let vMatrix = U.fromList $ P.toList uaMatrix
let vVector = U.fromList $ P.toList uaVector
vRowLens `seq` return ()
vMatrix `seq` return ()
vVector `seq` return ()
-- Multiply sparse matrix by the dense vector
(vResult, tElapsed)
<- time
$ let loop n
= do let !result = F.smvm vRowLens vMatrix vVector
if n <= 1
then return result
else loop (n - 1)
in do
!final <- loop reps
return final
-- Print how long it took.
putStr $ prettyTime tElapsed
-- Print some info about the test setup.
putStrLn $ "vector length = " ++ show (U.length vVector)
-- Print checksum of resulting vector.
putStrLn $ "result sum = " ++ show (U.sum vResult)
-- Load Matrices --------------------------------------------------------------
-- | Load a test file containing a sparse matrix and dense vector.
loadPA :: String -- ^ filename.
-> IO ( PArray (PArray (Int, Double)) -- sparse matrix
, PArray Double) -- dense vector
loadPA fileName
= do (segd, arrMatrixElems, arrVector) <- loadUArr fileName
let paMatrix = PA.nestUSegd segd (PA.fromUArray2 arrMatrixElems)
let paVector = PA.fromUArray arrVector
return (paMatrix, paVector)
-- | Load a test file containing a sparse matrix and dense vector.
loadUArr :: String -- ^ filename
-> IO ( P.Segd -- segment descriptor saying what array elements
-- belong to each row of the matrix.
, P.Array (Int, Double) -- column indices and matrix elements
, P.Array Double) -- the dense vector
loadUArr fname
= do h <- openBinaryFile fname ReadMode
-- check magic numbers at start of file to guard against word-size screwups.
alloca $ \ptr -> do
hGetBuf h ptr (sizeOf (undefined :: Int))
magic1 :: Int <- peek ptr
hGetBuf h ptr (sizeOf (undefined :: Int))
magic2 :: Int <- peek ptr
if magic1 == 0xc0ffee00 Prelude.&& magic2 == 0x12345678
then return ()
else error $ "bad magic in " ++ fname
-- number of elements in each row of the matrix.
lengths <- P.hGet h
-- indices of all the elements.
indices <- P.hGet h
-- values of the matrix elements.
values <- P.hGet h
-- the dense vector.
vector <- P.hGet h
evaluate lengths
evaluate indices
evaluate values
evaluate vector
let segd = P.lengthsToSegd lengths
matrix = P.zip indices values
return (segd, matrix, vector)
| mainland/dph | dph-examples/examples/spectral/SMVM/dph/Main.hs | bsd-3-clause | 6,312 | 70 | 20 | 2,207 | 1,513 | 782 | 731 | -1 | -1 |
-------------------------------------------------------------------------------
-- |
-- Module : Reinforce.Spaces
-- Copyright : (c) Sentenai 2017
-- License : BSD3
-- Maintainer: sam@sentenai.com
-- Stability : experimental
-- Portability: non-portable
--
-- re-exports of Action- and State- types.
-------------------------------------------------------------------------------
module Reinforce.Spaces ( module X ) where
import Reinforce.Spaces.Action as X (DiscreteActionSpace)
import Reinforce.Spaces.State as X (StateSpaceStatic)
| stites/reinforce | reinforce/src/Reinforce/Spaces.hs | bsd-3-clause | 549 | 0 | 5 | 67 | 49 | 37 | 12 | 3 | 0 |
------------------------------------------------------------------------------
-- |
-- Module: Database.PostgreSQL.Simple.HStore
-- Copyright: (c) 2013 Leon P Smith
-- License: BSD3
-- Maintainer: Leon P Smith <leon@melding-monads.com>
-- Stability: experimental
--
-- Parsers and printers for hstore, a extended type bundled with
-- PostgreSQL providing finite maps from text strings to text strings.
-- See <http://www.postgresql.org/docs/9.2/static/hstore.html> for more
-- information.
--
-- Note that in order to use this type, a database superuser must
-- install it by running a sql script in the share directory. This
-- can be done on PostgreSQL 9.1 and later with the command
-- @CREATE EXTENSION hstore@. See
-- <http://www.postgresql.org/docs/9.2/static/contrib.html> for more
-- information.
--
------------------------------------------------------------------------------
module Database.PostgreSQL.Simple.HStore
( HStoreList(..)
, HStoreMap(..)
, ToHStore(..)
, HStoreBuilder
, toBuilder
, toLazyByteString
, hstore
, parseHStoreList
, ToHStoreText(..)
, HStoreText
) where
import Database.PostgreSQL.Simple.HStore.Implementation
| avieth/postgresql-simple | src/Database/PostgreSQL/Simple/HStore.hs | bsd-3-clause | 1,221 | 0 | 5 | 208 | 85 | 66 | 19 | 12 | 0 |
{-# OPTIONS_GHC -Wall #-}
module AST.Pattern where
import qualified AST.Helpers as Help
import AST.PrettyPrint
import Text.PrettyPrint as PP
import qualified Data.Set as Set
import qualified AST.Variable as Var
import AST.Literal as Literal
data Pattern var
= Data var [Pattern var]
| Record [String]
| Alias String (Pattern var)
| Var String
| Anything
| Literal Literal.Literal
deriving (Eq, Ord, Show)
type RawPattern = Pattern Var.Raw
type CanonicalPattern = Pattern Var.Canonical
cons :: RawPattern -> RawPattern -> RawPattern
cons h t = Data (Var.Raw "::") [h,t]
nil :: RawPattern
nil = Data (Var.Raw "[]") []
list :: [RawPattern] -> RawPattern
list = foldr cons nil
tuple :: [RawPattern] -> RawPattern
tuple es = Data (Var.Raw ("_Tuple" ++ show (length es))) es
boundVarList :: Pattern var -> [String]
boundVarList = Set.toList . boundVars
boundVars :: Pattern var -> Set.Set String
boundVars pattern =
case pattern of
Var x -> Set.singleton x
Alias x p -> Set.insert x (boundVars p)
Data _ ps -> Set.unions (map boundVars ps)
Record fields -> Set.fromList fields
Anything -> Set.empty
Literal _ -> Set.empty
instance Var.ToString var => Pretty (Pattern var) where
pretty pattern =
case pattern of
Var x -> variable x
Literal lit -> pretty lit
Record fs -> PP.braces (commaCat $ map variable fs)
Alias x p -> prettyParens p <+> PP.text "as" <+> variable x
Anything -> PP.text "_"
Data name [hd,tl] | Var.toString name == "::" ->
parensIf isCons (pretty hd) <+> PP.text "::" <+> pretty tl
where
isCons = case hd of
Data ctor _ -> Var.toString ctor == "::"
_ -> False
Data name ps
| Help.isTuple name' -> PP.parens . commaCat $ map pretty ps
| otherwise -> hsep (PP.text name' : map prettyParens ps)
where
name' = Var.toString name
prettyParens :: Var.ToString var => Pattern var -> Doc
prettyParens pattern =
parensIf needsThem (pretty pattern)
where
needsThem =
case pattern of
Data name (_:_) | not (Help.isTuple (Var.toString name)) -> True
Alias _ _ -> True
_ -> False
| JoeyEremondi/haskelm | src/AST/Pattern.hs | bsd-3-clause | 2,254 | 0 | 18 | 606 | 840 | 420 | 420 | 62 | 6 |
{-# LANGUAGE OverloadedStrings #-}
module Yesod.Form.I18n.Spanish where
import Yesod.Form.Types (FormMessage (..))
import Data.Monoid (mappend)
import Data.Text (Text)
spanishFormMessage :: FormMessage -> Text
spanishFormMessage (MsgInvalidInteger t) = "Número entero inválido: " `Data.Monoid.mappend` t
spanishFormMessage (MsgInvalidNumber t) = "Número inválido: " `mappend` t
spanishFormMessage (MsgInvalidEntry t) = "Entrada inválida: " `mappend` t
spanishFormMessage MsgInvalidTimeFormat = "Hora inválida, debe tener el formato HH:MM[:SS]"
spanishFormMessage MsgInvalidDay = "Fecha inválida, debe tener el formato AAAA-MM-DD"
spanishFormMessage (MsgInvalidUrl t) = "URL inválida: " `mappend` t
spanishFormMessage (MsgInvalidEmail t) = "Dirección de correo electrónico inválida: " `mappend` t
spanishFormMessage (MsgInvalidHour t) = "Hora inválida: " `mappend` t
spanishFormMessage (MsgInvalidMinute t) = "Minuto inválido: " `mappend` t
spanishFormMessage (MsgInvalidSecond t) = "Segundo inválido: " `mappend` t
spanishFormMessage MsgCsrfWarning = "Como protección contra ataques CSRF, confirme su envío por favor."
spanishFormMessage MsgValueRequired = "Se requiere un valor"
spanishFormMessage (MsgInputNotFound t) = "Entrada no encontrada: " `mappend` t
spanishFormMessage MsgSelectNone = "<Ninguno>"
spanishFormMessage (MsgInvalidBool t) = "Booleano inválido: " `mappend` t
spanishFormMessage MsgBoolYes = "Sí"
spanishFormMessage MsgBoolNo = "No"
spanishFormMessage MsgDelete = "¿Eliminar?"
| s9gf4ult/yesod | yesod-form/Yesod/Form/I18n/Spanish.hs | mit | 1,520 | 0 | 7 | 176 | 320 | 178 | 142 | 24 | 1 |
module ParenFunBind where
(foo x) y = x + y
((bar x)) y = x + y
((baz x)) (y) = x + y
| mpickering/ghc-exactprint | tests/examples/ghc8/ParenFunBind.hs | bsd-3-clause | 91 | 3 | 9 | 29 | 73 | 36 | 37 | 4 | 1 |
{-@ LIQUID "--no-termination" @-}
module RedBlackTree where
import Language.Haskell.Liquid.Prelude
data RBTree a = Leaf
| Node Color a !(RBTree a) !(RBTree a)
deriving (Show)
data Color = B -- ^ Black
| R -- ^ Red
deriving (Eq,Show)
---------------------------------------------------------------------------
-- | Add an element -------------------------------------------------------
---------------------------------------------------------------------------
{-@ add :: (Ord a) => a -> RBT a -> RBT a @-}
add x s = makeBlack (ins x s)
{-@ ins :: (Ord a) => a -> t:RBT a -> {v: ARBT a | ((IsB t) => (isRB v))} @-}
ins kx Leaf = Node R kx Leaf Leaf
ins kx s@(Node B x l r) = case compare kx x of
LT -> let t = lbal x (ins kx l) r in t
GT -> let t = rbal x l (ins kx r) in t
EQ -> s
ins kx s@(Node R x l r) = case compare kx x of
LT -> Node R x (ins kx l) r
GT -> Node R x l (ins kx r)
EQ -> s
---------------------------------------------------------------------------
-- | Delete an element ----------------------------------------------------
---------------------------------------------------------------------------
{-@ remove :: (Ord a) => a -> RBT a -> RBT a @-}
remove x t = makeBlack (del x t)
{-@ del :: (Ord a) => a -> t:RBT a -> {v:ARBT a | ((isB t) || (isRB v))} @-}
del x Leaf = Leaf
del x (Node _ y a b) = case compare x y of
EQ -> append y a b
LT -> case a of
Leaf -> Node R y Leaf b
Node B _ _ _ -> lbalS y (del x a) b
_ -> let t = Node R y (del x a) b in t
GT -> case b of
Leaf -> Node R y a Leaf
Node B _ _ _ -> rbalS y a (del x b)
_ -> Node R y a (del x b)
{-@ append :: y:a -> l:RBT a -> r:RBT a -> (ARBT2 a l r) @-}
append :: a -> RBTree a -> RBTree a -> RBTree a
append _ Leaf r
= r
append _ l Leaf
= l
append piv (Node R lx ll lr) (Node R rx rl rr)
= case append piv lr rl of
Node R x lr' rl' -> Node R x (Node R lx ll lr') (Node R rx rl' rr)
lrl -> Node R lx ll (Node R rx lrl rr)
append piv (Node B lx ll lr) (Node B rx rl rr)
= case append piv lr rl of
Node R x lr' rl' -> Node R x (Node B lx ll lr') (Node B rx rl' rr)
lrl -> lbalS lx ll (Node B rx lrl rr)
append piv l@(Node B _ _ _) (Node R rx rl rr)
= Node R rx (append piv l rl) rr
append piv l@(Node R lx ll lr) r@(Node B _ _ _)
= Node R lx ll (append piv lr r)
---------------------------------------------------------------------------
-- | Delete Minimum Element -----------------------------------------------
---------------------------------------------------------------------------
{-@ deleteMin :: RBT a -> RBT a @-}
deleteMin (Leaf) = Leaf
deleteMin (Node _ x l r) = makeBlack t
where
(_, t) = deleteMin' x l r
{-@ deleteMin' :: k:a -> l:RBT a -> r:RBT a -> (a, ARBT2 a l r) @-}
deleteMin' k Leaf r = (k, r)
deleteMin' x (Node R lx ll lr) r = (k, Node R x l' r) where (k, l') = deleteMin' lx ll lr
deleteMin' x (Node B lx ll lr) r = (k, lbalS x l' r ) where (k, l') = deleteMin' lx ll lr
---------------------------------------------------------------------------
-- | Rotations ------------------------------------------------------------
---------------------------------------------------------------------------
{-@ lbalS :: k:a -> l:ARBT a -> r:RBT a -> {v: ARBT a | ((IsB r) => (isRB v))} @-}
lbalS k (Node R x a b) r = Node R k (Node B x a b) r
lbalS k l (Node B y a b) = let t = rbal k l (Node R y a b) in t
lbalS k l (Node R z (Node B y a b) c) = Node R y (Node B k l a) (rbal z b (makeRed c))
lbalS k l r = error "nein"
{-@ rbalS :: k:a -> l:RBT a -> r:ARBT a -> {v: ARBT a | ((IsB l) => (isRB v))} @-}
rbalS k l (Node R y b c) = Node R k l (Node B y b c)
rbalS k (Node B x a b) r = let t = lbal k (Node R x a b) r in t
rbalS k (Node R x a (Node B y b c)) r = Node R y (lbal x (makeRed a) b) (Node B k c r)
rbalS k l r = error "nein"
{-@ lbal :: k:a -> l:ARBT a -> RBT a -> RBT a @-}
lbal k (Node R y (Node R x a b) c) r = Node R y (Node B x a b) (Node B k c r)
lbal k (Node R x a (Node R y b c)) r = Node R y (Node B x a b) (Node B k c r)
lbal k l r = Node B k l r
{-@ rbal :: k:a -> l:RBT a -> ARBT a -> RBT a @-}
rbal x a (Node R y b (Node R z c d)) = Node R y (Node B x a b) (Node B z c d)
rbal x a (Node R z (Node R y b c) d) = Node R y (Node B x a b) (Node B z c d)
rbal x l r = Node B x l r
---------------------------------------------------------------------------
---------------------------------------------------------------------------
---------------------------------------------------------------------------
{-@ type BlackRBT a = {v: RBT a | (IsB v)} @-}
{-@ makeRed :: l:BlackRBT a -> ARBT a @-}
makeRed (Node B x l r) = Node R x l r
makeRed _ = error "nein"
{-@ makeBlack :: ARBT a -> RBT a @-}
makeBlack Leaf = Leaf
makeBlack (Node _ x l r) = Node B x l r
---------------------------------------------------------------------------
-- | Specifications -------------------------------------------------------
---------------------------------------------------------------------------
-- | Red-Black Trees
{-@ type RBT a = {v: RBTree a | (isRB v)} @-}
{-@ measure isRB :: RBTree a -> Prop
isRB (Leaf) = true
isRB (Node c x l r) = ((isRB l) && (isRB r) && ((c == R) => ((IsB l) && (IsB r))))
@-}
-- | Almost Red-Black Trees
{-@ type ARBT a = {v: RBTree a | (isARB v) } @-}
{-@ measure isARB :: (RBTree a) -> Prop
isARB (Leaf) = true
isARB (Node c x l r) = ((isRB l) && (isRB r))
@-}
-- | Conditionally Red-Black Tree
{-@ type ARBT2 a L R = {v:ARBT a | (((IsB L) && (IsB R)) => (isRB v))} @-}
-- | Color of a tree
{-@ measure col :: RBTree a -> Color
col (Node c x l r) = c
col (Leaf) = B
@-}
{-@ measure isB :: RBTree a -> Prop
isB (Leaf) = false
isB (Node c x l r) = c == B
@-}
{-@ predicate IsB T = not ((col T) == R) @-}
------------------------------------------------------------------
-- | Auxiliary Invariants ----------------------------------------
------------------------------------------------------------------
{-@ predicate Invs V = ((Inv1 V) && (Inv2 V)) @-}
{-@ predicate Inv1 V = (((isARB V) && (IsB V)) => (isRB V)) @-}
{-@ predicate Inv2 V = ((isRB v) => (isARB v)) @-}
{-@ invariant {v: Color | (v = R || v = B)} @-}
{-@ invariant {v: RBTree a | (Invs v)} @-}
{-@ inv :: RBTree a -> {v:RBTree a | (Invs v)} @-}
inv Leaf = Leaf
inv (Node c x l r) = Node c x (inv l) (inv r)
| mightymoose/liquidhaskell | tests/pos/RBTree-color.hs | bsd-3-clause | 7,263 | 0 | 17 | 2,345 | 2,020 | 1,022 | 998 | 77 | 7 |
module B6 (myFringe, sumSquares) where
import D6 hiding (myFringe)
import C6 hiding ()
myFringe :: (Tree a) -> [a]
myFringe (Leaf x) = [x]
myFringe (Branch left right) = myFringe right
sumSquares1 ((x : xs)) = (x ^ 2) + (sumSquares xs)
sumSquares1 [] = 0
| kmate/HaRe | old/testing/moveDefBtwMods/B6_AstOut.hs | bsd-3-clause | 270 | 0 | 8 | 61 | 127 | 71 | 56 | 8 | 1 |
module MultiOccur1 where
--Generalise the function 'idid' on the second 'id' with a new parameter 'two'
idid = id $ id
main x = idid x
| kmate/HaRe | old/testing/generaliseDef/MultiOccur1.hs | bsd-3-clause | 138 | 0 | 5 | 29 | 25 | 14 | 11 | 3 | 1 |
{-|
Module : Idris.Directives
Description : Act upon Idris directives.
Copyright :
License : BSD3
Maintainer : The Idris Community.
-}
module Idris.Directives(directiveAction) where
import Idris.AbsSyntax
import Idris.ASTUtils
import Idris.Core.Evaluate
import Idris.Core.TT
import Idris.Imports
import Idris.Output (sendHighlighting)
import Util.DynamicLinker
-- | Run the action corresponding to a directive
directiveAction :: Directive -> Idris ()
directiveAction (DLib cgn lib) = do
addLib cgn lib
addIBC (IBCLib cgn lib)
directiveAction (DLink cgn obj) = do
dirs <- allImportDirs
o <- runIO $ findInPath dirs obj
addIBC (IBCObj cgn obj) -- just name, search on loading ibc
addObjectFile cgn o
directiveAction (DFlag cgn flag) = do
let flags = words flag
mapM_ (\f -> addIBC (IBCCGFlag cgn f)) flags
mapM_ (addFlag cgn) flags
directiveAction (DInclude cgn hdr) = do
addHdr cgn hdr
addIBC (IBCHeader cgn hdr)
directiveAction (DHide n') = do
ns <- allNamespaces n'
mapM_ (\n -> do
setAccessibility n Hidden
addIBC (IBCAccess n Hidden)) ns
directiveAction (DFreeze n') = do
ns <- allNamespaces n'
mapM_ (\n -> do
setAccessibility n Frozen
addIBC (IBCAccess n Frozen)) ns
directiveAction (DThaw n') = do
ns <- allNamespaces n'
mapM_ (\n -> do
ctxt <- getContext
case lookupDefAccExact n False ctxt of
Just (_, Frozen) -> do setAccessibility n Public
addIBC (IBCAccess n Public)
_ -> throwError (Msg (show n ++ " is not frozen"))) ns
directiveAction (DInjective n') = do
ns <- allNamespaces n'
mapM_ (\n -> do
setInjectivity n True
addIBC (IBCInjective n True)) ns
directiveAction (DSetTotal n') = do
ns <- allNamespaces n'
mapM_ (\n -> do
setTotality n (Total [])
addIBC (IBCTotal n (Total []))) ns
directiveAction (DAccess acc) = do updateIState (\i -> i { default_access = acc })
directiveAction (DDefault tot) = do updateIState (\i -> i { default_total = tot })
directiveAction (DLogging lvl) = setLogLevel (fromInteger lvl)
directiveAction (DDynamicLibs libs) = do
added <- addDyLib libs
case added of
Left lib -> addIBC (IBCDyLib (lib_name lib))
Right msg -> fail msg
directiveAction (DNameHint ty tyFC ns) = do
ty' <- disambiguate ty
mapM_ (addNameHint ty' . fst) ns
mapM_ (\n -> addIBC (IBCNameHint (ty', fst n))) ns
sendHighlighting $ [(tyFC, AnnName ty' Nothing Nothing Nothing)] ++ map (\(n, fc) -> (fc, AnnBoundName n False)) ns
directiveAction (DErrorHandlers fn nfc arg afc ns) = do
fn' <- disambiguate fn
ns' <- mapM (\(n, fc) -> do
n' <- disambiguate n
return (n', fc)) ns
addFunctionErrorHandlers fn' arg (map fst ns')
mapM_ (addIBC . IBCFunctionErrorHandler fn' arg . fst) ns'
sendHighlighting $
[ (nfc, AnnName fn' Nothing Nothing Nothing)
, (afc, AnnBoundName arg False)
] ++ map (\(n, fc) -> (fc, AnnName n Nothing Nothing Nothing)) ns'
directiveAction (DLanguage ext) = addLangExt ext
directiveAction (DDeprecate n reason) = do
n' <- disambiguate n
addDeprecated n' reason
addIBC (IBCDeprecate n' reason)
directiveAction (DFragile n reason) = do
n' <- disambiguate n
addFragile n' reason
addIBC (IBCFragile n' reason)
directiveAction (DAutoImplicits b) = setAutoImpls b
directiveAction (DUsed fc fn arg) = addUsedName fc fn arg
disambiguate :: Name -> Idris Name
disambiguate n = do
i <- getIState
case lookupCtxtName n (idris_implicits i) of
[(n', _)] -> return n'
[] -> throwError (NoSuchVariable n)
more -> throwError (CantResolveAlts (map fst more))
allNamespaces :: Name -> Idris [Name]
allNamespaces n = do
i <- getIState
case lookupCtxtName n (idris_implicits i) of
[(n', _)] -> return [n']
[] -> throwError (NoSuchVariable n)
more -> return (map fst more)
| mpkh/Idris-dev | src/Idris/Directives.hs | bsd-3-clause | 4,011 | 0 | 20 | 991 | 1,524 | 733 | 791 | 101 | 3 |
module TypeVariable1 where
-- checks for introdution over a type variable. Should error.
f :: a -> a
f x = x | kmate/HaRe | old/testing/introPattern/TypeVariable1_TokOut.hs | bsd-3-clause | 110 | 0 | 5 | 23 | 22 | 13 | 9 | 3 | 1 |
module Mod159_C(C(..)) where
import Mod159_A(C(m2))
| ghc-android/ghc | testsuite/tests/module/Mod159_C.hs | bsd-3-clause | 53 | 0 | 6 | 6 | 25 | 17 | 8 | 2 | 0 |
module Main where
myEven 0 = True
myEven n = not (myEven (n-1))
allEven [] = True
allEven (h:t) = (myEven h) && (allEven t)
myReverse lst = foldl (\result next -> next:result) [] lst
colourCombinations =
let
colours = ["black", "white", "blue", "yellow", "red"]
in
[(a, b) | a <- colours, b <- colours, a < b]
multiplicationTable =
let
numbers = [1..12]
in
[(a, b, a*b) | a <- numbers, b <- numbers]
mapColouring =
let
colours = ["red", "green", "blue"]
in
[(("Alabama", a), ("Florida", f), ("Georgia", g), ("Mississippi", m), ("Tennessee", t)) |
a <- colours,
f <- colours,
g <- colours,
m <- colours,
t <- colours,
m /= t,
m /= a,
a /= t,
a /= m,
a /= g,
a /= f,
g /= f,
g /= t]
| pauldoo/scratch | SevenLanguages/Haskell/day1.hs | isc | 1,047 | 0 | 9 | 500 | 402 | 219 | 183 | 31 | 1 |
module NotQuickCheck
( module Test.QuickCheck.Arbitrary
, module Test.QuickCheck.Gen
, quickCheck
) where
import qualified Test.QuickCheck as QC
import Test.QuickCheck.Arbitrary
import Test.QuickCheck.Gen
import Test.QuickCheck.Gen.Unsafe (promote)
import Test.QuickCheck.Property (Rose (MkRose), joinRose)
import Test.QuickCheck.Random (QCGen, mkQCGen, newQCGen)
import Data.List (intersperse)
import System.Random (split)
import VBool
-----------------------------------------------
----- Result
---
data Result = MkResult{ok :: Maybe VBool, testCase :: [String]}
-----------------------------------------------
----- Property
---
newtype Prop = MkProp{unProp :: Rose Result}
newtype Property = MkProperty{unProperty :: Gen Prop}
-----------------------------------------------
----- Testable
---
class Testable a where
property :: a -> Property
instance Testable Bool where
property = property . vbFromBool
instance Testable Double where
property = property . vbFromDouble
instance Testable VBool where
property vb = property $ MkResult{ok = Just vb, testCase = []}
instance Testable Result where
property = property . MkProp . return
instance Testable Prop where
property = MkProperty . return
instance Testable Property where
property = id
instance (Arbitrary a, Show a, Testable prop) => Testable (a -> prop) where
property pf = forAllShrink arbitrary shrink pf
-----------------------------------------------
----- Case generation and shrinking
---
forAllShrink :: (Show a, Testable prop)
=> Gen a -> (a -> [a]) -> (a -> prop) -> Property
forAllShrink gen shrinker pf =
MkProperty $
gen >>= \x ->
unProperty $
shrinking shrinker x $ \x' ->
counterexample (show x') (pf x')
counterexample :: Testable prop => String -> prop -> Property
counterexample s = mapTotalResult (\res -> res { testCase = s : testCase res })
shrinking :: Testable prop => (a -> [a]) -> a -> (a -> prop) -> Property
shrinking shrinker x0 pf =
MkProperty (fmap (MkProp . joinRose . fmap unProp) (promote (props x0)))
where
props x =
MkRose (unProperty (property (pf x))) [ props x' | x' <- shrinker x ]
mapTotalResult :: Testable prop => (Result -> Result) -> prop -> Property
mapTotalResult f = mapRoseResult (fmap f)
mapRoseResult :: Testable prop =>
(Rose Result -> Rose Result) -> prop -> Property
mapRoseResult f = mapProp (\(MkProp t) -> MkProp (f t))
mapProp :: Testable prop => (Prop -> Prop) -> prop -> Property
mapProp f = MkProperty . fmap f . unProperty . property
-----------------------------------------------
----- State
---
data State = MkState
{ maxSuccess :: Int
, maxDiscarded :: Int
, maxShrinks :: Int
, maxTryShrinks :: Int
, maxSize :: Int
, computeSize :: Int -> Int -> Int
, numSuccess :: Int
, numDiscarded :: Int
, numShrinks :: Int
, numTryShrinks :: Int
, randomSeed :: QCGen
}
initState :: State
initState = MkState
{ maxSuccess = mSu
, maxDiscarded = mDc
, maxShrinks = mSh
, maxTryShrinks = mTSh
, maxSize = mSi
, computeSize = compS
, numSuccess = 0
, numDiscarded = 0
, numShrinks = 0
, numTryShrinks = 0
, randomSeed = mkQCGen 0
}
where
mSu = 100
mDc = 200
mSh = 100
mTSh = 100
mSi = 100
compS n d
| (n `div` mSi) * mSi <= mSu || n >= mSu || mSu `mod` mSi == 0 =
(n `mod` mSi + d `div` 10) `min` mSi
| otherwise =
((n `mod` mSi) * mSi `div` (mSu `mod` mSi) + d `div` 10) `min` mSi
-----------------------------------------------
----- Testing
---
quickCheck :: Testable prop => prop -> IO ()
quickCheck p =
do rnd <- newQCGen
test initState{randomSeed = rnd} (unGen (unProperty (property p)))
return ()
test :: State -> (QCGen -> Int -> Prop) -> IO ()
test st f
| numSuccess st >= maxSuccess st = done st
| numDiscarded st >= maxDiscarded st = giveUp st
| otherwise = run st f
done :: State -> IO ()
done st = putStrLn ("+++ OK! (" ++ show (numSuccess st) ++ " passed)")
giveUp :: State -> IO ()
giveUp st = putStrLn ("*** Gave up! (" ++ show (numSuccess st) ++ " passed)")
failed :: State -> Result -> IO ()
failed st MkResult{ok = Just vb, testCase = testCase} =
do putStrLn $ concat
[ "--- Failed! (after " ++ show (numSuccess st+1) ++ " tests"
, " and " ++ show (numShrinks st) ++ " shrinks"
, " with badness " ++ show vb ++ ")"
]
putStrLn $ concat (intersperse "\n" testCase)
return ()
run :: State -> (QCGen -> Int -> Prop) -> IO ()
run st f =
do let size = computeSize st (numSuccess st) (numDiscarded st)
let (rnd1,rnd2) = split (randomSeed st)
let MkRose res ts = unProp (f rnd1 size)
-- continue?
case res of
MkResult{ok = Nothing} ->
test st{numDiscarded = numDiscarded st + 1, randomSeed = rnd2} f
MkResult{ok = Just vb, testCase = testCase} ->
if isTrue vb
then test st{numSuccess = numSuccess st + 1, randomSeed = rnd2} f
else localMin st res ts
-----------------------------------------------
----- Minimize failed test case
---
localMin :: State -> Result -> [Rose Result] -> IO ()
localMin st res [] = failed st res
localMin st res (t:ts)
| numTryShrinks st >= maxTryShrinks st = failed st res
| otherwise =
do let MkRose res' ts' = t
if ok res' <= ok res ---- <<<< THIS IS WHAT IT ALL LEADS UP TO
then localMin st{ numShrinks = numShrinks st + 1
, numTryShrinks = 0 } res' ts'
else localMin st{numTryShrinks = numTryShrinks st + 1} res ts
-----------------------------------------------
----- Example properties
---
prop_1 :: [Int] -> [Int] -> Bool
prop_1 xs ys = reverse (xs++ys) == reverse xs ++ reverse ys
prop_2 :: Int -> Int -> Double
prop_2 n m = fromIntegral n * fromIntegral m
| koengit/cyphy | src/NotQuickCheck.hs | mit | 5,861 | 0 | 18 | 1,371 | 2,089 | 1,112 | 977 | 137 | 3 |
module ProjectEuler.Problem015Spec (main, spec) where
import Test.Hspec
import ProjectEuler.Problem015
main :: IO ()
main = hspec spec
spec :: Spec
spec = parallel $
describe "solve" $
it "finds the numbers of routes in a 20 x 20 grid" $
solve 20 `shouldBe` 137846528820
| hachibu/project-euler | test/ProjectEuler/Problem015Spec.hs | mit | 286 | 0 | 9 | 59 | 79 | 43 | 36 | 10 | 1 |
data Cor = V|P deriving Show
data Arvore a = No a Cor ( Arvore a ) ( Arvore a ) | Folha deriving Show
ins' e Folha = No e V Folha Folha
ins' e a@( No e1 c esq dir )
| e < e1 = rot ( No e1 c ( ins' e esq ) dir )
| e > e1 = rot ( No e1 c esq ( ins' e esq ) dir )
| e == e1 = a
ins e a = No e1 P esq dir
where ( No e1 a esq dir ) = ins' e a
rot ( No x3 P ( No x1 V a ( No x2 V b c ) ) d ) = No x2 V ( No x1 P a b ) ( No x3 P c d )
rot ( No x3 P ( No x1 V ( No x2 V a b ) c ) d ) = No x2 V ( No x1 P a b ) ( No x3 P c d )
rot ( No x3 P a ( b No x1 V ( No x2 V c d ) ) ) = No x2 V ( No x1 P a b ) ( No x3 P c d )
rot ( No x3 P a ( No x1 V ( No x2 b c ) d ) = No x2 V ( No x1 P a b ) ( No x3 P c d )
rot a = a | AndressaUmetsu/PapaEhPop | redBlack.hs | mit | 712 | 3 | 11 | 267 | 549 | 267 | 282 | -1 | -1 |
-- Functors are things that can be mapped over.
-- fmap :: (a -> b) -> f a -> f b
instance Functor IO where
fmap f action = do
result <- action
return (f result)
main = do
line <- fmap reverse getLine
putStrLn line
-- if fmap was just limited to IO:
-- fmap :: (a -> b) -> IO a -> IO b
main = do
line <- fmap (intersperse '-' . reverse . map toUpper) getLine
putStrLn line
instance Functor ((->) r) where
fmap f g => (\x -> f (g x))
fmap :: (a -> b) -> f a -> f b
fmap :: (a -> b) -> ((->) r) a -> ((->) r) b
fmap :: (a -> b) -> (r -> a) -> (r -> b)
-- f: a->b
-- g: r->a
-- f (g x): r->b
instance Functor ((->) r) where
fmap = (.)
fmap :: (Functor f) => (a -> b) -> f a -> f b
fmap :: (a -> b) -> (f a -> f b)
-- Prelude Control.Monad> :t fmap (*2)
-- fmap (*2) :: (Num b, Functor f) => f b -> f b
-- Prelude Control.Monad> :t fmap (replicate 3)
-- fmap (replicate 3) :: Functor f => f a -> f [a]
-- Prelude Control.Monad>
-- fmap is a function that takes a function and lifts that function so that it operates on functor values.
fmap (++) Just "Hey"
fmap (\f -> f "foo") $ (fmap (++) $ Just "Hello")
-- Applicative
class (Functor f) => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
-- fmap :: (a -> b) -> f a -> f b.
instance Applicative Maybe where
pure = Just
Nothing <*> _ = Nothing
(Just f) <*> something = fmap f something
instance Applicative [] where
pure x = [x]
fs <*> xs = [f x | f <- fs, x <- xs]
instance Applicative IO where
pure = return
a <*> b = do
f <- a
x <- b
return (f x)
instance Applicative ((->) r)
where
pure x = (\_ -> x)
f <*> g = \x -> f x (g x)
-- Applicative Laws
pure id <*> v = v
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
pure f <*> pure x = pure (f x)
u <*> pure y = pure ($ y) <*> u
liftA2 f a b = f <$> a <*> b | v0lkan/learning-haskell | applicative-functors.hs | mit | 1,912 | 19 | 14 | 566 | 808 | 403 | 405 | -1 | -1 |
import Control.Monad
main :: IO ()
main = do
n <- readLn :: IO Int
str <- replicateM n getLine
let ans = map (sum . map read . words) str
mapM_ print ans
| mgrebenets/hackerrank | alg/warmup/solve-me-second.hs | mit | 163 | 1 | 14 | 43 | 85 | 38 | 47 | 7 | 1 |
module Globber (matchGlob) where
import AParser (runParser)
import GlobParser (Token (..), anyChar, parsePattern, setToList)
import Data.Maybe (isJust)
type GlobPattern = String
matchGlob :: GlobPattern -> String -> Bool
matchGlob p = maybe (const False) go (parsePattern p)
where go [Eof] [] = True
go [Eof] _ = False
go [Many,Eof] _ = True
go _ [] = False
go (x:xs) t@(y:ys) = case x of
U c -> (c == y) && go xs ys
S cs -> elem y (setToList cs) && go xs ys
Any -> isJust (runParser anyChar t) && go xs ys
Many -> go xs t || go (x:xs) ys
| ssbl/globber | globber-1.0.0/Globber.hs | mit | 759 | 0 | 13 | 325 | 290 | 153 | 137 | 16 | 8 |
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
-- | <http://strava.github.io/api/v3/activities/>
module Strive.Types.Activities
( ActivityDetailed(..)
, ActivitySummary(..)
, ActivityZoneDetailed(..)
, ActivityZoneDistributionBucket(..)
, ActivityLapSummary(..)
) where
import Control.Applicative (empty)
import Data.Aeson ((.:), (.:?), FromJSON, Value(Object), parseJSON)
import Data.Aeson.TH (deriveFromJSON)
import Data.Aeson.Types (Parser, withObject)
import Data.Text (Text)
import Data.Time.Clock (UTCTime)
import Strive.Enums (ActivityType, ActivityZoneType, ResourceState)
import Strive.Internal.TH (options)
import Strive.Types.Athletes (AthleteMeta)
import Strive.Types.Efforts (EffortDetailed)
import Strive.Types.Gear (GearSummary)
import Strive.Types.Polylines (PolylineDetailed, PolylineSummary)
parseLatlng :: Maybe [Double] -> Parser (Maybe (Double, Double))
parseLatlng = \case
Nothing -> pure Nothing
Just [] -> pure Nothing
Just [lat, lng] -> pure $ Just (lat, lng)
_ -> fail "Invalid array length when parsing a Latlng"
-- | <http://strava.github.io/api/v3/activities/#detailed>
data ActivityDetailed = ActivityDetailed
{ activityDetailed_achievementCount :: Integer
, activityDetailed_athlete :: AthleteMeta
, activityDetailed_athleteCount :: Integer
, activityDetailed_averageSpeed :: Double
, activityDetailed_averageWatts :: Maybe Double
, activityDetailed_averageHeartrate :: Maybe Double
, activityDetailed_calories :: Double
, activityDetailed_commentCount :: Integer
, activityDetailed_commute :: Bool
, activityDetailed_description :: Maybe Text
, activityDetailed_deviceWatts :: Maybe Bool
, activityDetailed_distance :: Double
, activityDetailed_elapsedTime :: Integer
, activityDetailed_endLatlng :: Maybe (Double, Double)
, activityDetailed_externalId :: Maybe Text
, activityDetailed_flagged :: Bool
, activityDetailed_gear :: GearSummary
, activityDetailed_gearId :: Maybe Text
, activityDetailed_hasKudoed :: Bool
, activityDetailed_id :: Integer
, activityDetailed_instagramPrimaryPhoto :: Maybe Text
, activityDetailed_kilojoules :: Maybe Double
, activityDetailed_locationCity :: Maybe Text
, activityDetailed_locationCountry :: Maybe Text
, activityDetailed_locationState :: Maybe Text
, activityDetailed_manual :: Bool
, activityDetailed_map :: PolylineDetailed
, activityDetailed_maxHeartrate :: Maybe Double
, activityDetailed_maxSpeed :: Double
, activityDetailed_movingTime :: Integer
, activityDetailed_name :: Text
, activityDetailed_photoCount :: Integer
, activityDetailed_private :: Bool
, activityDetailed_resourceState :: ResourceState
, activityDetailed_segmentEfforts :: [EffortDetailed]
, activityDetailed_startDate :: UTCTime
, activityDetailed_startDateLocal :: UTCTime
, activityDetailed_startLatitude :: Double
, activityDetailed_startLatlng :: Maybe (Double, Double)
, activityDetailed_startLongitude :: Double
, activityDetailed_timezone :: Text
, activityDetailed_totalElevationGain :: Double
, activityDetailed_trainer :: Bool
, activityDetailed_truncated :: Integer
, activityDetailed_type :: ActivityType
, activityDetailed_uploadId :: Maybe Integer
, activityDetailed_weightedAverageWatts :: Maybe Integer
}
deriving Show
instance FromJSON ActivityDetailed where
parseJSON = withObject "ActivityDetailed" $ \v ->
ActivityDetailed
<$> v
.: "achievement_count"
<*> v
.: "athlete"
<*> v
.: "athlete_count"
<*> v
.: "average_speed"
<*> v
.:? "average_watts"
<*> v
.:? "average_heartrate"
<*> v
.: "calories"
<*> v
.: "comment_count"
<*> v
.: "commute"
<*> v
.:? "description"
<*> v
.:? "device_watts"
<*> v
.: "distance"
<*> v
.: "elapsed_time"
<*> (v .:? "end_latlng" >>= parseLatlng)
<*> v
.:? "external_id"
<*> v
.: "flagged"
<*> v
.: "gear"
<*> v
.:? "gear_id"
<*> v
.: "has_kudoed"
<*> v
.: "id"
<*> v
.:? "instagram_primary_photo"
<*> v
.:? "kilojoules"
<*> v
.:? "location_city"
<*> v
.:? "location_country"
<*> v
.:? "location_state"
<*> v
.: "manual"
<*> v
.: "map"
<*> v
.:? "max_heartrate"
<*> v
.: "max_speed"
<*> v
.: "moving_time"
<*> v
.: "name"
<*> v
.: "photo_count"
<*> v
.: "private"
<*> v
.: "resource_state"
<*> v
.: "segment_efforts"
<*> v
.: "start_date"
<*> v
.: "start_date_local"
<*> v
.: "start_latitude"
<*> (v .:? "start_latlng" >>= parseLatlng)
<*> v
.: "start_longitude"
<*> v
.: "timezone"
<*> v
.: "total_elevation_gain"
<*> v
.: "trainer"
<*> v
.: "truncated"
<*> v
.: "type"
<*> v
.:? "upload_id"
<*> v
.:? "weighted_average_watts"
-- | <http://strava.github.io/api/v3/activities/#summary>
data ActivitySummary = ActivitySummary
{ activitySummary_achievementCount :: Integer
, activitySummary_athlete :: AthleteMeta
, activitySummary_athleteCount :: Integer
, activitySummary_averageSpeed :: Double
, activitySummary_averageWatts :: Maybe Double
, activitySummary_averageHeartrate :: Maybe Double
, activitySummary_commentCount :: Integer
, activitySummary_commute :: Bool
, activitySummary_deviceWatts :: Maybe Bool
, activitySummary_distance :: Double
, activitySummary_elapsedTime :: Integer
, activitySummary_endLatlng :: Maybe (Double, Double)
, activitySummary_externalId :: Maybe Text
, activitySummary_flagged :: Bool
, activitySummary_gearId :: Maybe Text
, activitySummary_hasKudoed :: Bool
, activitySummary_id :: Integer
, activitySummary_kilojoules :: Maybe Double
, activitySummary_kudosCount :: Integer
, activitySummary_locationCity :: Maybe Text
, activitySummary_locationCountry :: Maybe Text
, activitySummary_locationState :: Maybe Text
, activitySummary_manual :: Bool
, activitySummary_map :: PolylineSummary
, activitySummary_maxHeartrate :: Maybe Double
, activitySummary_maxSpeed :: Double
, activitySummary_movingTime :: Integer
, activitySummary_name :: Text
, activitySummary_photoCount :: Integer
, activitySummary_private :: Bool
, activitySummary_resourceState :: ResourceState
, activitySummary_startDate :: UTCTime
, activitySummary_startDateLocal :: UTCTime
, activitySummary_startLatitude :: Double
, activitySummary_startLatlng :: Maybe (Double, Double)
, activitySummary_startLongitude :: Double
, activitySummary_timezone :: Text
, activitySummary_totalElevationGain :: Double
, activitySummary_trainer :: Bool
, activitySummary_type :: ActivityType
, activitySummary_uploadId :: Maybe Integer
, activitySummary_weightedAverageWatts :: Maybe Integer
}
deriving Show
instance FromJSON ActivitySummary where
parseJSON = withObject "ActivitySummary" $ \v ->
ActivitySummary
<$> v
.: "achievement_count"
<*> v
.: "athlete"
<*> v
.: "athlete_count"
<*> v
.: "average_speed"
<*> v
.:? "average_watts"
<*> v
.:? "average_heartrate"
<*> v
.: "comment_count"
<*> v
.: "commute"
<*> v
.:? "device_watts"
<*> v
.: "distance"
<*> v
.: "elapsed_time"
<*> (v .:? "end_latlng" >>= parseLatlng)
<*> v
.:? "external_id"
<*> v
.: "flagged"
<*> v
.:? "gear_id"
<*> v
.: "has_kudoed"
<*> v
.: "id"
<*> v
.:? "kilojoules"
<*> v
.: "kudos_count"
<*> v
.:? "location_city"
<*> v
.:? "location_country"
<*> v
.:? "location_state"
<*> v
.: "manual"
<*> v
.: "map"
<*> v
.:? "max_heartrate"
<*> v
.: "max_speed"
<*> v
.: "moving_time"
<*> v
.: "name"
<*> v
.: "photo_count"
<*> v
.: "private"
<*> v
.: "resource_state"
<*> v
.: "start_date"
<*> v
.: "start_date_local"
<*> v
.: "start_latitude"
<*> (v .:? "start_latlng" >>= parseLatlng)
<*> v
.: "start_longitude"
<*> v
.: "timezone"
<*> v
.: "total_elevation_gain"
<*> v
.: "trainer"
<*> v
.: "type"
<*> v
.:? "upload_id"
<*> v
.:? "weighted_average_watts"
-- | <http://strava.github.io/api/v3/activities/#zones>
data ActivityZoneDistributionBucket = ActivityZoneDistributionBucket
{ activityZoneDistributionBucket_max :: Integer
, activityZoneDistributionBucket_min :: Integer
, activityZoneDistributionBucket_time :: Integer
}
deriving Show
$(deriveFromJSON options ''ActivityZoneDistributionBucket)
-- | <http://strava.github.io/api/v3/activities/#zones>
data ActivityZoneDetailed = ActivityZoneDetailed
{ activityZoneDetailed_distributionBuckets
:: [ActivityZoneDistributionBucket]
, activityZoneDetailed_resourceState :: ResourceState
, activityZoneDetailed_sensorBased :: Bool
, activityZoneDetailed_type :: ActivityZoneType
}
deriving Show
$(deriveFromJSON options ''ActivityZoneDetailed)
-- | <http://strava.github.io/api/v3/activities/#laps>
data ActivityLapSummary = ActivityLapSummary
{ activityLapSummary_activityId :: Integer
, activityLapSummary_athleteId :: Integer
, activityLapSummary_averageSpeed :: Double
, activityLapSummary_averageWatts :: Double
, activityLapSummary_distance :: Double
, activityLapSummary_elapsedTime :: Integer
, activityLapSummary_endIndex :: Integer
, activityLapSummary_id :: Integer
, activityLapSummary_lapIndex :: Integer
, activityLapSummary_maxSpeed :: Double
, activityLapSummary_movingTime :: Double
, activityLapSummary_name :: Text
, activityLapSummary_resourceState :: ResourceState
, activityLapSummary_startDate :: UTCTime
, activityLapSummary_startDateLocal :: UTCTime
, activityLapSummary_startIndex :: Integer
, activityLapSummary_totalElevationGain :: Double
}
deriving Show
instance FromJSON ActivityLapSummary where
parseJSON (Object o) =
ActivityLapSummary
<$> ((o .: "activity") >>= (.: "id"))
<*> ((o .: "athlete") >>= (.: "id"))
<*> o
.: "average_speed"
<*> o
.: "average_watts"
<*> o
.: "distance"
<*> o
.: "elapsed_time"
<*> o
.: "end_index"
<*> o
.: "id"
<*> o
.: "lap_index"
<*> o
.: "max_speed"
<*> o
.: "moving_time"
<*> o
.: "name"
<*> o
.: "resource_state"
<*> o
.: "start_date"
<*> o
.: "start_date_local"
<*> o
.: "start_index"
<*> o
.: "total_elevation_gain"
parseJSON _ = empty
| tfausak/strive | source/library/Strive/Types/Activities.hs | mit | 11,110 | 0 | 99 | 2,673 | 2,180 | 1,240 | 940 | 369 | 4 |
module Level.Grid.Types where
import Coord.Types
import qualified Data.Map as M
type CellMap = M.Map Coord Cell
data Cell
= GridEmpty -- Nothing done - default value
| GridFloor -- vertex in-graph or presence of edge
| GridEdgeWall -- Absence of an edge
| GridEdgeDoor -- Presence of an edge (special)
deriving (Eq)
data Grid = Grid
{ gMin :: Coord
, gMax :: Coord
, gCells :: CellMap
} deriving (Eq)
instance Show Cell where
show GridEmpty = "?"
show GridEdgeWall = "#"
show GridEdgeDoor = "+"
show GridFloor = "."
instance Show Grid where
show g = foldl appendGridRow [] (cellRows g)
where
appendGridRow s y = s ++ foldr buildRow "\n" (cellRowCoords g y)
buildRow c s' = show (cell c g) ++ s'
-- Find a cell in the grid. The cell may be an edge or a vertex.
cell :: Coord -> Grid -> Cell
cell c g = M.findWithDefault GridEmpty c $ gCells g
-- Get coords in cell space of every cell in a row
rowCells :: Int -> Grid -> [Coord]
rowCells y g =
case (gridToCell $ gMin g, gridToCell $ gMax g) of
((x, _), (x', _)) -> [(x'', 2 * y) | x'' <- [x .. x']]
colCells :: Int -> Grid -> [Coord]
colCells x g =
case (gridToCell $gMin g, gridToCell $ gMax g) of
((_, y), (_, y')) -> [(2 * x, y'') | y'' <- [y .. y']]
-- Is a coordinate in cell space a node (vertex) coordinate?
isNodeCoord :: Coord -> Bool
isNodeCoord (x, y) = even x && even y
-- Is a coordinate in cell space a link (edge) coordinate?
isLinkCoord :: Coord -> Bool
isLinkCoord (x, y) = odd x || odd y
cellRows :: Grid -> [Int]
cellRows (Grid gmin gmax _) =
case (gridToCell gmin, gridToCell gmax) of
((_, y), (_, y')) -> [y .. y']
cellRowCoords :: Grid -> Int -> [Coord]
cellRowCoords (Grid gmin gmax _) y =
case (gridToCell gmin, gridToCell gmax) of
((x, _), (x', _)) -> [(x'', y) | x'' <- [x .. x']]
allCellCoords :: Grid -> [Coord]
allCellCoords (Grid gmin gmax _) =
[(x'', y'') | x'' <- [x .. x'], y'' <- [y .. y']]
where
((x, y), (x', y')) = (gridToCell gmin, gridToCell gmax)
cellCoordsUsing :: (Coord -> Bool) -> Grid -> [Coord]
cellCoordsUsing f (Grid gmin gmax _) =
case (gridToCell gmin, gridToCell gmax) of
((x, y), (x', y')) ->
[(x'', y'') | x'' <- [x .. x'], y'' <- [y .. y'], f (x'', y'')]
nodeCoords :: Grid -> [Coord]
nodeCoords (Grid (x, y) (x', y') _) =
[(x'', y'') | x'' <- [x .. x'], y'' <- [y .. y']]
nodeCellCoords :: Grid -> [Coord]
nodeCellCoords = cellCoordsUsing isNodeCoord
linkCellCoords :: Grid -> [Coord]
linkCellCoords = cellCoordsUsing isLinkCoord
node :: Coord -> Grid -> Cell
node c = cell $ gridToCell c
gridToCell :: Coord -> Coord
gridToCell (x, y) = (x * 2, y * 2)
emptyGrid :: Coord -> Coord -> Grid
emptyGrid gmin gmax = Grid {gMin = gmin, gMax = gmax, gCells = M.empty}
allFloorGrid :: Coord -> Coord -> Grid
allFloorGrid gmin gmax = setAllCells GridFloor coords grid
where
grid = emptyGrid gmin gmax
coords = allCellCoords grid
emptyLinkedGrid :: Coord -> Coord -> Grid
emptyLinkedGrid gmin gmax = setAllLinks GridFloor $ emptyGrid gmin gmax
emptyUnlinkedGrid :: Coord -> Coord -> Grid
emptyUnlinkedGrid gmin gmax = setAllLinks GridEdgeWall $ emptyGrid gmin gmax
-- | Set a specific cell using a cell space coordinate
setCell :: Cell -> Grid -> Coord -> Grid
setCell c g x = g {gCells = M.insert x c $ gCells g}
-- | Get the cell space coordinate of the edge between two adjacent cells
edgeCoord :: Coord -> Coord -> Coord
edgeCoord c1 c2 = gridToCell c1 |+| delta
where
delta = c2 |-| c1
-- | set the edge between two adjacent cells
setLink :: Cell -> Grid -> Coord -> Coord -> Grid
setLink v g c1 c2 = setCell v g $ edgeCoord c1 c2
setNode :: Cell -> Grid -> Coord -> Grid
setNode v g c = setCell v g $ gridToCell c
setAllCells :: Cell -> [Coord] -> Grid -> Grid
setAllCells v cs g = foldl (setCell v) g cs
setAllLinks :: Cell -> Grid -> Grid
setAllLinks v g = setAllCells v (linkCellCoords g) g
-- | link two cells in grid space. They must be adjacent
link :: Grid -> Coord -> Coord -> Grid
link = setLink GridFloor
link' :: Grid -> Coord -> Coord -> Grid
link' = setLink GridEdgeDoor
-- | unlink two cells in grid space. They must be adjacent
unlink :: Grid -> Coord -> Coord -> Grid
unlink = setLink GridEdgeWall
-- | Visit a cell
visit :: Grid -> Coord -> Grid
visit = setNode GridFloor
-- | Carve - cut an edge and a cell
fCarve :: (Grid -> Coord -> Coord -> Grid) -> Grid -> Coord -> Coord -> Grid
fCarve f g c1 c2 = f (visit g c2) c1 c2
carve :: Grid -> Coord -> Coord -> Grid
carve = fCarve link
carve' :: Grid -> Coord -> Coord -> Grid
carve' = fCarve link'
uncarve :: Grid -> Coord -> Coord -> Grid
uncarve g c1 c2 = unlink (setNode GridEdgeWall g c2) c1 c2
-- Cross coordinates in unspecified space
crossCoords :: Coord -> [Coord]
crossCoords (x, y) = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
-- Adjacent vertexes in grid space
adjacentNodes :: Coord -> [Coord]
adjacentNodes = crossCoords
adjacentNodesOf :: (Cell -> Bool) -> Coord -> Grid -> [Coord]
adjacentNodesOf f x g = filter predicate (adjacentNodes x)
where
predicate x' = isNodeInBounds g x' && f (node x' g)
unlinkedNeighbors :: Coord -> Grid -> [Coord]
unlinkedNeighbors x g = filter predicate (adjacentNodes x)
where
predicate x' = isNodeInBounds g x' && not (isLinked x x' g)
linkedNeighbors :: Coord -> Grid -> [Coord]
linkedNeighbors x g = filter predicate (adjacentNodes x)
where
predicate x' = isNodeInBounds g x' && isLinked x x' g
-- Is inbounds in grid space
isNodeInBounds :: Grid -> Coord -> Bool
isNodeInBounds (Grid (xMin, yMin) (xMax, yMax) _) (x, y) =
xMin <= x && yMin <= y && x <= xMax && y <= yMax
-- Is inbounds in cell space
isLinkInBounds :: Grid -> Coord -> Bool
isLinkInBounds (Grid gmin gmax _) (x, y) =
case (gridToCell gmin, gridToCell gmax) of
((xMin, yMin), (xMax, yMax)) ->
xMin <= x && yMin <= y && x <= xMax && y <= yMax
-- links in cell space of vertex in grid space
links :: Coord -> [Coord]
links = crossCoords . gridToCell
-- are two coords in grid space linked?
isLinked :: Coord -> Coord -> Grid -> Bool
isLinked c1 c2 grid =
elem c2 (adjacentNodes c1) && (cval == GridEdgeDoor || cval == GridFloor)
where cval = cell (edgeCoord c1 c2) grid
| wmarvel/haskellrogue | src/Level/Grid/Types.hs | mit | 6,251 | 0 | 14 | 1,374 | 2,417 | 1,305 | 1,112 | 132 | 1 |
{-# LANGUAGE RecordWildCards #-}
module UART.Tx (txInit, txRun) where
import CLaSH.Prelude
import Control.Lens
import Control.Monad
import Control.Monad.Trans.State
import Data.Tuple
import Types
data TxState = TxState
{ _tx :: Bit
, _tx_done_tick :: Bool
, _tx_state :: Unsigned 2
, _s_reg :: Unsigned 4 -- sampling counter
, _n_reg :: Unsigned 3 -- number of bits received
, _b_reg :: Byte -- byte register
}
makeLenses ''TxState
txInit :: TxState
txInit = TxState 1 False 0 0 0 0
txRun :: TxState -> (Bool, Byte, Bit) -> (TxState, (Bit, Bool))
txRun s@(TxState {..}) (tx_start, tx_din, s_tick) = swap $ flip runState s $ do
tx_done_tick .= False
case _tx_state of
0 -> idle
1 -> start
2 -> rdata
3 -> stop
done_tick <- use tx_done_tick
return (_tx, done_tick)
where
idle = do
tx .= 1
when tx_start $ do
tx_state .= 1
s_reg .= 0
b_reg .= tx_din
start = do
tx .= 0
when (s_tick == 1) $
if _s_reg == 15 then do
tx_state .= 2
s_reg .= 0
n_reg .= 0
else
s_reg += 1
rdata = do
tx .= _b_reg ! 0
when (s_tick == 1) $
if _s_reg == 15 then do
s_reg .= 0
b_reg .= _b_reg `shiftR` 1
if _n_reg == 7 then -- 8 bits
tx_state .= 3
else
n_reg += 1
else
s_reg += 1
stop = do
tx .= 1
when (s_tick == 1) $
if _s_reg == 15 then do
tx_state .= 0
tx_done_tick .= True
else
s_reg += 1
| aufheben/Y86 | SEQ/UART/Tx.hs | mit | 1,570 | 0 | 14 | 557 | 534 | 281 | 253 | -1 | -1 |
module Proxy.PathFinding.FloydWarshall where
import Proxy.Math.Graph
import Data.Array
import Data.Monoid
import Control.Monad
import Proxy.PathFinding.Specification
import Proxy.Math.InfNumber
sp :: (WeightedGraph g b) => g a b -> Node -> Node -> Maybe Path
sp g o d = floWar g ! (o,d,0)
shortestPaths :: (WeightedGraph g b) => g a b -> [(Node,Node)] -> [Maybe Path]
shortestPaths g = map ((allPaths !) . (\(x,y) -> (x,y,0)))
where allPaths = floWar g
floWar :: (WeightedGraph g b) => g a b -> Array (Node,Node,Node) (Maybe Path)
floWar g = theArray
where theArray = array ((0,0,0),(s - 1,s - 1,s)) (initialized ++ recursive)
s = size g
ns = nodes g
cost = weight g
initialized = [((i,j,s), f (i,j)) |
i <- ns,
j <- ns,
let f (x,y) = if arc g x y
then return . mkPath $ [x,y]
else Nothing]
recursive = [((i,j,k),p) |
i <- ns,
j <- ns,
k <- ns,
let p = minBy (liftM (pc g)) (theArray ! (i,j,k+1)) (liftM2 (+|+) (theArray ! (i,k,k+1)) (theArray ! (k,j,k+1)))]
floWarEf :: (WeightedGraph g b) => g a b -> Array (Node,Node,Node) (InfNumbers b,Maybe Node)
floWarEf g = theArray
where theArray = array ((0,0,0),(s - 1,s - 1,s)) (initialized ++ recursive)
s = size g
ns = nodes g
cost = weight g
initialized = [((i,j,s), f (i,j)) |
i <- ns,
j <- ns,
let f (x,y)
| x == y = (F 0,Nothing)
| arc g x y = (F (weight g (x,y)), Just y)
| otherwise = (Inf, Nothing)]
recursive = [((i,j,k),p) |
i <- ns,
j <- ns,
k <- ns,
let p = if fst (theArray ! (i,k,k+1)) + fst (theArray ! (k,j,k+1)) < fst (theArray ! (i,j,k+1))
then (fst (theArray ! (i,k,k+1)) + fst (theArray ! (k,j,k+1)), snd (theArray ! (i,k,k+1)))
else theArray ! (i,j,k+1)]
minBy :: (Ord b) => (a -> b) -> a -> a -> a
minBy f x y = if f x <= f y
then x
else y
sp' :: (WeightedGraph g b) => g a b -> Node -> Node -> Maybe Path
sp' g o d = liftM mkPath . path o d . floWarEf $ g
path :: (Real b) => Node -> Node -> Array (Node,Node,Node) (InfNumbers b,Maybe Node) -> Maybe [Node]
path n m a
| n == m = Just [n]
| otherwise = snd (a ! (n,m,0)) >>= (\s -> path s m a) >>= return . (n :)
{-
shortestPathD :: (WeightedGraph g b) => g a b -> Node -> Node -> Maybe b
shortestPathD g n1 n2 = getLeast (floWar ! (n1,n2,0))
where floWar = array ((0,0,0),(s - 1,s - 1,s)) (initialized ++ recursive)
s = size g
ns = nodes g
cost = weight g
initialized = [((i,j,s), Least $ cost (i,j) ) | i <- ns , j <- ns]
recursive = [((i,j,k),w) |
i <- ns,
j <- ns,
k <- ns,
let w = (floWar ! (i,j,k+1)) <> (liftM2 (+) (floWar ! (i,k,k+1)) (floWar ! (k,j,k+1)))]
shortestPaths :: (WeightedGraph g b) => g a b -> [(Node, Node)] -> [Maybe (WeightedPath b)]
shortestPaths g = map (getLeast . (pathBuilder !) . f)
where pathBuilder = floWar g
f (x,y) = (x,y,0)
floWar :: (WeightedGraph g b) => g a b -> Array (Node,Node,Node) (Least (WeightedPath b))
floWar g = theArray
where theArray = array ((0,0,0),(s - 1,s - 1,s)) (initialized ++ recursive)
s = size g
ns = nodes g
cost = weight g
initialized = [((i,j,s), f (i,j)) |
i <- ns,
j <- ns,
let f (x,y) = Least $ case cost (i,j) of
Nothing -> Nothing
Just a -> if a == 0
then Just $ WP a $ Path [i]
else Just $ WP a $ Path [i,j]]
recursive = [((i,j,k),w) |
i <- ns,
j <- ns,
k <- ns,
let w = (theArray ! (i,j,k+1)) <> (liftM2 (<>) (theArray ! (i,k,k+1)) (theArray ! (k,j,k+1)))]
shortestWeightedPath :: (WeightedGraph g b) => g a b -> Node -> Node -> Maybe (WeightedPath b)
shortestWeightedPath g n1 n2 = getLeast (pathBuilder ! (n1,n2,0))
where pathBuilder = floWar g
-} | mapinguari/SC_HS_Proxy | src/Proxy/PathFinding/FloydWarshall.hs | mit | 4,754 | 0 | 19 | 2,004 | 1,367 | 750 | 617 | 59 | 2 |
-- Sequence convergence
-- https://www.codewars.com/kata/59971e64bfccc70748000068
module SequenceConvergence.Kata (convergence) where
import Data.Char (digitToInt)
import Data.List (find)
import Data.Maybe (fromJust)
convergence :: Int -> Int
convergence = length . takeWhile (\e -> fromJust . fmap ( /= e) . find (>= e) $ base) . iterate f
base = iterate f 1
f :: Int -> Int
f n | (== 1) . length . show $ n = n + n
| otherwise = (+n) . product . filter (/= 0) . map digitToInt . show $ n
| gafiatulin/codewars | src/Beta/SequenceConvergence.hs | mit | 499 | 0 | 14 | 96 | 201 | 109 | 92 | 10 | 1 |
coprime :: Int -> Int -> Bool
coprime x y = gcd x y == 1
main :: IO ()
main = do
x <- getLine
y <- getLine
let value = coprime (read x) (read y)
print value
| zeyuanxy/haskell-playground | ninety-nine-haskell-problems/vol4/33.hs | mit | 174 | 1 | 12 | 56 | 97 | 44 | 53 | 8 | 1 |
{-# LANGUAGE OverloadedStrings #-}
module Day17 (day17, day17', run, Container(..), minimalCombinations, parseInput, validCombinations) where
import Data.Function (on)
import Data.List (delete, groupBy, sortBy)
import Data.Maybe (mapMaybe)
import Data.Text (Text, pack)
data Container = Container Int deriving (Show, Eq)
containerSize :: Container -> Int
containerSize (Container s) = s
parseInput :: String -> [Container]
parseInput = map (Container . read) . lines
validCombinations :: Int -> [Container] -> [[Container]]
validCombinations 0 _ = [[]]
validCombinations _ [] = []
validCombinations t (c:cs)
| t < 0 = []
| otherwise = prependedCombinations ++ validCombinations t cs
where
prependedCombinations :: [[Container]]
prependedCombinations = map (c :) $ validCombinations (t - containerSize c) cs
minimalCombinations :: Int -> [Container] -> [[Container]]
minimalCombinations t cs =
head
. groupBy ((==) `on` length)
. sortBy (compare `on` length)
$ validCombinations t cs
day17 :: String -> Int
day17 = length . validCombinations 150 . parseInput
day17' :: String -> Int
day17' = length . minimalCombinations 150 . parseInput
-- Input
run :: IO ()
run = do
putStrLn "Day 17 results: "
input <- readFile "inputs/day17.txt"
putStrLn $ " " ++ show (day17 input)
putStrLn $ " " ++ show (day17' input)
| brianshourd/adventOfCode2015 | src/Day17.hs | mit | 1,374 | 0 | 11 | 262 | 494 | 267 | 227 | 35 | 1 |
{-# OPTIONS -fno-warn-type-defaults #-}
{-| Constants contains the Haskell constants
The constants in this module are used in Haskell and are also
converted to Python.
Do not write any definitions in this file other than constants. Do
not even write helper functions. The definitions in this module are
automatically stripped to build the Makefile.am target
'ListConstants.hs'. If there are helper functions in this module,
they will also be dragged and it will cause compilation to fail.
Therefore, all helper functions should go to a separate module and
imported.
-}
{-
Copyright (C) 2013, 2014 Google Inc.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
-}
module Ganeti.Constants where
import Control.Arrow ((***))
import Data.List ((\\))
import Data.Map (Map)
import qualified Data.Map as Map (empty, fromList, keys, insert)
import qualified AutoConf
import Ganeti.ConstantUtils (PythonChar(..), FrozenSet, Protocol(..),
buildVersion)
import qualified Ganeti.ConstantUtils as ConstantUtils
import Ganeti.HTools.Types (AutoRepairResult(..), AutoRepairType(..))
import qualified Ganeti.HTools.Types as Types
import Ganeti.Logging (SyslogUsage(..))
import qualified Ganeti.Logging as Logging (syslogUsageToRaw)
import qualified Ganeti.Runtime as Runtime
import Ganeti.Runtime (GanetiDaemon(..), MiscGroup(..), GanetiGroup(..),
ExtraLogReason(..))
import Ganeti.THH (PyValueEx(..))
import Ganeti.Types
import qualified Ganeti.Types as Types
import Ganeti.Confd.Types (ConfdRequestType(..), ConfdReqField(..),
ConfdReplyStatus(..), ConfdNodeRole(..),
ConfdErrorType(..))
import qualified Ganeti.Confd.Types as Types
{-# ANN module "HLint: ignore Use camelCase" #-}
-- * 'autoconf' constants for Python only ('autotools/build-bash-completion')
htoolsProgs :: [String]
htoolsProgs = AutoConf.htoolsProgs
-- * 'autoconf' constants for Python only ('lib/constants.py')
drbdBarriers :: String
drbdBarriers = AutoConf.drbdBarriers
drbdNoMetaFlush :: Bool
drbdNoMetaFlush = AutoConf.drbdNoMetaFlush
lvmStripecount :: Int
lvmStripecount = AutoConf.lvmStripecount
hasGnuLn :: Bool
hasGnuLn = AutoConf.hasGnuLn
-- * 'autoconf' constants for Python only ('lib/pathutils.py')
-- ** Build-time constants
exportDir :: String
exportDir = AutoConf.exportDir
osSearchPath :: [String]
osSearchPath = AutoConf.osSearchPath
esSearchPath :: [String]
esSearchPath = AutoConf.esSearchPath
sshConfigDir :: String
sshConfigDir = AutoConf.sshConfigDir
xenConfigDir :: String
xenConfigDir = AutoConf.xenConfigDir
sysconfdir :: String
sysconfdir = AutoConf.sysconfdir
toolsdir :: String
toolsdir = AutoConf.toolsdir
localstatedir :: String
localstatedir = AutoConf.localstatedir
-- ** Paths which don't change for a virtual cluster
pkglibdir :: String
pkglibdir = AutoConf.pkglibdir
sharedir :: String
sharedir = AutoConf.sharedir
-- * 'autoconf' constants for Python only ('lib/build/sphinx_ext.py')
manPages :: Map String Int
manPages = Map.fromList AutoConf.manPages
-- * 'autoconf' constants for QA cluster only ('qa/qa_cluster.py')
versionedsharedir :: String
versionedsharedir = AutoConf.versionedsharedir
-- * 'autoconf' constants for Python only ('tests/py/docs_unittest.py')
gntScripts :: [String]
gntScripts = AutoConf.gntScripts
-- * Various versions
releaseVersion :: String
releaseVersion = AutoConf.packageVersion
versionMajor :: Int
versionMajor = AutoConf.versionMajor
versionMinor :: Int
versionMinor = AutoConf.versionMinor
versionRevision :: Int
versionRevision = AutoConf.versionRevision
dirVersion :: String
dirVersion = AutoConf.dirVersion
osApiV10 :: Int
osApiV10 = 10
osApiV15 :: Int
osApiV15 = 15
osApiV20 :: Int
osApiV20 = 20
osApiVersions :: FrozenSet Int
osApiVersions = ConstantUtils.mkSet [osApiV10, osApiV15, osApiV20]
exportVersion :: Int
exportVersion = 0
rapiVersion :: Int
rapiVersion = 2
configMajor :: Int
configMajor = AutoConf.versionMajor
configMinor :: Int
configMinor = AutoConf.versionMinor
-- | The configuration is supposed to remain stable across
-- revisions. Therefore, the revision number is cleared to '0'.
configRevision :: Int
configRevision = 0
configVersion :: Int
configVersion = buildVersion configMajor configMinor configRevision
-- | Similarly to the configuration (see 'configRevision'), the
-- protocols are supposed to remain stable across revisions.
protocolVersion :: Int
protocolVersion = buildVersion configMajor configMinor configRevision
-- * User separation
daemonsGroup :: String
daemonsGroup = Runtime.daemonGroup (ExtraGroup DaemonsGroup)
adminGroup :: String
adminGroup = Runtime.daemonGroup (ExtraGroup AdminGroup)
masterdUser :: String
masterdUser = Runtime.daemonUser GanetiMasterd
masterdGroup :: String
masterdGroup = Runtime.daemonGroup (DaemonGroup GanetiMasterd)
metadUser :: String
metadUser = Runtime.daemonUser GanetiMetad
metadGroup :: String
metadGroup = Runtime.daemonGroup (DaemonGroup GanetiMetad)
rapiUser :: String
rapiUser = Runtime.daemonUser GanetiRapi
rapiGroup :: String
rapiGroup = Runtime.daemonGroup (DaemonGroup GanetiRapi)
confdUser :: String
confdUser = Runtime.daemonUser GanetiConfd
confdGroup :: String
confdGroup = Runtime.daemonGroup (DaemonGroup GanetiConfd)
kvmdUser :: String
kvmdUser = Runtime.daemonUser GanetiKvmd
kvmdGroup :: String
kvmdGroup = Runtime.daemonGroup (DaemonGroup GanetiKvmd)
luxidUser :: String
luxidUser = Runtime.daemonUser GanetiLuxid
luxidGroup :: String
luxidGroup = Runtime.daemonGroup (DaemonGroup GanetiLuxid)
nodedUser :: String
nodedUser = Runtime.daemonUser GanetiNoded
nodedGroup :: String
nodedGroup = Runtime.daemonGroup (DaemonGroup GanetiNoded)
mondUser :: String
mondUser = Runtime.daemonUser GanetiMond
mondGroup :: String
mondGroup = Runtime.daemonGroup (DaemonGroup GanetiMond)
sshLoginUser :: String
sshLoginUser = AutoConf.sshLoginUser
sshConsoleUser :: String
sshConsoleUser = AutoConf.sshConsoleUser
-- * Cpu pinning separators and constants
cpuPinningSep :: String
cpuPinningSep = ":"
cpuPinningAll :: String
cpuPinningAll = "all"
-- | Internal representation of "all"
cpuPinningAllVal :: Int
cpuPinningAllVal = -1
-- | One "all" entry in a CPU list means CPU pinning is off
cpuPinningOff :: [Int]
cpuPinningOff = [cpuPinningAllVal]
-- | A Xen-specific implementation detail is that there is no way to
-- actually say "use any cpu for pinning" in a Xen configuration file,
-- as opposed to the command line, where you can say
-- @
-- xm vcpu-pin <domain> <vcpu> all
-- @
--
-- The workaround used in Xen is "0-63" (see source code function
-- "xm_vcpu_pin" in @<xen-source>/tools/python/xen/xm/main.py@).
--
-- To support future changes, the following constant is treated as a
-- blackbox string that simply means "use any cpu for pinning under
-- xen".
cpuPinningAllXen :: String
cpuPinningAllXen = "0-63"
-- | A KVM-specific implementation detail - the following value is
-- used to set CPU affinity to all processors (--0 through --31), per
-- taskset man page.
--
-- FIXME: This only works for machines with up to 32 CPU cores
cpuPinningAllKvm :: Int
cpuPinningAllKvm = 0xFFFFFFFF
-- * Wipe
ddCmd :: String
ddCmd = "dd"
-- | 1GB
maxWipeChunk :: Int
maxWipeChunk = 1024
minWipeChunkPercent :: Int
minWipeChunkPercent = 10
-- * Directories
runDirsMode :: Int
runDirsMode = 0o775
secureDirMode :: Int
secureDirMode = 0o700
secureFileMode :: Int
secureFileMode = 0o600
adoptableBlockdevRoot :: String
adoptableBlockdevRoot = "/dev/disk/"
-- * 'autoconf' enable/disable
enableConfd :: Bool
enableConfd = AutoConf.enableConfd
enableMond :: Bool
enableMond = AutoConf.enableMond
enableRestrictedCommands :: Bool
enableRestrictedCommands = AutoConf.enableRestrictedCommands
-- * SSH constants
ssh :: String
ssh = "ssh"
scp :: String
scp = "scp"
-- * Daemons
confd :: String
confd = Runtime.daemonName GanetiConfd
masterd :: String
masterd = Runtime.daemonName GanetiMasterd
metad :: String
metad = Runtime.daemonName GanetiMetad
mond :: String
mond = Runtime.daemonName GanetiMond
noded :: String
noded = Runtime.daemonName GanetiNoded
luxid :: String
luxid = Runtime.daemonName GanetiLuxid
rapi :: String
rapi = Runtime.daemonName GanetiRapi
kvmd :: String
kvmd = Runtime.daemonName GanetiKvmd
daemons :: FrozenSet String
daemons =
ConstantUtils.mkSet [confd,
luxid,
masterd,
mond,
noded,
rapi]
defaultConfdPort :: Int
defaultConfdPort = 1814
defaultMondPort :: Int
defaultMondPort = 1815
defaultMetadPort :: Int
defaultMetadPort = 8080
defaultNodedPort :: Int
defaultNodedPort = 1811
defaultRapiPort :: Int
defaultRapiPort = 5080
daemonsPorts :: Map String (Protocol, Int)
daemonsPorts =
Map.fromList
[ (confd, (Udp, defaultConfdPort))
, (metad, (Tcp, defaultMetadPort))
, (mond, (Tcp, defaultMondPort))
, (noded, (Tcp, defaultNodedPort))
, (rapi, (Tcp, defaultRapiPort))
, (ssh, (Tcp, 22))
]
firstDrbdPort :: Int
firstDrbdPort = 11000
lastDrbdPort :: Int
lastDrbdPort = 14999
daemonsLogbase :: Map String String
daemonsLogbase =
Map.fromList
[ (Runtime.daemonName d, Runtime.daemonLogBase d) | d <- [minBound..] ]
daemonsExtraLogbase :: Map String (Map String String)
daemonsExtraLogbase =
Map.fromList $
map (Runtime.daemonName *** id)
[ (GanetiMond, Map.fromList
[ ("access", Runtime.daemonsExtraLogbase GanetiMond AccessLog)
, ("error", Runtime.daemonsExtraLogbase GanetiMond ErrorLog)
])
]
extraLogreasonAccess :: String
extraLogreasonAccess = Runtime.daemonsExtraLogbase GanetiMond AccessLog
extraLogreasonError :: String
extraLogreasonError = Runtime.daemonsExtraLogbase GanetiMond ErrorLog
devConsole :: String
devConsole = ConstantUtils.devConsole
procMounts :: String
procMounts = "/proc/mounts"
-- * Luxi (Local UniX Interface) related constants
luxiEom :: PythonChar
luxiEom = PythonChar '\x03'
-- | Environment variable for the luxi override socket
luxiOverride :: String
luxiOverride = "FORCE_LUXI_SOCKET"
luxiOverrideMaster :: String
luxiOverrideMaster = "master"
luxiOverrideQuery :: String
luxiOverrideQuery = "query"
luxiVersion :: Int
luxiVersion = configVersion
-- * Syslog
syslogUsage :: String
syslogUsage = AutoConf.syslogUsage
syslogNo :: String
syslogNo = Logging.syslogUsageToRaw SyslogNo
syslogYes :: String
syslogYes = Logging.syslogUsageToRaw SyslogYes
syslogOnly :: String
syslogOnly = Logging.syslogUsageToRaw SyslogOnly
syslogSocket :: String
syslogSocket = "/dev/log"
exportConfFile :: String
exportConfFile = "config.ini"
-- * Xen
xenBootloader :: String
xenBootloader = AutoConf.xenBootloader
xenCmdXl :: String
xenCmdXl = "xl"
xenCmdXm :: String
xenCmdXm = "xm"
xenInitrd :: String
xenInitrd = AutoConf.xenInitrd
xenKernel :: String
xenKernel = AutoConf.xenKernel
-- FIXME: perhaps rename to 'validXenCommands' for consistency with
-- other constants
knownXenCommands :: FrozenSet String
knownXenCommands = ConstantUtils.mkSet [xenCmdXl, xenCmdXm]
-- * KVM and socat
kvmPath :: String
kvmPath = AutoConf.kvmPath
kvmKernel :: String
kvmKernel = AutoConf.kvmKernel
socatEscapeCode :: String
socatEscapeCode = "0x1d"
socatPath :: String
socatPath = AutoConf.socatPath
socatUseCompress :: Bool
socatUseCompress = AutoConf.socatUseCompress
socatUseEscape :: Bool
socatUseEscape = AutoConf.socatUseEscape
-- * Console types
-- | Display a message for console access
consMessage :: String
consMessage = "msg"
-- | Console as SPICE server
consSpice :: String
consSpice = "spice"
-- | Console as SSH command
consSsh :: String
consSsh = "ssh"
-- | Console as VNC server
consVnc :: String
consVnc = "vnc"
consAll :: FrozenSet String
consAll = ConstantUtils.mkSet [consMessage, consSpice, consSsh, consVnc]
-- | RSA key bit length
--
-- For RSA keys more bits are better, but they also make operations
-- more expensive. NIST SP 800-131 recommends a minimum of 2048 bits
-- from the year 2010 on.
rsaKeyBits :: Int
rsaKeyBits = 2048
-- | Ciphers allowed for SSL connections.
--
-- For the format, see ciphers(1). A better way to disable ciphers
-- would be to use the exclamation mark (!), but socat versions below
-- 1.5 can't parse exclamation marks in options properly. When
-- modifying the ciphers, ensure not to accidentially add something
-- after it's been removed. Use the "openssl" utility to check the
-- allowed ciphers, e.g. "openssl ciphers -v HIGH:-DES".
opensslCiphers :: String
opensslCiphers = "HIGH:-DES:-3DES:-EXPORT:-ADH"
-- * X509
-- | commonName (CN) used in certificates
x509CertCn :: String
x509CertCn = "ganeti.example.com"
-- | Default validity of certificates in days
x509CertDefaultValidity :: Int
x509CertDefaultValidity = 365 * 5
x509CertSignatureHeader :: String
x509CertSignatureHeader = "X-Ganeti-Signature"
-- | Digest used to sign certificates ("openssl x509" uses SHA1 by default)
x509CertSignDigest :: String
x509CertSignDigest = "SHA1"
-- * Import/export daemon mode
iemExport :: String
iemExport = "export"
iemImport :: String
iemImport = "import"
-- * Import/export transport compression
iecGzip :: String
iecGzip = "gzip"
iecNone :: String
iecNone = "none"
iecAll :: [String]
iecAll = [iecGzip, iecNone]
ieCustomSize :: String
ieCustomSize = "fd"
-- * Import/export I/O
-- | Direct file I/O, equivalent to a shell's I/O redirection using
-- '<' or '>'
ieioFile :: String
ieioFile = "file"
-- | Raw block device I/O using "dd"
ieioRawDisk :: String
ieioRawDisk = "raw"
-- | OS definition import/export script
ieioScript :: String
ieioScript = "script"
-- * Values
valueDefault :: String
valueDefault = "default"
valueAuto :: String
valueAuto = "auto"
valueGenerate :: String
valueGenerate = "generate"
valueNone :: String
valueNone = "none"
valueTrue :: String
valueTrue = "true"
valueFalse :: String
valueFalse = "false"
-- * Hooks
hooksNameCfgupdate :: String
hooksNameCfgupdate = "config-update"
hooksNameWatcher :: String
hooksNameWatcher = "watcher"
hooksPath :: String
hooksPath = "/sbin:/bin:/usr/sbin:/usr/bin"
hooksPhasePost :: String
hooksPhasePost = "post"
hooksPhasePre :: String
hooksPhasePre = "pre"
hooksVersion :: Int
hooksVersion = 2
-- * Hooks subject type (what object type does the LU deal with)
htypeCluster :: String
htypeCluster = "CLUSTER"
htypeGroup :: String
htypeGroup = "GROUP"
htypeInstance :: String
htypeInstance = "INSTANCE"
htypeNetwork :: String
htypeNetwork = "NETWORK"
htypeNode :: String
htypeNode = "NODE"
-- * Hkr
hkrSkip :: Int
hkrSkip = 0
hkrFail :: Int
hkrFail = 1
hkrSuccess :: Int
hkrSuccess = 2
-- * Storage types
stBlock :: String
stBlock = Types.storageTypeToRaw StorageBlock
stDiskless :: String
stDiskless = Types.storageTypeToRaw StorageDiskless
stExt :: String
stExt = Types.storageTypeToRaw StorageExt
stFile :: String
stFile = Types.storageTypeToRaw StorageFile
stSharedFile :: String
stSharedFile = Types.storageTypeToRaw StorageSharedFile
stLvmPv :: String
stLvmPv = Types.storageTypeToRaw StorageLvmPv
stLvmVg :: String
stLvmVg = Types.storageTypeToRaw StorageLvmVg
stRados :: String
stRados = Types.storageTypeToRaw StorageRados
storageTypes :: FrozenSet String
storageTypes = ConstantUtils.mkSet $ map Types.storageTypeToRaw [minBound..]
-- | The set of storage types for which full storage reporting is available
stsReport :: FrozenSet String
stsReport = ConstantUtils.mkSet [stFile, stLvmPv, stLvmVg]
-- | The set of storage types for which node storage reporting is available
-- | (as used by LUQueryNodeStorage)
stsReportNodeStorage :: FrozenSet String
stsReportNodeStorage = ConstantUtils.union stsReport $
ConstantUtils.mkSet [stSharedFile]
-- * Storage fields
-- ** First two are valid in LU context only, not passed to backend
sfNode :: String
sfNode = "node"
sfType :: String
sfType = "type"
-- ** and the rest are valid in backend
sfAllocatable :: String
sfAllocatable = Types.storageFieldToRaw SFAllocatable
sfFree :: String
sfFree = Types.storageFieldToRaw SFFree
sfName :: String
sfName = Types.storageFieldToRaw SFName
sfSize :: String
sfSize = Types.storageFieldToRaw SFSize
sfUsed :: String
sfUsed = Types.storageFieldToRaw SFUsed
validStorageFields :: FrozenSet String
validStorageFields =
ConstantUtils.mkSet $ map Types.storageFieldToRaw [minBound..] ++
[sfNode, sfType]
modifiableStorageFields :: Map String (FrozenSet String)
modifiableStorageFields =
Map.fromList [(Types.storageTypeToRaw StorageLvmPv,
ConstantUtils.mkSet [sfAllocatable])]
-- * Storage operations
soFixConsistency :: String
soFixConsistency = "fix-consistency"
validStorageOperations :: Map String (FrozenSet String)
validStorageOperations =
Map.fromList [(Types.storageTypeToRaw StorageLvmVg,
ConstantUtils.mkSet [soFixConsistency])]
-- * Volume fields
vfDev :: String
vfDev = "dev"
vfInstance :: String
vfInstance = "instance"
vfName :: String
vfName = "name"
vfNode :: String
vfNode = "node"
vfPhys :: String
vfPhys = "phys"
vfSize :: String
vfSize = "size"
vfVg :: String
vfVg = "vg"
-- * Local disk status
ldsFaulty :: Int
ldsFaulty = Types.localDiskStatusToRaw DiskStatusFaulty
ldsOkay :: Int
ldsOkay = Types.localDiskStatusToRaw DiskStatusOk
ldsUnknown :: Int
ldsUnknown = Types.localDiskStatusToRaw DiskStatusUnknown
ldsNames :: Map Int String
ldsNames =
Map.fromList [ (Types.localDiskStatusToRaw ds,
localDiskStatusName ds) | ds <- [minBound..] ]
-- * Disk template types
dtDiskless :: String
dtDiskless = Types.diskTemplateToRaw DTDiskless
dtFile :: String
dtFile = Types.diskTemplateToRaw DTFile
dtSharedFile :: String
dtSharedFile = Types.diskTemplateToRaw DTSharedFile
dtPlain :: String
dtPlain = Types.diskTemplateToRaw DTPlain
dtBlock :: String
dtBlock = Types.diskTemplateToRaw DTBlock
dtDrbd8 :: String
dtDrbd8 = Types.diskTemplateToRaw DTDrbd8
dtRbd :: String
dtRbd = Types.diskTemplateToRaw DTRbd
dtExt :: String
dtExt = Types.diskTemplateToRaw DTExt
dtGluster :: String
dtGluster = Types.diskTemplateToRaw DTGluster
-- | This is used to order determine the default disk template when
-- the list of enabled disk templates is inferred from the current
-- state of the cluster. This only happens on an upgrade from a
-- version of Ganeti that did not support the 'enabled_disk_templates'
-- so far.
diskTemplatePreference :: [String]
diskTemplatePreference =
map Types.diskTemplateToRaw
[DTBlock, DTDiskless, DTDrbd8, DTExt, DTFile,
DTPlain, DTRbd, DTSharedFile, DTGluster]
diskTemplates :: FrozenSet String
diskTemplates = ConstantUtils.mkSet $ map Types.diskTemplateToRaw [minBound..]
-- | Disk templates that are enabled by default
defaultEnabledDiskTemplates :: [String]
defaultEnabledDiskTemplates = map Types.diskTemplateToRaw [DTDrbd8, DTPlain]
-- | Mapping of disk templates to storage types
mapDiskTemplateStorageType :: Map String String
mapDiskTemplateStorageType =
Map.fromList $
map (Types.diskTemplateToRaw *** Types.storageTypeToRaw)
[(DTBlock, StorageBlock),
(DTDrbd8, StorageLvmVg),
(DTExt, StorageExt),
(DTSharedFile, StorageSharedFile),
(DTFile, StorageFile),
(DTDiskless, StorageDiskless),
(DTPlain, StorageLvmVg),
(DTRbd, StorageRados),
(DTGluster, StorageSharedFile)]
-- | The set of network-mirrored disk templates
dtsIntMirror :: FrozenSet String
dtsIntMirror = ConstantUtils.mkSet [dtDrbd8]
-- | 'DTDiskless' is 'trivially' externally mirrored
dtsExtMirror :: FrozenSet String
dtsExtMirror =
ConstantUtils.mkSet $
map Types.diskTemplateToRaw
[DTDiskless, DTBlock, DTExt, DTSharedFile, DTRbd, DTGluster]
-- | The set of non-lvm-based disk templates
dtsNotLvm :: FrozenSet String
dtsNotLvm =
ConstantUtils.mkSet $
map Types.diskTemplateToRaw
[DTSharedFile, DTDiskless, DTBlock, DTExt, DTFile, DTRbd, DTGluster]
-- | The set of disk templates which can be grown
dtsGrowable :: FrozenSet String
dtsGrowable =
ConstantUtils.mkSet $
map Types.diskTemplateToRaw
[DTSharedFile, DTDrbd8, DTPlain, DTExt, DTFile, DTRbd, DTGluster]
-- | The set of disk templates that allow adoption
dtsMayAdopt :: FrozenSet String
dtsMayAdopt =
ConstantUtils.mkSet $ map Types.diskTemplateToRaw [DTBlock, DTPlain]
-- | The set of disk templates that *must* use adoption
dtsMustAdopt :: FrozenSet String
dtsMustAdopt = ConstantUtils.mkSet [Types.diskTemplateToRaw DTBlock]
-- | The set of disk templates that allow migrations
dtsMirrored :: FrozenSet String
dtsMirrored = dtsIntMirror `ConstantUtils.union` dtsExtMirror
-- | The set of file based disk templates
dtsFilebased :: FrozenSet String
dtsFilebased =
ConstantUtils.mkSet $ map Types.diskTemplateToRaw
[DTSharedFile, DTFile, DTGluster]
-- | The set of disk templates that can be moved by copying
--
-- Note: a requirement is that they're not accessed externally or
-- shared between nodes; in particular, sharedfile is not suitable.
dtsCopyable :: FrozenSet String
dtsCopyable =
ConstantUtils.mkSet $ map Types.diskTemplateToRaw [DTPlain, DTFile]
-- | The set of disk templates that are supported by exclusive_storage
dtsExclStorage :: FrozenSet String
dtsExclStorage = ConstantUtils.mkSet $ map Types.diskTemplateToRaw [DTPlain]
-- | Templates for which we don't perform checks on free space
dtsNoFreeSpaceCheck :: FrozenSet String
dtsNoFreeSpaceCheck =
ConstantUtils.mkSet $
map Types.diskTemplateToRaw [DTExt, DTSharedFile, DTFile, DTRbd, DTGluster]
dtsBlock :: FrozenSet String
dtsBlock =
ConstantUtils.mkSet $
map Types.diskTemplateToRaw [DTPlain, DTDrbd8, DTBlock, DTRbd, DTExt]
-- | The set of lvm-based disk templates
dtsLvm :: FrozenSet String
dtsLvm = diskTemplates `ConstantUtils.difference` dtsNotLvm
-- | The set of lvm-based disk templates
dtsHaveAccess :: FrozenSet String
dtsHaveAccess = ConstantUtils.mkSet $
map Types.diskTemplateToRaw [DTRbd, DTGluster]
-- * Drbd
drbdHmacAlg :: String
drbdHmacAlg = "md5"
drbdDefaultNetProtocol :: String
drbdDefaultNetProtocol = "C"
drbdMigrationNetProtocol :: String
drbdMigrationNetProtocol = "C"
drbdStatusFile :: String
drbdStatusFile = "/proc/drbd"
-- | Size of DRBD meta block device
drbdMetaSize :: Int
drbdMetaSize = 128
-- * Drbd barrier types
drbdBDiskBarriers :: String
drbdBDiskBarriers = "b"
drbdBDiskDrain :: String
drbdBDiskDrain = "d"
drbdBDiskFlush :: String
drbdBDiskFlush = "f"
drbdBNone :: String
drbdBNone = "n"
-- | Valid barrier combinations: "n" or any non-null subset of "bfd"
drbdValidBarrierOpt :: FrozenSet (FrozenSet String)
drbdValidBarrierOpt =
ConstantUtils.mkSet
[ ConstantUtils.mkSet [drbdBNone]
, ConstantUtils.mkSet [drbdBDiskBarriers]
, ConstantUtils.mkSet [drbdBDiskDrain]
, ConstantUtils.mkSet [drbdBDiskFlush]
, ConstantUtils.mkSet [drbdBDiskDrain, drbdBDiskFlush]
, ConstantUtils.mkSet [drbdBDiskBarriers, drbdBDiskDrain]
, ConstantUtils.mkSet [drbdBDiskBarriers, drbdBDiskFlush]
, ConstantUtils.mkSet [drbdBDiskBarriers, drbdBDiskFlush, drbdBDiskDrain]
]
-- | Rbd tool command
rbdCmd :: String
rbdCmd = "rbd"
-- * File backend driver
fdBlktap :: String
fdBlktap = Types.fileDriverToRaw FileBlktap
fdBlktap2 :: String
fdBlktap2 = Types.fileDriverToRaw FileBlktap2
fdLoop :: String
fdLoop = Types.fileDriverToRaw FileLoop
fdDefault :: String
fdDefault = fdLoop
fileDriver :: FrozenSet String
fileDriver =
ConstantUtils.mkSet $
map Types.fileDriverToRaw [minBound..]
-- | The set of drbd-like disk types
dtsDrbd :: FrozenSet String
dtsDrbd = ConstantUtils.mkSet [Types.diskTemplateToRaw DTDrbd8]
-- * Disk access mode
diskRdonly :: String
diskRdonly = Types.diskModeToRaw DiskRdOnly
diskRdwr :: String
diskRdwr = Types.diskModeToRaw DiskRdWr
diskAccessSet :: FrozenSet String
diskAccessSet = ConstantUtils.mkSet $ map Types.diskModeToRaw [minBound..]
-- * Disk replacement mode
replaceDiskAuto :: String
replaceDiskAuto = Types.replaceDisksModeToRaw ReplaceAuto
replaceDiskChg :: String
replaceDiskChg = Types.replaceDisksModeToRaw ReplaceNewSecondary
replaceDiskPri :: String
replaceDiskPri = Types.replaceDisksModeToRaw ReplaceOnPrimary
replaceDiskSec :: String
replaceDiskSec = Types.replaceDisksModeToRaw ReplaceOnSecondary
replaceModes :: FrozenSet String
replaceModes =
ConstantUtils.mkSet $ map Types.replaceDisksModeToRaw [minBound..]
-- * Instance export mode
exportModeLocal :: String
exportModeLocal = Types.exportModeToRaw ExportModeLocal
exportModeRemote :: String
exportModeRemote = Types.exportModeToRaw ExportModeRemote
exportModes :: FrozenSet String
exportModes = ConstantUtils.mkSet $ map Types.exportModeToRaw [minBound..]
-- * Instance creation modes
instanceCreate :: String
instanceCreate = Types.instCreateModeToRaw InstCreate
instanceImport :: String
instanceImport = Types.instCreateModeToRaw InstImport
instanceRemoteImport :: String
instanceRemoteImport = Types.instCreateModeToRaw InstRemoteImport
instanceCreateModes :: FrozenSet String
instanceCreateModes =
ConstantUtils.mkSet $ map Types.instCreateModeToRaw [minBound..]
-- * Remote import/export handshake message and version
rieHandshake :: String
rieHandshake = "Hi, I'm Ganeti"
rieVersion :: Int
rieVersion = 0
-- | Remote import/export certificate validity (seconds)
rieCertValidity :: Int
rieCertValidity = 24 * 60 * 60
-- | Export only: how long to wait per connection attempt (seconds)
rieConnectAttemptTimeout :: Int
rieConnectAttemptTimeout = 20
-- | Export only: number of attempts to connect
rieConnectRetries :: Int
rieConnectRetries = 10
-- | Overall timeout for establishing connection
rieConnectTimeout :: Int
rieConnectTimeout = 180
-- | Give child process up to 5 seconds to exit after sending a signal
childLingerTimeout :: Double
childLingerTimeout = 5.0
-- * Import/export config options
inisectBep :: String
inisectBep = "backend"
inisectExp :: String
inisectExp = "export"
inisectHyp :: String
inisectHyp = "hypervisor"
inisectIns :: String
inisectIns = "instance"
inisectOsp :: String
inisectOsp = "os"
-- * Dynamic device modification
ddmAdd :: String
ddmAdd = Types.ddmFullToRaw DdmFullAdd
ddmModify :: String
ddmModify = Types.ddmFullToRaw DdmFullModify
ddmRemove :: String
ddmRemove = Types.ddmFullToRaw DdmFullRemove
ddmsValues :: FrozenSet String
ddmsValues = ConstantUtils.mkSet [ddmAdd, ddmRemove]
ddmsValuesWithModify :: FrozenSet String
ddmsValuesWithModify = ConstantUtils.mkSet $ map Types.ddmFullToRaw [minBound..]
-- * Common exit codes
exitSuccess :: Int
exitSuccess = 0
exitFailure :: Int
exitFailure = ConstantUtils.exitFailure
exitNotcluster :: Int
exitNotcluster = 5
exitNotmaster :: Int
exitNotmaster = 11
exitNodesetupError :: Int
exitNodesetupError = 12
-- | Need user confirmation
exitConfirmation :: Int
exitConfirmation = 13
-- | Exit code for query operations with unknown fields
exitUnknownField :: Int
exitUnknownField = 14
-- * Tags
tagCluster :: String
tagCluster = Types.tagKindToRaw TagKindCluster
tagInstance :: String
tagInstance = Types.tagKindToRaw TagKindInstance
tagNetwork :: String
tagNetwork = Types.tagKindToRaw TagKindNetwork
tagNode :: String
tagNode = Types.tagKindToRaw TagKindNode
tagNodegroup :: String
tagNodegroup = Types.tagKindToRaw TagKindGroup
validTagTypes :: FrozenSet String
validTagTypes = ConstantUtils.mkSet $ map Types.tagKindToRaw [minBound..]
maxTagLen :: Int
maxTagLen = 128
maxTagsPerObj :: Int
maxTagsPerObj = 4096
-- * Others
defaultBridge :: String
defaultBridge = "xen-br0"
defaultOvs :: String
defaultOvs = "switch1"
-- | 60 MiB/s, expressed in KiB/s
classicDrbdSyncSpeed :: Int
classicDrbdSyncSpeed = 60 * 1024
ip4AddressAny :: String
ip4AddressAny = "0.0.0.0"
ip4AddressLocalhost :: String
ip4AddressLocalhost = "127.0.0.1"
ip6AddressAny :: String
ip6AddressAny = "::"
ip6AddressLocalhost :: String
ip6AddressLocalhost = "::1"
ip4Version :: Int
ip4Version = 4
ip6Version :: Int
ip6Version = 6
validIpVersions :: FrozenSet Int
validIpVersions = ConstantUtils.mkSet [ip4Version, ip6Version]
tcpPingTimeout :: Int
tcpPingTimeout = 10
defaultVg :: String
defaultVg = "xenvg"
defaultDrbdHelper :: String
defaultDrbdHelper = "/bin/true"
minVgSize :: Int
minVgSize = 20480
defaultMacPrefix :: String
defaultMacPrefix = "aa:00:00"
-- | Default maximum instance wait time (seconds)
defaultShutdownTimeout :: Int
defaultShutdownTimeout = 120
-- | Node clock skew (seconds)
nodeMaxClockSkew :: Int
nodeMaxClockSkew = 150
-- | Time for an intra-cluster disk transfer to wait for a connection
diskTransferConnectTimeout :: Int
diskTransferConnectTimeout = 60
-- | Disk index separator
diskSeparator :: String
diskSeparator = AutoConf.diskSeparator
ipCommandPath :: String
ipCommandPath = AutoConf.ipPath
-- | Key for job IDs in opcode result
jobIdsKey :: String
jobIdsKey = "jobs"
-- * Runparts results
runpartsErr :: Int
runpartsErr = 2
runpartsRun :: Int
runpartsRun = 1
runpartsSkip :: Int
runpartsSkip = 0
runpartsStatus :: [Int]
runpartsStatus = [runpartsErr, runpartsRun, runpartsSkip]
-- * RPC
rpcEncodingNone :: Int
rpcEncodingNone = 0
rpcEncodingZlibBase64 :: Int
rpcEncodingZlibBase64 = 1
-- * Timeout table
--
-- Various time constants for the timeout table
rpcTmoUrgent :: Int
rpcTmoUrgent = Types.rpcTimeoutToRaw Urgent
rpcTmoFast :: Int
rpcTmoFast = Types.rpcTimeoutToRaw Fast
rpcTmoNormal :: Int
rpcTmoNormal = Types.rpcTimeoutToRaw Normal
rpcTmoSlow :: Int
rpcTmoSlow = Types.rpcTimeoutToRaw Slow
-- | 'rpcTmo_4hrs' contains an underscore to circumvent a limitation
-- in the 'Ganeti.THH.deCamelCase' function and generate the correct
-- Python name.
rpcTmo_4hrs :: Int
rpcTmo_4hrs = Types.rpcTimeoutToRaw FourHours
-- | 'rpcTmo_1day' contains an underscore to circumvent a limitation
-- in the 'Ganeti.THH.deCamelCase' function and generate the correct
-- Python name.
rpcTmo_1day :: Int
rpcTmo_1day = Types.rpcTimeoutToRaw OneDay
-- | Timeout for connecting to nodes (seconds)
rpcConnectTimeout :: Int
rpcConnectTimeout = 5
-- OS
osScriptCreate :: String
osScriptCreate = "create"
osScriptExport :: String
osScriptExport = "export"
osScriptImport :: String
osScriptImport = "import"
osScriptRename :: String
osScriptRename = "rename"
osScriptVerify :: String
osScriptVerify = "verify"
osScripts :: [String]
osScripts = [osScriptCreate, osScriptExport, osScriptImport, osScriptRename,
osScriptVerify]
osApiFile :: String
osApiFile = "ganeti_api_version"
osVariantsFile :: String
osVariantsFile = "variants.list"
osParametersFile :: String
osParametersFile = "parameters.list"
osValidateParameters :: String
osValidateParameters = "parameters"
osValidateCalls :: FrozenSet String
osValidateCalls = ConstantUtils.mkSet [osValidateParameters]
-- | External Storage (ES) related constants
esActionAttach :: String
esActionAttach = "attach"
esActionCreate :: String
esActionCreate = "create"
esActionDetach :: String
esActionDetach = "detach"
esActionGrow :: String
esActionGrow = "grow"
esActionRemove :: String
esActionRemove = "remove"
esActionSetinfo :: String
esActionSetinfo = "setinfo"
esActionVerify :: String
esActionVerify = "verify"
esScriptCreate :: String
esScriptCreate = esActionCreate
esScriptRemove :: String
esScriptRemove = esActionRemove
esScriptGrow :: String
esScriptGrow = esActionGrow
esScriptAttach :: String
esScriptAttach = esActionAttach
esScriptDetach :: String
esScriptDetach = esActionDetach
esScriptSetinfo :: String
esScriptSetinfo = esActionSetinfo
esScriptVerify :: String
esScriptVerify = esActionVerify
esScripts :: FrozenSet String
esScripts =
ConstantUtils.mkSet [esScriptAttach,
esScriptCreate,
esScriptDetach,
esScriptGrow,
esScriptRemove,
esScriptSetinfo,
esScriptVerify]
esParametersFile :: String
esParametersFile = "parameters.list"
-- * Reboot types
instanceRebootSoft :: String
instanceRebootSoft = Types.rebootTypeToRaw RebootSoft
instanceRebootHard :: String
instanceRebootHard = Types.rebootTypeToRaw RebootHard
instanceRebootFull :: String
instanceRebootFull = Types.rebootTypeToRaw RebootFull
rebootTypes :: FrozenSet String
rebootTypes = ConstantUtils.mkSet $ map Types.rebootTypeToRaw [minBound..]
-- * Instance reboot behaviors
instanceRebootAllowed :: String
instanceRebootAllowed = "reboot"
instanceRebootExit :: String
instanceRebootExit = "exit"
rebootBehaviors :: [String]
rebootBehaviors = [instanceRebootAllowed, instanceRebootExit]
-- * VTypes
vtypeBool :: VType
vtypeBool = VTypeBool
vtypeInt :: VType
vtypeInt = VTypeInt
vtypeMaybeString :: VType
vtypeMaybeString = VTypeMaybeString
-- | Size in MiBs
vtypeSize :: VType
vtypeSize = VTypeSize
vtypeString :: VType
vtypeString = VTypeString
enforceableTypes :: FrozenSet VType
enforceableTypes = ConstantUtils.mkSet [minBound..]
-- | Constant representing that the user does not specify any IP version
ifaceNoIpVersionSpecified :: Int
ifaceNoIpVersionSpecified = 0
validSerialSpeeds :: [Int]
validSerialSpeeds =
[75,
110,
300,
600,
1200,
1800,
2400,
4800,
9600,
14400,
19200,
28800,
38400,
57600,
115200,
230400,
345600,
460800]
-- * HV parameter names (global namespace)
hvAcpi :: String
hvAcpi = "acpi"
hvBlockdevPrefix :: String
hvBlockdevPrefix = "blockdev_prefix"
hvBootloaderArgs :: String
hvBootloaderArgs = "bootloader_args"
hvBootloaderPath :: String
hvBootloaderPath = "bootloader_path"
hvBootOrder :: String
hvBootOrder = "boot_order"
hvCdromImagePath :: String
hvCdromImagePath = "cdrom_image_path"
hvCpuCap :: String
hvCpuCap = "cpu_cap"
hvCpuCores :: String
hvCpuCores = "cpu_cores"
hvCpuMask :: String
hvCpuMask = "cpu_mask"
hvCpuSockets :: String
hvCpuSockets = "cpu_sockets"
hvCpuThreads :: String
hvCpuThreads = "cpu_threads"
hvCpuType :: String
hvCpuType = "cpu_type"
hvCpuWeight :: String
hvCpuWeight = "cpu_weight"
hvDeviceModel :: String
hvDeviceModel = "device_model"
hvDiskCache :: String
hvDiskCache = "disk_cache"
hvDiskType :: String
hvDiskType = "disk_type"
hvInitrdPath :: String
hvInitrdPath = "initrd_path"
hvInitScript :: String
hvInitScript = "init_script"
hvKernelArgs :: String
hvKernelArgs = "kernel_args"
hvKernelPath :: String
hvKernelPath = "kernel_path"
hvKeymap :: String
hvKeymap = "keymap"
hvKvmCdrom2ImagePath :: String
hvKvmCdrom2ImagePath = "cdrom2_image_path"
hvKvmCdromDiskType :: String
hvKvmCdromDiskType = "cdrom_disk_type"
hvKvmExtra :: String
hvKvmExtra = "kvm_extra"
hvKvmFlag :: String
hvKvmFlag = "kvm_flag"
hvKvmFloppyImagePath :: String
hvKvmFloppyImagePath = "floppy_image_path"
hvKvmMachineVersion :: String
hvKvmMachineVersion = "machine_version"
hvKvmPath :: String
hvKvmPath = "kvm_path"
hvKvmSpiceAudioCompr :: String
hvKvmSpiceAudioCompr = "spice_playback_compression"
hvKvmSpiceBind :: String
hvKvmSpiceBind = "spice_bind"
hvKvmSpiceIpVersion :: String
hvKvmSpiceIpVersion = "spice_ip_version"
hvKvmSpiceJpegImgCompr :: String
hvKvmSpiceJpegImgCompr = "spice_jpeg_wan_compression"
hvKvmSpiceLosslessImgCompr :: String
hvKvmSpiceLosslessImgCompr = "spice_image_compression"
hvKvmSpicePasswordFile :: String
hvKvmSpicePasswordFile = "spice_password_file"
hvKvmSpiceStreamingVideoDetection :: String
hvKvmSpiceStreamingVideoDetection = "spice_streaming_video"
hvKvmSpiceTlsCiphers :: String
hvKvmSpiceTlsCiphers = "spice_tls_ciphers"
hvKvmSpiceUseTls :: String
hvKvmSpiceUseTls = "spice_use_tls"
hvKvmSpiceUseVdagent :: String
hvKvmSpiceUseVdagent = "spice_use_vdagent"
hvKvmSpiceZlibGlzImgCompr :: String
hvKvmSpiceZlibGlzImgCompr = "spice_zlib_glz_wan_compression"
hvKvmUseChroot :: String
hvKvmUseChroot = "use_chroot"
hvKvmUserShutdown :: String
hvKvmUserShutdown = "user_shutdown"
hvMemPath :: String
hvMemPath = "mem_path"
hvMigrationBandwidth :: String
hvMigrationBandwidth = "migration_bandwidth"
hvMigrationDowntime :: String
hvMigrationDowntime = "migration_downtime"
hvMigrationMode :: String
hvMigrationMode = "migration_mode"
hvMigrationPort :: String
hvMigrationPort = "migration_port"
hvNicType :: String
hvNicType = "nic_type"
hvPae :: String
hvPae = "pae"
hvPassthrough :: String
hvPassthrough = "pci_pass"
hvRebootBehavior :: String
hvRebootBehavior = "reboot_behavior"
hvRootPath :: String
hvRootPath = "root_path"
hvSecurityDomain :: String
hvSecurityDomain = "security_domain"
hvSecurityModel :: String
hvSecurityModel = "security_model"
hvSerialConsole :: String
hvSerialConsole = "serial_console"
hvSerialSpeed :: String
hvSerialSpeed = "serial_speed"
hvSoundhw :: String
hvSoundhw = "soundhw"
hvUsbDevices :: String
hvUsbDevices = "usb_devices"
hvUsbMouse :: String
hvUsbMouse = "usb_mouse"
hvUseBootloader :: String
hvUseBootloader = "use_bootloader"
hvUseLocaltime :: String
hvUseLocaltime = "use_localtime"
hvVga :: String
hvVga = "vga"
hvVhostNet :: String
hvVhostNet = "vhost_net"
hvVifScript :: String
hvVifScript = "vif_script"
hvVifType :: String
hvVifType = "vif_type"
hvViridian :: String
hvViridian = "viridian"
hvVncBindAddress :: String
hvVncBindAddress = "vnc_bind_address"
hvVncPasswordFile :: String
hvVncPasswordFile = "vnc_password_file"
hvVncTls :: String
hvVncTls = "vnc_tls"
hvVncX509 :: String
hvVncX509 = "vnc_x509_path"
hvVncX509Verify :: String
hvVncX509Verify = "vnc_x509_verify"
hvVnetHdr :: String
hvVnetHdr = "vnet_hdr"
hvXenCmd :: String
hvXenCmd = "xen_cmd"
hvXenCpuid :: String
hvXenCpuid = "cpuid"
hvsParameterTitles :: Map String String
hvsParameterTitles =
Map.fromList
[(hvAcpi, "ACPI"),
(hvBootOrder, "Boot_order"),
(hvCdromImagePath, "CDROM_image_path"),
(hvCpuType, "cpu_type"),
(hvDiskType, "Disk_type"),
(hvInitrdPath, "Initrd_path"),
(hvKernelPath, "Kernel_path"),
(hvNicType, "NIC_type"),
(hvPae, "PAE"),
(hvPassthrough, "pci_pass"),
(hvVncBindAddress, "VNC_bind_address")]
hvsParameters :: FrozenSet String
hvsParameters = ConstantUtils.mkSet $ Map.keys hvsParameterTypes
hvsParameterTypes :: Map String VType
hvsParameterTypes = Map.fromList
[ (hvAcpi, VTypeBool)
, (hvBlockdevPrefix, VTypeString)
, (hvBootloaderArgs, VTypeString)
, (hvBootloaderPath, VTypeString)
, (hvBootOrder, VTypeString)
, (hvCdromImagePath, VTypeString)
, (hvCpuCap, VTypeInt)
, (hvCpuCores, VTypeInt)
, (hvCpuMask, VTypeString)
, (hvCpuSockets, VTypeInt)
, (hvCpuThreads, VTypeInt)
, (hvCpuType, VTypeString)
, (hvCpuWeight, VTypeInt)
, (hvDeviceModel, VTypeString)
, (hvDiskCache, VTypeString)
, (hvDiskType, VTypeString)
, (hvInitrdPath, VTypeString)
, (hvInitScript, VTypeString)
, (hvKernelArgs, VTypeString)
, (hvKernelPath, VTypeString)
, (hvKeymap, VTypeString)
, (hvKvmCdrom2ImagePath, VTypeString)
, (hvKvmCdromDiskType, VTypeString)
, (hvKvmExtra, VTypeString)
, (hvKvmFlag, VTypeString)
, (hvKvmFloppyImagePath, VTypeString)
, (hvKvmMachineVersion, VTypeString)
, (hvKvmPath, VTypeString)
, (hvKvmSpiceAudioCompr, VTypeBool)
, (hvKvmSpiceBind, VTypeString)
, (hvKvmSpiceIpVersion, VTypeInt)
, (hvKvmSpiceJpegImgCompr, VTypeString)
, (hvKvmSpiceLosslessImgCompr, VTypeString)
, (hvKvmSpicePasswordFile, VTypeString)
, (hvKvmSpiceStreamingVideoDetection, VTypeString)
, (hvKvmSpiceTlsCiphers, VTypeString)
, (hvKvmSpiceUseTls, VTypeBool)
, (hvKvmSpiceUseVdagent, VTypeBool)
, (hvKvmSpiceZlibGlzImgCompr, VTypeString)
, (hvKvmUseChroot, VTypeBool)
, (hvKvmUserShutdown, VTypeBool)
, (hvMemPath, VTypeString)
, (hvMigrationBandwidth, VTypeInt)
, (hvMigrationDowntime, VTypeInt)
, (hvMigrationMode, VTypeString)
, (hvMigrationPort, VTypeInt)
, (hvNicType, VTypeString)
, (hvPae, VTypeBool)
, (hvPassthrough, VTypeString)
, (hvRebootBehavior, VTypeString)
, (hvRootPath, VTypeMaybeString)
, (hvSecurityDomain, VTypeString)
, (hvSecurityModel, VTypeString)
, (hvSerialConsole, VTypeBool)
, (hvSerialSpeed, VTypeInt)
, (hvSoundhw, VTypeString)
, (hvUsbDevices, VTypeString)
, (hvUsbMouse, VTypeString)
, (hvUseBootloader, VTypeBool)
, (hvUseLocaltime, VTypeBool)
, (hvVga, VTypeString)
, (hvVhostNet, VTypeBool)
, (hvVifScript, VTypeString)
, (hvVifType, VTypeString)
, (hvViridian, VTypeBool)
, (hvVncBindAddress, VTypeString)
, (hvVncPasswordFile, VTypeString)
, (hvVncTls, VTypeBool)
, (hvVncX509, VTypeString)
, (hvVncX509Verify, VTypeBool)
, (hvVnetHdr, VTypeBool)
, (hvXenCmd, VTypeString)
, (hvXenCpuid, VTypeString)
]
-- * Migration statuses
hvMigrationActive :: String
hvMigrationActive = "active"
hvMigrationCancelled :: String
hvMigrationCancelled = "cancelled"
hvMigrationCompleted :: String
hvMigrationCompleted = "completed"
hvMigrationFailed :: String
hvMigrationFailed = "failed"
hvMigrationValidStatuses :: FrozenSet String
hvMigrationValidStatuses =
ConstantUtils.mkSet [hvMigrationActive,
hvMigrationCancelled,
hvMigrationCompleted,
hvMigrationFailed]
hvMigrationFailedStatuses :: FrozenSet String
hvMigrationFailedStatuses =
ConstantUtils.mkSet [hvMigrationFailed, hvMigrationCancelled]
-- | KVM-specific statuses
--
-- FIXME: this constant seems unnecessary
hvKvmMigrationValidStatuses :: FrozenSet String
hvKvmMigrationValidStatuses = hvMigrationValidStatuses
-- | Node info keys
hvNodeinfoKeyVersion :: String
hvNodeinfoKeyVersion = "hv_version"
-- * Hypervisor state
hvstCpuNode :: String
hvstCpuNode = "cpu_node"
hvstCpuTotal :: String
hvstCpuTotal = "cpu_total"
hvstMemoryHv :: String
hvstMemoryHv = "mem_hv"
hvstMemoryNode :: String
hvstMemoryNode = "mem_node"
hvstMemoryTotal :: String
hvstMemoryTotal = "mem_total"
hvstsParameters :: FrozenSet String
hvstsParameters =
ConstantUtils.mkSet [hvstCpuNode,
hvstCpuTotal,
hvstMemoryHv,
hvstMemoryNode,
hvstMemoryTotal]
hvstDefaults :: Map String Int
hvstDefaults =
Map.fromList
[(hvstCpuNode, 1),
(hvstCpuTotal, 1),
(hvstMemoryHv, 0),
(hvstMemoryTotal, 0),
(hvstMemoryNode, 0)]
hvstsParameterTypes :: Map String VType
hvstsParameterTypes =
Map.fromList [(hvstMemoryTotal, VTypeInt),
(hvstMemoryNode, VTypeInt),
(hvstMemoryHv, VTypeInt),
(hvstCpuTotal, VTypeInt),
(hvstCpuNode, VTypeInt)]
-- * Disk state
dsDiskOverhead :: String
dsDiskOverhead = "disk_overhead"
dsDiskReserved :: String
dsDiskReserved = "disk_reserved"
dsDiskTotal :: String
dsDiskTotal = "disk_total"
dsDefaults :: Map String Int
dsDefaults =
Map.fromList
[(dsDiskTotal, 0),
(dsDiskReserved, 0),
(dsDiskOverhead, 0)]
dssParameterTypes :: Map String VType
dssParameterTypes =
Map.fromList [(dsDiskTotal, VTypeInt),
(dsDiskReserved, VTypeInt),
(dsDiskOverhead, VTypeInt)]
dssParameters :: FrozenSet String
dssParameters =
ConstantUtils.mkSet [dsDiskTotal, dsDiskReserved, dsDiskOverhead]
dsValidTypes :: FrozenSet String
dsValidTypes = ConstantUtils.mkSet [Types.diskTemplateToRaw DTPlain]
-- Backend parameter names
beAlwaysFailover :: String
beAlwaysFailover = "always_failover"
beAutoBalance :: String
beAutoBalance = "auto_balance"
beMaxmem :: String
beMaxmem = "maxmem"
-- | Deprecated and replaced by max and min mem
beMemory :: String
beMemory = "memory"
beMinmem :: String
beMinmem = "minmem"
beSpindleUse :: String
beSpindleUse = "spindle_use"
beVcpus :: String
beVcpus = "vcpus"
besParameterTypes :: Map String VType
besParameterTypes =
Map.fromList [(beAlwaysFailover, VTypeBool),
(beAutoBalance, VTypeBool),
(beMaxmem, VTypeSize),
(beMinmem, VTypeSize),
(beSpindleUse, VTypeInt),
(beVcpus, VTypeInt)]
besParameterTitles :: Map String String
besParameterTitles =
Map.fromList [(beAutoBalance, "Auto_balance"),
(beMinmem, "ConfigMinMem"),
(beVcpus, "ConfigVCPUs"),
(beMaxmem, "ConfigMaxMem")]
besParameterCompat :: Map String VType
besParameterCompat = Map.insert beMemory VTypeSize besParameterTypes
besParameters :: FrozenSet String
besParameters =
ConstantUtils.mkSet [beAlwaysFailover,
beAutoBalance,
beMaxmem,
beMinmem,
beSpindleUse,
beVcpus]
-- | Instance specs
--
-- FIXME: these should be associated with 'Ganeti.HTools.Types.ISpec'
ispecMemSize :: String
ispecMemSize = ConstantUtils.ispecMemSize
ispecCpuCount :: String
ispecCpuCount = ConstantUtils.ispecCpuCount
ispecDiskCount :: String
ispecDiskCount = ConstantUtils.ispecDiskCount
ispecDiskSize :: String
ispecDiskSize = ConstantUtils.ispecDiskSize
ispecNicCount :: String
ispecNicCount = ConstantUtils.ispecNicCount
ispecSpindleUse :: String
ispecSpindleUse = ConstantUtils.ispecSpindleUse
ispecsParameterTypes :: Map String VType
ispecsParameterTypes =
Map.fromList
[(ConstantUtils.ispecDiskSize, VTypeInt),
(ConstantUtils.ispecCpuCount, VTypeInt),
(ConstantUtils.ispecSpindleUse, VTypeInt),
(ConstantUtils.ispecMemSize, VTypeInt),
(ConstantUtils.ispecNicCount, VTypeInt),
(ConstantUtils.ispecDiskCount, VTypeInt)]
ispecsParameters :: FrozenSet String
ispecsParameters =
ConstantUtils.mkSet [ConstantUtils.ispecCpuCount,
ConstantUtils.ispecDiskCount,
ConstantUtils.ispecDiskSize,
ConstantUtils.ispecMemSize,
ConstantUtils.ispecNicCount,
ConstantUtils.ispecSpindleUse]
ispecsMinmax :: String
ispecsMinmax = ConstantUtils.ispecsMinmax
ispecsMax :: String
ispecsMax = "max"
ispecsMin :: String
ispecsMin = "min"
ispecsStd :: String
ispecsStd = ConstantUtils.ispecsStd
ipolicyDts :: String
ipolicyDts = ConstantUtils.ipolicyDts
ipolicyVcpuRatio :: String
ipolicyVcpuRatio = ConstantUtils.ipolicyVcpuRatio
ipolicySpindleRatio :: String
ipolicySpindleRatio = ConstantUtils.ipolicySpindleRatio
ispecsMinmaxKeys :: FrozenSet String
ispecsMinmaxKeys = ConstantUtils.mkSet [ispecsMax, ispecsMin]
ipolicyParameters :: FrozenSet String
ipolicyParameters =
ConstantUtils.mkSet [ConstantUtils.ipolicyVcpuRatio,
ConstantUtils.ipolicySpindleRatio]
ipolicyAllKeys :: FrozenSet String
ipolicyAllKeys =
ConstantUtils.union ipolicyParameters $
ConstantUtils.mkSet [ConstantUtils.ipolicyDts,
ConstantUtils.ispecsMinmax,
ispecsStd]
-- | Node parameter names
ndExclusiveStorage :: String
ndExclusiveStorage = "exclusive_storage"
ndOobProgram :: String
ndOobProgram = "oob_program"
ndSpindleCount :: String
ndSpindleCount = "spindle_count"
ndOvs :: String
ndOvs = "ovs"
ndOvsLink :: String
ndOvsLink = "ovs_link"
ndOvsName :: String
ndOvsName = "ovs_name"
ndSshPort :: String
ndSshPort = "ssh_port"
ndsParameterTypes :: Map String VType
ndsParameterTypes =
Map.fromList
[(ndExclusiveStorage, VTypeBool),
(ndOobProgram, VTypeString),
(ndOvs, VTypeBool),
(ndOvsLink, VTypeMaybeString),
(ndOvsName, VTypeMaybeString),
(ndSpindleCount, VTypeInt),
(ndSshPort, VTypeInt)]
ndsParameters :: FrozenSet String
ndsParameters = ConstantUtils.mkSet (Map.keys ndsParameterTypes)
ndsParameterTitles :: Map String String
ndsParameterTitles =
Map.fromList
[(ndExclusiveStorage, "ExclusiveStorage"),
(ndOobProgram, "OutOfBandProgram"),
(ndOvs, "OpenvSwitch"),
(ndOvsLink, "OpenvSwitchLink"),
(ndOvsName, "OpenvSwitchName"),
(ndSpindleCount, "SpindleCount")]
-- * Logical Disks parameters
ldpAccess :: String
ldpAccess = "access"
ldpBarriers :: String
ldpBarriers = "disabled-barriers"
ldpDefaultMetavg :: String
ldpDefaultMetavg = "default-metavg"
ldpDelayTarget :: String
ldpDelayTarget = "c-delay-target"
ldpDiskCustom :: String
ldpDiskCustom = "disk-custom"
ldpDynamicResync :: String
ldpDynamicResync = "dynamic-resync"
ldpFillTarget :: String
ldpFillTarget = "c-fill-target"
ldpMaxRate :: String
ldpMaxRate = "c-max-rate"
ldpMinRate :: String
ldpMinRate = "c-min-rate"
ldpNetCustom :: String
ldpNetCustom = "net-custom"
ldpNoMetaFlush :: String
ldpNoMetaFlush = "disable-meta-flush"
ldpPlanAhead :: String
ldpPlanAhead = "c-plan-ahead"
ldpPool :: String
ldpPool = "pool"
ldpProtocol :: String
ldpProtocol = "protocol"
ldpResyncRate :: String
ldpResyncRate = "resync-rate"
ldpStripes :: String
ldpStripes = "stripes"
diskLdTypes :: Map String VType
diskLdTypes =
Map.fromList
[(ldpAccess, VTypeString),
(ldpResyncRate, VTypeInt),
(ldpStripes, VTypeInt),
(ldpBarriers, VTypeString),
(ldpNoMetaFlush, VTypeBool),
(ldpDefaultMetavg, VTypeString),
(ldpDiskCustom, VTypeString),
(ldpNetCustom, VTypeString),
(ldpProtocol, VTypeString),
(ldpDynamicResync, VTypeBool),
(ldpPlanAhead, VTypeInt),
(ldpFillTarget, VTypeInt),
(ldpDelayTarget, VTypeInt),
(ldpMaxRate, VTypeInt),
(ldpMinRate, VTypeInt),
(ldpPool, VTypeString)]
diskLdParameters :: FrozenSet String
diskLdParameters = ConstantUtils.mkSet (Map.keys diskLdTypes)
-- * Disk template parameters
--
-- Disk template parameters can be set/changed by the user via
-- gnt-cluster and gnt-group)
drbdResyncRate :: String
drbdResyncRate = "resync-rate"
drbdDataStripes :: String
drbdDataStripes = "data-stripes"
drbdMetaStripes :: String
drbdMetaStripes = "meta-stripes"
drbdDiskBarriers :: String
drbdDiskBarriers = "disk-barriers"
drbdMetaBarriers :: String
drbdMetaBarriers = "meta-barriers"
drbdDefaultMetavg :: String
drbdDefaultMetavg = "metavg"
drbdDiskCustom :: String
drbdDiskCustom = "disk-custom"
drbdNetCustom :: String
drbdNetCustom = "net-custom"
drbdProtocol :: String
drbdProtocol = "protocol"
drbdDynamicResync :: String
drbdDynamicResync = "dynamic-resync"
drbdPlanAhead :: String
drbdPlanAhead = "c-plan-ahead"
drbdFillTarget :: String
drbdFillTarget = "c-fill-target"
drbdDelayTarget :: String
drbdDelayTarget = "c-delay-target"
drbdMaxRate :: String
drbdMaxRate = "c-max-rate"
drbdMinRate :: String
drbdMinRate = "c-min-rate"
lvStripes :: String
lvStripes = "stripes"
rbdAccess :: String
rbdAccess = "access"
rbdPool :: String
rbdPool = "pool"
diskDtTypes :: Map String VType
diskDtTypes =
Map.fromList [(drbdResyncRate, VTypeInt),
(drbdDataStripes, VTypeInt),
(drbdMetaStripes, VTypeInt),
(drbdDiskBarriers, VTypeString),
(drbdMetaBarriers, VTypeBool),
(drbdDefaultMetavg, VTypeString),
(drbdDiskCustom, VTypeString),
(drbdNetCustom, VTypeString),
(drbdProtocol, VTypeString),
(drbdDynamicResync, VTypeBool),
(drbdPlanAhead, VTypeInt),
(drbdFillTarget, VTypeInt),
(drbdDelayTarget, VTypeInt),
(drbdMaxRate, VTypeInt),
(drbdMinRate, VTypeInt),
(lvStripes, VTypeInt),
(rbdAccess, VTypeString),
(rbdPool, VTypeString),
(glusterHost, VTypeString),
(glusterVolume, VTypeString),
(glusterPort, VTypeInt)
]
diskDtParameters :: FrozenSet String
diskDtParameters = ConstantUtils.mkSet (Map.keys diskDtTypes)
-- * Dynamic disk parameters
ddpLocalIp :: String
ddpLocalIp = "local-ip"
ddpRemoteIp :: String
ddpRemoteIp = "remote-ip"
ddpPort :: String
ddpPort = "port"
ddpLocalMinor :: String
ddpLocalMinor = "local-minor"
ddpRemoteMinor :: String
ddpRemoteMinor = "remote-minor"
-- * OOB supported commands
oobPowerOn :: String
oobPowerOn = Types.oobCommandToRaw OobPowerOn
oobPowerOff :: String
oobPowerOff = Types.oobCommandToRaw OobPowerOff
oobPowerCycle :: String
oobPowerCycle = Types.oobCommandToRaw OobPowerCycle
oobPowerStatus :: String
oobPowerStatus = Types.oobCommandToRaw OobPowerStatus
oobHealth :: String
oobHealth = Types.oobCommandToRaw OobHealth
oobCommands :: FrozenSet String
oobCommands = ConstantUtils.mkSet $ map Types.oobCommandToRaw [minBound..]
oobPowerStatusPowered :: String
oobPowerStatusPowered = "powered"
-- | 60 seconds
oobTimeout :: Int
oobTimeout = 60
-- | 2 seconds
oobPowerDelay :: Double
oobPowerDelay = 2.0
oobStatusCritical :: String
oobStatusCritical = Types.oobStatusToRaw OobStatusCritical
oobStatusOk :: String
oobStatusOk = Types.oobStatusToRaw OobStatusOk
oobStatusUnknown :: String
oobStatusUnknown = Types.oobStatusToRaw OobStatusUnknown
oobStatusWarning :: String
oobStatusWarning = Types.oobStatusToRaw OobStatusWarning
oobStatuses :: FrozenSet String
oobStatuses = ConstantUtils.mkSet $ map Types.oobStatusToRaw [minBound..]
-- | Instance Parameters Profile
ppDefault :: String
ppDefault = "default"
-- * nic* constants are used inside the ganeti config
nicLink :: String
nicLink = "link"
nicMode :: String
nicMode = "mode"
nicVlan :: String
nicVlan = "vlan"
nicsParameterTypes :: Map String VType
nicsParameterTypes =
Map.fromList [(nicMode, vtypeString),
(nicLink, vtypeString),
(nicVlan, vtypeString)]
nicsParameters :: FrozenSet String
nicsParameters = ConstantUtils.mkSet (Map.keys nicsParameterTypes)
nicModeBridged :: String
nicModeBridged = Types.nICModeToRaw NMBridged
nicModeRouted :: String
nicModeRouted = Types.nICModeToRaw NMRouted
nicModeOvs :: String
nicModeOvs = Types.nICModeToRaw NMOvs
nicIpPool :: String
nicIpPool = Types.nICModeToRaw NMPool
nicValidModes :: FrozenSet String
nicValidModes = ConstantUtils.mkSet $ map Types.nICModeToRaw [minBound..]
releaseAction :: String
releaseAction = "release"
reserveAction :: String
reserveAction = "reserve"
-- * idisk* constants are used in opcodes, to create/change disks
idiskAdopt :: String
idiskAdopt = "adopt"
idiskMetavg :: String
idiskMetavg = "metavg"
idiskMode :: String
idiskMode = "mode"
idiskName :: String
idiskName = "name"
idiskSize :: String
idiskSize = "size"
idiskSpindles :: String
idiskSpindles = "spindles"
idiskVg :: String
idiskVg = "vg"
idiskProvider :: String
idiskProvider = "provider"
idiskParamsTypes :: Map String VType
idiskParamsTypes =
Map.fromList [(idiskSize, VTypeSize),
(idiskSpindles, VTypeInt),
(idiskMode, VTypeString),
(idiskAdopt, VTypeString),
(idiskVg, VTypeString),
(idiskMetavg, VTypeString),
(idiskProvider, VTypeString),
(idiskName, VTypeMaybeString)]
idiskParams :: FrozenSet String
idiskParams = ConstantUtils.mkSet (Map.keys idiskParamsTypes)
modifiableIdiskParamsTypes :: Map String VType
modifiableIdiskParamsTypes =
Map.fromList [(idiskMode, VTypeString),
(idiskName, VTypeString)]
modifiableIdiskParams :: FrozenSet String
modifiableIdiskParams =
ConstantUtils.mkSet (Map.keys modifiableIdiskParamsTypes)
-- * inic* constants are used in opcodes, to create/change nics
inicBridge :: String
inicBridge = "bridge"
inicIp :: String
inicIp = "ip"
inicLink :: String
inicLink = "link"
inicMac :: String
inicMac = "mac"
inicMode :: String
inicMode = "mode"
inicName :: String
inicName = "name"
inicNetwork :: String
inicNetwork = "network"
inicVlan :: String
inicVlan = "vlan"
inicParamsTypes :: Map String VType
inicParamsTypes =
Map.fromList [(inicBridge, VTypeMaybeString),
(inicIp, VTypeMaybeString),
(inicLink, VTypeString),
(inicMac, VTypeString),
(inicMode, VTypeString),
(inicName, VTypeMaybeString),
(inicNetwork, VTypeMaybeString),
(inicVlan, VTypeMaybeString)]
inicParams :: FrozenSet String
inicParams = ConstantUtils.mkSet (Map.keys inicParamsTypes)
-- * Hypervisor constants
htXenPvm :: String
htXenPvm = Types.hypervisorToRaw XenPvm
htFake :: String
htFake = Types.hypervisorToRaw Fake
htXenHvm :: String
htXenHvm = Types.hypervisorToRaw XenHvm
htKvm :: String
htKvm = Types.hypervisorToRaw Kvm
htChroot :: String
htChroot = Types.hypervisorToRaw Chroot
htLxc :: String
htLxc = Types.hypervisorToRaw Lxc
hyperTypes :: FrozenSet String
hyperTypes = ConstantUtils.mkSet $ map Types.hypervisorToRaw [minBound..]
htsReqPort :: FrozenSet String
htsReqPort = ConstantUtils.mkSet [htXenHvm, htKvm]
vncBasePort :: Int
vncBasePort = 5900
vncDefaultBindAddress :: String
vncDefaultBindAddress = ip4AddressAny
-- * NIC types
htNicE1000 :: String
htNicE1000 = "e1000"
htNicI82551 :: String
htNicI82551 = "i82551"
htNicI8259er :: String
htNicI8259er = "i82559er"
htNicI85557b :: String
htNicI85557b = "i82557b"
htNicNe2kIsa :: String
htNicNe2kIsa = "ne2k_isa"
htNicNe2kPci :: String
htNicNe2kPci = "ne2k_pci"
htNicParavirtual :: String
htNicParavirtual = "paravirtual"
htNicPcnet :: String
htNicPcnet = "pcnet"
htNicRtl8139 :: String
htNicRtl8139 = "rtl8139"
htHvmValidNicTypes :: FrozenSet String
htHvmValidNicTypes =
ConstantUtils.mkSet [htNicE1000,
htNicNe2kIsa,
htNicNe2kPci,
htNicParavirtual,
htNicRtl8139]
htKvmValidNicTypes :: FrozenSet String
htKvmValidNicTypes =
ConstantUtils.mkSet [htNicE1000,
htNicI82551,
htNicI8259er,
htNicI85557b,
htNicNe2kIsa,
htNicNe2kPci,
htNicParavirtual,
htNicPcnet,
htNicRtl8139]
-- * Vif types
-- | Default vif type in xen-hvm
htHvmVifIoemu :: String
htHvmVifIoemu = "ioemu"
htHvmVifVif :: String
htHvmVifVif = "vif"
htHvmValidVifTypes :: FrozenSet String
htHvmValidVifTypes = ConstantUtils.mkSet [htHvmVifIoemu, htHvmVifVif]
-- * Disk types
htDiskIde :: String
htDiskIde = "ide"
htDiskIoemu :: String
htDiskIoemu = "ioemu"
htDiskMtd :: String
htDiskMtd = "mtd"
htDiskParavirtual :: String
htDiskParavirtual = "paravirtual"
htDiskPflash :: String
htDiskPflash = "pflash"
htDiskScsi :: String
htDiskScsi = "scsi"
htDiskSd :: String
htDiskSd = "sd"
htHvmValidDiskTypes :: FrozenSet String
htHvmValidDiskTypes = ConstantUtils.mkSet [htDiskIoemu, htDiskParavirtual]
htKvmValidDiskTypes :: FrozenSet String
htKvmValidDiskTypes =
ConstantUtils.mkSet [htDiskIde,
htDiskMtd,
htDiskParavirtual,
htDiskPflash,
htDiskScsi,
htDiskSd]
htCacheDefault :: String
htCacheDefault = "default"
htCacheNone :: String
htCacheNone = "none"
htCacheWback :: String
htCacheWback = "writeback"
htCacheWthrough :: String
htCacheWthrough = "writethrough"
htValidCacheTypes :: FrozenSet String
htValidCacheTypes =
ConstantUtils.mkSet [htCacheDefault,
htCacheNone,
htCacheWback,
htCacheWthrough]
-- * Mouse types
htMouseMouse :: String
htMouseMouse = "mouse"
htMouseTablet :: String
htMouseTablet = "tablet"
htKvmValidMouseTypes :: FrozenSet String
htKvmValidMouseTypes = ConstantUtils.mkSet [htMouseMouse, htMouseTablet]
-- * Boot order
htBoCdrom :: String
htBoCdrom = "cdrom"
htBoDisk :: String
htBoDisk = "disk"
htBoFloppy :: String
htBoFloppy = "floppy"
htBoNetwork :: String
htBoNetwork = "network"
htKvmValidBoTypes :: FrozenSet String
htKvmValidBoTypes =
ConstantUtils.mkSet [htBoCdrom, htBoDisk, htBoFloppy, htBoNetwork]
-- * SPICE lossless image compression options
htKvmSpiceLosslessImgComprAutoGlz :: String
htKvmSpiceLosslessImgComprAutoGlz = "auto_glz"
htKvmSpiceLosslessImgComprAutoLz :: String
htKvmSpiceLosslessImgComprAutoLz = "auto_lz"
htKvmSpiceLosslessImgComprGlz :: String
htKvmSpiceLosslessImgComprGlz = "glz"
htKvmSpiceLosslessImgComprLz :: String
htKvmSpiceLosslessImgComprLz = "lz"
htKvmSpiceLosslessImgComprOff :: String
htKvmSpiceLosslessImgComprOff = "off"
htKvmSpiceLosslessImgComprQuic :: String
htKvmSpiceLosslessImgComprQuic = "quic"
htKvmSpiceValidLosslessImgComprOptions :: FrozenSet String
htKvmSpiceValidLosslessImgComprOptions =
ConstantUtils.mkSet [htKvmSpiceLosslessImgComprAutoGlz,
htKvmSpiceLosslessImgComprAutoLz,
htKvmSpiceLosslessImgComprGlz,
htKvmSpiceLosslessImgComprLz,
htKvmSpiceLosslessImgComprOff,
htKvmSpiceLosslessImgComprQuic]
htKvmSpiceLossyImgComprAlways :: String
htKvmSpiceLossyImgComprAlways = "always"
htKvmSpiceLossyImgComprAuto :: String
htKvmSpiceLossyImgComprAuto = "auto"
htKvmSpiceLossyImgComprNever :: String
htKvmSpiceLossyImgComprNever = "never"
htKvmSpiceValidLossyImgComprOptions :: FrozenSet String
htKvmSpiceValidLossyImgComprOptions =
ConstantUtils.mkSet [htKvmSpiceLossyImgComprAlways,
htKvmSpiceLossyImgComprAuto,
htKvmSpiceLossyImgComprNever]
-- * SPICE video stream detection
htKvmSpiceVideoStreamDetectionAll :: String
htKvmSpiceVideoStreamDetectionAll = "all"
htKvmSpiceVideoStreamDetectionFilter :: String
htKvmSpiceVideoStreamDetectionFilter = "filter"
htKvmSpiceVideoStreamDetectionOff :: String
htKvmSpiceVideoStreamDetectionOff = "off"
htKvmSpiceValidVideoStreamDetectionOptions :: FrozenSet String
htKvmSpiceValidVideoStreamDetectionOptions =
ConstantUtils.mkSet [htKvmSpiceVideoStreamDetectionAll,
htKvmSpiceVideoStreamDetectionFilter,
htKvmSpiceVideoStreamDetectionOff]
-- * Security models
htSmNone :: String
htSmNone = "none"
htSmPool :: String
htSmPool = "pool"
htSmUser :: String
htSmUser = "user"
htKvmValidSmTypes :: FrozenSet String
htKvmValidSmTypes = ConstantUtils.mkSet [htSmNone, htSmPool, htSmUser]
-- * Kvm flag values
htKvmDisabled :: String
htKvmDisabled = "disabled"
htKvmEnabled :: String
htKvmEnabled = "enabled"
htKvmFlagValues :: FrozenSet String
htKvmFlagValues = ConstantUtils.mkSet [htKvmDisabled, htKvmEnabled]
-- * Migration type
htMigrationLive :: String
htMigrationLive = Types.migrationModeToRaw MigrationLive
htMigrationNonlive :: String
htMigrationNonlive = Types.migrationModeToRaw MigrationNonLive
htMigrationModes :: FrozenSet String
htMigrationModes =
ConstantUtils.mkSet $ map Types.migrationModeToRaw [minBound..]
-- * Cluster verify steps
verifyNplusoneMem :: String
verifyNplusoneMem = Types.verifyOptionalChecksToRaw VerifyNPlusOneMem
verifyOptionalChecks :: FrozenSet String
verifyOptionalChecks =
ConstantUtils.mkSet $ map Types.verifyOptionalChecksToRaw [minBound..]
-- * Cluster Verify error classes
cvTcluster :: String
cvTcluster = "cluster"
cvTgroup :: String
cvTgroup = "group"
cvTnode :: String
cvTnode = "node"
cvTinstance :: String
cvTinstance = "instance"
-- * Cluster Verify error levels
cvWarning :: String
cvWarning = "WARNING"
cvError :: String
cvError = "ERROR"
-- * Cluster Verify error codes and documentation
cvEclustercert :: (String, String, String)
cvEclustercert =
("cluster",
Types.cVErrorCodeToRaw CvECLUSTERCERT,
"Cluster certificate files verification failure")
cvEclusterclientcert :: (String, String, String)
cvEclusterclientcert =
("cluster",
Types.cVErrorCodeToRaw CvECLUSTERCLIENTCERT,
"Cluster client certificate files verification failure")
cvEclustercfg :: (String, String, String)
cvEclustercfg =
("cluster",
Types.cVErrorCodeToRaw CvECLUSTERCFG,
"Cluster configuration verification failure")
cvEclusterdanglinginst :: (String, String, String)
cvEclusterdanglinginst =
("node",
Types.cVErrorCodeToRaw CvECLUSTERDANGLINGINST,
"Some instances have a non-existing primary node")
cvEclusterdanglingnodes :: (String, String, String)
cvEclusterdanglingnodes =
("node",
Types.cVErrorCodeToRaw CvECLUSTERDANGLINGNODES,
"Some nodes belong to non-existing groups")
cvEclusterfilecheck :: (String, String, String)
cvEclusterfilecheck =
("cluster",
Types.cVErrorCodeToRaw CvECLUSTERFILECHECK,
"Cluster configuration verification failure")
cvEgroupdifferentpvsize :: (String, String, String)
cvEgroupdifferentpvsize =
("group",
Types.cVErrorCodeToRaw CvEGROUPDIFFERENTPVSIZE,
"PVs in the group have different sizes")
cvEinstancebadnode :: (String, String, String)
cvEinstancebadnode =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEBADNODE,
"Instance marked as running lives on an offline node")
cvEinstancedown :: (String, String, String)
cvEinstancedown =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEDOWN,
"Instance not running on its primary node")
cvEinstancefaultydisk :: (String, String, String)
cvEinstancefaultydisk =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEFAULTYDISK,
"Impossible to retrieve status for a disk")
cvEinstancelayout :: (String, String, String)
cvEinstancelayout =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCELAYOUT,
"Instance has multiple secondary nodes")
cvEinstancemissingcfgparameter :: (String, String, String)
cvEinstancemissingcfgparameter =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEMISSINGCFGPARAMETER,
"A configuration parameter for an instance is missing")
cvEinstancemissingdisk :: (String, String, String)
cvEinstancemissingdisk =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEMISSINGDISK,
"Missing volume on an instance")
cvEinstancepolicy :: (String, String, String)
cvEinstancepolicy =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEPOLICY,
"Instance does not meet policy")
cvEinstancesplitgroups :: (String, String, String)
cvEinstancesplitgroups =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCESPLITGROUPS,
"Instance with primary and secondary nodes in different groups")
cvEinstanceunsuitablenode :: (String, String, String)
cvEinstanceunsuitablenode =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEUNSUITABLENODE,
"Instance running on nodes that are not suitable for it")
cvEinstancewrongnode :: (String, String, String)
cvEinstancewrongnode =
("instance",
Types.cVErrorCodeToRaw CvEINSTANCEWRONGNODE,
"Instance running on the wrong node")
cvEnodedrbd :: (String, String, String)
cvEnodedrbd =
("node",
Types.cVErrorCodeToRaw CvENODEDRBD,
"Error parsing the DRBD status file")
cvEnodedrbdhelper :: (String, String, String)
cvEnodedrbdhelper =
("node",
Types.cVErrorCodeToRaw CvENODEDRBDHELPER,
"Error caused by the DRBD helper")
cvEnodedrbdversion :: (String, String, String)
cvEnodedrbdversion =
("node",
Types.cVErrorCodeToRaw CvENODEDRBDVERSION,
"DRBD version mismatch within a node group")
cvEnodefilecheck :: (String, String, String)
cvEnodefilecheck =
("node",
Types.cVErrorCodeToRaw CvENODEFILECHECK,
"Error retrieving the checksum of the node files")
cvEnodefilestoragepaths :: (String, String, String)
cvEnodefilestoragepaths =
("node",
Types.cVErrorCodeToRaw CvENODEFILESTORAGEPATHS,
"Detected bad file storage paths")
cvEnodefilestoragepathunusable :: (String, String, String)
cvEnodefilestoragepathunusable =
("node",
Types.cVErrorCodeToRaw CvENODEFILESTORAGEPATHUNUSABLE,
"File storage path unusable")
cvEnodehooks :: (String, String, String)
cvEnodehooks =
("node",
Types.cVErrorCodeToRaw CvENODEHOOKS,
"Communication failure in hooks execution")
cvEnodehv :: (String, String, String)
cvEnodehv =
("node",
Types.cVErrorCodeToRaw CvENODEHV,
"Hypervisor parameters verification failure")
cvEnodelvm :: (String, String, String)
cvEnodelvm =
("node",
Types.cVErrorCodeToRaw CvENODELVM,
"LVM-related node error")
cvEnoden1 :: (String, String, String)
cvEnoden1 =
("node",
Types.cVErrorCodeToRaw CvENODEN1,
"Not enough memory to accommodate instance failovers")
cvEnodenet :: (String, String, String)
cvEnodenet =
("node",
Types.cVErrorCodeToRaw CvENODENET,
"Network-related node error")
cvEnodeoobpath :: (String, String, String)
cvEnodeoobpath =
("node",
Types.cVErrorCodeToRaw CvENODEOOBPATH,
"Invalid Out Of Band path")
cvEnodeorphaninstance :: (String, String, String)
cvEnodeorphaninstance =
("node",
Types.cVErrorCodeToRaw CvENODEORPHANINSTANCE,
"Unknown intance running on a node")
cvEnodeorphanlv :: (String, String, String)
cvEnodeorphanlv =
("node",
Types.cVErrorCodeToRaw CvENODEORPHANLV,
"Unknown LVM logical volume")
cvEnodeos :: (String, String, String)
cvEnodeos =
("node",
Types.cVErrorCodeToRaw CvENODEOS,
"OS-related node error")
cvEnoderpc :: (String, String, String)
cvEnoderpc =
("node",
Types.cVErrorCodeToRaw CvENODERPC,
"Error during connection to the primary node of an instance")
cvEnodesetup :: (String, String, String)
cvEnodesetup =
("node",
Types.cVErrorCodeToRaw CvENODESETUP,
"Node setup error")
cvEnodesharedfilestoragepathunusable :: (String, String, String)
cvEnodesharedfilestoragepathunusable =
("node",
Types.cVErrorCodeToRaw CvENODESHAREDFILESTORAGEPATHUNUSABLE,
"Shared file storage path unusable")
cvEnodessh :: (String, String, String)
cvEnodessh =
("node",
Types.cVErrorCodeToRaw CvENODESSH,
"SSH-related node error")
cvEnodetime :: (String, String, String)
cvEnodetime =
("node",
Types.cVErrorCodeToRaw CvENODETIME,
"Node returned invalid time")
cvEnodeuserscripts :: (String, String, String)
cvEnodeuserscripts =
("node",
Types.cVErrorCodeToRaw CvENODEUSERSCRIPTS,
"User scripts not present or not executable")
cvEnodeversion :: (String, String, String)
cvEnodeversion =
("node",
Types.cVErrorCodeToRaw CvENODEVERSION,
"Protocol version mismatch or Ganeti version mismatch")
cvAllEcodes :: FrozenSet (String, String, String)
cvAllEcodes =
ConstantUtils.mkSet
[cvEclustercert,
cvEclustercfg,
cvEclusterdanglinginst,
cvEclusterdanglingnodes,
cvEclusterfilecheck,
cvEgroupdifferentpvsize,
cvEinstancebadnode,
cvEinstancedown,
cvEinstancefaultydisk,
cvEinstancelayout,
cvEinstancemissingcfgparameter,
cvEinstancemissingdisk,
cvEinstancepolicy,
cvEinstancesplitgroups,
cvEinstanceunsuitablenode,
cvEinstancewrongnode,
cvEnodedrbd,
cvEnodedrbdhelper,
cvEnodedrbdversion,
cvEnodefilecheck,
cvEnodefilestoragepaths,
cvEnodefilestoragepathunusable,
cvEnodehooks,
cvEnodehv,
cvEnodelvm,
cvEnoden1,
cvEnodenet,
cvEnodeoobpath,
cvEnodeorphaninstance,
cvEnodeorphanlv,
cvEnodeos,
cvEnoderpc,
cvEnodesetup,
cvEnodesharedfilestoragepathunusable,
cvEnodessh,
cvEnodetime,
cvEnodeuserscripts,
cvEnodeversion]
cvAllEcodesStrings :: FrozenSet String
cvAllEcodesStrings =
ConstantUtils.mkSet $ map Types.cVErrorCodeToRaw [minBound..]
-- * Node verify constants
nvBridges :: String
nvBridges = "bridges"
nvClientCert :: String
nvClientCert = "client-cert"
nvDrbdhelper :: String
nvDrbdhelper = "drbd-helper"
nvDrbdversion :: String
nvDrbdversion = "drbd-version"
nvDrbdlist :: String
nvDrbdlist = "drbd-list"
nvExclusivepvs :: String
nvExclusivepvs = "exclusive-pvs"
nvFilelist :: String
nvFilelist = "filelist"
nvAcceptedStoragePaths :: String
nvAcceptedStoragePaths = "allowed-file-storage-paths"
nvFileStoragePath :: String
nvFileStoragePath = "file-storage-path"
nvSharedFileStoragePath :: String
nvSharedFileStoragePath = "shared-file-storage-path"
nvHvinfo :: String
nvHvinfo = "hvinfo"
nvHvparams :: String
nvHvparams = "hvparms"
nvHypervisor :: String
nvHypervisor = "hypervisor"
nvInstancelist :: String
nvInstancelist = "instancelist"
nvLvlist :: String
nvLvlist = "lvlist"
nvMasterip :: String
nvMasterip = "master-ip"
nvNodelist :: String
nvNodelist = "nodelist"
nvNodenettest :: String
nvNodenettest = "node-net-test"
nvNodesetup :: String
nvNodesetup = "nodesetup"
nvOobPaths :: String
nvOobPaths = "oob-paths"
nvOslist :: String
nvOslist = "oslist"
nvPvlist :: String
nvPvlist = "pvlist"
nvTime :: String
nvTime = "time"
nvUserscripts :: String
nvUserscripts = "user-scripts"
nvVersion :: String
nvVersion = "version"
nvVglist :: String
nvVglist = "vglist"
nvVmnodes :: String
nvVmnodes = "vmnodes"
-- * Instance status
inststAdmindown :: String
inststAdmindown = Types.instanceStatusToRaw StatusDown
inststAdminoffline :: String
inststAdminoffline = Types.instanceStatusToRaw StatusOffline
inststErrordown :: String
inststErrordown = Types.instanceStatusToRaw ErrorDown
inststErrorup :: String
inststErrorup = Types.instanceStatusToRaw ErrorUp
inststNodedown :: String
inststNodedown = Types.instanceStatusToRaw NodeDown
inststNodeoffline :: String
inststNodeoffline = Types.instanceStatusToRaw NodeOffline
inststRunning :: String
inststRunning = Types.instanceStatusToRaw Running
inststUserdown :: String
inststUserdown = Types.instanceStatusToRaw UserDown
inststWrongnode :: String
inststWrongnode = Types.instanceStatusToRaw WrongNode
inststAll :: FrozenSet String
inststAll = ConstantUtils.mkSet $ map Types.instanceStatusToRaw [minBound..]
-- * Admin states
adminstDown :: String
adminstDown = Types.adminStateToRaw AdminDown
adminstOffline :: String
adminstOffline = Types.adminStateToRaw AdminOffline
adminstUp :: String
adminstUp = Types.adminStateToRaw AdminUp
adminstAll :: FrozenSet String
adminstAll = ConstantUtils.mkSet $ map Types.adminStateToRaw [minBound..]
-- * Node roles
nrDrained :: String
nrDrained = Types.nodeRoleToRaw NRDrained
nrMaster :: String
nrMaster = Types.nodeRoleToRaw NRMaster
nrMcandidate :: String
nrMcandidate = Types.nodeRoleToRaw NRCandidate
nrOffline :: String
nrOffline = Types.nodeRoleToRaw NROffline
nrRegular :: String
nrRegular = Types.nodeRoleToRaw NRRegular
nrAll :: FrozenSet String
nrAll = ConstantUtils.mkSet $ map Types.nodeRoleToRaw [minBound..]
-- * SSL certificate check constants (in days)
sslCertExpirationError :: Int
sslCertExpirationError = 7
sslCertExpirationWarn :: Int
sslCertExpirationWarn = 30
-- * Allocator framework constants
iallocatorVersion :: Int
iallocatorVersion = 2
iallocatorDirIn :: String
iallocatorDirIn = Types.iAllocatorTestDirToRaw IAllocatorDirIn
iallocatorDirOut :: String
iallocatorDirOut = Types.iAllocatorTestDirToRaw IAllocatorDirOut
validIallocatorDirections :: FrozenSet String
validIallocatorDirections =
ConstantUtils.mkSet $ map Types.iAllocatorTestDirToRaw [minBound..]
iallocatorModeAlloc :: String
iallocatorModeAlloc = Types.iAllocatorModeToRaw IAllocatorAlloc
iallocatorModeChgGroup :: String
iallocatorModeChgGroup = Types.iAllocatorModeToRaw IAllocatorChangeGroup
iallocatorModeMultiAlloc :: String
iallocatorModeMultiAlloc = Types.iAllocatorModeToRaw IAllocatorMultiAlloc
iallocatorModeNodeEvac :: String
iallocatorModeNodeEvac = Types.iAllocatorModeToRaw IAllocatorNodeEvac
iallocatorModeReloc :: String
iallocatorModeReloc = Types.iAllocatorModeToRaw IAllocatorReloc
validIallocatorModes :: FrozenSet String
validIallocatorModes =
ConstantUtils.mkSet $ map Types.iAllocatorModeToRaw [minBound..]
iallocatorSearchPath :: [String]
iallocatorSearchPath = AutoConf.iallocatorSearchPath
defaultIallocatorShortcut :: String
defaultIallocatorShortcut = "."
-- * Node evacuation
nodeEvacPri :: String
nodeEvacPri = Types.evacModeToRaw ChangePrimary
nodeEvacSec :: String
nodeEvacSec = Types.evacModeToRaw ChangeSecondary
nodeEvacAll :: String
nodeEvacAll = Types.evacModeToRaw ChangeAll
nodeEvacModes :: FrozenSet String
nodeEvacModes = ConstantUtils.mkSet $ map Types.evacModeToRaw [minBound..]
-- * Job queue
jobQueueVersion :: Int
jobQueueVersion = 1
jobQueueSizeHardLimit :: Int
jobQueueSizeHardLimit = 5000
jobQueueFilesPerms :: Int
jobQueueFilesPerms = 0o640
-- * Unchanged job return
jobNotchanged :: String
jobNotchanged = "nochange"
-- * Job status
jobStatusQueued :: String
jobStatusQueued = Types.jobStatusToRaw JOB_STATUS_QUEUED
jobStatusWaiting :: String
jobStatusWaiting = Types.jobStatusToRaw JOB_STATUS_WAITING
jobStatusCanceling :: String
jobStatusCanceling = Types.jobStatusToRaw JOB_STATUS_CANCELING
jobStatusRunning :: String
jobStatusRunning = Types.jobStatusToRaw JOB_STATUS_RUNNING
jobStatusCanceled :: String
jobStatusCanceled = Types.jobStatusToRaw JOB_STATUS_CANCELED
jobStatusSuccess :: String
jobStatusSuccess = Types.jobStatusToRaw JOB_STATUS_SUCCESS
jobStatusError :: String
jobStatusError = Types.jobStatusToRaw JOB_STATUS_ERROR
jobsPending :: FrozenSet String
jobsPending =
ConstantUtils.mkSet [jobStatusQueued, jobStatusWaiting, jobStatusCanceling]
jobsFinalized :: FrozenSet String
jobsFinalized =
ConstantUtils.mkSet $ map Types.finalizedJobStatusToRaw [minBound..]
jobStatusAll :: FrozenSet String
jobStatusAll = ConstantUtils.mkSet $ map Types.jobStatusToRaw [minBound..]
-- * OpCode status
-- ** Not yet finalized opcodes
opStatusCanceling :: String
opStatusCanceling = "canceling"
opStatusQueued :: String
opStatusQueued = "queued"
opStatusRunning :: String
opStatusRunning = "running"
opStatusWaiting :: String
opStatusWaiting = "waiting"
-- ** Finalized opcodes
opStatusCanceled :: String
opStatusCanceled = "canceled"
opStatusError :: String
opStatusError = "error"
opStatusSuccess :: String
opStatusSuccess = "success"
opsFinalized :: FrozenSet String
opsFinalized =
ConstantUtils.mkSet [opStatusCanceled, opStatusError, opStatusSuccess]
-- * OpCode priority
opPrioLowest :: Int
opPrioLowest = 19
opPrioHighest :: Int
opPrioHighest = -20
opPrioLow :: Int
opPrioLow = Types.opSubmitPriorityToRaw OpPrioLow
opPrioNormal :: Int
opPrioNormal = Types.opSubmitPriorityToRaw OpPrioNormal
opPrioHigh :: Int
opPrioHigh = Types.opSubmitPriorityToRaw OpPrioHigh
opPrioSubmitValid :: FrozenSet Int
opPrioSubmitValid = ConstantUtils.mkSet [opPrioLow, opPrioNormal, opPrioHigh]
opPrioDefault :: Int
opPrioDefault = opPrioNormal
-- * Lock recalculate mode
locksAppend :: String
locksAppend = "append"
locksReplace :: String
locksReplace = "replace"
-- * Lock timeout
--
-- The lock timeout (sum) before we transition into blocking acquire
-- (this can still be reset by priority change). Computed as max time
-- (10 hours) before we should actually go into blocking acquire,
-- given that we start from the default priority level.
lockAttemptsMaxwait :: Double
lockAttemptsMaxwait = 15.0
lockAttemptsMinwait :: Double
lockAttemptsMinwait = 1.0
lockAttemptsTimeout :: Int
lockAttemptsTimeout = (10 * 3600) `div` (opPrioDefault - opPrioHighest)
-- * Execution log types
elogMessage :: String
elogMessage = Types.eLogTypeToRaw ELogMessage
elogRemoteImport :: String
elogRemoteImport = Types.eLogTypeToRaw ELogRemoteImport
elogJqueueTest :: String
elogJqueueTest = Types.eLogTypeToRaw ELogJqueueTest
-- * /etc/hosts modification
etcHostsAdd :: String
etcHostsAdd = "add"
etcHostsRemove :: String
etcHostsRemove = "remove"
-- * Job queue test
jqtMsgprefix :: String
jqtMsgprefix = "TESTMSG="
jqtExec :: String
jqtExec = "exec"
jqtExpandnames :: String
jqtExpandnames = "expandnames"
jqtLogmsg :: String
jqtLogmsg = "logmsg"
jqtStartmsg :: String
jqtStartmsg = "startmsg"
jqtAll :: FrozenSet String
jqtAll = ConstantUtils.mkSet [jqtExec, jqtExpandnames, jqtLogmsg, jqtStartmsg]
-- * Query resources
qrCluster :: String
qrCluster = "cluster"
qrExport :: String
qrExport = "export"
qrExtstorage :: String
qrExtstorage = "extstorage"
qrGroup :: String
qrGroup = "group"
qrInstance :: String
qrInstance = "instance"
qrJob :: String
qrJob = "job"
qrLock :: String
qrLock = "lock"
qrNetwork :: String
qrNetwork = "network"
qrNode :: String
qrNode = "node"
qrOs :: String
qrOs = "os"
-- | List of resources which can be queried using 'Ganeti.OpCodes.OpQuery'
qrViaOp :: FrozenSet String
qrViaOp =
ConstantUtils.mkSet [qrCluster,
qrOs,
qrExtstorage]
-- | List of resources which can be queried using Local UniX Interface
qrViaLuxi :: FrozenSet String
qrViaLuxi = ConstantUtils.mkSet [qrGroup,
qrExport,
qrInstance,
qrJob,
qrLock,
qrNetwork,
qrNode]
-- | List of resources which can be queried using RAPI
qrViaRapi :: FrozenSet String
qrViaRapi = qrViaLuxi
-- | List of resources which can be queried via RAPI including PUT requests
qrViaRapiPut :: FrozenSet String
qrViaRapiPut = ConstantUtils.mkSet [qrLock, qrJob]
-- * Query field types
qftBool :: String
qftBool = "bool"
qftNumber :: String
qftNumber = "number"
qftOther :: String
qftOther = "other"
qftText :: String
qftText = "text"
qftTimestamp :: String
qftTimestamp = "timestamp"
qftUnit :: String
qftUnit = "unit"
qftUnknown :: String
qftUnknown = "unknown"
qftAll :: FrozenSet String
qftAll =
ConstantUtils.mkSet [qftBool,
qftNumber,
qftOther,
qftText,
qftTimestamp,
qftUnit,
qftUnknown]
-- * Query result field status
--
-- Don't change or reuse values as they're used by clients.
--
-- FIXME: link with 'Ganeti.Query.Language.ResultStatus'
-- | No data (e.g. RPC error), can be used instead of 'rsOffline'
rsNodata :: Int
rsNodata = 2
rsNormal :: Int
rsNormal = 0
-- | Resource marked offline
rsOffline :: Int
rsOffline = 4
-- | Value unavailable/unsupported for item; if this field is
-- supported but we cannot get the data for the moment, 'rsNodata' or
-- 'rsOffline' should be used
rsUnavail :: Int
rsUnavail = 3
rsUnknown :: Int
rsUnknown = 1
rsAll :: FrozenSet Int
rsAll =
ConstantUtils.mkSet [rsNodata,
rsNormal,
rsOffline,
rsUnavail,
rsUnknown]
-- | Special field cases and their verbose/terse formatting
rssDescription :: Map Int (String, String)
rssDescription =
Map.fromList [(rsUnknown, ("(unknown)", "??")),
(rsNodata, ("(nodata)", "?")),
(rsOffline, ("(offline)", "*")),
(rsUnavail, ("(unavail)", "-"))]
-- * Max dynamic devices
maxDisks :: Int
maxDisks = Types.maxDisks
maxNics :: Int
maxNics = Types.maxNics
-- | SSCONF file prefix
ssconfFileprefix :: String
ssconfFileprefix = "ssconf_"
-- * SSCONF keys
ssClusterName :: String
ssClusterName = "cluster_name"
ssClusterTags :: String
ssClusterTags = "cluster_tags"
ssFileStorageDir :: String
ssFileStorageDir = "file_storage_dir"
ssSharedFileStorageDir :: String
ssSharedFileStorageDir = "shared_file_storage_dir"
ssGlusterStorageDir :: String
ssGlusterStorageDir = "gluster_storage_dir"
ssMasterCandidates :: String
ssMasterCandidates = "master_candidates"
ssMasterCandidatesIps :: String
ssMasterCandidatesIps = "master_candidates_ips"
ssMasterCandidatesCerts :: String
ssMasterCandidatesCerts = "master_candidates_certs"
ssMasterIp :: String
ssMasterIp = "master_ip"
ssMasterNetdev :: String
ssMasterNetdev = "master_netdev"
ssMasterNetmask :: String
ssMasterNetmask = "master_netmask"
ssMasterNode :: String
ssMasterNode = "master_node"
ssNodeList :: String
ssNodeList = "node_list"
ssNodePrimaryIps :: String
ssNodePrimaryIps = "node_primary_ips"
ssNodeSecondaryIps :: String
ssNodeSecondaryIps = "node_secondary_ips"
ssOfflineNodes :: String
ssOfflineNodes = "offline_nodes"
ssOnlineNodes :: String
ssOnlineNodes = "online_nodes"
ssPrimaryIpFamily :: String
ssPrimaryIpFamily = "primary_ip_family"
ssInstanceList :: String
ssInstanceList = "instance_list"
ssReleaseVersion :: String
ssReleaseVersion = "release_version"
ssHypervisorList :: String
ssHypervisorList = "hypervisor_list"
ssMaintainNodeHealth :: String
ssMaintainNodeHealth = "maintain_node_health"
ssUidPool :: String
ssUidPool = "uid_pool"
ssNodegroups :: String
ssNodegroups = "nodegroups"
ssNetworks :: String
ssNetworks = "networks"
-- | This is not a complete SSCONF key, but the prefix for the
-- hypervisor keys
ssHvparamsPref :: String
ssHvparamsPref = "hvparams_"
-- * Hvparams keys
ssHvparamsXenChroot :: String
ssHvparamsXenChroot = ssHvparamsPref ++ htChroot
ssHvparamsXenFake :: String
ssHvparamsXenFake = ssHvparamsPref ++ htFake
ssHvparamsXenHvm :: String
ssHvparamsXenHvm = ssHvparamsPref ++ htXenHvm
ssHvparamsXenKvm :: String
ssHvparamsXenKvm = ssHvparamsPref ++ htKvm
ssHvparamsXenLxc :: String
ssHvparamsXenLxc = ssHvparamsPref ++ htLxc
ssHvparamsXenPvm :: String
ssHvparamsXenPvm = ssHvparamsPref ++ htXenPvm
validSsHvparamsKeys :: FrozenSet String
validSsHvparamsKeys =
ConstantUtils.mkSet [ssHvparamsXenChroot,
ssHvparamsXenLxc,
ssHvparamsXenFake,
ssHvparamsXenHvm,
ssHvparamsXenKvm,
ssHvparamsXenPvm]
ssFilePerms :: Int
ssFilePerms = 0o444
-- | Cluster wide default parameters
defaultEnabledHypervisor :: String
defaultEnabledHypervisor = htXenPvm
hvcDefaults :: Map Hypervisor (Map String PyValueEx)
hvcDefaults =
Map.fromList
[ (XenPvm, Map.fromList
[ (hvUseBootloader, PyValueEx False)
, (hvBootloaderPath, PyValueEx xenBootloader)
, (hvBootloaderArgs, PyValueEx "")
, (hvKernelPath, PyValueEx xenKernel)
, (hvInitrdPath, PyValueEx "")
, (hvRootPath, PyValueEx "/dev/xvda1")
, (hvKernelArgs, PyValueEx "ro")
, (hvMigrationPort, PyValueEx (8002 :: Int))
, (hvMigrationMode, PyValueEx htMigrationLive)
, (hvBlockdevPrefix, PyValueEx "sd")
, (hvRebootBehavior, PyValueEx instanceRebootAllowed)
, (hvCpuMask, PyValueEx cpuPinningAll)
, (hvCpuCap, PyValueEx (0 :: Int))
, (hvCpuWeight, PyValueEx (256 :: Int))
, (hvVifScript, PyValueEx "")
, (hvXenCmd, PyValueEx xenCmdXm)
, (hvXenCpuid, PyValueEx "")
, (hvSoundhw, PyValueEx "")
])
, (XenHvm, Map.fromList
[ (hvBootOrder, PyValueEx "cd")
, (hvCdromImagePath, PyValueEx "")
, (hvNicType, PyValueEx htNicRtl8139)
, (hvDiskType, PyValueEx htDiskParavirtual)
, (hvVncBindAddress, PyValueEx ip4AddressAny)
, (hvAcpi, PyValueEx True)
, (hvPae, PyValueEx True)
, (hvKernelPath, PyValueEx "/usr/lib/xen/boot/hvmloader")
, (hvDeviceModel, PyValueEx "/usr/lib/xen/bin/qemu-dm")
, (hvMigrationPort, PyValueEx (8002 :: Int))
, (hvMigrationMode, PyValueEx htMigrationNonlive)
, (hvUseLocaltime, PyValueEx False)
, (hvBlockdevPrefix, PyValueEx "hd")
, (hvPassthrough, PyValueEx "")
, (hvRebootBehavior, PyValueEx instanceRebootAllowed)
, (hvCpuMask, PyValueEx cpuPinningAll)
, (hvCpuCap, PyValueEx (0 :: Int))
, (hvCpuWeight, PyValueEx (256 :: Int))
, (hvVifType, PyValueEx htHvmVifIoemu)
, (hvVifScript, PyValueEx "")
, (hvViridian, PyValueEx False)
, (hvXenCmd, PyValueEx xenCmdXm)
, (hvXenCpuid, PyValueEx "")
, (hvSoundhw, PyValueEx "")
])
, (Kvm, Map.fromList
[ (hvKvmPath, PyValueEx kvmPath)
, (hvKernelPath, PyValueEx kvmKernel)
, (hvInitrdPath, PyValueEx "")
, (hvKernelArgs, PyValueEx "ro")
, (hvRootPath, PyValueEx "/dev/vda1")
, (hvAcpi, PyValueEx True)
, (hvSerialConsole, PyValueEx True)
, (hvSerialSpeed, PyValueEx (38400 :: Int))
, (hvVncBindAddress, PyValueEx "")
, (hvVncTls, PyValueEx False)
, (hvVncX509, PyValueEx "")
, (hvVncX509Verify, PyValueEx False)
, (hvVncPasswordFile, PyValueEx "")
, (hvKvmSpiceBind, PyValueEx "")
, (hvKvmSpiceIpVersion, PyValueEx ifaceNoIpVersionSpecified)
, (hvKvmSpicePasswordFile, PyValueEx "")
, (hvKvmSpiceLosslessImgCompr, PyValueEx "")
, (hvKvmSpiceJpegImgCompr, PyValueEx "")
, (hvKvmSpiceZlibGlzImgCompr, PyValueEx "")
, (hvKvmSpiceStreamingVideoDetection, PyValueEx "")
, (hvKvmSpiceAudioCompr, PyValueEx True)
, (hvKvmSpiceUseTls, PyValueEx False)
, (hvKvmSpiceTlsCiphers, PyValueEx opensslCiphers)
, (hvKvmSpiceUseVdagent, PyValueEx True)
, (hvKvmFloppyImagePath, PyValueEx "")
, (hvCdromImagePath, PyValueEx "")
, (hvKvmCdrom2ImagePath, PyValueEx "")
, (hvBootOrder, PyValueEx htBoDisk)
, (hvNicType, PyValueEx htNicParavirtual)
, (hvDiskType, PyValueEx htDiskParavirtual)
, (hvKvmCdromDiskType, PyValueEx "")
, (hvUsbMouse, PyValueEx "")
, (hvKeymap, PyValueEx "")
, (hvMigrationPort, PyValueEx (8102 :: Int))
, (hvMigrationBandwidth, PyValueEx (32 :: Int))
, (hvMigrationDowntime, PyValueEx (30 :: Int))
, (hvMigrationMode, PyValueEx htMigrationLive)
, (hvUseLocaltime, PyValueEx False)
, (hvDiskCache, PyValueEx htCacheDefault)
, (hvSecurityModel, PyValueEx htSmNone)
, (hvSecurityDomain, PyValueEx "")
, (hvKvmFlag, PyValueEx "")
, (hvVhostNet, PyValueEx False)
, (hvKvmUseChroot, PyValueEx False)
, (hvKvmUserShutdown, PyValueEx False)
, (hvMemPath, PyValueEx "")
, (hvRebootBehavior, PyValueEx instanceRebootAllowed)
, (hvCpuMask, PyValueEx cpuPinningAll)
, (hvCpuType, PyValueEx "")
, (hvCpuCores, PyValueEx (0 :: Int))
, (hvCpuThreads, PyValueEx (0 :: Int))
, (hvCpuSockets, PyValueEx (0 :: Int))
, (hvSoundhw, PyValueEx "")
, (hvUsbDevices, PyValueEx "")
, (hvVga, PyValueEx "")
, (hvKvmExtra, PyValueEx "")
, (hvKvmMachineVersion, PyValueEx "")
, (hvVnetHdr, PyValueEx True)])
, (Fake, Map.fromList [(hvMigrationMode, PyValueEx htMigrationLive)])
, (Chroot, Map.fromList [(hvInitScript, PyValueEx "/ganeti-chroot")])
, (Lxc, Map.fromList [(hvCpuMask, PyValueEx "")])
]
hvcGlobals :: FrozenSet String
hvcGlobals =
ConstantUtils.mkSet [hvMigrationBandwidth,
hvMigrationMode,
hvMigrationPort,
hvXenCmd]
becDefaults :: Map String PyValueEx
becDefaults =
Map.fromList
[ (beMinmem, PyValueEx (128 :: Int))
, (beMaxmem, PyValueEx (128 :: Int))
, (beVcpus, PyValueEx (1 :: Int))
, (beAutoBalance, PyValueEx True)
, (beAlwaysFailover, PyValueEx False)
, (beSpindleUse, PyValueEx (1 :: Int))
]
ndcDefaults :: Map String PyValueEx
ndcDefaults =
Map.fromList
[ (ndOobProgram, PyValueEx "")
, (ndSpindleCount, PyValueEx (1 :: Int))
, (ndExclusiveStorage, PyValueEx False)
, (ndOvs, PyValueEx False)
, (ndOvsName, PyValueEx defaultOvs)
, (ndOvsLink, PyValueEx "")
, (ndSshPort, PyValueEx (22 :: Int))
]
ndcGlobals :: FrozenSet String
ndcGlobals = ConstantUtils.mkSet [ndExclusiveStorage]
-- | Default delay target measured in sectors
defaultDelayTarget :: Int
defaultDelayTarget = 1
defaultDiskCustom :: String
defaultDiskCustom = ""
defaultDiskResync :: Bool
defaultDiskResync = False
-- | Default fill target measured in sectors
defaultFillTarget :: Int
defaultFillTarget = 0
-- | Default mininum rate measured in KiB/s
defaultMinRate :: Int
defaultMinRate = 4 * 1024
defaultNetCustom :: String
defaultNetCustom = ""
-- | Default plan ahead measured in sectors
--
-- The default values for the DRBD dynamic resync speed algorithm are
-- taken from the drbsetup 8.3.11 man page, except for c-plan-ahead
-- (that we don't need to set to 0, because we have a separate option
-- to enable it) and for c-max-rate, that we cap to the default value
-- for the static resync rate.
defaultPlanAhead :: Int
defaultPlanAhead = 20
defaultRbdPool :: String
defaultRbdPool = "rbd"
diskLdDefaults :: Map DiskTemplate (Map String PyValueEx)
diskLdDefaults =
Map.fromList
[ (DTBlock, Map.empty)
, (DTDrbd8, Map.fromList
[ (ldpBarriers, PyValueEx drbdBarriers)
, (ldpDefaultMetavg, PyValueEx defaultVg)
, (ldpDelayTarget, PyValueEx defaultDelayTarget)
, (ldpDiskCustom, PyValueEx defaultDiskCustom)
, (ldpDynamicResync, PyValueEx defaultDiskResync)
, (ldpFillTarget, PyValueEx defaultFillTarget)
, (ldpMaxRate, PyValueEx classicDrbdSyncSpeed)
, (ldpMinRate, PyValueEx defaultMinRate)
, (ldpNetCustom, PyValueEx defaultNetCustom)
, (ldpNoMetaFlush, PyValueEx drbdNoMetaFlush)
, (ldpPlanAhead, PyValueEx defaultPlanAhead)
, (ldpProtocol, PyValueEx drbdDefaultNetProtocol)
, (ldpResyncRate, PyValueEx classicDrbdSyncSpeed)
])
, (DTExt, Map.empty)
, (DTFile, Map.empty)
, (DTPlain, Map.fromList [(ldpStripes, PyValueEx lvmStripecount)])
, (DTRbd, Map.fromList
[ (ldpPool, PyValueEx defaultRbdPool)
, (ldpAccess, PyValueEx diskKernelspace)
])
, (DTSharedFile, Map.empty)
, (DTGluster, Map.fromList
[ (rbdAccess, PyValueEx diskKernelspace)
, (glusterHost, PyValueEx glusterHostDefault)
, (glusterVolume, PyValueEx glusterVolumeDefault)
, (glusterPort, PyValueEx glusterPortDefault)
])
]
diskDtDefaults :: Map DiskTemplate (Map String PyValueEx)
diskDtDefaults =
Map.fromList
[ (DTBlock, Map.empty)
, (DTDiskless, Map.empty)
, (DTDrbd8, Map.fromList
[ (drbdDataStripes, PyValueEx lvmStripecount)
, (drbdDefaultMetavg, PyValueEx defaultVg)
, (drbdDelayTarget, PyValueEx defaultDelayTarget)
, (drbdDiskBarriers, PyValueEx drbdBarriers)
, (drbdDiskCustom, PyValueEx defaultDiskCustom)
, (drbdDynamicResync, PyValueEx defaultDiskResync)
, (drbdFillTarget, PyValueEx defaultFillTarget)
, (drbdMaxRate, PyValueEx classicDrbdSyncSpeed)
, (drbdMetaBarriers, PyValueEx drbdNoMetaFlush)
, (drbdMetaStripes, PyValueEx lvmStripecount)
, (drbdMinRate, PyValueEx defaultMinRate)
, (drbdNetCustom, PyValueEx defaultNetCustom)
, (drbdPlanAhead, PyValueEx defaultPlanAhead)
, (drbdProtocol, PyValueEx drbdDefaultNetProtocol)
, (drbdResyncRate, PyValueEx classicDrbdSyncSpeed)
])
, (DTExt, Map.empty)
, (DTFile, Map.empty)
, (DTPlain, Map.fromList [(lvStripes, PyValueEx lvmStripecount)])
, (DTRbd, Map.fromList
[ (rbdPool, PyValueEx defaultRbdPool)
, (rbdAccess, PyValueEx diskKernelspace)
])
, (DTSharedFile, Map.empty)
, (DTGluster, Map.fromList
[ (rbdAccess, PyValueEx diskKernelspace)
, (glusterHost, PyValueEx glusterHostDefault)
, (glusterVolume, PyValueEx glusterVolumeDefault)
, (glusterPort, PyValueEx glusterPortDefault)
])
]
niccDefaults :: Map String PyValueEx
niccDefaults =
Map.fromList
[ (nicMode, PyValueEx nicModeBridged)
, (nicLink, PyValueEx defaultBridge)
, (nicVlan, PyValueEx "")
]
-- | All of the following values are quite arbitrary - there are no
-- "good" defaults, these must be customised per-site
ispecsMinmaxDefaults :: Map String (Map String Int)
ispecsMinmaxDefaults =
Map.fromList
[(ispecsMin,
Map.fromList
[(ConstantUtils.ispecMemSize, Types.iSpecMemorySize Types.defMinISpec),
(ConstantUtils.ispecCpuCount, Types.iSpecCpuCount Types.defMinISpec),
(ConstantUtils.ispecDiskCount, Types.iSpecDiskCount Types.defMinISpec),
(ConstantUtils.ispecDiskSize, Types.iSpecDiskSize Types.defMinISpec),
(ConstantUtils.ispecNicCount, Types.iSpecNicCount Types.defMinISpec),
(ConstantUtils.ispecSpindleUse, Types.iSpecSpindleUse Types.defMinISpec)]),
(ispecsMax,
Map.fromList
[(ConstantUtils.ispecMemSize, Types.iSpecMemorySize Types.defMaxISpec),
(ConstantUtils.ispecCpuCount, Types.iSpecCpuCount Types.defMaxISpec),
(ConstantUtils.ispecDiskCount, Types.iSpecDiskCount Types.defMaxISpec),
(ConstantUtils.ispecDiskSize, Types.iSpecDiskSize Types.defMaxISpec),
(ConstantUtils.ispecNicCount, Types.iSpecNicCount Types.defMaxISpec),
(ConstantUtils.ispecSpindleUse, Types.iSpecSpindleUse Types.defMaxISpec)])]
ipolicyDefaults :: Map String PyValueEx
ipolicyDefaults =
Map.fromList
[ (ispecsMinmax, PyValueEx [ispecsMinmaxDefaults])
, (ispecsStd, PyValueEx (Map.fromList
[ (ispecMemSize, 128)
, (ispecCpuCount, 1)
, (ispecDiskCount, 1)
, (ispecDiskSize, 1024)
, (ispecNicCount, 1)
, (ispecSpindleUse, 1)
] :: Map String Int))
, (ipolicyDts, PyValueEx (ConstantUtils.toList diskTemplates))
, (ipolicyVcpuRatio, PyValueEx (4.0 :: Double))
, (ipolicySpindleRatio, PyValueEx (32.0 :: Double))
]
masterPoolSizeDefault :: Int
masterPoolSizeDefault = 10
-- * Exclusive storage
-- | Error margin used to compare physical disks
partMargin :: Double
partMargin = 0.01
-- | Space reserved when creating instance disks
partReserved :: Double
partReserved = 0.02
-- * Luxid job scheduling
-- | Time intervall in seconds for polling updates on the job queue. This
-- intervall is only relevant if the number of running jobs reaches the maximal
-- allowed number, as otherwise new jobs will be started immediately anyway.
-- Also, as jobs are watched via inotify, scheduling usually works independent
-- of polling. Therefore we chose a sufficiently large interval, in the order of
-- 5 minutes. As with the interval for reloading the configuration, we chose a
-- prime number to avoid accidental 'same wakeup' with other processes.
luxidJobqueuePollInterval :: Int
luxidJobqueuePollInterval = 307
-- | The default value for the maximal number of jobs to be running at the same
-- time. Once the maximal number is reached, new jobs will just be queued and
-- only started, once some of the other jobs have finished.
luxidMaximalRunningJobsDefault :: Int
luxidMaximalRunningJobsDefault = 20
-- * Confd
confdProtocolVersion :: Int
confdProtocolVersion = ConstantUtils.confdProtocolVersion
-- Confd request type
confdReqPing :: Int
confdReqPing = Types.confdRequestTypeToRaw ReqPing
confdReqNodeRoleByname :: Int
confdReqNodeRoleByname = Types.confdRequestTypeToRaw ReqNodeRoleByName
confdReqNodePipByInstanceIp :: Int
confdReqNodePipByInstanceIp = Types.confdRequestTypeToRaw ReqNodePipByInstPip
confdReqClusterMaster :: Int
confdReqClusterMaster = Types.confdRequestTypeToRaw ReqClusterMaster
confdReqNodePipList :: Int
confdReqNodePipList = Types.confdRequestTypeToRaw ReqNodePipList
confdReqMcPipList :: Int
confdReqMcPipList = Types.confdRequestTypeToRaw ReqMcPipList
confdReqInstancesIpsList :: Int
confdReqInstancesIpsList = Types.confdRequestTypeToRaw ReqInstIpsList
confdReqNodeDrbd :: Int
confdReqNodeDrbd = Types.confdRequestTypeToRaw ReqNodeDrbd
confdReqNodeInstances :: Int
confdReqNodeInstances = Types.confdRequestTypeToRaw ReqNodeInstances
confdReqs :: FrozenSet Int
confdReqs =
ConstantUtils.mkSet .
map Types.confdRequestTypeToRaw $
[minBound..] \\ [ReqNodeInstances]
-- * Confd request type
confdReqfieldName :: Int
confdReqfieldName = Types.confdReqFieldToRaw ReqFieldName
confdReqfieldIp :: Int
confdReqfieldIp = Types.confdReqFieldToRaw ReqFieldIp
confdReqfieldMnodePip :: Int
confdReqfieldMnodePip = Types.confdReqFieldToRaw ReqFieldMNodePip
-- * Confd repl status
confdReplStatusOk :: Int
confdReplStatusOk = Types.confdReplyStatusToRaw ReplyStatusOk
confdReplStatusError :: Int
confdReplStatusError = Types.confdReplyStatusToRaw ReplyStatusError
confdReplStatusNotimplemented :: Int
confdReplStatusNotimplemented = Types.confdReplyStatusToRaw ReplyStatusNotImpl
confdReplStatuses :: FrozenSet Int
confdReplStatuses =
ConstantUtils.mkSet $ map Types.confdReplyStatusToRaw [minBound..]
-- * Confd node role
confdNodeRoleMaster :: Int
confdNodeRoleMaster = Types.confdNodeRoleToRaw NodeRoleMaster
confdNodeRoleCandidate :: Int
confdNodeRoleCandidate = Types.confdNodeRoleToRaw NodeRoleCandidate
confdNodeRoleOffline :: Int
confdNodeRoleOffline = Types.confdNodeRoleToRaw NodeRoleOffline
confdNodeRoleDrained :: Int
confdNodeRoleDrained = Types.confdNodeRoleToRaw NodeRoleDrained
confdNodeRoleRegular :: Int
confdNodeRoleRegular = Types.confdNodeRoleToRaw NodeRoleRegular
-- * A few common errors for confd
confdErrorUnknownEntry :: Int
confdErrorUnknownEntry = Types.confdErrorTypeToRaw ConfdErrorUnknownEntry
confdErrorInternal :: Int
confdErrorInternal = Types.confdErrorTypeToRaw ConfdErrorInternal
confdErrorArgument :: Int
confdErrorArgument = Types.confdErrorTypeToRaw ConfdErrorArgument
-- * Confd request query fields
confdReqqLink :: String
confdReqqLink = ConstantUtils.confdReqqLink
confdReqqIp :: String
confdReqqIp = ConstantUtils.confdReqqIp
confdReqqIplist :: String
confdReqqIplist = ConstantUtils.confdReqqIplist
confdReqqFields :: String
confdReqqFields = ConstantUtils.confdReqqFields
-- | Each request is "salted" by the current timestamp.
--
-- This constant decides how many seconds of skew to accept.
--
-- TODO: make this a default and allow the value to be more
-- configurable
confdMaxClockSkew :: Int
confdMaxClockSkew = 2 * nodeMaxClockSkew
-- | When we haven't reloaded the config for more than this amount of
-- seconds, we force a test to see if inotify is betraying us. Using a
-- prime number to ensure we get less chance of 'same wakeup' with
-- other processes.
confdConfigReloadTimeout :: Int
confdConfigReloadTimeout = 17
-- | If we receive more than one update in this amount of
-- microseconds, we move to polling every RATELIMIT seconds, rather
-- than relying on inotify, to be able to serve more requests.
confdConfigReloadRatelimit :: Int
confdConfigReloadRatelimit = 250000
-- | Magic number prepended to all confd queries.
--
-- This allows us to distinguish different types of confd protocols
-- and handle them. For example by changing this we can move the whole
-- payload to be compressed, or move away from json.
confdMagicFourcc :: String
confdMagicFourcc = "plj0"
-- | By default a confd request is sent to the minimum between this
-- number and all MCs. 6 was chosen because even in the case of a
-- disastrous 50% response rate, we should have enough answers to be
-- able to compare more than one.
confdDefaultReqCoverage :: Int
confdDefaultReqCoverage = 6
-- | Timeout in seconds to expire pending query request in the confd
-- client library. We don't actually expect any answer more than 10
-- seconds after we sent a request.
confdClientExpireTimeout :: Int
confdClientExpireTimeout = 10
-- | Maximum UDP datagram size.
--
-- On IPv4: 64K - 20 (ip header size) - 8 (udp header size) = 65507
-- On IPv6: 64K - 40 (ip6 header size) - 8 (udp header size) = 65487
-- (assuming we can't use jumbo frames)
-- We just set this to 60K, which should be enough
maxUdpDataSize :: Int
maxUdpDataSize = 61440
-- * User-id pool minimum/maximum acceptable user-ids
uidpoolUidMin :: Int
uidpoolUidMin = 0
-- | Assuming 32 bit user-ids
uidpoolUidMax :: Integer
uidpoolUidMax = 2 ^ 32 - 1
-- | Name or path of the pgrep command
pgrep :: String
pgrep = "pgrep"
-- | Name of the node group that gets created at cluster init or
-- upgrade
initialNodeGroupName :: String
initialNodeGroupName = "default"
-- * Possible values for NodeGroup.alloc_policy
allocPolicyLastResort :: String
allocPolicyLastResort = Types.allocPolicyToRaw AllocLastResort
allocPolicyPreferred :: String
allocPolicyPreferred = Types.allocPolicyToRaw AllocPreferred
allocPolicyUnallocable :: String
allocPolicyUnallocable = Types.allocPolicyToRaw AllocUnallocable
validAllocPolicies :: [String]
validAllocPolicies = map Types.allocPolicyToRaw [minBound..]
-- | Temporary external/shared storage parameters
blockdevDriverManual :: String
blockdevDriverManual = Types.blockDriverToRaw BlockDrvManual
-- | 'qemu-img' path, required for 'ovfconverter'
qemuimgPath :: String
qemuimgPath = AutoConf.qemuimgPath
-- | Whether htools was enabled at compilation time
--
-- FIXME: this should be moved next to the other enable constants,
-- such as, 'enableConfd', and renamed to 'enableHtools'.
htools :: Bool
htools = AutoConf.htools
-- | The hail iallocator
iallocHail :: String
iallocHail = "hail"
-- * Fake opcodes for functions that have hooks attached to them via
-- backend.RunLocalHooks
fakeOpMasterTurndown :: String
fakeOpMasterTurndown = "OP_CLUSTER_IP_TURNDOWN"
fakeOpMasterTurnup :: String
fakeOpMasterTurnup = "OP_CLUSTER_IP_TURNUP"
-- * Crypto Types
-- Types of cryptographic tokens used in node communication
cryptoTypeSslDigest :: String
cryptoTypeSslDigest = "ssl"
cryptoTypeSsh :: String
cryptoTypeSsh = "ssh"
-- So far only ssl keys are used in the context of this constant
cryptoTypes :: FrozenSet String
cryptoTypes = ConstantUtils.mkSet [cryptoTypeSslDigest]
-- * Crypto Actions
-- Actions that can be performed on crypto tokens
cryptoActionGet :: String
cryptoActionGet = "get"
-- This is 'create and get'
cryptoActionCreate :: String
cryptoActionCreate = "create"
cryptoActions :: FrozenSet String
cryptoActions = ConstantUtils.mkSet [cryptoActionGet, cryptoActionCreate]
-- * Options for CryptoActions
-- Filename of the certificate
cryptoOptionCertFile :: String
cryptoOptionCertFile = "cert_file"
-- * SSH key types
sshkDsa :: String
sshkDsa = "dsa"
sshkRsa :: String
sshkRsa = "rsa"
sshkAll :: FrozenSet String
sshkAll = ConstantUtils.mkSet [sshkRsa, sshkDsa]
-- * SSH authorized key types
sshakDss :: String
sshakDss = "ssh-dss"
sshakRsa :: String
sshakRsa = "ssh-rsa"
sshakAll :: FrozenSet String
sshakAll = ConstantUtils.mkSet [sshakDss, sshakRsa]
-- * SSH setup
sshsClusterName :: String
sshsClusterName = "cluster_name"
sshsSshHostKey :: String
sshsSshHostKey = "ssh_host_key"
sshsSshRootKey :: String
sshsSshRootKey = "ssh_root_key"
sshsNodeDaemonCertificate :: String
sshsNodeDaemonCertificate = "node_daemon_certificate"
-- * Key files for SSH daemon
sshHostDsaPriv :: String
sshHostDsaPriv = sshConfigDir ++ "/ssh_host_dsa_key"
sshHostDsaPub :: String
sshHostDsaPub = sshHostDsaPriv ++ ".pub"
sshHostRsaPriv :: String
sshHostRsaPriv = sshConfigDir ++ "/ssh_host_rsa_key"
sshHostRsaPub :: String
sshHostRsaPub = sshHostRsaPriv ++ ".pub"
sshDaemonKeyfiles :: Map String (String, String)
sshDaemonKeyfiles =
Map.fromList [ (sshkRsa, (sshHostRsaPriv, sshHostRsaPub))
, (sshkDsa, (sshHostDsaPriv, sshHostDsaPub))
]
-- * Node daemon setup
ndsClusterName :: String
ndsClusterName = "cluster_name"
ndsNodeDaemonCertificate :: String
ndsNodeDaemonCertificate = "node_daemon_certificate"
ndsSsconf :: String
ndsSsconf = "ssconf"
ndsStartNodeDaemon :: String
ndsStartNodeDaemon = "start_node_daemon"
-- * VCluster related constants
vClusterEtcHosts :: String
vClusterEtcHosts = "/etc/hosts"
vClusterVirtPathPrefix :: String
vClusterVirtPathPrefix = "/###-VIRTUAL-PATH-###,"
vClusterRootdirEnvname :: String
vClusterRootdirEnvname = "GANETI_ROOTDIR"
vClusterHostnameEnvname :: String
vClusterHostnameEnvname = "GANETI_HOSTNAME"
vClusterVpathWhitelist :: FrozenSet String
vClusterVpathWhitelist = ConstantUtils.mkSet [ vClusterEtcHosts ]
-- * The source reasons for the execution of an OpCode
opcodeReasonSrcClient :: String
opcodeReasonSrcClient = "gnt:client"
opcodeReasonSrcNoded :: String
opcodeReasonSrcNoded = "gnt:daemon:noded"
opcodeReasonSrcOpcode :: String
opcodeReasonSrcOpcode = "gnt:opcode"
opcodeReasonSrcRlib2 :: String
opcodeReasonSrcRlib2 = "gnt:library:rlib2"
opcodeReasonSrcUser :: String
opcodeReasonSrcUser = "gnt:user"
opcodeReasonSources :: FrozenSet String
opcodeReasonSources =
ConstantUtils.mkSet [opcodeReasonSrcClient,
opcodeReasonSrcNoded,
opcodeReasonSrcOpcode,
opcodeReasonSrcRlib2,
opcodeReasonSrcUser]
-- | Path generating random UUID
randomUuidFile :: String
randomUuidFile = ConstantUtils.randomUuidFile
-- * Auto-repair tag prefixes
autoRepairTagPrefix :: String
autoRepairTagPrefix = "ganeti:watcher:autorepair:"
autoRepairTagEnabled :: String
autoRepairTagEnabled = autoRepairTagPrefix
autoRepairTagPending :: String
autoRepairTagPending = autoRepairTagPrefix ++ "pending:"
autoRepairTagResult :: String
autoRepairTagResult = autoRepairTagPrefix ++ "result:"
autoRepairTagSuspended :: String
autoRepairTagSuspended = autoRepairTagPrefix ++ "suspend:"
-- * Auto-repair levels
autoRepairFailover :: String
autoRepairFailover = Types.autoRepairTypeToRaw ArFailover
autoRepairFixStorage :: String
autoRepairFixStorage = Types.autoRepairTypeToRaw ArFixStorage
autoRepairMigrate :: String
autoRepairMigrate = Types.autoRepairTypeToRaw ArMigrate
autoRepairReinstall :: String
autoRepairReinstall = Types.autoRepairTypeToRaw ArReinstall
autoRepairAllTypes :: FrozenSet String
autoRepairAllTypes =
ConstantUtils.mkSet [autoRepairFailover,
autoRepairFixStorage,
autoRepairMigrate,
autoRepairReinstall]
-- * Auto-repair results
autoRepairEnoperm :: String
autoRepairEnoperm = Types.autoRepairResultToRaw ArEnoperm
autoRepairFailure :: String
autoRepairFailure = Types.autoRepairResultToRaw ArFailure
autoRepairSuccess :: String
autoRepairSuccess = Types.autoRepairResultToRaw ArSuccess
autoRepairAllResults :: FrozenSet String
autoRepairAllResults =
ConstantUtils.mkSet [autoRepairEnoperm, autoRepairFailure, autoRepairSuccess]
-- | The version identifier for builtin data collectors
builtinDataCollectorVersion :: String
builtinDataCollectorVersion = "B"
-- | The reason trail opcode parameter name
opcodeReason :: String
opcodeReason = "reason"
diskstatsFile :: String
diskstatsFile = "/proc/diskstats"
-- * CPU load collector
statFile :: String
statFile = "/proc/stat"
cpuavgloadBufferSize :: Int
cpuavgloadBufferSize = 150
cpuavgloadWindowSize :: Int
cpuavgloadWindowSize = 600
-- * Monitoring daemon
-- | Mond's variable for periodical data collection
mondTimeInterval :: Int
mondTimeInterval = 5
-- | Mond's latest API version
mondLatestApiVersion :: Int
mondLatestApiVersion = 1
-- * Disk access modes
diskUserspace :: String
diskUserspace = Types.diskAccessModeToRaw DiskUserspace
diskKernelspace :: String
diskKernelspace = Types.diskAccessModeToRaw DiskKernelspace
diskValidAccessModes :: FrozenSet String
diskValidAccessModes =
ConstantUtils.mkSet $ map Types.diskAccessModeToRaw [minBound..]
-- | Timeout for queue draining in upgrades
upgradeQueueDrainTimeout :: Int
upgradeQueueDrainTimeout = 36 * 60 * 60 -- 1.5 days
-- | Intervall at which the queue is polled during upgrades
upgradeQueuePollInterval :: Int
upgradeQueuePollInterval = 10
-- * Hotplug Actions
hotplugActionAdd :: String
hotplugActionAdd = Types.hotplugActionToRaw HAAdd
hotplugActionRemove :: String
hotplugActionRemove = Types.hotplugActionToRaw HARemove
hotplugActionModify :: String
hotplugActionModify = Types.hotplugActionToRaw HAMod
hotplugAllActions :: FrozenSet String
hotplugAllActions =
ConstantUtils.mkSet $ map Types.hotplugActionToRaw [minBound..]
-- * Hotplug Device Targets
hotplugTargetNic :: String
hotplugTargetNic = Types.hotplugTargetToRaw HTNic
hotplugTargetDisk :: String
hotplugTargetDisk = Types.hotplugTargetToRaw HTDisk
hotplugAllTargets :: FrozenSet String
hotplugAllTargets =
ConstantUtils.mkSet $ map Types.hotplugTargetToRaw [minBound..]
-- | Timeout for disk removal (seconds)
diskRemoveRetryTimeout :: Int
diskRemoveRetryTimeout = 30
-- | Interval between disk removal retries (seconds)
diskRemoveRetryInterval :: Int
diskRemoveRetryInterval = 3
-- * UUID regex
uuidRegex :: String
uuidRegex = "^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$"
-- * Luxi constants
luxiSocketPerms :: Int
luxiSocketPerms = 0o660
luxiKeyMethod :: String
luxiKeyMethod = "method"
luxiKeyArgs :: String
luxiKeyArgs = "args"
luxiKeySuccess :: String
luxiKeySuccess = "success"
luxiKeyResult :: String
luxiKeyResult = "result"
luxiKeyVersion :: String
luxiKeyVersion = "version"
luxiReqSubmitJob :: String
luxiReqSubmitJob = "SubmitJob"
luxiReqSubmitJobToDrainedQueue :: String
luxiReqSubmitJobToDrainedQueue = "SubmitJobToDrainedQueue"
luxiReqSubmitManyJobs :: String
luxiReqSubmitManyJobs = "SubmitManyJobs"
luxiReqWaitForJobChange :: String
luxiReqWaitForJobChange = "WaitForJobChange"
luxiReqPickupJob :: String
luxiReqPickupJob = "PickupJob"
luxiReqCancelJob :: String
luxiReqCancelJob = "CancelJob"
luxiReqArchiveJob :: String
luxiReqArchiveJob = "ArchiveJob"
luxiReqChangeJobPriority :: String
luxiReqChangeJobPriority = "ChangeJobPriority"
luxiReqAutoArchiveJobs :: String
luxiReqAutoArchiveJobs = "AutoArchiveJobs"
luxiReqQuery :: String
luxiReqQuery = "Query"
luxiReqQueryFields :: String
luxiReqQueryFields = "QueryFields"
luxiReqQueryJobs :: String
luxiReqQueryJobs = "QueryJobs"
luxiReqQueryInstances :: String
luxiReqQueryInstances = "QueryInstances"
luxiReqQueryNodes :: String
luxiReqQueryNodes = "QueryNodes"
luxiReqQueryGroups :: String
luxiReqQueryGroups = "QueryGroups"
luxiReqQueryNetworks :: String
luxiReqQueryNetworks = "QueryNetworks"
luxiReqQueryExports :: String
luxiReqQueryExports = "QueryExports"
luxiReqQueryConfigValues :: String
luxiReqQueryConfigValues = "QueryConfigValues"
luxiReqQueryClusterInfo :: String
luxiReqQueryClusterInfo = "QueryClusterInfo"
luxiReqQueryTags :: String
luxiReqQueryTags = "QueryTags"
luxiReqSetDrainFlag :: String
luxiReqSetDrainFlag = "SetDrainFlag"
luxiReqSetWatcherPause :: String
luxiReqSetWatcherPause = "SetWatcherPause"
luxiReqAll :: FrozenSet String
luxiReqAll =
ConstantUtils.mkSet
[ luxiReqArchiveJob
, luxiReqAutoArchiveJobs
, luxiReqCancelJob
, luxiReqChangeJobPriority
, luxiReqQuery
, luxiReqQueryClusterInfo
, luxiReqQueryConfigValues
, luxiReqQueryExports
, luxiReqQueryFields
, luxiReqQueryGroups
, luxiReqQueryInstances
, luxiReqQueryJobs
, luxiReqQueryNodes
, luxiReqQueryNetworks
, luxiReqQueryTags
, luxiReqSetDrainFlag
, luxiReqSetWatcherPause
, luxiReqSubmitJob
, luxiReqSubmitJobToDrainedQueue
, luxiReqSubmitManyJobs
, luxiReqWaitForJobChange
, luxiReqPickupJob
]
luxiDefCtmo :: Int
luxiDefCtmo = 10
luxiDefRwto :: Int
luxiDefRwto = 60
-- | 'WaitForJobChange' timeout
luxiWfjcTimeout :: Int
luxiWfjcTimeout = (luxiDefRwto - 1) `div` 2
-- * Query language constants
-- ** Logic operators with one or more operands, each of which is a
-- filter on its own
qlangOpAnd :: String
qlangOpAnd = "&"
qlangOpOr :: String
qlangOpOr = "|"
-- ** Unary operators with exactly one operand
qlangOpNot :: String
qlangOpNot = "!"
qlangOpTrue :: String
qlangOpTrue = "?"
-- ** Binary operators with exactly two operands, the field name and
-- an operator-specific value
qlangOpContains :: String
qlangOpContains = "=[]"
qlangOpEqual :: String
qlangOpEqual = "="
qlangOpGe :: String
qlangOpGe = ">="
qlangOpGt :: String
qlangOpGt = ">"
qlangOpLe :: String
qlangOpLe = "<="
qlangOpLt :: String
qlangOpLt = "<"
qlangOpNotEqual :: String
qlangOpNotEqual = "!="
qlangOpRegexp :: String
qlangOpRegexp = "=~"
-- | Characters used for detecting user-written filters (see
-- L{_CheckFilter})
qlangFilterDetectionChars :: FrozenSet String
qlangFilterDetectionChars =
ConstantUtils.mkSet ["!", " ", "\"", "\'",
")", "(", "\x0b", "\n",
"\r", "\x0c", "/", "<",
"\t", ">", "=", "\\", "~"]
-- | Characters used to detect globbing filters
qlangGlobDetectionChars :: FrozenSet String
qlangGlobDetectionChars = ConstantUtils.mkSet ["*", "?"]
-- * Error related constants
--
-- 'OpPrereqError' failure types
-- | Environment error (e.g. node disk error)
errorsEcodeEnviron :: String
errorsEcodeEnviron = "environment_error"
-- | Entity already exists
errorsEcodeExists :: String
errorsEcodeExists = "already_exists"
-- | Internal cluster error
errorsEcodeFault :: String
errorsEcodeFault = "internal_error"
-- | Wrong arguments (at syntax level)
errorsEcodeInval :: String
errorsEcodeInval = "wrong_input"
-- | Entity not found
errorsEcodeNoent :: String
errorsEcodeNoent = "unknown_entity"
-- | Not enough resources (iallocator failure, disk space, memory, etc)
errorsEcodeNores :: String
errorsEcodeNores = "insufficient_resources"
-- | Resource not unique (e.g. MAC or IP duplication)
errorsEcodeNotunique :: String
errorsEcodeNotunique = "resource_not_unique"
-- | Resolver errors
errorsEcodeResolver :: String
errorsEcodeResolver = "resolver_error"
-- | Wrong entity state
errorsEcodeState :: String
errorsEcodeState = "wrong_state"
-- | Temporarily out of resources; operation can be tried again
errorsEcodeTempNores :: String
errorsEcodeTempNores = "temp_insufficient_resources"
errorsEcodeAll :: FrozenSet String
errorsEcodeAll =
ConstantUtils.mkSet [ errorsEcodeNores
, errorsEcodeExists
, errorsEcodeState
, errorsEcodeNotunique
, errorsEcodeTempNores
, errorsEcodeNoent
, errorsEcodeFault
, errorsEcodeResolver
, errorsEcodeInval
, errorsEcodeEnviron
]
-- * Jstore related constants
jstoreJobsPerArchiveDirectory :: Int
jstoreJobsPerArchiveDirectory = 10000
-- * Gluster settings
-- | Name of the Gluster host setting
glusterHost :: String
glusterHost = "host"
-- | Default value of the Gluster host setting
glusterHostDefault :: String
glusterHostDefault = "127.0.0.1"
-- | Name of the Gluster volume setting
glusterVolume :: String
glusterVolume = "volume"
-- | Default value of the Gluster volume setting
glusterVolumeDefault :: String
glusterVolumeDefault = "gv0"
-- | Name of the Gluster port setting
glusterPort :: String
glusterPort = "port"
-- | Default value of the Gluster port setting
glusterPortDefault :: Int
glusterPortDefault = 24007
-- * Instance communication
instanceCommunicationDoc :: String
instanceCommunicationDoc =
"Enable or disable the communication mechanism for an instance"
instanceCommunicationNetwork :: String
instanceCommunicationNetwork = "ganeti:network:communication"
instanceCommunicationNicPrefix :: String
instanceCommunicationNicPrefix = "ganeti:communication:"
| badp/ganeti | src/Ganeti/Constants.hs | gpl-2.0 | 122,814 | 0 | 13 | 23,977 | 21,945 | 12,966 | 8,979 | 3,059 | 1 |
{-# LANGUAGE TypeSynonymInstances #-}
{-# LANGUAGE FlexibleInstances #-}
module BN.Types where
import BN.Common
import qualified Data.Map as M
import Data.Maybe()
data TyDict = TyDict
{ tynames :: [String]
, tyvals :: M.Map String [String]
, tyvars :: M.Map String String
} deriving Show
type Ty = StateT TyDict
instance (M m) => M (Ty m) where
-----------------------------------------------------------------------------------------------
-- * Unwrapping
runTyped :: (M m) => [(String, [String])] -> Ty m a -> m a
runTyped tys = let ns = map fst tys
in assert (validTypes tys) $ flip evalStateT (TyDict ns (M.fromList tys) M.empty)
where
-- no empty types allowed.
validTypes = all ((> 0) . length . snd)
runTypedFor :: (M m) => TyDict -> Ty m a -> m a
runTypedFor tdict = flip evalStateT tdict
-----------------------------------------------------------------------------------------------
-- * Interface
-----------------------------------------------------------------------------------------------
-- ** Predicates
-- |Is a type declared?
tyTyIsDecl :: (M m) => String -> Ty m Bool
tyTyIsDecl s = (s `elem`) . tynames <$> get
-- |Is a variable declared?
tyVarIsDecl :: (M m) => String -> Ty m Bool
tyVarIsDecl s = (s `elem`) . M.keys . tyvars <$> get
-----------------------------------------------------------------------------------------------
-- ** Functionality
-- |Declares a new variable @v@ with type @t@. If @t@ is not registered as a type
-- will throw an exception. Will not check for redeclarations, though.
tyDeclVar :: (M m) => String -> String -> Ty m ()
tyDeclVar var ty
= do
assertM_ ((&&) <$> tyTyIsDecl ty <*> (not <$> tyVarIsDecl var))
modify (\s -> s { tyvars = M.insert var ty (tyvars s) })
-- |Deletes a variable
tyUndeclVar :: (M m) => String -> Ty m ()
tyUndeclVar var = modify (\s -> s { tyvars = M.delete var (tyvars s) })
-- |Returns the type of a variable, if it exists.
tyOf :: (M m) => String -> Ty m (Maybe String)
tyOf s = M.lookup s . tyvars <$> get
-- |Returns the values of a type, if it exists.
tyVals :: (M m) => String -> Ty m (Maybe [String])
tyVals s = M.lookup s . tyvals <$> get
tyValsFor :: (M m) => String -> Ty m (Maybe [String])
tyValsFor s = tyOf s >>= maybe (return Nothing) tyVals
-- |Is a given value member of a given type?
tyValidVal :: (M m) => String -> String -> Ty m Bool
tyValidVal val ty = (val `elem`) . maybe [] id <$> tyVals ty
-- |Returns all possible configurations for a given set of variables.
tyConfs :: (M m) => [String] -> Ty m (Maybe [[(String, String)]])
tyConfs vs
= do
vsvals <- mapM tyValsFor vs
return (sequence vsvals >>= return . combine vs)
where
combine :: [String] -> [[String]] -> [[(String, String)]]
combine [] _ = [[]]
combine (v:vs) (ty:tys)
= let cvs = combine vs tys
in [(v, t):c | t <- ty, c <- cvs ]
| VictorCMiraldo/hs-bn | BN/Types.hs | gpl-2.0 | 2,975 | 0 | 14 | 659 | 1,001 | 537 | 464 | 48 | 2 |
{-# LANGUAGE BangPatterns, CPP, ScopedTypeVariables, RankNTypes #-}
-- | To make it easier to build (multithreaded) tests
module TestHelpers
(
-- * Testing parameters
numElems, getNumAgents, producerRatio,
-- * Utility for controlling the number of threads used by generated tests.
setTestThreads,
-- * Test initialization, reading common configs
stdTestHarness,
-- * A replacement for defaultMain that uses a 1-thread worker pool
defaultMainSeqTests,
-- * Misc utilities
nTimes, for_, forDown_, assertOr, timeOut, assertNoTimeOut, splitRange, timeit,
theEnv,
-- timeOutPure,
exceptionOrTimeOut, allowSomeExceptions, assertException
)
where
import Control.Monad
import Control.Exception
--import Control.Concurrent
--import Control.Concurrent.MVar
import GHC.Conc
import Data.IORef
import Data.Word
import Data.Time.Clock
import Data.List (isInfixOf, intersperse, nub)
import Text.Printf
import Control.Concurrent (forkOS, forkIO, ThreadId)
-- import Control.Exception (catch, SomeException, fromException, bracket, AsyncException(ThreadKilled))
import Control.Exception (bracket)
import System.Environment (withArgs, getArgs, getEnvironment)
import System.IO (hFlush, stdout, stderr, hPutStrLn)
import System.IO.Unsafe (unsafePerformIO)
import System.Mem (performGC)
import System.Exit
import qualified Test.Framework as TF
import Test.Framework.Providers.HUnit (hUnitTestToTests)
import Data.Monoid (mappend, mempty)
import Test.Framework.Runners.Console (interpretArgs, defaultMainWithOpts)
import Test.Framework.Runners.Options (RunnerOptions'(..))
import Test.Framework.Options (TestOptions'(..))
import Test.HUnit as HU
import Debug.Trace (trace)
--------------------------------------------------------------------------------
#if __GLASGOW_HASKELL__ >= 704
import GHC.Conc (getNumCapabilities, setNumCapabilities, getNumProcessors)
#else
import GHC.Conc (numCapabilities)
getNumCapabilities :: IO Int
getNumCapabilities = return numCapabilities
setNumCapabilities :: Int -> IO ()
setNumCapabilities = error "setNumCapabilities not supported in this older GHC! Set NUMTHREADS and +RTS -N to match."
getNumProcessors :: IO Int
getNumProcessors = return 1
#endif
theEnv :: [(String, String)]
theEnv = unsafePerformIO getEnvironment
----------------------------------------------------------------------------------------------------
-- TODO: In addition to setting these parameters from environment
-- variables, it would be nice to route all of this through a
-- configuration record, so that it can be changed programmatically.
-- How many elements should each of the tests pump through the queue(s)?
numElems :: Maybe Int
numElems = case lookup "NUMELEMS" theEnv of
Nothing -> Nothing -- 100 * 1000 -- 500000
Just str -> warnUsing ("NUMELEMS = "++str) $
Just (read str)
forkThread :: IO () -> IO ThreadId
forkThread = case lookup "OSTHREADS" theEnv of
Nothing -> forkIO
Just x -> warnUsing ("OSTHREADS = "++x) $
case x of
"0" -> forkIO
"False" -> forkIO
"1" -> forkOS
"True" -> forkOS
oth -> error$"OSTHREAD environment variable set to unrecognized option: "++oth
-- | How many communicating agents are there? By default one per
-- thread used by the RTS.
getNumAgents :: IO Int
getNumAgents = case lookup "NUMAGENTS" theEnv of
Nothing -> getNumCapabilities
Just str -> warnUsing ("NUMAGENTS = "++str) $
return (read str)
-- | It is possible to have imbalanced concurrency where there is more
-- contention on the producing or consuming side (which corresponds to
-- settings of this parameter less than or greater than 1).
producerRatio :: Double
producerRatio = case lookup "PRODUCERRATIO" theEnv of
Nothing -> 1.0
Just str -> warnUsing ("PRODUCERRATIO = "++str) $
read str
warnUsing :: String -> a -> a
warnUsing str a = trace (" [Warning]: Using environment variable "++str) a
-- | Dig through the test constructors to find the leaf IO actions and bracket them
-- with a thread-setting action.
setTestThreads :: Int -> HU.Test -> HU.Test
setTestThreads nm tst = loop False tst
where
loop flg x =
case x of
TestLabel lb t2 -> TestLabel (decor flg lb) (loop True t2)
TestList ls -> TestList (map (loop flg) ls)
TestCase io -> TestCase (bracketThreads nm io)
-- We only need to insert the numcapabilities in the description string ONCE:
decor False lb = "N"++show nm++"_"++ lb
decor True lb = lb
bracketThreads :: Int -> IO a -> IO a
bracketThreads n act =
bracket (getNumCapabilities)
setNumCapabilities
(\_ -> do dbgPrint 1 ("\n [Setting # capabilities to "++show n++" before test] \n")
setNumCapabilities n
act)
-- | Repeat a group of tests while varying the number of OS threads used. Also,
-- read configuration info.
--
-- WARNING: uses setNumCapabilities.
stdTestHarness :: (IO Test) -> IO ()
stdTestHarness genTests = do
numAgents <- getNumAgents
putStrLn$ "Running with numElems "++show numElems++" and numAgents "++ show numAgents
putStrLn "Use NUMELEMS, NUMAGENTS, NUMTHREADS to control the size of this benchmark."
args <- getArgs
np <- getNumProcessors
putStrLn $"Running on a machine with "++show np++" hardware threads."
-- We allow the user to set this directly, because the "-t" based regexp selection
-- of benchmarks is quite limited.
let all_threads = case lookup "NUMTHREADS" theEnv of
Just str -> [read str]
Nothing -> nub [1, 2, np `quot` 2, np, 2*np ]
putStrLn $"Running tests for these thread settings: " ++show all_threads
all_tests <- genTests
-- Don't allow concurent tests (the tests are concurrent!):
withArgs (args ++ ["-j1","--jxml=test-results.xml"]) $ do
-- Hack, this shouldn't be necessary, but I'm having problems with -t:
tests <- case all_threads of
[one] -> do cap <- getNumCapabilities
unless (cap == one) $ setNumCapabilities one
return all_tests
_ -> return$ TestList [ setTestThreads n all_tests | n <- all_threads ]
TF.defaultMain$ hUnitTestToTests tests
----------------------------------------------------------------------------------------------------
-- DEBUGGING
----------------------------------------------------------------------------------------------------
-- | Debugging flag shared by all accelerate-backend-kit modules.
-- This is activated by setting the environment variable DEBUG=1..5
dbg :: Int
dbg = case lookup "DEBUG" theEnv of
Nothing -> defaultDbg
Just "" -> defaultDbg
Just "0" -> defaultDbg
Just s ->
trace (" ! Responding to env Var: DEBUG="++s)$
case reads s of
((n,_):_) -> n
[] -> error$"Attempt to parse DEBUG env var as Int failed: "++show s
defaultDbg :: Int
defaultDbg = 0
-- | Print if the debug level is at or above a threshold.
dbgPrint :: Int -> String -> IO ()
dbgPrint lvl str = if dbg < lvl then return () else do
-- hPutStrLn stderr str
-- hPrintf stderr str
-- hFlush stderr
printf str
hFlush stdout
dbgPrintLn :: Int -> String -> IO ()
dbgPrintLn lvl str = dbgPrint lvl (str++"\n")
------------------------------------------------------------------------------------------
-- Misc Helpers
------------------------------------------------------------------------------------------
-- | Ensure that executing an action returns an exception
-- containing one of the expected messages.
assertException :: [String] -> IO a -> IO ()
assertException msgs action = do
x <- catch (do action; return Nothing)
(\e -> do putStrLn $ "Good. Caught exception: " ++ show (e :: SomeException)
return (Just $ show e))
case x of
Nothing -> HU.assertFailure "Failed to get an exception!"
Just s ->
if any (`isInfixOf` s) msgs
then return ()
else HU.assertFailure $ "Got the wrong exception, expected one of the strings: "++ show msgs
++ "\nInstead got this exception:\n " ++ show s
-- | For testing quasi-deterministic programs: programs that always
-- either raise a particular exception or produce a particular answer.
allowSomeExceptions :: [String] -> IO a -> IO (Either SomeException a)
allowSomeExceptions msgs action = do
catch (do a <- action; evaluate a; return (Right a))
(\e ->
let estr = show e in
if any (`isInfixOf` estr) msgs
then do when True $ -- (dbgLvl>=1) $
putStrLn $ "Caught allowed exception: " ++ show (e :: SomeException)
return (Left e)
else do HU.assertFailure $ "Got the wrong exception, expected one of the strings: "++ show msgs
++ "\nInstead got this exception:\n " ++ show estr
error "Should not reach this..."
)
exceptionOrTimeOut :: Show a => Double -> [String] -> IO a -> IO ()
exceptionOrTimeOut time msgs action = do
x <- timeOut time $
allowSomeExceptions msgs action
case x of
Just (Right _val) -> HU.assertFailure "exceptionOrTimeOut: action returned successfully!"
Just (Left _exn) -> return () -- Error, yay!
Nothing -> return () -- Timeout.
-- | Simple wrapper around `timeOut` that throws an error if timeOut occurs.
assertNoTimeOut :: Show a => Double -> IO a -> IO a
assertNoTimeOut t a = do
m <- timeOut t a
case m of
Nothing -> do HU.assertFailure$ "assertNoTimeOut: thread failed or timeout occurred after "++show t++" seconds"
error "Should not reach this #2"
Just a -> return a
-- | Time-out an IO action by running it on a separate thread, which is killed when
-- the timer (in seconds) expires. This requires that the action do allocation, otherwise it will
-- be non-preemptable.
timeOut :: Show a => Double -> IO a -> IO (Maybe a)
timeOut interval act = do
result <- newIORef Nothing
tid <- forkIO (act >>= writeIORef result . Just)
t0 <- getCurrentTime
let loop = do
stat <- threadStatus tid
case stat of
ThreadFinished -> readIORef result
ThreadBlocked r -> timeCheckAndLoop
ThreadDied -> do putStrLn " [lvish-tests] Time-out check -- thread died!"
return Nothing
ThreadRunning -> timeCheckAndLoop
timeCheckAndLoop = do
now <- getCurrentTime
let delt :: Double
delt = fromRational$ toRational$ diffUTCTime now t0
if delt >= interval
then do putStrLn " [lvish-tests] Time-out: out of time, killing test thread.."
killThread tid
-- TODO: <- should probably wait for it to show up as dead.
return Nothing
else do threadDelay (10 * 1000) -- Sleep 10ms.
loop
loop
{-# NOINLINE timeOutPure #-}
-- | Evaluate a pure value to weak-head normal form, with timeout.
-- This is NONDETERMINISTIC, so its type is sketchy:
--
-- WARNING: This doesn't seem to work properly yet! I am seeing spurious failures.
-- -RRN [2013.10.24]
--
timeOutPure :: Show a => Double -> a -> Maybe a
timeOutPure tm thnk =
unsafePerformIO (timeOut tm (evaluate thnk))
assertOr :: Assertion -> Assertion -> Assertion
assertOr act1 act2 =
catch act1
(\(e::SomeException) -> act2)
nTimes :: Int -> (Int -> IO a) -> IO ()
nTimes 0 _ = return ()
nTimes n c = c n >> nTimes (n-1) c
{-# INLINE for_ #-}
-- | Inclusive/Inclusive
for_ :: Monad m => (Int, Int) -> (Int -> m ()) -> m ()
for_ (start, end) fn | start > end = forDown_ (end, start) fn
for_ (start, end) fn = loop start
where
loop !i | i > end = return ()
| otherwise = do fn i; loop (i+1)
-- | Inclusive/Inclusive, iterate downward.
forDown_ :: Monad m => (Int, Int) -> (Int -> m ()) -> m ()
forDown_ (start, end) _fn | start > end = error "forDown_: start is greater than end"
forDown_ (start, end) fn = loop end
where
loop !i | i < start = return ()
| otherwise = do fn i; loop (i-1)
-- | Split an inclusive range into N chunks.
-- This may return less than the desired number of pieces if there aren't enough
-- elements in the range.
splitRange :: Int -> (Int,Int) -> [(Int,Int)]
splitRange pieces (start,end)
| len < pieces = [ (i,i) | i <- [start .. end]]
| otherwise = chunks
where
len = end - start + 1
chunks = map largepiece [0..remain-1] ++
map smallpiece [remain..pieces-1]
(portion, remain) = len `quotRem` pieces
largepiece i =
let offset = start + (i * (portion + 1))
in (offset, (offset + portion))
smallpiece i =
let offset = start + (i * portion) + remain
in (offset, (offset + portion - 1))
-- | Print out a SELFTIMED message reporting the time from a given test.
timeit :: IO a -> IO a
timeit ioact = do
start <- getCurrentTime
res <- ioact
end <- getCurrentTime
putStrLn$ "SELFTIMED: " ++ show (diffUTCTime end start)
return res
-- | An alternate version of `defaultMain` which sets the number of test running
-- threads to one by default, unless the user explicitly overrules it with "-j".
defaultMainSeqTests :: [TF.Test] -> IO ()
defaultMainSeqTests tests = do
putStrLn " [*] Default test harness..."
args <- getArgs
x <- interpretArgs args
res <- try (case x of
Left err -> error$ "defaultMainSeqTests: "++err
Right (opts,_) -> do let opts' = ((mempty{ ropt_threads= Just 1
, ropt_test_options = Just (mempty{
topt_timeout=(Just$ Just defaultTestTimeout)})})
`mappend` opts)
putStrLn $ " [*] Using "++ show (ropt_threads opts')++ " worker threads for testing."
defaultMainWithOpts tests opts'
)
case res of
Left (e::ExitCode) -> do
putStrLn$ " [*] test-framework exiting with: "++show e
performGC
putStrLn " [*] GC finished on main thread."
threadDelay (30 * 1000)
putStrLn " [*] Main thread exiting."
exitWith e
-- | In nanoseconds.
defaultTestTimeout :: Int
-- defaultTestTimeout = 3*1000*1000
defaultTestTimeout = 10*1000*1000
-- defaultTestTimeout = 100*1000*1000
| MathiasBartl/Concurrent_Datastructures | tests/TestHelpers.hs | gpl-2.0 | 14,821 | 0 | 29 | 3,874 | 3,392 | 1,733 | 1,659 | 252 | 6 |
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE TypeSynonymInstances #-}
{-# LANGUAGE ViewPatterns #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}
-- |
-- Copyright : (c) 2010-2012 Simon Meier & Benedikt Schmidt
-- License : GPL v3 (see LICENSE)
--
-- Maintainer : Simon Meier <iridcode@gmail.com>
-- Portability : GHC only
--
-- Types and operations for handling sorted first-order logic
module Theory.Model.Formula (
-- * Formulas
Connective(..)
, Quantifier(..)
, Formula(..)
, LNFormula
, LFormula
, quantify
, openFormula
, openFormulaPrefix
-- , unquantify
-- ** More convenient constructors
, lfalse
, ltrue
, (.&&.)
, (.||.)
, (.==>.)
, (.<=>.)
, exists
, forall
, hinted
-- ** General Transformations
, mapAtoms
, foldFormula
-- ** Pretty-Printing
, prettyLNFormula
) where
import Prelude hiding (negate)
import GHC.Generics (Generic)
import Data.Binary
-- import Data.Foldable (Foldable, foldMap)
import Data.Data
-- import Data.Monoid hiding (All)
-- import Data.Traversable
import Control.Basics
import Control.DeepSeq
import Control.Monad.Fresh
import qualified Control.Monad.Trans.PreciseFresh as Precise
import Theory.Model.Atom
import Text.PrettyPrint.Highlight
import Theory.Text.Pretty
import Term.LTerm
import Term.Substitution
------------------------------------------------------------------------------
-- Types
------------------------------------------------------------------------------
-- | Logical connectives.
data Connective = And | Or | Imp | Iff
deriving( Eq, Ord, Show, Enum, Bounded, Data, Typeable, Generic, NFData, Binary )
-- | Quantifiers.
data Quantifier = All | Ex
deriving( Eq, Ord, Show, Enum, Bounded, Data, Typeable, Generic, NFData, Binary )
-- | First-order formulas in locally nameless representation with hints for the
-- names/sorts of quantified variables.
data Formula s c v = Ato (Atom (VTerm c (BVar v)))
| TF !Bool
| Not (Formula s c v)
| Conn !Connective (Formula s c v) (Formula s c v)
| Qua !Quantifier s (Formula s c v)
deriving ( Generic, NFData, Binary )
-- Folding
----------
-- | Fold a formula.
{-# INLINE foldFormula #-}
foldFormula :: (Atom (VTerm c (BVar v)) -> b) -> (Bool -> b)
-> (b -> b) -> (Connective -> b -> b -> b)
-> (Quantifier -> s -> b -> b)
-> Formula s c v
-> b
foldFormula fAto fTF fNot fConn fQua =
go
where
go (Ato a) = fAto a
go (TF b) = fTF b
go (Not p) = fNot (go p)
go (Conn c p q) = fConn c (go p) (go q)
go (Qua qua x p) = fQua qua x (go p)
-- | Fold a formula.
{-# INLINE foldFormulaScope #-}
foldFormulaScope :: (Integer -> Atom (VTerm c (BVar v)) -> b) -> (Bool -> b)
-> (b -> b) -> (Connective -> b -> b -> b)
-> (Quantifier -> s -> b -> b)
-> Formula s c v
-> b
foldFormulaScope fAto fTF fNot fConn fQua =
go 0
where
go !i (Ato a) = fAto i a
go _ (TF b) = fTF b
go !i (Not p) = fNot (go i p)
go !i (Conn c p q) = fConn c (go i p) (go i q)
go !i (Qua qua x p) = fQua qua x (go (succ i) p)
-- Instances
------------
{-
instance Functor (Formula s c) where
fmap f = foldFormula (Ato . fmap (fmap (fmap (fmap f)))) TF Not Conn Qua
-}
instance Foldable (Formula s c) where
foldMap f = foldFormula (foldMap (foldMap (foldMap (foldMap f)))) mempty id
(const mappend) (const $ const id)
traverseFormula :: (Ord v, Ord c, Ord v', Applicative f)
=> (v -> f v') -> Formula s c v -> f (Formula s c v')
traverseFormula f = foldFormula (liftA Ato . traverse (traverseTerm (traverse (traverse f))))
(pure . TF) (liftA Not)
(liftA2 . Conn) ((liftA .) . Qua)
{-
instance Traversable (Formula a s) where
traverse f = foldFormula (liftA Ato . traverseAtom (traverseTerm (traverseLit (traverseBVar f))))
(pure . TF) (liftA Not)
(liftA2 . Conn) ((liftA .) . Qua)
-}
-- Abbreviations
----------------
infixl 3 .&&.
infixl 2 .||.
infixr 1 .==>.
infix 1 .<=>.
-- | Logically true.
ltrue :: Formula a s v
ltrue = TF True
-- | Logically false.
lfalse :: Formula a s v
lfalse = TF False
(.&&.), (.||.), (.==>.), (.<=>.) :: Formula a s v -> Formula a s v -> Formula a s v
(.&&.) = Conn And
(.||.) = Conn Or
(.==>.) = Conn Imp
(.<=>.) = Conn Iff
------------------------------------------------------------------------------
-- Dealing with bound variables
------------------------------------------------------------------------------
-- | @LFormula@ are FOL formulas with sorts abused to denote both a hint for
-- the name of the bound variable, as well as the variable's actual sort.
type LFormula c = Formula (String, LSort) c LVar
type LNFormula = Formula (String, LSort) Name LVar
-- | Change the representation of atoms.
mapAtoms :: (Integer -> Atom (VTerm c (BVar v))
-> Atom (VTerm c1 (BVar v1)))
-> Formula s c v -> Formula s c1 v1
mapAtoms f = foldFormulaScope (\i a -> Ato $ f i a) TF Not Conn Qua
-- | @openFormula f@ returns @Just (v,Q,f')@ if @f = Q v. f'@ modulo
-- alpha renaming and @Nothing otherwise@. @v@ is always chosen to be fresh.
openFormula :: (MonadFresh m, Ord c)
=> LFormula c -> Maybe (Quantifier, m (LVar, LFormula c))
openFormula (Qua qua (n,s) fm) =
Just ( qua
, do x <- freshLVar n s
return $ (x, mapAtoms (\i a -> fmap (mapLits (subst x i)) a) fm)
)
where
subst x i (Var (Bound i')) | i == i' = Var $ Free x
subst _ _ l = l
openFormula _ = Nothing
mapLits :: (Ord a, Ord b) => (a -> b) -> Term a -> Term b
mapLits f t = case viewTerm t of
Lit l -> lit . f $ l
FApp o as -> fApp o (map (mapLits f) as)
-- | @openFormulaPrefix f@ returns @Just (vs,Q,f')@ if @f = Q v_1 .. v_k. f'@
-- modulo alpha renaming and @Nothing otherwise@. @vs@ is always chosen to be
-- fresh.
openFormulaPrefix :: (MonadFresh m, Ord c)
=> LFormula c -> m ([LVar], Quantifier, LFormula c)
openFormulaPrefix f0 = case openFormula f0 of
Nothing -> error $ "openFormulaPrefix: no outermost quantifier"
Just (q, open) -> do
(x, f) <- open
go q [x] f
where
go q xs f = case openFormula f of
Just (q', open') | q' == q -> do (x', f') <- open'
go q (x' : xs) f'
-- no further quantifier of the same kind => return result
_ -> return (reverse xs, q, f)
-- Instances
------------
deriving instance Eq LNFormula
deriving instance Show LNFormula
deriving instance Ord LNFormula
instance HasFrees LNFormula where
foldFrees f = foldMap (foldFrees f)
foldFreesOcc _ _ = const mempty -- we ignore occurences in Formulas for now
mapFrees f = traverseFormula (mapFrees f)
instance Apply LNFormula where
apply subst = mapAtoms (const $ apply subst)
------------------------------------------------------------------------------
-- Formulas modulo E and modulo AC
------------------------------------------------------------------------------
-- | Introduce a bound variable for a free variable.
quantify :: (Ord c, Ord v) => v -> Formula s c v -> Formula s c v
quantify x =
mapAtoms (\i a -> fmap (mapLits (fmap (>>= subst i))) a)
where
subst i v | v == x = Bound i
| otherwise = Free v
-- | Create a universal quantification with a sort hint for the bound variable.
forall :: (Ord c, Ord v) => s -> v -> Formula s c v -> Formula s c v
forall hint x = Qua All hint . quantify x
-- | Create a existential quantification with a sort hint for the bound variable.
exists :: (Ord c, Ord v) => s -> v -> Formula s c v -> Formula s c v
exists hint x = Qua Ex hint . quantify x
-- | Transform @forall@ and @exists@ into functions that operate on logical variables
hinted :: ((String, LSort) -> LVar -> a) -> LVar -> a
hinted f v@(LVar n s _) = f (n,s) v
------------------------------------------------------------------------------
-- Pretty printing
------------------------------------------------------------------------------
-- | Pretty print a formula.
prettyLFormula :: (HighlightDocument d, MonadFresh m, Ord c)
=> (Atom (VTerm c LVar) -> d) -- ^ Function for pretty printing atoms
-> LFormula c -- ^ Formula to pretty print.
-> m d -- ^ Pretty printed formula.
prettyLFormula ppAtom =
pp
where
extractFree (Free v) = v
extractFree (Bound i) = error $ "prettyFormula: illegal bound variable '" ++ show i ++ "'"
pp (Ato a) = return $ ppAtom (fmap (mapLits (fmap extractFree)) a)
pp (TF True) = return $ operator_ "⊤" -- "T"
pp (TF False) = return $ operator_ "⊥" -- "F"
pp (Not p) = do
p' <- pp p
return $ operator_ "¬" <> opParens p' -- text "¬" <> parens (pp a)
-- return $ operator_ "not" <> opParens p' -- text "¬" <> parens (pp a)
pp (Conn op p q) = do
p' <- pp p
q' <- pp q
return $ sep [opParens p' <-> ppOp op, opParens q']
where
ppOp And = opLAnd
ppOp Or = opLOr
ppOp Imp = opImp
ppOp Iff = opIff
pp fm@(Qua _ _ _) =
scopeFreshness $ do
(vs,qua,fm') <- openFormulaPrefix fm
d' <- pp fm'
return $ sep
[ ppQuant qua <> ppVars vs <> operator_ "."
, nest 1 d']
where
ppVars = fsep . map (text . show)
ppQuant All = opForall
ppQuant Ex = opExists
-- | Pretty print a logical formula
prettyLNFormula :: HighlightDocument d => LNFormula -> d
prettyLNFormula fm =
Precise.evalFresh (prettyLFormula prettyNAtom fm) (avoidPrecise fm)
| kmilner/tamarin-prover | lib/theory/src/Theory/Model/Formula.hs | gpl-3.0 | 10,491 | 0 | 18 | 3,136 | 2,994 | 1,574 | 1,420 | 191 | 11 |
-- We have two command line frontends (one with GNU-style arguments, and one
-- that follows the requirements for COP5555). There are a lot of common
-- components. They go here.
module OptionHandler ( versionNumber
, versionString
, Opt(..)
, optDefaults
, optProcess
) where
import Paths_hs_rpal (version)
import Data.Version (showVersion)
import Control.Monad
import Lexer
import Parser
import Standardizer
import Evaluator
import Evaluator.Control
-- This auto-updates from the value in cabal! Isn't that cool?
versionNumber :: String
versionNumber = showVersion version
versionString :: String
versionString = "hs-rpal " ++ versionNumber
-- Define a record for all the arguments
data Opt = Opt { optVersion :: Bool
, optAst :: Bool
, optPartialSt :: Bool
, optFullSt :: Bool
, optLex :: Bool
, optListing :: Bool
, optControl :: Bool
, optQuiet :: Bool
, optFile :: Maybe String
}
-- Define the default settings
optDefaults :: Opt
optDefaults = Opt { optVersion = False
, optAst = False
, optPartialSt = False
, optFullSt = False
, optLex = False
, optListing = False
, optControl = False
, optQuiet = False
, optFile = Nothing
}
-- For each argument, evaluate it and execute subtasks
optProcess :: Opt -> IO ()
optProcess opt = do
source <- case optFile opt of
Just path -> readFile path -- From file
Nothing -> getContents -- From stdin
if optVersion opt then putStrLn versionString else
(when (optListing opt) $ putStr source) >>
(when (optLex opt) $ putStr $ unlines $ fmap show $ getTokens source) >>
(when (optAst opt) $ putStr $ show $ parse source) >>
(when (optPartialSt opt) $
putStr $ show $ standardizePartially $ parse source) >>
(when (optFullSt opt) $
putStr $ show $ standardizeFully $ parse source) >>
(when (optControl opt) $
putStrLn $ show $ generateControl $ standardizeFully
$ parse source) >>
(when (not (optQuiet opt)) $
evaluateSimple $ generateControl $ standardizeFully $ parse source)
| bgw/hs-rpal | src/OptionHandler.hs | gpl-3.0 | 2,482 | 0 | 21 | 884 | 542 | 299 | 243 | 54 | 3 |
import Criterion.Main
import Control.Concurrent
import System.Process
import Control.Monad
import Control.Parallel.Strategies
import qualified Data.Map as M
import Data.Graph
import Data.List
import System.Random
-- TYPES
data Workers a b =
Workers
{toThread :: Chan a,
fromThread :: Chan b}
-- | A triple of a node, its key, and the keys of its dependencies
type Node node = (node, Int, [Int])
-- | The solver input
data Input node soln =
Input {getNode :: Vertex -> Node node,
getVertex :: Int -> Maybe Vertex,
graph :: Graph}
-- | Construct an Input
input :: [Node node] -- ^ A node, a unique identifier for the node
-- and a list of node IDs upon which this node depends.
-> Input node soln -- ^ A new input object
input g =
Input {getNode = getter,
getVertex = vGetter,
graph = inGraph}
where (inGraph, getter, vGetter) = graphFromEdges g
-- | The final output of the solver. A map from keys to solutions
type Output soln = M.Map Int soln
-- SOLVING FUNCTIONS
-- | Get a list of nodes that are ready to be solved
readyNodes :: Input node soln -- ^ The input
-> Output soln -- ^ The current built up solution map
-> [(Node node, [soln])] -- ^ A list of nodes who's dependencies
-- have all been solved, paired with their solutions
readyNodes i o = map fromJust'
$ filter dropNothing
$ map pairSoln
$ filter readyNode nkdList
where
fromJust' (n, Just s) = (n, s)
dropNothing (_, Nothing) = False
dropNothing _ = True
pairSoln n = (n, getSolutions n o)
verts = vertices (graph i)
nkdList = map (getNode i) verts
readyNode (_, k', d') =
M.notMember k' o
&& not (any (`M.notMember` o) d')
-- | Get the solutions required to solve this node
getSolutions :: Node node -- ^ The node in question
-> Output soln -- ^ The current built up solution map
-> Maybe [soln] -- ^ A list of solutions, or Nothing if
-- there is an unsolved dependency
getSolutions (_, _, d) o = do
mSolns <- return $ map (`M.lookup` o) d
sequence mSolns
-- | Adds a list of key/solution pairs to a solution map
addAll :: Output soln -- ^ The solution map
-> [(Int, soln)] -- ^ The new solutions to add
-> Output soln -- ^ A new solution map with solutions added
addAll o [] = o
addAll o ((k, s):xs) = addAll (M.insert k s o) xs
initWorkers :: (Workers a b -> IO ()) -> Int -> IO (Workers a b)
initWorkers a n = do
tx <- newChan
rx <- newChan
let workers = Workers {toThread = tx, fromThread = rx}
let actions = replicate n $ a workers
mapM_ forkIO actions
return workers
--action :: (Int, Int) -> IO (Int, Float)
action :: (Node Int, [Float]) -> IO (Int, Float)
action ((n, k, _), s) = do
let plusOrMinusFive = (0.01 :: Float) * fromIntegral (4 + (n `mod` 3))
_ <- system $ "sleep " ++ show plusOrMinusFive
let useSolns = (foldl' (+) 0 s)
return $ (k, plusOrMinusFive + useSolns)
--workerAction :: Workers (Int, Int) (Int, Float) -> IO ()
workerAction w = forever $ do
arg <- readChan $ toThread w
res <- action arg
writeChan (fromThread w) res
--serSolve :: [(Int, Int)] -> IO [(Int, Float)]
serSolve = mapM action
--parSolve :: Workers (Int, Int) (Int, Float) -> [(Int, Int)] -> IO [(Int, Float)]
parSolve :: Workers (Node node, [soln]) (Int, soln)
-> [(Node node, [soln])]
-> IO [(Int, soln)]
parSolve w i = do
writeList2Chan (toThread w) i
waitForAll (length i) []
where
waitForAll 0 o = sequence o
waitForAll n o = waitForAll (n - 1) (readChan (fromThread w) : o)
solve :: ([(Node node, [soln])] -> IO [(Int, soln)])
-> Input node soln
-> IO (Output soln)
solve f i = solve' M.empty
where
solve' o = do
nodes <- return $ readyNodes i o
if null nodes
then
return o
else
do
o' <- f nodes
solve' $ addAll o o'
main = do
let rand = mkStdGen 1337
let testTen =
runEval $ evalList rdeepseq $ testGraph 10 10 rand nFn_sleep
let testHundred =
runEval $ evalList rdeepseq $ testGraph 100 10 rand nFn_sleep
tc <- getNumCapabilities
workers <- initWorkers workerAction tc
defaultMain [bgroup "10"
[bench "serial" $ nfIO $ (solve serSolve) (input testTen),
bench "parallel"
$ nfIO $ (solve $ parSolve workers) (input testTen)],
bgroup "100"
[bench "serial" $ nfIO $ (solve serSolve) (input testHundred),
bench "parallel"
$ nfIO $ (solve $ parSolve workers) (input testHundred)]]
-- TEST FUNCTIONS
nFn_sleep :: Int -> Int -> [Int] -> Node Int
nFn_sleep n k d = (5 + (n `mod` 3), k, d)
-- | Generates a test graph. This graph will have the form of a grid of w by d
-- nodes. The top row will have no dependencies. For all other rows, each node
-- will depend on all nodes of the row above it.
testGraph :: (RandomGen gen)
=> Int -- ^ The "width" of the test input. I.E., how many
-- nodes should be solvable in parallel. Must be > 0
-> Int -- ^ The depth of the test input. I.E., how many
-- levels of nodes there are. Must be > 0
-> gen -- ^ A random number generator
-> (Int -> Int -> [Int] -> Node node) -- ^ A function that takes a
-- random number, a key, and a list of dependencies, and produces a
-- node
-> [Node node]
testGraph w d g nFn = nodeList
where
lhsList = take (d * w) $ randoms g :: [Int]
midList = [0..(w * d)]
rhsList = join $ replicate w [] : init (map widthList intervalList)
intervalList = (*) <$> [0..(d - 1)] <*> [w]
widthList n = replicate w [n..(n + (w - 1))]
nodeList = zipWith3 nFn lhsList midList rhsList
| christetreault/graph-solver | src/Main.hs | gpl-3.0 | 6,041 | 14 | 15 | 1,836 | 1,785 | 939 | 846 | 122 | 2 |
{-# LANGUAGE OverloadedStrings #-}
module Database.Design.Ampersand.Output.ToPandoc.ChapterFunctionPointAnalysis where
import Database.Design.Ampersand.Output.ToPandoc.SharedAmongChapters
import Database.Design.Ampersand.FSpec.FPA
fatal :: Int -> String -> a
fatal = fatalMsg "Output.ToPandoc.ChapterFunctionPointAnalysis"
-- TODO: add introductory and explanatory text to chapter
-- TODO: what about KGVs?
chpFunctionPointAnalysis :: FSpec -> Blocks
chpFunctionPointAnalysis fSpec
= chptHeader (fsLang fSpec) FunctionPointAnalysis
<> para ( (str.l) (NL "Dit hoofdstuk ..."
,EN "This chapter ...")
)
<> -- Data model section:
table -- Caption:
((str.l) (NL "Datamodel", EN "Data model"))
-- Alignment:
(replicate 4 (AlignLeft, 1/4))
-- Header:
(map (plain.str.l)
[ (NL "Type" , EN "Type")
, (NL "Naam" , EN "Name")
, (NL "Complexiteit", EN "Complexity")
, (NL "FP" , EN "FP")
])
-- Data rows:
( [map (plain.str)
[ l (NL "ILGV", EN "ILGV(???)")
, nm
, showLang (fsLang fSpec) cmplxty
, (show.fpVal) fp
]
| fp@FP{fpType=ILGV, fpName=nm, fpComplexity=cmplxty} <- fst (dataModelFPA fpa)
]++
[ [ mempty
, mempty
, (plain.str.l) (NL "Totaal:", EN "Total:")
, (plain.str.show.snd.dataModelFPA) fpa
] ]
)
<> -- User transaction section:
table -- Caption
( (str.l) (NL "Gebruikerstransacties", EN "User transactions"))
-- Alignment:
(replicate 4 (AlignLeft, 1/4))
-- Header:
(map (plain.str.l)
[ (NL "Type" , EN "Type")
, (NL "Naam" , EN "Name")
, (NL "Complexiteit", EN "Complexity")
, (NL "FP" , EN "FP")
])
-- Data rows::
( [map plain
[ (str.showLang (fsLang fSpec).fpType) fp
, (str.fpName) fp
, (str.showLang (fsLang fSpec).fpComplexity) fp
, (str.show.fpVal) fp
]
|fp@FP{} <- fst (userTransactionFPA fpa)
]++
[ [ mempty
, mempty
, (plain.str.l) (NL "Totaal:", EN "Total:")
, (plain.str.show.snd.userTransactionFPA) fpa
] ]
)
where
-- shorthand for easy localizing
l :: LocalizedStr -> String
l lstr = localize (fsLang fSpec) lstr
fpa = fpAnalyze fSpec
| guoy34/ampersand | src/Database/Design/Ampersand/Output/ToPandoc/ChapterFunctionPointAnalysis.hs | gpl-3.0 | 2,815 | 0 | 17 | 1,141 | 732 | 399 | 333 | 54 | 1 |
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# OPTIONS_GHC -fno-warn-unused-binds #-}
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
-- |
-- Module : Network.Google.BinaryAuthorization.Types.Product
-- Copyright : (c) 2015-2016 Brendan Hay
-- License : Mozilla Public License, v. 2.0.
-- Maintainer : Brendan Hay <brendan.g.hay@gmail.com>
-- Stability : auto-generated
-- Portability : non-portable (GHC extensions)
--
module Network.Google.BinaryAuthorization.Types.Product where
import Network.Google.BinaryAuthorization.Types.Sum
import Network.Google.Prelude
-- | Verifiers (e.g. Kritis implementations) MUST verify signatures with
-- respect to the trust anchors defined in policy (e.g. a Kritis policy).
-- Typically this means that the verifier has been configured with a map
-- from \`public_key_id\` to public key material (and any required
-- parameters, e.g. signing algorithm). In particular, verification
-- implementations MUST NOT treat the signature \`public_key_id\` as
-- anything more than a key lookup hint. The \`public_key_id\` DOES NOT
-- validate or authenticate a public key; it only provides a mechanism for
-- quickly selecting a public key ALREADY CONFIGURED on the verifier
-- through a trusted channel. Verification implementations MUST reject
-- signatures in any of the following circumstances: * The
-- \`public_key_id\` is not recognized by the verifier. * The public key
-- that \`public_key_id\` refers to does not verify the signature with
-- respect to the payload. The \`signature\` contents SHOULD NOT be
-- \"attached\" (where the payload is included with the serialized
-- \`signature\` bytes). Verifiers MUST ignore any \"attached\" payload and
-- only verify signatures with respect to explicitly provided payload (e.g.
-- a \`payload\` field on the proto message that holds this Signature, or
-- the canonical serialization of the proto message that holds this
-- signature).
--
-- /See:/ 'signature' smart constructor.
data Signature =
Signature'
{ _sSignature :: !(Maybe Bytes)
, _sPublicKeyId :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Signature' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'sSignature'
--
-- * 'sPublicKeyId'
signature
:: Signature
signature = Signature' {_sSignature = Nothing, _sPublicKeyId = Nothing}
-- | The content of the signature, an opaque bytestring. The payload that
-- this signature verifies MUST be unambiguously provided with the
-- Signature during verification. A wrapper message might provide the
-- payload explicitly. Alternatively, a message might have a canonical
-- serialization that can always be unambiguously computed to derive the
-- payload.
sSignature :: Lens' Signature (Maybe ByteString)
sSignature
= lens _sSignature (\ s a -> s{_sSignature = a}) .
mapping _Bytes
-- | The identifier for the public key that verifies this signature. * The
-- \`public_key_id\` is required. * The \`public_key_id\` SHOULD be an
-- RFC3986 conformant URI. * When possible, the \`public_key_id\` SHOULD be
-- an immutable reference, such as a cryptographic digest. Examples of
-- valid \`public_key_id\`s: OpenPGP V4 public key fingerprint: *
-- \"openpgp4fpr:74FAF3B861BDA0870C7B6DEF607E48D2A663AEEA\" See
-- https:\/\/www.iana.org\/assignments\/uri-schemes\/prov\/openpgp4fpr for
-- more details on this scheme. RFC6920 digest-named SubjectPublicKeyInfo
-- (digest of the DER serialization): *
-- \"ni:\/\/\/sha-256;cD9o9Cq6LG3jD0iKXqEi_vdjJGecm_iXkbqVoScViaU\" *
-- \"nih:\/\/\/sha-256;703f68f42aba2c6de30f488a5ea122fef76324679c9bf89791ba95a1271589a5\"
sPublicKeyId :: Lens' Signature (Maybe Text)
sPublicKeyId
= lens _sPublicKeyId (\ s a -> s{_sPublicKeyId = a})
instance FromJSON Signature where
parseJSON
= withObject "Signature"
(\ o ->
Signature' <$>
(o .:? "signature") <*> (o .:? "publicKeyId"))
instance ToJSON Signature where
toJSON Signature'{..}
= object
(catMaybes
[("signature" .=) <$> _sSignature,
("publicKeyId" .=) <$> _sPublicKeyId])
-- | A public key in the PkixPublicKey format (see
-- https:\/\/tools.ietf.org\/html\/rfc5280#section-4.1.2.7 for details).
-- Public keys of this type are typically textually encoded using the PEM
-- format.
--
-- /See:/ 'pkixPublicKey' smart constructor.
data PkixPublicKey =
PkixPublicKey'
{ _ppkPublicKeyPem :: !(Maybe Text)
, _ppkSignatureAlgorithm :: !(Maybe PkixPublicKeySignatureAlgorithm)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'PkixPublicKey' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'ppkPublicKeyPem'
--
-- * 'ppkSignatureAlgorithm'
pkixPublicKey
:: PkixPublicKey
pkixPublicKey =
PkixPublicKey' {_ppkPublicKeyPem = Nothing, _ppkSignatureAlgorithm = Nothing}
-- | A PEM-encoded public key, as described in
-- https:\/\/tools.ietf.org\/html\/rfc7468#section-13
ppkPublicKeyPem :: Lens' PkixPublicKey (Maybe Text)
ppkPublicKeyPem
= lens _ppkPublicKeyPem
(\ s a -> s{_ppkPublicKeyPem = a})
-- | The signature algorithm used to verify a message against a signature
-- using this key. These signature algorithm must match the structure and
-- any object identifiers encoded in \`public_key_pem\` (i.e. this
-- algorithm must match that of the public key).
ppkSignatureAlgorithm :: Lens' PkixPublicKey (Maybe PkixPublicKeySignatureAlgorithm)
ppkSignatureAlgorithm
= lens _ppkSignatureAlgorithm
(\ s a -> s{_ppkSignatureAlgorithm = a})
instance FromJSON PkixPublicKey where
parseJSON
= withObject "PkixPublicKey"
(\ o ->
PkixPublicKey' <$>
(o .:? "publicKeyPem") <*>
(o .:? "signatureAlgorithm"))
instance ToJSON PkixPublicKey where
toJSON PkixPublicKey'{..}
= object
(catMaybes
[("publicKeyPem" .=) <$> _ppkPublicKeyPem,
("signatureAlgorithm" .=) <$>
_ppkSignatureAlgorithm])
-- | Represents a textual expression in the Common Expression Language (CEL)
-- syntax. CEL is a C-like expression language. The syntax and semantics of
-- CEL are documented at https:\/\/github.com\/google\/cel-spec. Example
-- (Comparison): title: \"Summary size limit\" description: \"Determines if
-- a summary is less than 100 chars\" expression: \"document.summary.size()
-- \< 100\" Example (Equality): title: \"Requestor is owner\" description:
-- \"Determines if requestor is the document owner\" expression:
-- \"document.owner == request.auth.claims.email\" Example (Logic): title:
-- \"Public documents\" description: \"Determine whether the document
-- should be publicly visible\" expression: \"document.type != \'private\'
-- && document.type != \'internal\'\" Example (Data Manipulation): title:
-- \"Notification string\" description: \"Create a notification string with
-- a timestamp.\" expression: \"\'New message received at \' +
-- string(document.create_time)\" The exact variables and functions that
-- may be referenced within an expression are determined by the service
-- that evaluates it. See the service documentation for additional
-- information.
--
-- /See:/ 'expr' smart constructor.
data Expr =
Expr'
{ _eLocation :: !(Maybe Text)
, _eExpression :: !(Maybe Text)
, _eTitle :: !(Maybe Text)
, _eDescription :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Expr' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'eLocation'
--
-- * 'eExpression'
--
-- * 'eTitle'
--
-- * 'eDescription'
expr
:: Expr
expr =
Expr'
{ _eLocation = Nothing
, _eExpression = Nothing
, _eTitle = Nothing
, _eDescription = Nothing
}
-- | Optional. String indicating the location of the expression for error
-- reporting, e.g. a file name and a position in the file.
eLocation :: Lens' Expr (Maybe Text)
eLocation
= lens _eLocation (\ s a -> s{_eLocation = a})
-- | Textual representation of an expression in Common Expression Language
-- syntax.
eExpression :: Lens' Expr (Maybe Text)
eExpression
= lens _eExpression (\ s a -> s{_eExpression = a})
-- | Optional. Title for the expression, i.e. a short string describing its
-- purpose. This can be used e.g. in UIs which allow to enter the
-- expression.
eTitle :: Lens' Expr (Maybe Text)
eTitle = lens _eTitle (\ s a -> s{_eTitle = a})
-- | Optional. Description of the expression. This is a longer text which
-- describes the expression, e.g. when hovered over it in a UI.
eDescription :: Lens' Expr (Maybe Text)
eDescription
= lens _eDescription (\ s a -> s{_eDescription = a})
instance FromJSON Expr where
parseJSON
= withObject "Expr"
(\ o ->
Expr' <$>
(o .:? "location") <*> (o .:? "expression") <*>
(o .:? "title")
<*> (o .:? "description"))
instance ToJSON Expr where
toJSON Expr'{..}
= object
(catMaybes
[("location" .=) <$> _eLocation,
("expression" .=) <$> _eExpression,
("title" .=) <$> _eTitle,
("description" .=) <$> _eDescription])
-- | A generic empty message that you can re-use to avoid defining duplicated
-- empty messages in your APIs. A typical example is to use it as the
-- request or the response type of an API method. For instance: service Foo
-- { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The
-- JSON representation for \`Empty\` is empty JSON object \`{}\`.
--
-- /See:/ 'empty' smart constructor.
data Empty =
Empty'
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Empty' with the minimum fields required to make a request.
--
empty
:: Empty
empty = Empty'
instance FromJSON Empty where
parseJSON = withObject "Empty" (\ o -> pure Empty')
instance ToJSON Empty where
toJSON = const emptyObject
-- | Optional. Per-kubernetes-namespace admission rules. K8s namespace spec
-- format: [a-z.-]+, e.g. \'some-namespace\'
--
-- /See:/ 'policyKubernetesNamespaceAdmissionRules' smart constructor.
newtype PolicyKubernetesNamespaceAdmissionRules =
PolicyKubernetesNamespaceAdmissionRules'
{ _pknarAddtional :: HashMap Text AdmissionRule
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'PolicyKubernetesNamespaceAdmissionRules' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'pknarAddtional'
policyKubernetesNamespaceAdmissionRules
:: HashMap Text AdmissionRule -- ^ 'pknarAddtional'
-> PolicyKubernetesNamespaceAdmissionRules
policyKubernetesNamespaceAdmissionRules pPknarAddtional_ =
PolicyKubernetesNamespaceAdmissionRules'
{_pknarAddtional = _Coerce # pPknarAddtional_}
pknarAddtional :: Lens' PolicyKubernetesNamespaceAdmissionRules (HashMap Text AdmissionRule)
pknarAddtional
= lens _pknarAddtional
(\ s a -> s{_pknarAddtional = a})
. _Coerce
instance FromJSON
PolicyKubernetesNamespaceAdmissionRules
where
parseJSON
= withObject
"PolicyKubernetesNamespaceAdmissionRules"
(\ o ->
PolicyKubernetesNamespaceAdmissionRules' <$>
(parseJSONObject o))
instance ToJSON
PolicyKubernetesNamespaceAdmissionRules
where
toJSON = toJSON . _pknarAddtional
-- | Request message for \`SetIamPolicy\` method.
--
-- /See:/ 'setIAMPolicyRequest' smart constructor.
newtype SetIAMPolicyRequest =
SetIAMPolicyRequest'
{ _siprPolicy :: Maybe IAMPolicy
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'SetIAMPolicyRequest' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'siprPolicy'
setIAMPolicyRequest
:: SetIAMPolicyRequest
setIAMPolicyRequest = SetIAMPolicyRequest' {_siprPolicy = Nothing}
-- | REQUIRED: The complete policy to be applied to the \`resource\`. The
-- size of the policy is limited to a few 10s of KB. An empty policy is a
-- valid policy but certain Cloud Platform services (such as Projects)
-- might reject them.
siprPolicy :: Lens' SetIAMPolicyRequest (Maybe IAMPolicy)
siprPolicy
= lens _siprPolicy (\ s a -> s{_siprPolicy = a})
instance FromJSON SetIAMPolicyRequest where
parseJSON
= withObject "SetIAMPolicyRequest"
(\ o -> SetIAMPolicyRequest' <$> (o .:? "policy"))
instance ToJSON SetIAMPolicyRequest where
toJSON SetIAMPolicyRequest'{..}
= object (catMaybes [("policy" .=) <$> _siprPolicy])
-- | Request message for ValidationHelperV1.ValidateAttestationOccurrence.
--
-- /See:/ 'validateAttestationOccurrenceRequest' smart constructor.
data ValidateAttestationOccurrenceRequest =
ValidateAttestationOccurrenceRequest'
{ _vaorOccurrenceNote :: !(Maybe Text)
, _vaorAttestation :: !(Maybe AttestationOccurrence)
, _vaorOccurrenceResourceURI :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'ValidateAttestationOccurrenceRequest' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'vaorOccurrenceNote'
--
-- * 'vaorAttestation'
--
-- * 'vaorOccurrenceResourceURI'
validateAttestationOccurrenceRequest
:: ValidateAttestationOccurrenceRequest
validateAttestationOccurrenceRequest =
ValidateAttestationOccurrenceRequest'
{ _vaorOccurrenceNote = Nothing
, _vaorAttestation = Nothing
, _vaorOccurrenceResourceURI = Nothing
}
-- | Required. The resource name of the Note to which the containing
-- Occurrence is associated.
vaorOccurrenceNote :: Lens' ValidateAttestationOccurrenceRequest (Maybe Text)
vaorOccurrenceNote
= lens _vaorOccurrenceNote
(\ s a -> s{_vaorOccurrenceNote = a})
-- | Required. An AttestationOccurrence to be checked that it can be verified
-- by the Attestor. It does not have to be an existing entity in Container
-- Analysis. It must otherwise be a valid AttestationOccurrence.
vaorAttestation :: Lens' ValidateAttestationOccurrenceRequest (Maybe AttestationOccurrence)
vaorAttestation
= lens _vaorAttestation
(\ s a -> s{_vaorAttestation = a})
-- | Required. The URI of the artifact (e.g. container image) that is the
-- subject of the containing Occurrence.
vaorOccurrenceResourceURI :: Lens' ValidateAttestationOccurrenceRequest (Maybe Text)
vaorOccurrenceResourceURI
= lens _vaorOccurrenceResourceURI
(\ s a -> s{_vaorOccurrenceResourceURI = a})
instance FromJSON
ValidateAttestationOccurrenceRequest
where
parseJSON
= withObject "ValidateAttestationOccurrenceRequest"
(\ o ->
ValidateAttestationOccurrenceRequest' <$>
(o .:? "occurrenceNote") <*> (o .:? "attestation")
<*> (o .:? "occurrenceResourceUri"))
instance ToJSON ValidateAttestationOccurrenceRequest
where
toJSON ValidateAttestationOccurrenceRequest'{..}
= object
(catMaybes
[("occurrenceNote" .=) <$> _vaorOccurrenceNote,
("attestation" .=) <$> _vaorAttestation,
("occurrenceResourceUri" .=) <$>
_vaorOccurrenceResourceURI])
--
-- /See:/ 'jwt' smart constructor.
newtype Jwt =
Jwt'
{ _jCompactJwt :: Maybe Text
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Jwt' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'jCompactJwt'
jwt
:: Jwt
jwt = Jwt' {_jCompactJwt = Nothing}
-- | The compact encoding of a JWS, which is always three base64 encoded
-- strings joined by periods. For details, see:
-- https:\/\/tools.ietf.org\/html\/rfc7515.html#section-3.1
jCompactJwt :: Lens' Jwt (Maybe Text)
jCompactJwt
= lens _jCompactJwt (\ s a -> s{_jCompactJwt = a})
instance FromJSON Jwt where
parseJSON
= withObject "Jwt"
(\ o -> Jwt' <$> (o .:? "compactJwt"))
instance ToJSON Jwt where
toJSON Jwt'{..}
= object
(catMaybes [("compactJwt" .=) <$> _jCompactJwt])
-- | Response message for BinauthzManagementService.ListAttestors.
--
-- /See:/ 'listAttestorsResponse' smart constructor.
data ListAttestorsResponse =
ListAttestorsResponse'
{ _larNextPageToken :: !(Maybe Text)
, _larAttestors :: !(Maybe [Attestor])
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'ListAttestorsResponse' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'larNextPageToken'
--
-- * 'larAttestors'
listAttestorsResponse
:: ListAttestorsResponse
listAttestorsResponse =
ListAttestorsResponse' {_larNextPageToken = Nothing, _larAttestors = Nothing}
-- | A token to retrieve the next page of results. Pass this value in the
-- ListAttestorsRequest.page_token field in the subsequent call to the
-- \`ListAttestors\` method to retrieve the next page of results.
larNextPageToken :: Lens' ListAttestorsResponse (Maybe Text)
larNextPageToken
= lens _larNextPageToken
(\ s a -> s{_larNextPageToken = a})
-- | The list of attestors.
larAttestors :: Lens' ListAttestorsResponse [Attestor]
larAttestors
= lens _larAttestors (\ s a -> s{_larAttestors = a})
. _Default
. _Coerce
instance FromJSON ListAttestorsResponse where
parseJSON
= withObject "ListAttestorsResponse"
(\ o ->
ListAttestorsResponse' <$>
(o .:? "nextPageToken") <*>
(o .:? "attestors" .!= mempty))
instance ToJSON ListAttestorsResponse where
toJSON ListAttestorsResponse'{..}
= object
(catMaybes
[("nextPageToken" .=) <$> _larNextPageToken,
("attestors" .=) <$> _larAttestors])
-- | Response message for ValidationHelperV1.ValidateAttestationOccurrence.
--
-- /See:/ 'validateAttestationOccurrenceResponse' smart constructor.
data ValidateAttestationOccurrenceResponse =
ValidateAttestationOccurrenceResponse'
{ _vaorDenialReason :: !(Maybe Text)
, _vaorResult :: !(Maybe ValidateAttestationOccurrenceResponseResult)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'ValidateAttestationOccurrenceResponse' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'vaorDenialReason'
--
-- * 'vaorResult'
validateAttestationOccurrenceResponse
:: ValidateAttestationOccurrenceResponse
validateAttestationOccurrenceResponse =
ValidateAttestationOccurrenceResponse'
{_vaorDenialReason = Nothing, _vaorResult = Nothing}
-- | The reason for denial if the Attestation couldn\'t be validated.
vaorDenialReason :: Lens' ValidateAttestationOccurrenceResponse (Maybe Text)
vaorDenialReason
= lens _vaorDenialReason
(\ s a -> s{_vaorDenialReason = a})
-- | The result of the Attestation validation.
vaorResult :: Lens' ValidateAttestationOccurrenceResponse (Maybe ValidateAttestationOccurrenceResponseResult)
vaorResult
= lens _vaorResult (\ s a -> s{_vaorResult = a})
instance FromJSON
ValidateAttestationOccurrenceResponse
where
parseJSON
= withObject "ValidateAttestationOccurrenceResponse"
(\ o ->
ValidateAttestationOccurrenceResponse' <$>
(o .:? "denialReason") <*> (o .:? "result"))
instance ToJSON ValidateAttestationOccurrenceResponse
where
toJSON ValidateAttestationOccurrenceResponse'{..}
= object
(catMaybes
[("denialReason" .=) <$> _vaorDenialReason,
("result" .=) <$> _vaorResult])
-- | An admission allowlist pattern exempts images from checks by admission
-- rules.
--
-- /See:/ 'admissionWhiteListPattern' smart constructor.
newtype AdmissionWhiteListPattern =
AdmissionWhiteListPattern'
{ _awlpNamePattern :: Maybe Text
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'AdmissionWhiteListPattern' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'awlpNamePattern'
admissionWhiteListPattern
:: AdmissionWhiteListPattern
admissionWhiteListPattern =
AdmissionWhiteListPattern' {_awlpNamePattern = Nothing}
-- | An image name pattern to allowlist, in the form
-- \`registry\/path\/to\/image\`. This supports a trailing \`*\` wildcard,
-- but this is allowed only in text after the \`registry\/\` part. This
-- also supports a trailing \`**\` wildcard which matches subdirectories of
-- a given entry.
awlpNamePattern :: Lens' AdmissionWhiteListPattern (Maybe Text)
awlpNamePattern
= lens _awlpNamePattern
(\ s a -> s{_awlpNamePattern = a})
instance FromJSON AdmissionWhiteListPattern where
parseJSON
= withObject "AdmissionWhiteListPattern"
(\ o ->
AdmissionWhiteListPattern' <$> (o .:? "namePattern"))
instance ToJSON AdmissionWhiteListPattern where
toJSON AdmissionWhiteListPattern'{..}
= object
(catMaybes [("namePattern" .=) <$> _awlpNamePattern])
-- | Optional. Per-istio-service-identity admission rules. Istio service
-- identity spec format: spiffe:\/\/\/ns\/\/sa\/ or \/ns\/\/sa\/ e.g.
-- spiffe:\/\/example.com\/ns\/test-ns\/sa\/default
--
-- /See:/ 'policyIstioServiceIdentityAdmissionRules' smart constructor.
newtype PolicyIstioServiceIdentityAdmissionRules =
PolicyIstioServiceIdentityAdmissionRules'
{ _pisiarAddtional :: HashMap Text AdmissionRule
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'PolicyIstioServiceIdentityAdmissionRules' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'pisiarAddtional'
policyIstioServiceIdentityAdmissionRules
:: HashMap Text AdmissionRule -- ^ 'pisiarAddtional'
-> PolicyIstioServiceIdentityAdmissionRules
policyIstioServiceIdentityAdmissionRules pPisiarAddtional_ =
PolicyIstioServiceIdentityAdmissionRules'
{_pisiarAddtional = _Coerce # pPisiarAddtional_}
pisiarAddtional :: Lens' PolicyIstioServiceIdentityAdmissionRules (HashMap Text AdmissionRule)
pisiarAddtional
= lens _pisiarAddtional
(\ s a -> s{_pisiarAddtional = a})
. _Coerce
instance FromJSON
PolicyIstioServiceIdentityAdmissionRules
where
parseJSON
= withObject
"PolicyIstioServiceIdentityAdmissionRules"
(\ o ->
PolicyIstioServiceIdentityAdmissionRules' <$>
(parseJSONObject o))
instance ToJSON
PolicyIstioServiceIdentityAdmissionRules
where
toJSON = toJSON . _pisiarAddtional
-- | An admission rule specifies either that all container images used in a
-- pod creation request must be attested to by one or more attestors, that
-- all pod creations will be allowed, or that all pod creations will be
-- denied. Images matching an admission allowlist pattern are exempted from
-- admission rules and will never block a pod creation.
--
-- /See:/ 'admissionRule' smart constructor.
data AdmissionRule =
AdmissionRule'
{ _arEnforcementMode :: !(Maybe AdmissionRuleEnforcementMode)
, _arEvaluationMode :: !(Maybe AdmissionRuleEvaluationMode)
, _arRequireAttestationsBy :: !(Maybe [Text])
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'AdmissionRule' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'arEnforcementMode'
--
-- * 'arEvaluationMode'
--
-- * 'arRequireAttestationsBy'
admissionRule
:: AdmissionRule
admissionRule =
AdmissionRule'
{ _arEnforcementMode = Nothing
, _arEvaluationMode = Nothing
, _arRequireAttestationsBy = Nothing
}
-- | Required. The action when a pod creation is denied by the admission
-- rule.
arEnforcementMode :: Lens' AdmissionRule (Maybe AdmissionRuleEnforcementMode)
arEnforcementMode
= lens _arEnforcementMode
(\ s a -> s{_arEnforcementMode = a})
-- | Required. How this admission rule will be evaluated.
arEvaluationMode :: Lens' AdmissionRule (Maybe AdmissionRuleEvaluationMode)
arEvaluationMode
= lens _arEvaluationMode
(\ s a -> s{_arEvaluationMode = a})
-- | Optional. The resource names of the attestors that must attest to a
-- container image, in the format \`projects\/*\/attestors\/*\`. Each
-- attestor must exist before a policy can reference it. To add an attestor
-- to a policy the principal issuing the policy change request must be able
-- to read the attestor resource. Note: this field must be non-empty when
-- the evaluation_mode field specifies REQUIRE_ATTESTATION, otherwise it
-- must be empty.
arRequireAttestationsBy :: Lens' AdmissionRule [Text]
arRequireAttestationsBy
= lens _arRequireAttestationsBy
(\ s a -> s{_arRequireAttestationsBy = a})
. _Default
. _Coerce
instance FromJSON AdmissionRule where
parseJSON
= withObject "AdmissionRule"
(\ o ->
AdmissionRule' <$>
(o .:? "enforcementMode") <*>
(o .:? "evaluationMode")
<*> (o .:? "requireAttestationsBy" .!= mempty))
instance ToJSON AdmissionRule where
toJSON AdmissionRule'{..}
= object
(catMaybes
[("enforcementMode" .=) <$> _arEnforcementMode,
("evaluationMode" .=) <$> _arEvaluationMode,
("requireAttestationsBy" .=) <$>
_arRequireAttestationsBy])
-- | Request message for \`TestIamPermissions\` method.
--
-- /See:/ 'testIAMPermissionsRequest' smart constructor.
newtype TestIAMPermissionsRequest =
TestIAMPermissionsRequest'
{ _tiprPermissions :: Maybe [Text]
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'TestIAMPermissionsRequest' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'tiprPermissions'
testIAMPermissionsRequest
:: TestIAMPermissionsRequest
testIAMPermissionsRequest =
TestIAMPermissionsRequest' {_tiprPermissions = Nothing}
-- | The set of permissions to check for the \`resource\`. Permissions with
-- wildcards (such as \'*\' or \'storage.*\') are not allowed. For more
-- information see [IAM
-- Overview](https:\/\/cloud.google.com\/iam\/docs\/overview#permissions).
tiprPermissions :: Lens' TestIAMPermissionsRequest [Text]
tiprPermissions
= lens _tiprPermissions
(\ s a -> s{_tiprPermissions = a})
. _Default
. _Coerce
instance FromJSON TestIAMPermissionsRequest where
parseJSON
= withObject "TestIAMPermissionsRequest"
(\ o ->
TestIAMPermissionsRequest' <$>
(o .:? "permissions" .!= mempty))
instance ToJSON TestIAMPermissionsRequest where
toJSON TestIAMPermissionsRequest'{..}
= object
(catMaybes [("permissions" .=) <$> _tiprPermissions])
-- | Optional. Per-kubernetes-service-account admission rules. Service
-- account spec format: \`namespace:serviceaccount\`. e.g.
-- \'test-ns:default\'
--
-- /See:/ 'policyKubernetesServiceAccountAdmissionRules' smart constructor.
newtype PolicyKubernetesServiceAccountAdmissionRules =
PolicyKubernetesServiceAccountAdmissionRules'
{ _pksaarAddtional :: HashMap Text AdmissionRule
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'PolicyKubernetesServiceAccountAdmissionRules' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'pksaarAddtional'
policyKubernetesServiceAccountAdmissionRules
:: HashMap Text AdmissionRule -- ^ 'pksaarAddtional'
-> PolicyKubernetesServiceAccountAdmissionRules
policyKubernetesServiceAccountAdmissionRules pPksaarAddtional_ =
PolicyKubernetesServiceAccountAdmissionRules'
{_pksaarAddtional = _Coerce # pPksaarAddtional_}
pksaarAddtional :: Lens' PolicyKubernetesServiceAccountAdmissionRules (HashMap Text AdmissionRule)
pksaarAddtional
= lens _pksaarAddtional
(\ s a -> s{_pksaarAddtional = a})
. _Coerce
instance FromJSON
PolicyKubernetesServiceAccountAdmissionRules
where
parseJSON
= withObject
"PolicyKubernetesServiceAccountAdmissionRules"
(\ o ->
PolicyKubernetesServiceAccountAdmissionRules' <$>
(parseJSONObject o))
instance ToJSON
PolicyKubernetesServiceAccountAdmissionRules
where
toJSON = toJSON . _pksaarAddtional
-- | An Identity and Access Management (IAM) policy, which specifies access
-- controls for Google Cloud resources. A \`Policy\` is a collection of
-- \`bindings\`. A \`binding\` binds one or more \`members\` to a single
-- \`role\`. Members can be user accounts, service accounts, Google groups,
-- and domains (such as G Suite). A \`role\` is a named list of
-- permissions; each \`role\` can be an IAM predefined role or a
-- user-created custom role. For some types of Google Cloud resources, a
-- \`binding\` can also specify a \`condition\`, which is a logical
-- expression that allows access to a resource only if the expression
-- evaluates to \`true\`. A condition can add constraints based on
-- attributes of the request, the resource, or both. To learn which
-- resources support conditions in their IAM policies, see the [IAM
-- documentation](https:\/\/cloud.google.com\/iam\/help\/conditions\/resource-policies).
-- **JSON example:** { \"bindings\": [ { \"role\":
-- \"roles\/resourcemanager.organizationAdmin\", \"members\": [
-- \"user:mike\'example.com\", \"group:admins\'example.com\",
-- \"domain:google.com\",
-- \"serviceAccount:my-project-id\'appspot.gserviceaccount.com\" ] }, {
-- \"role\": \"roles\/resourcemanager.organizationViewer\", \"members\": [
-- \"user:eve\'example.com\" ], \"condition\": { \"title\": \"expirable
-- access\", \"description\": \"Does not grant access after Sep 2020\",
-- \"expression\": \"request.time \<
-- timestamp(\'2020-10-01T00:00:00.000Z\')\", } } ], \"etag\":
-- \"BwWWja0YfJA=\", \"version\": 3 } **YAML example:** bindings: -
-- members: - user:mike\'example.com - group:admins\'example.com -
-- domain:google.com -
-- serviceAccount:my-project-id\'appspot.gserviceaccount.com role:
-- roles\/resourcemanager.organizationAdmin - members: -
-- user:eve\'example.com role: roles\/resourcemanager.organizationViewer
-- condition: title: expirable access description: Does not grant access
-- after Sep 2020 expression: request.time \<
-- timestamp(\'2020-10-01T00:00:00.000Z\') - etag: BwWWja0YfJA= - version:
-- 3 For a description of IAM and its features, see the [IAM
-- documentation](https:\/\/cloud.google.com\/iam\/docs\/).
--
-- /See:/ 'iamPolicy' smart constructor.
data IAMPolicy =
IAMPolicy'
{ _ipEtag :: !(Maybe Bytes)
, _ipVersion :: !(Maybe (Textual Int32))
, _ipBindings :: !(Maybe [Binding])
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'IAMPolicy' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'ipEtag'
--
-- * 'ipVersion'
--
-- * 'ipBindings'
iamPolicy
:: IAMPolicy
iamPolicy =
IAMPolicy' {_ipEtag = Nothing, _ipVersion = Nothing, _ipBindings = Nothing}
-- | \`etag\` is used for optimistic concurrency control as a way to help
-- prevent simultaneous updates of a policy from overwriting each other. It
-- is strongly suggested that systems make use of the \`etag\` in the
-- read-modify-write cycle to perform policy updates in order to avoid race
-- conditions: An \`etag\` is returned in the response to \`getIamPolicy\`,
-- and systems are expected to put that etag in the request to
-- \`setIamPolicy\` to ensure that their change will be applied to the same
-- version of the policy. **Important:** If you use IAM Conditions, you
-- must include the \`etag\` field whenever you call \`setIamPolicy\`. If
-- you omit this field, then IAM allows you to overwrite a version \`3\`
-- policy with a version \`1\` policy, and all of the conditions in the
-- version \`3\` policy are lost.
ipEtag :: Lens' IAMPolicy (Maybe ByteString)
ipEtag
= lens _ipEtag (\ s a -> s{_ipEtag = a}) .
mapping _Bytes
-- | Specifies the format of the policy. Valid values are \`0\`, \`1\`, and
-- \`3\`. Requests that specify an invalid value are rejected. Any
-- operation that affects conditional role bindings must specify version
-- \`3\`. This requirement applies to the following operations: * Getting a
-- policy that includes a conditional role binding * Adding a conditional
-- role binding to a policy * Changing a conditional role binding in a
-- policy * Removing any role binding, with or without a condition, from a
-- policy that includes conditions **Important:** If you use IAM
-- Conditions, you must include the \`etag\` field whenever you call
-- \`setIamPolicy\`. If you omit this field, then IAM allows you to
-- overwrite a version \`3\` policy with a version \`1\` policy, and all of
-- the conditions in the version \`3\` policy are lost. If a policy does
-- not include any conditions, operations on that policy may specify any
-- valid version or leave the field unset. To learn which resources support
-- conditions in their IAM policies, see the [IAM
-- documentation](https:\/\/cloud.google.com\/iam\/help\/conditions\/resource-policies).
ipVersion :: Lens' IAMPolicy (Maybe Int32)
ipVersion
= lens _ipVersion (\ s a -> s{_ipVersion = a}) .
mapping _Coerce
-- | Associates a list of \`members\` to a \`role\`. Optionally, may specify
-- a \`condition\` that determines how and when the \`bindings\` are
-- applied. Each of the \`bindings\` must contain at least one member.
ipBindings :: Lens' IAMPolicy [Binding]
ipBindings
= lens _ipBindings (\ s a -> s{_ipBindings = a}) .
_Default
. _Coerce
instance FromJSON IAMPolicy where
parseJSON
= withObject "IAMPolicy"
(\ o ->
IAMPolicy' <$>
(o .:? "etag") <*> (o .:? "version") <*>
(o .:? "bindings" .!= mempty))
instance ToJSON IAMPolicy where
toJSON IAMPolicy'{..}
= object
(catMaybes
[("etag" .=) <$> _ipEtag,
("version" .=) <$> _ipVersion,
("bindings" .=) <$> _ipBindings])
-- | An attestor public key that will be used to verify attestations signed
-- by this attestor.
--
-- /See:/ 'attestorPublicKey' smart constructor.
data AttestorPublicKey =
AttestorPublicKey'
{ _apkPkixPublicKey :: !(Maybe PkixPublicKey)
, _apkAsciiArmoredPgpPublicKey :: !(Maybe Text)
, _apkId :: !(Maybe Text)
, _apkComment :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'AttestorPublicKey' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'apkPkixPublicKey'
--
-- * 'apkAsciiArmoredPgpPublicKey'
--
-- * 'apkId'
--
-- * 'apkComment'
attestorPublicKey
:: AttestorPublicKey
attestorPublicKey =
AttestorPublicKey'
{ _apkPkixPublicKey = Nothing
, _apkAsciiArmoredPgpPublicKey = Nothing
, _apkId = Nothing
, _apkComment = Nothing
}
-- | A raw PKIX SubjectPublicKeyInfo format public key. NOTE: \`id\` may be
-- explicitly provided by the caller when using this type of public key,
-- but it MUST be a valid RFC3986 URI. If \`id\` is left blank, a default
-- one will be computed based on the digest of the DER encoding of the
-- public key.
apkPkixPublicKey :: Lens' AttestorPublicKey (Maybe PkixPublicKey)
apkPkixPublicKey
= lens _apkPkixPublicKey
(\ s a -> s{_apkPkixPublicKey = a})
-- | ASCII-armored representation of a PGP public key, as the entire output
-- by the command \`gpg --export --armor foo\'example.com\` (either LF or
-- CRLF line endings). When using this field, \`id\` should be left blank.
-- The BinAuthz API handlers will calculate the ID and fill it in
-- automatically. BinAuthz computes this ID as the OpenPGP RFC4880 V4
-- fingerprint, represented as upper-case hex. If \`id\` is provided by the
-- caller, it will be overwritten by the API-calculated ID.
apkAsciiArmoredPgpPublicKey :: Lens' AttestorPublicKey (Maybe Text)
apkAsciiArmoredPgpPublicKey
= lens _apkAsciiArmoredPgpPublicKey
(\ s a -> s{_apkAsciiArmoredPgpPublicKey = a})
-- | The ID of this public key. Signatures verified by BinAuthz must include
-- the ID of the public key that can be used to verify them, and that ID
-- must match the contents of this field exactly. Additional restrictions
-- on this field can be imposed based on which public key type is
-- encapsulated. See the documentation on \`public_key\` cases below for
-- details.
apkId :: Lens' AttestorPublicKey (Maybe Text)
apkId = lens _apkId (\ s a -> s{_apkId = a})
-- | Optional. A descriptive comment. This field may be updated.
apkComment :: Lens' AttestorPublicKey (Maybe Text)
apkComment
= lens _apkComment (\ s a -> s{_apkComment = a})
instance FromJSON AttestorPublicKey where
parseJSON
= withObject "AttestorPublicKey"
(\ o ->
AttestorPublicKey' <$>
(o .:? "pkixPublicKey") <*>
(o .:? "asciiArmoredPgpPublicKey")
<*> (o .:? "id")
<*> (o .:? "comment"))
instance ToJSON AttestorPublicKey where
toJSON AttestorPublicKey'{..}
= object
(catMaybes
[("pkixPublicKey" .=) <$> _apkPkixPublicKey,
("asciiArmoredPgpPublicKey" .=) <$>
_apkAsciiArmoredPgpPublicKey,
("id" .=) <$> _apkId,
("comment" .=) <$> _apkComment])
-- | Response message for \`TestIamPermissions\` method.
--
-- /See:/ 'testIAMPermissionsResponse' smart constructor.
newtype TestIAMPermissionsResponse =
TestIAMPermissionsResponse'
{ _tiamprPermissions :: Maybe [Text]
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'TestIAMPermissionsResponse' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'tiamprPermissions'
testIAMPermissionsResponse
:: TestIAMPermissionsResponse
testIAMPermissionsResponse =
TestIAMPermissionsResponse' {_tiamprPermissions = Nothing}
-- | A subset of \`TestPermissionsRequest.permissions\` that the caller is
-- allowed.
tiamprPermissions :: Lens' TestIAMPermissionsResponse [Text]
tiamprPermissions
= lens _tiamprPermissions
(\ s a -> s{_tiamprPermissions = a})
. _Default
. _Coerce
instance FromJSON TestIAMPermissionsResponse where
parseJSON
= withObject "TestIAMPermissionsResponse"
(\ o ->
TestIAMPermissionsResponse' <$>
(o .:? "permissions" .!= mempty))
instance ToJSON TestIAMPermissionsResponse where
toJSON TestIAMPermissionsResponse'{..}
= object
(catMaybes
[("permissions" .=) <$> _tiamprPermissions])
-- | A policy for container image binary authorization.
--
-- /See:/ 'policy' smart constructor.
data Policy =
Policy'
{ _pDefaultAdmissionRule :: !(Maybe AdmissionRule)
, _pIstioServiceIdentityAdmissionRules :: !(Maybe PolicyIstioServiceIdentityAdmissionRules)
, _pAdmissionWhiteListPatterns :: !(Maybe [AdmissionWhiteListPattern])
, _pKubernetesServiceAccountAdmissionRules :: !(Maybe PolicyKubernetesServiceAccountAdmissionRules)
, _pClusterAdmissionRules :: !(Maybe PolicyClusterAdmissionRules)
, _pUpdateTime :: !(Maybe DateTime')
, _pName :: !(Maybe Text)
, _pKubernetesNamespaceAdmissionRules :: !(Maybe PolicyKubernetesNamespaceAdmissionRules)
, _pGlobalPolicyEvaluationMode :: !(Maybe PolicyGlobalPolicyEvaluationMode)
, _pDescription :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Policy' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'pDefaultAdmissionRule'
--
-- * 'pIstioServiceIdentityAdmissionRules'
--
-- * 'pAdmissionWhiteListPatterns'
--
-- * 'pKubernetesServiceAccountAdmissionRules'
--
-- * 'pClusterAdmissionRules'
--
-- * 'pUpdateTime'
--
-- * 'pName'
--
-- * 'pKubernetesNamespaceAdmissionRules'
--
-- * 'pGlobalPolicyEvaluationMode'
--
-- * 'pDescription'
policy
:: Policy
policy =
Policy'
{ _pDefaultAdmissionRule = Nothing
, _pIstioServiceIdentityAdmissionRules = Nothing
, _pAdmissionWhiteListPatterns = Nothing
, _pKubernetesServiceAccountAdmissionRules = Nothing
, _pClusterAdmissionRules = Nothing
, _pUpdateTime = Nothing
, _pName = Nothing
, _pKubernetesNamespaceAdmissionRules = Nothing
, _pGlobalPolicyEvaluationMode = Nothing
, _pDescription = Nothing
}
-- | Required. Default admission rule for a cluster without a per-cluster,
-- per- kubernetes-service-account, or per-istio-service-identity admission
-- rule.
pDefaultAdmissionRule :: Lens' Policy (Maybe AdmissionRule)
pDefaultAdmissionRule
= lens _pDefaultAdmissionRule
(\ s a -> s{_pDefaultAdmissionRule = a})
-- | Optional. Per-istio-service-identity admission rules. Istio service
-- identity spec format: spiffe:\/\/\/ns\/\/sa\/ or \/ns\/\/sa\/ e.g.
-- spiffe:\/\/example.com\/ns\/test-ns\/sa\/default
pIstioServiceIdentityAdmissionRules :: Lens' Policy (Maybe PolicyIstioServiceIdentityAdmissionRules)
pIstioServiceIdentityAdmissionRules
= lens _pIstioServiceIdentityAdmissionRules
(\ s a ->
s{_pIstioServiceIdentityAdmissionRules = a})
-- | Optional. Admission policy allowlisting. A matching admission request
-- will always be permitted. This feature is typically used to exclude
-- Google or third-party infrastructure images from Binary Authorization
-- policies.
pAdmissionWhiteListPatterns :: Lens' Policy [AdmissionWhiteListPattern]
pAdmissionWhiteListPatterns
= lens _pAdmissionWhiteListPatterns
(\ s a -> s{_pAdmissionWhiteListPatterns = a})
. _Default
. _Coerce
-- | Optional. Per-kubernetes-service-account admission rules. Service
-- account spec format: \`namespace:serviceaccount\`. e.g.
-- \'test-ns:default\'
pKubernetesServiceAccountAdmissionRules :: Lens' Policy (Maybe PolicyKubernetesServiceAccountAdmissionRules)
pKubernetesServiceAccountAdmissionRules
= lens _pKubernetesServiceAccountAdmissionRules
(\ s a ->
s{_pKubernetesServiceAccountAdmissionRules = a})
-- | Optional. Per-cluster admission rules. Cluster spec format:
-- \`location.clusterId\`. There can be at most one admission rule per
-- cluster spec. A \`location\` is either a compute zone (e.g.
-- us-central1-a) or a region (e.g. us-central1). For \`clusterId\` syntax
-- restrictions see
-- https:\/\/cloud.google.com\/container-engine\/reference\/rest\/v1\/projects.zones.clusters.
pClusterAdmissionRules :: Lens' Policy (Maybe PolicyClusterAdmissionRules)
pClusterAdmissionRules
= lens _pClusterAdmissionRules
(\ s a -> s{_pClusterAdmissionRules = a})
-- | Output only. Time when the policy was last updated.
pUpdateTime :: Lens' Policy (Maybe UTCTime)
pUpdateTime
= lens _pUpdateTime (\ s a -> s{_pUpdateTime = a}) .
mapping _DateTime
-- | Output only. The resource name, in the format \`projects\/*\/policy\`.
-- There is at most one policy per project.
pName :: Lens' Policy (Maybe Text)
pName = lens _pName (\ s a -> s{_pName = a})
-- | Optional. Per-kubernetes-namespace admission rules. K8s namespace spec
-- format: [a-z.-]+, e.g. \'some-namespace\'
pKubernetesNamespaceAdmissionRules :: Lens' Policy (Maybe PolicyKubernetesNamespaceAdmissionRules)
pKubernetesNamespaceAdmissionRules
= lens _pKubernetesNamespaceAdmissionRules
(\ s a -> s{_pKubernetesNamespaceAdmissionRules = a})
-- | Optional. Controls the evaluation of a Google-maintained global
-- admission policy for common system-level images. Images not covered by
-- the global policy will be subject to the project admission policy. This
-- setting has no effect when specified inside a global admission policy.
pGlobalPolicyEvaluationMode :: Lens' Policy (Maybe PolicyGlobalPolicyEvaluationMode)
pGlobalPolicyEvaluationMode
= lens _pGlobalPolicyEvaluationMode
(\ s a -> s{_pGlobalPolicyEvaluationMode = a})
-- | Optional. A descriptive comment.
pDescription :: Lens' Policy (Maybe Text)
pDescription
= lens _pDescription (\ s a -> s{_pDescription = a})
instance FromJSON Policy where
parseJSON
= withObject "Policy"
(\ o ->
Policy' <$>
(o .:? "defaultAdmissionRule") <*>
(o .:? "istioServiceIdentityAdmissionRules")
<*> (o .:? "admissionWhitelistPatterns" .!= mempty)
<*> (o .:? "kubernetesServiceAccountAdmissionRules")
<*> (o .:? "clusterAdmissionRules")
<*> (o .:? "updateTime")
<*> (o .:? "name")
<*> (o .:? "kubernetesNamespaceAdmissionRules")
<*> (o .:? "globalPolicyEvaluationMode")
<*> (o .:? "description"))
instance ToJSON Policy where
toJSON Policy'{..}
= object
(catMaybes
[("defaultAdmissionRule" .=) <$>
_pDefaultAdmissionRule,
("istioServiceIdentityAdmissionRules" .=) <$>
_pIstioServiceIdentityAdmissionRules,
("admissionWhitelistPatterns" .=) <$>
_pAdmissionWhiteListPatterns,
("kubernetesServiceAccountAdmissionRules" .=) <$>
_pKubernetesServiceAccountAdmissionRules,
("clusterAdmissionRules" .=) <$>
_pClusterAdmissionRules,
("updateTime" .=) <$> _pUpdateTime,
("name" .=) <$> _pName,
("kubernetesNamespaceAdmissionRules" .=) <$>
_pKubernetesNamespaceAdmissionRules,
("globalPolicyEvaluationMode" .=) <$>
_pGlobalPolicyEvaluationMode,
("description" .=) <$> _pDescription])
-- | An user owned Grafeas note references a Grafeas Attestation.Authority
-- Note created by the user.
--
-- /See:/ 'userOwnedGrafeasNote' smart constructor.
data UserOwnedGrafeasNote =
UserOwnedGrafeasNote'
{ _uognDelegationServiceAccountEmail :: !(Maybe Text)
, _uognPublicKeys :: !(Maybe [AttestorPublicKey])
, _uognNoteReference :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'UserOwnedGrafeasNote' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'uognDelegationServiceAccountEmail'
--
-- * 'uognPublicKeys'
--
-- * 'uognNoteReference'
userOwnedGrafeasNote
:: UserOwnedGrafeasNote
userOwnedGrafeasNote =
UserOwnedGrafeasNote'
{ _uognDelegationServiceAccountEmail = Nothing
, _uognPublicKeys = Nothing
, _uognNoteReference = Nothing
}
-- | Output only. This field will contain the service account email address
-- that this Attestor will use as the principal when querying Container
-- Analysis. Attestor administrators must grant this service account the
-- IAM role needed to read attestations from the note_reference in
-- Container Analysis (\`containeranalysis.notes.occurrences.viewer\`).
-- This email address is fixed for the lifetime of the Attestor, but
-- callers should not make any other assumptions about the service account
-- email; future versions may use an email based on a different naming
-- pattern.
uognDelegationServiceAccountEmail :: Lens' UserOwnedGrafeasNote (Maybe Text)
uognDelegationServiceAccountEmail
= lens _uognDelegationServiceAccountEmail
(\ s a -> s{_uognDelegationServiceAccountEmail = a})
-- | Optional. Public keys that verify attestations signed by this attestor.
-- This field may be updated. If this field is non-empty, one of the
-- specified public keys must verify that an attestation was signed by this
-- attestor for the image specified in the admission request. If this field
-- is empty, this attestor always returns that no valid attestations exist.
uognPublicKeys :: Lens' UserOwnedGrafeasNote [AttestorPublicKey]
uognPublicKeys
= lens _uognPublicKeys
(\ s a -> s{_uognPublicKeys = a})
. _Default
. _Coerce
-- | Required. The Grafeas resource name of a Attestation.Authority Note,
-- created by the user, in the format: \`projects\/*\/notes\/*\`. This
-- field may not be updated. An attestation by this attestor is stored as a
-- Grafeas Attestation.Authority Occurrence that names a container image
-- and that links to this Note. Grafeas is an external dependency.
uognNoteReference :: Lens' UserOwnedGrafeasNote (Maybe Text)
uognNoteReference
= lens _uognNoteReference
(\ s a -> s{_uognNoteReference = a})
instance FromJSON UserOwnedGrafeasNote where
parseJSON
= withObject "UserOwnedGrafeasNote"
(\ o ->
UserOwnedGrafeasNote' <$>
(o .:? "delegationServiceAccountEmail") <*>
(o .:? "publicKeys" .!= mempty)
<*> (o .:? "noteReference"))
instance ToJSON UserOwnedGrafeasNote where
toJSON UserOwnedGrafeasNote'{..}
= object
(catMaybes
[("delegationServiceAccountEmail" .=) <$>
_uognDelegationServiceAccountEmail,
("publicKeys" .=) <$> _uognPublicKeys,
("noteReference" .=) <$> _uognNoteReference])
-- | Optional. Per-cluster admission rules. Cluster spec format:
-- \`location.clusterId\`. There can be at most one admission rule per
-- cluster spec. A \`location\` is either a compute zone (e.g.
-- us-central1-a) or a region (e.g. us-central1). For \`clusterId\` syntax
-- restrictions see
-- https:\/\/cloud.google.com\/container-engine\/reference\/rest\/v1\/projects.zones.clusters.
--
-- /See:/ 'policyClusterAdmissionRules' smart constructor.
newtype PolicyClusterAdmissionRules =
PolicyClusterAdmissionRules'
{ _pcarAddtional :: HashMap Text AdmissionRule
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'PolicyClusterAdmissionRules' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'pcarAddtional'
policyClusterAdmissionRules
:: HashMap Text AdmissionRule -- ^ 'pcarAddtional'
-> PolicyClusterAdmissionRules
policyClusterAdmissionRules pPcarAddtional_ =
PolicyClusterAdmissionRules' {_pcarAddtional = _Coerce # pPcarAddtional_}
pcarAddtional :: Lens' PolicyClusterAdmissionRules (HashMap Text AdmissionRule)
pcarAddtional
= lens _pcarAddtional
(\ s a -> s{_pcarAddtional = a})
. _Coerce
instance FromJSON PolicyClusterAdmissionRules where
parseJSON
= withObject "PolicyClusterAdmissionRules"
(\ o ->
PolicyClusterAdmissionRules' <$> (parseJSONObject o))
instance ToJSON PolicyClusterAdmissionRules where
toJSON = toJSON . _pcarAddtional
-- | An attestor that attests to container image artifacts. An existing
-- attestor cannot be modified except where indicated.
--
-- /See:/ 'attestor' smart constructor.
data Attestor =
Attestor'
{ _aUpdateTime :: !(Maybe DateTime')
, _aName :: !(Maybe Text)
, _aUserOwnedGrafeasNote :: !(Maybe UserOwnedGrafeasNote)
, _aDescription :: !(Maybe Text)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Attestor' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'aUpdateTime'
--
-- * 'aName'
--
-- * 'aUserOwnedGrafeasNote'
--
-- * 'aDescription'
attestor
:: Attestor
attestor =
Attestor'
{ _aUpdateTime = Nothing
, _aName = Nothing
, _aUserOwnedGrafeasNote = Nothing
, _aDescription = Nothing
}
-- | Output only. Time when the attestor was last updated.
aUpdateTime :: Lens' Attestor (Maybe UTCTime)
aUpdateTime
= lens _aUpdateTime (\ s a -> s{_aUpdateTime = a}) .
mapping _DateTime
-- | Required. The resource name, in the format:
-- \`projects\/*\/attestors\/*\`. This field may not be updated.
aName :: Lens' Attestor (Maybe Text)
aName = lens _aName (\ s a -> s{_aName = a})
-- | This specifies how an attestation will be read, and how it will be used
-- during policy enforcement.
aUserOwnedGrafeasNote :: Lens' Attestor (Maybe UserOwnedGrafeasNote)
aUserOwnedGrafeasNote
= lens _aUserOwnedGrafeasNote
(\ s a -> s{_aUserOwnedGrafeasNote = a})
-- | Optional. A descriptive comment. This field may be updated. The field
-- may be displayed in chooser dialogs.
aDescription :: Lens' Attestor (Maybe Text)
aDescription
= lens _aDescription (\ s a -> s{_aDescription = a})
instance FromJSON Attestor where
parseJSON
= withObject "Attestor"
(\ o ->
Attestor' <$>
(o .:? "updateTime") <*> (o .:? "name") <*>
(o .:? "userOwnedGrafeasNote")
<*> (o .:? "description"))
instance ToJSON Attestor where
toJSON Attestor'{..}
= object
(catMaybes
[("updateTime" .=) <$> _aUpdateTime,
("name" .=) <$> _aName,
("userOwnedGrafeasNote" .=) <$>
_aUserOwnedGrafeasNote,
("description" .=) <$> _aDescription])
-- | Occurrence that represents a single \"attestation\". The authenticity of
-- an attestation can be verified using the attached signature. If the
-- verifier trusts the public key of the signer, then verifying the
-- signature is sufficient to establish trust. In this circumstance, the
-- authority to which this attestation is attached is primarily useful for
-- lookup (how to find this attestation if you already know the authority
-- and artifact to be verified) and intent (for which authority this
-- attestation was intended to sign.
--
-- /See:/ 'attestationOccurrence' smart constructor.
data AttestationOccurrence =
AttestationOccurrence'
{ _aoSerializedPayload :: !(Maybe Bytes)
, _aoJwts :: !(Maybe [Jwt])
, _aoSignatures :: !(Maybe [Signature])
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'AttestationOccurrence' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'aoSerializedPayload'
--
-- * 'aoJwts'
--
-- * 'aoSignatures'
attestationOccurrence
:: AttestationOccurrence
attestationOccurrence =
AttestationOccurrence'
{_aoSerializedPayload = Nothing, _aoJwts = Nothing, _aoSignatures = Nothing}
-- | Required. The serialized payload that is verified by one or more
-- \`signatures\`.
aoSerializedPayload :: Lens' AttestationOccurrence (Maybe ByteString)
aoSerializedPayload
= lens _aoSerializedPayload
(\ s a -> s{_aoSerializedPayload = a})
. mapping _Bytes
-- | One or more JWTs encoding a self-contained attestation. Each JWT encodes
-- the payload that it verifies within the JWT itself. Verifier
-- implementation SHOULD ignore the \`serialized_payload\` field when
-- verifying these JWTs. If only JWTs are present on this
-- AttestationOccurrence, then the \`serialized_payload\` SHOULD be left
-- empty. Each JWT SHOULD encode a claim specific to the \`resource_uri\`
-- of this Occurrence, but this is not validated by Grafeas metadata API
-- implementations. The JWT itself is opaque to Grafeas.
aoJwts :: Lens' AttestationOccurrence [Jwt]
aoJwts
= lens _aoJwts (\ s a -> s{_aoJwts = a}) . _Default .
_Coerce
-- | One or more signatures over \`serialized_payload\`. Verifier
-- implementations should consider this attestation message verified if at
-- least one \`signature\` verifies \`serialized_payload\`. See
-- \`Signature\` in common.proto for more details on signature structure
-- and verification.
aoSignatures :: Lens' AttestationOccurrence [Signature]
aoSignatures
= lens _aoSignatures (\ s a -> s{_aoSignatures = a})
. _Default
. _Coerce
instance FromJSON AttestationOccurrence where
parseJSON
= withObject "AttestationOccurrence"
(\ o ->
AttestationOccurrence' <$>
(o .:? "serializedPayload") <*>
(o .:? "jwts" .!= mempty)
<*> (o .:? "signatures" .!= mempty))
instance ToJSON AttestationOccurrence where
toJSON AttestationOccurrence'{..}
= object
(catMaybes
[("serializedPayload" .=) <$> _aoSerializedPayload,
("jwts" .=) <$> _aoJwts,
("signatures" .=) <$> _aoSignatures])
-- | Associates \`members\` with a \`role\`.
--
-- /See:/ 'binding' smart constructor.
data Binding =
Binding'
{ _bMembers :: !(Maybe [Text])
, _bRole :: !(Maybe Text)
, _bCondition :: !(Maybe Expr)
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'Binding' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'bMembers'
--
-- * 'bRole'
--
-- * 'bCondition'
binding
:: Binding
binding =
Binding' {_bMembers = Nothing, _bRole = Nothing, _bCondition = Nothing}
-- | Specifies the identities requesting access for a Cloud Platform
-- resource. \`members\` can have the following values: * \`allUsers\`: A
-- special identifier that represents anyone who is on the internet; with
-- or without a Google account. * \`allAuthenticatedUsers\`: A special
-- identifier that represents anyone who is authenticated with a Google
-- account or a service account. * \`user:{emailid}\`: An email address
-- that represents a specific Google account. For example,
-- \`alice\'example.com\` . * \`serviceAccount:{emailid}\`: An email
-- address that represents a service account. For example,
-- \`my-other-app\'appspot.gserviceaccount.com\`. * \`group:{emailid}\`: An
-- email address that represents a Google group. For example,
-- \`admins\'example.com\`. * \`deleted:user:{emailid}?uid={uniqueid}\`: An
-- email address (plus unique identifier) representing a user that has been
-- recently deleted. For example,
-- \`alice\'example.com?uid=123456789012345678901\`. If the user is
-- recovered, this value reverts to \`user:{emailid}\` and the recovered
-- user retains the role in the binding. *
-- \`deleted:serviceAccount:{emailid}?uid={uniqueid}\`: An email address
-- (plus unique identifier) representing a service account that has been
-- recently deleted. For example,
-- \`my-other-app\'appspot.gserviceaccount.com?uid=123456789012345678901\`.
-- If the service account is undeleted, this value reverts to
-- \`serviceAccount:{emailid}\` and the undeleted service account retains
-- the role in the binding. * \`deleted:group:{emailid}?uid={uniqueid}\`:
-- An email address (plus unique identifier) representing a Google group
-- that has been recently deleted. For example,
-- \`admins\'example.com?uid=123456789012345678901\`. If the group is
-- recovered, this value reverts to \`group:{emailid}\` and the recovered
-- group retains the role in the binding. * \`domain:{domain}\`: The G
-- Suite domain (primary) that represents all the users of that domain. For
-- example, \`google.com\` or \`example.com\`.
bMembers :: Lens' Binding [Text]
bMembers
= lens _bMembers (\ s a -> s{_bMembers = a}) .
_Default
. _Coerce
-- | Role that is assigned to \`members\`. For example, \`roles\/viewer\`,
-- \`roles\/editor\`, or \`roles\/owner\`.
bRole :: Lens' Binding (Maybe Text)
bRole = lens _bRole (\ s a -> s{_bRole = a})
-- | The condition that is associated with this binding. If the condition
-- evaluates to \`true\`, then this binding applies to the current request.
-- If the condition evaluates to \`false\`, then this binding does not
-- apply to the current request. However, a different role binding might
-- grant the same role to one or more of the members in this binding. To
-- learn which resources support conditions in their IAM policies, see the
-- [IAM
-- documentation](https:\/\/cloud.google.com\/iam\/help\/conditions\/resource-policies).
bCondition :: Lens' Binding (Maybe Expr)
bCondition
= lens _bCondition (\ s a -> s{_bCondition = a})
instance FromJSON Binding where
parseJSON
= withObject "Binding"
(\ o ->
Binding' <$>
(o .:? "members" .!= mempty) <*> (o .:? "role") <*>
(o .:? "condition"))
instance ToJSON Binding where
toJSON Binding'{..}
= object
(catMaybes
[("members" .=) <$> _bMembers,
("role" .=) <$> _bRole,
("condition" .=) <$> _bCondition])
| brendanhay/gogol | gogol-binaryauthorization/gen/Network/Google/BinaryAuthorization/Types/Product.hs | mpl-2.0 | 62,418 | 0 | 20 | 12,916 | 7,991 | 4,687 | 3,304 | 914 | 1 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# OPTIONS_GHC -fno-warn-duplicate-exports #-}
{-# OPTIONS_GHC -fno-warn-unused-binds #-}
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
-- |
-- Module : Network.Google.Resource.PlusDomains.Circles.Remove
-- Copyright : (c) 2015-2016 Brendan Hay
-- License : Mozilla Public License, v. 2.0.
-- Maintainer : Brendan Hay <brendan.g.hay@gmail.com>
-- Stability : auto-generated
-- Portability : non-portable (GHC extensions)
--
-- Delete a circle.
--
-- /See:/ <https://developers.google.com/+/domains/ Google+ Domains API Reference> for @plusDomains.circles.remove@.
module Network.Google.Resource.PlusDomains.Circles.Remove
(
-- * REST Resource
CirclesRemoveResource
-- * Creating a Request
, circlesRemove
, CirclesRemove
-- * Request Lenses
, crCircleId
) where
import Network.Google.PlusDomains.Types
import Network.Google.Prelude
-- | A resource alias for @plusDomains.circles.remove@ method which the
-- 'CirclesRemove' request conforms to.
type CirclesRemoveResource =
"plusDomains" :>
"v1" :>
"circles" :>
Capture "circleId" Text :>
QueryParam "alt" AltJSON :> Delete '[JSON] ()
-- | Delete a circle.
--
-- /See:/ 'circlesRemove' smart constructor.
newtype CirclesRemove = CirclesRemove'
{ _crCircleId :: Text
} deriving (Eq,Show,Data,Typeable,Generic)
-- | Creates a value of 'CirclesRemove' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'crCircleId'
circlesRemove
:: Text -- ^ 'crCircleId'
-> CirclesRemove
circlesRemove pCrCircleId_ =
CirclesRemove'
{ _crCircleId = pCrCircleId_
}
-- | The ID of the circle to delete.
crCircleId :: Lens' CirclesRemove Text
crCircleId
= lens _crCircleId (\ s a -> s{_crCircleId = a})
instance GoogleRequest CirclesRemove where
type Rs CirclesRemove = ()
type Scopes CirclesRemove =
'["https://www.googleapis.com/auth/plus.circles.write",
"https://www.googleapis.com/auth/plus.login"]
requestClient CirclesRemove'{..}
= go _crCircleId (Just AltJSON) plusDomainsService
where go
= buildClient (Proxy :: Proxy CirclesRemoveResource)
mempty
| rueshyna/gogol | gogol-plus-domains/gen/Network/Google/Resource/PlusDomains/Circles/Remove.hs | mpl-2.0 | 2,666 | 0 | 12 | 602 | 306 | 188 | 118 | 48 | 1 |
module Currying where
import Data.Char
diffFunc :: String -> Int -> Char -> String
diffFunc string int char = string ++ " " ++ [intToDigit int] ++ " " ++ [char]
| thewoolleyman/haskellbook | 05/09/chad/Currying.hs | unlicense | 164 | 0 | 9 | 34 | 63 | 34 | 29 | 4 | 1 |
-----------------------------------------------------------------------------
-- Copyright 2019, Ideas project team. This file is distributed under the
-- terms of the Apache License 2.0. For more information, see the files
-- "LICENSE.txt" and "NOTICE.txt", which are included in the distribution.
-----------------------------------------------------------------------------
-- |
-- Maintainer : bastiaan.heeren@ou.nl
-- Stability : provisional
-- Portability : portable (depends on ghc)
--
-- Formal mathematical properties (FMP)
--
-----------------------------------------------------------------------------
module Ideas.Text.OpenMath.FMP where
import Data.List (union)
import Ideas.Text.OpenMath.Dictionary.Quant1 (forallSymbol, existsSymbol)
import Ideas.Text.OpenMath.Dictionary.Relation1 (eqSymbol, neqSymbol)
import Ideas.Text.OpenMath.Object
import Ideas.Text.OpenMath.Symbol
data FMP = FMP
{ quantor :: Symbol
, metaVariables :: [String]
, leftHandSide :: OMOBJ
, relation :: Symbol
, rightHandSide :: OMOBJ
}
toObject :: FMP -> OMOBJ
toObject fmp
| null (metaVariables fmp) = body
| otherwise =
OMBIND (OMS (quantor fmp)) (metaVariables fmp) body
where
body = OMA [OMS (relation fmp), leftHandSide fmp, rightHandSide fmp]
eqFMP :: OMOBJ -> OMOBJ -> FMP
eqFMP lhs rhs = FMP
{ quantor = forallSymbol
, metaVariables = getOMVs lhs `union` getOMVs rhs
, leftHandSide = lhs
, relation = eqSymbol
, rightHandSide = rhs
}
-- | Represents a common misconception. In certain (most) situations,
-- the two objects are not the same.
buggyFMP :: OMOBJ -> OMOBJ -> FMP
buggyFMP lhs rhs = (eqFMP lhs rhs)
{ quantor = existsSymbol
, relation = neqSymbol
} | ideas-edu/ideas | src/Ideas/Text/OpenMath/FMP.hs | apache-2.0 | 1,807 | 0 | 11 | 369 | 333 | 198 | 135 | 29 | 1 |
{-
(c) The University of Glasgow 2006
(c) The GRASP/AQUA Project, Glasgow University, 1993-1998
A ``lint'' pass to check for Core correctness
-}
{-# LANGUAGE CPP #-}
{-# OPTIONS_GHC -fprof-auto #-}
module CoreLint (
lintCoreBindings, lintUnfolding,
lintPassResult, lintInteractiveExpr, lintExpr,
lintAnnots,
-- ** Debug output
CoreLint.showPass, showPassIO, endPass, endPassIO,
dumpPassResult,
CoreLint.dumpIfSet,
) where
#include "HsVersions.h"
import CoreSyn
import CoreFVs
import CoreUtils
import CoreMonad
import Bag
import Literal
import DataCon
import TysWiredIn
import TysPrim
import Var
import VarEnv
import VarSet
import Name
import Id
import PprCore
import ErrUtils
import Coercion
import SrcLoc
import Kind
import Type
import TypeRep
import TyCon
import CoAxiom
import BasicTypes
import ErrUtils as Err
import StaticFlags
import ListSetOps
import PrelNames
import Outputable
import FastString
import Util
import InstEnv ( instanceDFunId )
import OptCoercion ( checkAxInstCo )
import UniqSupply
import HscTypes
import DynFlags
import Control.Monad
import MonadUtils
import Data.Maybe
import Pair
{-
Note [GHC Formalism]
~~~~~~~~~~~~~~~~~~~~
This file implements the type-checking algorithm for System FC, the "official"
name of the Core language. Type safety of FC is heart of the claim that
executables produced by GHC do not have segmentation faults. Thus, it is
useful to be able to reason about System FC independently of reading the code.
To this purpose, there is a document ghc.pdf built in docs/core-spec that
contains a formalism of the types and functions dealt with here. If you change
just about anything in this file or you change other types/functions throughout
the Core language (all signposted to this note), you should update that
formalism. See docs/core-spec/README for more info about how to do so.
Summary of checks
~~~~~~~~~~~~~~~~~
Checks that a set of core bindings is well-formed. The PprStyle and String
just control what we print in the event of an error. The Bool value
indicates whether we have done any specialisation yet (in which case we do
some extra checks).
We check for
(a) type errors
(b) Out-of-scope type variables
(c) Out-of-scope local variables
(d) Ill-kinded types
(e) Incorrect unsafe coercions
If we have done specialisation the we check that there are
(a) No top-level bindings of primitive (unboxed type)
Outstanding issues:
-- Things are *not* OK if:
--
-- * Unsaturated type app before specialisation has been done;
--
-- * Oversaturated type app after specialisation (eta reduction
-- may well be happening...);
Note [Linting type lets]
~~~~~~~~~~~~~~~~~~~~~~~~
In the desugarer, it's very very convenient to be able to say (in effect)
let a = Type Int in <body>
That is, use a type let. See Note [Type let] in CoreSyn.
However, when linting <body> we need to remember that a=Int, else we might
reject a correct program. So we carry a type substitution (in this example
[a -> Int]) and apply this substitution before comparing types. The functin
lintInTy :: Type -> LintM Type
returns a substituted type; that's the only reason it returns anything.
When we encounter a binder (like x::a) we must apply the substitution
to the type of the binding variable. lintBinders does this.
For Ids, the type-substituted Id is added to the in_scope set (which
itself is part of the TvSubst we are carrying down), and when we
find an occurrence of an Id, we fetch it from the in-scope set.
Note [Bad unsafe coercion]
~~~~~~~~~~~~~~~~~~~~~~~~~~
For discussion see https://ghc.haskell.org/trac/ghc/wiki/BadUnsafeCoercions
Linter introduces additional rules that checks improper coercion between
different types, called bad coercions. Following coercions are forbidden:
(a) coercions between boxed and unboxed values;
(b) coercions between unlifted values of the different sizes, here
active size is checked, i.e. size of the actual value but not
the space allocated for value;
(c) coercions between floating and integral boxed values, this check
is not yet supported for unboxed tuples, as no semantics were
specified for that;
(d) coercions from / to vector type
(e) If types are unboxed tuples then tuple (# A_1,..,A_n #) can be
coerced to (# B_1,..,B_m #) if n=m and for each pair A_i, B_i rules
(a-e) holds.
************************************************************************
* *
Beginning and ending passes
* *
************************************************************************
These functions are not CoreM monad stuff, but they probably ought to
be, and it makes a conveneint place. place for them. They print out
stuff before and after core passes, and do Core Lint when necessary.
-}
showPass :: CoreToDo -> CoreM ()
showPass pass = do { dflags <- getDynFlags
; liftIO $ showPassIO dflags pass }
showPassIO :: DynFlags -> CoreToDo -> IO ()
showPassIO dflags pass = Err.showPass dflags (showPpr dflags pass)
endPass :: CoreToDo -> CoreProgram -> [CoreRule] -> CoreM ()
endPass pass binds rules
= do { hsc_env <- getHscEnv
; print_unqual <- getPrintUnqualified
; liftIO $ endPassIO hsc_env print_unqual pass binds rules }
endPassIO :: HscEnv -> PrintUnqualified
-> CoreToDo -> CoreProgram -> [CoreRule] -> IO ()
-- Used by the IO-is CorePrep too
endPassIO hsc_env print_unqual pass binds rules
= do { dumpPassResult dflags print_unqual mb_flag
(ppr pass) (pprPassDetails pass) binds rules
; lintPassResult hsc_env pass binds }
where
dflags = hsc_dflags hsc_env
mb_flag = case coreDumpFlag pass of
Just flag | dopt flag dflags -> Just flag
| dopt Opt_D_verbose_core2core dflags -> Just flag
_ -> Nothing
dumpIfSet :: DynFlags -> Bool -> CoreToDo -> SDoc -> SDoc -> IO ()
dumpIfSet dflags dump_me pass extra_info doc
= Err.dumpIfSet dflags dump_me (showSDoc dflags (ppr pass <+> extra_info)) doc
dumpPassResult :: DynFlags
-> PrintUnqualified
-> Maybe DumpFlag -- Just df => show details in a file whose
-- name is specified by df
-> SDoc -- Header
-> SDoc -- Extra info to appear after header
-> CoreProgram -> [CoreRule]
-> IO ()
dumpPassResult dflags unqual mb_flag hdr extra_info binds rules
| Just flag <- mb_flag
= Err.dumpSDoc dflags unqual flag (showSDoc dflags hdr) dump_doc
| otherwise
= Err.debugTraceMsg dflags 2 size_doc
-- Report result size
-- This has the side effect of forcing the intermediate to be evaluated
where
size_doc = sep [text "Result size of" <+> hdr, nest 2 (equals <+> ppr (coreBindsStats binds))]
dump_doc = vcat [ nest 2 extra_info
, size_doc
, blankLine
, pprCoreBindings binds
, ppUnless (null rules) pp_rules ]
pp_rules = vcat [ blankLine
, ptext (sLit "------ Local rules for imported ids --------")
, pprRules rules ]
coreDumpFlag :: CoreToDo -> Maybe DumpFlag
coreDumpFlag (CoreDoSimplify {}) = Just Opt_D_verbose_core2core
coreDumpFlag (CoreDoPluginPass {}) = Just Opt_D_verbose_core2core
coreDumpFlag CoreDoFloatInwards = Just Opt_D_verbose_core2core
coreDumpFlag (CoreDoFloatOutwards {}) = Just Opt_D_verbose_core2core
coreDumpFlag CoreLiberateCase = Just Opt_D_verbose_core2core
coreDumpFlag CoreDoStaticArgs = Just Opt_D_verbose_core2core
coreDumpFlag CoreDoCallArity = Just Opt_D_dump_call_arity
coreDumpFlag CoreDoStrictness = Just Opt_D_dump_stranal
coreDumpFlag CoreDoWorkerWrapper = Just Opt_D_dump_worker_wrapper
coreDumpFlag CoreDoSpecialising = Just Opt_D_dump_spec
coreDumpFlag CoreDoSpecConstr = Just Opt_D_dump_spec
coreDumpFlag CoreCSE = Just Opt_D_dump_cse
coreDumpFlag CoreDoVectorisation = Just Opt_D_dump_vect
coreDumpFlag CoreDesugar = Just Opt_D_dump_ds
coreDumpFlag CoreDesugarOpt = Just Opt_D_dump_ds
coreDumpFlag CoreTidy = Just Opt_D_dump_simpl
coreDumpFlag CorePrep = Just Opt_D_dump_prep
coreDumpFlag CoreDoPrintCore = Nothing
coreDumpFlag (CoreDoRuleCheck {}) = Nothing
coreDumpFlag CoreDoNothing = Nothing
coreDumpFlag (CoreDoPasses {}) = Nothing
{-
************************************************************************
* *
Top-level interfaces
* *
************************************************************************
-}
lintPassResult :: HscEnv -> CoreToDo -> CoreProgram -> IO ()
lintPassResult hsc_env pass binds
| not (gopt Opt_DoCoreLinting dflags)
= return ()
| otherwise
= do { let (warns, errs) = lintCoreBindings dflags pass (interactiveInScope hsc_env) binds
; Err.showPass dflags ("Core Linted result of " ++ showPpr dflags pass)
; displayLintResults dflags pass warns errs binds }
where
dflags = hsc_dflags hsc_env
displayLintResults :: DynFlags -> CoreToDo
-> Bag Err.MsgDoc -> Bag Err.MsgDoc -> CoreProgram
-> IO ()
displayLintResults dflags pass warns errs binds
| not (isEmptyBag errs)
= do { log_action dflags dflags Err.SevDump noSrcSpan defaultDumpStyle
(vcat [ lint_banner "errors" (ppr pass), Err.pprMessageBag errs
, ptext (sLit "*** Offending Program ***")
, pprCoreBindings binds
, ptext (sLit "*** End of Offense ***") ])
; Err.ghcExit dflags 1 }
| not (isEmptyBag warns)
, not opt_NoDebugOutput
, showLintWarnings pass
= log_action dflags dflags Err.SevDump noSrcSpan defaultDumpStyle
(lint_banner "warnings" (ppr pass) $$ Err.pprMessageBag warns)
| otherwise = return ()
where
lint_banner :: String -> SDoc -> SDoc
lint_banner string pass = ptext (sLit "*** Core Lint") <+> text string
<+> ptext (sLit ": in result of") <+> pass
<+> ptext (sLit "***")
showLintWarnings :: CoreToDo -> Bool
-- Disable Lint warnings on the first simplifier pass, because
-- there may be some INLINE knots still tied, which is tiresomely noisy
showLintWarnings (CoreDoSimplify _ (SimplMode { sm_phase = InitialPhase })) = False
showLintWarnings _ = True
lintInteractiveExpr :: String -> HscEnv -> CoreExpr -> IO ()
lintInteractiveExpr what hsc_env expr
| not (gopt Opt_DoCoreLinting dflags)
= return ()
| Just err <- lintExpr dflags (interactiveInScope hsc_env) expr
= do { display_lint_err err
; Err.ghcExit dflags 1 }
| otherwise
= return ()
where
dflags = hsc_dflags hsc_env
display_lint_err err
= do { log_action dflags dflags Err.SevDump noSrcSpan defaultDumpStyle
(vcat [ lint_banner "errors" (text what)
, err
, ptext (sLit "*** Offending Program ***")
, pprCoreExpr expr
, ptext (sLit "*** End of Offense ***") ])
; Err.ghcExit dflags 1 }
interactiveInScope :: HscEnv -> [Var]
-- In GHCi we may lint expressions, or bindings arising from 'deriving'
-- clauses, that mention variables bound in the interactive context.
-- These are Local things (see Note [Interactively-bound Ids in GHCi] in HscTypes).
-- So we have to tell Lint about them, lest it reports them as out of scope.
--
-- We do this by find local-named things that may appear free in interactive
-- context. This function is pretty revolting and quite possibly not quite right.
-- When we are not in GHCi, the interactive context (hsc_IC hsc_env) is empty
-- so this is a (cheap) no-op.
--
-- See Trac #8215 for an example
interactiveInScope hsc_env
= varSetElems tyvars ++ ids
where
-- C.f. TcRnDriver.setInteractiveContext, Desugar.deSugarExpr
ictxt = hsc_IC hsc_env
(cls_insts, _fam_insts) = ic_instances ictxt
te1 = mkTypeEnvWithImplicits (ic_tythings ictxt)
te = extendTypeEnvWithIds te1 (map instanceDFunId cls_insts)
ids = typeEnvIds te
tyvars = mapUnionVarSet (tyVarsOfType . idType) ids
-- Why the type variables? How can the top level envt have free tyvars?
-- I think it's because of the GHCi debugger, which can bind variables
-- f :: [t] -> [t]
-- where t is a RuntimeUnk (see TcType)
lintCoreBindings :: DynFlags -> CoreToDo -> [Var] -> CoreProgram -> (Bag MsgDoc, Bag MsgDoc)
-- Returns (warnings, errors)
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintCoreBindings dflags pass local_in_scope binds
= initL dflags flags $
addLoc TopLevelBindings $
addInScopeVars local_in_scope $
addInScopeVars binders $
-- Put all the top-level binders in scope at the start
-- This is because transformation rules can bring something
-- into use 'unexpectedly'
do { checkL (null dups) (dupVars dups)
; checkL (null ext_dups) (dupExtVars ext_dups)
; mapM lint_bind binds }
where
flags = LF { lf_check_global_ids = check_globals
, lf_check_inline_loop_breakers = check_lbs }
-- See Note [Checking for global Ids]
check_globals = case pass of
CoreTidy -> False
CorePrep -> False
_ -> True
-- See Note [Checking for INLINE loop breakers]
check_lbs = case pass of
CoreDesugar -> False
CoreDesugarOpt -> False
_ -> True
binders = bindersOfBinds binds
(_, dups) = removeDups compare binders
-- dups_ext checks for names with different uniques
-- but but the same External name M.n. We don't
-- allow this at top level:
-- M.n{r3} = ...
-- M.n{r29} = ...
-- because they both get the same linker symbol
ext_dups = snd (removeDups ord_ext (map Var.varName binders))
ord_ext n1 n2 | Just m1 <- nameModule_maybe n1
, Just m2 <- nameModule_maybe n2
= compare (m1, nameOccName n1) (m2, nameOccName n2)
| otherwise = LT
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lint_bind (Rec prs) = mapM_ (lintSingleBinding TopLevel Recursive) prs
lint_bind (NonRec bndr rhs) = lintSingleBinding TopLevel NonRecursive (bndr,rhs)
{-
************************************************************************
* *
\subsection[lintUnfolding]{lintUnfolding}
* *
************************************************************************
We use this to check all unfoldings that come in from interfaces
(it is very painful to catch errors otherwise):
-}
lintUnfolding :: DynFlags
-> SrcLoc
-> [Var] -- Treat these as in scope
-> CoreExpr
-> Maybe MsgDoc -- Nothing => OK
lintUnfolding dflags locn vars expr
| isEmptyBag errs = Nothing
| otherwise = Just (pprMessageBag errs)
where
(_warns, errs) = initL dflags defaultLintFlags linter
linter = addLoc (ImportedUnfolding locn) $
addInScopeVars vars $
lintCoreExpr expr
lintExpr :: DynFlags
-> [Var] -- Treat these as in scope
-> CoreExpr
-> Maybe MsgDoc -- Nothing => OK
lintExpr dflags vars expr
| isEmptyBag errs = Nothing
| otherwise = Just (pprMessageBag errs)
where
(_warns, errs) = initL dflags defaultLintFlags linter
linter = addLoc TopLevelBindings $
addInScopeVars vars $
lintCoreExpr expr
{-
************************************************************************
* *
\subsection[lintCoreBinding]{lintCoreBinding}
* *
************************************************************************
Check a core binding, returning the list of variables bound.
-}
lintSingleBinding :: TopLevelFlag -> RecFlag -> (Id, CoreExpr) -> LintM ()
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintSingleBinding top_lvl_flag rec_flag (binder,rhs)
= addLoc (RhsOf binder) $
-- Check the rhs
do { ty <- lintCoreExpr rhs
; lintBinder binder -- Check match to RHS type
; binder_ty <- applySubstTy binder_ty
; checkTys binder_ty ty (mkRhsMsg binder (ptext (sLit "RHS")) ty)
-- Check the let/app invariant
-- See Note [CoreSyn let/app invariant] in CoreSyn
; checkL (not (isUnLiftedType binder_ty)
|| (isNonRec rec_flag && exprOkForSpeculation rhs))
(mkRhsPrimMsg binder rhs)
-- Check that if the binder is top-level or recursive, it's not demanded
; checkL (not (isStrictId binder)
|| (isNonRec rec_flag && not (isTopLevel top_lvl_flag)))
(mkStrictMsg binder)
-- Check that if the binder is local, it is not marked as exported
; checkL (not (isExportedId binder) || isTopLevel top_lvl_flag)
(mkNonTopExportedMsg binder)
-- Check that if the binder is local, it does not have an external name
; checkL (not (isExternalName (Var.varName binder)) || isTopLevel top_lvl_flag)
(mkNonTopExternalNameMsg binder)
-- Check whether binder's specialisations contain any out-of-scope variables
; mapM_ (checkBndrIdInScope binder) bndr_vars
; flags <- getLintFlags
; when (lf_check_inline_loop_breakers flags
&& isStrongLoopBreaker (idOccInfo binder)
&& isInlinePragma (idInlinePragma binder))
(addWarnL (ptext (sLit "INLINE binder is (non-rule) loop breaker:") <+> ppr binder))
-- Only non-rule loop breakers inhibit inlining
-- Check whether arity and demand type are consistent (only if demand analysis
-- already happened)
--
-- Note (Apr 2014): this is actually ok. See Note [Demand analysis for trivial right-hand sides]
-- in DmdAnal. After eta-expansion in CorePrep the rhs is no longer trivial.
-- ; let dmdTy = idStrictness binder
-- ; checkL (case dmdTy of
-- StrictSig dmd_ty -> idArity binder >= dmdTypeDepth dmd_ty || exprIsTrivial rhs)
-- (mkArityMsg binder)
; lintIdUnfolding binder binder_ty (idUnfolding binder) }
-- We should check the unfolding, if any, but this is tricky because
-- the unfolding is a SimplifiableCoreExpr. Give up for now.
where
binder_ty = idType binder
bndr_vars = varSetElems (idFreeVars binder)
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintBinder var | isId var = lintIdBndr var $ \_ -> (return ())
| otherwise = return ()
lintIdUnfolding :: Id -> Type -> Unfolding -> LintM ()
lintIdUnfolding bndr bndr_ty (CoreUnfolding { uf_tmpl = rhs, uf_src = src })
| isStableSource src
= do { ty <- lintCoreExpr rhs
; checkTys bndr_ty ty (mkRhsMsg bndr (ptext (sLit "unfolding")) ty) }
lintIdUnfolding _ _ _
= return () -- We could check more
{-
Note [Checking for INLINE loop breakers]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It's very suspicious if a strong loop breaker is marked INLINE.
However, the desugarer generates instance methods with INLINE pragmas
that form a mutually recursive group. Only after a round of
simplification are they unravelled. So we suppress the test for
the desugarer.
************************************************************************
* *
\subsection[lintCoreExpr]{lintCoreExpr}
* *
************************************************************************
-}
--type InKind = Kind -- Substitution not yet applied
type InType = Type
type InCoercion = Coercion
type InVar = Var
type InTyVar = TyVar
type OutKind = Kind -- Substitution has been applied to this,
-- but has not been linted yet
type LintedKind = Kind -- Substitution applied, and type is linted
type OutType = Type -- Substitution has been applied to this,
-- but has not been linted yet
type LintedType = Type -- Substitution applied, and type is linted
type OutCoercion = Coercion
type OutVar = Var
type OutTyVar = TyVar
lintCoreExpr :: CoreExpr -> LintM OutType
-- The returned type has the substitution from the monad
-- already applied to it:
-- lintCoreExpr e subst = exprType (subst e)
--
-- The returned "type" can be a kind, if the expression is (Type ty)
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintCoreExpr (Var var)
= do { checkL (not (var == oneTupleDataConId))
(ptext (sLit "Illegal one-tuple"))
; checkL (isId var && not (isCoVar var))
(ptext (sLit "Non term variable") <+> ppr var)
; checkDeadIdOcc var
; var' <- lookupIdInScope var
; return (idType var') }
lintCoreExpr (Lit lit)
= return (literalType lit)
lintCoreExpr (Cast expr co)
= do { expr_ty <- lintCoreExpr expr
; co' <- applySubstCo co
; (_, from_ty, to_ty, r) <- lintCoercion co'
; checkRole co' Representational r
; checkTys from_ty expr_ty (mkCastErr expr co' from_ty expr_ty)
; return to_ty }
lintCoreExpr (Tick (Breakpoint _ ids) expr)
= do forM_ ids $ \id -> do
checkDeadIdOcc id
lookupIdInScope id
lintCoreExpr expr
lintCoreExpr (Tick _other_tickish expr)
= lintCoreExpr expr
lintCoreExpr (Let (NonRec tv (Type ty)) body)
| isTyVar tv
= -- See Note [Linting type lets]
do { ty' <- applySubstTy ty
; lintTyBndr tv $ \ tv' ->
do { addLoc (RhsOf tv) $ checkTyKind tv' ty'
-- Now extend the substitution so we
-- take advantage of it in the body
; extendSubstL tv' ty' $
addLoc (BodyOfLetRec [tv]) $
lintCoreExpr body } }
lintCoreExpr (Let (NonRec bndr rhs) body)
| isId bndr
= do { lintSingleBinding NotTopLevel NonRecursive (bndr,rhs)
; addLoc (BodyOfLetRec [bndr])
(lintAndScopeId bndr $ \_ -> (lintCoreExpr body)) }
| otherwise
= failWithL (mkLetErr bndr rhs) -- Not quite accurate
lintCoreExpr (Let (Rec pairs) body)
= lintAndScopeIds bndrs $ \_ ->
do { checkL (null dups) (dupVars dups)
; mapM_ (lintSingleBinding NotTopLevel Recursive) pairs
; addLoc (BodyOfLetRec bndrs) (lintCoreExpr body) }
where
bndrs = map fst pairs
(_, dups) = removeDups compare bndrs
lintCoreExpr e@(App _ _)
= do { fun_ty <- lintCoreExpr fun
; addLoc (AnExpr e) $ foldM lintCoreArg fun_ty args }
where
(fun, args) = collectArgs e
lintCoreExpr (Lam var expr)
= addLoc (LambdaBodyOf var) $
lintBinder var $ \ var' ->
do { body_ty <- lintCoreExpr expr
; if isId var' then
return (mkFunTy (idType var') body_ty)
else
return (mkForAllTy var' body_ty)
}
-- The applySubstTy is needed to apply the subst to var
lintCoreExpr e@(Case scrut var alt_ty alts) =
-- Check the scrutinee
do { scrut_ty <- lintCoreExpr scrut
; alt_ty <- lintInTy alt_ty
; var_ty <- lintInTy (idType var)
; case tyConAppTyCon_maybe (idType var) of
Just tycon
| debugIsOn &&
isAlgTyCon tycon &&
not (isFamilyTyCon tycon || isAbstractTyCon tycon) &&
null (tyConDataCons tycon) ->
pprTrace "Lint warning: case binder's type has no constructors" (ppr var <+> ppr (idType var))
-- This can legitimately happen for type families
$ return ()
_otherwise -> return ()
-- Don't use lintIdBndr on var, because unboxed tuple is legitimate
; subst <- getTvSubst
; checkTys var_ty scrut_ty (mkScrutMsg var var_ty scrut_ty subst)
; lintAndScopeId var $ \_ ->
do { -- Check the alternatives
mapM_ (lintCoreAlt scrut_ty alt_ty) alts
; checkCaseAlts e scrut_ty alts
; return alt_ty } }
-- This case can't happen; linting types in expressions gets routed through
-- lintCoreArgs
lintCoreExpr (Type ty)
= pprPanic "lintCoreExpr" (ppr ty)
lintCoreExpr (Coercion co)
= do { (_kind, ty1, ty2, role) <- lintInCo co
; return (mkCoercionType role ty1 ty2) }
{-
Note [Kind instantiation in coercions]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider the following coercion axiom:
ax_co [(k_ag :: BOX), (f_aa :: k_ag -> Constraint)] :: T k_ag f_aa ~ f_aa
Consider the following instantiation:
ax_co <* -> *> <Monad>
We need to split the co_ax_tvs into kind and type variables in order
to find out the coercion kind instantiations. Those can only be Refl
since we don't have kind coercions. This is just a way to represent
kind instantiation.
We use the number of kind variables to know how to split the coercions
instantiations between kind coercions and type coercions. We lint the
kind coercions and produce the following substitution which is to be
applied in the type variables:
k_ag ~~> * -> *
************************************************************************
* *
\subsection[lintCoreArgs]{lintCoreArgs}
* *
************************************************************************
The basic version of these functions checks that the argument is a
subtype of the required type, as one would expect.
-}
lintCoreArg :: OutType -> CoreArg -> LintM OutType
lintCoreArg fun_ty (Type arg_ty)
= do { arg_ty' <- applySubstTy arg_ty
; lintTyApp fun_ty arg_ty' }
lintCoreArg fun_ty arg
= do { arg_ty <- lintCoreExpr arg
; checkL (not (isUnLiftedType arg_ty) || exprOkForSpeculation arg)
(mkLetAppMsg arg)
; lintValApp arg fun_ty arg_ty }
-----------------
lintAltBinders :: OutType -- Scrutinee type
-> OutType -- Constructor type
-> [OutVar] -- Binders
-> LintM ()
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintAltBinders scrut_ty con_ty []
= checkTys con_ty scrut_ty (mkBadPatMsg con_ty scrut_ty)
lintAltBinders scrut_ty con_ty (bndr:bndrs)
| isTyVar bndr
= do { con_ty' <- lintTyApp con_ty (mkTyVarTy bndr)
; lintAltBinders scrut_ty con_ty' bndrs }
| otherwise
= do { con_ty' <- lintValApp (Var bndr) con_ty (idType bndr)
; lintAltBinders scrut_ty con_ty' bndrs }
-----------------
lintTyApp :: OutType -> OutType -> LintM OutType
lintTyApp fun_ty arg_ty
| Just (tyvar,body_ty) <- splitForAllTy_maybe fun_ty
, isTyVar tyvar
= do { checkTyKind tyvar arg_ty
; return (substTyWith [tyvar] [arg_ty] body_ty) }
| otherwise
= failWithL (mkTyAppMsg fun_ty arg_ty)
-----------------
lintValApp :: CoreExpr -> OutType -> OutType -> LintM OutType
lintValApp arg fun_ty arg_ty
| Just (arg,res) <- splitFunTy_maybe fun_ty
= do { checkTys arg arg_ty err1
; return res }
| otherwise
= failWithL err2
where
err1 = mkAppMsg fun_ty arg_ty arg
err2 = mkNonFunAppMsg fun_ty arg_ty arg
checkTyKind :: OutTyVar -> OutType -> LintM ()
-- Both args have had substitution applied
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
checkTyKind tyvar arg_ty
| isSuperKind tyvar_kind -- kind forall
= lintKind arg_ty
-- Arg type might be boxed for a function with an uncommitted
-- tyvar; notably this is used so that we can give
-- error :: forall a:*. String -> a
-- and then apply it to both boxed and unboxed types.
| otherwise -- type forall
= do { arg_kind <- lintType arg_ty
; unless (arg_kind `isSubKind` tyvar_kind)
(addErrL (mkKindErrMsg tyvar arg_ty $$ (text "xx" <+> ppr arg_kind))) }
where
tyvar_kind = tyVarKind tyvar
checkDeadIdOcc :: Id -> LintM ()
-- Occurrences of an Id should never be dead....
-- except when we are checking a case pattern
checkDeadIdOcc id
| isDeadOcc (idOccInfo id)
= do { in_case <- inCasePat
; checkL in_case
(ptext (sLit "Occurrence of a dead Id") <+> ppr id) }
| otherwise
= return ()
{-
************************************************************************
* *
\subsection[lintCoreAlts]{lintCoreAlts}
* *
************************************************************************
-}
checkCaseAlts :: CoreExpr -> OutType -> [CoreAlt] -> LintM ()
-- a) Check that the alts are non-empty
-- b1) Check that the DEFAULT comes first, if it exists
-- b2) Check that the others are in increasing order
-- c) Check that there's a default for infinite types
-- NB: Algebraic cases are not necessarily exhaustive, because
-- the simplifer correctly eliminates case that can't
-- possibly match.
checkCaseAlts e ty alts =
do { checkL (all non_deflt con_alts) (mkNonDefltMsg e)
; checkL (increasing_tag con_alts) (mkNonIncreasingAltsMsg e)
-- For types Int#, Word# with an infinite (well, large!) number of
-- possible values, there should usually be a DEFAULT case
-- But (see Note [Empty case alternatives] in CoreSyn) it's ok to
-- have *no* case alternatives.
-- In effect, this is a kind of partial test. I suppose it's possible
-- that we might *know* that 'x' was 1 or 2, in which case
-- case x of { 1 -> e1; 2 -> e2 }
-- would be fine.
; checkL (isJust maybe_deflt || not is_infinite_ty || null alts)
(nonExhaustiveAltsMsg e) }
where
(con_alts, maybe_deflt) = findDefault alts
-- Check that successive alternatives have increasing tags
increasing_tag (alt1 : rest@( alt2 : _)) = alt1 `ltAlt` alt2 && increasing_tag rest
increasing_tag _ = True
non_deflt (DEFAULT, _, _) = False
non_deflt _ = True
is_infinite_ty = case tyConAppTyCon_maybe ty of
Nothing -> False
Just tycon -> isPrimTyCon tycon
checkAltExpr :: CoreExpr -> OutType -> LintM ()
checkAltExpr expr ann_ty
= do { actual_ty <- lintCoreExpr expr
; checkTys actual_ty ann_ty (mkCaseAltMsg expr actual_ty ann_ty) }
lintCoreAlt :: OutType -- Type of scrutinee
-> OutType -- Type of the alternative
-> CoreAlt
-> LintM ()
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintCoreAlt _ alt_ty (DEFAULT, args, rhs) =
do { checkL (null args) (mkDefaultArgsMsg args)
; checkAltExpr rhs alt_ty }
lintCoreAlt scrut_ty alt_ty (LitAlt lit, args, rhs)
| litIsLifted lit
= failWithL integerScrutinisedMsg
| otherwise
= do { checkL (null args) (mkDefaultArgsMsg args)
; checkTys lit_ty scrut_ty (mkBadPatMsg lit_ty scrut_ty)
; checkAltExpr rhs alt_ty }
where
lit_ty = literalType lit
lintCoreAlt scrut_ty alt_ty alt@(DataAlt con, args, rhs)
| isNewTyCon (dataConTyCon con)
= addErrL (mkNewTyDataConAltMsg scrut_ty alt)
| Just (tycon, tycon_arg_tys) <- splitTyConApp_maybe scrut_ty
= addLoc (CaseAlt alt) $ do
{ -- First instantiate the universally quantified
-- type variables of the data constructor
-- We've already check
checkL (tycon == dataConTyCon con) (mkBadConMsg tycon con)
; let con_payload_ty = applyTys (dataConRepType con) tycon_arg_tys
-- And now bring the new binders into scope
; lintBinders args $ \ args' -> do
{ addLoc (CasePat alt) (lintAltBinders scrut_ty con_payload_ty args')
; checkAltExpr rhs alt_ty } }
| otherwise -- Scrut-ty is wrong shape
= addErrL (mkBadAltMsg scrut_ty alt)
{-
************************************************************************
* *
\subsection[lint-types]{Types}
* *
************************************************************************
-}
-- When we lint binders, we (one at a time and in order):
-- 1. Lint var types or kinds (possibly substituting)
-- 2. Add the binder to the in scope set, and if its a coercion var,
-- we may extend the substitution to reflect its (possibly) new kind
lintBinders :: [Var] -> ([Var] -> LintM a) -> LintM a
lintBinders [] linterF = linterF []
lintBinders (var:vars) linterF = lintBinder var $ \var' ->
lintBinders vars $ \ vars' ->
linterF (var':vars')
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintBinder :: Var -> (Var -> LintM a) -> LintM a
lintBinder var linterF
| isId var = lintIdBndr var linterF
| otherwise = lintTyBndr var linterF
lintTyBndr :: InTyVar -> (OutTyVar -> LintM a) -> LintM a
lintTyBndr tv thing_inside
= do { subst <- getTvSubst
; let (subst', tv') = Type.substTyVarBndr subst tv
; lintTyBndrKind tv'
; updateTvSubst subst' (thing_inside tv') }
lintIdBndr :: Id -> (Id -> LintM a) -> LintM a
-- Do substitution on the type of a binder and add the var with this
-- new type to the in-scope set of the second argument
-- ToDo: lint its rules
lintIdBndr id linterF
= do { lintAndScopeId id $ \id' -> linterF id' }
lintAndScopeIds :: [Var] -> ([Var] -> LintM a) -> LintM a
lintAndScopeIds ids linterF
= go ids
where
go [] = linterF []
go (id:ids) = lintAndScopeId id $ \id ->
lintAndScopeIds ids $ \ids ->
linterF (id:ids)
lintAndScopeId :: InVar -> (OutVar -> LintM a) -> LintM a
lintAndScopeId id linterF
= do { flags <- getLintFlags
; checkL (not (lf_check_global_ids flags) || isLocalId id)
(ptext (sLit "Non-local Id binder") <+> ppr id)
-- See Note [Checking for global Ids]
; ty <- lintInTy (idType id)
; let id' = setIdType id ty
; addInScopeVar id' $ (linterF id') }
{-
************************************************************************
* *
Types and kinds
* *
************************************************************************
We have a single linter for types and kinds. That is convenient
because sometimes it's not clear whether the thing we are looking
at is a type or a kind.
-}
lintInTy :: InType -> LintM LintedType
-- Types only, not kinds
-- Check the type, and apply the substitution to it
-- See Note [Linting type lets]
lintInTy ty
= addLoc (InType ty) $
do { ty' <- applySubstTy ty
; _k <- lintType ty'
; return ty' }
-------------------
lintTyBndrKind :: OutTyVar -> LintM ()
-- Handles both type and kind foralls.
lintTyBndrKind tv = lintKind (tyVarKind tv)
-------------------
lintType :: OutType -> LintM LintedKind
-- The returned Kind has itself been linted
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintType (TyVarTy tv)
= do { checkTyCoVarInScope tv
; return (tyVarKind tv) }
-- We checked its kind when we added it to the envt
lintType ty@(AppTy t1 t2)
= do { k1 <- lintType t1
; k2 <- lintType t2
; lint_ty_app ty k1 [(t2,k2)] }
lintType ty@(FunTy t1 t2) -- (->) has two different rules, for types and kinds
= do { k1 <- lintType t1
; k2 <- lintType t2
; lintArrow (ptext (sLit "type or kind") <+> quotes (ppr ty)) k1 k2 }
lintType ty@(TyConApp tc tys)
| Just ty' <- coreView ty
= lintType ty' -- Expand type synonyms, so that we do not bogusly complain
-- about un-saturated type synonyms
| isUnLiftedTyCon tc || isTypeSynonymTyCon tc || isTypeFamilyTyCon tc
-- See Note [The kind invariant] in TypeRep
-- Also type synonyms and type families
, length tys < tyConArity tc
= failWithL (hang (ptext (sLit "Un-saturated type application")) 2 (ppr ty))
| otherwise
= do { ks <- mapM lintType tys
; lint_ty_app ty (tyConKind tc) (tys `zip` ks) }
lintType (ForAllTy tv ty)
= do { lintTyBndrKind tv
; addInScopeVar tv (lintType ty) }
lintType ty@(LitTy l) = lintTyLit l >> return (typeKind ty)
lintKind :: OutKind -> LintM ()
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintKind k = do { sk <- lintType k
; unless (isSuperKind sk)
(addErrL (hang (ptext (sLit "Ill-kinded kind:") <+> ppr k)
2 (ptext (sLit "has kind:") <+> ppr sk))) }
lintArrow :: SDoc -> LintedKind -> LintedKind -> LintM LintedKind
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintArrow what k1 k2 -- Eg lintArrow "type or kind `blah'" k1 k2
-- or lintarrow "coercion `blah'" k1 k2
| isSuperKind k1
= return superKind
| otherwise
= do { unless (okArrowArgKind k1) (addErrL (msg (ptext (sLit "argument")) k1))
; unless (okArrowResultKind k2) (addErrL (msg (ptext (sLit "result")) k2))
; return liftedTypeKind }
where
msg ar k
= vcat [ hang (ptext (sLit "Ill-kinded") <+> ar)
2 (ptext (sLit "in") <+> what)
, what <+> ptext (sLit "kind:") <+> ppr k ]
lint_ty_app :: Type -> LintedKind -> [(LintedType,LintedKind)] -> LintM LintedKind
lint_ty_app ty k tys
= lint_app (ptext (sLit "type") <+> quotes (ppr ty)) k tys
----------------
lint_co_app :: Coercion -> LintedKind -> [(LintedType,LintedKind)] -> LintM LintedKind
lint_co_app ty k tys
= lint_app (ptext (sLit "coercion") <+> quotes (ppr ty)) k tys
----------------
lintTyLit :: TyLit -> LintM ()
lintTyLit (NumTyLit n)
| n >= 0 = return ()
| otherwise = failWithL msg
where msg = ptext (sLit "Negative type literal:") <+> integer n
lintTyLit (StrTyLit _) = return ()
lint_app :: SDoc -> LintedKind -> [(LintedType,LintedKind)] -> LintM Kind
-- (lint_app d fun_kind arg_tys)
-- We have an application (f arg_ty1 .. arg_tyn),
-- where f :: fun_kind
-- Takes care of linting the OutTypes
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lint_app doc kfn kas
= foldlM go_app kfn kas
where
fail_msg = vcat [ hang (ptext (sLit "Kind application error in")) 2 doc
, nest 2 (ptext (sLit "Function kind =") <+> ppr kfn)
, nest 2 (ptext (sLit "Arg kinds =") <+> ppr kas) ]
go_app kfn ka
| Just kfn' <- coreView kfn
= go_app kfn' ka
go_app (FunTy kfa kfb) (_,ka)
= do { unless (ka `isSubKind` kfa) (addErrL fail_msg)
; return kfb }
go_app (ForAllTy kv kfn) (ta,ka)
= do { unless (ka `isSubKind` tyVarKind kv) (addErrL fail_msg)
; return (substKiWith [kv] [ta] kfn) }
go_app _ _ = failWithL fail_msg
{-
************************************************************************
* *
Linting coercions
* *
************************************************************************
-}
lintInCo :: InCoercion -> LintM (LintedKind, LintedType, LintedType, Role)
-- Check the coercion, and apply the substitution to it
-- See Note [Linting type lets]
lintInCo co
= addLoc (InCo co) $
do { co' <- applySubstCo co
; lintCoercion co' }
lintCoercion :: OutCoercion -> LintM (LintedKind, LintedType, LintedType, Role)
-- Check the kind of a coercion term, returning the kind
-- Post-condition: the returned OutTypes are lint-free
-- and have the same kind as each other
-- If you edit this function, you may need to update the GHC formalism
-- See Note [GHC Formalism]
lintCoercion (Refl r ty)
= do { k <- lintType ty
; return (k, ty, ty, r) }
lintCoercion co@(TyConAppCo r tc cos)
| tc `hasKey` funTyConKey
, [co1,co2] <- cos
= do { (k1,s1,t1,r1) <- lintCoercion co1
; (k2,s2,t2,r2) <- lintCoercion co2
; rk <- lintArrow (ptext (sLit "coercion") <+> quotes (ppr co)) k1 k2
; checkRole co1 r r1
; checkRole co2 r r2
; return (rk, mkFunTy s1 s2, mkFunTy t1 t2, r) }
| Just {} <- synTyConDefn_maybe tc
= failWithL (ptext (sLit "Synonym in TyConAppCo:") <+> ppr co)
| otherwise
= do { (ks,ss,ts,rs) <- mapAndUnzip4M lintCoercion cos
; rk <- lint_co_app co (tyConKind tc) (ss `zip` ks)
; _ <- zipWith3M checkRole cos (tyConRolesX r tc) rs
; return (rk, mkTyConApp tc ss, mkTyConApp tc ts, r) }
lintCoercion co@(AppCo co1 co2)
= do { (k1,s1,t1,r1) <- lintCoercion co1
; (k2,s2,t2,r2) <- lintCoercion co2
; rk <- lint_co_app co k1 [(s2,k2)]
; if r1 == Phantom
then checkL (r2 == Phantom || r2 == Nominal)
(ptext (sLit "Second argument in AppCo cannot be R:") $$
ppr co)
else checkRole co Nominal r2
; return (rk, mkAppTy s1 s2, mkAppTy t1 t2, r1) }
lintCoercion (ForAllCo tv co)
= do { lintTyBndrKind tv
; (k, s, t, r) <- addInScopeVar tv (lintCoercion co)
; return (k, mkForAllTy tv s, mkForAllTy tv t, r) }
lintCoercion (CoVarCo cv)
| not (isCoVar cv)
= failWithL (hang (ptext (sLit "Bad CoVarCo:") <+> ppr cv)
2 (ptext (sLit "With offending type:") <+> ppr (varType cv)))
| otherwise
= do { checkTyCoVarInScope cv
; cv' <- lookupIdInScope cv
; let (s,t) = coVarKind cv'
k = typeKind s
r = coVarRole cv'
; when (isSuperKind k) $
do { checkL (r == Nominal) (hang (ptext (sLit "Non-nominal kind equality"))
2 (ppr cv))
; checkL (s `eqKind` t) (hang (ptext (sLit "Non-refl kind equality"))
2 (ppr cv)) }
; return (k, s, t, r) }
-- See Note [Bad unsafe coercion]
lintCoercion (UnivCo _prov r ty1 ty2)
= do { k1 <- lintType ty1
; k2 <- lintType ty2
-- ; unless (k1 `eqKind` k2) $
-- failWithL (hang (ptext (sLit "Unsafe coercion changes kind"))
-- 2 (ppr co))
; when (r /= Phantom && isSubOpenTypeKind k1 && isSubOpenTypeKind k2)
(checkTypes ty1 ty2)
; return (k1, ty1, ty2, r) }
where
report s = hang (text $ "Unsafe coercion between " ++ s)
2 (vcat [ text "From:" <+> ppr ty1
, text " To:" <+> ppr ty2])
isUnBoxed :: PrimRep -> Bool
isUnBoxed PtrRep = False
isUnBoxed _ = True
checkTypes t1 t2
= case (repType t1, repType t2) of
(UnaryRep _, UnaryRep _) ->
validateCoercion (typePrimRep t1)
(typePrimRep t2)
(UbxTupleRep rep1, UbxTupleRep rep2) -> do
checkWarnL (length rep1 == length rep2)
(report "unboxed tuples of different length")
zipWithM_ checkTypes rep1 rep2
_ -> addWarnL (report "unboxed tuple and ordinary type")
validateCoercion :: PrimRep -> PrimRep -> LintM ()
validateCoercion rep1 rep2
= do { dflags <- getDynFlags
; checkWarnL (isUnBoxed rep1 == isUnBoxed rep2)
(report "unboxed and boxed value")
; checkWarnL (TyCon.primRepSizeW dflags rep1
== TyCon.primRepSizeW dflags rep2)
(report "unboxed values of different size")
; let fl = liftM2 (==) (TyCon.primRepIsFloat rep1)
(TyCon.primRepIsFloat rep2)
; case fl of
Nothing -> addWarnL (report "vector types")
Just False -> addWarnL (report "float and integral values")
_ -> return ()
}
lintCoercion (SymCo co)
= do { (k, ty1, ty2, r) <- lintCoercion co
; return (k, ty2, ty1, r) }
lintCoercion co@(TransCo co1 co2)
= do { (k1, ty1a, ty1b, r1) <- lintCoercion co1
; (_, ty2a, ty2b, r2) <- lintCoercion co2
; checkL (ty1b `eqType` ty2a)
(hang (ptext (sLit "Trans coercion mis-match:") <+> ppr co)
2 (vcat [ppr ty1a, ppr ty1b, ppr ty2a, ppr ty2b]))
; checkRole co r1 r2
; return (k1, ty1a, ty2b, r1) }
lintCoercion the_co@(NthCo n co)
= do { (_,s,t,r) <- lintCoercion co
; case (splitTyConApp_maybe s, splitTyConApp_maybe t) of
(Just (tc_s, tys_s), Just (tc_t, tys_t))
| tc_s == tc_t
, tys_s `equalLength` tys_t
, n < length tys_s
-> return (ks, ts, tt, tr)
where
ts = getNth tys_s n
tt = getNth tys_t n
tr = nthRole r tc_s n
ks = typeKind ts
_ -> failWithL (hang (ptext (sLit "Bad getNth:"))
2 (ppr the_co $$ ppr s $$ ppr t)) }
lintCoercion the_co@(LRCo lr co)
= do { (_,s,t,r) <- lintCoercion co
; checkRole co Nominal r
; case (splitAppTy_maybe s, splitAppTy_maybe t) of
(Just s_pr, Just t_pr)
-> return (k, s_pick, t_pick, Nominal)
where
s_pick = pickLR lr s_pr
t_pick = pickLR lr t_pr
k = typeKind s_pick
_ -> failWithL (hang (ptext (sLit "Bad LRCo:"))
2 (ppr the_co $$ ppr s $$ ppr t)) }
lintCoercion (InstCo co arg_ty)
= do { (k,s,t,r) <- lintCoercion co
; arg_kind <- lintType arg_ty
; case (splitForAllTy_maybe s, splitForAllTy_maybe t) of
(Just (tv1,ty1), Just (tv2,ty2))
| arg_kind `isSubKind` tyVarKind tv1
-> return (k, substTyWith [tv1] [arg_ty] ty1,
substTyWith [tv2] [arg_ty] ty2, r)
| otherwise
-> failWithL (ptext (sLit "Kind mis-match in inst coercion"))
_ -> failWithL (ptext (sLit "Bad argument of inst")) }
lintCoercion co@(AxiomInstCo con ind cos)
= do { unless (0 <= ind && ind < brListLength (coAxiomBranches con))
(bad_ax (ptext (sLit "index out of range")))
-- See Note [Kind instantiation in coercions]
; let CoAxBranch { cab_tvs = ktvs
, cab_roles = roles
, cab_lhs = lhs
, cab_rhs = rhs } = coAxiomNthBranch con ind
; unless (equalLength ktvs cos) (bad_ax (ptext (sLit "lengths")))
; in_scope <- getInScope
; let empty_subst = mkTvSubst in_scope emptyTvSubstEnv
; (subst_l, subst_r) <- foldlM check_ki
(empty_subst, empty_subst)
(zip3 ktvs roles cos)
; let lhs' = Type.substTys subst_l lhs
rhs' = Type.substTy subst_r rhs
; case checkAxInstCo co of
Just bad_branch -> bad_ax $ ptext (sLit "inconsistent with") <+> (pprCoAxBranch (coAxiomTyCon con) bad_branch)
Nothing -> return ()
; return (typeKind rhs', mkTyConApp (coAxiomTyCon con) lhs', rhs', coAxiomRole con) }
where
bad_ax what = addErrL (hang (ptext (sLit "Bad axiom application") <+> parens what)
2 (ppr co))
check_ki (subst_l, subst_r) (ktv, role, co)
= do { (k, t1, t2, r) <- lintCoercion co
; checkRole co role r
; let ktv_kind = Type.substTy subst_l (tyVarKind ktv)
-- Using subst_l is ok, because subst_l and subst_r
-- must agree on kind equalities
; unless (k `isSubKind` ktv_kind)
(bad_ax (ptext (sLit "check_ki2") <+> vcat [ ppr co, ppr k, ppr ktv, ppr ktv_kind ] ))
; return (Type.extendTvSubst subst_l ktv t1,
Type.extendTvSubst subst_r ktv t2) }
lintCoercion co@(SubCo co')
= do { (k,s,t,r) <- lintCoercion co'
; checkRole co Nominal r
; return (k,s,t,Representational) }
lintCoercion this@(AxiomRuleCo co ts cs)
= do _ks <- mapM lintType ts
eqs <- mapM lintCoercion cs
let tyNum = length ts
case compare (coaxrTypeArity co) tyNum of
EQ -> return ()
LT -> err "Too many type arguments"
[ txt "expected" <+> int (coaxrTypeArity co)
, txt "provided" <+> int tyNum ]
GT -> err "Not enough type arguments"
[ txt "expected" <+> int (coaxrTypeArity co)
, txt "provided" <+> int tyNum ]
checkRoles 0 (coaxrAsmpRoles co) eqs
case coaxrProves co ts [ Pair l r | (_,l,r,_) <- eqs ] of
Nothing -> err "Malformed use of AxiomRuleCo" [ ppr this ]
Just (Pair l r) ->
do kL <- lintType l
kR <- lintType r
unless (eqKind kL kR)
$ err "Kind error in CoAxiomRule"
[ppr kL <+> txt "/=" <+> ppr kR]
return (kL, l, r, coaxrRole co)
where
txt = ptext . sLit
err m xs = failWithL $
hang (txt m) 2 $ vcat (txt "Rule:" <+> ppr (coaxrName co) : xs)
checkRoles n (e : es) ((_,_,_,r) : rs)
| e == r = checkRoles (n+1) es rs
| otherwise = err "Argument roles mismatch"
[ txt "In argument:" <+> int (n+1)
, txt "Expected:" <+> ppr e
, txt "Found:" <+> ppr r ]
checkRoles _ [] [] = return ()
checkRoles n [] rs = err "Too many coercion arguments"
[ txt "Expected:" <+> int n
, txt "Provided:" <+> int (n + length rs) ]
checkRoles n es [] = err "Not enough coercion arguments"
[ txt "Expected:" <+> int (n + length es)
, txt "Provided:" <+> int n ]
{-
************************************************************************
* *
\subsection[lint-monad]{The Lint monad}
* *
************************************************************************
-}
-- If you edit this type, you may need to update the GHC formalism
-- See Note [GHC Formalism]
data LintEnv
= LE { le_flags :: LintFlags -- Linting the result of this pass
, le_loc :: [LintLocInfo] -- Locations
, le_subst :: TvSubst -- Current type substitution; we also use this
-- to keep track of all the variables in scope,
-- both Ids and TyVars
, le_dynflags :: DynFlags -- DynamicFlags
}
data LintFlags
= LF { lf_check_global_ids :: Bool -- See Note [Checking for global Ids]
, lf_check_inline_loop_breakers :: Bool -- See Note [Checking for INLINE loop breakers]
}
defaultLintFlags :: LintFlags
defaultLintFlags = LF { lf_check_global_ids = False
, lf_check_inline_loop_breakers = True }
newtype LintM a =
LintM { unLintM ::
LintEnv ->
WarnsAndErrs -> -- Error and warning messages so far
(Maybe a, WarnsAndErrs) } -- Result and messages (if any)
type WarnsAndErrs = (Bag MsgDoc, Bag MsgDoc)
{- Note [Checking for global Ids]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before CoreTidy, all locally-bound Ids must be LocalIds, even
top-level ones. See Note [Exported LocalIds] and Trac #9857.
Note [Type substitution]
~~~~~~~~~~~~~~~~~~~~~~~~
Why do we need a type substitution? Consider
/\(a:*). \(x:a). /\(a:*). id a x
This is ill typed, because (renaming variables) it is really
/\(a:*). \(x:a). /\(b:*). id b x
Hence, when checking an application, we can't naively compare x's type
(at its binding site) with its expected type (at a use site). So we
rename type binders as we go, maintaining a substitution.
The same substitution also supports let-type, current expressed as
(/\(a:*). body) ty
Here we substitute 'ty' for 'a' in 'body', on the fly.
-}
instance Functor LintM where
fmap = liftM
instance Applicative LintM where
pure = return
(<*>) = ap
instance Monad LintM where
return x = LintM (\ _ errs -> (Just x, errs))
fail err = failWithL (text err)
m >>= k = LintM (\ env errs ->
let (res, errs') = unLintM m env errs in
case res of
Just r -> unLintM (k r) env errs'
Nothing -> (Nothing, errs'))
instance HasDynFlags LintM where
getDynFlags = LintM (\ e errs -> (Just (le_dynflags e), errs))
data LintLocInfo
= RhsOf Id -- The variable bound
| LambdaBodyOf Id -- The lambda-binder
| BodyOfLetRec [Id] -- One of the binders
| CaseAlt CoreAlt -- Case alternative
| CasePat CoreAlt -- The *pattern* of the case alternative
| AnExpr CoreExpr -- Some expression
| ImportedUnfolding SrcLoc -- Some imported unfolding (ToDo: say which)
| TopLevelBindings
| InType Type -- Inside a type
| InCo Coercion -- Inside a coercion
initL :: DynFlags -> LintFlags -> LintM a -> WarnsAndErrs -- Errors and warnings
initL dflags flags m
= case unLintM m env (emptyBag, emptyBag) of
(_, errs) -> errs
where
env = LE { le_flags = flags, le_subst = emptyTvSubst, le_loc = [], le_dynflags = dflags }
getLintFlags :: LintM LintFlags
getLintFlags = LintM $ \ env errs -> (Just (le_flags env), errs)
checkL :: Bool -> MsgDoc -> LintM ()
checkL True _ = return ()
checkL False msg = failWithL msg
checkWarnL :: Bool -> MsgDoc -> LintM ()
checkWarnL True _ = return ()
checkWarnL False msg = addWarnL msg
failWithL :: MsgDoc -> LintM a
failWithL msg = LintM $ \ env (warns,errs) ->
(Nothing, (warns, addMsg env errs msg))
addErrL :: MsgDoc -> LintM ()
addErrL msg = LintM $ \ env (warns,errs) ->
(Just (), (warns, addMsg env errs msg))
addWarnL :: MsgDoc -> LintM ()
addWarnL msg = LintM $ \ env (warns,errs) ->
(Just (), (addMsg env warns msg, errs))
addMsg :: LintEnv -> Bag MsgDoc -> MsgDoc -> Bag MsgDoc
addMsg env msgs msg
= ASSERT( notNull locs )
msgs `snocBag` mk_msg msg
where
locs = le_loc env
(loc, cxt1) = dumpLoc (head locs)
cxts = [snd (dumpLoc loc) | loc <- locs]
context | opt_PprStyle_Debug = vcat (reverse cxts) $$ cxt1 $$
ptext (sLit "Substitution:") <+> ppr (le_subst env)
| otherwise = cxt1
mk_msg msg = mkLocMessage SevWarning (mkSrcSpan loc loc) (context $$ msg)
addLoc :: LintLocInfo -> LintM a -> LintM a
addLoc extra_loc m
= LintM $ \ env errs ->
unLintM m (env { le_loc = extra_loc : le_loc env }) errs
inCasePat :: LintM Bool -- A slight hack; see the unique call site
inCasePat = LintM $ \ env errs -> (Just (is_case_pat env), errs)
where
is_case_pat (LE { le_loc = CasePat {} : _ }) = True
is_case_pat _other = False
addInScopeVars :: [Var] -> LintM a -> LintM a
addInScopeVars vars m
= LintM $ \ env errs ->
unLintM m (env { le_subst = extendTvInScopeList (le_subst env) vars })
errs
addInScopeVar :: Var -> LintM a -> LintM a
addInScopeVar var m
= LintM $ \ env errs ->
unLintM m (env { le_subst = extendTvInScope (le_subst env) var }) errs
extendSubstL :: TyVar -> Type -> LintM a -> LintM a
extendSubstL tv ty m
= LintM $ \ env errs ->
unLintM m (env { le_subst = Type.extendTvSubst (le_subst env) tv ty }) errs
updateTvSubst :: TvSubst -> LintM a -> LintM a
updateTvSubst subst' m
= LintM $ \ env errs -> unLintM m (env { le_subst = subst' }) errs
getTvSubst :: LintM TvSubst
getTvSubst = LintM (\ env errs -> (Just (le_subst env), errs))
getInScope :: LintM InScopeSet
getInScope = LintM (\ env errs -> (Just (getTvInScope (le_subst env)), errs))
applySubstTy :: InType -> LintM OutType
applySubstTy ty = do { subst <- getTvSubst; return (Type.substTy subst ty) }
applySubstCo :: InCoercion -> LintM OutCoercion
applySubstCo co = do { subst <- getTvSubst; return (substCo (tvCvSubst subst) co) }
lookupIdInScope :: Id -> LintM Id
lookupIdInScope id
| not (mustHaveLocalBinding id)
= return id -- An imported Id
| otherwise
= do { subst <- getTvSubst
; case lookupInScope (getTvInScope subst) id of
Just v -> return v
Nothing -> do { addErrL out_of_scope
; return id } }
where
out_of_scope = pprBndr LetBind id <+> ptext (sLit "is out of scope")
oneTupleDataConId :: Id -- Should not happen
oneTupleDataConId = dataConWorkId (tupleCon BoxedTuple 1)
checkBndrIdInScope :: Var -> Var -> LintM ()
checkBndrIdInScope binder id
= checkInScope msg id
where
msg = ptext (sLit "is out of scope inside info for") <+>
ppr binder
checkTyCoVarInScope :: Var -> LintM ()
checkTyCoVarInScope v = checkInScope (ptext (sLit "is out of scope")) v
checkInScope :: SDoc -> Var -> LintM ()
checkInScope loc_msg var =
do { subst <- getTvSubst
; checkL (not (mustHaveLocalBinding var) || (var `isInScope` subst))
(hsep [pprBndr LetBind var, loc_msg]) }
checkTys :: OutType -> OutType -> MsgDoc -> LintM ()
-- check ty2 is subtype of ty1 (ie, has same structure but usage
-- annotations need only be consistent, not equal)
-- Assumes ty1,ty2 are have alrady had the substitution applied
checkTys ty1 ty2 msg = checkL (ty1 `eqType` ty2) msg
checkRole :: Coercion
-> Role -- expected
-> Role -- actual
-> LintM ()
checkRole co r1 r2
= checkL (r1 == r2)
(ptext (sLit "Role incompatibility: expected") <+> ppr r1 <> comma <+>
ptext (sLit "got") <+> ppr r2 $$
ptext (sLit "in") <+> ppr co)
{-
************************************************************************
* *
\subsection{Error messages}
* *
************************************************************************
-}
dumpLoc :: LintLocInfo -> (SrcLoc, SDoc)
dumpLoc (RhsOf v)
= (getSrcLoc v, brackets (ptext (sLit "RHS of") <+> pp_binders [v]))
dumpLoc (LambdaBodyOf b)
= (getSrcLoc b, brackets (ptext (sLit "in body of lambda with binder") <+> pp_binder b))
dumpLoc (BodyOfLetRec [])
= (noSrcLoc, brackets (ptext (sLit "In body of a letrec with no binders")))
dumpLoc (BodyOfLetRec bs@(_:_))
= ( getSrcLoc (head bs), brackets (ptext (sLit "in body of letrec with binders") <+> pp_binders bs))
dumpLoc (AnExpr e)
= (noSrcLoc, text "In the expression:" <+> ppr e)
dumpLoc (CaseAlt (con, args, _))
= (noSrcLoc, text "In a case alternative:" <+> parens (ppr con <+> pp_binders args))
dumpLoc (CasePat (con, args, _))
= (noSrcLoc, text "In the pattern of a case alternative:" <+> parens (ppr con <+> pp_binders args))
dumpLoc (ImportedUnfolding locn)
= (locn, brackets (ptext (sLit "in an imported unfolding")))
dumpLoc TopLevelBindings
= (noSrcLoc, Outputable.empty)
dumpLoc (InType ty)
= (noSrcLoc, text "In the type" <+> quotes (ppr ty))
dumpLoc (InCo co)
= (noSrcLoc, text "In the coercion" <+> quotes (ppr co))
pp_binders :: [Var] -> SDoc
pp_binders bs = sep (punctuate comma (map pp_binder bs))
pp_binder :: Var -> SDoc
pp_binder b | isId b = hsep [ppr b, dcolon, ppr (idType b)]
| otherwise = hsep [ppr b, dcolon, ppr (tyVarKind b)]
------------------------------------------------------
-- Messages for case expressions
mkDefaultArgsMsg :: [Var] -> MsgDoc
mkDefaultArgsMsg args
= hang (text "DEFAULT case with binders")
4 (ppr args)
mkCaseAltMsg :: CoreExpr -> Type -> Type -> MsgDoc
mkCaseAltMsg e ty1 ty2
= hang (text "Type of case alternatives not the same as the annotation on case:")
4 (vcat [ppr ty1, ppr ty2, ppr e])
mkScrutMsg :: Id -> Type -> Type -> TvSubst -> MsgDoc
mkScrutMsg var var_ty scrut_ty subst
= vcat [text "Result binder in case doesn't match scrutinee:" <+> ppr var,
text "Result binder type:" <+> ppr var_ty,--(idType var),
text "Scrutinee type:" <+> ppr scrut_ty,
hsep [ptext (sLit "Current TV subst"), ppr subst]]
mkNonDefltMsg, mkNonIncreasingAltsMsg :: CoreExpr -> MsgDoc
mkNonDefltMsg e
= hang (text "Case expression with DEFAULT not at the beginnning") 4 (ppr e)
mkNonIncreasingAltsMsg e
= hang (text "Case expression with badly-ordered alternatives") 4 (ppr e)
nonExhaustiveAltsMsg :: CoreExpr -> MsgDoc
nonExhaustiveAltsMsg e
= hang (text "Case expression with non-exhaustive alternatives") 4 (ppr e)
mkBadConMsg :: TyCon -> DataCon -> MsgDoc
mkBadConMsg tycon datacon
= vcat [
text "In a case alternative, data constructor isn't in scrutinee type:",
text "Scrutinee type constructor:" <+> ppr tycon,
text "Data con:" <+> ppr datacon
]
mkBadPatMsg :: Type -> Type -> MsgDoc
mkBadPatMsg con_result_ty scrut_ty
= vcat [
text "In a case alternative, pattern result type doesn't match scrutinee type:",
text "Pattern result type:" <+> ppr con_result_ty,
text "Scrutinee type:" <+> ppr scrut_ty
]
integerScrutinisedMsg :: MsgDoc
integerScrutinisedMsg
= text "In a LitAlt, the literal is lifted (probably Integer)"
mkBadAltMsg :: Type -> CoreAlt -> MsgDoc
mkBadAltMsg scrut_ty alt
= vcat [ text "Data alternative when scrutinee is not a tycon application",
text "Scrutinee type:" <+> ppr scrut_ty,
text "Alternative:" <+> pprCoreAlt alt ]
mkNewTyDataConAltMsg :: Type -> CoreAlt -> MsgDoc
mkNewTyDataConAltMsg scrut_ty alt
= vcat [ text "Data alternative for newtype datacon",
text "Scrutinee type:" <+> ppr scrut_ty,
text "Alternative:" <+> pprCoreAlt alt ]
------------------------------------------------------
-- Other error messages
mkAppMsg :: Type -> Type -> CoreExpr -> MsgDoc
mkAppMsg fun_ty arg_ty arg
= vcat [ptext (sLit "Argument value doesn't match argument type:"),
hang (ptext (sLit "Fun type:")) 4 (ppr fun_ty),
hang (ptext (sLit "Arg type:")) 4 (ppr arg_ty),
hang (ptext (sLit "Arg:")) 4 (ppr arg)]
mkNonFunAppMsg :: Type -> Type -> CoreExpr -> MsgDoc
mkNonFunAppMsg fun_ty arg_ty arg
= vcat [ptext (sLit "Non-function type in function position"),
hang (ptext (sLit "Fun type:")) 4 (ppr fun_ty),
hang (ptext (sLit "Arg type:")) 4 (ppr arg_ty),
hang (ptext (sLit "Arg:")) 4 (ppr arg)]
mkLetErr :: TyVar -> CoreExpr -> MsgDoc
mkLetErr bndr rhs
= vcat [ptext (sLit "Bad `let' binding:"),
hang (ptext (sLit "Variable:"))
4 (ppr bndr <+> dcolon <+> ppr (varType bndr)),
hang (ptext (sLit "Rhs:"))
4 (ppr rhs)]
mkTyAppMsg :: Type -> Type -> MsgDoc
mkTyAppMsg ty arg_ty
= vcat [text "Illegal type application:",
hang (ptext (sLit "Exp type:"))
4 (ppr ty <+> dcolon <+> ppr (typeKind ty)),
hang (ptext (sLit "Arg type:"))
4 (ppr arg_ty <+> dcolon <+> ppr (typeKind arg_ty))]
mkRhsMsg :: Id -> SDoc -> Type -> MsgDoc
mkRhsMsg binder what ty
= vcat
[hsep [ptext (sLit "The type of this binder doesn't match the type of its") <+> what <> colon,
ppr binder],
hsep [ptext (sLit "Binder's type:"), ppr (idType binder)],
hsep [ptext (sLit "Rhs type:"), ppr ty]]
mkLetAppMsg :: CoreExpr -> MsgDoc
mkLetAppMsg e
= hang (ptext (sLit "This argument does not satisfy the let/app invariant:"))
2 (ppr e)
mkRhsPrimMsg :: Id -> CoreExpr -> MsgDoc
mkRhsPrimMsg binder _rhs
= vcat [hsep [ptext (sLit "The type of this binder is primitive:"),
ppr binder],
hsep [ptext (sLit "Binder's type:"), ppr (idType binder)]
]
mkStrictMsg :: Id -> MsgDoc
mkStrictMsg binder
= vcat [hsep [ptext (sLit "Recursive or top-level binder has strict demand info:"),
ppr binder],
hsep [ptext (sLit "Binder's demand info:"), ppr (idDemandInfo binder)]
]
mkNonTopExportedMsg :: Id -> MsgDoc
mkNonTopExportedMsg binder
= hsep [ptext (sLit "Non-top-level binder is marked as exported:"), ppr binder]
mkNonTopExternalNameMsg :: Id -> MsgDoc
mkNonTopExternalNameMsg binder
= hsep [ptext (sLit "Non-top-level binder has an external name:"), ppr binder]
mkKindErrMsg :: TyVar -> Type -> MsgDoc
mkKindErrMsg tyvar arg_ty
= vcat [ptext (sLit "Kinds don't match in type application:"),
hang (ptext (sLit "Type variable:"))
4 (ppr tyvar <+> dcolon <+> ppr (tyVarKind tyvar)),
hang (ptext (sLit "Arg type:"))
4 (ppr arg_ty <+> dcolon <+> ppr (typeKind arg_ty))]
{- Not needed now
mkArityMsg :: Id -> MsgDoc
mkArityMsg binder
= vcat [hsep [ptext (sLit "Demand type has"),
ppr (dmdTypeDepth dmd_ty),
ptext (sLit "arguments, rhs has"),
ppr (idArity binder),
ptext (sLit "arguments,"),
ppr binder],
hsep [ptext (sLit "Binder's strictness signature:"), ppr dmd_ty]
]
where (StrictSig dmd_ty) = idStrictness binder
-}
mkCastErr :: CoreExpr -> Coercion -> Type -> Type -> MsgDoc
mkCastErr expr co from_ty expr_ty
= vcat [ptext (sLit "From-type of Cast differs from type of enclosed expression"),
ptext (sLit "From-type:") <+> ppr from_ty,
ptext (sLit "Type of enclosed expr:") <+> ppr expr_ty,
ptext (sLit "Actual enclosed expr:") <+> ppr expr,
ptext (sLit "Coercion used in cast:") <+> ppr co
]
dupVars :: [[Var]] -> MsgDoc
dupVars vars
= hang (ptext (sLit "Duplicate variables brought into scope"))
2 (ppr vars)
dupExtVars :: [[Name]] -> MsgDoc
dupExtVars vars
= hang (ptext (sLit "Duplicate top-level variables with the same qualified name"))
2 (ppr vars)
{-
************************************************************************
* *
\subsection{Annotation Linting}
* *
************************************************************************
-}
-- | This checks whether a pass correctly looks through debug
-- annotations (@SourceNote@). This works a bit different from other
-- consistency checks: We check this by running the given task twice,
-- noting all differences between the results.
lintAnnots :: SDoc -> (ModGuts -> CoreM ModGuts) -> ModGuts -> CoreM ModGuts
lintAnnots pname pass guts = do
-- Run the pass as we normally would
dflags <- getDynFlags
when (gopt Opt_DoAnnotationLinting dflags) $
liftIO $ Err.showPass dflags "Annotation linting - first run"
nguts <- pass guts
-- If appropriate re-run it without debug annotations to make sure
-- that they made no difference.
when (gopt Opt_DoAnnotationLinting dflags) $ do
liftIO $ Err.showPass dflags "Annotation linting - second run"
nguts' <- withoutAnnots pass guts
-- Finally compare the resulting bindings
liftIO $ Err.showPass dflags "Annotation linting - comparison"
let binds = flattenBinds $ mg_binds nguts
binds' = flattenBinds $ mg_binds nguts'
(diffs,_) = diffBinds True (mkRnEnv2 emptyInScopeSet) binds binds'
when (not (null diffs)) $ CoreMonad.putMsg $ vcat
[ lint_banner "warning" pname
, text "Core changes with annotations:"
, withPprStyle defaultDumpStyle $ nest 2 $ vcat diffs
]
-- Return actual new guts
return nguts
-- | Run the given pass without annotations. This means that we both
-- remove the @Opt_Debug@ flag from the environment as well as all
-- annotations from incoming modules.
withoutAnnots :: (ModGuts -> CoreM ModGuts) -> ModGuts -> CoreM ModGuts
withoutAnnots pass guts = do
-- Remove debug flag from environment.
dflags <- getDynFlags
let removeFlag env = env{hsc_dflags = gopt_unset dflags Opt_Debug}
withoutFlag corem =
liftIO =<< runCoreM <$> fmap removeFlag getHscEnv <*> getRuleBase <*>
getUniqueSupplyM <*> getModule <*>
getPrintUnqualified <*> pure corem
-- Nuke existing ticks in module.
-- TODO: Ticks in unfoldings. Maybe change unfolding so it removes
-- them in absence of @Opt_Debug@?
let nukeTicks = stripTicksE (not . tickishIsCode)
nukeAnnotsBind :: CoreBind -> CoreBind
nukeAnnotsBind bind = case bind of
Rec bs -> Rec $ map (\(b,e) -> (b, nukeTicks e)) bs
NonRec b e -> NonRec b $ nukeTicks e
nukeAnnotsMod mg@ModGuts{mg_binds=binds}
= mg{mg_binds = map nukeAnnotsBind binds}
-- Perform pass with all changes applied
fmap fst $ withoutFlag $ pass (nukeAnnotsMod guts)
| gcampax/ghc | compiler/coreSyn/CoreLint.hs | bsd-3-clause | 70,867 | 50 | 25 | 20,579 | 16,656 | 8,443 | 8,213 | 1,105 | 17 |
module CircGen.TestCircs.CS
( cs
) where
import CircUtils.Circuit
import CircGen.Add.SimpleRipple
import CircGen.Misc
cs :: Circuit
cs = Circuit csLines csGates []
where csLines = LineInfo clines clines clines clines
csGates :: [Gate]
csGates = copies ++ adders ++ applyMux "ctrl" ylines yclines
where adders = combineLists (applySimpleRipple xlines ylines "z0" "c0") (applySimpleRipple xclines yclines "z1" "c1")
copies = combineLists (applyCopy xlines xclines) (applyCopy ylines yclines)
clines :: [String]
clines = ["ctrl"] ++ xlines ++ ["c0"] ++ ylines ++ ["z0"] ++ xclines ++ ["c1"] ++ yclines ++ ["z1"]
xlines = ["x0" ,"x1" ,"x2" ,"x3" ]
ylines = ["y0" ,"y1" ,"y2" ,"y3" ]
xclines = ["xc0","xc1","xc2","xc3"]
yclines = ["yc0","yc1","yc2","yc3"]
combineLists :: [a] -> [a] -> [a]
combineLists [] [] = []
combineLists x [] = x
combineLists [] y = y
combineLists (x:xs) (y:ys) = x : y : combineLists xs ys
| aparent/qacg | src/QACG/CircGen/TestCircs/CS.hs | bsd-3-clause | 934 | 0 | 13 | 165 | 380 | 212 | 168 | 23 | 1 |
{-#LANGUAGE OverloadedStrings#-}
{-
Min Zhang
3-29-2015
ListToFa
Turn tab separated tables to Fasta and can be loaded to UCSC blat mapping.
-}
module Main
where
import qualified Data.Text.Lazy as T
import qualified Data.Text.Lazy.IO as TextIO
import Data.List (foldl')
import System.Environment
import Control.Applicative
import qualified Data.Map as M
import Data.Monoid
import IO
import DataTypes
import MyText
main = do
[input, output] <- take 2 <$> getArgs
fa <- map listToFasta <$> textToList input
outputFasta output fa
textToList :: FilePath -> IO [[T.Text]]
textToList input = do
a <- map tab . T.lines <$> TextIO.readFile input
return a
listToFasta :: [T.Text] -> Fasta
listToFasta [a, b] = Fasta (T.concat [">", a]) b
| Min-/fourseq | src/utils/ListToFa.hs | bsd-3-clause | 758 | 0 | 10 | 140 | 223 | 124 | 99 | 22 | 1 |
module FizzBuzz where
-- | Compute Fizz-Buzz with guard
-- >>> fizzBuzz 1
-- "1"
-- >>> fizzBuzz 3
-- "Fizz"
-- >>> fizzBuzz 5
-- "Buzz"
-- >>> fizzBuzz 15
-- "FizzBuzz"
fizzBuzz n
| isFizz && isBuzz = "FizzBuzz"
| isBuzz = "Buzz"
| isFizz = "Fizz"
| otherwise = show n
where
isFizz = (mod n 3) == 0
isBuzz = (mod n 5) == 0
main = mapM_ (putStrLn . fizzBuzz) [1..100]
| temmings/FizzBuzz | Haskell/FizzBuzz.hs | bsd-3-clause | 388 | 0 | 9 | 96 | 119 | 64 | 55 | 9 | 1 |
module Main (main) where
import System.Giraffe.ApplicationTest (applicationTest)
import System.Giraffe.TrackerTest (trackerTest)
import Test.Framework (defaultMain)
main :: IO ()
main = defaultMain
[ applicationTest
, trackerTest
]
| ameingast/giraffe | test/TestMain.hs | bsd-3-clause | 296 | 0 | 6 | 89 | 65 | 39 | 26 | 8 | 1 |
module DB.Acid where
import Data.Acid
import Framework.Location
import Framework.Profile
import Framework.Auth
data Acid = Acid
{ authState :: AcidState AuthState
, profileState :: AcidState ProfileState
, locationState :: AcidState LocationState
}
| ojw/admin-and-dev | src/DB/Acid.hs | bsd-3-clause | 269 | 0 | 9 | 51 | 62 | 36 | 26 | 9 | 0 |
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE RankNTypes #-}
module Lecture.DT.April15 where
import Data.Proxy
import Data.Constraint
import GHC.TypeLits
data Expr a where
Add :: Expr Int -> Expr Int -> Expr Int
Mul :: Expr Int -> Expr Int -> Expr Int
And :: Expr Bool -> Expr Bool -> Expr Bool
Or :: Expr Bool -> Expr Bool -> Expr Bool
IntExpr :: Int -> Expr Int
BoolExpr :: Bool -> Expr Bool
IfElse :: Expr Bool -> Expr a -> Expr a -> Expr a
type family Bar (a :: *) (b :: *) where
Bar (Maybe a) b = Bar a b
Bar a Bool = Bar Int Bool
Bar Int Bool = Bool
type family Deducible (b :: Bool) :: Constraint where
Deducible 'True = ()
type family NotElem (x :: k) (xs :: [k]) :: Constraint where
NotElem x '[] = ()
NotElem x (x ': ys) = Deducible False
NotElem x (y ': ys) = NotElem x ys
-- | No runtime promotion forces us to pair our elements with their type-level
-- componenets.
data UniqueList (xs :: [k]) a where
Nil :: UniqueList '[] a
Cons :: NotElem a' xs =>
(a, Proxy a')
-> UniqueList xs a
-> UniqueList (a' ': xs) a
| athanclark/dt-haskell-intro | src/Lecture/DT/April15.hs | bsd-3-clause | 1,333 | 0 | 10 | 339 | 440 | 239 | 201 | 37 | 0 |
module Control.Monad.Incremental.Adapton (
module Control.Monad.Incremental
, module Control.Monad.Incremental.Display
, module Control.Monad.Incremental.Draw
, module Control.Monad.Incremental.Internal.Adapton.Algorithm
, module Control.Monad.Incremental.Internal.Adapton.Display
, module Control.Monad.Incremental.Internal.Adapton.Draw
, module Control.Monad.Incremental.Internal.Adapton.Layers
, module Control.Monad.Incremental.Internal.Adapton.Types
, module Control.Monad.Incremental.Internal.Adapton.Memo
) where
import Control.Monad.Incremental
import Control.Monad.Incremental.Display
import Control.Monad.Incremental.Draw
import Control.Monad.Incremental.Internal.Adapton.Algorithm ()
import Control.Monad.Incremental.Internal.Adapton.Display
import Control.Monad.Incremental.Internal.Adapton.Draw
import Control.Monad.Incremental.Internal.Adapton.Layers (Inner,Outer,proxyAdapton,IncParams(..))
import Control.Monad.Incremental.Internal.Adapton.Types (U,M,L,S,Adapton)
import Control.Monad.Incremental.Internal.Adapton.Memo | cornell-pl/HsAdapton | src/Control/Monad/Incremental/Adapton.hs | bsd-3-clause | 1,050 | 4 | 6 | 67 | 209 | 155 | 54 | 19 | 0 |
{-# LANGUAGE RecordWildCards #-}
-- | read/write ImpulseTracker file headers
module Codec.Tracker.IT.Header (
Header (..)
, getHeader
, putHeader
) where
import Control.Monad
import Data.Binary
import Data.Binary.Get
import Data.Binary.Put
-- | ImpulseTracker file header
data Header = Header { magicString :: Word32 -- ^ \"IMPM\" starting at v2.03
, songName :: [Word8] -- ^ 26 bytes
, hpad0 :: [Word8] -- ^ padding (2 bytes)
, songLength :: Word16 -- ^ number of entries in pattern order table
, numInstruments :: Word16 -- ^ number of instruments
, numSamples :: Word16 -- ^ number of samples
, numPatterns :: Word16 -- ^ number of patterns
, trackerVersion :: Word16
, formatVersion :: Word16
, flags :: Word16 -- ^ bit 0: stereo/mono
--
-- bit 1: no mixing occurs if volume
-- equals zero (deprecated)
--
-- bit 2: use samples/instruments
--
-- bit 3: linear/amiga slides
--
-- bit 4: use old effects
--
-- bits 5-15: undefined
, special :: Word16 -- ^ bit 0: there's message attached to the song
-- starting at `messageOffset`
--
-- bits 1-15: undefined
, globalVolume :: Word8 -- ^ global volume
, mixVolume :: Word8
, initialSpeed :: Word8 -- ^ initial speed
, initialTempo :: Word8 -- ^ initial tempo
, panSeparation :: Word8
, hpad1 :: Word8 -- ^ padding
, messageLength :: Word16
, messageOffset :: Word32
, hpad2 :: [Word8] -- ^ padding (4 bytes)
, channelPanning :: [Word8]
, channelVolume :: [Word8]
}
deriving (Show, Eq)
-- | Read a `Header` from the monad state.
getHeader :: Get Header
getHeader = label "IT.Header" $
Header <$> getWord32le
<*> replicateM 26 getWord8
<*> replicateM 2 getWord8
<*> getWord16le
<*> getWord16le
<*> getWord16le
<*> getWord16le
<*> getWord16le
<*> getWord16le
<*> getWord16le
<*> getWord16le
<*> getWord8
<*> getWord8
<*> getWord8
<*> getWord8
<*> getWord8
<*> getWord8
<*> getWord16le
<*> getWord32le
<*> replicateM 4 getWord8
<*> replicateM 64 getWord8
<*> replicateM 64 getWord8
-- | Write a `Header` to the buffer.
putHeader :: Header -> Put
putHeader Header{..} = do
putWord32le magicString
mapM_ putWord8 songName
mapM_ putWord8 hpad0
mapM_ putWord16le
[ songLength
, numInstruments
, numSamples
, numPatterns
, trackerVersion
, formatVersion
, flags
, special
]
mapM_ putWord8
[ globalVolume
, mixVolume
, initialSpeed
, initialTempo
, panSeparation
, hpad1
]
putWord16le messageLength
putWord32le messageOffset
mapM_ putWord8 hpad2
mapM_ putWord8 channelPanning
mapM_ putWord8 channelVolume
| riottracker/modfile | src/Codec/Tracker/IT/Header.hs | bsd-3-clause | 4,476 | 0 | 28 | 2,425 | 517 | 303 | 214 | 82 | 1 |
------------------------------------------------------------------------------
-- |
-- Module : Data.Datamining.Clustering.Gsom.Node.Lattice
-- Copyright : (c) 2009 Stephan Günther
-- License : BSD3
--
-- Maintainer : gnn.github@gmail.com
-- Stability : experimental
-- Portability : non-portable (requires STM)
--
-- The type @'Lattice'@ is the type of the network build by the GSOM
-- algorithm. This type and most of the functions dealing with it are defined
-- in this module.
------------------------------------------------------------------------------
module Data.Datamining.Clustering.Gsom.Lattice(
Lattice,
newCentered, newRandom,
bmu, grow, vent,
nodes,
putLattice, putWeights) where
------------------------------------------------------------------------------
-- Standard modules
------------------------------------------------------------------------------
import Control.Concurrent.STM
import Control.Monad
import Data.List
import Data.Map(Map(..))
import qualified Data.Map as Map
import Data.Maybe
import System.Random
------------------------------------------------------------------------------
-- Private modules
------------------------------------------------------------------------------
import Data.Datamining.Clustering.Gsom.Coordinates
import Data.Datamining.Clustering.Gsom.Input
import Data.Datamining.Clustering.Gsom.Node
------------------------------------------------------------------------------
-- The Lattice type
------------------------------------------------------------------------------
-- | The lattice type. Since global access to nodes is needed they're
-- stored in a 'Data.Map' indexed by their coordinates.
type Lattice = Map Coordinates (TVar Node)
------------------------------------------------------------------------------
-- Creation
------------------------------------------------------------------------------
-- | @'newRandom' g dimension@ creates a new minimal lattice where weights are
-- randomly initialized with values between 0 and 1 using the random number
-- generator @g@ and with the weight vectors having the specified @dimension@.
newRandom :: RandomGen g => g -> Int -> IO Lattice
newRandom g dimension = let
gs g = let (g1, g2) = split g in g1 : gs g2
weights = [randomRs (0, 1) g' | g' <- gs g]
in new weights dimension
-- | @'newNormalized' dimension@ creates a new minimal lattice where weights
-- are initialized with all components having the value @0.5@ the and with
-- the weight vectors having length @dimension@.
newCentered :: Int -> IO Lattice
newCentered = new (cycle [cycle [0.5]])
------------------------------------------------------------------------------
-- Reading
------------------------------------------------------------------------------
-- | Returns the nodes stored in lattice.
nodes :: Lattice -> STM Nodes
nodes = mapM readTVar . Map.elems
-- | @'bmu' input lattice@ returns the best matching unit i.e. the node with
-- minimal distance to the given input vector.
bmu :: Input -> Lattice -> STM Node
bmu i l = liftM (filter isNode) (nodes l) >>= (\l' ->
let ws = readTVar.weights in case l' of
[] -> error "error in bmu: empty lattices shouldn't occur."
(x:xs) ->
foldM (\n1 n2 -> do
w1 <- ws n1
boundary <- boundaryNode n1
w2 <- ws n2
let {d1 = distance i w1; d2 = distance i w2}
return $! if d1 < d2 || (d1 == d2 && boundary)
then n1
else n2)
x xs
)
------------------------------------------------------------------------------
-- Manipulating
------------------------------------------------------------------------------
-- | @'grow' lattice node@ will create new neighbours for every Leaf
-- neighbour of the given @node@ and add the created nodes to @lattice@.
-- It will return the list of spawned nodes and the new lattice containing
-- every node created in the process of spawning.
grow :: Lattice -> Node -> STM (Lattice, Nodes)
grow lattice node = do
holes <- liftM (findIndices isLeaf) (unwrappedNeighbours node)
newLattice <- foldM (`spawn` node) lattice holes
spawned <- unwrappedNeighbours node >>= (\ns -> return $! map (ns !!) holes)
return $! (newLattice, spawned)
-- | Used to spawn only a particular node. Returns the new lattice
-- containing the spawned node.
-- @'spawn' lattice parent direction@ will create a new node as a
-- neighbour of @parent@ at index @direction@, making @parent@ the neighbour
-- of the new node at index @invert direction@ with the new node having an.
spawn :: Lattice -> Node -> Int -> STM Lattice
spawn _ Leaf _ = error "in spawn: spawn called with a Leaf parent."
spawn lattice parent direction = let
spawnCoordinates = neighbour (location parent) direction
nCs = neighbourCoordinates spawnCoordinates in do
-- first we have to check whether there are already some TVars existing
-- at the locations of the neighbours of the new node and create those that
-- don't exist yet.
newLattice <- foldM (\m k -> if not $ Map.member k m
then newTVar Leaf >>= (\v -> return $! Map.insert k v m)
else return $! m) lattice nCs
-- after creating all the necessary neighbours we can create the new
-- node with it's neighbours and calculate it's new weight vector
let ns = specificElements newLattice nCs
result <- node (neighbour (location parent) direction) [] ns
writeTVar (fromJust $ Map.lookup spawnCoordinates lattice) result
newWeight result direction
return $! newLattice
-- | @'vent' lattice node growthThreshold@ will check the accumulated error
-- of the @node@ against the given @growthThreshold@ and will do nothing if
-- the errror value is below the growth threshhold. Otherwise it will either
-- spawn new nodes or it will propagate the accumulated error value to it's
-- neighbours, depending on whether the node is a boundary node or not.
-- If new nodes are spawned they will be added to @lattice@ and returned as
-- the second component of the resulting pair.
vent :: Lattice -> Node -> Double -> STM (Lattice, [Node])
vent _ Leaf _ = error "in vent: vent called with a Leaf as argument."
vent lattice node gt = do
qE <- readTVar $ quantizationError node
if qE > gt then do
ns <- unwrappedNeighbours node
let leaves = findIndices isLeaf ns
let noleaves = null leaves
r@(l', affected) <- if noleaves
then return $! (lattice, ns)
else grow lattice node
propagate node affected
return $! if noleaves then (lattice, []) else r
else return $! (lattice, [])
------------------------------------------------------------------------------
-- Internal
------------------------------------------------------------------------------
-- | Generates a new @'Lattice'@ given the supply of @weights@ with each node
-- having a weight vector of the given @dimension@.
new :: Inputs -> Int -> IO Lattice
new ws dimension = let
origin = (0,0)
nodeCoordinates = origin : neighbourCoordinates origin
leafCoordinates =
nub (concatMap neighbourCoordinates nodeCoordinates) \\ nodeCoordinates
in atomically $ do
-- create a map with the TVars for the initial nodes
lattice <- foldM (\m k -> do
v <- newTVar Leaf
return $! Map.insert k v m) Map.empty (nodeCoordinates ++ leafCoordinates)
-- now that we have all the nodes we need to create the actual non leaf
-- nodes present in the starting map and write them into the corresoonding
-- TVars.
let nodeTVars = specificElements lattice nodeCoordinates
nodes <- sequence $ zipWith3 node
nodeCoordinates
(map (take dimension) ws)
(map (specificElements lattice . neighbourCoordinates) nodeCoordinates)
zipWithM_ writeTVar nodeTVars nodes
return $! lattice
specificElements :: Ord k => Map k a -> [k] -> [a]
specificElements m = map (fromJust . flip Map.lookup m)
------------------------------------------------------------------------------
-- Output
------------------------------------------------------------------------------
putLattice :: Lattice -> IO String
putLattice lattice = do
ns <- atomically (nodes lattice) >>= liftM concat . mapM putNode
return $ unlines ("Lattice: " : (" Count: " ++ show (Map.size lattice)) :
map (" " ++ ) ns)
putWeights :: Lattice -> IO String
putWeights lattice = do
ws <- atomically $ nodes lattice >>=
filterM (return.isNode) >>=
mapM (readTVar . weights)
return $!
unlines $
map (unwords . map show)
ws
| gnn/hsgsom | Data/Datamining/Clustering/Gsom/lattice.hs | bsd-3-clause | 8,460 | 0 | 23 | 1,471 | 1,599 | 845 | 754 | 106 | 4 |
{- |
Module : Data.Set.BKTree.Internal
Copyright : (c) Josef Svenningsson 2010
License : BSD-style
Maintainer : josef.svenningsson@gmail.com
Stability : Alpha quality. Interface may change without notice.
Portability : portable
This module exposes the internal representation of Burkhard-Keller trees.
-}
module Data.Set.BKTree.Internal where
import Data.IntMap
-- | The type of Burkhard-Keller trees.
data BKTree a = Node a !Int (IntMap (BKTree a))
| Empty
#ifdef DEBUG
deriving Show
#endif
| josefs/bktrees | Data/Set/BKTree/Internal.hs | bsd-3-clause | 561 | 0 | 10 | 137 | 50 | 31 | 19 | 6 | 0 |
{-
(c) The University of Glasgow 2006
(c) The GRASP/AQUA Project, Glasgow University, 1992-1998
GHC.Hs.ImpExp: Abstract syntax: imports, exports, interfaces
-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UndecidableInstances #-} -- Note [Pass sensitive types]
-- in module GHC.Hs.PlaceHolder
module GHC.Hs.ImpExp where
import GhcPrelude
import Module ( ModuleName )
import GHC.Hs.Doc ( HsDocString )
import OccName ( HasOccName(..), isTcOcc, isSymOcc )
import BasicTypes ( SourceText(..), StringLiteral(..), pprWithSourceText )
import FieldLabel ( FieldLbl(..) )
import Outputable
import FastString
import SrcLoc
import GHC.Hs.Extension
import Data.Data
import Data.Maybe
{-
************************************************************************
* *
\subsection{Import and export declaration lists}
* *
************************************************************************
One per \tr{import} declaration in a module.
-}
-- | Located Import Declaration
type LImportDecl pass = Located (ImportDecl pass)
-- ^ When in a list this may have
--
-- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnSemi'
-- For details on above see note [Api annotations] in ApiAnnotation
-- | If/how an import is 'qualified'.
data ImportDeclQualifiedStyle
= QualifiedPre -- ^ 'qualified' appears in prepositive position.
| QualifiedPost -- ^ 'qualified' appears in postpositive position.
| NotQualified -- ^ Not qualified.
deriving (Eq, Data)
-- | Given two possible located 'qualified' tokens, compute a style
-- (in a conforming Haskell program only one of the two can be not
-- 'Nothing'). This is called from 'Parser.y'.
importDeclQualifiedStyle :: Maybe (Located a)
-> Maybe (Located a)
-> ImportDeclQualifiedStyle
importDeclQualifiedStyle mPre mPost =
if isJust mPre then QualifiedPre
else if isJust mPost then QualifiedPost else NotQualified
-- | Convenience function to answer the question if an import decl. is
-- qualified.
isImportDeclQualified :: ImportDeclQualifiedStyle -> Bool
isImportDeclQualified NotQualified = False
isImportDeclQualified _ = True
-- | Import Declaration
--
-- A single Haskell @import@ declaration.
data ImportDecl pass
= ImportDecl {
ideclExt :: XCImportDecl pass,
ideclSourceSrc :: SourceText,
-- Note [Pragma source text] in BasicTypes
ideclName :: Located ModuleName, -- ^ Module name.
ideclPkgQual :: Maybe StringLiteral, -- ^ Package qualifier.
ideclSource :: Bool, -- ^ True <=> {-\# SOURCE \#-} import
ideclSafe :: Bool, -- ^ True => safe import
ideclQualified :: ImportDeclQualifiedStyle, -- ^ If/how the import is qualified.
ideclImplicit :: Bool, -- ^ True => implicit import (of Prelude)
ideclAs :: Maybe (Located ModuleName), -- ^ as Module
ideclHiding :: Maybe (Bool, Located [LIE pass])
-- ^ (True => hiding, names)
}
| XImportDecl (XXImportDecl pass)
-- ^
-- 'ApiAnnotation.AnnKeywordId's
--
-- - 'ApiAnnotation.AnnImport'
--
-- - 'ApiAnnotation.AnnOpen', 'ApiAnnotation.AnnClose' for ideclSource
--
-- - 'ApiAnnotation.AnnSafe','ApiAnnotation.AnnQualified',
-- 'ApiAnnotation.AnnPackageName','ApiAnnotation.AnnAs',
-- 'ApiAnnotation.AnnVal'
--
-- - 'ApiAnnotation.AnnHiding','ApiAnnotation.AnnOpen',
-- 'ApiAnnotation.AnnClose' attached
-- to location in ideclHiding
-- For details on above see note [Api annotations] in ApiAnnotation
type instance XCImportDecl (GhcPass _) = NoExtField
type instance XXImportDecl (GhcPass _) = NoExtCon
simpleImportDecl :: ModuleName -> ImportDecl (GhcPass p)
simpleImportDecl mn = ImportDecl {
ideclExt = noExtField,
ideclSourceSrc = NoSourceText,
ideclName = noLoc mn,
ideclPkgQual = Nothing,
ideclSource = False,
ideclSafe = False,
ideclImplicit = False,
ideclQualified = NotQualified,
ideclAs = Nothing,
ideclHiding = Nothing
}
instance OutputableBndrId p
=> Outputable (ImportDecl (GhcPass p)) where
ppr (ImportDecl { ideclSourceSrc = mSrcText, ideclName = mod'
, ideclPkgQual = pkg
, ideclSource = from, ideclSafe = safe
, ideclQualified = qual, ideclImplicit = implicit
, ideclAs = as, ideclHiding = spec })
= hang (hsep [text "import", ppr_imp from, pp_implicit implicit, pp_safe safe,
pp_qual qual False, pp_pkg pkg, ppr mod', pp_qual qual True, pp_as as])
4 (pp_spec spec)
where
pp_implicit False = empty
pp_implicit True = ptext (sLit ("(implicit)"))
pp_pkg Nothing = empty
pp_pkg (Just (StringLiteral st p))
= pprWithSourceText st (doubleQuotes (ftext p))
pp_qual QualifiedPre False = text "qualified" -- Prepositive qualifier/prepositive position.
pp_qual QualifiedPost True = text "qualified" -- Postpositive qualifier/postpositive position.
pp_qual QualifiedPre True = empty -- Prepositive qualifier/postpositive position.
pp_qual QualifiedPost False = empty -- Postpositive qualifier/prepositive position.
pp_qual NotQualified _ = empty
pp_safe False = empty
pp_safe True = text "safe"
pp_as Nothing = empty
pp_as (Just a) = text "as" <+> ppr a
ppr_imp True = case mSrcText of
NoSourceText -> text "{-# SOURCE #-}"
SourceText src -> text src <+> text "#-}"
ppr_imp False = empty
pp_spec Nothing = empty
pp_spec (Just (False, (L _ ies))) = ppr_ies ies
pp_spec (Just (True, (L _ ies))) = text "hiding" <+> ppr_ies ies
ppr_ies [] = text "()"
ppr_ies ies = char '(' <+> interpp'SP ies <+> char ')'
ppr (XImportDecl x) = ppr x
{-
************************************************************************
* *
\subsection{Imported and exported entities}
* *
************************************************************************
-}
-- | A name in an import or export specification which may have adornments. Used
-- primarily for accurate pretty printing of ParsedSource, and API Annotation
-- placement.
data IEWrappedName name
= IEName (Located name) -- ^ no extra
| IEPattern (Located name) -- ^ pattern X
| IEType (Located name) -- ^ type (:+:)
deriving (Eq,Data)
-- | Located name with possible adornment
-- - 'ApiAnnotation.AnnKeywordId's : 'ApiAnnotation.AnnType',
-- 'ApiAnnotation.AnnPattern'
type LIEWrappedName name = Located (IEWrappedName name)
-- For details on above see note [Api annotations] in ApiAnnotation
-- | Located Import or Export
type LIE pass = Located (IE pass)
-- ^ When in a list this may have
--
-- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnComma'
-- For details on above see note [Api annotations] in ApiAnnotation
-- | Imported or exported entity.
data IE pass
= IEVar (XIEVar pass) (LIEWrappedName (IdP pass))
-- ^ Imported or Exported Variable
| IEThingAbs (XIEThingAbs pass) (LIEWrappedName (IdP pass))
-- ^ Imported or exported Thing with Absent list
--
-- The thing is a Class/Type (can't tell)
-- - 'ApiAnnotation.AnnKeywordId's : 'ApiAnnotation.AnnPattern',
-- 'ApiAnnotation.AnnType','ApiAnnotation.AnnVal'
-- For details on above see note [Api annotations] in ApiAnnotation
-- See Note [Located RdrNames] in GHC.Hs.Expr
| IEThingAll (XIEThingAll pass) (LIEWrappedName (IdP pass))
-- ^ Imported or exported Thing with All imported or exported
--
-- The thing is a Class/Type and the All refers to methods/constructors
--
-- - 'ApiAnnotation.AnnKeywordId's : 'ApiAnnotation.AnnOpen',
-- 'ApiAnnotation.AnnDotdot','ApiAnnotation.AnnClose',
-- 'ApiAnnotation.AnnType'
-- For details on above see note [Api annotations] in ApiAnnotation
-- See Note [Located RdrNames] in GHC.Hs.Expr
| IEThingWith (XIEThingWith pass)
(LIEWrappedName (IdP pass))
IEWildcard
[LIEWrappedName (IdP pass)]
[Located (FieldLbl (IdP pass))]
-- ^ Imported or exported Thing With given imported or exported
--
-- The thing is a Class/Type and the imported or exported things are
-- methods/constructors and record fields; see Note [IEThingWith]
-- - 'ApiAnnotation.AnnKeywordId's : 'ApiAnnotation.AnnOpen',
-- 'ApiAnnotation.AnnClose',
-- 'ApiAnnotation.AnnComma',
-- 'ApiAnnotation.AnnType'
-- For details on above see note [Api annotations] in ApiAnnotation
| IEModuleContents (XIEModuleContents pass) (Located ModuleName)
-- ^ Imported or exported module contents
--
-- (Export Only)
--
-- - 'ApiAnnotation.AnnKeywordId's : 'ApiAnnotation.AnnModule'
-- For details on above see note [Api annotations] in ApiAnnotation
| IEGroup (XIEGroup pass) Int HsDocString -- ^ Doc section heading
| IEDoc (XIEDoc pass) HsDocString -- ^ Some documentation
| IEDocNamed (XIEDocNamed pass) String -- ^ Reference to named doc
| XIE (XXIE pass)
type instance XIEVar (GhcPass _) = NoExtField
type instance XIEThingAbs (GhcPass _) = NoExtField
type instance XIEThingAll (GhcPass _) = NoExtField
type instance XIEThingWith (GhcPass _) = NoExtField
type instance XIEModuleContents (GhcPass _) = NoExtField
type instance XIEGroup (GhcPass _) = NoExtField
type instance XIEDoc (GhcPass _) = NoExtField
type instance XIEDocNamed (GhcPass _) = NoExtField
type instance XXIE (GhcPass _) = NoExtCon
-- | Imported or Exported Wildcard
data IEWildcard = NoIEWildcard | IEWildcard Int deriving (Eq, Data)
{-
Note [IEThingWith]
~~~~~~~~~~~~~~~~~~
A definition like
module M ( T(MkT, x) ) where
data T = MkT { x :: Int }
gives rise to
IEThingWith T [MkT] [FieldLabel "x" False x)] (without DuplicateRecordFields)
IEThingWith T [MkT] [FieldLabel "x" True $sel:x:MkT)] (with DuplicateRecordFields)
See Note [Representing fields in AvailInfo] in Avail for more details.
-}
ieName :: IE (GhcPass p) -> IdP (GhcPass p)
ieName (IEVar _ (L _ n)) = ieWrappedName n
ieName (IEThingAbs _ (L _ n)) = ieWrappedName n
ieName (IEThingWith _ (L _ n) _ _ _) = ieWrappedName n
ieName (IEThingAll _ (L _ n)) = ieWrappedName n
ieName _ = panic "ieName failed pattern match!"
ieNames :: IE (GhcPass p) -> [IdP (GhcPass p)]
ieNames (IEVar _ (L _ n) ) = [ieWrappedName n]
ieNames (IEThingAbs _ (L _ n) ) = [ieWrappedName n]
ieNames (IEThingAll _ (L _ n) ) = [ieWrappedName n]
ieNames (IEThingWith _ (L _ n) _ ns _) = ieWrappedName n
: map (ieWrappedName . unLoc) ns
ieNames (IEModuleContents {}) = []
ieNames (IEGroup {}) = []
ieNames (IEDoc {}) = []
ieNames (IEDocNamed {}) = []
ieNames (XIE nec) = noExtCon nec
ieWrappedName :: IEWrappedName name -> name
ieWrappedName (IEName (L _ n)) = n
ieWrappedName (IEPattern (L _ n)) = n
ieWrappedName (IEType (L _ n)) = n
lieWrappedName :: LIEWrappedName name -> name
lieWrappedName (L _ n) = ieWrappedName n
ieLWrappedName :: LIEWrappedName name -> Located name
ieLWrappedName (L l n) = L l (ieWrappedName n)
replaceWrappedName :: IEWrappedName name1 -> name2 -> IEWrappedName name2
replaceWrappedName (IEName (L l _)) n = IEName (L l n)
replaceWrappedName (IEPattern (L l _)) n = IEPattern (L l n)
replaceWrappedName (IEType (L l _)) n = IEType (L l n)
replaceLWrappedName :: LIEWrappedName name1 -> name2 -> LIEWrappedName name2
replaceLWrappedName (L l n) n' = L l (replaceWrappedName n n')
instance OutputableBndrId p => Outputable (IE (GhcPass p)) where
ppr (IEVar _ var) = ppr (unLoc var)
ppr (IEThingAbs _ thing) = ppr (unLoc thing)
ppr (IEThingAll _ thing) = hcat [ppr (unLoc thing), text "(..)"]
ppr (IEThingWith _ thing wc withs flds)
= ppr (unLoc thing) <> parens (fsep (punctuate comma
(ppWiths ++
map (ppr . flLabel . unLoc) flds)))
where
ppWiths =
case wc of
NoIEWildcard ->
map (ppr . unLoc) withs
IEWildcard pos ->
let (bs, as) = splitAt pos (map (ppr . unLoc) withs)
in bs ++ [text ".."] ++ as
ppr (IEModuleContents _ mod')
= text "module" <+> ppr mod'
ppr (IEGroup _ n _) = text ("<IEGroup: " ++ show n ++ ">")
ppr (IEDoc _ doc) = ppr doc
ppr (IEDocNamed _ string) = text ("<IEDocNamed: " ++ string ++ ">")
ppr (XIE x) = ppr x
instance (HasOccName name) => HasOccName (IEWrappedName name) where
occName w = occName (ieWrappedName w)
instance (OutputableBndr name) => OutputableBndr (IEWrappedName name) where
pprBndr bs w = pprBndr bs (ieWrappedName w)
pprPrefixOcc w = pprPrefixOcc (ieWrappedName w)
pprInfixOcc w = pprInfixOcc (ieWrappedName w)
instance (OutputableBndr name) => Outputable (IEWrappedName name) where
ppr (IEName n) = pprPrefixOcc (unLoc n)
ppr (IEPattern n) = text "pattern" <+> pprPrefixOcc (unLoc n)
ppr (IEType n) = text "type" <+> pprPrefixOcc (unLoc n)
pprImpExp :: (HasOccName name, OutputableBndr name) => name -> SDoc
pprImpExp name = type_pref <+> pprPrefixOcc name
where
occ = occName name
type_pref | isTcOcc occ && isSymOcc occ = text "type"
| otherwise = empty
| sdiehl/ghc | compiler/GHC/Hs/ImpExp.hs | bsd-3-clause | 14,775 | 0 | 19 | 4,322 | 3,022 | 1,612 | 1,410 | 193 | 3 |
{-|
Module : ZFS
Description : ZFS API bindings
Copyright : (c) Ian Duncan
License : BSD3
Maintainer : ian@iankduncan.com
Stability : experimental
Portability : Only tested on Linux x86_64
Here is a longer description of this module, containing some
commentary with @some markup@.
-}
{-# LANGUAGE OverloadedStrings #-}
module ZFS where
import Control.Monad
import Data.Bits ((.|.))
import qualified Data.ByteString.Char8 as C
import qualified Data.List as L
import Data.Monoid ((<>))
import Data.Maybe (fromMaybe)
import Data.Int
import qualified Data.Text as T
import qualified Data.Text.Foreign as T
import Data.Word
import qualified Foreign.C as FC
import Foreign.Ptr (FunPtr, Ptr, nullPtr, freeHaskellFunPtr)
import Data.IORef (newIORef, readIORef, writeIORef)
import System.Posix.Types (Fd(..))
import qualified ZFS.Primitive as Z
import ZFS.Primitive
import qualified ZFS.Primitive.FS as Z
import qualified ZFS.Primitive.NVPair as Z
-- * Library initialization & finalization
initializeZFS = Z.libzfs_init
finalizeZFS = Z.libzfs_fini
class Dataset a where
dataset :: a -> Ptr ()
instance Dataset ZFSHandle where
-- | Get the underlying library handle from pool or filesystem handles
class GetHandle h where
getHandle :: h -> IO Z.LibZFSHandle
instance GetHandle Z.ZPoolHandle where
getHandle = Z.zpool_get_handle
instance GetHandle Z.ZFSHandle where
getHandle = Z.zfs_get_handle
-- | Toggle error printing on or off.
printOnError = Z.libzfs_print_on_error
-- | Add message to zfs log
logHistory h str = C.useAsCString str $ Z.zpool_log_history h
errno = Z.libzfs_errno
errorAction = Z.libzfs_error_action >=> C.packCString
errorDescription = Z.libzfs_error_description >=> C.packCString
initializeMountTable = Z.libzfs_mnttab_init
finalizeMountTable = Z.libzfs_mnttab_fini
cacheMountTable = Z.libzfs_mnttab_cache
newtype MountTableEntry = MountTableEntry { fromMountTableEntry :: C.ByteString }
findMountTable h e t = C.useAsCString (fromMountTableEntry e) $ \str -> Z.libzfs_mnttab_find h str t
-- addMountToMountTable = Z.libzfsMnttabAdd
-- removeMountFromMountTable = Z.libzfsMnttabRemove
-- * Basic handle functions
newtype PoolName = PoolName { fromPoolName :: C.ByteString }
deriving (Show, Eq, Ord)
data PoolOpenOptions = RefuseFaultedPools | AllowFaultedPools
-- | Open a handle to the given pool
openPool h (PoolName p) opt = do
zh@(Z.ZPoolHandle h) <- C.useAsCString p $ \str -> case opt of
RefuseFaultedPools -> Z.zpool_open h str
AllowFaultedPools -> Z.zpool_open_canfail h str
return $! if h == nullPtr
then Nothing
else Just zh
closePool = Z.zpool_close
getPoolName = Z.zpool_get_name >=> (fmap PoolName . C.packCString)
getPoolState = Z.zpool_get_state
freeAllPoolHandles = Z.zpool_free_handles
iterateOverPools :: Z.LibZFSHandle -> (Z.ZPoolHandle -> a -> IO (Either b a)) -> a -> IO (Either b a)
iterateOverPools h f x = do
sp <- newIORef x
intermediate <- newIORef $ Right x
fp <- Z.wrapZPoolIterator $ \h _ -> do
val <- readIORef sp
r <- f h val
writeIORef intermediate r
case r of
Left _ -> return 1
Right r -> do
writeIORef sp r
return 0
void $ Z.zpool_iter h fp nullPtr
freeHaskellFunPtr fp
readIORef intermediate
-- * Functions to create and destroy pools
createPool h (PoolName p) nvRoot props fsprops = C.useAsCString p $ \str -> Z.zpool_create h str nvRoot props fsprops
-- | Destroy the given pool. It is up to the caller to ensure that there are no
-- datasets left in the pool.
destroyPool h log = C.useAsCString log $ Z.zpool_destroy h
-- * Functions to manipulate pool and vdev state
newtype Path = Path { fromPath :: C.ByteString }
-- | Scan the pool.
scanPool = Z.zpool_scan
-- | Clear the errors for the pool, or the particular device if specified.
clearPool h (Path p) rewind = C.useAsCString p $ \str -> Z.zpool_clear h str rewind
-- | Change the GUID for a pool.
reguidPool = Z.zpool_reguid
-- | Reopen the pool.
reopenPool = Z.zpool_reopen
-- | Bring the specified vdev online.
bringVirtualDeviceOnline :: Z.ZPoolHandle -> Path -> [Z.OnlineFlag] -> IO (Z.ZFSError, Z.VirtualDeviceState)
bringVirtualDeviceOnline h (Path p) fs = C.useAsCString p $ \str ->
Z.zpool_vdev_online h str $ mask fs
data OfflineFlag = TemporarilyOffline | PermanentlyOffline
deriving (Show, Eq)
-- | Take the specified vdev offline.
takeVirtualDeviceOffline h (Path p) b = C.useAsCString p $ \str -> Z.zpool_vdev_offline h str offlineMode
where offlineMode = case b of
TemporarilyOffline -> True
PermanentlyOffline -> False
newtype NewDisk = NewDisk { fromNewDisk :: Path }
newtype OldDisk = OldDisk { fromOldDisk :: Path }
newtype Cookie = Cookie { fromCookie :: FC.CInt }
-- | Attach new disk (fully described by nvroot) to old disk.
-- If replacing is specified, the new disk will replace the old one.
attachVirtualDevice h (OldDisk (Path oldDisk)) (NewDisk (Path newDisk)) nvroot mc = C.useAsCString oldDisk $ \oldStr ->
C.useAsCString newDisk $ \newStr -> Z.zpool_vdev_attach h oldStr newStr nvroot $ fromCookie $ fromMaybe (Cookie 0) mc
-- TODO splitVirtualDevice
newtype GUID = GUID { fromGUID :: FC.CULong }
-- | Mark the given vdev as faulted.
faultVirtualDevice h (GUID g) aux = Z.zpool_vdev_fault h g aux
-- | Mark the given vdev as degraded.
degradeVirtualDevice h (GUID g) aux = Z.zpool_vdev_degrade h g aux
-- | Similar to `clearPool`, but takes a GUID (needs better description once I understand this)
clearVirtualDevice h (GUID g) = Z.zpool_vdev_clear h g
data SpareFlag = SpareAvailable | SpareUnavailable
deriving (Show, Eq)
data L2CacheFlag = L2CacheYes | L2CacheNo
deriving (Show, Eq)
data LogFlag = LogYes | LogNo
deriving (Show, Eq)
findVirtualDevice h (Path p) = C.useAsCString p $ \str -> do
(nvl, sf, l2c, l) <- Z.zpool_find_vdev h str
return ( nvl
, if sf then SpareAvailable else SpareUnavailable
, if l2c then L2CacheYes else L2CacheNo
, if l then LogYes else LogNo
)
findVirtualDeviceByPhysicalPath h (Path p) = C.useAsCString p $ \str -> do
(nvl, sf, l2c, l) <- Z.zpool_find_vdev_by_physpath h str
return ( nvl
, if sf then SpareAvailable else SpareUnavailable
, if l2c then L2CacheYes else L2CacheNo
, if l then LogYes else LogNo
)
-- | Wait timeout miliseconds for a newly created device to be available
-- from the given path. There is a small window when a /dev/ device
-- will exist andd the udev link will not, so we must wait for the
-- symlink. Depending on the udev rules this may take a few seconds.
waitForDiskLabel (Path p) t = C.useAsCString p $ \str -> Z.zpool_label_disk_wait str t
-- | Label an individual disk. The name provided is the short name,
-- stripped of any leading /dev path.
labelDisk lh ph (Path p) = C.useAsCString p $ \str -> Z.zpool_label_disk lh ph str
-- * Functions to manage pool properties
-- zpool_get_prop
-- zpool_get_prop_literal
-- zpool_get_prop_int
-- zpool_prop_to_name
-- zpool_prop_values
-- * Pool health statistics
-- zpool_get_status
-- zpool_import_status
-- zpool_dump_ddt
-- * Statistics and configuration functions
-- | Retrieve the configuration for the given pool. The configuration is a nvlist
-- describing the vdevs, as well as the statistics associated with each one.
-- getConfig :: ZPoolHandle -> Maybe NVList -> IO NVList
-- | Retrieves a list of enabled features and their refcounts and caches it in
-- the pool handle.
getFeatures = Z.zpool_get_features
type PoolIsMissing = Bool
-- | Refresh the vdev statistics associated with the given pool. This is used in
-- iostat to show configuration changes and determine the delta from the last
-- time the function was called. This function can fail, in case the pool has
-- been destroyed.
refreshStats :: Z.ZPoolHandle -> IO (Z.ZFSError, PoolIsMissing)
refreshStats = Z.zpool_refresh_stats
-- getErrorLog ::
-- * Import and export functions
type SoftForce = Bool
-- | Exports the pool from the system. The caller must ensure that there are no
-- mounted datasets in the pool.
exportZPool :: Z.ZPoolHandle -> SoftForce -> C.ByteString -> IO Z.ZFSError
exportZPool h f msg = C.useAsCString msg $ Z.zpool_export h f
forceExportZPool h msg = C.useAsCString msg $ Z.zpool_export_force h
type Config = Z.NVList
-- | Applications should use importZPoolWithProperties to import a pool with
-- new properties value to be set.
importZPool :: Z.LibZFSHandle -> Config -> Maybe PoolName -> Maybe Path -> IO Z.ZFSError
importZPool h c mn mp = n $ \nstr -> p $ \pstr -> Z.zpool_import h c nstr pstr
where
n = maybe ($ nullPtr) C.useAsCString $ fmap fromPoolName mn
p = maybe ($ nullPtr) C.useAsCString $ fmap fromPath mp
-- | Import the given pool using the known configuration and a list of
-- properties to be set. The configuration should have come from
-- importZPoolWithProperties. The 'newname' parameters control whether the pool
-- is imported with a different name.
-- importZPoolWithProperties :: LibZFSHandle -> NVList -> Maybe PoolTime -> NVList -> ????
printUnsupportedFeatures = Z.zpool_print_unsup_feat
-- * Search for pools to import
-- searchForZPools
-- * Miscellaneous pool functions
-- * Basic handle manipulations. These functions do not create or destroy the
-- underlying datasets, only the references to them.
-- | Opens the given snapshot, filesystem, or volume. The 'types'
-- argument is a mask of acceptable types. The function will print an
-- appropriate error message and return Nothing if it can't be opened.
openZFSHandle :: Z.LibZFSHandle -> Path -> [Z.ZFSType] -> IO (Maybe Z.ZFSHandle)
openZFSHandle h (Path p) ts = do
zh@(Z.ZFSHandle h') <- C.useAsCString p $ \str -> Z.zfs_open h str $ mask ts
return $! if h' == nullPtr
then Nothing
else Just zh
duplicateZFSHandle = Z.zfs_handle_dup
closeZFSHandle = Z.zfs_close
getZFSHandleType = error "TODO"
getName = Z.zfs_get_name >=> C.packCString
getPoolHandle = Z.zfs_get_pool_handle
-- * Property management functions. Some functions are shared with the kernel,
-- and are found in sys/fs/zfs.h
-- ** zfs dataset property management
-- ** zpool property management
-- ** Functions shared by zfs and zpool property management
-- * Functions to create and destroy datasets
-- * Miscellaneous functions
zfsTypeToName = Z.zfs_type_to_name >=> C.packCString
refreshProperties = Z.zfs_refresh_properties
nameValid str t = C.useAsCString str $ \p -> Z.zfs_name_valid p t
-- | Given a name, determine whether or not it's a valid path
-- (starts with '/' or "./"). If so, walk the mnttab trying
-- to match the device number. If not, treat the path as an
-- fs/vol/snap name.
pathToZFSHandle h (Path p) t = do
zh@(Z.ZFSHandle inner) <- C.useAsCString p $ \str -> Z.zfs_path_to_zhandle h str t
return $! if inner == nullPtr
then Nothing
else Just zh
-- | Finds whether the dataset of the given type(s) exists.
datasetExists h (Path p) t = do
-- TODO support type unions?
C.useAsCString p $ \str -> Z.zfs_dataset_exists h str t
spaVersion = Z.zfs_spa_version
-- | Append partition suffix to an otherwise fully qualified device path.
-- This is used to generate the name the full path as its stored in
-- ZPOOL_CONFIG_PATH for whole disk devices. On success the new length
-- of 'path' will be returned on error a negative value is returned.
appendPartition (Path p) = do
-- make room in final copy for appending bits
C.useAsCString (p <> "\0\0\0\0\0\0") $ \str -> do
len <- Z.zfs_append_partition str $ fromIntegral (C.length p + 6)
let l = fromIntegral l
if l < 0
then return Nothing
else fmap Just $ C.packCStringLen (str, fromIntegral len)
-- | Given a shorthand device name check if a file by that name exists in any
-- of the 'zpool_default_import_path' or ZPOOL_IMPORT_PATH directories. If
-- one is found, store its fully qualified path in the 'path' buffer passed
resolveShortname = error "TODO"
-- | Given either a shorthand or fully qualified path name look for a match
-- against 'cmp'. The passed name will be expanded as needed for comparison
-- purposes and redundant slashes stripped to ensure an accurate match.
comparePathnames = error "TODO"
-- * Mount support functions
-- | Checks to see if the mount is active. If the filesystem is mounted,
-- returns true and the current mountpoint
isMounted h str = do
-- TODO what is the str argument in isMounted supposed to mean?
(mounted, where_) <- C.useAsCString str $ Z.is_mounted h
where_' <- if mounted
then fmap (Just . Path) $ C.packCString where_
else return Nothing
return (mounted, where_')
zfsIsMounted h = do
(mounted, where_) <- Z.zfs_is_mounted h
where_' <- if mounted
then fmap (Just . Path) $ C.packCString where_
else return Nothing
return (mounted, where_')
-- | Unmount the given filesystem.
unmount h mount flags = do
-- TODO what is flags here?
let mountPtrFun f = case mount of
Nothing -> f nullPtr
Just (Path p) -> C.useAsCString p f
mountPtrFun $ \str -> Z.zfs_unmount h str flags
-- TODO flags here too.
unmountAll h flags = Z.zfs_unmountall h flags
-- * Share support
isShared = Z.zfs_is_shared
share = Z.zfs_share
unshare = Z.zfs_unshare
-- ** Protocol-specific share support
isSharedNFS h = do
(shared, p) <- Z.zfs_is_shared_nfs h
p' <- C.packCString p
return (shared, p)
isSharedSMB h = do
(shared, p) <- Z.zfs_is_shared_smb h
p' <- C.packCString p
return (shared, p)
shareNFS = Z.zfs_share_nfs
shareSMB = Z.zfs_share_smb
unshareNFS = Z.zfs_unshare_nfs
unshareSMB = Z.zfs_unshare_smb
unshareAllNFS = Z.zfs_unshareall_nfs
unshareAllSMB = Z.zfs_unshareall_smb
unshareAllByPath h path = do
let pathPtrFun f = case path of
Nothing -> f nullPtr
Just (Path p) -> C.useAsCString p f
pathPtrFun $ \str -> Z.zfs_unshareall_bypath h str
-- * Utility functions
niceNumber = error "TODO"
niceStringToNumber = error "TODO"
isPoolInUse h (Fd fd) = do
(err, st, n, inUse) <- Z.zpool_in_use h fd
let state = if inUse
then Just . toEnum . fromIntegral $ st
else Nothing
name <- if inUse
then fmap (Just . PoolName) $ C.packCString n
else return Nothing
return (err, state, name, inUse)
-- ** Label manipulation
iteratorWrapper :: (FunPtr (ZFSIterator a) -> Ptr a -> IO FC.CInt) -> (ZFSHandle -> a -> IO (Either b a)) -> a -> IO (Either b a)
iteratorWrapper call f startState = do
sp <- newIORef startState
intermediate <- newIORef $ Right startState
fp <- wrapZFSIterator $ \h _ -> do
val <- readIORef sp
r <- f h val
writeIORef intermediate r
case r of
Left _ -> return 1
Right r -> do
writeIORef sp r
return 0
void $ call fp nullPtr
freeHaskellFunPtr fp
readIORef intermediate
iterateOverZFSRoot h = iteratorWrapper (zfs_iter_root h)
iterateOverZFSChildren h = iteratorWrapper (zfs_iter_children h)
type AllowRecursion = Bool
iterateOverZFSDependents :: ZFSHandle -> AllowRecursion -> (ZFSHandle -> a -> IO (Either b a)) -> a -> IO (Either b a)
iterateOverZFSDependents h b = iteratorWrapper (zfs_iter_dependents h b)
iterateOverZFSFilesystems h = iteratorWrapper (zfs_iter_filesystems h)
type Simple = Bool
iterateOverZFSSnapshots :: ZFSHandle -> Simple -> (ZFSHandle -> a -> IO (Either b a)) -> a -> IO (Either b a)
iterateOverZFSSnapshots h b = iteratorWrapper (zfs_iter_snapshots h b)
iterateOverZFSSnapshotsSorted h = iteratorWrapper (zfs_iter_snapshots_sorted h)
type SpecFormat = C.ByteString
{-
spec is a string like "A,B%C,D"
<snaps>, where <snaps> can be:
<snap> (single snapshot)
<snap>%<snap> (range of snapshots, inclusive)
%<snap> (range of snapshots, starting with earliest)
<snap>% (range of snapshots, ending with last)
% (all snapshots)
<snaps>[,...] (comma separated list of the above)
If a snapshot can not be opened, continue trying to open the others, but
return ENOENT at the end.
-}
iterateOverZFSSnapSpec :: ZFSHandle -> SpecFormat -> (ZFSHandle -> a -> IO (Either b a)) -> a -> IO (Either b a)
iterateOverZFSSnapSpec h orig f x = C.useAsCString orig $ \str -> iteratorWrapper (zfs_iter_snapspec h str) f x
addHandle :: GetAllCallback -> ZFSHandle -> IO ()
addHandle = error "TODO"
datasetCompare :: (Dataset a, Dataset b) => a -> b -> IO FC.CInt
datasetCompare x y = libzfs_dataset_cmp (dataset x) (dataset y)
-- ** Functions to create and destroy datasets.
create :: LibZFSHandle -> Path -> Z.ZFSType -> Z.NVList -> IO Z.ZFSError
create h (Path p) t n = C.useAsCString p $ \str -> zfs_create h str t n
createAncestors :: LibZFSHandle -> Path -> IO Z.ZFSError
createAncestors h (Path p) = C.useAsCString p $ zfs_create_ancestors h
type Defer = Bool
destroy :: ZFSHandle -> Defer -> IO Z.ZFSError
destroy = zfs_destroy
newtype SnapshotName = SnapshotName { fromSnapshotName :: C.ByteString }
destroySnaps :: ZFSHandle -> SnapshotName -> Defer -> IO Z.ZFSError
destroySnaps h (SnapshotName n) d = C.useAsCString n $ \str -> zfs_destroy_snaps h str d
destroySnapsNVL :: LibZFSHandle -> Z.NVList -> Defer -> IO Z.ZFSError
destroySnapsNVL = zfs_destroy_snaps_nvl
data SnapshotDepth = Shallow | Recursive
snapshot :: Z.LibZFSHandle -> Path -> SnapshotDepth -> Z.NVList -> IO Z.ZFSError
snapshot h (Path p) d n = C.useAsCString p $ \str -> Z.zfs_snapshot h str (case d of
Shallow -> False
Recursive -> True) n
snapshotNVL :: LibZFSHandle -> Z.NVList -> Z.NVList -> IO ZFSError
snapshotNVL h n1 n2 = Z.zfs_snapshot_nvl h n1 n2
type ForceRollback = Bool
rollback :: ZFSHandle -> ZFSHandle -> ForceRollback -> IO ZFSError
rollback = zfs_rollback
-- ** Management interfaces for SMB ACL
-- Haskell utilities
mask :: (Enum a, Num b) => [a] -> b
mask = fromIntegral . L.foldl' (\i f -> i .|. fromEnum f) 0
| iand675/hs-zfs | src/ZFS.hs | bsd-3-clause | 18,207 | 0 | 17 | 3,733 | 4,344 | 2,275 | 2,069 | 269 | 4 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE TypeOperators #-}
-- | This module describes the todo API in all of it's simplistic glory. This
-- follows the 'Api' and 'handler' pattern establisehd in the Files part.
module PureFlowy.Api.Todos where
import Control.Monad.IO.Class (liftIO)
import Data.Aeson (FromJSON, ToJSON)
import Data.Foldable (find)
import Data.IORef (IORef, modifyIORef', newIORef,
readIORef)
import GHC.Generics (Generic)
import Servant
import System.IO.Unsafe (unsafePerformIO)
-- | The type of the API is namespaced under "todos" and we delegate to
-- a further type 'TodoCrud' for the actual handling and routes.
type Api =
"todos" :> TodoCRUD
-- | This specifies four endpoints:
--
-- * At @GET todos@, get a list of the todos.
-- * At @GET todos/:id@, get a todo by id.
-- * At @POST todos@, create a 'Todo' and return the ID.
-- * At @DELETE todos/:id@, delete the todo with the given ID.
type TodoCRUD
= QueryParam "sortBy" String :> Get '[JSON] [Todo]
:<|> Capture "id" Int :> Get '[JSON] Todo
:<|> ReqBody '[JSON] Todo :> Post '[JSON] Int
:<|> Capture "id" Int :> Delete '[JSON] ()
data Todo = Todo
{ todoDone :: Bool
, todoId :: Int
, todoItem :: String
} deriving (Eq, Show, Generic, ToJSON, FromJSON)
-- | The definition of the handlers lines up exactly with the definition of the
-- type.
handler :: Server Api
handler = listTodos
:<|> getTodo
:<|> createTodo
:<|> deleteTodo
-- | To list all the available 'Todo', we read the 'IORef' in memory database of
-- 'Todo's.
listTodos :: Maybe String -> Handler [Todo]
listTodos _ = liftIO (readIORef todoDatabase)
-- | Getting a single 'Todo' by the 'Int' identifier reads rather naturally.
-- First, we reuse the 'listTodos' handler function, returning the list of
-- 'Todo's. Then, we use the 'find' function to find the 'Todo' with the
-- matching ID. If that comes up with 'Nothing', then we throw a 404 error.
-- Otherwise, we return the matching todo item.
getTodo :: Int -> Handler Todo
getTodo i = do
todos <- listTodos Nothing
case find (\todo -> i == todoId todo) todos of
Nothing ->
throwError err404
Just todo ->
pure todo
-- | When creating a 'Todo', we ignore the ID, and instead give it the next ID
-- in the list.
createTodo :: Todo -> Handler Int
createTodo postedTodo = do
todos <- listTodos Nothing
let newId = 1 + maximum (map todoId todos)
todo = postedTodo { todoId = newId }
liftIO (modifyIORef' todoDatabase (todo :))
pure newId
-- | When deleting a 'Todo', we just filter the todo lists, keeping only todos
-- that do not have the same todo ID.
deleteTodo :: Int -> Handler ()
deleteTodo i =
liftIO (modifyIORef' todoDatabase (filter (\todo -> i /= todoId todo)))
-- | This is the "global mutable reference" pattern. Generally, this is a really
-- bad idea: you'll want to pass things as parameters, not refer to them like
-- this.
todoDatabase :: IORef [Todo]
todoDatabase = unsafePerformIO . newIORef $
zipWith
(Todo False)
[1..]
["Wash the cats.", "Save the trees."]
{-# NOINLINE todoDatabase #-}
| parsonsmatt/pureflowy | src/PureFlowy/Api/Todos.hs | bsd-3-clause | 3,394 | 0 | 14 | 856 | 609 | 334 | 275 | 57 | 2 |
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable, UndecidableInstances #-}
module AST where
import Data.Foldable (Foldable)
import Data.Traversable (Traversable)
type AId = String
data AEntry t
= AProgram [t]
| AClass AId [t] [t]
| AVar AVarType AId
| AMethod AVarType AId [t] [t] [t] t -- retType name args vars code retExpr
| AStatScope [t]
| AIf t t t
| AWhile t t
| APrint t
| AAssignment t t
| AIndexedAssignment t t t
| AExprOp AOperand t t
| AExprList t t
| AExprLength t
| AExprInvocation t AId [t] -- expression, method name, args
| AExprInt Int
| AExprTrue
| AExprFalse
| AExprIdentifier AId
| AExprThis
| AExprIntArray t
| AExprNewObject AId
| AExprNegation t
| AExprVoid
deriving (Functor, Foldable, Show, Traversable)
data AVarType
= TypeIntegerArray
| TypeBoolean
| TypeInteger
| TypeString
| TypeStringArray
| TypeAppDefined AId
| TypeVoid
deriving (Eq, Ord, Show)
data AOperand
= OperandLogicalAnd
| OperandLogicalOr
| OperandLess
| OperandLessEqual
| OperandEqual
| OperandPlus
| OperandMinus
| OperandMult
deriving (Eq, Ord, Show)
newtype Fix f = Fix (f (Fix f))
instance (Show (f (Fix f))) => Show (Fix f) where
showsPrec p (Fix f) = showsPrec p f
type UnnAST = Fix AEntry
view :: UnnAST -> AEntry UnnAST
view (Fix e) = e
| davnils/minijava-compiler | src/AST.hs | bsd-3-clause | 1,564 | 0 | 10 | 498 | 423 | 244 | 179 | -1 | -1 |
module Data.CounterSpec where
import Control.Applicative
import Data.Binary
import Data.Counter (Counter)
import Data.Map (Map)
import Data.Monoid
import Data.Set (Set)
import Prelude
import Test.Hspec
import Test.Hspec.QuickCheck
import Test.QuickCheck
import qualified Data.Map as Map
import qualified Data.Set as Set
import qualified Data.Counter as Counter
instance (Arbitrary a, Ord a) => Arbitrary (Counter a) where
arbitrary = Counter.fromList <$> arbitrary
shrink c = Counter.fromCounts <$> shrink (Counter.toList c)
instance (Arbitrary a, Ord a) => Arbitrary (Set a) where
arbitrary = Set.fromList <$> arbitrary
shrink s = Set.fromList <$> shrink (Set.toList s)
instance (Arbitrary a, Ord a, Arbitrary b) => Arbitrary (Map a b) where
arbitrary = Map.fromList <$> arbitrary
shrink m = Map.fromList <$> shrink (Map.toList m)
main :: IO ()
main = hspec spec
spec :: Spec
spec =
describe "Counter" $ do
describe "Monoid" $ do
prop "mempty <> x == x" $ \(x :: Counter Int) ->
mempty <> x == x
prop "x <> mempty == x" $ \(x :: Counter Int) ->
x <> mempty == x
prop "x <> (y <> z) == (x <> y) <> z" $ \(x :: Counter String) y z ->
x <> (y <> z) == (x <> y) <> z
describe "Arbitrary" $ do
prop "valid x" $ \(x :: Counter String) -> Counter.valid x
prop "all valid (shrink x)" $ \(x :: Counter String) ->
all Counter.valid (shrink x)
describe "Binary" $ do
prop "decode . encode == id" $ \(x :: Counter Int) ->
decode (encode x) == x
describe "fromMap" $ do
prop "fromMap (Map.fromListWith (+) xs) == fromCounts xs" $ \(xs :: [(Int, Integer)]) ->
Counter.fromMap (Map.fromListWith (+) xs) == Counter.fromCounts xs
prop "fromMap . toMap == id" $ \(x :: Counter String) ->
Counter.fromMap (Counter.toMap x) == x
prop "toMap . fromMap == id" $ \(x :: Map String Integer) ->
Counter.toMap (Counter.fromMap x) == Map.filter (> 0) x
describe "singleton" $ do
prop "singleton x == fromCounts [(x, 1)]" $ \(x :: String) ->
Counter.singleton x == Counter.fromCounts [(x, 1)]
prop "singleton x == fromList [x]" $ \(x :: String) ->
Counter.singleton x == Counter.fromList [x]
prop "singleton x == fromSet (Set.singleton x)" $ \(x :: String) ->
Counter.singleton x == Counter.fromSet (Set.singleton x)
describe "fromSet" $
prop "fromSet xs == fromList (Set.toList xs)" $ \(xs :: Set Integer) ->
Counter.fromSet xs == Counter.fromList (Set.toList xs)
describe "increment" $ do
prop "singleton x <> singleton x == increment x (singleton x)" $ \(x :: String) ->
Counter.singleton x <> Counter.singleton x == Counter.increment x (Counter.singleton x)
prop "fromList [x, x] == increment x (singleton x)" $ \(x :: String) ->
Counter.fromList [x, x] == Counter.increment x (Counter.singleton x)
describe "lookup" $ do
prop "lookup x (singleton x) == 1" $ \(x :: String) ->
Counter.lookup x (Counter.singleton x) == 1
prop "lookup x mempty == 0" $ \(x :: String) ->
Counter.lookup x mempty == 0
describe "valid" $ do
prop "valid (singleton x)" $ \(x :: String) ->
Counter.valid (Counter.singleton x)
prop "valid (fromMap xs)" $ \(xs :: Map String Integer) ->
Counter.valid (Counter.fromMap xs)
prop "valid (fromCounts xs)" $ \(xs :: [(String, Integer)]) ->
Counter.valid (Counter.fromCounts xs)
prop "valid (fromList xs)" $ \(xs :: [String]) ->
Counter.valid (Counter.fromList xs)
prop "valid (fromSet xs)" $ \(xs :: Set String) ->
Counter.valid (Counter.fromSet xs)
| intolerable/data-counter | test/Data/CounterSpec.hs | bsd-3-clause | 3,712 | 0 | 17 | 914 | 1,351 | 673 | 678 | -1 | -1 |
{-# LANGUAGE CPP, MagicHash, UnboxedTuples, DeriveDataTypeable, BangPatterns #-}
-- |
-- Module : Data.Primitive.Array
-- Copyright : (c) Roman Leshchinskiy 2009-2012
-- License : BSD-style
--
-- Maintainer : Roman Leshchinskiy <rl@cse.unsw.edu.au>
-- Portability : non-portable
--
-- Primitive boxed arrays
--
module Data.Primitive.Array (
Array(..), MutableArray(..),
newArray, readArray, writeArray, indexArray, indexArrayM,
unsafeFreezeArray, unsafeThawArray, sameMutableArray,
copyArray, copyMutableArray,
cloneArray, cloneMutableArray
) where
import Control.Monad.Primitive
import GHC.Base ( Int(..) )
import GHC.Prim
import Data.Typeable ( Typeable )
import Data.Data ( Data(..) )
import Data.Primitive.Internal.Compat ( isTrue#, mkNoRepType )
import Control.Monad.ST(runST)
-- | Boxed arrays
data Array a = Array (Array# a) deriving ( Typeable )
-- | Mutable boxed arrays associated with a primitive state token.
data MutableArray s a = MutableArray (MutableArray# s a)
deriving ( Typeable )
-- | Create a new mutable array of the specified size and initialise all
-- elements with the given value.
newArray :: PrimMonad m => Int -> a -> m (MutableArray (PrimState m) a)
{-# INLINE newArray #-}
newArray (I# n#) x = primitive
(\s# -> case newArray# n# x s# of
(# s'#, arr# #) -> (# s'#, MutableArray arr# #))
-- | Read a value from the array at the given index.
readArray :: PrimMonad m => MutableArray (PrimState m) a -> Int -> m a
{-# INLINE readArray #-}
readArray (MutableArray arr#) (I# i#) = primitive (readArray# arr# i#)
-- | Write a value to the array at the given index.
writeArray :: PrimMonad m => MutableArray (PrimState m) a -> Int -> a -> m ()
{-# INLINE writeArray #-}
writeArray (MutableArray arr#) (I# i#) x = primitive_ (writeArray# arr# i# x)
-- | Read a value from the immutable array at the given index.
indexArray :: Array a -> Int -> a
{-# INLINE indexArray #-}
indexArray (Array arr#) (I# i#) = case indexArray# arr# i# of (# x #) -> x
-- | Monadically read a value from the immutable array at the given index.
-- This allows us to be strict in the array while remaining lazy in the read
-- element which is very useful for collective operations. Suppose we want to
-- copy an array. We could do something like this:
--
-- > copy marr arr ... = do ...
-- > writeArray marr i (indexArray arr i) ...
-- > ...
--
-- But since primitive arrays are lazy, the calls to 'indexArray' will not be
-- evaluated. Rather, @marr@ will be filled with thunks each of which would
-- retain a reference to @arr@. This is definitely not what we want!
--
-- With 'indexArrayM', we can instead write
--
-- > copy marr arr ... = do ...
-- > x <- indexArrayM arr i
-- > writeArray marr i x
-- > ...
--
-- Now, indexing is executed immediately although the returned element is
-- still not evaluated.
--
indexArrayM :: Monad m => Array a -> Int -> m a
{-# INLINE indexArrayM #-}
indexArrayM (Array arr#) (I# i#)
= case indexArray# arr# i# of (# x #) -> return x
-- | Convert a mutable array to an immutable one without copying. The
-- array should not be modified after the conversion.
unsafeFreezeArray :: PrimMonad m => MutableArray (PrimState m) a -> m (Array a)
{-# INLINE unsafeFreezeArray #-}
unsafeFreezeArray (MutableArray arr#)
= primitive (\s# -> case unsafeFreezeArray# arr# s# of
(# s'#, arr'# #) -> (# s'#, Array arr'# #))
-- | Convert an immutable array to an mutable one without copying. The
-- immutable array should not be used after the conversion.
unsafeThawArray :: PrimMonad m => Array a -> m (MutableArray (PrimState m) a)
{-# INLINE unsafeThawArray #-}
unsafeThawArray (Array arr#)
= primitive (\s# -> case unsafeThawArray# arr# s# of
(# s'#, arr'# #) -> (# s'#, MutableArray arr'# #))
-- | Check whether the two arrays refer to the same memory block.
sameMutableArray :: MutableArray s a -> MutableArray s a -> Bool
{-# INLINE sameMutableArray #-}
sameMutableArray (MutableArray arr#) (MutableArray brr#)
= isTrue# (sameMutableArray# arr# brr#)
-- | Copy a slice of an immutable array to a mutable array.
copyArray :: PrimMonad m
=> MutableArray (PrimState m) a -- ^ destination array
-> Int -- ^ offset into destination array
-> Array a -- ^ source array
-> Int -- ^ offset into source array
-> Int -- ^ number of elements to copy
-> m ()
{-# INLINE copyArray #-}
#if __GLASGOW_HASKELL__ > 706
-- NOTE: copyArray# and copyMutableArray# are slightly broken in GHC 7.6.* and earlier
copyArray (MutableArray dst#) (I# doff#) (Array src#) (I# soff#) (I# len#)
= primitive_ (copyArray# src# soff# dst# doff# len#)
#else
copyArray !dst !doff !src !soff !len = go 0
where
go i | i < len = do
x <- indexArrayM src (soff+i)
writeArray dst (doff+i) x
go (i+1)
| otherwise = return ()
#endif
-- | Copy a slice of a mutable array to another array. The two arrays may
-- not be the same.
copyMutableArray :: PrimMonad m
=> MutableArray (PrimState m) a -- ^ destination array
-> Int -- ^ offset into destination array
-> MutableArray (PrimState m) a -- ^ source array
-> Int -- ^ offset into source array
-> Int -- ^ number of elements to copy
-> m ()
{-# INLINE copyMutableArray #-}
#if __GLASGOW_HASKELL__ >= 706
-- NOTE: copyArray# and copyMutableArray# are slightly broken in GHC 7.6.* and earlier
copyMutableArray (MutableArray dst#) (I# doff#)
(MutableArray src#) (I# soff#) (I# len#)
= primitive_ (copyMutableArray# src# soff# dst# doff# len#)
#else
copyMutableArray !dst !doff !src !soff !len = go 0
where
go i | i < len = do
x <- readArray src (soff+i)
writeArray dst (doff+i) x
go (i+1)
| otherwise = return ()
#endif
-- | Return a newly allocated Array with the specified subrange of the
-- provided Array. The provided Array should contain the full subrange
-- specified by the two Ints, but this is not checked.
cloneArray :: Array a -- ^ source array
-> Int -- ^ offset into destination array
-> Int -- ^ number of elements to copy
-> Array a
{-# INLINE cloneArray #-}
#if __GLASGOW_HASKELL__ >= 702
cloneArray (Array arr#) (I# off#) (I# len#)
= case cloneArray# arr# off# len# of arr'# -> Array arr'#
#else
cloneArray arr off len = runST $ do
marr2 <- newArray len (error "Undefined element")
copyArray marr2 0 arr off len
unsafeFreezeArray marr2
#endif
-- | Return a newly allocated MutableArray. with the specified subrange of
-- the provided MutableArray. The provided MutableArray should contain the
-- full subrange specified by the two Ints, but this is not checked.
cloneMutableArray :: PrimMonad m
=> MutableArray (PrimState m) a -- ^ source array
-> Int -- ^ offset into destination array
-> Int -- ^ number of elements to copy
-> m (MutableArray (PrimState m) a)
{-# INLINE cloneMutableArray #-}
#if __GLASGOW_HASKELL__ >= 702
cloneMutableArray (MutableArray arr#) (I# off#) (I# len#) = primitive
(\s# -> case cloneMutableArray# arr# off# len# s# of
(# s'#, arr'# #) -> (# s'#, MutableArray arr'# #))
#else
cloneMutableArray marr off len = do
marr2 <- newArray len (error "Undefined element")
let go !i !j c
| c >= len = return marr2
| otherwise = do
b <- readArray marr i
writeArray marr2 j b
go (i+1) (j+1) (c+1)
go off 0 0
#endif
instance Typeable a => Data (Array a) where
toConstr _ = error "toConstr"
gunfold _ _ = error "gunfold"
dataTypeOf _ = mkNoRepType "Data.Primitive.Array.Array"
instance (Typeable s, Typeable a) => Data (MutableArray s a) where
toConstr _ = error "toConstr"
gunfold _ _ = error "gunfold"
dataTypeOf _ = mkNoRepType "Data.Primitive.Array.MutableArray"
| fpco/primitive | Data/Primitive/Array.hs | bsd-3-clause | 8,486 | 0 | 13 | 2,336 | 1,431 | 772 | 659 | 109 | 1 |
{-# LANGUAGE OverloadedStrings, QuasiQuotes #-}
module Tests.Readers.RST (tests) where
import Text.Pandoc.Definition
import Test.Framework
import Tests.Helpers
import Tests.Arbitrary()
import Text.Pandoc.Builder
import Text.Pandoc
rst :: String -> Pandoc
rst = readRST defaultParserState{ stateStandalone = True }
infix 5 =:
(=:) :: ToString c
=> String -> (String, c) -> Test
(=:) = test rst
tests :: [Test]
tests = [ "line block with blank line" =:
"| a\n|\n| b" =?> para (str "a" +++ linebreak +++
linebreak +++ str " " +++ str "b")
, "field list" =:
[_LIT|
:Hostname: media08
:IP address: 10.0.0.19
:Size: 3ru
:Date: 2001-08-16
:Version: 1
:Authors: - Me
- Myself
- I
:Indentation: Since the field marker may be quite long, the second
and subsequent lines of the field body do not have to line up
with the first line, but they must be indented relative to the
field name marker, and they must line up with each other.
:Parameter i: integer
:Final: item
on two lines
|] =?> ( setAuthors ["Me","Myself","I"]
$ setDate "2001-08-16"
$ doc
$ definitionList [ (str "Hostname", [para "media08"])
, (str "IP address", [para "10.0.0.19"])
, (str "Size", [para "3ru"])
, (str "Version", [para "1"])
, (str "Indentation", [para "Since the field marker may be quite long, the second and subsequent lines of the field body do not have to line up with the first line, but they must be indented relative to the field name marker, and they must line up with each other."])
, (str "Parameter i", [para "integer"])
, (str "Final", [para "item on two lines"])
])
, "URLs with following punctuation" =:
("http://google.com, http://yahoo.com; http://foo.bar.baz.\n" ++
"http://foo.bar/baz_(bam) (http://foo.bar)") =?>
para (link "http://google.com" "" "http://google.com" +++ ", " +++
link "http://yahoo.com" "" "http://yahoo.com" +++ "; " +++
link "http://foo.bar.baz" "" "http://foo.bar.baz" +++ ". " +++
link "http://foo.bar/baz_(bam)" "" "http://foo.bar/baz_(bam)"
+++ " (" +++ link "http://foo.bar" "" "http://foo.bar" +++ ")")
]
| Lythimus/lptv | sites/all/modules/jgm-pandoc-8be6cc2/src/Tests/Readers/RST.hs | gpl-2.0 | 2,517 | 0 | 18 | 810 | 440 | 241 | 199 | -1 | -1 |
-- https://projecteuler.net/problem=9
import Data.List
specialPythagoreanTripletProduct :: Integer
specialPythagoreanTripletProduct = (\(a,b,c) -> a*b*c) specialPythagoreanTriplet
where
specialPythagoreanTriplet = head $ filter isPythagoreanTriplet specialTriplets
specialTriplets = [(a,b,1000-a-b) | a <- [1..1000], b <- [a+1..1000], a + b < 1000]
isPythagoreanTriplet (a,b,c) = a^2 + b^2 == c^2
| nothiphop/project-euler | 009/solution.hs | apache-2.0 | 412 | 0 | 9 | 59 | 166 | 91 | 75 | 6 | 1 |
-- Copyright 2017 Google Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
{-| Intermediate data structure from GhcAnalyser.
This will be transformed to actual Grok/Kythe schema.
We need to keep this relatively lightweight to avoid mirroring GHC AST
structures. The goal for now should be support for code browsing and
crossreferencing, _not_ faithfully mirroring exact types for static analysis.
Should not expose info about the producer (GHC), to make them switchable.
Should not make assumptions about the consuming backend for same reason.
-}
module Language.Haskell.Indexer.Translate
( SourcePath(..)
, Pos(..), Span(..), spanFile
, Tick(..)
--
, XRef(..)
, AnalysedFile(..)
, ModuleTick(..)
, Decl(..)
, DeclExtra(..), emptyExtra, withExtra
, DocUriDecl(..)
, ModuleDocUriDecl(..)
, PkgModule(..)
, StringyType(..)
, TickReference(..), ReferenceKind(..)
, Relation(..), RelationKind(..)
) where
import Data.Maybe (fromMaybe)
import Data.Text (Text)
-- | Like 'FilePath', but uses 'Text' and is a real wrapper, not a type alias.
-- Note: source paths usually live in a temporary workdir, or maybe under some
-- top-level dir. If possible, feed this top-level dir to the frontend, and
-- it should strip it from the emitted paths.
newtype SourcePath = SourcePath { unSourcePath :: Text }
deriving (Eq, Ord)
instance Show SourcePath where
show (SourcePath p) = show p
-- | Line, col and filename. Col is in characters (not bytes).
data Pos = Pos !Int !Int !SourcePath
deriving (Eq, Ord, Show)
-- | A text range with start and end position (inclusive/exclusive
-- respectively).
data Span = Span !Pos !Pos
deriving (Eq, Ord, Show)
-- | File containing Span.
spanFile :: Span -> SourcePath
spanFile (Span (Pos _ _ f) _) = f
-- | All data for a given input file. TODO rename?
-- Contains lists, to give lazy evaluation a chance and results can eventually
-- be streamed with lower peek memory residency.
data XRef = XRef
{ xrefFile :: !AnalysedFile
, xrefModule :: !ModuleTick
, xrefDecls :: [Decl]
, xrefDocDecls :: [DocUriDecl]
, xrefModuleDocDecls :: [ModuleDocUriDecl]
, xrefCrossRefs :: [TickReference]
, xrefRelations :: [Relation]
, xrefImports :: [ModuleTick]
}
deriving (Eq, Show)
data AnalysedFile = AnalysedFile
{ analysedTempPath :: !SourcePath
-- ^ The path of the analysed file on the actual filesystem, not
-- yet stripped of the temporary directory prefix.
-- The frontend can use this path to read the file contents if needed.
, analysedOriginalPath :: !SourcePath
-- ^ A nice, abstract path, which developers think of as the location of
-- the file. Ideally stripped of temporary workdirs.
}
deriving (Eq, Show)
-- | Info required to reference a module.
data ModuleTick = ModuleTick
{ mtPkgModule :: !PkgModule
, mtSpan :: !(Maybe Span)
-- ^ Span of the module name.
-- For example, 'X' in 'module X where'.
-- Main modules can have this missing.
}
deriving (Eq, Ord, Show)
-- | A Tick(et) is a globally unique identifier for some entity in the AST.
-- Not mandatory, but is ideally stable across multiple compilations.
--
-- Not related to GHC's SCC annotations (called ticks internally).
data Tick = Tick
{ tickSourcePath :: !(Maybe SourcePath)
, tickPkgModule :: !PkgModule
, tickThing :: !Text
-- ^ The unqualified name of the entity.
, tickSpan :: !(Maybe Span)
-- ^ Can be broader or just loosely connected to the physical location
-- of the entity in source. Should only be used for generating a unique
-- tick string. Use other spans in Decl for source linking.
, tickUniqueInModule :: !Bool
-- ^ If true, the generated unique name can omit the span.
-- This usually signals top levelness too.
-- TODO(robinpalotai): make the distinction clear? Rename?
, tickTermLevel :: !Bool
-- ^ Needed to disambiguate same name occuring in term and type level.
}
deriving (Eq, Ord, Show)
data PkgModule = PkgModule
{ getPackage :: !Text
, getModule :: !Text
, getPackageWithVersion :: !Text
}
deriving (Eq, Ord, Show)
data Decl = Decl
{ declTick :: !Tick
, declIdentifierSpan :: !(Maybe Span)
-- ^ Points to a potentially narrow span containing the identifier of
-- the decl. Also see other spanny fields in DeclExtra.
, declType :: !StringyType
-- ^ Should go away once we switch to emitting separate type Decls.
, declExtra :: !(Maybe DeclExtra)
-- ^ Since rarely present, in 'Maybe' to minimize memory usage, and to
-- let DeclExtra grow without concern.
}
deriving (Eq, Show)
data DocUriDecl = DocUriDecl
{ ddeclTick :: !Tick
, ddeclDocUri :: !Text -- ^ Document URI for the ticket.
}
deriving (Eq, Show)
data ModuleDocUriDecl = ModuleDocUriDecl
{ mddeclTick :: !ModuleTick
, mddeclDocUri :: !Text -- ^ Document URI for the module.
}
deriving (Eq, Show)
-- | Additional information about the decl, for info that is rarely present.
data DeclExtra = DeclExtra
{ methodForInstance :: !(Maybe Text)
-- ^ A readable unqualified name of the instance, in the form of
-- "Cls Inst". Frontends can use this data to provide more descripive
-- identifier name ("Cls Inst.foo" instead just "foo"), which is
-- helpful when listed in an UI.
, alternateIdSpan :: !(Maybe Span)
-- ^ Set if the declIdentifierSpan overlaps with other spans, making it
-- problematic for UI tools. Then the alternateIdSpan can be used by
-- frontends for example for hyperlinking.
}
deriving (Eq, Show)
emptyExtra :: DeclExtra
emptyExtra = DeclExtra Nothing Nothing
-- | Modifies declExtra of the given decl, and creates one if it is Nothing so
-- far.
withExtra :: (DeclExtra -> DeclExtra) -> Decl -> Decl
withExtra f d =
let extra = fromMaybe emptyExtra (declExtra d)
in d { declExtra = Just $! f extra }
-- | Ideally types of things are referred using the tick of those types.
-- But for complex types, such a tick (and the accompanying decl) must be
-- assembled (recursively) on the spot. This is not yet done, in that case
-- we lie that this is just a simple type (represented by the given string).
-- The loss with that is that the type graph is unusable for programmatic
-- queries.
-- TODO(robinpalotai): remove this and add a proper Decl for the type +
-- a RelationKind ctor 'IsTypeOf'.
data StringyType = StringyType
{ declQualifiedType :: !Text
, declUserFriendlyType :: !Text
}
deriving (Eq, Show)
-- | Reference to the given tick from the given span.
data TickReference = TickReference
{ refTargetTick :: !Tick
, refSourceSpan :: !Span
-- ^ The precise location of the reference. Frontends probably want to
-- make this a hyperlink on the UI.
, refHighLevelContext :: !(Maybe Tick)
-- ^ The context from which the reference originates. Theoretically a
-- frontend could infer the context (enclosing scope) from the reference
-- source span, but 1) it is not obvious how large context to choose,
-- and 2) since the compiler already has the scoping info, it is easier
-- for the indexer to emit it.
--
-- Here we pragmatically set the context to the current top-level
-- function, if any. On the UI, this might show up as the next element
-- in the call chain - see 'ReferenceKind'.
, refKind :: !ReferenceKind
} deriving (Eq, Show)
-- | Distinguishing a plain reference from a call is traditional in imperative
-- languages, but in a functional language - or an imperative one with
-- functional elements - these concepts are fuzzy. For example, we might see
-- partial application as either reference or call.
-- Disclaimer: author of this comment is not an expert on the topic, so
-- following might be imprecise or even foolish.
--
-- Traditionally a call puts a new entry on the call stack.
-- But with partial application or lazy evaluation our runtime representation
-- is a call graph (with thunks as nodes) rather than a linear stack.
--
-- So here we instead resort to considering what a call means for the user.
-- An UI with code crossref support can typically display the call stack or
-- possible call chain of a function. This chain is expanded by the user to
-- discover sites that deal with that function - an example situation can be
-- debugging a problem or understanding new code.
--
-- Here we adopt a simplistic distinction - call is anything with at least one
-- argument application, the rest are reference. The frontend is free to
-- disregard this information and treat everything as calls or references
-- though.
data ReferenceKind
= Ref -- ^ Reference
| Call -- ^ Function call
| TypeDecl -- ^ Usage of identifier in type declaration, left to "::"
| Import -- ^ Imported entities
deriving (Eq, Ord, Show)
-- | A Relation is between standalone semantic nodes, in contrast to
-- TickReference, which is between a source span and a semantic node.
--
-- Read it aloud as 'relSourceTick' 'relKind' 'relTargetTick'.
data Relation = Relation
{ relSourceTick :: !Tick
, relKind :: !RelationKind
, relTargetTick :: !Tick
}
deriving (Eq, Show)
data RelationKind = ImplementsMethod | InstantiatesClass
deriving (Eq, Ord, Show)
| google/haskell-indexer | haskell-indexer-translate/src/Language/Haskell/Indexer/Translate.hs | apache-2.0 | 10,096 | 0 | 11 | 2,316 | 1,144 | 705 | 439 | 183 | 1 |
module Language.Hakaru.Util.Finite (CanonicallyFinite(..), enumEverything, enumCardinality, suchThat) where
import Data.List (tails)
import Data.Maybe (fromJust)
import Data.Bits (shiftL)
import qualified Data.Set as S
import qualified Data.Map as M
-- This used to be called Finite, but that is not quite what was implemented,
-- what is implemented is CanonicallyFinite, which is much (much!) stronger.
-- So renamed it.
class (Ord a) => CanonicallyFinite a where
everything :: [a]
cardinality :: a -> Integer
enumEverything :: (Enum a, Bounded a) => [a]
enumEverything = [minBound..maxBound]
enumCardinality :: (Enum a, Bounded a) => a -> Integer
enumCardinality dummy = succ
$ fromIntegral (fromEnum (maxBound `asTypeOf` dummy))
- fromIntegral (fromEnum (minBound `asTypeOf` dummy))
instance CanonicallyFinite () where
everything = enumEverything
cardinality = enumCardinality
instance CanonicallyFinite Bool where
everything = enumEverything
cardinality = enumCardinality
instance CanonicallyFinite Ordering where
everything = enumEverything
cardinality = enumCardinality
instance (CanonicallyFinite a) => CanonicallyFinite (Maybe a) where
everything = Nothing : map Just everything
cardinality = succ . cardinality . fromJust
instance (CanonicallyFinite a, CanonicallyFinite b) => CanonicallyFinite (Either a b) where
everything = map Left everything ++ map Right everything
cardinality x = cardinality l + cardinality r where
(Left l, Right r) = (x, x)
instance (CanonicallyFinite a, CanonicallyFinite b) => CanonicallyFinite (a, b) where
everything = [ (a, b) | a <- everything, b <- everything ]
cardinality ~(a, b) = cardinality a * cardinality b
instance (CanonicallyFinite a, CanonicallyFinite b, CanonicallyFinite c) => CanonicallyFinite (a, b, c) where
everything = [ (a, b, c) | a <- everything, b <- everything, c <- everything ]
cardinality ~(a, b, c) = cardinality a * cardinality b * cardinality c
instance (CanonicallyFinite a, CanonicallyFinite b, CanonicallyFinite c, CanonicallyFinite d) => CanonicallyFinite (a, b, c, d) where
everything = [ (a, b, c, d) | a <- everything, b <- everything, c <- everything, d <- everything ]
cardinality ~(a, b, c, d) = cardinality a * cardinality b * cardinality c * cardinality d
instance (CanonicallyFinite a, CanonicallyFinite b, CanonicallyFinite c, CanonicallyFinite d, CanonicallyFinite e) => CanonicallyFinite (a, b, c, d, e) where
everything = [ (a, b, c, d, e) | a <- everything, b <- everything, c <- everything, d <- everything, e <- everything ]
cardinality ~(a, b, c, d, e) = cardinality a * cardinality b * cardinality c * cardinality d * cardinality e
instance (CanonicallyFinite a) => CanonicallyFinite (S.Set a) where
everything = loop everything S.empty where
loop candidates set = set
: concat [ loop xs (S.insert x set) | x:xs <- tails candidates ]
cardinality set = shiftL 1 (fromIntegral (cardinality (S.findMin set)))
instance (CanonicallyFinite a, Eq b) => Eq (a -> b) where
f == g = all (\x -> f x == g x) everything
f /= g = any (\x -> f x /= g x) everything
-- canonical finiteness is crucial for the definition below to make sense
instance (CanonicallyFinite a, Ord b) => Ord (a -> b) where
f `compare` g = map f everything `compare` map g everything
f < g = map f everything < map g everything
f > g = map f everything > map g everything
f <= g = map f everything <= map g everything
f >= g = map f everything >= map g everything
instance (CanonicallyFinite a, CanonicallyFinite b) => CanonicallyFinite (a -> b) where
everything = [ (M.!) (M.fromDistinctAscList m)
| m <- loop everything ] where
loop [] = [[]]
loop (a:as) = [ (a,b):rest | b <- everything, rest <- loop as ]
cardinality f = cardinality y ^ cardinality x where
(x, y) = (x, f x)
suchThat :: (CanonicallyFinite a) => (a -> Bool) -> S.Set a
suchThat p = S.fromDistinctAscList (filter p everything)
| suhailshergill/hakaru | Language/Hakaru/Util/Finite.hs | bsd-3-clause | 4,170 | 0 | 14 | 942 | 1,535 | 809 | 726 | 66 | 1 |
-- -fno-warn-deprecations for use of Map.foldWithKey
{-# OPTIONS_GHC -fno-warn-deprecations #-}
-----------------------------------------------------------------------------
-- |
-- Module : Distribution.PackageDescription.Configuration
-- Copyright : Thomas Schilling, 2007
-- License : BSD3
--
-- Maintainer : cabal-devel@haskell.org
-- Portability : portable
--
-- This is about the cabal configurations feature. It exports
-- 'finalizePackageDescription' and 'flattenPackageDescription' which are
-- functions for converting 'GenericPackageDescription's down to
-- 'PackageDescription's. It has code for working with the tree of conditions
-- and resolving or flattening conditions.
module Distribution.PackageDescription.Configuration (
finalizePackageDescription,
flattenPackageDescription,
-- Utils
parseCondition,
freeVars,
mapCondTree,
mapTreeData,
mapTreeConds,
mapTreeConstrs,
) where
import Distribution.Package
( PackageName, Dependency(..) )
import Distribution.PackageDescription
( GenericPackageDescription(..), PackageDescription(..)
, Library(..), Executable(..), BuildInfo(..)
, Flag(..), FlagName(..), FlagAssignment
, Benchmark(..), CondTree(..), ConfVar(..), Condition(..)
, TestSuite(..) )
import Distribution.PackageDescription.Utils
( cabalBug, userBug )
import Distribution.Version
( VersionRange, anyVersion, intersectVersionRanges, withinRange )
import Distribution.Compiler
( CompilerId(CompilerId) )
import Distribution.System
( Platform(..), OS, Arch )
import Distribution.Simple.Utils
( currentDir, lowercase )
import Distribution.Simple.Compiler
( CompilerInfo(..) )
import Distribution.Text
( Text(parse) )
import Distribution.Compat.ReadP as ReadP hiding ( char )
import Control.Arrow (first)
import qualified Distribution.Compat.ReadP as ReadP ( char )
import Distribution.Compat.Semigroup as Semi
import Data.Char ( isAlphaNum )
import Data.Maybe ( mapMaybe, maybeToList )
import Data.Map ( Map, fromListWith, toList )
import qualified Data.Map as Map
------------------------------------------------------------------------------
-- | Simplify the condition and return its free variables.
simplifyCondition :: Condition c
-> (c -> Either d Bool) -- ^ (partial) variable assignment
-> (Condition d, [d])
simplifyCondition cond i = fv . walk $ cond
where
walk cnd = case cnd of
Var v -> either Var Lit (i v)
Lit b -> Lit b
CNot c -> case walk c of
Lit True -> Lit False
Lit False -> Lit True
c' -> CNot c'
COr c d -> case (walk c, walk d) of
(Lit False, d') -> d'
(Lit True, _) -> Lit True
(c', Lit False) -> c'
(_, Lit True) -> Lit True
(c',d') -> COr c' d'
CAnd c d -> case (walk c, walk d) of
(Lit False, _) -> Lit False
(Lit True, d') -> d'
(_, Lit False) -> Lit False
(c', Lit True) -> c'
(c',d') -> CAnd c' d'
-- gather free vars
fv c = (c, fv' c)
fv' c = case c of
Var v -> [v]
Lit _ -> []
CNot c' -> fv' c'
COr c1 c2 -> fv' c1 ++ fv' c2
CAnd c1 c2 -> fv' c1 ++ fv' c2
-- | Simplify a configuration condition using the OS and arch names. Returns
-- the names of all the flags occurring in the condition.
simplifyWithSysParams :: OS -> Arch -> CompilerInfo -> Condition ConfVar
-> (Condition FlagName, [FlagName])
simplifyWithSysParams os arch cinfo cond = (cond', flags)
where
(cond', flags) = simplifyCondition cond interp
interp (OS os') = Right $ os' == os
interp (Arch arch') = Right $ arch' == arch
interp (Impl comp vr)
| matchImpl (compilerInfoId cinfo) = Right True
| otherwise = case compilerInfoCompat cinfo of
-- fixme: treat Nothing as unknown, rather than empty list once we
-- support partial resolution of system parameters
Nothing -> Right False
Just compat -> Right (any matchImpl compat)
where
matchImpl (CompilerId c v) = comp == c && v `withinRange` vr
interp (Flag f) = Left f
-- TODO: Add instances and check
--
-- prop_sC_idempotent cond a o = cond' == cond''
-- where
-- cond' = simplifyCondition cond a o
-- cond'' = simplifyCondition cond' a o
--
-- prop_sC_noLits cond a o = isLit res || not (hasLits res)
-- where
-- res = simplifyCondition cond a o
-- hasLits (Lit _) = True
-- hasLits (CNot c) = hasLits c
-- hasLits (COr l r) = hasLits l || hasLits r
-- hasLits (CAnd l r) = hasLits l || hasLits r
-- hasLits _ = False
--
-- | Parse a configuration condition from a string.
parseCondition :: ReadP r (Condition ConfVar)
parseCondition = condOr
where
condOr = sepBy1 condAnd (oper "||") >>= return . foldl1 COr
condAnd = sepBy1 cond (oper "&&")>>= return . foldl1 CAnd
cond = sp >> (boolLiteral +++ inparens condOr +++ notCond +++ osCond
+++ archCond +++ flagCond +++ implCond )
inparens = between (ReadP.char '(' >> sp) (sp >> ReadP.char ')' >> sp)
notCond = ReadP.char '!' >> sp >> cond >>= return . CNot
osCond = string "os" >> sp >> inparens osIdent >>= return . Var
archCond = string "arch" >> sp >> inparens archIdent >>= return . Var
flagCond = string "flag" >> sp >> inparens flagIdent >>= return . Var
implCond = string "impl" >> sp >> inparens implIdent >>= return . Var
boolLiteral = fmap Lit parse
archIdent = fmap Arch parse
osIdent = fmap OS parse
flagIdent = fmap (Flag . FlagName . lowercase) (munch1 isIdentChar)
isIdentChar c = isAlphaNum c || c == '_' || c == '-'
oper s = sp >> string s >> sp
sp = skipSpaces
implIdent = do i <- parse
vr <- sp >> option anyVersion parse
return $ Impl i vr
------------------------------------------------------------------------------
mapCondTree :: (a -> b) -> (c -> d) -> (Condition v -> Condition w)
-> CondTree v c a -> CondTree w d b
mapCondTree fa fc fcnd (CondNode a c ifs) =
CondNode (fa a) (fc c) (map g ifs)
where
g (cnd, t, me) = (fcnd cnd, mapCondTree fa fc fcnd t,
fmap (mapCondTree fa fc fcnd) me)
mapTreeConstrs :: (c -> d) -> CondTree v c a -> CondTree v d a
mapTreeConstrs f = mapCondTree id f id
mapTreeConds :: (Condition v -> Condition w) -> CondTree v c a -> CondTree w c a
mapTreeConds f = mapCondTree id id f
mapTreeData :: (a -> b) -> CondTree v c a -> CondTree v c b
mapTreeData f = mapCondTree f id id
-- | Result of dependency test. Isomorphic to @Maybe d@ but renamed for
-- clarity.
data DepTestRslt d = DepOk | MissingDeps d
instance Semigroup d => Monoid (DepTestRslt d) where
mempty = DepOk
mappend = (Semi.<>)
instance Semigroup d => Semigroup (DepTestRslt d) where
DepOk <> x = x
x <> DepOk = x
(MissingDeps d) <> (MissingDeps d') = MissingDeps (d <> d')
data BT a = BTN a | BTB (BT a) (BT a) -- very simple binary tree
-- | Try to find a flag assignment that satisfies the constraints of all trees.
--
-- Returns either the missing dependencies, or a tuple containing the
-- resulting data, the associated dependencies, and the chosen flag
-- assignments.
--
-- In case of failure, the _smallest_ number of of missing dependencies is
-- returned. [TODO: Could also be specified with a function argument.]
--
-- TODO: The current algorithm is rather naive. A better approach would be to:
--
-- * Rule out possible paths, by taking a look at the associated dependencies.
--
-- * Infer the required values for the conditions of these paths, and
-- calculate the required domains for the variables used in these
-- conditions. Then picking a flag assignment would be linear (I guess).
--
-- This would require some sort of SAT solving, though, thus it's not
-- implemented unless we really need it.
--
resolveWithFlags ::
[(FlagName,[Bool])]
-- ^ Domain for each flag name, will be tested in order.
-> OS -- ^ OS as returned by Distribution.System.buildOS
-> Arch -- ^ Arch as returned by Distribution.System.buildArch
-> CompilerInfo -- ^ Compiler information
-> [Dependency] -- ^ Additional constraints
-> [CondTree ConfVar [Dependency] PDTagged]
-> ([Dependency] -> DepTestRslt [Dependency]) -- ^ Dependency test function.
-> Either [Dependency] (TargetSet PDTagged, FlagAssignment)
-- ^ Either the missing dependencies (error case), or a pair of
-- (set of build targets with dependencies, chosen flag assignments)
resolveWithFlags dom os arch impl constrs trees checkDeps =
case try dom [] of
Right r -> Right r
Left dbt -> Left $ findShortest dbt
where
extraConstrs = toDepMap constrs
-- simplify trees by (partially) evaluating all conditions and converting
-- dependencies to dependency maps.
simplifiedTrees = map ( mapTreeConstrs toDepMap -- convert to maps
. mapTreeConds (fst . simplifyWithSysParams os arch impl))
trees
-- @try@ recursively tries all possible flag assignments in the domain and
-- either succeeds or returns a binary tree with the missing dependencies
-- encountered in each run. Since the tree is constructed lazily, we
-- avoid some computation overhead in the successful case.
try [] flags =
let targetSet = TargetSet $ flip map simplifiedTrees $
-- apply additional constraints to all dependencies
first (`constrainBy` extraConstrs) .
simplifyCondTree (env flags)
deps = overallDependencies targetSet
in case checkDeps (fromDepMap deps) of
DepOk -> Right (targetSet, flags)
MissingDeps mds -> Left (BTN mds)
try ((n, vals):rest) flags =
tryAll $ map (\v -> try rest ((n, v):flags)) vals
tryAll = foldr mp mz
-- special version of `mplus' for our local purposes
mp (Left xs) (Left ys) = (Left (BTB xs ys))
mp (Left _) m@(Right _) = m
mp m@(Right _) _ = m
-- `mzero'
mz = Left (BTN [])
env flags flag = (maybe (Left flag) Right . lookup flag) flags
-- for the error case we inspect our lazy tree of missing dependencies and
-- pick the shortest list of missing dependencies
findShortest (BTN x) = x
findShortest (BTB lt rt) =
let l = findShortest lt
r = findShortest rt
in case (l,r) of
([], xs) -> xs -- [] is too short
(xs, []) -> xs
([x], _) -> [x] -- single elem is optimum
(_, [x]) -> [x]
(xs, ys) -> if lazyLengthCmp xs ys
then xs else ys
-- lazy variant of @\xs ys -> length xs <= length ys@
lazyLengthCmp [] _ = True
lazyLengthCmp _ [] = False
lazyLengthCmp (_:xs) (_:ys) = lazyLengthCmp xs ys
-- | A map of dependencies. Newtyped since the default monoid instance is not
-- appropriate. The monoid instance uses 'intersectVersionRanges'.
newtype DependencyMap = DependencyMap { unDependencyMap :: Map PackageName VersionRange }
deriving (Show, Read)
instance Monoid DependencyMap where
mempty = DependencyMap Map.empty
mappend = (Semi.<>)
instance Semigroup DependencyMap where
(DependencyMap a) <> (DependencyMap b) =
DependencyMap (Map.unionWith intersectVersionRanges a b)
toDepMap :: [Dependency] -> DependencyMap
toDepMap ds =
DependencyMap $ fromListWith intersectVersionRanges [ (p,vr) | Dependency p vr <- ds ]
fromDepMap :: DependencyMap -> [Dependency]
fromDepMap m = [ Dependency p vr | (p,vr) <- toList (unDependencyMap m) ]
simplifyCondTree :: (Monoid a, Monoid d) =>
(v -> Either v Bool)
-> CondTree v d a
-> (d, a)
simplifyCondTree env (CondNode a d ifs) =
mconcat $ (d, a) : mapMaybe simplifyIf ifs
where
simplifyIf (cnd, t, me) =
case simplifyCondition cnd env of
(Lit True, _) -> Just $ simplifyCondTree env t
(Lit False, _) -> fmap (simplifyCondTree env) me
_ -> error $ "Environment not defined for all free vars"
-- | Flatten a CondTree. This will resolve the CondTree by taking all
-- possible paths into account. Note that since branches represent exclusive
-- choices this may not result in a \"sane\" result.
ignoreConditions :: (Monoid a, Monoid c) => CondTree v c a -> (a, c)
ignoreConditions (CondNode a c ifs) = (a, c) `mappend` mconcat (concatMap f ifs)
where f (_, t, me) = ignoreConditions t
: maybeToList (fmap ignoreConditions me)
freeVars :: CondTree ConfVar c a -> [FlagName]
freeVars t = [ f | Flag f <- freeVars' t ]
where
freeVars' (CondNode _ _ ifs) = concatMap compfv ifs
compfv (c, ct, mct) = condfv c ++ freeVars' ct ++ maybe [] freeVars' mct
condfv c = case c of
Var v -> [v]
Lit _ -> []
CNot c' -> condfv c'
COr c1 c2 -> condfv c1 ++ condfv c2
CAnd c1 c2 -> condfv c1 ++ condfv c2
------------------------------------------------------------------------------
-- | A set of targets with their package dependencies
newtype TargetSet a = TargetSet [(DependencyMap, a)]
-- | Combine the target-specific dependencies in a TargetSet to give the
-- dependencies for the package as a whole.
overallDependencies :: TargetSet PDTagged -> DependencyMap
overallDependencies (TargetSet targets) = mconcat depss
where
(depss, _) = unzip $ filter (removeDisabledSections . snd) targets
removeDisabledSections :: PDTagged -> Bool
removeDisabledSections (Lib _) = True
removeDisabledSections (Exe _ _) = True
removeDisabledSections (Test _ t) = testEnabled t
removeDisabledSections (Bench _ b) = benchmarkEnabled b
removeDisabledSections PDNull = True
-- Apply extra constraints to a dependency map.
-- Combines dependencies where the result will only contain keys from the left
-- (first) map. If a key also exists in the right map, both constraints will
-- be intersected.
constrainBy :: DependencyMap -- ^ Input map
-> DependencyMap -- ^ Extra constraints
-> DependencyMap
constrainBy left extra =
DependencyMap $
Map.foldWithKey tightenConstraint (unDependencyMap left)
(unDependencyMap extra)
where tightenConstraint n c l =
case Map.lookup n l of
Nothing -> l
Just vr -> Map.insert n (intersectVersionRanges vr c) l
-- | Collect up the targets in a TargetSet of tagged targets, storing the
-- dependencies as we go.
flattenTaggedTargets :: TargetSet PDTagged ->
(Maybe Library, [(String, Executable)], [(String, TestSuite)]
, [(String, Benchmark)])
flattenTaggedTargets (TargetSet targets) = foldr untag (Nothing, [], [], []) targets
where
untag (_, Lib _) (Just _, _, _, _) = userBug "Only one library expected"
untag (deps, Lib l) (Nothing, exes, tests, bms) =
(Just l', exes, tests, bms)
where
l' = l {
libBuildInfo = (libBuildInfo l) { targetBuildDepends = fromDepMap deps }
}
untag (deps, Exe n e) (mlib, exes, tests, bms)
| any ((== n) . fst) exes =
userBug $ "There exist several exes with the same name: '" ++ n ++ "'"
| any ((== n) . fst) tests =
userBug $ "There exists a test with the same name as an exe: '" ++ n ++ "'"
| any ((== n) . fst) bms =
userBug $ "There exists a benchmark with the same name as an exe: '" ++ n ++ "'"
| otherwise = (mlib, (n, e'):exes, tests, bms)
where
e' = e {
buildInfo = (buildInfo e) { targetBuildDepends = fromDepMap deps }
}
untag (deps, Test n t) (mlib, exes, tests, bms)
| any ((== n) . fst) tests =
userBug $ "There exist several tests with the same name: '" ++ n ++ "'"
| any ((== n) . fst) exes =
userBug $ "There exists an exe with the same name as the test: '" ++ n ++ "'"
| any ((== n) . fst) bms =
userBug $ "There exists a benchmark with the same name as the test: '" ++ n ++ "'"
| otherwise = (mlib, exes, (n, t'):tests, bms)
where
t' = t {
testBuildInfo = (testBuildInfo t)
{ targetBuildDepends = fromDepMap deps }
}
untag (deps, Bench n b) (mlib, exes, tests, bms)
| any ((== n) . fst) bms =
userBug $ "There exist several benchmarks with the same name: '" ++ n ++ "'"
| any ((== n) . fst) exes =
userBug $ "There exists an exe with the same name as the benchmark: '" ++ n ++ "'"
| any ((== n) . fst) tests =
userBug $ "There exists a test with the same name as the benchmark: '" ++ n ++ "'"
| otherwise = (mlib, exes, tests, (n, b'):bms)
where
b' = b {
benchmarkBuildInfo = (benchmarkBuildInfo b)
{ targetBuildDepends = fromDepMap deps }
}
untag (_, PDNull) x = x -- actually this should not happen, but let's be liberal
------------------------------------------------------------------------------
-- Convert GenericPackageDescription to PackageDescription
--
data PDTagged = Lib Library
| Exe String Executable
| Test String TestSuite
| Bench String Benchmark
| PDNull
deriving Show
instance Monoid PDTagged where
mempty = PDNull
mappend = (Semi.<>)
instance Semigroup PDTagged where
PDNull <> x = x
x <> PDNull = x
Lib l <> Lib l' = Lib (l <> l')
Exe n e <> Exe n' e' | n == n' = Exe n (e <> e')
Test n t <> Test n' t' | n == n' = Test n (t <> t')
Bench n b <> Bench n' b' | n == n' = Bench n (b <> b')
_ <> _ = cabalBug "Cannot combine incompatible tags"
-- | Create a package description with all configurations resolved.
--
-- This function takes a `GenericPackageDescription` and several environment
-- parameters and tries to generate `PackageDescription` by finding a flag
-- assignment that result in satisfiable dependencies.
--
-- It takes as inputs a not necessarily complete specifications of flags
-- assignments, an optional package index as well as platform parameters. If
-- some flags are not assigned explicitly, this function will try to pick an
-- assignment that causes this function to succeed. The package index is
-- optional since on some platforms we cannot determine which packages have
-- been installed before. When no package index is supplied, every dependency
-- is assumed to be satisfiable, therefore all not explicitly assigned flags
-- will get their default values.
--
-- This function will fail if it cannot find a flag assignment that leads to
-- satisfiable dependencies. (It will not try alternative assignments for
-- explicitly specified flags.) In case of failure it will return a /minimum/
-- number of dependencies that could not be satisfied. On success, it will
-- return the package description and the full flag assignment chosen.
--
finalizePackageDescription ::
FlagAssignment -- ^ Explicitly specified flag assignments
-> (Dependency -> Bool) -- ^ Is a given dependency satisfiable from the set of
-- available packages? If this is unknown then use
-- True.
-> Platform -- ^ The 'Arch' and 'OS'
-> CompilerInfo -- ^ Compiler information
-> [Dependency] -- ^ Additional constraints
-> GenericPackageDescription
-> Either [Dependency]
(PackageDescription, FlagAssignment)
-- ^ Either missing dependencies or the resolved package
-- description along with the flag assignments chosen.
finalizePackageDescription userflags satisfyDep
(Platform arch os) impl constraints
(GenericPackageDescription pkg flags mlib0 exes0 tests0 bms0) =
case resolveFlags of
Right ((mlib, exes', tests', bms'), targetSet, flagVals) ->
Right ( pkg { library = mlib
, executables = exes'
, testSuites = tests'
, benchmarks = bms'
, buildDepends = fromDepMap (overallDependencies targetSet)
--TODO: we need to find a way to avoid pulling in deps
-- for non-buildable components. However cannot simply
-- filter at this stage, since if the package were not
-- available we would have failed already.
}
, flagVals )
Left missing -> Left missing
where
-- Combine lib, exes, and tests into one list of @CondTree@s with tagged data
condTrees = maybeToList (fmap (mapTreeData Lib) mlib0 )
++ map (\(name,tree) -> mapTreeData (Exe name) tree) exes0
++ map (\(name,tree) -> mapTreeData (Test name) tree) tests0
++ map (\(name,tree) -> mapTreeData (Bench name) tree) bms0
resolveFlags =
case resolveWithFlags flagChoices os arch impl constraints condTrees check of
Right (targetSet, fs) ->
let (mlib, exes, tests, bms) = flattenTaggedTargets targetSet in
Right ( (fmap libFillInDefaults mlib,
map (\(n,e) -> (exeFillInDefaults e) { exeName = n }) exes,
map (\(n,t) -> (testFillInDefaults t) { testName = n }) tests,
map (\(n,b) -> (benchFillInDefaults b) { benchmarkName = n }) bms),
targetSet, fs)
Left missing -> Left missing
flagChoices = map (\(MkFlag n _ d manual) -> (n, d2c manual n d)) flags
d2c manual n b = case lookup n userflags of
Just val -> [val]
Nothing
| manual -> [b]
| otherwise -> [b, not b]
--flagDefaults = map (\(n,x:_) -> (n,x)) flagChoices
check ds = let missingDeps = filter (not . satisfyDep) ds
in if null missingDeps
then DepOk
else MissingDeps missingDeps
{-
let tst_p = (CondNode [1::Int] [Distribution.Package.Dependency "a" AnyVersion] [])
let tst_p2 = (CondNode [1::Int] [Distribution.Package.Dependency "a" (EarlierVersion (Version [1,0] [])), Distribution.Package.Dependency "a" (LaterVersion (Version [2,0] []))] [])
let p_index = Distribution.Simple.PackageIndex.fromList [Distribution.Package.PackageIdentifier "a" (Version [0,5] []), Distribution.Package.PackageIdentifier "a" (Version [2,5] [])]
let look = not . null . Distribution.Simple.PackageIndex.lookupDependency p_index
let looks ds = mconcat $ map (\d -> if look d then DepOk else MissingDeps [d]) ds
resolveWithFlags [] Distribution.System.Linux Distribution.System.I386 (Distribution.Compiler.GHC,Version [6,8,2] []) [tst_p] looks ===> Right ...
resolveWithFlags [] Distribution.System.Linux Distribution.System.I386 (Distribution.Compiler.GHC,Version [6,8,2] []) [tst_p2] looks ===> Left ...
-}
-- | Flatten a generic package description by ignoring all conditions and just
-- join the field descriptors into on package description. Note, however,
-- that this may lead to inconsistent field values, since all values are
-- joined into one field, which may not be possible in the original package
-- description, due to the use of exclusive choices (if ... else ...).
--
-- TODO: One particularly tricky case is defaulting. In the original package
-- description, e.g., the source directory might either be the default or a
-- certain, explicitly set path. Since defaults are filled in only after the
-- package has been resolved and when no explicit value has been set, the
-- default path will be missing from the package description returned by this
-- function.
flattenPackageDescription :: GenericPackageDescription -> PackageDescription
flattenPackageDescription (GenericPackageDescription pkg _ mlib0 exes0 tests0 bms0) =
pkg { library = mlib
, executables = reverse exes
, testSuites = reverse tests
, benchmarks = reverse bms
, buildDepends = ldeps ++ reverse edeps ++ reverse tdeps ++ reverse bdeps
}
where
(mlib, ldeps) = case mlib0 of
Just lib -> let (l,ds) = ignoreConditions lib in
(Just (libFillInDefaults l), ds)
Nothing -> (Nothing, [])
(exes, edeps) = foldr flattenExe ([],[]) exes0
(tests, tdeps) = foldr flattenTst ([],[]) tests0
(bms, bdeps) = foldr flattenBm ([],[]) bms0
flattenExe (n, t) (es, ds) =
let (e, ds') = ignoreConditions t in
( (exeFillInDefaults $ e { exeName = n }) : es, ds' ++ ds )
flattenTst (n, t) (es, ds) =
let (e, ds') = ignoreConditions t in
( (testFillInDefaults $ e { testName = n }) : es, ds' ++ ds )
flattenBm (n, t) (es, ds) =
let (e, ds') = ignoreConditions t in
( (benchFillInDefaults $ e { benchmarkName = n }) : es, ds' ++ ds )
-- This is in fact rather a hack. The original version just overrode the
-- default values, however, when adding conditions we had to switch to a
-- modifier-based approach. There, nothing is ever overwritten, but only
-- joined together.
--
-- This is the cleanest way i could think of, that doesn't require
-- changing all field parsing functions to return modifiers instead.
libFillInDefaults :: Library -> Library
libFillInDefaults lib@(Library { libBuildInfo = bi }) =
lib { libBuildInfo = biFillInDefaults bi }
exeFillInDefaults :: Executable -> Executable
exeFillInDefaults exe@(Executable { buildInfo = bi }) =
exe { buildInfo = biFillInDefaults bi }
testFillInDefaults :: TestSuite -> TestSuite
testFillInDefaults tst@(TestSuite { testBuildInfo = bi }) =
tst { testBuildInfo = biFillInDefaults bi }
benchFillInDefaults :: Benchmark -> Benchmark
benchFillInDefaults bm@(Benchmark { benchmarkBuildInfo = bi }) =
bm { benchmarkBuildInfo = biFillInDefaults bi }
biFillInDefaults :: BuildInfo -> BuildInfo
biFillInDefaults bi =
if null (hsSourceDirs bi)
then bi { hsSourceDirs = [currentDir] }
else bi
| martinvlk/cabal | Cabal/Distribution/PackageDescription/Configuration.hs | bsd-3-clause | 26,656 | 0 | 20 | 7,285 | 6,295 | 3,389 | 2,906 | 376 | 19 |
module Tools.Quarry.Cache
( CacheTable
, emptyCache
, withCache
) where
import Control.Concurrent.MVar
import qualified Data.Map as M
import Control.Monad.Trans
type CacheTable a b = MVar (M.Map a b)
emptyCache :: MonadIO m => m (CacheTable a b)
emptyCache = liftIO $ newMVar M.empty
withCache :: (Ord a, MonadIO m) => CacheTable a b -> a -> m (Maybe b) -> m (Maybe b)
withCache t k f = do
table <- liftIO $ readMVar t
case M.lookup k table of
Nothing -> do mv <- f
case mv of
Nothing -> return mv
Just v -> liftIO $ modifyMVar_ t (return . M.insert k v) >> return mv
Just v -> return $ Just v
| vincenthz/quarry | Tools/Quarry/Cache.hs | bsd-3-clause | 714 | 0 | 21 | 235 | 275 | 139 | 136 | 19 | 3 |
-- Copyright (c) Microsoft. All rights reserved.
-- Licensed under the MIT license. See LICENSE file in the project root for full license information.
{-# LANGUAGE QuasiQuotes, OverloadedStrings #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}
{-|
Copyright : (c) Microsoft
License : MIT
Maintainer : adamsap@microsoft.com
Stability : provisional
Portability : portable
Helper functions for creating common structures useful in code generation.
These functions often operate on 'Text' objects.
-}
module Language.Bond.Codegen.Util
( commonHeader
, commaSep
, newlineSep
, commaLineSep
, newlineSepEnd
, newlineBeginSep
, doubleLineSep
, doubleLineSepEnd
, uniqueName
, uniqueNames
, indent
, newLine
, slashForward
) where
import Data.Int (Int64)
import Data.Word
import Prelude
import Data.Text.Lazy (Text, justifyRight)
import Text.Shakespeare.Text
import Paths_bond (version)
import Data.Version (showVersion)
import Language.Bond.Util
instance ToText Word16 where
toText = toText . show
instance ToText Double where
toText = toText . show
instance ToText Integer where
toText = toText . show
indent :: Int64 -> Text
indent n = justifyRight (4 * n) ' ' ""
commaLine :: Int64 -> Text
commaLine n = [lt|,
#{indent n}|]
newLine :: Int64 -> Text
newLine n = [lt|
#{indent n}|]
doubleLine :: Int64 -> Text
doubleLine n = [lt|
#{indent n}|]
-- | Separates elements of a list with a comma.
commaSep :: (a -> Text) -> [a] -> Text
commaSep = sepBy ", "
newlineSep, commaLineSep, newlineSepEnd, newlineBeginSep, doubleLineSep, doubleLineSepEnd
:: Int64 -> (a -> Text) -> [a] -> Text
-- | Separates elements of a list with new lines. Starts new lines at the
-- specified indentation level.
newlineSep = sepBy . newLine
-- | Separates elements of a list with comma followed by a new line. Starts
-- new lines at the specified indentation level.
commaLineSep = sepBy . commaLine
-- | Separates elements of a list with new lines, ending with a new line.
-- Starts new lines at the specified indentation level.
newlineSepEnd = sepEndBy . newLine
-- | Separates elements of a list with new lines, beginning with a new line.
-- Starts new lines at the specified indentation level.
newlineBeginSep = sepBeginBy . newLine
-- | Separates elements of a list with two new lines. Starts new lines at
-- the specified indentation level.
doubleLineSep = sepBy . doubleLine
-- | Separates elements of a list with two new lines, ending with two new
-- lines. Starts new lines at the specified indentation level.
doubleLineSepEnd = sepEndBy . doubleLine
-- | Returns common header for generated files using specified single-line
-- comment lead character(s) and a file name.
commonHeader :: ToText a => a -> a -> a -> Text
commonHeader c input output = [lt|
#{c}------------------------------------------------------------------------------
#{c} This code was generated by a tool.
#{c}
#{c} Tool : Bond Compiler #{showVersion version}
#{c} Input filename: #{input}
#{c} Output filename: #{output}
#{c}
#{c} Changes to this file may cause incorrect behavior and will be lost when
#{c} the code is regenerated.
#{c} <auto-generated />
#{c}------------------------------------------------------------------------------
|]
-- | Given an intended name and a list of already taken names, returns a
-- unique name. Assumes that it's legal to append digits to the end of the
-- intended name.
uniqueName :: String -> [String] -> String
uniqueName baseName taken = go baseName (0::Integer)
where go name counter
| not (name `elem` taken) = name
| otherwise = go newName (counter + 1)
where newName = baseName ++ (show counter)
-- | Given a list of names with duplicates and a list of reserved names,
-- create a list of unique names using the uniqueName function.
uniqueNames :: [String] -> [String] -> [String]
uniqueNames names reservedInit = reverse $ go names [] reservedInit
where
go [] acc _ = acc
go (name:remaining) acc reservedAcc = go remaining (newName:acc) (newName:reservedAcc)
where
newName = uniqueName name reservedAcc
-- | Converts all file path slashes to forward slashes.
slashForward :: String -> String
slashForward path = map replace path
where replace '\\' = '/'
replace c = c
| jdubrule/bond | compiler/src/Language/Bond/Codegen/Util.hs | mit | 4,356 | 0 | 12 | 851 | 702 | 407 | 295 | 49 | 2 |
module Top.Menu where
import Data.SelectTree
import Data.List as List
import Text.Logging
import Control.Monad.Error
import System.FilePath
import Utils
import Base
import Editor.Scene (initEditorScene)
import Editor.Menu (editLevel)
import Editor.Pickle
import Editor.Pickle.LevelFile
import Editor.Pickle.LevelLoading
import Top.Game (playLevel)
import Distribution.AutoUpdate
import Distribution.AutoUpdate.MenuItem
import StoryMode.Menus
import LevelServer.Client
-- | top level application state
startAppState :: Application -> AppState
startAppState app = NoGUIAppState $ do
mLevel <- gets play_level
play_levelA %= Nothing
case mLevel of
Nothing -> return $ mainMenu app 0
Just file -> io $ play app (mainMenu app 0) <$> mkUnknownLevel file
mainMenu :: Application -> Int -> AppState
mainMenu app ps =
menuAppState app MainMenu Nothing (
MenuItem (r $ storyModeMenuItem) (storyMode app (play app) . this) :
MenuItem (r $ p "community levels") (community app 0 . this) :
MenuItem (r $ p "options") (generalOptions app 0 . this) :
MenuItem (r autoUpdateMenuItem) (autoUpdate app . this) :
MenuItem (r $ p "credits") (credits app . this) :
MenuItem (r $ p "quit") (const $ FinalAppState) :
[]) ps
where
r :: Renderable a => a -> RenderableInstance
r = renderable
this :: Int -> AppState
this = mainMenu app
credits :: Application -> Parent -> AppState
credits app parent = NoGUIAppState $ do
file <- rm2m $ getDataFileName ("manual" </> "credits" <.> "txt")
prose <- io $ pFile file
return $ scrollingAppState app prose parent
community :: Application -> Int -> Parent -> AppState
community app ps parent =
menuAppState app (NormalMenu (p "community levels") Nothing) (Just parent) (
MenuItem (p "play levels") (selectLevelPlay app . this) :
MenuItem (p "download levels") (downloadedLevels app (play app) 0 . this) :
MenuItem (p "editor") (selectLevelEdit app 0 . this) :
[]) ps
where
this ps = community app ps parent
-- | select a saved level.
selectLevelPlay :: Application -> Parent -> AppState
selectLevelPlay app parent = NoGUIAppState $ rm2m $ do
levelFiles <- lookupPlayableLevels
return $ if null $ ftoList levelFiles then
message app [p "no levels found :("] parent
else
treeToMenu app parent (p "choose a level")
(\ lf -> showLevelTreeForMenu <$> getHighScores <*> pure lf)
levelFiles (play app) 0
selectLevelEdit :: Application -> Int -> Parent -> AppState
selectLevelEdit app ps parent = menuAppState app menuType (Just parent) (
MenuItem (p "new level") (pickNewLevelEdit app . this) :
MenuItem (p "edit existing level") (selectExistingLevelEdit app . this) :
[]) ps
where
menuType = NormalMenu (p "editor") (Just $ p "create a new level or edit an existing one?")
this ps = selectLevelEdit app ps parent
pickNewLevelEdit :: Application -> AppState -> AppState
pickNewLevelEdit app parent = NoGUIAppState $ rm2m $ do
pathToEmptyLevel <- getDataFileName (templateLevelsDir </> "empty.nl")
templateLevelPaths <- filter (not . ("empty.nl" `List.isSuffixOf`)) <$>
getDataFiles templateLevelsDir (Just ".nl")
return $ menuAppState app menuType (Just parent) (
map mkMenuItem templateLevelPaths ++
MenuItem (p "empty level") (const $ edit app parent (TemplateLevel pathToEmptyLevel)) :
[]) 0
where
menuType = NormalMenu (p "new level") (Just $ p "choose a template to start from")
mkMenuItem templatePath =
MenuItem
(pVerbatim $ takeBaseName templatePath)
(const $ edit app parent (TemplateLevel templatePath))
selectExistingLevelEdit app parent = NoGUIAppState $ io $ do
editableLevels <- lookupUserLevels "your levels"
return $ if null $ ftoList editableLevels then
message app [p "no levels found :("] parent
else
treeToMenu app parent (p "choose a level to edit") (return . pVerbatim . (^. labelA))
editableLevels
(\ parent chosen -> edit app parent chosen) 0
-- | loads a level and plays it.
play :: Application -> Parent -> LevelFile -> AppState
play app parent levelFile = loadingEditorScene app levelFile parent (playLevel app parent False)
edit :: Application -> Parent -> LevelFile -> AppState
edit app parent levelFile = loadingEditorScene app levelFile parent (editLevel app)
-- | load a level, got to playing state afterwards
-- This AppState is a hack to do things from the logic thread
-- in the rendering thread. Cause Qt's pixmap loading is not threadsafe.
loadingEditorScene :: Application -> LevelFile -> AppState
-> (EditorScene Sort_ -> AppState) -> AppState
loadingEditorScene app file abortion follower =
appState (busyMessage $ p "loading...") $ io $ do
eGrounds <- runErrorT $ loadByFilePath (ftoList $ allSorts app) (getAbsoluteFilePath file)
case eGrounds of
Right diskLevel ->
-- level successfully loaded
return $ follower $ initEditorScene (allSorts app) file diskLevel
Left errMsg -> do
fmapM_ (logg Error) $ fmap getString errMsg
return $ message app errMsg abortion
| geocurnoff/nikki | src/Top/Menu.hs | lgpl-3.0 | 5,346 | 0 | 18 | 1,259 | 1,594 | 799 | 795 | 103 | 2 |
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE helpset PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp HelpSet Version 2.0//EN" "http://java.sun.com/products/javahelp/helpset_2_0.dtd">
<helpset version="2.0" xml:lang="sr-SP">
<title>Active Scan Rules - Alpha | ZAP Extension</title>
<maps>
<homeID>top</homeID>
<mapref location="map.jhm"/>
</maps>
<view>
<name>TOC</name>
<label>Contents</label>
<type>org.zaproxy.zap.extension.help.ZapTocView</type>
<data>toc.xml</data>
</view>
<view>
<name>Index</name>
<label>Index</label>
<type>javax.help.IndexView</type>
<data>index.xml</data>
</view>
<view>
<name>Search</name>
<label>Search</label>
<type>javax.help.SearchView</type>
<data engine="com.sun.java.help.search.DefaultSearchEngine">
JavaHelpSearch
</data>
</view>
<view>
<name>Favorites</name>
<label>Favorites</label>
<type>javax.help.FavoritesView</type>
</view>
</helpset> | kingthorin/zap-extensions | addOns/ascanrulesAlpha/src/main/javahelp/org/zaproxy/zap/extension/ascanrulesAlpha/resources/help_sr_SP/helpset_sr_SP.hs | apache-2.0 | 986 | 83 | 53 | 162 | 403 | 212 | 191 | -1 | -1 |
module Invariant where
{-@ using [a] as {v : [a] | (len v) > 0 } @-}
xs = []
add x xs = x:xs
bar xs = head xs
foo xs = tail xs
| mightymoose/liquidhaskell | tests/neg/StreamInvariants.hs | bsd-3-clause | 133 | 0 | 5 | 41 | 48 | 25 | 23 | 5 | 1 |
-- Salesman's Travel
-- http://www.codewars.com/kata/56af1a20509ce5b9b000001e
module Codewars.G964.Salesmantravel where
import Data.List (intercalate, isInfixOf)
import Data.List.Split (splitOn)
travel :: String -> String -> String
travel r zipcode = zipcode ++ ":" ++ cityStreat ss ++ "/" ++ houses ss
where ss | (== 8) . length $ zipcode = filter (zipcode `isInfixOf`) . splitOn "," $ r
| otherwise = []
cityStreat [] = []
cityStreat xs = intercalate "," . map ( unwords . init . init . tail . words) $ xs
houses [] = []
houses xs = intercalate "," . map (head . words) $ xs
| gafiatulin/codewars | src/6 kyu/Salesmantravel.hs | mit | 638 | 0 | 14 | 159 | 228 | 119 | 109 | 11 | 3 |
{-# LANGUAGE OverloadedStrings, QuasiQuotes #-}
module StrSpec (spec) where
import Test.Hspec
import Test.HUnit
import Str
spec :: Spec
spec = do
testStrT
testStrMultiT
testStrCrMultiT
testStrCrMultiBlank
testStrT :: Spec
testStrT = it "works on basic Text" $ do
let converted = [strT|
test string|]
let expected = "test string"
assertEqual "doesn't match" expected converted
testStrMultiT :: Spec
testStrMultiT = it "works on two line Text" $ do
let converted = [strT|
test string
second line|]
let expected = "test string\n second line"
assertEqual "doesn't match" expected converted
testStrCrT :: Spec
testStrCrT = it "works on basic CR Text" $ do
let converted = [strCrT|
test string|]
let expected = "test string\n"
assertEqual "doesn't match" expected converted
testStrCrMultiT :: Spec
testStrCrMultiT = it "works on two line CR Text" $ do
let converted = [strCrT|
test string
second line|]
let expected = "test string\n second line\n"
assertEqual "doesn't match" expected converted
testStrCrMultiBlank :: Spec
testStrCrMultiBlank = it "works with blank lines" $ do
let converted = [strCrT|
test string
second line|]
let expected = "test string\n\n second line\n"
assertEqual "doesn't match" expected converted
| emmanueltouzery/cigale-timesheet | tests/StrSpec.hs | mit | 1,526 | 0 | 10 | 490 | 290 | 150 | 140 | 36 | 1 |
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Conduit as C
import qualified Data.Conduit.List as CL
import qualified Data.Text.IO as T
import qualified Data.Text as T
import Control.Monad.IO.Class (liftIO)
import Text.JSON.Yocto
import Web.Twitter.Conduit (stream, statusesFilterByTrack)
import Common
import Control.Lens ((^!), (^.), act)
import Data.Map ((!))
import Data.List (isInfixOf, or)
import Web.Twitter.Types
main :: IO ()
main = do
let query = "london"
T.putStrLn $ T.concat [ "Streaming Tweets that match \"", query, "\"..."]
analyze query
analyze :: T.Text -> IO ()
analyze query = runTwitterFromEnv' $ do
src <- stream $ statusesFilterByTrack query
src C.$$+- CL.mapM_ (^! act (liftIO . process))
process :: StreamingAPI -> IO ()
process (SStatus s) = printStatus s
process s = return ()
parseStatus :: Status -> T.Text
parseStatus (s) = T.concat ["@", (userScreenName $ statusUser s), ": ", (statusText s)]
printStatus :: Status -> IO ()
printStatus (s) = T.putStrLn $ parseStatus s
| DanielTomlinson/Twitter-Stream-Haskell | Main.hs | mit | 1,022 | 1 | 13 | 159 | 381 | 212 | 169 | 29 | 1 |
module Haskeroids.Bullet
( Bullet
, initBullet
, updateBullets
) where
import Haskeroids.Render
import Haskeroids.Collision
import Haskeroids.Particles
import Haskeroids.Geometry
import Haskeroids.Geometry.Body
import Control.Applicative
-- | Speed of a bullet in pixels per tick
bulletSpeed :: Float
bulletSpeed = 10.0
-- | The visual line for a bullet
bulletLine :: LineSegment
bulletLine = LineSegment ((0,-bulletSpeed/2.0),(0,bulletSpeed/2.0))
-- | The collision line for a bullet
bulletLine' :: LineSegment
bulletLine' = LineSegment ((0,-bulletSpeed/2.0),(0,bulletSpeed))
-- | The maximum number of ticks that a bullet stays active
bulletMaxLife :: Int
bulletMaxLife = 70
data Bullet = Bullet
{ bulletLife :: Int
, bulletBody :: Body
}
instance LineRenderable Bullet where
interpolatedLines f (Bullet _ b) = [transform b' bulletLine] where
b' = interpolatedBody f b
instance Collider Bullet where
collisionCenter = bodyPos . bulletBody
collisionRadius = const bulletSpeed
collisionLines b = return $ transform (bulletBody b) bulletLine'
collisionParticles b = do
let body = bulletBody b
emitDir = bodyAngle body + pi
addParticles 5 NewParticle
{ npPosition = bodyPos body
, npRadius = 3
, npDirection = emitDir
, npSpread = pi/2
, npSpeed = (2.0, 5.0)
, npLifeTime = (5, 15)
, npSize = (1,2)
}
-- | Initialize a new bullet with the given position and direction
initBullet :: Vec2 -> Float -> Bullet
initBullet pos angle = Bullet bulletMaxLife body where
body = Body pos' angle vel 0 pos' angle
vel = polar bulletSpeed angle
pos' = pos /+/ polar 12.0 angle
-- | Update a bullet to a new position
updateBullet :: Bullet -> Bullet
updateBullet (Bullet l b) = Bullet (l-1) $ updateBody b
-- | Update a list of bullets
updateBullets :: [Bullet] -> [Bullet]
updateBullets = filter bulletActive . map updateBullet
-- | Test wether a bullet is still active
bulletActive :: Bullet -> Bool
bulletActive (Bullet l _) = l > 0
| shangaslammi/haskeroids | Haskeroids/Bullet.hs | mit | 2,151 | 0 | 12 | 535 | 551 | 306 | 245 | 50 | 1 |
-- -------------------------------------------------------------------------------------
-- Author: Sourabh S Joshi (cbrghostrider); Copyright - All rights reserved.
-- For email, run on linux (perl v5.8.5):
-- perl -e 'print pack "H*","736f75726162682e732e6a6f73686940676d61696c2e636f6d0a"'
-- -------------------------------------------------------------------------------------
-- same O(n) algorithm as before, but using arrays instead of lists, and it runs lightning fast!
import Data.Ratio
import Data.List
import Data.Array
type Accum = (Int, Int) -- (number of 1s in window, total accumulated combos)
foldFunc :: Array Int Char -> Int -> Accum -> Int -> Accum
foldFunc arr k (nOnesWindow, accumCombos) newInd =
let droppedInd = newInd - (k+1)
onesWindow = if (arr!droppedInd) == '1' then nOnesWindow - 1 else nOnesWindow
newOnesWindow = if (arr!newInd) == '1' then onesWindow + 1 else onesWindow
addCombos = if (arr!newInd) == '1' then newOnesWindow * 2 - 1 else 0
in (newOnesWindow, accumCombos + addCombos)
computeProbability :: Array Int Char -> String -> Int -> Int -> Ratio Int
computeProbability arr ss n k =
let num1sFstkp1= length . filter (== '1') . take (k + 1) $ ss ++ (repeat '0')
initCombos = num1sFstkp1 ^ 2
restCombos = snd . foldl' (foldFunc arr k) (num1sFstkp1, 0) $ drop (k+1) [0..(length ss - 1)]
nume = if length ss <= (k+1) then initCombos else initCombos + restCombos
deno = (length ss) ^ 2
in (nume % deno)
runTests :: Int -> IO ()
runTests 0 = return ()
runTests nt = do
nkstr <- getLine
ss <- getLine
let [n, k] = map read . words $ nkstr
let arr = listArray (0, length ss -1) ss
let rat = computeProbability arr ss n k
putStrLn $ (show . numerator $ rat) ++ "/" ++ (show . denominator $ rat)
runTests (nt-1)
main :: IO ()
main = do
tcstr <- getLine
runTests (read tcstr)
| cbrghostrider/Hacking | HackerRank/Mathematics/Probability/sherlockAndProbabilityArrays.hs | mit | 1,972 | 1 | 14 | 462 | 623 | 324 | 299 | 33 | 4 |
{-# LANGUAGE ForeignFunctionInterface #-}
module Main where
import Control.Concurrent (MVar, takeMVar, newMVar)
import Foreign
import Foreign.C
type Callback = CInt -> IO ()
-- | rs_function function from Rust library
foreign import ccall "rs_function"
rs_function :: Int -> IO ()
-- | Register a callback within the rust library via rs_register
foreign import ccall "rs_register"
rs_register :: FunPtr Callback -> IO ()
-- | Create a callback wrapper to be called by the Rust library
foreign import ccall "wrapper"
makeCallback :: Callback -> IO (FunPtr Callback)
callback :: MVar CInt -> CInt -> IO ()
callback mi i = print (msg i)
>> takeMVar mi
>>= print . statemsg
where
msg = (++) "Haskell-callback invoked with value: " . show
statemsg = (++) " Haskell-callback carrying state: " . show
main :: IO ()
main = do
rs_function 0
st <- newMVar 42
rs_register =<< makeCallback (callback st)
rs_function 1
putStrLn "Haskell main done"
| creichert/haskellrustdemo | Rust.hs | mit | 1,031 | 0 | 10 | 250 | 261 | 134 | 127 | 25 | 1 |
-- Copyright © 2013 Bart Massey
-- [This work is licensed under the "MIT License"]
-- Please see the file COPYING in the source
-- distribution of this software for license terms.
-- Priority Queue for O'Neill Sieve
module MPQ (empty, deleteMin, insert, deleteMinAndInsert, findMin, minKey)
where
-- | I choose to use the priority queue functionality of
-- Data.Map, which is mostly sufficient for this
-- example. Note that deleteMinAndInsert is not supported by
-- Data.Map, so the "heap speedup" will not apply. Rather
-- than doing "multi-map" tricks, this version simply merges
-- lists that share the same key.
import qualified Data.Map as M
import Data.Word
-- | This merge keeps only one copy of
-- duplicate elements. It assumes that
-- neither list contains duplicates
-- to begin with.
merge :: Ord a => [a] -> [a] -> [a]
merge xs1 [] = xs1
merge [] xs2 = xs2
merge l1@(x1 : xs1) l2@(x2 : xs2) =
case compare x1 x2 of
LT -> x1 : merge xs1 l2
EQ -> x1 : merge xs1 xs2
GT -> x2 : merge l1 xs2
empty :: M.Map Word64 [Word64]
empty = M.empty
deleteMin :: M.Map Word64 [Word64] ->
M.Map Word64 [Word64]
deleteMin q = M.deleteMin q
insert :: Word64 -> [Word64] ->
M.Map Word64 [Word64] ->
M.Map Word64 [Word64]
insert k xs q =
case M.lookup k q of
Nothing -> M.insert k xs q
Just xs' -> M.insert k (merge xs xs') q
deleteMinAndInsert :: Word64 -> [Word64] ->
M.Map Word64 [Word64] ->
M.Map Word64 [Word64]
deleteMinAndInsert k v q =
insert k v $ deleteMin q
findMin :: M.Map Word64 [Word64] -> (Word64, [Word64])
findMin q = M.findMin q
minKey :: M.Map Word64 [Word64] -> Word64
minKey q = fst $ M.findMin q
| BartMassey/genuine-sieve | MPQ.hs | mit | 1,730 | 0 | 10 | 413 | 498 | 266 | 232 | 32 | 3 |
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE TemplateHaskell #-}
import Control.Applicative
import Control.Concurrent
import Control.Lens
import qualified Control.Exception as Exception
import Control.Monad.IO.Class
import qualified System.IO as IO
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BSC
import Data.Maybe
import qualified Data.Text as T
import qualified Data.Text.Encoding as T
import Data.Time
import Database.PostgreSQL.Simple
import Servant.Server.Internal.SnapShims
import Snap
import Snap.Http.Server
import Snap.Snaplet.Auth
import Snap.Snaplet.Auth.Backends.PostgresqlSimple
import Snap.Snaplet.Config
import Snap.Snaplet.Session
import Snap.Snaplet.Session.Backends.CookieSession
import Snap.Snaplet.PostgresqlSimple
import Snap.Snaplet.Heist
import Snap.Util.FileServe
import Servant.API
import Servant (serveSnap, Server)
import Text.Read
#ifdef DEVELOPMENT
--import Snap.Loader.Dynamic (loadSnapTH)
#else
import Snap.Loader.Static (loadSnapTH)
#endif
import Snap.Loader.Static (loadSnapTH)
import Application
import Db
import Handler
import Types.Poll
import Types.Channel
--------------------------------------------------------------------------------
routes :: [(BS.ByteString, Handler Pollock Pollock ())]
routes = [ ("/" , handlerIndex)
, ("/app" , with auth handlerDashboard)
, ("/signup" , with auth handlerSignup)
, ("/login" , with auth handlerLogin)
, ("/logout" , with auth handlerLogout)
, ("/polls/new" , with auth handlerPollNew)
, ("/poll/view/:pollid" , with auth handlerPollView)
, ("/poll/delete/:pollid" , with auth handlerPollDelete)
, ("static" , serveDirectory "static")
, (".well-known" , serveDirectory ".well-known")
]
-- | Build a new Pollock snaplet.
appInit :: SnapletInit Pollock Pollock
appInit =
makeSnaplet "Pollock" "Best polling system!" Nothing $ do
h <- nestSnaplet "heist" heist $
heistInit "templates"
d <- nestSnaplet "db" db $
pgsInit
s <- nestSnaplet "sess" sess $
initCookieSessionManager "site_key.txt" "sess" Nothing (Just 3600)
a <- nestSnaplet "auth" auth $
initPostgresAuth sess d
addRoutes routes
addAuthSplices h auth -- add <ifLoggedIn> <ifLoggedOut> tags support
return $ Pollock { _heist = h, _sess = s, _auth = a , _db=d}
getConf :: IO (Config Snap AppConfig)
getConf = commandLineAppConfig defaultConfig
getActions :: Config Snap AppConfig -> IO (Snap (), IO ())
getActions conf = do
(msgs, site, cleanup) <- runSnaplet (appEnvironment =<< getOther conf) appInit
IO.hPutStrLn IO.stderr (T.unpack msgs)
return (site, cleanup)
main :: IO ()
main = do
(conf, site, cleanup) <- $(loadSnapTH [| getConf |] 'getActions ["snaplets/heist/templates"])
_ <- Exception.try (httpServe conf site) :: IO (Either Exception.SomeException ())
cleanup
| sigrlami/pollock | app-snap/src/Main.hs | mit | 3,429 | 1 | 12 | 957 | 761 | 431 | 330 | 75 | 1 |
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE DeriveGeneric #-}
-- | This module implements type checking on the 'Term' language.
module HM.Typecheck
( -- * Type checking operations
NameSource(..)
, TcError(..)
, typecheck
) where
import Control.Lens
import Control.Monad
import Control.Monad.EitherK
import Control.Monad.Trans.Class
import Control.Unification
import Control.Unification.IntVar
import Data.Functor.Fixedpoint
import Data.List ((\\))
import Data.Map (Map)
import qualified Data.Map as Map
import qualified Data.Set as Set
import GHC.Generics
import HM.Annotation
import HM.Env
import HM.Mono
import HM.Poly
import HM.Term
-- | Class for types that can be used as type variables. The
-- 'nameSource' list will be consulted during generalization
-- when picking names for the newly quantified variables.
class NameSource a where nameSource :: [a]
instance NameSource Integer where nameSource = [0..]
-- | Errors that can occur during type checking.
data TcError e t v
-- | A cyclic term was encountered. See 'occursFailure'
= OccursError IntVar (UMono t v)
-- | The top-most level of the terms do not match. See 'mismatchFailure'
| MismatchError (MonoF t v (UMono t v)) (MonoF t v (UMono t v))
-- | A binder didn't exist in the environment
| EnvironmentError e
deriving (Show, Eq, Generic)
instance Fallible (MonoF t v) IntVar (TcError e t v) where
occursFailure = OccursError
mismatchFailure x y = MismatchError x y
-- | Type used when running the type checker which provides
-- unification and exception handling
type M e t v = EitherKT (TcError e t v) (IntBindingT (MonoF t v) Identity)
-- | Unwrap 'M'
runM :: M e t v a -> Either (TcError e t v) a
runM = runIdentity . evalIntBindingT . runEitherKT
-- | Generate a fresh unification type variable.
freshType :: (Eq t, Eq v) => M e t v (UMono t v)
freshType = UVar <$> lift freeVar
-- | Infer the most general type for a given term within the given
-- environment. The returned type will be monomorphic but will
-- contain unification variables for the unconstrained parts.
inferType :: (NameSource v, Show e, TypeCon t, Ord e, Ord v) =>
Env e t v {- ^ Current typing environment -} ->
(a -> Poly t v) {- ^ typing function for atoms -} ->
Annot (UMono t v) (TermF a e) {- ^ Term to type check -} ->
M e t v (UMono t v) {- ^ Computed type of given term -}
inferType env atomType (Annot u e) =
case e of
AtomF a ->
do t <- instantiate (polyUnfreeze (atomType a))
u =:= t
VarF v ->
do s <- case view (at v) env of
Nothing -> throwEitherKT (EnvironmentError v)
Just a -> return a
t <- instantiate s
u =:= t
AppF e1 e2 ->
do t1 <- inferType env atomType e1
t2 <- inferType env atomType e2
_ <- t1 =:= UMonoApp mkFunction [t2,u]
return u
AbsF x e ->
do argType <- freshType
let env' = set (at x) (Just (UPoly [] argType)) env
t1 <- inferType env' atomType e
u =:= UMonoApp mkFunction [argType,t1]
LetF x e1 e2 ->
do t <- freshType
t1 <- inferType (set (at x) (Just (UPoly [] t)) env) atomType e1
t2 <- t =:= t1
t3 <- generalize env t2
let env' = set (at x) (Just t3) env
t4 <- inferType env' atomType e2
u =:= t4
-- | Generalize a type by universally quantifying the
-- unconstrainted unification variables in the given
-- type.
generalize :: (Eq t, Ord v, NameSource v) =>
Env e t v {- ^ typing environment -} ->
UMono t v {- ^ type to generalize -} ->
M e t v (UPoly t v) {- ^ generalized type -}
generalize env t = lift $
do envVars <- getFreeVarsAll (toListOf (each . upolyMono) env)
termVars <- getFreeVars t
let freeVars = termVars \\ envVars
usedVars = umonoFreeVars t
availNames = filter (`Set.notMember` usedVars) nameSource
pickedNames = zipWith const availNames freeVars
zipWithM_ (\x y -> bindVar x (UMonoVar y)) freeVars pickedNames
return (UPoly pickedNames t)
-- | Substitute all of the quantified types variables in a polymorphic
-- type with unification variables.
instantiate :: (Eq t, Ord v) => UPoly t v -> M e t v (UMono t v)
instantiate (UPoly vs t) =
do subst <- Map.fromList <$> traverse aux vs
substUMono subst <$> applyBindings t
where
aux v =
do u <- lift freeVar
return (v,u)
-- | Compute the most general type of a term assuming a given
-- environment.
typecheck' ::
(Show e, NameSource v, Ord e, Ord v, TypeCon t, Eq t) =>
Map e (Poly t v) {- ^ typing environment -} ->
(a -> Poly t v) ->
Annot (UMono t v) (TermF a e) {- ^ term to typecheck -} ->
M e t v (Poly t v, Annot (Mono t v) (TermF a e)) {- ^ most general type of term -}
typecheck' env atomType term =
do let uenv = envFromMap env
t <- inferType uenv atomType term
UPoly vs t1 <- generalize uenv t
t2 <- applyBindings t1
s <- case freeze t2 of
Nothing -> fail "typecheck: implementation bug"
Just t3 -> return (Poly vs t3)
term' <- hmapM (traverseAnnot freezeM) term
return (s, term')
freezeM :: (Eq t, Eq v) => UMono t v -> M e t v (Mono t v)
freezeM t =
do t' <- applyBindings t
let Just m = freeze t'
return m
addTypes ::
(Eq t, Ord v) =>
Term a e ->
M e t v (Annot (UMono t v) (TermF a e))
addTypes = cataM $ \t ->
do u <- freshType
return (Annot u t)
-- | Compute the most general type of a term assuming a given
-- environment.
typecheck ::
(Show e, NameSource v, Ord e, Ord v, TypeCon t, Eq t) =>
Map e (Poly t v) {- ^ typing environment -} ->
(a -> Poly t v) {- ^ typing rule for primitives -} ->
Term a e {- ^ term to typecheck -} ->
Either (TcError e t v) (Poly t v, Annot (Mono t v) (TermF a e))
{- ^ most general type of term or error explaining failure -}
typecheck env atomTypes term = runM (typecheck' env atomTypes =<< addTypes term)
| glguy/hm | src/HM/Typecheck.hs | mit | 6,191 | 0 | 18 | 1,729 | 1,974 | 996 | 978 | 128 | 6 |
-- Hutton's Razor
-- http://www.codewars.com/kata/543833d86f032f0942000264/
module Razor where
data Razor = Lit Int | Add Razor Razor
interpret :: Razor -> Int
interpret (Lit a) = a
interpret (Add a b) = interpret a + interpret b
pretty :: Razor -> String
pretty (Lit a) = show a
pretty (Add a b) = "(" ++ pretty a ++ "+" ++ pretty b ++ ")"
| gafiatulin/codewars | src/6 kyu/Razor.hs | mit | 348 | 0 | 9 | 71 | 135 | 69 | 66 | 8 | 1 |
module Ch23.FizzBuzz where
import Control.Monad
import Control.Monad.Trans.State
fizzBuzz :: Integer -> String
fizzBuzz n
| n `mod` 15 == 0 = "FizzBuzz"
| n `mod` 5 == 0 = "Buzz"
| n `mod` 3 == 0 = "Fizz"
| otherwise = show n
fizzbuzzList :: [Integer] -> [String]
fizzbuzzList list = execState (mapM_ addResult list) []
addResult :: Integer -> State [String] ()
addResult n = do
xs <- get
let result = fizzBuzz n
put (result : xs)
fizzbuzzFromTo :: Integer -> Integer -> [String]
fizzbuzzFromTo start end = fizzbuzzList $ enumFromThenTo end (end - 1) start
main :: IO ()
main = mapM_ putStrLn $ fizzbuzzFromTo 1 100
| andrewMacmurray/haskell-book-solutions | src/ch23/FizzBuzz.hs | mit | 636 | 0 | 10 | 130 | 269 | 138 | 131 | 20 | 1 |
{-# LANGUAGE OverloadedStrings, RecordWildCards #-}
module Main where
import Data.ByteString (ByteString)
import Data.Conduit
import Data.Conduit.Binary (sourceHandle, sinkHandle)
import Data.Conduit.List as CL
import Data.CSV.Conduit
import Data.Text (Text)
import System.Environment
import System.IO (stdout, stdin, Handle, openFile, IOMode(..))
import Options.Applicative
import Control.Applicative
process :: Monad m => Conduit (Row Text) m (Row Text)
process = CL.map id
data Conf = Conf {
delimiter :: Char
, source :: String
}
main :: IO ()
main = do
Conf{..} <- execParser opts
let inDSVSettings = CSVSettings delimiter Nothing
source' <- case source of
"-" -> return stdin
f -> openFile f ReadMode
runResourceT $
transformCSV'
inDSVSettings
defCSVSettings
(sourceHandle source')
process
(sinkHandle stdout)
opts = info (helper <*> parseOpts)
(fullDesc
<> progDesc "Converts DSV to CSV format"
<> header "dsv2csv"
<> footer "See https://github.com/danchoi/csv2dsv for more information.")
parseOpts :: Parser Conf
parseOpts = Conf
<$> (Prelude.head <$>
strOption (value "\t"
<> short 'd'
<> long "delimiter"
<> metavar "CHAR"
<> help "Delimiter characters of DSV input. Defaults to \\t."))
<*> strArgument (metavar "FILE" <> help "Source DSV file. '-' for STDIN")
| danchoi/csv2dsv | Main2.hs | mit | 1,566 | 0 | 15 | 471 | 391 | 205 | 186 | 45 | 2 |
module EC where
import EC.ES
import EC.GA | banacorn/evolutionary-computation | EC.hs | mit | 42 | 0 | 4 | 7 | 14 | 9 | 5 | 3 | 0 |
-- Author: David Fonenot, 2014 fontenod@onid.oregonstate.edu
-- compile with -main-is SudokuSolver to get to work with GHC
module SudokuSolver where
import SudokuReader
import System.Environment
import System.Exit
import Data.Array
import Data.List
import Data.Char
import Data.Ord
sudokuToStr :: SBoard -> String
sudokuToStr board | (bounds board) /= ((0,0),(8,8)) = "Not a valid sudoku board"
sudokuToStr board = sudokuToStr' $ assocs board
where
sudokuToStr' ((_,v):[]) = (intToDigit v):[]
sudokuToStr' (((r,c),v):vs) | c == 8 = (intToDigit v):'\n':sudokuToStr' vs
sudokuToStr' (((r,c),v):vs) = (intToDigit v):sudokuToStr' vs
-- nothing on complete sudoku board (no zeroes)
advance :: [SValue] -> (Int,Int) -> Maybe (Int,Int)
advance [] _ = Nothing
advance ((pt,_):rst) start | pt < start = advance rst start
advance ((pt,v):rst) start | pt >= start = if v == 0 then Just pt else advance rst start
-- is the point point 1 in the range of sudoku value point 2
pointInRange :: (Int,Int) -> (Int,Int) -> Bool
pointInRange (r1,c1) (r2,c2) = or [r1==r2,c1==c2,sameBox]
where
lowRow = (r2 `div` 3) * 3
highRow = lowRow + 2
lowCol = (c2 `div` 3) * 3
highCol = lowCol + 2
sameBox = and [r1>=lowRow,r1<=highRow,c1>=lowCol,c1<=highCol]
-- not protecting against values outside of the range of the
-- sudoku board, function meant for being called internally only
selectVal :: [SValue] -> (Int,Int) -> Int
selectVal vals pt = (\((_,_),v) -> v) ((filter (\(p,v) -> p == pt) vals) !! 0)
-- return a new updated list with the new value in place
setVal :: [SValue] -> (Int,Int) -> Int -> [SValue]
setVal (((r,c),v):rs) (r1,c1) newVal | and [r==r1,c==c1] = ((r,c),newVal):rs
setVal (val:rs) pt newVal = val:(setVal rs pt newVal)
-- return all of the options for the given point on the sudoku board
--
-- note: options is never called on a point with a number already in it
options :: [SValue] -> (Int,Int) -> [Int]
options vals pt = options' vals pt [1..9]
where
options' [] _ left = left
options' ((newpt,v):vn) pt left =
if pointInRange newpt pt then options' vn pt (delete v left) else options' vn pt left
solve :: SBoard -> Maybe SBoard
solve board = case solve' (assocs board) (0,0) of
Just vals -> Just (array ((0,0),(8,8)) vals)
_ -> Nothing
solve' :: [SValue] -> (Int,Int) -> Maybe [SValue]
solve' vals pt = case advance vals pt of
Just newPt -> fit vals (options vals newPt) newPt
_ -> Just vals
fit :: [SValue] -> [Int] -> (Int,Int) -> Maybe [SValue]
fit _ [] _ = Nothing
fit vals (fstopt:rstopt) pos = case advance vals pos of
Just newPt -> case solve' (setVal vals pos fstopt) newPt of
Just board -> Just board
Nothing -> fit vals rstopt pos
_ -> Just vals
-- process command line arguments and get the filename
getSudokuFile [] = putStrLn "Usage: ./SudokuSolver <filename>" >> exit
getSudokuFile [filename] = readFile filename
exit = exitWith ExitSuccess
failure = exitWith (ExitFailure 1)
main :: IO ()
main = getArgs >>= getSudokuFile >>= (\f -> case SudokuReader.readSudokuFile f of
Just board -> case solve board of
Just answer -> (putStrLn $ sudokuToStr answer) >> exit
_ -> (putStrLn "No solution") >> failure
_ -> (putStrLn "Not a valid sudoku board") >> failure)
| davidfontenot/haskell_sudoku | SudokuSolver.hs | gpl-2.0 | 3,739 | 0 | 17 | 1,101 | 1,351 | 727 | 624 | 60 | 3 |
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE RecursiveDo #-}
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Data.Default
import qualified Data.Map as M
import qualified GHC.Generics as GHC
import Reflex
import Reflex.Dom
----------
type Task = String
type Id = Int
type IdPick = (Id, (Task, [Task]))
----------
data Cmd = Cmd_Quit
| Cmd_AddTask
| Cmd_DeleteTask
deriving (Show, Read, Eq, Ord, Enum, Bounded, GHC.Generic)
makePrisms ''Cmd
---
data Updt = Updt_Quit [Task]
| Updt_AddTask Task
| Updt_DeleteTask IdPick
deriving (Show, Read, Eq, GHC.Generic)
makePrisms ''Updt
---
data Upgr = Upgr_CmdMay (Maybe Cmd)
| Upgr_Updt Updt
deriving (Show, Read, Eq, GHC.Generic)
makePrisms ''Upgr
instance Default Upgr where
def = Upgr_CmdMay def
----------
data Config = Config { _config_maxMenuItemLength :: Int
, _config_suffix :: String
, _config_commands :: M.Map (Maybe Cmd) String
}
deriving (Show, Read, Eq, GHC.Generic)
makeLenses ''Config
myCommands :: M.Map (Maybe Cmd) String
myCommands = M.fromList $ [ (Cmd_Quit, "Quit")
, (Cmd_AddTask, "Add task")
, (Cmd_DeleteTask, "Delete task")
] & traverse . _1 %~ review _Just
instance Default Config where
def = Config { _config_maxMenuItemLength = 16
, _config_suffix = "..."
, _config_commands = myCommands
}
---
data Todo = Todo { _todo_tasks :: [Task]
}
deriving (Show, Read, Eq, GHC.Generic)
makeLenses ''Todo
instance Default Todo where
def = Todo { _todo_tasks = []
}
---
data View = View { _view_isHidden :: Bool
}
deriving (Show, Read, Eq, GHC.Generic)
makeLenses ''View
instance Default View where
def = View { _view_isHidden = False
}
----------
trunc :: String -> String
trunc xs =
let n = def ^. config_maxMenuItemLength
sfx = def ^. config_suffix
in case (compare (length xs) n) of
GT -> (take n xs) ++ sfx
EQ -> xs
LT -> xs
picks :: [a] -> [(a, [a])] -- preserves cardinality
picks = \case
[] -> []
x : xs -> (x, xs) : ((picks xs) & traverse._2 %~ (x :))
myTaskIdPicks :: Todo -> M.Map (Maybe IdPick) String
myTaskIdPicks s0 =
let ts = reverse $ s0 ^. todo_tasks -- [Task]
ps = picks ts -- [(Task, [Task])]
ns = [1 ..] -- [Id]
vs = fmap show $ zip ns $ fmap trunc ts -- [String]
ks = zip ns ps -- [(Id, (Task, [Task]))]
zs = zip ks vs & traverse . _1 %~ review _Just
in M.fromList zs
----------
nextCommandMay :: MonadWidget t m => m (Event t (Maybe Cmd))
nextCommandMay = do
text "Please select the next command: "
dd <- dropdown Nothing (constDyn $ def ^. config_commands) def
return $ updated (value dd)
---
confirmQuit :: MonadWidget t m => Todo -> m (Event t [Task])
confirmQuit s0 = do
el "div" $ text "Please confirm quit. "
b <- button "Confirm"
return $ tag (constant $ s0 ^. todo_tasks) b
askTask :: MonadWidget t m => m (Event t Task)
askTask = do
el "div" $ text "Please describe the task: "
i <- el "div" $ textInput def
b <- button "Submit"
return $ ffilter (/= "") $ tag (current (value i)) b
askTaskIdPick :: MonadWidget t m => Todo -> m (Event t IdPick)
askTaskIdPick s0 = do
el "div" $ text "Please select an item from the list: "
d <- dropdown Nothing (constDyn $ myTaskIdPicks s0) def
b <- button "Remove"
return $ fmapMaybe id $ tag (current (value d)) b
---
processTodoCommand :: MonadWidget t m => (Todo, Cmd) -> m (Event t Updt)
processTodoCommand (s0, c) = case c of
Cmd_Quit -> do
e <- confirmQuit s0
return $ fmap (review _Updt_Quit) e
Cmd_AddTask -> el "div" $ do
e <- askTask
return $ fmap (review _Updt_AddTask) e
Cmd_DeleteTask -> do
e <- askTaskIdPick s0
return $ fmap (review _Updt_DeleteTask) e
processTodoCommandMay :: MonadWidget t m => (Todo, Maybe Cmd) -> m (Event t Updt)
processTodoCommandMay (s0, mc) = case mc of
Nothing -> return $ never
Just c -> processTodoCommand (s0, c)
stepTodo :: Updt -> Todo -> Todo
stepTodo u s0 = case u of
Updt_Quit _ -> s0 & todo_tasks .~ (def ^. todo_tasks)
Updt_AddTask x -> s0 & todo_tasks %~ (x :)
Updt_DeleteTask (_, (_, xs)) -> s0 & todo_tasks .~ (reverse xs)
controlTodo :: MonadWidget Spider m
=> (Todo, Maybe Cmd) -> m (Event Spider (Todo, Upgr))
controlTodo (td, mc) = do
rec eCmdMay <- nextCommandMay
dCmdMay <- holdDyn mc eCmdMay
dTodoCmdMay <- combineDyn (,) dTodo dCmdMay
deUpdt <- widgetHold (return never) $
fmap processTodoCommandMay $ updated dTodoCmdMay
let eUpdt :: Event Spider Updt
eUpdt = switchPromptlyDyn deUpdt
dTodo <- foldDyn stepTodo td eUpdt
let eUpgr = mergeWith const $
[ fmap (review _Upgr_CmdMay) eCmdMay
, fmap (review _Upgr_Updt) eUpdt
] -- leftmost
return $ attachDyn dTodo eUpgr
---
isUpdtQuit :: (Todo, Upgr) -> Maybe Todo
isUpdtQuit (td, ug) =
either (const Nothing) (const $ Just td) $
matching (_Upgr_Updt . _Updt_Quit) ug
---
todoUpgr :: (t ~ Spider, MonadWidget t m)
=> (Todo, Maybe Cmd) -> m (Event t (Todo, Upgr))
todoUpgr (td, mc) = do
rec let eTodoQ = fmapMaybe isUpdtQuit eTodoUpgr
-- deTodoUpgr :: Dynamic Spider (Event Spider (Todo, Upgr))
deTodoUpgr <- widgetHold (controlTodo (td, mc)) $
fmap (\x -> controlTodo (x, mc)) eTodoQ
let eTodoUpgr :: Event Spider (Todo, Upgr)
eTodoUpgr = switchPromptlyDyn deTodoUpgr
return eTodoUpgr
----------
buttonT :: (t ~ Spider, MonadWidget t m)
=> (Bool -> String) -> Bool -> m (Event t ())
buttonT f x = do
rec d <- foldDyn (const $ not) x e1 -- toggle
d1 <- widgetHold (button $ f x) $
fmap (button . f) (updated d)
let e1 :: Event Spider ()
e1 = switchPromptlyDyn d1
return e1
---
stepView :: () -> View -> View
stepView () v = v & view_isHidden %~ not
buttonHideShow :: (t ~ Spider, MonadWidget t m)
=> View -> m (Event t View)
buttonHideShow v0 = do
b <- buttonT (\x -> case x of
False -> "Hide"
True -> "Show") $ v0 ^. view_isHidden
d <- foldDyn stepView v0 b
return $ updated d
dispTodoView :: (t ~ Spider, MonadWidget t m)
=> (Todo, View) -> m (Event t View)
dispTodoView (td, v0) = do
let ns = [(1 :: Id) ..]
zs = fmap show $ zip ns $ reverse $ td ^. todo_tasks
in do
-- e :: Event Spider View
e <- el "div" $ do
el "em" $ text "Current todo list: "
buttonHideShow v0
case (v0 ^. view_isHidden) of
False -> case zs of
[] -> return ()
_ : _ -> mapM_ (el "div" . text) zs
True -> return ()
el "div" $ text $ "You have " ++ (show $ length zs) ++ " task(s) in the list."
return e
todoView :: (t ~ Spider, MonadWidget t m)
=> (Todo, View) -> Event t Todo -> m (Event t View)
todoView (td0, v0) eTodo = do
dTodo <- holdDyn td0 eTodo
rec dTodoView <- combineDyn (,) dTodo dView
-- deView :: Dynamic Spider (Event Spider View)
deView <- widgetHold (dispTodoView (td0, v0)) $
fmap dispTodoView (updated dTodoView)
let eView :: Event Spider View
eView = switchPromptlyDyn deView
dView <- holdDyn v0 eView
return eView
----------
dispUpgr :: MonadWidget t m => Upgr -> m ()
dispUpgr = \case
Upgr_CmdMay _ -> return ()
Upgr_Updt u -> do
el "div" $ el "em" $ text $ "Status update: "
case u of
Updt_Quit xs -> do
el "div" $ text $ "Quit: " ++ (show $ zip [(1 :: Id) ..] (reverse xs))
Updt_AddTask x -> do
el "div" $ text $ "Task " ++ (show x) ++ "added to the list."
Updt_DeleteTask (n, (x, _)) -> do
el "div" $ text $ "Task " ++ (show n) ++ " " ++ (show x) ++ " deleted from the list."
statusUpdate :: MonadWidget t m
=> Upgr -> Event t Upgr -> m (Event t ())
statusUpdate ug0 e = do
d <- widgetHold (dispUpgr ug0) $ fmap dispUpgr e
return $ updated d
----------
main :: IO ()
main = mainWidget $ el "div" $ do
eTodoUpgr <- todoUpgr (def, def)
let eTodo = fmap fst eTodoUpgr
_ <- todoView (def, def) eTodo
let eUpgr = fmap snd eTodoUpgr
_ <- statusUpdate def eUpgr
return ()
----------
| artuuge/reflex-examples | todoListReflex.hs | gpl-2.0 | 8,671 | 284 | 12 | 2,548 | 3,014 | 1,633 | 1,381 | -1 | -1 |
module TestParse (
testParse
) where
import Data.Either
import Test.HUnit
import Quenelle.Parse
isSuccess (Success _ _) = True
isSuccess _ = False
t rule = TestCase $ assertBool ("Failed to parse: " ++ show rule) (isSuccess $ parseRuleFromString "rule" rule)
f rule = TestCase $ assertBool ("Successfully parsed: " ++ show rule) (not $ isSuccess $ parseRuleFromString "rule" rule)
testParse :: Test
testParse = TestLabel "parseRuleFromString" $ TestList [
t "replace:\nx\nwith:\ny\n"
, f "replace\nx\nwith:\ny\n"
, f "replace:x\nwith:\ny\n"
, f "replace:\n\nwith:\ny\n"
, f "replace:\nx\nwith\ny\n"
, f "replace:\nx\nwith:y\n"
, f "replace:\nx\nwith:\n\n"
, f "replace:\nx\nwith:\ny"
]
| philipturnbull/quenelle | test/TestParse.hs | gpl-2.0 | 734 | 0 | 9 | 141 | 198 | 101 | 97 | 19 | 1 |
import Data.Time
import Data.Time.Clock
import Data.List
import Text.Printf
import Control.Monad
import Control.Concurrent
import System.Environment
import System.Locale
import System.IO
showTimeDiff :: Double -> String
showTimeDiff d = sgn ++ printf "%02d:%04.1f" min (abs sec)
where min = abs (truncate (d / 60)) :: Integer
sec = d - fromIntegral min * 60
sgn = if d >= 0 then "" else "-"
readTimeDiff :: String -> Double
readTimeDiff s = case s2 of
[] -> fromIntegral (read s1 :: Integer)
':' : s2 -> fromIntegral (read s1 :: Integer) * 60 + read s2
_ -> error "no parse"
where (s1, s2) = break (== ':') s
deleteLine = putStr "\ESC[1A" >> hFlush stdout
prompt s = putStrLn s >> hFlush stdout >> getLine >> deleteLine >> getCurrentTime
updateTime lap t remainingTime lapTime = forever $
do threadDelay 10000
t' <- getCurrentTime
let delta = realToFrac (diffUTCTime t' t)
deleteLine
printLapInfo lap (remainingTime - delta) lapTime
printLapInfo lap remainingTime lapTime =
putStrLn $ printf "Lap %d (remaining time: %s, remaining time per lap: %s)"
(lap + 1) (showTimeDiff remainingTime) (showTimeDiff lapTime)
processLap lap _ nLaps remainingTime time lapTimes
| lap >= nLaps =
do let avgTime = (time - remainingTime) / fromIntegral nLaps
putStrLn $ replicate 80 '-'
putStrLn $ "Lap times:\n" ++ intercalate "\n" (map showTimeDiff lapTimes)
putStrLn $ replicate 80 '-'
putStrLn $ printf "Race finished. Remaining time: %s, average lap time: %s"
(showTimeDiff remainingTime) (showTimeDiff avgTime)
return lapTimes
processLap lap t nLaps remainingTime time lapTimes =
do let lapTime = remainingTime / fromIntegral (nLaps - lap)
tid <- forkIO (updateTime lap t remainingTime lapTime)
t' <- prompt ""
killThread tid
let delta = realToFrac (diffUTCTime t' t)
processLap (lap + 1) t' nLaps (remainingTime - delta) time (lapTimes ++ [delta])
makeLog :: UTCTime -> UTCTime -> Integer -> Double -> [Double] -> String
makeLog t t' nLaps time lapTimes = printf
("Race with %d laps, total allowed time: %s\nStart: %s\nEnd: %s\n" ++
replicate 80 '-' ++ "\nLap times:\n%s\n" ++ replicate 80 '-' ++
"\nTime taken: %s\nRemaining time: %s\nAverage lap time: %s\n")
nLaps (showTimeDiff time) (show t) (show t')
(intercalate "\n" lapInfos) (showTimeDiff delta) (showTimeDiff (time - delta))
(showTimeDiff (delta / fromIntegral nLaps))
where delta = realToFrac (diffUTCTime t' t)
lapInfos = zipWith (\a b -> showTimeDiff a ++ " (rem. time/lap: " ++ showTimeDiff b ++ ")")
lapTimes remTimes :: [String]
partialSums = snd $ mapAccumL (\a b -> (a-b,a-b)) time (0:lapTimes)
remTimes = zipWith (\a b -> a / fromIntegral (nLaps - b)) partialSums [0..]
main =
do args <- getArgs
if length args /= 2 && length args /= 3 then
putStrLn "Usage: laptime N_LAPS TOTAL_TIME [LOGFILE]"
else do
t <- prompt "Press RETURN to start measurement."
let (nLaps, time) = (read (args !! 0) :: Integer, readTimeDiff (args !! 1))
lapTimes <- processLap 0 t nLaps time time []
let logFile =
if length args == 3 then args !! 2
else "laptime_" ++ formatTime defaultTimeLocale "%F_%T" t ++ ".log"
t' <- getCurrentTime
writeFile logFile (makeLog t t' nLaps time lapTimes)
| 3of8/haskell_playground | laptime/Laptime.hs | gpl-2.0 | 3,543 | 108 | 10 | 926 | 1,130 | 587 | 543 | 73 | 3 |
import Control.Applicative
import Control.Monad
import System.IO
import Control.Monad.Writer.Lazy
import Data.Char
import Data.List
import Control.Monad.Error
import Control.Exception
import Calculator
{-calculate :: String -> IO String
calculate x =
(show $ eval $ calculateLine x) `Control.Exception.catch` possibleErrors
where
possibleErrors :: InvalidCommand -> IO String
possibleErrors error = return $ "Error happens: " ++ show error
-}
mainLoop :: IO()
mainLoop = forever $ do
hSetBuffering stdout NoBuffering
command <- putStr "> " *> getLine
-- result <- calculate command
-- putStrLn $ result
result <- try $ evaluate $ calculateLine command
:: IO (Either InvalidCommand (Maybe NumberType))
case result of
Left exception -> putStrLn $ "Fault: " ++ show exception
Right (Just value) -> print value
Right Nothing -> putStrLn ""
main :: IO()
main =
--mainLoop
--calcTreeTest
-- divideTextLineTest
dividedCommandToTreeTest
--mainTest
| collia/SimpleCalc | src/Main.hs | gpl-3.0 | 1,100 | 0 | 13 | 290 | 205 | 106 | 99 | 22 | 3 |
module HTCF.TcfLayers
( TcfLayers(..)
, getTextLayer
, getTokens
, getSentences
, getPOStags
, getLemmas
) where
import qualified HTCF.TokenLayer as T
import qualified HTCF.SentenceLayer as S
import qualified HTCF.POStagLayer as P
import qualified HTCF.LemmaLayer as L
import qualified HTCF.TextLayer as Tx
-- | An ADT that collects all the layers from TCF.
data TcfLayers = TcfLayers
{ text :: [Tx.Text]
, tokens:: [T.Token]
, sentences :: [S.Sentence]
, posTags :: [P.POStag]
, lemmas :: [L.Lemma]
} deriving (Eq, Show)
-- * getting the record fields of 'TcfLayers'
getTextLayer :: TcfLayers -> [Tx.Text]
getTextLayer (TcfLayers t _ _ _ _) = t
getTokens :: TcfLayers -> [T.Token]
getTokens (TcfLayers _ toks _ _ _) = toks
getSentences :: TcfLayers -> [S.Sentence]
getSentences (TcfLayers _ _ s _ _) = s
getPOStags :: TcfLayers -> [P.POStag]
getPOStags (TcfLayers _ _ _ ts _) = ts
getLemmas :: TcfLayers -> [L.Lemma]
getLemmas (TcfLayers _ _ _ _ ls) = ls
| lueck/htcf | src/HTCF/TcfLayers.hs | gpl-3.0 | 993 | 0 | 10 | 194 | 327 | 193 | 134 | 29 | 1 |
{-# language LambdaCase, OverloadedStrings #-}
module HMail.Brick.MailView (
handleEvent
, draw
) where
import HMail.Types
import HMail.Brick.EventH
import HMail.State
import HMail.ImapMail
import HMail.Header
import HMail.Brick.Widgets
import HMail.Brick.Util
import HMail.Brick.ViewSwitching
import HMail.Brick.Banner
import Brick.Types
import Brick.Main
import Brick.Widgets.Core
import Brick.Widgets.Center
import Brick.Widgets.Border
import Graphics.Vty.Input.Events
import Control.Lens
import Control.Monad.Base
import Control.Monad.RWS
import Data.Maybe
import qualified Data.Map.Lazy as M
import qualified Data.Text as T
handleEvent :: BrickEv e -> EventH MailView ()
handleEvent = \case
VtyEvent ev -> case ev of
EvKey key mods -> handleKeyEvent key mods
_ -> pure ()
_ -> pure ()
handleKeyEvent :: Key -> [Modifier] -> EventH MailView ()
handleKeyEvent key mods = case key of
KUp -> liftBase $ if haveMod
then vScrollPage vp Up
else vScrollBy vp (-1)
KDown -> liftBase $ if haveMod
then vScrollPage vp Down
else vScrollBy vp 1
KLeft -> liftBase $ if haveMod
then hScrollBy vp (-10)
else hScrollBy vp (-1)
KRight -> liftBase $ if haveMod
then hScrollBy vp 10
else hScrollBy vp 1
KPageUp -> liftBase $ vScrollToBeginning vp
KPageDown -> liftBase $ vScrollToEnd vp
KChar ' ' -> liftBase $ vScrollPage vp Down
KChar 'y' -> do
mbox <- view mailViewBoxName
enterMailBoxView mbox
KChar 'f' -> do
v <- ask
tellView . IsMailView $ (mailViewShowFullHeader %~ not) v
KChar 'r' -> do
mbox <- view mailViewBoxName
uid <- view mailViewUid
sendCommand $ FetchContent mbox [uid]
key -> logDebug $ "unbound key pressed: " <> show key
where
haveMod = mods /= []
vp = viewportScroll ResMainViewport
draw :: MailView -> HMailState -> Widget ResName
draw (MailView mbox uid fullHdr) st =
(banner mailViewHelp <=>)
. fromMaybe errorWidget $ do
box <- st ^. mailBoxes . at mbox
mail <- box ^. mails . at uid
Just $ case renderContent (mail ^. immContent) of
Nothing -> loadingWidget
Just cont -> viewport ResMainViewport Vertical
$ renderHeader (mail ^. immHeader)
<=> withAttr "body" cont
where
renderContent :: MailContent -> Maybe (Widget ResName)
renderContent = \case
ContentIs content -> Just $ txtWrap content
ContentUnknown -> Nothing
renderHeader :: Header -> Widget ResName
renderHeader =
let f key val wgt = txtWrap (key <> ":" <> val) <=> wgt
in withAttr "header"
. M.foldrWithKey f emptyWidget
. (if fullHdr then id else M.filterWithKey important)
. asTextMap
important :: T.Text -> a -> Bool
important key _ = key `elem`
[ "Date", "From", "Subject", "To", "User-Agent" ]
mailViewHelp :: [String]
mailViewHelp = genericHelp ++ [ "f:toggle-full-header" ]
| xaverdh/hmail | HMail/Brick/MailView.hs | gpl-3.0 | 2,921 | 0 | 16 | 671 | 929 | 479 | 450 | -1 | -1 |
{-
This file is part of pia.
pia is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
any later version.
pia is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with pia. If not, see <http://www.gnu.org/licenses/>.
-}
{- |
Copyright : (c) Simon Woertz 2011-2012
Maintainer : Simon Woertz <simon@woertz.at>
Stability : provisional
-}
module TRS.Rule (
Rule(..)
, amendFuncVarR
, amendFuncVarRs
)where
import TRS.Term
data Rule = Rule {lhs::Term, rhs::Term}
-- show a rule as follows: lhs -> rhs
instance Show Rule where
show r = show (lhs r) ++ " -> " ++ show (rhs r)
-- | 'amendFuncVarR' @r vs@ takes a rule and a list of terms and converts every 'Var' occuring in the given whichs name occurs
-- in @vs@ to a 0-ary function with the same name
amendFuncVarR :: Rule -> [Term] -> Rule
amendFuncVarR r vs = Rule (amendFuncVarT (lhs r) vs) (amendFuncVarT (rhs r) vs)
-- | 'amendFuncVarRs' @rs vs@ works like 'amendFuncVarR' but takes a list of rules
amendFuncVarRs :: [Rule] -> [Term] -> [Rule]
amendFuncVarRs rs vs = map (flip amendFuncVarR $ vs) rs
| swoertz/pia | src/TRS/Rule.hs | gpl-3.0 | 1,522 | 0 | 10 | 344 | 201 | 111 | 90 | 12 | 1 |
--project euler problem 45
{--
Triangle, pentagonal, and hexagonal numbers are generated by the following formulae:
Triangle T_(n)=n(n+1)/2 1, 3, 6, 10, 15, ...
Pentagonal P_(n)=n(3n−1)/2 1, 5, 12, 22, 35, ...
Hexagonal H_(n)=n(2n−1) 1, 6, 15, 28, 45, ...
It can be verified that T_(285) = P_(165) = H_(143) = 40755.
Find the next triangle number that is also pentagonal and hexagonal.
--}
--all triangle numbers are hexagonals as well
pentagonals = [n*(3*n-1) `div` 2 | n<-[1..]::[Integer]]
hexagonals = [n*(2*n-1) | n<-[1..]::[Integer]]
sorted_inter xx@(x:xs) yy@(y:ys)
| x == y = x : sorted_inter xs ys
| x > y = sorted_inter xx ys
| x < y = sorted_inter xs yy
main = print $ head $ dropWhile (<=40755) $ sorted_inter hexagonals pentagonals
--could have also done by supplying isHex filter
--main = print $ head $ dropWhile(<=40755) $ filter isHex pentagonals
| goalieca/haskelling | 045.hs | gpl-3.0 | 923 | 0 | 10 | 198 | 206 | 111 | 95 | 7 | 1 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# OPTIONS_GHC -fno-warn-duplicate-exports #-}
{-# OPTIONS_GHC -fno-warn-unused-binds #-}
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
-- |
-- Module : Network.Google.Resource.Blogger.Posts.Search
-- Copyright : (c) 2015-2016 Brendan Hay
-- License : Mozilla Public License, v. 2.0.
-- Maintainer : Brendan Hay <brendan.g.hay@gmail.com>
-- Stability : auto-generated
-- Portability : non-portable (GHC extensions)
--
-- Search for a post.
--
-- /See:/ <https://developers.google.com/blogger/docs/3.0/getting_started Blogger API Reference> for @blogger.posts.search@.
module Network.Google.Resource.Blogger.Posts.Search
(
-- * REST Resource
PostsSearchResource
-- * Creating a Request
, postsSearch
, PostsSearch
-- * Request Lenses
, psOrderBy
, psBlogId
, psQ
, psFetchBodies
) where
import Network.Google.Blogger.Types
import Network.Google.Prelude
-- | A resource alias for @blogger.posts.search@ method which the
-- 'PostsSearch' request conforms to.
type PostsSearchResource =
"blogger" :>
"v3" :>
"blogs" :>
Capture "blogId" Text :>
"posts" :>
"search" :>
QueryParam "q" Text :>
QueryParam "orderBy" PostsSearchOrderBy :>
QueryParam "fetchBodies" Bool :>
QueryParam "alt" AltJSON :> Get '[JSON] PostList
-- | Search for a post.
--
-- /See:/ 'postsSearch' smart constructor.
data PostsSearch = PostsSearch'
{ _psOrderBy :: !PostsSearchOrderBy
, _psBlogId :: !Text
, _psQ :: !Text
, _psFetchBodies :: !Bool
} deriving (Eq,Show,Data,Typeable,Generic)
-- | Creates a value of 'PostsSearch' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'psOrderBy'
--
-- * 'psBlogId'
--
-- * 'psQ'
--
-- * 'psFetchBodies'
postsSearch
:: Text -- ^ 'psBlogId'
-> Text -- ^ 'psQ'
-> PostsSearch
postsSearch pPsBlogId_ pPsQ_ =
PostsSearch'
{ _psOrderBy = PSOBPublished
, _psBlogId = pPsBlogId_
, _psQ = pPsQ_
, _psFetchBodies = True
}
-- | Sort search results
psOrderBy :: Lens' PostsSearch PostsSearchOrderBy
psOrderBy
= lens _psOrderBy (\ s a -> s{_psOrderBy = a})
-- | ID of the blog to fetch the post from.
psBlogId :: Lens' PostsSearch Text
psBlogId = lens _psBlogId (\ s a -> s{_psBlogId = a})
-- | Query terms to search this blog for matching posts.
psQ :: Lens' PostsSearch Text
psQ = lens _psQ (\ s a -> s{_psQ = a})
-- | Whether the body content of posts is included (default: true). This
-- should be set to false when the post bodies are not required, to help
-- minimize traffic.
psFetchBodies :: Lens' PostsSearch Bool
psFetchBodies
= lens _psFetchBodies
(\ s a -> s{_psFetchBodies = a})
instance GoogleRequest PostsSearch where
type Rs PostsSearch = PostList
type Scopes PostsSearch =
'["https://www.googleapis.com/auth/blogger",
"https://www.googleapis.com/auth/blogger.readonly"]
requestClient PostsSearch'{..}
= go _psBlogId (Just _psQ) (Just _psOrderBy)
(Just _psFetchBodies)
(Just AltJSON)
bloggerService
where go
= buildClient (Proxy :: Proxy PostsSearchResource)
mempty
| rueshyna/gogol | gogol-blogger/gen/Network/Google/Resource/Blogger/Posts/Search.hs | mpl-2.0 | 3,769 | 0 | 17 | 977 | 544 | 322 | 222 | 82 | 1 |
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# OPTIONS_GHC -Wall -fno-warn-orphans #-}
----------------------------------------------------------------------
-- |
-- Module : FRP.Reactive.Future
-- Copyright : (c) Conal Elliott 2007-2008
-- License : GNU AGPLv3 (see COPYING)
--
-- Maintainer : conal@conal.net
-- Stability : experimental
--
-- A simple formulation of functional /futures/, roughly as
-- described at <http://en.wikipedia.org/wiki/Futures_and_promises>.
--
-- A /future/ is a value with an associated time of /arrival/. Typically,
-- neither the time nor the value can be known until the arrival time.
--
-- Primitive futures can be things like /the value of the next key you
-- press/, or /the value of LambdaPix stock at noon next Monday/.
--
-- Composition is via standard type classes: 'Functor', 'Applicative',
-- 'Monad', and 'Monoid'. Some comments on the 'Future' instances of
-- these classes:
--
-- * Monoid: 'mempty' is a future that never arrives (infinite time and
-- undefined value), and @a `mappend` b@ is the earlier of @a@ and @b@,
-- preferring @a@ when simultaneous.
--
-- * 'Functor': apply a function to a future argument. The (future)
-- result arrives simultaneously with the argument.
--
-- * 'Applicative': 'pure' gives value arriving negative infinity.
-- '(\<*\>)' applies a future function to a future argument, yielding a
-- future result that arrives once /both/ function and argument have
-- arrived (coinciding with the later of the two times).
--
-- * 'Monad': 'return' is the same as 'pure' (as usual). @(>>=)@ cascades
-- futures. 'join' resolves a future future value into a future value.
--
-- Futures are parametric over /time/ as well as /value/ types. The time
-- parameter can be any ordered type and is particularly useful with time
-- types that have rich partial information structure, such as /improving
-- values/.
----------------------------------------------------------------------
module FRP.Reactive.Future
(
-- * Time & futures
Time, ftime
, FutureG(..), isNeverF, inFuture, inFuture2, futTime, futVal, future
, withTimeF
-- * Tests
#ifdef TEST
, batch
#endif
) where
import Data.Monoid (Monoid(..))
import Data.Semigroup (Semigroup(..), Max(..))
-- import Data.AddBounds
import FRP.Reactive.Internal.Future
#ifdef TEST
-- Testing
import Test.QuickCheck
import Test.QuickCheck.Checkers
import Test.QuickCheck.Classes
#endif
{----------------------------------------------------------
Time and futures
----------------------------------------------------------}
-- | Make a finite time
ftime :: t -> Time t
ftime = Max
#ifdef TEST
-- FutureG representation in Internal.Future
instance (Bounded t, Eq t, EqProp t, EqProp a) => EqProp (FutureG t a) where
u =-= v | isNeverF u && isNeverF v = property True
Future a =-= Future b = a =-= b
#endif
-- I'd rather say:
--
-- instance (Bounded t, EqProp t, EqProp a) => EqProp (FutureG t a) where
-- Future a =-= Future b =
-- (fst a =-= maxBound && fst b =-= maxBound) .|. a =-= b
--
-- However, I don't know how to define disjunction on QuickCheck properties.
-- | A future's time
futTime :: FutureG t a -> Time t
futTime = fst . unFuture
-- | A future's value
futVal :: FutureG t a -> a
futVal = snd . unFuture
-- | A future value with given time & value
future :: t -> a -> FutureG t a
future t a = Future (ftime t, a)
-- | Access time of future
withTimeF :: FutureG t a -> FutureG t (Time t, a)
withTimeF = inFuture $ \ (t,a) -> (t,(t,a))
-- withTimeF = inFuture duplicate (with Comonad)
-- TODO: Eliminate this Monoid instance. Derive Monoid along with all the
-- other classes. And don't use mempty and mappend for the operations
-- below. For one thing, the current instance makes Future a monoid but
-- unFuture not be a monoid morphism.
instance Ord t => Semigroup (FutureG t a) where
Future (s,a) <> Future (t,b) =
Future (s `min` t, if s <= t then a else b)
instance (Ord t, Bounded t) => Monoid (FutureG t a) where
mempty = Future (maxBound, error "Future mempty: it'll never happen, buddy")
-- Pick the earlier future.
Future (s,a) `mappend` Future (t,b) =
Future (s `min` t, if s <= t then a else b)
-- Consider the following simpler definition:
--
-- fa@(Future (s,_)) `mappend` fb@(Future (t,_)) =
-- if s <= t then fa else fb
--
-- Nothing can be known about the resulting future until @s <= t@ is
-- determined. In particular, we cannot know lower bounds for the time.
-- In contrast, the actual 'mappend' definition can potentially yield
-- useful partial information, such as lower bounds, about the future
-- time, if the type parameter @t@ has rich partial information structure
-- (non-flat).
-- For some choices of @t@, there may be an efficient combination of 'min'
-- and '(<=)', so the 'mappend' definition is sub-optimal. In particular,
-- 'Improving' has 'minI'.
-- -- A future known never to happen (by construction), i.e., infinite time.
-- isNever :: FutureG t a -> Bool
-- isNever = isMaxBound . futTime
-- where
-- isMaxBound (Max MaxBound) = True
-- isMaxBound _ = False
--
-- This function is an abstraction leak. Don't export it to library
-- users.
{----------------------------------------------------------
Tests
----------------------------------------------------------}
-- Represents times at a given instant.
newtype TimeInfo t = TimeInfo (Maybe t)
#ifdef TEST
deriving EqProp
#endif
instance Bounded t => Bounded (TimeInfo t) where
minBound = TimeInfo (Just minBound)
maxBound = TimeInfo Nothing
-- A time at a given instant can be some unknown time in the future
unknownTimeInFuture :: TimeInfo a
unknownTimeInFuture = TimeInfo Nothing
instance Eq a => Eq (TimeInfo a) where
TimeInfo Nothing == TimeInfo Nothing = error "Cannot tell if two unknown times in the future are equal"
TimeInfo (Just _) == TimeInfo Nothing = False
TimeInfo Nothing == TimeInfo (Just _) = False
TimeInfo (Just a) == TimeInfo (Just b) = a == b
instance Ord a => Ord (TimeInfo a) where
-- The minimum of two unknown times in the future is an unkown time in the
-- future.
TimeInfo Nothing `min` TimeInfo Nothing = unknownTimeInFuture
TimeInfo Nothing `min` b = b
a `min` TimeInfo Nothing = a
TimeInfo (Just a) `min` TimeInfo (Just b) = (TimeInfo . Just) (a `min` b)
TimeInfo Nothing <= TimeInfo Nothing = error "Cannot tell if one unknown time in the future is less than another."
TimeInfo Nothing <= TimeInfo (Just _) = False
TimeInfo (Just _) <= TimeInfo Nothing = True
TimeInfo (Just a) <= TimeInfo (Just b) = a <= b
#ifdef TEST
-- or, a known time in the past. We're ignoring known future times for now.
knownTimeInPast :: a -> TimeInfo a
knownTimeInPast = TimeInfo . Just
-- Move to checkers
type BoundedT = Int
batch :: TestBatch
batch = ( "FRP.Reactive.Future"
, concatMap unbatch
[ monoid (undefined :: FutureG NumT T)
, functorMonoid (undefined :: FutureG NumT
(T,NumT))
-- Checking the semantics here isn't necessary because
-- the implementation is identical to them.
--
-- Also, Functor, Applicative, and Monad don't require checking
-- since they are automatically derived.
--
-- , semanticMonoid' (undefined :: FutureG NumT T)
-- , functor (undefined :: FutureG NumT (T,NumT,T))
-- , semanticFunctor (undefined :: FutureG NumT ())
-- , applicative (undefined :: FutureG NumT (NumT,T,NumT))
-- , semanticApplicative (undefined :: FutureG NumT ())
-- , monad (undefined :: FutureG NumT (NumT,T,NumT))
-- , semanticMonad (undefined :: FutureG NumT ())
, ("specifics",
[ ("laziness", property laziness )
])
]
)
where
laziness :: BoundedT -> T -> Property
laziness t a = (uf `mappend` uf) `mappend` kf =-= kf
where
uf = unknownFuture
kf = knownFuture
knownFuture = future (knownTimeInPast t) a
unknownFuture = future unknownTimeInFuture (error "cannot retrieve value at unknown time at the future")
#endif
| ekmett/reactive | src/FRP/Reactive/Future.hs | agpl-3.0 | 8,306 | 0 | 11 | 1,827 | 1,286 | 729 | 557 | 47 | 1 |
{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.IO.Class (liftIO)
import Data.Aeson (Value (Object, String))
import Data.Aeson (encode, object, (.=))
import Data.Aeson.Parser (json)
import Data.Conduit (($$+-))
import Data.Conduit.Attoparsec (sinkParser)
import Network.HTTP.Conduit (RequestBody (RequestBodyLBS),
Response (..), http, method, parseUrl, Request(..), httpLbs,
requestBody, withManager)
import Network.HTTP.Types
import Types
import qualified Data.ByteString.Lazy as LBS
import qualified Data.ByteString as BS
import Data.Digest.Pure.SHA
import Data.CaseInsensitive (CI, mk)
import qualified Data.Binary as B
main :: IO ()
main = withManager $ \manager -> do
valueBS <- liftIO makeValue
-- We need to know the size of the request body, so we convert to a
-- ByteString
req' <- liftIO $ parseUrl "http://localhost:8000/"
let req = req' { method = "POST", requestBody = RequestBodyLBS valueBS }
req'' = sign_request req ("nonce","identity","secret",valueBS)
res <- withManager $ httpLbs req'' -- manager
let resValue = responseBody res -- $$+- sinkParser json
liftIO $ print resValue
-- Application-specific function to make the request value
makeValue = return $ encode $ LocRequest 5.0 6.0
-- Application-specific function to handle the response from the server
handleResponse :: Value -> IO ()
handleResponse = print
type Nonce = LBS.ByteString
type Identity = LBS.ByteString
type Secret = LBS.ByteString
type Message = LBS.ByteString
type SigComponents = (Nonce,Identity,Secret,Message)
signature_header :: CI BS.ByteString
signature_header = mk "relay-signature"
identity_header :: CI BS.ByteString
identity_header = mk "relay-identity"
nonce_header :: CI BS.ByteString
nonce_header = mk "relay-nonce"
sign_request :: Request -> SigComponents -> Request
sign_request req (n,i,s,m) = let m' = LBS.append n m
signature = B.encode $ hmacSha256 s m'
headers = [
(signature_header, signature),
(identity_header, i),
(nonce_header, n)
]
headers' = map (\(x,y) -> (x, LBS.toStrict y)) headers
oldheaders = requestHeaders req
in req {requestHeaders = (oldheaders ++ headers')}
| igraves/relay-server | cruft/yesodclient.hs | agpl-3.0 | 2,720 | 0 | 14 | 902 | 621 | 363 | 258 | 50 | 1 |
resource = plugin {
function = map toUpper
}
| Changaco/haskell-plugins | testsuite/shell/shell/Plugin.hs | lgpl-2.1 | 55 | 0 | 7 | 19 | 17 | 9 | 8 | 2 | 1 |
------------------------------------------------------------------------------
-- |
-- Module : Semaphore
-- Copyright : (C) 2010 Aliaksiej Artamonaŭ
-- License : LGPL
--
-- Maintainer : aliaksiej.artamonau@gmail.com
-- Stability : unstable
-- Portability : unportable
--
-- Eating philosophers using semaphors implemented on top of
-- 'Control.Concurrent.Condition'.
------------------------------------------------------------------------------
------------------------------------------------------------------------------
module Main
(
main
) where
------------------------------------------------------------------------------
import Control.Applicative ( (<$>), (<*>) )
import Control.Concurrent ( threadDelay, forkIO )
import Control.Monad ( when, forever, mapM )
import Data.IORef ( IORef, newIORef, readIORef, modifyIORef )
import Data.Time ( getCurrentTime, diffUTCTime )
import System.Random ( randomRIO )
------------------------------------------------------------------------------
import Control.Concurrent.Lock ( Lock )
import qualified Control.Concurrent.Lock as Lock
import Control.Concurrent.Condition ( Condition, with )
import qualified Control.Concurrent.Condition as Condition
------------------------------------------------------------------------------
-- Semaphores implemented on top of conditions. --
------------------------------------------------------------------------------
-- | Type representation of semaphore.
data Semaphore =
Semaphore { value :: IORef Int -- ^ Current value of semaphore.
, cond :: Condition -- ^ Condition used to synchronize access.
}
------------------------------------------------------------------------------
-- | Creates a semaphore with initial value of zero.
new :: IO Semaphore
new = Semaphore <$> newIORef 0
<*> Condition.new_
------------------------------------------------------------------------------
-- | Creates a semaphore with specific initial value.
new_ :: Int -> IO Semaphore
new_ value = Semaphore <$> newIORef value
<*> Condition.new_
------------------------------------------------------------------------------
-- | Increases a value of semaphore by 1.
post :: Semaphore -> IO ()
post (Semaphore value cond) =
with cond $ do
modifyIORef value (+1)
Condition.notify cond
------------------------------------------------------------------------------
-- | Decreases a value of semaphore by 1. Blocks if the value is equal to
-- zero.
wait :: Semaphore -> IO ()
wait (Semaphore value cond) =
with cond $ do
empty <- fmap (== 0) (readIORef value)
when empty (Condition.wait cond)
modifyIORef value (flip (-) 1)
------------------------------------------------------------------------------
------------------------------------------------------------------------------
-- Eating philosophers. --
------------------------------------------------------------------------------
-- | Maximum time in seconds philosopher can spend on thinking.
maxThinkingTime :: Int
maxThinkingTime = 20
------------------------------------------------------------------------------
-- | Maximum time in seconds philosopher can spend on eating.
maxEatingTime :: Int
maxEatingTime = 10
------------------------------------------------------------------------------
-- | Number of philosophers to simulate.
philosophersCount :: Int
philosophersCount = 10
------------------------------------------------------------------------------
-- | Delay current thread for the specified number of seconds.
sleep :: Int -> IO ()
sleep = threadDelay . (* 1000000)
------------------------------------------------------------------------------
-- | Type for function to say something.
type Say = String -> IO ()
------------------------------------------------------------------------------
-- | Simulates philosopher's thinking.
think :: Say -> IO ()
think say = do
time <- randomRIO (1, maxThinkingTime)
say $ "It's time to think for " ++ show time ++ " seconds."
sleep time
say "Enough thinking for now."
------------------------------------------------------------------------------
-- | Fork is a pair of its identifier and a semaphore.
data Fork = Fork Int Semaphore
------------------------------------------------------------------------------
instance Show Fork where
show (Fork id _) = "Fork: id=" ++ show id
------------------------------------------------------------------------------
-- | Creates a fork with the specified identifier.
mkFork :: Int -> IO Fork
mkFork id = Fork id
<$> new_ 1
------------------------------------------------------------------------------
-- | Acquires a fork.
acquire :: Fork -> IO ()
acquire (Fork _ sem) = wait sem
------------------------------------------------------------------------------
-- | Releases a fork.
release :: Fork -> IO ()
release (Fork _ sem) = post sem
------------------------------------------------------------------------------
eat :: Say -> (Fork, Fork) -> IO ()
eat say (left, right) = do
time <- randomRIO (1, maxEatingTime)
say $ "It's time to eat for " ++ show time ++ " seconds."
say $ "Acquiring left fork (" ++ show left ++ ")."
acquire left
say $ "Left fork acquired."
say $ "Acquiring right fork (" ++ show left ++ ")."
acquire right
say $ "Right fork acquired."
say "Can begin eating now."
sleep time
say "Done with eating for now."
say $ "Releasing left fork (" ++ show left ++ ")."
release left
say $ "Left fork released."
say $ "Releasing right fork (" ++ show left ++ ")."
release right
say $ "Right fork released."
------------------------------------------------------------------------------
-- | Simulates single philosopher.
philosopher :: Say -> (Fork, Fork) -> IO ()
philosopher say forks = do
forkIO $
forever $ do
think say
eat say forks
return ()
------------------------------------------------------------------------------
main :: IO ()
main = do
forks <- prepareForks `fmap` (mapM mkFork [0 .. philosophersCount - 1])
sayLock <- Lock.new
says <- mapM (mkSay sayLock) [0 .. philosophersCount - 1]
let phs = map (uncurry philosopher) $ zip says forks
sequence_ phs
sleep 120
where mkSay :: Lock -> Int -> IO Say
mkSay lock num = do
start <- getCurrentTime
let say msg =
Lock.with lock $ do
time <- getCurrentTime
let diff = diffUTCTime time start
putStrLn $ show diff ++ " : philosopher "
++ show num ++ " : " ++ msg
return say
prepareForks :: [Fork] -> [(Fork, Fork)]
prepareForks fs = flip $ zip fs (shift fs)
where shift (x : xs) = xs ++ [x]
flip (x : xs) = (snd x, fst x) : xs
| aartamonau/haskell-condition | examples/Philosophers.hs | lgpl-3.0 | 6,987 | 0 | 20 | 1,363 | 1,326 | 694 | 632 | 106 | 1 |
-- Copyright (c) 2013-2014 PivotCloud, Inc.
--
-- Control.Concurrent.STM.CloseableQueue
--
-- Please feel free to contact us at licensing@pivotmail.com with any
-- contributions, additions, or other feedback; we would love to hear from
-- you.
--
-- Licensed under the Apache License, Version 2.0 (the "License"); you may
-- not use this file except in compliance with the License. You may obtain a
-- copy of the License at http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-- License for the specific language governing permissions and limitations
-- under the License.
--
{-# LANGUAGE Safe #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UnicodeSyntax #-}
module Control.Concurrent.STM.CloseableQueue
( CloseableQueue(..)
) where
import Control.Concurrent.STM
import Control.Concurrent.STM.TBMQueue
import Control.Concurrent.STM.TMQueue
import Control.Concurrent.STM.Queue
class Queue q ⇒ CloseableQueue q where
closeQueue
∷ q α
→ STM ()
isClosedQueue
∷ q α
→ STM Bool
instance CloseableQueue TMQueue where
closeQueue = closeTMQueue
isClosedQueue = isClosedTMQueue
instance CloseableQueue TBMQueue where
closeQueue = closeTBMQueue
isClosedQueue = isClosedTBMQueue
| alephcloud/hs-stm-queue-extras | src/Control/Concurrent/STM/CloseableQueue.hs | apache-2.0 | 1,433 | 0 | 9 | 226 | 149 | 94 | 55 | 22 | 0 |
-- Copyright 2015 Robin Bate Boerop <me@robinbb.com>
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
-- http://www.apache.org/licenses/LICENSE-2.0
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
module Main where
import Data.List (sortBy, length, sort)
import Test.QuickCheck
orderCounted :: Bool -> (Int, a) -> (Int, a) -> Ordering
orderCounted ascending (a, _) (b, _) =
case (compare a b) of
LT -> if ascending then LT else GT
GT -> if ascending then GT else LT
EQ -> EQ
prop_orderCountedAscending asc (a :: (Int, ())) (b :: (Int, ())) =
asc ==>
if fst a == fst b
then orderCounted asc a b == EQ
else orderCounted asc a b == if fst a < fst b then LT else GT
prop_orderCountedDescending asc (a :: (Int, ())) (b :: (Int, ())) =
not asc ==>
if fst a == fst b
then orderCounted asc a b == EQ
else orderCounted asc a b == if fst a < fst b then GT else LT
sortCounted :: Bool -> [(Int, a)] -> [(Int, a)]
sortCounted ascending =
sortBy (orderCounted ascending)
prop_sortCountedLengthsMatch asc (xs :: [(Int, ())]) =
l == length xs
where
l = length $ sortCounted asc xs
prop_sortCountedAscending asc (xs :: [(Int,())]) =
asc ==> l == r
where
l = map fst $ sortCounted asc xs
r = sort $ map fst xs
prop_sortCountedDescending asc (xs :: [(Int, ())]) =
not asc ==> l == r
where
l = map fst $ sortCounted asc xs
r = reverse $ sort $ map fst xs
prop_sortCountedPreservesPairing asc (li :: [Int]) =
pairsHaveSameNum new && pairsHaveSameNum (sortCounted asc new)
where new = map (\count -> (count, replicate count count)) li
pairsHaveSameNum [] = True
pairsHaveSameNum ((c, l):r) = all (c ==) l && pairsHaveSameNum r
listLengths :: [[a]] -> [(Int, [a])]
listLengths =
map (\l -> (length l, l))
prop_listLengthsEqualContents (lists :: [[()]]) =
and $ map (\(c, l) -> c == length l) $ listLengths lists
prop_listLengthsIntContents (lists :: [[Int]]) =
and $ map (\(c, l) -> c == length l) $ listLengths lists
sortByListSize :: Bool -> [[a]] -> [[a]]
sortByListSize ascending =
map snd . sortCounted ascending . listLengths
isSortedByListSize :: Bool -> [[a]] -> Bool
isSortedByListSize asc l =
and $ zipWith
(\a b -> if asc then length a <= length b
else length a >= length b)
l (tail l)
prop_sortByListSizeToughList asc =
let toughList =
[ [], [], []
, [1], [2], [3]
, [4, 5], [6, 7]
, [8, 9, 10], [11, 12, 13]
, [14, 15, 16, 17, 18]
]
permutes = shuffle toughList
in forAll permutes $ \l ->
isSortedByListSize asc (sortByListSize asc l)
prop_sortByListLizeUnit asc (l :: [[()]]) =
isSortedByListSize asc $ sortByListSize asc l
prop_sortByListLizeInt asc (l :: [[Int]]) =
isSortedByListSize asc $ sortByListSize asc l
prop_ParticularResult =
sortByListSize True
[ [1, 2]
, [3, 4, 5, 6]
, []
, [7, 8, 9]
]
== [ []
, [1, 2]
, [7, 8, 9]
, [3, 4, 5, 6]
]
return [] -- Ridiculous TH hack. Necessary to make quickCheckAll work.
main = $quickCheckAll
| robinbb/haskell-snippets | sort-by-list-size/src/Main.hs | apache-2.0 | 3,649 | 0 | 11 | 1,000 | 1,330 | 732 | 598 | -1 | -1 |
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE helpset PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp HelpSet Version 2.0//EN" "http://java.sun.com/products/javahelp/helpset_2_0.dtd">
<helpset version="2.0" xml:lang="ms-MY">
<title>HTTPS Info Add-on</title>
<maps>
<homeID>httpsinfo</homeID>
<mapref location="map.jhm"/>
</maps>
<view>
<name>TOC</name>
<label>Contents</label>
<type>org.zaproxy.zap.extension.help.ZapTocView</type>
<data>toc.xml</data>
</view>
<view>
<name>Index</name>
<label>Index</label>
<type>javax.help.IndexView</type>
<data>index.xml</data>
</view>
<view>
<name>Search</name>
<label>Search</label>
<type>javax.help.SearchView</type>
<data engine="com.sun.java.help.search.DefaultSearchEngine">
JavaHelpSearch
</data>
</view>
<view>
<name>Favorites</name>
<label>Favorites</label>
<type>javax.help.FavoritesView</type>
</view>
</helpset> | secdec/zap-extensions | addOns/httpsInfo/src/main/javahelp/org/zaproxy/zap/extension/httpsinfo/resources/help_ms_MY/helpset_ms_MY.hs | apache-2.0 | 968 | 77 | 67 | 157 | 413 | 209 | 204 | -1 | -1 |
{-# LANGUAGE ExistentialQuantification #-}
{-
- Copyright (c) 2015, Peter Lebbing <peter@digitalbrains.com>
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions are met:
-
- 1. Redistributions of source code must retain the above copyright notice,
- this list of conditions and the following disclaimer.
-
- 2. Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
- and/or other materials provided with the distribution.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
- LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- POSSIBILITY OF SUCH DAMAGE.
-}
module Simul.Common
( module Simul.Common
, module CLaSH.Prelude
) where
import qualified Text.Show.Pretty as Pr
import CLaSH.Prelude
-- Pretty print argument
pretty :: Show a => a -> IO ()
pretty = putStrLn . Pr.ppShow
data EventList b = Ticks Int | Time Float | forall a. Set (a -> b) a | Infinity
{-
- Construct a list of inputs from a list of events
-
- The function `eventList` is to deal with the input to CLaSH's simulate
- function on another conceptual level. A list of events coupled with a
- user-defined function to convert a "Set" event to a new input is used to
- generate a list of inputs (i.e., a stream).
-
- Arguments:
- tr - The user-defined state transformer
- f - Operating frequency (for using "Time" events)
- s - The initial state of the inputs
- es - The event list
-
- As an example, let's define the input of the function to simulate as
- (Bit, Unsigned 8); i.e., it has two inputs: just a wire and an 8-bit port
- interpreted as an unsigned number. Let's call the inputs a and n
- respectively. Suppose we define a type as follows:
-
- data SI = A Bit | N (Unsigned 8)
-
- Now the state transformer is simply:
-
- trSI (a, n) (A a') = (a', n )
- trSI (a, n) (N n') = (a , n')
-
- And we could generate an input stream from an event list as follows:
-
- eventList trSI 50e6 (L, 0) [ Set A H, Set N 5, Ticks 1, Set A L, Time 52e-9
- , Set N 0, Infinity ]
-
- Every Time or Ticks statement advances time. A Time statement advances to
- the first clock tick /after/ the amount of time specified has past. A Set
- statement does not advance time, so you can alter multiple inputs in one
- clock tick.
-
- The statement 'Infinity' simply keeps the inputs the same for an infinite
- amount of time.
-}
eventList :: Real f => (c -> b -> c) -> f -> c -> [EventList b] -> [c]
eventList tr f s [] = []
eventList tr f s ((Set i v):es) = eventList tr f (tr s (i v)) es
eventList tr f s ((Ticks n):es) = (replicate n s) ++ eventList tr f s es
eventList tr f s ((Time t):es) = (replicate n s) ++ eventList tr f s es
where
n = ceiling (t * (fromRational . toRational) f)
eventList tr f s (Infinity:_) = repeat s
| DigitalBrains1/clash-lt24 | Simul/Common.hs | bsd-2-clause | 3,667 | 0 | 12 | 800 | 372 | 199 | 173 | 16 | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.