package com.github.cthulhu314.scalaba.persistance.files
class NopFileRepository extends FileRepository {
def create(file: Array[Byte]): Option[String] = {
None
}
def delete(name: String): Boolean = {
false
}
}
// Source: cthulhu314/scalaba, src/main/scala/com/github/cthulhu314/scalaba/persistance/files/NopFileRepository.scala (Scala, MIT license, 227 bytes)
/*
* Copyright (c) 2014-2015 by its authors. Some rights reserved.
* See the project homepage at: http://www.monifu.org
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package monifu.reactive
import java.io.PrintStream
import java.util.concurrent.Callable
import monifu.concurrent.cancelables.BooleanCancelable
import monifu.concurrent.{Cancelable, Scheduler}
import monifu.reactive.Ack.{Cancel, Continue}
import monifu.reactive.OverflowStrategy.{default => defaultStrategy}
import monifu.reactive.internals._
import monifu.reactive.observables.{CachedObservable, ConnectableObservable, GroupedObservable}
import monifu.reactive.observers._
import monifu.reactive.subjects.{AsyncSubject, BehaviorSubject, PublishSubject, ReplaySubject}
import org.reactivestreams.{Publisher => RPublisher, Subscriber => RSubscriber}
import scala.concurrent.duration.{Duration, FiniteDuration}
import scala.concurrent.{Future, Promise}
import scala.language.implicitConversions
import scala.util.control.NonFatal
/**
* The Observable interface in the Rx pattern.
*
* ==Interface==
*
* An Observable is characterized by an `onSubscribe` method that needs
* to be implemented. In simple terms, an Observable might as well be
* just a function like:
* {{{
* type Observable[+T] = Subscriber[T] => Unit
* }}}
*
* In other words an Observable is something that provides a
* side-effecting function that can connect a [[Subscriber]] to a
* stream of data. A `Subscriber` is a cross between an [[Observer]]
* and a [[monifu.concurrent.Scheduler Scheduler]]. We need a
* `Scheduler` when calling `subscribe` because that's when the
* side-effects happen and a context capable of scheduling tasks for
* asynchronous execution is needed. An [[Observer]] on the other hand
* is the interface implemented by consumers, which receives events
* according to the Rx grammar.
*
* Because we need the interesting operators and
* the polymorphic behavior provided by OOP, the Observable is
* described as an interface whose `onSubscribe` has to be implemented:
* {{{
* class MySampleObservable(unit: Int) extends Observable[Int] {
* def onSubscribe(sub: Subscriber[Int]): Unit = {
* implicit val s = sub.scheduler
* // note we must apply back-pressure
* // when calling `onNext`
* sub.onNext(unit).onComplete {
* case Success(Cancel) =>
* () // do nothing
* case Success(Continue) =>
* sub.onComplete()
* case Failure(ex) =>
* sub.onError(ex)
* }
* }
* }
* }}}
*
* Of course, you don't need to inherit from this trait, as you can just
* use [[Observable.create]], the following example being equivalent
* to the above:
* {{{
* Observable.create[Int] { sub =>
* implicit val s = sub.scheduler
* // note we must apply back-pressure
* // when calling `onNext`
* sub.onNext(unit).onComplete {
* case Success(Cancel) =>
* () // do nothing
* case Success(Continue) =>
* sub.onComplete()
* case Failure(ex) =>
* sub.onError(ex)
* }
* }
* }}}
*
* The above describes how to create your own Observables, however
* Monifu already provides ready-made utilities in the
* [[Observable$ Observable companion object]]. For example, to
* periodically make a request to a web service, you could do it like
* this:
* {{{
* // just some http client
* import play.api.libs.ws._
*
* // triggers an auto-incremented number every second
* Observable.intervalAtFixedRate(1.second)
* .flatMap(_ => WS.request(s"http://some.endpoint.com/request").get())
* }}}
*
* As you might notice, in the above example we are doing
* [[Observable!.flatMap]] on an Observable that emits `Future`
* instances. And it works, because Monifu considers Scala's Futures
* to be just a subset of Observables, see the automatic
* [[Observable.FutureIsObservable FutureIsObservable]] conversion that
* it defines. Or you could just use [[Observable.fromFuture]] for
* explicit conversions, an Observable builder available
* [[Observable$ amongst others]].
*
* ==Contract==
*
* Observables must obey Monifu's contract. This is why, if you can get
* away with using already built and tested observables, that is better than
* implementing your own by means of inheriting the interface or by using
* [[Observable.create create]]. The contract is this:
*
* - the supplied `onSubscribe` method MUST NOT throw exceptions, any
* unforeseen errors that happen in user-code must be emitted to
* the observers and the streaming closed
* - events MUST follow this grammar: `onNext* (onComplete | onError)`
* - a data source can emit zero or many `onNext` events
* - the stream can be infinite, but when the stream is closed
* (and not canceled by the observer), then
* it always emits a final `onComplete` or `onError`
* - MUST apply back-pressure when emitting events, which means that sending
* events is always done in response to demand signaled by observers and
* that observers should only receive events in response to that signaled
* demand
* - emitting a new `onNext` event must happen only after the previous
* `onNext` completed with a [[Ack.Continue Continue]]
* - streaming must stop immediately after an `onNext` event
* is signaling a [[Ack.Cancel Cancel]]
* - back-pressure must be applied for the final events as well,
* so `onComplete` and `onError` must happen only after the previous
* `onNext` was completed with a [[Ack.Continue Continue]]
* acknowledgement
* - the first `onNext` event can be sent directly, since there are no
* previous events
* - if there are no previous `onNext` events, then streams can be
* closed with `onComplete` and `onError` directly
* - if buffering of events happens, it is acceptable for events
* to get dropped when `onError` happens such that its delivery
* is prioritized
*
* ===On Dealing with the contract===
*
* Back-pressure means in essence that the speed with which the data-source
* produces events is adjusted to the speed with which the consumer consumes.
*
* For example, let's say we want to feed an iterator into an observer,
* similar to what we are doing in [[Observer.Extensions.feed(iterable* Observer.feed]],
* we might build a loop like this:
* {{{
* /** Transforms any Iterable into an Observable */
* def fromIterable[T](iterable: Iterable[T]): Observable[T] =
* Observable.create { sub =>
* implicit val s = sub.scheduler
* loop(sub, iterable.iterator).onComplete {
* case Success(Cancel) =>
* () // do nothing
* case Success(Continue) =>
* sub.onComplete()
* case Failure(ex) =>
* reportError(sub, ex)
* }
* }
*
* private def loop[T](o: Observer[T], iterator: Iterator[T])
* (implicit s: Scheduler): Future[Ack] = {
*
* try {
* if (iterator.hasNext) {
* val next = iterator.next()
* // signaling event, applying back-pressure
* o.onNext(next).flatMap {
* case Cancel => Cancel
* case Continue =>
* // signal next event (recursive, but async)
* loop(o, iterator)
* }
* }
* else {
* // nothing left to do, and because we are implementing
* // Observer.feed, the final acknowledgement is a `Continue`
* // assuming that the observer hasn't canceled or failed
* Continue
* }
* }
* catch {
* case NonFatal(ex) =>
* reportError(o, ex)
* }
* }
*
* private def reportError[T](o: Observer[T], ex: Throwable)
*   (implicit s: Scheduler): Ack =
* try { o.onError(ex); Cancel } catch {
* case NonFatal(err) =>
* // oops, onError failed, trying to
* // report it somewhere
* s.reportFailure(ex)
* s.reportFailure(err)
* Cancel
* }
* }}}
*
* There are cases in which the data-source can't be slowed down in response
* to the demand signaled through back-pressure. For such cases buffering
* is needed.
*
* For example to "imperatively" build an Observable, we could use channels:
* {{{
* val channel = PublishChannel[Int](OverflowStrategy.DropNew(bufferSize = 100))
*
* // look mum, no back-pressure concerns
* channel.pushNext(1)
* channel.pushNext(2)
* channel.pushNext(3)
* channel.pushComplete()
* }}}
*
* In Monifu a [[Channel]] is much like a [[Subject]], meaning that it can be
* used to construct observables, except that a `Channel` has a buffer
* attached and IS NOT an `Observer` (like the `Subject` is). In Monifu
* (compared to Rx implementations) [[Subject Subjects]] are subject to
* back-pressure concerns as well, so they can't be used in an imperative way,
* as described above, hence the need for Channels.
*
* Or for more serious and lower level jobs, you can simply take an
* `Observer` and wrap it into a
* [[monifu.reactive.observers.BufferedSubscriber BufferedSubscriber]].
*
* @see [[monifu.reactive.Observer Observer]], the interface that must be
* implemented by consumers
* @see [[monifu.concurrent.Scheduler Scheduler]], our enhanced `ExecutionContext`
* @see [[monifu.reactive.Subscriber Subscriber]], the cross between an
* [[Observer]] and a [[monifu.concurrent.Scheduler Scheduler]]
* @see [[monifu.concurrent.Cancelable Cancelable]], the type returned by higher
* level `subscribe` variants and that can be used to cancel subscriptions
* @see [[monifu.reactive.Subject Subject]], which are both Observables and Observers
* @see [[monifu.reactive.Channel Channel]], which are meant for imperatively building
* Observables without back-pressure concerns
*
* @define concatDescription Concatenates the sequence
* of Observables emitted by the source into one Observable,
* without any transformation.
*
* You can combine the items emitted by multiple Observables
* so that they act like a single Observable by using this
* method.
*
* The difference between the `concat` operation and
* [[Observable!.merge[U](implicit* merge]] is that `concat` cares about
* ordering of emitted items (e.g. all items emitted by the
* first observable in the sequence will come before the
* elements emitted by the second observable), whereas
* `merge` doesn't care about that (elements get emitted as
* they come). Because of back-pressure applied to
* observables, [[Observable!.concat]] is safe to use in all
* contexts, whereas `merge` requires buffering.
*
* @define concatReturn an Observable that emits items that are the result of
* flattening the items emitted by the Observables emitted
* by `this`
*
* @define mergeMapDescription Creates a new Observable by applying a
* function that you supply to each item emitted by the source
* Observable, where that function returns an Observable, and then
* merging those resulting Observables and emitting the
* results of this merger.
*
* This function is the equivalent of `observable.map(f).merge`.
*
* The difference between [[Observable!.concat concat]] and
* `merge` is that `concat` cares about ordering of emitted
* items (e.g. all items emitted by the first observable in
* the sequence will come before the elements emitted by the
* second observable), whereas `merge` doesn't care about that
* (elements get emitted as they come). Because of
* back-pressure applied to observables, [[Observable!.concat concat]]
* is safe to use in all contexts, whereas
* [[Observable!.merge[U](implicit* merge]] requires buffering.
*
* @define mergeMapReturn an Observable that emits the result of applying the
* transformation function to each item emitted by the source
* Observable and merging the results of the Observables
* obtained from this transformation.
*
* @define mergeDescription Merges the sequence of Observables emitted by
* the source into one Observable, without any transformation.
*
* You can combine the items emitted by multiple Observables
* so that they act like a single Observable by using this
* method.
*
* @define mergeReturn an Observable that emits items that are the
* result of flattening the items emitted by the Observables
* emitted by `this`
*
* @define overflowStrategyParam the [[OverflowStrategy overflow strategy]]
* used for buffering, which specifies what to do in case we're
* dealing with a slow consumer - should an unbounded buffer be used,
* should back-pressure be applied, should the pipeline drop newer or
* older events, should it drop the whole buffer? See
* [[OverflowStrategy]] for more details
*
* @define onOverflowParam a function that is used for signaling a special
* event used to inform the consumers that an overflow event
* happened, function that receives the number of dropped events as
* a parameter (see [[OverflowStrategy.Evicted]])
*
* @define delayErrorsDescription This version
* reserves onError notifications until all of the
* Observables complete and only then passes the issued
* error(s) along to the observers. Note that the streamed
* error is a [[monifu.reactive.exceptions.CompositeException CompositeException]],
* since multiple errors from multiple streams can happen.
*
* @define defaultOverflowStrategy this operation needs to do buffering
* and, if no [[OverflowStrategy]] is specified, the
* [[OverflowStrategy.default default strategy]] is
* used.
*
* @define switchDescription Convert an Observable that emits Observables
* into a single Observable that emits the items emitted by the
* most-recently-emitted of those Observables.
*
* @define switchMapDescription Returns a new Observable that emits the items
* emitted by the Observable most recently generated by the mapping
* function.
*
* @define asyncBoundaryDescription Forces a buffered asynchronous boundary.
*
* Internally it wraps the observer implementation given to
* `onSubscribe` into a
* [[monifu.reactive.observers.BufferedSubscriber BufferedSubscriber]].
*
* Normally Monifu's implementation guarantees that events
* are not emitted concurrently, and that the publisher MUST
* NOT emit the next event without acknowledgement from the
* consumer that it may proceed. However, for badly behaved
* publishers, this wrapper provides the guarantee that the
* downstream [[monifu.reactive.Observer Observer]] given in
* `subscribe` will not receive concurrent events.
*
* WARNING: if the buffer created by this operator is
* unbounded, it can blow up the process if the data source
* is pushing events faster than what the observer can
* consume, as it introduces an asynchronous boundary that
* eliminates the back-pressure requirements of the data
* source. Unbounded is the default
* [[monifu.reactive.OverflowStrategy overflowStrategy]], see
* [[monifu.reactive.OverflowStrategy OverflowStrategy]] for
* options.
*/
trait Observable[+T] { self =>
/**
* Characteristic function for an `Observable` instance: it creates
* the subscription and eventually starts the streaming of events
* to the given [[Observer]]. It is meant to be overridden in
* custom combinators or in classes implementing Observable.
*
* This function is "unsafe" to call because it does not protect the
* calls to the given [[Observer]] implementation with regard to
* unexpected exceptions that violate the contract, therefore the
* given instance must respect its contract and not throw any
* exceptions when the observable calls `onNext`, `onComplete` and
* `onError`. If it does, then the behavior is undefined.
*
* @see [[Observable.subscribe(observer* subscribe]].
*/
def onSubscribe(subscriber: Subscriber[T]): Unit
/**
* Subscribes to the stream.
*
* This function is "unsafe" to call because it does not protect the
* calls to the given [[Observer]] implementation with regard to
* unexpected exceptions that violate the contract, therefore the
* given instance must respect its contract and not throw any
* exceptions when the observable calls `onNext`, `onComplete` and
* `onError`. If it does, then the behavior is undefined.
*
* @param observer is an [[monifu.reactive.Observer Observer]] that respects
* the Monifu Rx contract
*
* @param s is the [[monifu.concurrent.Scheduler Scheduler]]
* used for creating the subscription
*/
def onSubscribe(observer: Observer[T])(implicit s: Scheduler): Unit = {
onSubscribe(Subscriber(observer, s))
}
/**
* Subscribes to the stream.
*
* @return a subscription that can be used to cancel the streaming.
*/
def subscribe(subscriber: Subscriber[T]): BooleanCancelable = {
val cancelable = BooleanCancelable()
takeWhileNotCanceled(cancelable).onSubscribe(SafeSubscriber[T](subscriber))
cancelable
}
/**
* Subscribes to the stream.
*
* @return a subscription that can be used to cancel the streaming.
*/
def subscribe(observer: Observer[T])(implicit s: Scheduler): BooleanCancelable = {
subscribe(Subscriber(observer, s))
}
/**
* Subscribes to the stream.
*
* @return a subscription that can be used to cancel the streaming.
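*
* A minimal usage sketch (illustrative; it assumes an implicit
* [[monifu.concurrent.Scheduler Scheduler]] is in scope):
*
* @example {{{
* import monifu.reactive.Ack.Continue
*
* Observable(1, 2, 3).subscribe(
*   elem => { println(s"next: $elem"); Continue },
*   error => println(s"error: $error"),
*   () => println("complete"))
* }}}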
*/
def subscribe(nextFn: T => Future[Ack], errorFn: Throwable => Unit, completedFn: () => Unit)
(implicit s: Scheduler): BooleanCancelable = {
subscribe(new Observer[T] {
def onNext(elem: T) = nextFn(elem)
def onComplete() = completedFn()
def onError(ex: Throwable) = errorFn(ex)
})
}
/**
* Subscribes to the stream.
*
* @return a subscription that can be used to cancel the streaming.
*/
def subscribe(nextFn: T => Future[Ack], errorFn: Throwable => Unit)(implicit s: Scheduler): BooleanCancelable =
subscribe(nextFn, errorFn, () => ())
/**
* Subscribes to the stream.
*
* @return a subscription that can be used to cancel the streaming.
*/
def subscribe()(implicit s: Scheduler): Cancelable =
subscribe(elem => Continue)
/**
* Subscribes to the stream.
*
* @return a subscription that can be used to cancel the streaming.
*/
def subscribe(nextFn: T => Future[Ack])(implicit s: Scheduler): BooleanCancelable =
subscribe(nextFn, error => s.reportFailure(error), () => ())
/**
* Wraps this Observable into a `org.reactivestreams.Publisher`.
* See the [[http://www.reactive-streams.org/ Reactive Streams]]
* protocol that Monifu implements.
*/
def toReactive[U >: T](implicit s: Scheduler): RPublisher[U] =
new RPublisher[U] {
def subscribe(subscriber: RSubscriber[_ >: U]): Unit = {
onSubscribe(SafeSubscriber(Subscriber.fromReactiveSubscriber(subscriber)))
}
}
/**
* Returns an Observable that applies the given function to each item emitted by an
* Observable and emits the result.
*
* @param f a function to apply to each item emitted by the Observable
* @return an Observable that emits the items from the source Observable, transformed by the given function
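*
* A quick illustrative sketch (output produced via the `dump` helper,
* assuming an implicit [[monifu.concurrent.Scheduler Scheduler]] is in scope):
*
* @example {{{
* Observable(1, 2, 3).map(_ * 2).dump("O").subscribe()
*
* // 0: O-->2
* // 1: O-->4
* // 2: O-->6
* // 3: O completed
* }}}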
*/
def map[U](f: T => U): Observable[U] =
operators.map(self)(f)
/**
* Returns an Observable which only emits those items for which the given predicate holds.
*
* @param p a function that evaluates the items emitted by the source Observable,
* returning `true` if they pass the filter
*
* @return an Observable that emits only those items in the original Observable
* for which the filter evaluates as `true`
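*
* For example (illustrative):
* @example {{{
* // emits only the even numbers: 2, 4
* Observable(1, 2, 3, 4).filter(_ % 2 == 0)
* }}}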
*/
def filter(p: T => Boolean): Observable[T] =
operators.filter(self)(p)
/**
* Returns an Observable built by applying the given partial function to each element
* of the source observable for which the partial function is defined.
*
* Useful as a replacement for a filter & map combination.
*
* @param pf the function that filters and maps the resulting observable
* @return an Observable that emits the transformed items by the given partial function
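*
* A sketch of how it can replace a filter & map combination (illustrative):
* @example {{{
* // keeps only the strings made of digits, parsed as numbers: 1, 3
* Observable("1", "two", "3").collect {
*   case s if s.nonEmpty && s.forall(_.isDigit) => s.toInt
* }
* }}}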
*/
def collect[U](pf: PartialFunction[T, U]): Observable[U] =
operators.collect(self)(pf)
/**
* Creates a new Observable by applying a function that you supply to each item emitted by
* the source Observable, where that function returns an Observable, and then concatenating those
* resulting Observables and emitting the results of this concatenation.
*
* @param f a function that, when applied to an item emitted by the source Observable, returns an Observable
* @return an Observable that emits the result of applying the transformation function to each
* item emitted by the source Observable and concatenating the results of the Observables
* obtained from this transformation.
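*
* For example (illustrative; `flatMap` concatenates, so ordering is preserved):
* @example {{{
* // emits 1, 2, 2, 4, 3, 6
* Observable(1, 2, 3).flatMap(x => Observable(x, x * 2))
* }}}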
*/
def flatMap[U](f: T => Observable[U]): Observable[U] =
map(f).flatten
/**
* Creates a new Observable by applying a function that you supply to each item emitted by
* the source Observable, where that function returns an Observable, and then concatenating those
* resulting Observables and emitting the results of this concatenation.
*
* It's an alias for [[Observable!.concatMapDelayError]].
*
* @param f a function that, when applied to an item emitted by the source Observable, returns an Observable
* @return an Observable that emits the result of applying the transformation function to each
* item emitted by the source Observable and concatenating the results of the Observables
* obtained from this transformation.
*/
def flatMapDelayError[U](f: T => Observable[U]): Observable[U] =
map(f).concatDelayError
/**
* Creates a new Observable by applying a function that you supply to each item emitted by
* the source Observable, where that function returns an Observable, and then concatenating those
* resulting Observables and emitting the results of this concatenation.
*
* @param f a function that, when applied to an item emitted by the source Observable, returns an Observable
* @return an Observable that emits the result of applying the transformation function to each
* item emitted by the source Observable and concatenating the results of the Observables
* obtained from this transformation.
*/
def concatMap[U](f: T => Observable[U]): Observable[U] =
map(f).concat
/**
* Creates a new Observable by applying a function that you supply to each item emitted by
* the source Observable, where that function returns an Observable, and then concatenating those
* resulting Observables and emitting the results of this concatenation.
*
* It's like [[Observable!.concatMap]], except that the created observable reserves onError
* notifications until all of the concatenated Observables complete and only then passes them along
* to the observers.
*
* @param f a function that, when applied to an item emitted by the source Observable, returns an Observable
* @return an Observable that emits the result of applying the transformation function to each
* item emitted by the source Observable and concatenating the results of the Observables
* obtained from this transformation.
*/
def concatMapDelayError[U](f: T => Observable[U]): Observable[U] =
map(f).concatDelayError
/**
* $mergeMapDescription
*
* @param f - the transformation function
* @return $mergeMapReturn
*/
def mergeMap[U](f: T => Observable[U]): Observable[U] =
map(f).merge
/**
* $mergeMapDescription
*
* $delayErrorsDescription
*
* @param f - the transformation function
* @return $mergeMapReturn
*/
def mergeMapDelayErrors[U](f: T => Observable[U]): Observable[U] =
map(f).mergeDelayErrors
/**
* Alias for [[Observable!.concat]].
*
* $concatDescription
*
* @return $concatReturn
*/
def flatten[U](implicit ev: T <:< Observable[U]): Observable[U] =
concat
/**
* Alias for [[Observable!.concatDelayError]].
*
* $concatDescription
* $delayErrorsDescription
*
* @return $concatReturn
*/
def flattenDelayError[U](implicit ev: T <:< Observable[U]): Observable[U] =
concatDelayError
/**
* $concatDescription
*
* @return $concatReturn
*/
def concat[U](implicit ev: T <:< Observable[U]): Observable[U] =
operators.flatten.concat(self, delayErrors = false)
/**
* $concatDescription
*
* $delayErrorsDescription
*
* @return $concatReturn
*/
def concatDelayError[U](implicit ev: T <:< Observable[U]): Observable[U] =
operators.flatten.concat(self, delayErrors = true)
/**
* $mergeDescription
*
* @note $defaultOverflowStrategy
* @return $mergeReturn
*/
def merge[U](implicit ev: T <:< Observable[U]): Observable[U] = {
operators.flatten.merge(self)(defaultStrategy,
onOverflow = null, delayErrors = false)
}
/**
* $mergeDescription
*
* @param overflowStrategy - $overflowStrategyParam
* @return $mergeReturn
*/
def merge[U](overflowStrategy: OverflowStrategy)
(implicit ev: T <:< Observable[U]): Observable[U] = {
operators.flatten.merge(self)(overflowStrategy,
onOverflow = null, delayErrors = false)
}
/**
* $mergeDescription
*
* @param overflowStrategy - $overflowStrategyParam
* @param onOverflow - $onOverflowParam
* @return $mergeReturn
*/
def merge[U](overflowStrategy: OverflowStrategy.Evicted, onOverflow: Long => U)
(implicit ev: T <:< Observable[U]): Observable[U] = {
operators.flatten.merge(self)(overflowStrategy,
onOverflow, delayErrors = false)
}
/**
* $mergeDescription
*
* $delayErrorsDescription
*
* @note $defaultOverflowStrategy
* @return $mergeReturn
*/
def mergeDelayErrors[U](implicit ev: T <:< Observable[U]): Observable[U] = {
operators.flatten.merge(self)(defaultStrategy, null, delayErrors = true)
}
/**
* $mergeDescription
*
* $delayErrorsDescription
*
* @param overflowStrategy - $overflowStrategyParam
* @return $mergeReturn
*/
def mergeDelayErrors[U](overflowStrategy: OverflowStrategy)
(implicit ev: T <:< Observable[U]): Observable[U] = {
operators.flatten.merge(self)(overflowStrategy, null, delayErrors = true)
}
/**
* $mergeDescription
*
* $delayErrorsDescription
*
* @param overflowStrategy - $overflowStrategyParam
* @param onOverflow - $onOverflowParam
* @return $mergeReturn
*/
def mergeDelayErrors[U](overflowStrategy: OverflowStrategy.Evicted, onOverflow: Long => U)
(implicit ev: T <:< Observable[U]): Observable[U] = {
operators.flatten.merge(self)(overflowStrategy, onOverflow, delayErrors = true)
}
/**
* $switchDescription
*/
def switch[U](implicit ev: T <:< Observable[U]): Observable[U] =
operators.switch(self)
/**
* $switchMapDescription
*/
def switchMap[U](f: T => Observable[U]): Observable[U] =
map(f).switch
/**
* Alias for [[Observable!.switch]]
*
* $switchDescription
*/
def flattenLatest[U](implicit ev: T <:< Observable[U]): Observable[U] =
operators.switch(self)
/**
* An alias of [[Observable!.switchMap]].
*
* $switchMapDescription
*/
def flatMapLatest[U](f: T => Observable[U]): Observable[U] =
map(f).flattenLatest
/**
* Given the source observable and another `Observable`, emits all of the items
* from the first of these Observables to emit an item and cancel the other.
*/
def ambWith[U >: T](other: Observable[U]): Observable[U] = {
Observable.amb(self, other)
}
/**
* Emit items from the source Observable, or emit a default item if
* the source Observable completes after emitting no items.
*/
def defaultIfEmpty[U >: T](default: U): Observable[U] =
operators.misc.defaultIfEmpty(self, default)
/**
* Selects the first ''n'' elements (from the start).
*
* @param n the number of elements to take
* @return a new Observable that emits only the first ''n'' elements from the source
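*
* For example (illustrative):
* @example {{{
* // emits 1, 2 and then completes
* Observable(1, 2, 3, 4, 5).take(2)
* }}}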
*/
def take(n: Long): Observable[T] =
operators.take.left(self, n)
/**
* Creates a new Observable that emits the events of the source, only
* for the specified `timespan`, after which it completes.
*
* @param timespan the window of time during which the new Observable
* is allowed to emit the events of the source
*/
def take(timespan: FiniteDuration): Observable[T] =
operators.take.leftByTimespan(self, timespan)
/**
* Creates a new Observable that only emits the last `n` elements
* emitted by the source.
*/
def takeRight(n: Int): Observable[T] =
operators.take.right(self, n)
/**
* Drops the first ''n'' elements (from the start).
*
* @param n the number of elements to drop
* @return a new Observable that drops the first ''n'' elements
* emitted by the source
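*
* For example (illustrative):
* @example {{{
* // emits 3, 4, 5 and then completes
* Observable(1, 2, 3, 4, 5).drop(2)
* }}}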
*/
def drop(n: Int): Observable[T] =
operators.drop.byCount(self, n)
/**
* Creates a new Observable that drops the events of the source, only
* for the specified `timespan` window.
*
* @param timespan the window of time during which the new Observable
* must drop the events emitted by the source
*/
def dropByTimespan(timespan: FiniteDuration): Observable[T] =
operators.drop.byTimespan(self, timespan)
/**
* Drops the longest prefix of elements that satisfy the given predicate
* and returns a new Observable that emits the rest.
*/
def dropWhile(p: T => Boolean): Observable[T] =
operators.drop.byPredicate(self)(p)
/**
* Drops the longest prefix of elements that satisfy the given function
* and returns a new Observable that emits the rest. In comparison with
* [[dropWhile]], this version accepts a function that takes an additional
* parameter: the zero-based index of the element.
*/
def dropWhileWithIndex(p: (T, Int) => Boolean): Observable[T] =
operators.drop.byPredicateWithIndex(self)(p)
/**
* Takes the longest prefix of elements that satisfy the given predicate
* and returns a new Observable that emits those elements.
*/
def takeWhile(p: T => Boolean): Observable[T] =
operators.take.byPredicate(self)(p)
/**
* Creates a new Observable that mirrors the source and keeps emitting elements
* for as long as the given [[monifu.concurrent.cancelables.BooleanCancelable BooleanCancelable]] isn't canceled.
*/
def takeWhileNotCanceled(c: BooleanCancelable): Observable[T] =
operators.take.takeWhileNotCanceled(self, c)
/**
* Creates a new Observable that emits the total number of `onNext` events
* that were emitted by the source.
*
* Note that this Observable emits only one item after the source is complete.
* And in case the source emits an error, then only that error will be
* emitted.
*/
def count: Observable[Long] =
operators.math.count(self)
/**
* Periodically gather items emitted by an Observable into bundles and emit
* these bundles rather than emitting the items one at a time. This version
* of `buffer` emits items once the internal buffer has reached the
* given count.
*
* @param count the maximum size of each buffer before it should be emitted
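*
* For example (illustrative; the concrete `Seq` type may differ):
* @example {{{
* // emits Seq(1, 2), Seq(3, 4), Seq(5) and then completes
* Observable(1, 2, 3, 4, 5).buffer(2)
* }}}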
*/
def buffer(count: Int): Observable[Seq[T]] =
operators.buffer.skipped(self, count, count)
/**
* Returns an Observable that emits buffers of items it collects from the
* source Observable. The resulting Observable emits buffers every `skip`
* items, each containing `count` items. When the source Observable completes
* or encounters an error, the resulting Observable emits the current buffer
* and propagates the notification from the source Observable.
*
* There are 3 possibilities:
*
* 1. in case `skip == count`, then there are no items dropped and no overlap,
* the call being equivalent to `buffer(count)`
* 2. in case `skip < count`, then overlap between windows happens, with the
* number of elements being repeated being `count - skip`
* 3. in case `skip > count`, then `skip - count` elements start getting
* dropped between windows
*
* @param count the maximum size of each buffer before it should be emitted
* @param skip how many items emitted by the source Observable should be
* skipped before starting a new buffer. Note that when skip and
* count are equal, this is the same operation as `buffer(count)`
*/
def buffer(count: Int, skip: Int): Observable[Seq[T]] =
operators.buffer.skipped(self, count, skip)
/**
* Periodically gather items emitted by an Observable into bundles and emit
* these bundles rather than emitting the items one at a time.
*
* This version of `buffer` emits a new bundle of items periodically,
* every timespan amount of time, containing all items emitted by the
* source Observable since the previous bundle emission.
*
* @param timespan the interval of time at which it should emit the buffered bundle
*/
def buffer(timespan: FiniteDuration): Observable[Seq[T]] =
operators.buffer.timed(self, maxCount = 0, timespan = timespan)
/**
* Periodically gather items emitted by an Observable into bundles and emit
* these bundles rather than emitting the items one at a time.
*
* The resulting Observable emits connected, non-overlapping buffers, each of
* a fixed duration specified by the `timespan` argument or a maximum size
* specified by the `maxSize` argument (whichever is reached first). When the
* source Observable completes or encounters an error, the resulting
* Observable emits the current buffer and propagates the notification from
* the source Observable.
*
* @param timespan the interval of time at which it should emit the buffered bundle
* @param maxSize is the maximum bundle size
*/
def buffer(timespan: FiniteDuration, maxSize: Int): Observable[Seq[T]] =
operators.buffer.timed(self, timespan, maxSize)
/**
* Periodically subdivide items from an Observable into Observable windows and
* emit these windows rather than emitting the items one at a time.
*
* This variant of window opens its first window immediately. It closes the
* currently open window and immediately opens a new one whenever the current
* window has emitted count items. It will also close the currently open
* window if it receives an onComplete or onError notification from the
* source Observable. This variant of window emits a series of non-overlapping
* windows whose collective emissions correspond one-to-one with those of
* the source Observable.
*
* @param count the bundle size
*/
def window(count: Int): Observable[Observable[T]] =
operators.window.skipped(self, count, count)
/**
* Returns an Observable that emits windows of items it collects from the
* source Observable. The resulting Observable emits windows every skip items,
* each containing no more than count items. When the source Observable
* completes or encounters an error, the resulting Observable emits the
* current window and propagates the notification from the source Observable.
*
* There are 3 possibilities:
*
* 1. in case `skip == count`, then there are no items dropped and no overlap,
* the call being equivalent to `window(count)`
* 2. in case `skip < count`, then overlap between windows happens, with the
* number of elements being repeated being `count - skip`
* 3. in case `skip > count`, then `skip - count` elements start getting
* dropped between windows
*
* @param count - the maximum size of each window before it should be emitted
* @param skip - how many items need to be skipped before starting a new window
*/
def window(count: Int, skip: Int): Observable[Observable[T]] =
operators.window.skipped(self, count, skip)
/**
* Periodically subdivide items from an Observable into Observable windows and
* emit these windows rather than emitting the items one at a time.
*
* The resulting Observable emits connected, non-overlapping windows,
* each of a fixed duration specified by the timespan argument. When
* the source Observable completes or encounters an error, the resulting
* Observable emits the current window and propagates the notification
* from the source Observable.
*
* @param timespan the interval of time at which it should complete the
* current window and emit a new one
*/
def window(timespan: FiniteDuration): Observable[Observable[T]] =
operators.window.timed(self, timespan, maxCount = 0)
/**
* Periodically subdivide items from an Observable into Observable windows and
* emit these windows rather than emitting the items one at a time.
*
* The resulting Observable emits connected, non-overlapping windows,
* each of a fixed duration specified by the timespan argument. When
* the source Observable completes or encounters an error, the resulting
* Observable emits the current window and propagates the notification
* from the source Observable.
*
* @param timespan the interval of time at which it should complete the
* current window and emit a new one
* @param maxCount the maximum size of each window
*/
def window(timespan: FiniteDuration, maxCount: Int): Observable[Observable[T]] =
operators.window.timed(self, timespan, maxCount)
/**
* Groups the items emitted by an Observable according to a specified
* criterion, and emits these grouped items as GroupedObservables,
* one GroupedObservable per group.
*
* Note: A [[monifu.reactive.observables.GroupedObservable GroupedObservable]]
* will cache the items it is to emit until such time as it is
* subscribed to. For this reason, in order to avoid memory leaks,
* you should not simply ignore those GroupedObservables that do not
* concern you. Instead, you can signal to them that they may
* discard their buffers by doing something like `source.take(0)`.
*
* @param keySelector - a function that extracts the key for each item
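*
* A usage sketch (illustrative):
* @example {{{
* // splits the numbers into an "even" and an "odd" group, then
* // collects each group into a Seq once the source completes
* Observable(1, 2, 3, 4, 5)
*   .groupBy(x => if (x % 2 == 0) "even" else "odd")
*   .flatMap(group => group.foldLeft(Seq.empty[Int])(_ :+ _))
* }}}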
*/
def groupBy[K](keySelector: T => K): Observable[GroupedObservable[K,T]] =
operators.groupBy.apply(self, OverflowStrategy.Unbounded, keySelector)
/**
* Groups the items emitted by an Observable according to a specified
* criterion, and emits these grouped items as GroupedObservables,
* one GroupedObservable per group.
*
* A [[monifu.reactive.observables.GroupedObservable GroupedObservable]]
* will cache the items it is to emit until such time as it is
* subscribed to. For this reason, in order to avoid memory leaks,
* you should not simply ignore those GroupedObservables that do not
* concern you. Instead, you can signal to them that they may
* discard their buffers by doing something like `source.take(0)`.
*
* This variant of `groupBy` specifies a `keyBufferSize` representing the
* size of the buffer that holds our keys. We cannot block when emitting
* a new `GroupedObservable`, so by specifying a buffer size, the resulting
* observable will terminate with an `onError` on overflow.
*
* @param keySelector - a function that extracts the key for each item
* @param keyBufferSize - the buffer size used for buffering keys
*/
def groupBy[K](keyBufferSize: Int, keySelector: T => K): Observable[GroupedObservable[K,T]] =
operators.groupBy.apply(self, OverflowStrategy.Fail(keyBufferSize), keySelector)
/**
* Returns an Observable that emits only the last item emitted by the source
* Observable during sequential time windows of a specified duration.
*
* This differs from [[Observable!.throttleFirst]] in that this ticks along
* at a scheduled interval whereas `throttleFirst` does not tick, it just
* tracks passage of time.
*
* @param period - duration of windows within which the last item
* emitted by the source Observable will be emitted
*/
def throttleLast(period: FiniteDuration): Observable[T] =
sample(period)
/**
* Returns an Observable that emits only the first item emitted by the source
* Observable during sequential time windows of a specified duration.
*
* This differs from [[Observable!.throttleLast]] in that this only tracks
* passage of time whereas `throttleLast` ticks at scheduled intervals.
*
* @param interval time to wait before emitting another item after
* emitting the last item
*/
def throttleFirst(interval: FiniteDuration): Observable[T] =
operators.throttle.first(self, interval)
/**
* Alias for [[Observable!.debounce(timeout* debounce]].
*
* Returns an Observable that only emits those items emitted by the source
* Observable that are not followed by another emitted item within a
* specified time window.
*
* Note: If the source Observable keeps emitting items more frequently than
* the length of the time window then no items will be emitted by the
* resulting Observable.
*
* @param timeout - the length of the window of time that must pass after the
* emission of an item from the source Observable in which that
* Observable emits no items in order for the item to be
* emitted by the resulting Observable
*/
def throttleWithTimeout(timeout: FiniteDuration): Observable[T] =
debounce(timeout)
/**
* Emit the most recent items emitted by an Observable within periodic time
* intervals.
*
* Use the `sample` operator to periodically look at an Observable
* to see what item it has most recently emitted since the previous
* sampling. Note that if the source Observable has emitted no
* items since the last time it was sampled, the Observable that
* results from the sample() operator will emit no item for
* that sampling period.
*
* @param delay the timespan at which sampling occurs and note that this is
* not accurate as it is subject to back-pressure concerns - as in
* if the delay is 1 second and the processing of an event on `onNext`
* in the observer takes one second, then the actual sampling delay
* will be 2 seconds.
*/
def sample(delay: FiniteDuration): Observable[T] =
sample(delay, delay)
/**
* Emit the most recent items emitted by an Observable within periodic time
* intervals.
*
* Use the sample() method to periodically look at an Observable
* to see what item it has most recently emitted since the previous
* sampling. Note that if the source Observable has emitted no
* items since the last time it was sampled, the Observable that
* results from the sample() operator will emit no item for
* that sampling period.
*
* @param initialDelay the initial delay after which sampling can happen
*
* @param delay the timespan at which sampling occurs and note that this is
* not accurate as it is subject to back-pressure concerns - as in
* if the delay is 1 second and the processing of an event on `onNext`
* in the observer takes one second, then the actual sampling delay
* will be 2 seconds.
*/
def sample(initialDelay: FiniteDuration, delay: FiniteDuration): Observable[T] =
operators.sample.once(self, initialDelay, delay)
/**
* Returns an Observable that, when the specified sampler Observable emits an
* item or completes, emits the most recently emitted item (if any) emitted
* by the source Observable since the previous emission from the sampler
* Observable.
*
* Use the sample() method to periodically look at an Observable
* to see what item it has most recently emitted since the previous
* sampling. Note that if the source Observable has emitted no
* items since the last time it was sampled, the Observable that
* results from the sample() operator will emit no item.
*
* @param sampler - the Observable to use for sampling the source Observable
*/
def sample[U](sampler: Observable[U]): Observable[T] =
operators.sample.once(self, sampler)
/**
* Emit the most recent items emitted by an Observable within periodic time
* intervals. If no new value has been emitted since the last time it
* was sampled, then it emits the last emitted value anyway.
*
* Also see [[Observable!.sample(delay* Observable.sample]].
*
* @param delay the timespan at which sampling occurs and note that this is
* not accurate as it is subject to back-pressure concerns - as in
* if the delay is 1 second and the processing of an event on `onNext`
* in the observer takes one second, then the actual sampling delay
* will be 2 seconds.
*/
def sampleRepeated(delay: FiniteDuration): Observable[T] =
sampleRepeated(delay, delay)
/**
* Emit the most recent items emitted by an Observable within periodic time
* intervals. If no new value has been emitted since the last time it
* was sampled, then it emits the last emitted value anyway.
*
* Also see [[Observable!.sample(initial* sample]].
*
* @param initialDelay the initial delay after which sampling can happen
*
* @param delay the timespan at which sampling occurs and note that this is
* not accurate as it is subject to back-pressure concerns - as in
* if the delay is 1 second and the processing of an event on `onNext`
* in the observer takes one second, then the actual sampling delay
* will be 2 seconds.
*/
def sampleRepeated(initialDelay: FiniteDuration, delay: FiniteDuration): Observable[T] =
operators.sample.repeated(self, initialDelay, delay)
/**
* Returns an Observable that, when the specified sampler Observable emits an
* item or completes, emits the most recently emitted item (if any) emitted
* by the source Observable since the previous emission from the sampler
* Observable. If no new value has been emitted since the last time it
* was sampled, then it emits the last emitted value anyway.
*
* @see [[Observable!.sample[U](sampler* Observable.sample]]
*
* @param sampler - the Observable to use for sampling the source Observable
*/
def sampleRepeated[U](sampler: Observable[U]): Observable[T] =
operators.sample.repeated(self, sampler)
/**
* Only emit an item from an Observable if a particular
* timespan has passed without it emitting another item.
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window then no items will be emitted
* by the resulting Observable.
*
* @param timeout the length of the window of time that must pass after
* the emission of an item from the source Observable in which
* that Observable emits no items in order for the item to
* be emitted by the resulting Observable
*
* @see [[Observable.echoOnce echoOnce]] for a similar operator that also
* mirrors the source observable
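*
* A usage sketch (illustrative; `source` stands for any observable
* emitting in bursts):
* @example {{{
* import scala.concurrent.duration._
*
* // emits an item only after `source` stayed silent for 300 millis
* source.debounce(300.millis)
* }}}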
*/
def debounce(timeout: FiniteDuration): Observable[T] =
operators.debounce.timeout(self, timeout, repeat = false)
/**
* Emits the last item from the source Observable if a particular
* timespan has passed without it emitting another item,
* and keeps emitting that item at regular intervals until
* the source breaks the silence.
*
* So compared to regular [[Observable!.debounce(timeout* debounce]]
* it keeps emitting the last item of the source.
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window then no items will be emitted
* by the resulting Observable.
*
* @param period the length of the window of time that must pass after
* the emission of an item from the source Observable in which
* that Observable emits no items in order for the item to
* be emitted by the resulting Observable at regular intervals,
* also determined by period
*
* @see [[Observable.echoRepeated echoRepeated]] for a similar operator
* that also mirrors the source observable
*/
def debounceRepeated(period: FiniteDuration): Observable[T] =
operators.debounce.timeout(self, period, repeat = true)
/**
* Doesn't emit anything until a `timeout` period passes without
* the source emitting anything. When that timeout happens,
* we subscribe to the observable generated by the given function,
* an observable that will keep emitting until the source
* breaks the silence by emitting another event.
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window, then no items will be emitted
* by the resulting Observable.
*
* @param f is a function that receives the last element generated by the
* source, generating an observable to be subscribed when the
* source is timing out
*
* @param timeout the length of the window of time that must pass after
* the emission of an item from the source Observable in which
* that Observable emits no items in order for the item to
* be emitted by the resulting Observable
*/
def debounce[U](timeout: FiniteDuration, f: T => Observable[U]): Observable[U] =
operators.debounce.flatten(self, timeout, f)
/**
* Only emit an item from an Observable if a particular
* timespan has passed without it emitting another item,
* a timespan indicated by the completion of an observable
* generated by the `selector` function.
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window then no items will be emitted
* by the resulting Observable.
*
* @param selector function to retrieve a sequence that indicates the
* throttle duration for each item
*/
def debounce(selector: T => Observable[Any]): Observable[T] =
operators.debounce.bySelector(self, selector)
/**
* Only emit an item from an Observable if a particular
* timespan has passed without it emitting another item,
* a timespan indicated by the completion of an observable
* generated by the `selector` function.
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window then no items will be emitted
* by the resulting Observable.
*
* @param selector function to retrieve a sequence that indicates the
* throttle duration for each item
*
* @param f is a function that receives the last element generated by the
* source, generating an observable to be subscribed when the
* source is timing out
*/
def debounce[U](selector: T => Observable[Any], f: T => Observable[U]): Observable[U] =
operators.debounce.flattenBySelector(self, selector, f)
/**
* Mirror the source observable as long as the source keeps emitting items,
* otherwise if `timeout` passes without the source emitting anything new
* then the observable will emit the last item.
*
* This is the rough equivalent of:
* {{{
* Observable.merge(source, source.debounce(period))
* }}}
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window then the resulting observable
* will mirror the source exactly.
*
* @param timeout the window of silence that must pass in order for the
* observable to echo the last item
*/
def echoOnce(timeout: FiniteDuration): Observable[T] =
operators.echo.apply(self, timeout, onlyOnce = true)
/**
* Mirror the source observable as long as the source keeps emitting items,
* otherwise if `timeout` passes without the source emitting anything new
* then the observable will start emitting the last item repeatedly.
*
* This is the rough equivalent of:
* {{{
* source.switch { e =>
* e +: Observable.intervalWithFixedDelay(delay, delay)
* }
* }}}
*
* Note: If the source Observable keeps emitting items more frequently
* than the length of the time window then the resulting observable
* will mirror the source exactly.
*
* @param timeout the window of silence that must pass in order for the
* observable to start echoing the last item
*/
def echoRepeated(timeout: FiniteDuration): Observable[T] =
operators.echo.apply(self, timeout, onlyOnce = false)
/**
* Hold an Observer's subscription request until the given `trigger`
* observable either emits an item or completes, before passing it on to
* the source Observable.
*
* If the given `trigger` completes in error, then the subscription is
* terminated with `onError`.
*
* @param trigger - the observable that must either emit an item or
* complete in order for the source to be subscribed.
*/
def delaySubscription[U](trigger: Observable[U]): Observable[T] =
operators.delaySubscription.onTrigger(self, trigger)
/**
* Hold an Observer's subscription request for a specified
* amount of time before passing it on to the source Observable.
*
* @param timespan is the time to wait before the subscription
* is initiated.
*/
def delaySubscription(timespan: FiniteDuration): Observable[T] =
operators.delaySubscription.onTimespan(self, timespan)
/**
* Returns an Observable that emits the items emitted by the source
* Observable shifted forward in time by a specified delay.
*
* Each time the source Observable emits an item, delay starts a timer,
* and when that timer reaches the given duration, the Observable
* returned from delay emits the same item.
*
* NOTE: this delay refers strictly to the time between the `onNext`
* event coming from our source and the time it takes the downstream
* observer to get this event. On the other hand the operator is also
* applying back-pressure, so on slow observers the actual time passing
* between two successive events may be higher than the
* specified `duration`.
*
* @param duration - the delay to shift the source by
* @return the source Observable shifted in time by the specified delay
*/
def delay(duration: FiniteDuration): Observable[T] =
operators.delay.byDuration(self, duration)
/**
* Returns an Observable that emits the items emitted by the source
* Observable shifted forward in time.
*
* This variant of `delay` sets its delay duration on a per-item basis by
* passing each item from the source Observable into a function that returns
* an Observable and then monitoring those Observables. When any such
* Observable emits an item or completes, the Observable returned
* by delay emits the associated item.
*
* @see [[Observable!.delay(duration* delay(duration)]] for the other variant
*
* @param selector - a function that returns an Observable for each item
* emitted by the source Observable, which is then used
* to delay the emission of that item by the resulting
* Observable until the Observable returned
* from `selector` emits an item
*
* @return the source Observable shifted in time by
* the specified delay
*/
def delay[U](selector: T => Observable[U]): Observable[T] =
operators.delay.bySelector(self, selector)
/**
* Applies a binary operator to a start value and all elements of this Observable,
* going left to right and returns a new Observable that emits only one item
* before `onComplete`.
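*
* For example (illustrative):
* @example {{{
* // emits a single item, the sum 10, then completes
* Observable(1, 2, 3, 4).foldLeft(0)(_ + _)
* }}}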
*/
def foldLeft[R](initial: R)(op: (R, T) => R): Observable[R] =
operators.foldLeft(self, initial)(op)
/**
* Applies a binary operator to all the elements of this Observable,
* going left to right and returns a new Observable that emits only one item
* before `onComplete`.
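*
* For example (illustrative):
* @example {{{
* // emits a single item, the product 24, then completes
* Observable(1, 2, 3, 4).reduce(_ * _)
* }}}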
*/
def reduce[U >: T](op: (U, U) => U): Observable[U] =
operators.reduce(self : Observable[U])(op)
/**
* Applies a binary operator to a start value and all elements of this Observable,
* going left to right and returns a new Observable that emits on each step the result
* of the applied function.
*
* Similar to [[foldLeft]], but emits the state on each step. Useful for modeling finite
* state machines.
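*
* For example (illustrative):
* @example {{{
* // emits the running sums 1, 3, 6, 10
* Observable(1, 2, 3, 4).scan(0)(_ + _)
* }}}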
*/
def scan[R](initial: R)(op: (R, T) => R): Observable[R] =
operators.scan(self, initial)(op)
/**
* Applies a binary operator to a start value and to elements produced
* by the source observable, going from left to right, producing
* and concatenating observables along the way.
*
* It's the combination between [[monifu.reactive.Observable.scan scan]]
* and [[monifu.reactive.Observable.flatten]].
*/
def flatScan[R](initial: R)(op: (R, T) => Observable[R]): Observable[R] =
operators.flatScan(self, initial)(op)
/**
* Applies a binary operator to a start value and to elements produced
* by the source observable, going from left to right, producing
* and concatenating observables along the way.
*
* It's the combination between [[monifu.reactive.Observable.scan scan]]
* and [[monifu.reactive.Observable.flattenDelayError]].
*/
def flatScanDelayError[R](initial: R)(op: (R, T) => Observable[R]): Observable[R] =
operators.flatScan.delayError(self, initial)(op)
/**
* Executes the given callback when the stream has ended,
* but before the complete event is emitted.
*
* @param cb the callback to execute when the streaming ends, before `onComplete` is emitted
*/
def doOnComplete(cb: => Unit): Observable[T] =
operators.doWork.onComplete(self)(cb)
/**
* Executes the given callback for each element generated by the source
* Observable, useful for doing side-effects.
*
* @return a new Observable that executes the specified callback for each element
*/
def doWork(cb: T => Unit): Observable[T] =
operators.doWork.onNext(self)(cb)
/**
* Executes the given callback only for the first element generated by the source
* Observable, useful for doing a piece of computation only when the stream starts.
*
* @return a new Observable that executes the specified callback only for the first element
*/
def doOnStart(cb: T => Unit): Observable[T] =
operators.doWork.onStart(self)(cb)
/**
* Executes the given callback if the downstream observer
* has canceled the streaming.
*/
def doOnCanceled(cb: => Unit): Observable[T] =
operators.doWork.onCanceled(self)(cb)
/**
* Executes the given callback when the stream is interrupted
* with an error, before the `onError` event is emitted downstream.
*
* NOTE: you should protect the code in this callback, because if it
* throws an exception the `onError` event will prefer signaling the
* original exception; otherwise the behavior is undefined.
*/
def doOnError(cb: Throwable => Unit): Observable[T] =
operators.doWork.onError(self)(cb)
/**
* Returns an Observable which only emits the first item for which the predicate holds.
*
* @param p a function that evaluates the items emitted by the source Observable, returning `true` if they pass the filter
* @return an Observable that emits only the first item in the original Observable for which the filter evaluates as `true`
*/
def find(p: T => Boolean): Observable[T] =
filter(p).head
/**
* Returns an Observable which emits a single value, either true, in case the given predicate holds for at least
* one item, or false otherwise.
*
* @param p a function that evaluates the items emitted by the source Observable, returning `true` if they pass the filter
* @return an Observable that emits only true or false in case the given predicate holds or not for at least one item
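*
* For example (illustrative):
* @example {{{
* // emits a single `true`, because 4 is even
* Observable(1, 2, 3, 4).exists(_ % 2 == 0)
* }}}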
*/
def exists(p: T => Boolean): Observable[Boolean] =
find(p).foldLeft(false)((_, _) => true)
/**
* Returns an Observable that emits true if the source Observable
* is empty, otherwise false.
*/
def isEmpty: Observable[Boolean] =
operators.misc.isEmpty(self)
/**
* Returns an Observable that emits false if the source Observable
* is empty, otherwise true.
*/
def nonEmpty: Observable[Boolean] =
operators.misc.isEmpty(self).map(isEmpty => !isEmpty)
/**
   * Returns an Observable that emits a single boolean: true in case the given predicate holds for all the items
   * emitted by the source, or false in case at least one item does not satisfy the given predicate.
*
* @param p a function that evaluates the items emitted by the source Observable, returning `true` if they pass the filter
* @return an Observable that emits only true or false in case the given predicate holds or not for all the items
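   *
   * A small usage sketch, with indicative output:
   *
   * @example {{{
   *   Observable(2, 4, 6).forAll(_ % 2 == 0).dump("O").subscribe()
   *
   *   // 0: O-->true
   *   // 1: O completed
   * }}}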
*/
def forAll(p: T => Boolean): Observable[Boolean] =
exists(e => !p(e)).map(r => !r)
/**
* Alias for [[Observable!.complete]].
*
* Ignores all items emitted by the source Observable and
* only calls onCompleted or onError.
*
* @return an empty Observable that only calls onCompleted or onError,
* based on which one is called by the source Observable
*/
def ignoreElements: Observable[Nothing] =
operators.misc.complete(this)
/**
* Ignores all items emitted by the source Observable and
* only calls onCompleted or onError.
*
* @return an empty Observable that only calls onCompleted or onError,
* based on which one is called by the source Observable
*/
def complete: Observable[Nothing] =
operators.misc.complete(this)
/**
* Returns an Observable that emits a single Throwable,
* in case an error was thrown by the source Observable,
* otherwise it isn't going to emit anything.
*/
def error: Observable[Throwable] =
operators.misc.error(this)
/**
* Emits the given exception instead of `onComplete`.
   * @param error the exception to emit instead of the `onComplete` event
   * @return a new Observable that ends in the given error instead of completing normally
*/
def endWithError(error: Throwable): Observable[T] =
operators.misc.endWithError(this)(error)
/**
* Creates a new Observable that emits the given element
* and then it also emits the events of the source (prepend operation).
*
* @example {{{
* val source = 1 +: Observable(2, 3, 4)
* source.dump("O").subscribe()
*
* // 0: O-->1
* // 1: O-->2
* // 2: O-->3
* // 3: O-->4
* // 4: O completed
* }}}
*/
def +:[U >: T](elem: U): Observable[U] =
Observable.unit(elem) ++ this
/**
* Creates a new Observable that emits the given elements
* and then it also emits the events of the source (prepend operation).
*/
def startWith[U >: T](elems: U*): Observable[U] =
Observable.fromIterable(elems) ++ this
/**
* Creates a new Observable that emits the events of the source
* and then it also emits the given element (appended to the stream).
*
* @example {{{
* val source = Observable(1, 2, 3) :+ 4
* source.dump("O").subscribe()
*
* // 0: O-->1
* // 1: O-->2
* // 2: O-->3
* // 3: O-->4
* // 4: O completed
* }}}
*/
def :+[U >: T](elem: U): Observable[U] =
this ++ Observable.unit(elem)
/**
* Creates a new Observable that emits the events of the source
* and then it also emits the given elements (appended to the stream).
*/
def endWith[U >: T](elems: U*): Observable[U] =
this ++ Observable.fromIterable(elems)
/**
* Concatenates the source Observable with the other Observable, as specified.
*
* Ordering of subscription is preserved, so the second observable
* starts only after the source observable is completed successfully with
* an `onComplete`. On the other hand, the second observable is never
* subscribed if the source completes with an error.
*
* @example {{{
* val concat = Observable(1,2,3) ++ Observable(4,5)
* concat.dump("O").subscribe()
*
* // 0: O-->1
* // 1: O-->2
* // 2: O-->3
* // 3: O-->4
* // 4: O-->5
* // 5: O completed
* }}}
*/
def ++[U >: T](other: => Observable[U]): Observable[U] =
Observable.concat(this, other)
/**
* Only emits the first element emitted by the source observable, after which it's completed immediately.
*/
def head: Observable[T] = take(1)
/**
* Drops the first element of the source observable, emitting the rest.
*/
def tail: Observable[T] = drop(1)
/**
* Only emits the last element emitted by the source observable, after which it's completed immediately.
*/
def last: Observable[T] =
takeRight(1)
/**
* Emits the first element emitted by the source, or otherwise if the source is completed without
* emitting anything, then the `default` is emitted.
*/
def headOrElse[B >: T](default: => B): Observable[B] =
head.foldLeft(Option.empty[B])((_, elem) => Some(elem)) map {
case Some(elem) => elem
case None => default
}
/**
* Emits the first element emitted by the source, or otherwise if the source is completed without
* emitting anything, then the `default` is emitted.
*
* Alias for `headOrElse`.
*/
def firstOrElse[U >: T](default: => U): Observable[U] =
headOrElse(default)
/**
* Creates a new Observable from this Observable and another given Observable,
   * by emitting elements combined in pairs. If one of the Observables emits fewer
* events than the other, then the rest of the unpaired events are ignored.
*/
def zip[U](other: Observable[U]): Observable[(T, U)] =
operators.zip.two(self, other)
/**
* Zips the emitted elements of the source with their indices.
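   *
   * For illustration (output is indicative):
   *
   * @example {{{
   *   Observable("a", "b", "c").zipWithIndex.dump("O").subscribe()
   *
   *   // 0: O-->(a,0)
   *   // 1: O-->(b,1)
   *   // 2: O-->(c,2)
   *   // 3: O completed
   * }}}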
*/
def zipWithIndex: Observable[(T, Long)] =
operators.zip.withIndex(self)
/**
* Creates a new Observable from this Observable and another given Observable.
*
* This operator behaves in a similar way to [[zip]], but while `zip` emits items
* only when all of the zipped source Observables have emitted a previously unzipped item,
* `combine` emits an item whenever any of the source Observables emits
* an item (so long as each of the source Observables has emitted at least one item).
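   *
   * A possible usage sketch, assuming an implicit
   * [[monifu.concurrent.Scheduler Scheduler]] is in scope:
   *
   * @example {{{
   *   import scala.concurrent.duration._
   *
   *   val fast = Observable.interval(1.second)
   *   val slow = Observable.interval(3.seconds)
   *
   *   // emits a new pair whenever either source emits, once both
   *   // sources have emitted at least one value
   *   fast.combineLatest(slow).dump("O").subscribe()
   * }}}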
*/
def combineLatest[U](other: Observable[U]): Observable[(T, U)] =
operators.combineLatest(self, other, delayErrors = false)
/**
* Creates a new Observable from this Observable and another given Observable.
*
* This operator behaves in a similar way to [[zip]], but while `zip` emits items
* only when all of the zipped source Observables have emitted a previously unzipped item,
* `combine` emits an item whenever any of the source Observables emits
* an item (so long as each of the source Observables has emitted at least one item).
*
* This version of [[Observable!.combineLatest combineLatest]]
   * reserves `onError` notifications until all of the combined Observables
   * complete and only then passes them along to the observers.
*
* @see [[Observable!.combineLatest]]
*/
def combineLatestDelayError[U](other: Observable[U]): Observable[(T, U)] =
operators.combineLatest(self, other, delayErrors = true)
/**
* Takes the elements of the source Observable and emits the maximum value,
* after the source has completed.
*/
def max[U >: T](implicit ev: Ordering[U]): Observable[U] =
operators.math.max(this : Observable[U])
/**
* Takes the elements of the source Observable and emits the element that has
* the maximum key value, where the key is generated by the given function `f`.
*/
def maxBy[U](f: T => U)(implicit ev: Ordering[U]): Observable[T] =
operators.math.maxBy(this)(f)(ev)
/**
* Takes the elements of the source Observable and emits the minimum value,
* after the source has completed.
*/
def min[U >: T](implicit ev: Ordering[U]): Observable[U] =
operators.math.min(this : Observable[U])
/**
* Takes the elements of the source Observable and emits the element that has
* the minimum key value, where the key is generated by the given function `f`.
*/
def minBy[U](f: T => U)(implicit ev: Ordering[U]): Observable[T] =
operators.math.minBy(this)(f)
/**
* Given a source that emits numeric values, the `sum` operator
* sums up all values and at onComplete it emits the total.
*/
def sum[U >: T](implicit ev: Numeric[U]): Observable[U] =
operators.math.sum(this : Observable[U])
/**
* Suppress the duplicate elements emitted by the source Observable.
*
* WARNING: this requires unbounded buffering.
*/
def distinct: Observable[T] =
operators.distinct.distinct(this)
/**
* Given a function that returns a key for each element emitted by
   * the source Observable, suppresses duplicate items.
*
* WARNING: this requires unbounded buffering.
*/
def distinct[U](fn: T => U): Observable[T] =
operators.distinct.distinctBy(this)(fn)
/**
* Suppress duplicate consecutive items emitted by the source Observable
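   *
   * For illustration (output is indicative):
   *
   * @example {{{
   *   Observable(1, 1, 2, 2, 1).distinctUntilChanged.dump("O").subscribe()
   *
   *   // 0: O-->1
   *   // 1: O-->2
   *   // 2: O-->1
   *   // 3: O completed
   * }}}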
*/
def distinctUntilChanged: Observable[T] =
operators.distinct.untilChanged(this)
/**
* Suppress duplicate consecutive items emitted by the source Observable
*/
def distinctUntilChanged[U](fn: T => U): Observable[T] =
operators.distinct.untilChangedBy(this)(fn)
/**
* Returns a new Observable that uses the specified
* `Scheduler` for initiating the subscription.
*/
def subscribeOn(s: Scheduler): Observable[T] = {
Observable.create(o => s.execute(onSubscribe(o)))
}
/**
* Converts the source Observable that emits `T` into an Observable
* that emits `Notification[T]`.
*
   * NOTE: `onComplete` is still emitted after an `onNext(OnComplete)` notification,
   * and similarly an `onError(ex)` notification is emitted as an `onNext(OnError(ex))`
   * followed by an `onComplete`.
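   *
   * A small usage sketch (the notification `toString` output is indicative):
   *
   * @example {{{
   *   Observable(1, 2).materialize.dump("O").subscribe()
   *
   *   // 0: O-->OnNext(1)
   *   // 1: O-->OnNext(2)
   *   // 2: O-->OnComplete
   *   // 3: O completed
   * }}}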
*/
def materialize: Observable[Notification[T]] =
operators.materialize(self)
/**
* Utility that can be used for debugging purposes.
*/
def dump(prefix: String, out: PrintStream = System.out): Observable[T] =
operators.debug.dump(self, prefix, out)
/**
* Repeats the items emitted by this Observable continuously. It caches the generated items until `onComplete`
* and repeats them ad infinitum. On error it terminates.
*/
def repeat: Observable[T] =
operators.repeat.elements(self)
/**
* Converts this observable into a multicast observable, useful for turning a cold observable into
* a hot one (i.e. whose source is shared by all observers).
*/
def multicast[U >: T, R](subject: Subject[U, R])(implicit s: Scheduler): ConnectableObservable[R] =
ConnectableObservable(this, subject)
/**
* $asyncBoundaryDescription
*
* @param overflowStrategy - $overflowStrategyParam
*/
def asyncBoundary(overflowStrategy: OverflowStrategy): Observable[T] =
Observable.create { subscriber =>
onSubscribe(BufferedSubscriber(subscriber, overflowStrategy))
}
/**
* $asyncBoundaryDescription
*
* @param overflowStrategy - $overflowStrategyParam
* @param onOverflow - $onOverflowParam
*/
def asyncBoundary[U >: T](overflowStrategy: OverflowStrategy.Evicted, onOverflow: Long => U): Observable[U] =
Observable.create { subscriber =>
onSubscribe(BufferedSubscriber(subscriber, overflowStrategy))
}
/**
* While the destination observer is busy, drop the incoming events.
*/
def whileBusyDropEvents: Observable[T] =
operators.whileBusy.dropEvents(self)
/**
* While the destination observer is busy, drop the incoming events.
* When the downstream recovers, we can signal a special event
* meant to inform the downstream observer how many events
   * were dropped.
*
* @param onOverflow - $onOverflowParam
*/
def whileBusyDropEvents[U >: T](onOverflow: Long => U): Observable[U] =
operators.whileBusy.dropEventsThenSignalOverflow(self, onOverflow)
/**
* While the destination observer is busy, buffers events, applying
* the given overflowStrategy.
*
* @param overflowStrategy - $overflowStrategyParam
*/
def whileBusyBuffer[U >: T](overflowStrategy: OverflowStrategy.Synchronous): Observable[U] =
asyncBoundary(overflowStrategy)
/**
* While the destination observer is busy, buffers events, applying
* the given overflowStrategy.
*
* @param overflowStrategy - $overflowStrategyParam
* @param onOverflow - $onOverflowParam
*/
def whileBusyBuffer[U >: T](overflowStrategy: OverflowStrategy.Evicted, onOverflow: Long => U): Observable[U] =
asyncBoundary(overflowStrategy, onOverflow)
/**
* Converts this observable into a multicast observable, useful for turning a cold observable into
* a hot one (i.e. whose source is shared by all observers). The underlying subject used is a
* [[monifu.reactive.subjects.PublishSubject PublishSubject]].
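   *
   * A possible usage sketch (assumes an implicit
   * [[monifu.concurrent.Scheduler Scheduler]] in scope):
   *
   * @example {{{
   *   import scala.concurrent.duration._
   *
   *   // nothing is emitted until `connect()` is called
   *   val hot = Observable.interval(1.second).publish
   *
   *   hot.dump("A").subscribe()
   *   hot.dump("B").subscribe()
   *
   *   // starts the single, shared subscription to the source
   *   hot.connect()
   * }}}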
*/
def publish(implicit s: Scheduler): ConnectableObservable[T] =
multicast(PublishSubject[T]())
/**
* Returns a new Observable that multi-casts (shares) the original Observable.
*/
def share(implicit s: Scheduler): Observable[T] =
publish.refCount
/**
* Caches the emissions from the source Observable and replays them
* in order to any subsequent Subscribers. This method has similar
* behavior to [[Observable!.replay(implicit* replay]] except that
* this auto-subscribes to the source Observable rather than
* returning a [[monifu.reactive.observables.ConnectableObservable ConnectableObservable]]
* for which you must call
* [[monifu.reactive.observables.ConnectableObservable.connect connect]]
* to activate the subscription.
*
* When you call cache, it does not yet subscribe to the source Observable
* and so does not yet begin caching items. This only happens when the
* first Subscriber calls the resulting Observable's `subscribe` method.
*
* Note: You sacrifice the ability to cancel the origin when you use
* the cache operator so be careful not to use this on Observables that emit an
* infinite or very large number of items that will use up memory.
*
* @return an Observable that, when first subscribed to, caches all of its
* items and notifications for the benefit of subsequent subscribers
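   *
   * A possible usage sketch (assumes an implicit
   * [[monifu.concurrent.Scheduler Scheduler]] in scope):
   *
   * @example {{{
   *   import scala.concurrent.duration._
   *
   *   val source = Observable.interval(1.second).take(3).cache
   *
   *   // the first subscriber triggers the subscription to the source
   *   source.dump("A").subscribe()
   *
   *   // later subscribers get the cached elements replayed first
   *   source.dump("B").subscribe()
   * }}}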
*/
def cache: Observable[T] =
CachedObservable.create(self)
/**
* Caches the emissions from the source Observable and replays them
* in order to any subsequent Subscribers. This method has similar
* behavior to [[Observable!.replay(implicit* replay]] except that this
* auto-subscribes to the source Observable rather than returning a
* [[monifu.reactive.observables.ConnectableObservable ConnectableObservable]]
* for which you must call
* [[monifu.reactive.observables.ConnectableObservable.connect connect]]
* to activate the subscription.
*
* When you call cache, it does not yet subscribe to the source Observable
* and so does not yet begin caching items. This only happens when the
* first Subscriber calls the resulting Observable's `subscribe` method.
*
* @param maxCapacity is the maximum buffer size after which old events
* start being dropped (according to what happens when using
* [[subjects.ReplaySubject.createWithSize ReplaySubject.createWithSize]])
*
* @return an Observable that, when first subscribed to, caches all of its
* items and notifications for the benefit of subsequent subscribers
*/
def cache(maxCapacity: Int): Observable[T] =
CachedObservable.create(self, maxCapacity)
/**
* Converts this observable into a multicast observable, useful for turning a cold observable into
* a hot one (i.e. whose source is shared by all observers). The underlying subject used is a
* [[monifu.reactive.subjects.BehaviorSubject BehaviorSubject]].
*/
def behavior[U >: T](initialValue: U)(implicit s: Scheduler): ConnectableObservable[U] =
multicast(BehaviorSubject[U](initialValue))
/**
* Converts this observable into a multicast observable, useful for turning a cold observable into
* a hot one (i.e. whose source is shared by all observers). The underlying subject used is a
* [[monifu.reactive.subjects.ReplaySubject ReplaySubject]].
*/
def replay(implicit s: Scheduler): ConnectableObservable[T] =
multicast(ReplaySubject[T]())
/**
* Converts this observable into a multicast observable, useful for turning a cold observable into
* a hot one (i.e. whose source is shared by all observers). The underlying subject used is a
* [[monifu.reactive.subjects.ReplaySubject ReplaySubject]].
*
* @param bufferSize is the size of the buffer limiting the number of items
* that can be replayed (on overflow the head starts being
* dropped)
*/
def replay(bufferSize: Int)(implicit s: Scheduler): ConnectableObservable[T] =
multicast(ReplaySubject.createWithSize[T](bufferSize))
/**
* Converts this observable into a multicast observable, useful for turning a cold observable into
* a hot one (i.e. whose source is shared by all observers). The underlying subject used is a
* [[monifu.reactive.subjects.AsyncSubject AsyncSubject]].
*/
def publishLast(implicit s: Scheduler): ConnectableObservable[T] =
multicast(AsyncSubject[T]())
/**
* Returns an Observable that mirrors the behavior of the source,
* unless the source is terminated with an `onError`, in which
* case the streaming of events continues with the specified
* backup sequence generated by the given partial function.
*
* The created Observable mirrors the behavior of the source
* in case the source does not end with an error or if the
* thrown `Throwable` is not matched.
*
* NOTE that compared with `onErrorResumeNext` from Rx.NET,
* the streaming is not resumed in case the source is
* terminated normally with an `onComplete`.
*
* @param pf - a partial function that matches errors with a
   *           backup observable that is subscribed when the source
* throws an error.
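   *
   * A small usage sketch, with indicative output:
   *
   * @example {{{
   *   val failing = Observable(1, 2) ++
   *     Observable.error(new RuntimeException("boom"))
   *
   *   failing.onErrorRecoverWith {
   *     case _: RuntimeException => Observable(3, 4)
   *   }.dump("O").subscribe()
   *
   *   // 0: O-->1
   *   // 1: O-->2
   *   // 2: O-->3
   *   // 3: O-->4
   *   // 4: O completed
   * }}}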
*/
def onErrorRecoverWith[U >: T](pf: PartialFunction[Throwable, Observable[U]]): Observable[U] =
operators.onError.recoverWith(self, pf)
/**
* Returns an Observable that mirrors the behavior of the source,
* unless the source is terminated with an `onError`, in which
* case the streaming of events continues with the specified
* backup sequence.
*
* The created Observable mirrors the behavior of the source
* in case the source does not end with an error.
*
* NOTE that compared with `onErrorResumeNext` from Rx.NET,
* the streaming is not resumed in case the source is
* terminated normally with an `onComplete`.
*
* @param that - a backup sequence that's being subscribed
* in case the source terminates with an error.
*/
def onErrorFallbackTo[U >: T](that: => Observable[U]): Observable[U] =
operators.onError.fallbackTo(self, that)
/**
* Returns an Observable that mirrors the behavior of the source,
* unless the source is terminated with an `onError`, in which case
* it tries subscribing to the source again in the hope that
* it will complete without an error.
*
* NOTE: The number of retries is unlimited, so something like
* `Observable.error(new RuntimeException).onErrorRetryUnlimited` will loop
* forever.
*/
def onErrorRetryUnlimited: Observable[T] =
operators.onError.retryUnlimited(self)
/**
* Returns an Observable that mirrors the behavior of the source,
* unless the source is terminated with an `onError`, in which case
* it tries subscribing to the source again in the hope that
* it will complete without an error.
*
* The number of retries is limited by the specified `maxRetries`
* parameter, so for an Observable that always ends in error the
* total number of subscriptions that will eventually happen is
* `maxRetries + 1`.
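   *
   * A possible usage sketch, with indicative output:
   *
   * @example {{{
   *   var attempts = 0
   *   val source = Observable.defer {
   *     attempts += 1
   *     if (attempts < 3) Observable.error(new RuntimeException("failed"))
   *     else Observable(attempts)
   *   }
   *
   *   // subscribes 3 times in total, then succeeds
   *   source.onErrorRetry(5).dump("O").subscribe()
   *
   *   // 0: O-->3
   *   // 1: O completed
   * }}}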
*/
def onErrorRetry(maxRetries: Long): Observable[T] =
operators.onError.retryCounted(self, maxRetries)
/**
* Returns an Observable that mirrors the behavior of the source,
* unless the source is terminated with an `onError`, in which case
* it tries subscribing to the source again in the hope that
* it will complete without an error.
*
* The given predicate establishes if the subscription should be
* retried or not.
*/
def onErrorRetryIf(p: Throwable => Boolean): Observable[T] =
operators.onError.retryIf(self, p)
/**
* Returns an Observable that mirrors the source Observable but
   * applies a timeout policy for each emitted item. If the next item
* isn't emitted within the specified timeout duration starting from
* its predecessor, the resulting Observable terminates and notifies
* observers of a TimeoutException.
*
* @param timeout maximum duration between emitted items before
* a timeout occurs
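   *
   * For illustration, a sketch that ends in a `TimeoutException`
   * (assumes an implicit [[monifu.concurrent.Scheduler Scheduler]] in scope):
   *
   * @example {{{
   *   import scala.concurrent.duration._
   *
   *   // the second element would come 10 seconds after the first,
   *   // so the stream errors after roughly 3 seconds
   *   Observable.intervalWithFixedDelay(10.seconds)
   *     .timeout(3.seconds)
   *     .subscribe()
   * }}}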
*/
def timeout(timeout: FiniteDuration): Observable[T] =
operators.timeout.emitError(self, timeout)
/**
* Returns an Observable that mirrors the source Observable but
   * applies a timeout policy for each emitted item. If the next item
* isn't emitted within the specified timeout duration starting from
* its predecessor, the resulting Observable begins instead to
* mirror a backup Observable.
*
* @param timeout maximum duration between emitted items before
* a timeout occurs
* @param backup is the backup observable to subscribe to
* in case of a timeout
*/
def timeout[U >: T](timeout: FiniteDuration, backup: Observable[U]): Observable[U] =
operators.timeout.switchToBackup(self, timeout, backup)
/**
* Given a function that transforms an `Observable[T]` into an `Observable[U]`,
* it transforms the source observable into an `Observable[U]`.
*/
def lift[U](f: Observable[T] => Observable[U]): Observable[U] =
f(self)
/**
* Returns the first generated result as a Future and then cancels
* the subscription.
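   *
   * A small usage sketch (assumes an implicit
   * [[monifu.concurrent.Scheduler Scheduler]] in scope):
   *
   * @example {{{
   *   import scala.concurrent.Future
   *
   *   // completes with Some(1); for an empty source it completes with None
   *   val first: Future[Option[Int]] = Observable(1, 2, 3).asFuture
   * }}}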
*/
def asFuture(implicit s: Scheduler): Future[Option[T]] = {
val promise = Promise[Option[T]]()
head.onSubscribe(new Observer[T] {
def onNext(elem: T) = {
promise.trySuccess(Some(elem))
Cancel
}
def onComplete() = {
promise.trySuccess(None)
}
def onError(ex: Throwable) = {
promise.tryFailure(ex)
}
})
promise.future
}
/**
   * Subscribes to the source `Observable` and for each element emitted by the source
   * it executes the given callback.
*/
def foreach(cb: T => Unit)(implicit s: Scheduler): Unit =
onSubscribe(new SynchronousObserver[T] {
def onNext(elem: T) =
try { cb(elem); Continue } catch {
case NonFatal(ex) =>
onError(ex)
Cancel
}
def onComplete() = ()
def onError(ex: Throwable) = {
s.reportFailure(ex)
}
})
}
object Observable {
/**
* Observable constructor for creating an [[Observable]] from the
* specified function.
*/
def create[T](f: Subscriber[T] => Unit): Observable[T] = {
new Observable[T] {
def onSubscribe(subscriber: Subscriber[T]): Unit =
try f(subscriber) catch {
case NonFatal(ex) =>
subscriber.onError(ex)
}
}
}
/**
* Creates an observable that doesn't emit anything, but immediately
* calls `onComplete` instead.
*/
def empty: Observable[Nothing] =
builders.unit.empty
/**
   * Creates an Observable that emits only the given ''elem''.
*/
def unit[A](elem: A): Observable[A] =
builders.unit.one(elem)
/**
* Creates an Observable that emits an error.
*/
def error(ex: Throwable): Observable[Nothing] =
builders.unit.error(ex)
/**
* Creates an Observable that doesn't emit anything and that never
* completes.
*/
def never: Observable[Nothing] =
builders.unit.never
/**
* Returns an Observable that calls an Observable factory to create
* an Observable for each new Observer that subscribes. That is, for
* each subscriber, the actual Observable that subscriber observes is
* determined by the factory function.
*
   * The defer operator allows you to defer or delay emitting items
* from an Observable until such time as an Observer subscribes
* to the Observable. This allows an Observer to easily obtain updates
* or a refreshed version of the sequence.
*
* @param factory is the Observable factory function to invoke for each
* Observer that subscribes to the resulting Observable
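   *
   * For illustration, a small sketch where each subscriber gets a
   * freshly evaluated value:
   *
   * @example {{{
   *   // the timestamp is evaluated anew for every subscriber
   *   val now = Observable.defer(Observable.unit(System.currentTimeMillis()))
   * }}}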
*/
def defer[T](factory: => Observable[T]): Observable[T] = {
create[T](s => factory.onSubscribe(s))
}
/**
* Creates an Observable that emits auto-incremented natural numbers
* (longs) spaced by a given time interval. Starts from 0 with no
* delay, after which it emits incremented numbers spaced by the
   * given `delay`. This `delay` acts as a fixed
* delay between successive events.
*
* @param delay the delay between 2 successive events
*/
def intervalWithFixedDelay(delay: FiniteDuration): Observable[Long] =
builders.interval.withFixedDelay(Duration.Zero, delay)
/**
* Creates an Observable that emits auto-incremented natural numbers
   * (longs) spaced by a given time interval. Starts from 0 after the
   * given `initialDelay`, after which it emits incremented numbers
   * spaced by the given `delay`, which acts as a fixed
* delay between successive events.
*
* @param initialDelay is the delay to wait before emitting the first event
* @param delay the time to wait between 2 successive events
*/
def intervalWithFixedDelay(initialDelay: FiniteDuration, delay: FiniteDuration): Observable[Long] =
builders.interval.withFixedDelay(initialDelay, delay)
/**
* Creates an Observable that emits auto-incremented natural numbers
* (longs) spaced by a given time interval. Starts from 0 with no
* delay, after which it emits incremented numbers spaced by the
   * given `delay`. This `delay` acts as a fixed
* delay between successive events.
*
* @param delay the delay between 2 successive events
*/
def interval(delay: FiniteDuration): Observable[Long] =
intervalWithFixedDelay(delay)
/**
* Creates an Observable that emits auto-incremented natural numbers
* (longs) at a fixed rate, as given by the specified `period`. The
* time it takes to process an `onNext` event gets subtracted from
* the specified `period` and thus the created observable tries to
* emit events spaced by the given time interval, regardless of how
* long the processing of `onNext` takes.
*
* @param period the period between 2 successive `onNext` events
*/
def intervalAtFixedRate(period: FiniteDuration): Observable[Long] =
builders.interval.atFixedRate(Duration.Zero, period)
/**
* Creates an Observable that emits auto-incremented natural numbers
* (longs) at a fixed rate, as given by the specified `period`. The
* time it takes to process an `onNext` event gets subtracted from
* the specified `period` and thus the created observable tries to
* emit events spaced by the given time interval, regardless of how
* long the processing of `onNext` takes.
*
* This version of the `intervalAtFixedRate` allows specifying an
* `initialDelay` before events start being emitted.
*
* @param initialDelay is the initial delay before emitting the first event
* @param period the period between 2 successive `onNext` events
*/
def intervalAtFixedRate(initialDelay: FiniteDuration, period: FiniteDuration): Observable[Long] =
builders.interval.atFixedRate(initialDelay, period)
/**
   * Creates an Observable that continuously emits the given ''items'' repeatedly.
*/
def repeat[T](elems: T*): Observable[T] =
builders.repeat(elems : _*)
/**
* Repeats the execution of the given `task`, emitting
* the results indefinitely.
*/
def repeatTask[T](task: => T): Observable[T] =
operators.repeat.task(task)
/**
* Creates an Observable that emits items in the given range.
*
* @param from the range start
* @param until the range end
* @param step increment step, either positive or negative
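   *
   * A small usage sketch, assuming the `until` bound is exclusive
   * (output is indicative):
   *
   * @example {{{
   *   Observable.range(0, 10, 2).dump("O").subscribe()
   *
   *   // 0: O-->0
   *   // 1: O-->2
   *   // 2: O-->4
   *   // 3: O-->6
   *   // 4: O-->8
   *   // 5: O completed
   * }}}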
*/
def range(from: Long, until: Long, step: Long = 1L): Observable[Long] =
builders.range(from, until, step)
/**
* Creates an Observable that emits the given elements.
*
* Usage sample: {{{
* val obs = Observable(1, 2, 3, 4)
*
* obs.dump("MyObservable").subscribe()
* //=> 0: MyObservable-->1
* //=> 1: MyObservable-->2
* //=> 2: MyObservable-->3
* //=> 3: MyObservable-->4
* //=> 4: MyObservable completed
* }}}
*/
def apply[T](elems: T*): Observable[T] = {
fromIterable(elems)
}
/**
* Given an initial state and a generator function that produces
* the next state and the next element in the sequence, creates
* an observable that keeps generating elements produced by our
* generator function.
*
* {{{
   *   import monifu.concurrent.Implicits.{globalScheduler => s}
   *   import monifu.util.Random
*
* def randomDoubles(): Observable[Double] =
* Observable.fromStateAction(Random.double)(s.currentTimeMillis())
* }}}
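   *
   * Another small sketch, using only the standard library, that
   * generates the Fibonacci sequence:
   *
   * {{{
   *   def fibonacci: Observable[Long] =
   *     Observable.fromStateAction[(Long, Long), Long] {
   *       case (a, b) => (a, (b, a + b))
   *     }((0L, 1L))
   * }}}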
*/
def fromStateAction[S,A](f: S => (A,S))(initialState: S): Observable[A] =
builders.from.stateAction(f)(initialState)
/**
* Converts a Future to an Observable.
*/
def fromFuture[T](future: Future[T]): Observable[T] =
builders.from.future(future)
/**
* Creates an Observable that emits the elements of the given ''iterable''.
*/
def fromIterable[T](iterable: Iterable[T]): Observable[T] =
builders.from.iterable(iterable)
/**
* Creates an Observable that emits the elements of the given `iterator`.
*/
def fromIterator[T](iterator: Iterator[T]): Observable[T] =
builders.from.iterator(iterator)
/**
* Creates an Observable that emits the given elements exactly.
*/
def from[T](elems: T*): Observable[T] =
builders.from.iterable(elems)
/**
* Given a `org.reactivestreams.Publisher`, converts it into a
* Monifu / Rx Observable.
*
* See the [[http://www.reactive-streams.org/ Reactive Streams]]
* protocol that Monifu implements.
*
   * @see [[Observable!.toReactive]] for the reverse operation of converting
   *      an Observable into a Reactive Streams publisher
*/
def fromReactivePublisher[T](publisher: RPublisher[T]): Observable[T] =
Observable.create[T] { sub =>
publisher.subscribe(sub.toReactive)
}
/**
* Given a lazy by-name argument, converts it into an Observable
* that emits a single element.
*/
def fromTask[T](task: => T): Observable[T] =
builders.from.task(task)
/**
* Given a runnable, converts it into an Observable that executes it,
   * then signals its completion by emitting a single `Unit`.
*/
def fromRunnable(r: Runnable): Observable[Unit] =
builders.from.runnable(r)
/**
* Given a `java.util.concurrent.Callable`, converts it into an
* Observable that executes it, then emits the result.
*/
def fromCallable[T](c: Callable[T]): Observable[T] =
builders.from.callable(c)
/**
   * Wraps the given Observable into a `org.reactivestreams.Publisher`.
* See the [[http://www.reactive-streams.org/ Reactive Streams]]
* protocol that Monifu implements.
*/
def toReactivePublisher[T](source: Observable[T])(implicit s: Scheduler): RPublisher[T] =
new RPublisher[T] {
def subscribe(subscriber: RSubscriber[_ >: T]): Unit = {
source.onSubscribe(SafeSubscriber(Observer.fromReactiveSubscriber(subscriber)))
}
}
/**
   * Wraps the given Observable into a `org.reactivestreams.Publisher`.
   * See the [[http://www.reactive-streams.org/ Reactive Streams]]
   * protocol that Monifu implements.
   *
   * @param requestSize is the request batch size, i.e. the number of
   *        items to request from the source at a time
*/
def toReactivePublisher[T](source: Observable[T], requestSize: Int)(implicit s: Scheduler): RPublisher[T] =
new RPublisher[T] {
def subscribe(subscriber: RSubscriber[_ >: T]): Unit = {
source.onSubscribe(SafeSubscriber(Observer.fromReactiveSubscriber(subscriber)))
}
}
/**
* Create an Observable that emits a single item after a given delay.
*/
def unitDelayed[T](delay: FiniteDuration, unit: T): Observable[T] =
builders.unit.oneDelayed(delay, unit)
/**
   * Creates an Observable that, after `initialDelay`, repeatedly emits the
   * given `unit` spaced by the given `period`, until the underlying
   * Observer cancels.
*/
def timerRepeated[T](initialDelay: FiniteDuration, period: FiniteDuration, unit: T): Observable[T] =
builders.timer.repeated(initialDelay, period, unit)
/**
* Concatenates the given list of ''observables'' into a single observable.
*/
def flatten[T](sources: Observable[T]*): Observable[T] =
Observable.fromIterable(sources).concat
/**
* Concatenates the given list of ''observables'' into a single observable.
* Delays errors until the end.
*/
def flattenDelayError[T](sources: Observable[T]*): Observable[T] =
Observable.fromIterable(sources).concatDelayError
/**
* Merges the given list of ''observables'' into a single observable.
*/
def merge[T](sources: Observable[T]*): Observable[T] =
Observable.fromIterable(sources).merge
/**
* Merges the given list of ''observables'' into a single observable.
* Delays errors until the end.
*/
def mergeDelayError[T](sources: Observable[T]*): Observable[T] =
Observable.fromIterable(sources).mergeDelayErrors
/**
* Concatenates the given list of ''observables'' into a single observable.
*/
def concat[T](sources: Observable[T]*): Observable[T] =
Observable.fromIterable(sources).concat
/**
* Concatenates the given list of ''observables'' into a single observable.
* Delays errors until the end.
*/
def concatDelayError[T](sources: Observable[T]*): Observable[T] =
Observable.fromIterable(sources).concatDelayError
/**
* Creates a new Observable from two observables, by emitting
   * elements combined in pairs. If one of the Observables emits fewer
* events than the other, then the rest of the unpaired events are
* ignored.
*/
def zip[T1, T2](obs1: Observable[T1], obs2: Observable[T2]): Observable[(T1,T2)] =
obs1.zip(obs2)
/**
* Creates a new Observable from three observables, by emitting
* elements combined in tuples of 3 elements. If one of the
   * Observables emits fewer events than the others, then the rest of
* the unpaired events are ignored.
*/
def zip[T1, T2, T3](obs1: Observable[T1], obs2: Observable[T2], obs3: Observable[T3]): Observable[(T1, T2, T3)] =
obs1.zip(obs2).zip(obs3).map { case ((t1, t2), t3) => (t1, t2, t3) }
/**
   * Creates a new Observable from four observables, by emitting
   * elements combined in tuples of 4 elements. If one of the
   * Observables emits fewer events than the others, then the rest of
* the unpaired events are ignored.
*/
def zip[T1, T2, T3, T4](obs1: Observable[T1], obs2: Observable[T2], obs3: Observable[T3], obs4: Observable[T4]): Observable[(T1, T2, T3, T4)] =
obs1.zip(obs2).zip(obs3).zip(obs4).map { case (((t1, t2), t3), t4) => (t1, t2, t3, t4) }
/**
   * Creates a new Observable from five observables, by emitting
   * elements combined in tuples of 5 elements. If one of the
   * Observables emits fewer events than the others, then the rest of
* the unpaired events are ignored.
*/
def zip[T1, T2, T3, T4, T5](
obs1: Observable[T1], obs2: Observable[T2], obs3: Observable[T3],
obs4: Observable[T4], obs5: Observable[T5]): Observable[(T1, T2, T3, T4, T5)] = {
obs1.zip(obs2).zip(obs3).zip(obs4).zip(obs5)
.map { case ((((t1, t2), t3), t4), t5) => (t1, t2, t3, t4, t5) }
}
/**
   * Creates a new Observable from six observables, by emitting
   * elements combined in tuples of 6 elements. If one of the
   * Observables emits fewer events than the others, then the rest of
* the unpaired events are ignored.
*/
def zip[T1, T2, T3, T4, T5, T6](
obs1: Observable[T1], obs2: Observable[T2], obs3: Observable[T3],
obs4: Observable[T4], obs5: Observable[T5], obs6: Observable[T6]): Observable[(T1, T2, T3, T4, T5, T6)] = {
obs1.zip(obs2).zip(obs3).zip(obs4).zip(obs5).zip(obs6)
.map { case (((((t1, t2), t3), t4), t5), t6) => (t1, t2, t3, t4, t5, t6) }
}
/**
   * Given a sequence of observables, it [[Observable!.zip zips]] them together,
   * returning a new observable that emits sequences of their elements.
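   *
   * A small usage sketch, with indicative output:
   *
   * @example {{{
   *   Observable.zipList(Observable(1, 2), Observable(3, 4), Observable(5, 6))
   *     .dump("O").subscribe()
   *
   *   // 0: O-->Vector(1, 3, 5)
   *   // 1: O-->Vector(2, 4, 6)
   *   // 2: O completed
   * }}}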
*/
def zipList[T](sources: Observable[T]*): Observable[Seq[T]] = {
if (sources.isEmpty) Observable.empty else {
val seed = sources.head.map(t => Vector(t))
sources.tail.foldLeft(seed) { (acc, obs) =>
acc.zip(obs).map { case (seq, elem) => seq :+ elem }
}
}
}
/**
* Creates a combined observable from 2 source observables.
*
* This operator behaves in a similar way to [[Observable!.zip]],
* but while `zip` emits items only when all of the zipped source
* Observables have emitted a previously unzipped item, `combine`
* emits an item whenever any of the source Observables emits an
* item (so long as each of the source Observables has emitted at
* least one item).
*/
def combineLatest[T1, T2](first: Observable[T1], second: Observable[T2]): Observable[(T1,T2)] = {
first.combineLatest(second)
}
/**
* Creates a combined observable from 3 source observables.
*
* This operator behaves in a similar way to [[Observable!.zip]],
* but while `zip` emits items only when all of the zipped source
* Observables have emitted a previously unzipped item, `combine`
* emits an item whenever any of the source Observables emits an
* item (so long as each of the source Observables has emitted at
* least one item).
*/
def combineLatest[T1, T2, T3]
(first: Observable[T1], second: Observable[T2], third: Observable[T3]): Observable[(T1,T2,T3)] = {
first.combineLatest(second).combineLatest(third)
.map { case ((t1, t2), t3) => (t1, t2, t3) }
}
/**
* Creates a combined observable from 4 source observables.
*
* This operator behaves in a similar way to [[Observable!.zip]],
* but while `zip` emits items only when all of the zipped source
* Observables have emitted a previously unzipped item, `combine`
* emits an item whenever any of the source Observables emits an
* item (so long as each of the source Observables has emitted at
* least one item).
*/
def combineLatest[T1, T2, T3, T4]
(first: Observable[T1], second: Observable[T2],
third: Observable[T3], fourth: Observable[T4]): Observable[(T1, T2, T3, T4)] = {
first.combineLatest(second).combineLatest(third).combineLatest(fourth)
.map { case (((t1, t2), t3), t4) => (t1, t2, t3, t4) }
}
/**
* Creates a combined observable from 5 source observables.
*
* This operator behaves in a similar way to [[Observable!.zip]],
* but while `zip` emits items only when all of the zipped source
* Observables have emitted a previously unzipped item, `combine`
* emits an item whenever any of the source Observables emits an
* item (so long as each of the source Observables has emitted at
* least one item).
*/
def combineLatest[T1, T2, T3, T4, T5](
obs1: Observable[T1], obs2: Observable[T2], obs3: Observable[T3],
obs4: Observable[T4], obs5: Observable[T5]): Observable[(T1, T2, T3, T4, T5)] = {
obs1.combineLatest(obs2).combineLatest(obs3)
.combineLatest(obs4).combineLatest(obs5)
.map { case ((((t1, t2), t3), t4), t5) => (t1, t2, t3, t4, t5) }
}
/**
* Creates a combined observable from 6 source observables.
*
* This operator behaves in a similar way to [[Observable!.zip]],
* but while `zip` emits items only when all of the zipped source
* Observables have emitted a previously unzipped item, `combine`
* emits an item whenever any of the source Observables emits an
* item (so long as each of the source Observables has emitted at
* least one item).
*/
def combineLatest[T1, T2, T3, T4, T5, T6](
obs1: Observable[T1], obs2: Observable[T2], obs3: Observable[T3],
obs4: Observable[T4], obs5: Observable[T5], obs6: Observable[T6]): Observable[(T1, T2, T3, T4, T5, T6)] = {
obs1.combineLatest(obs2).combineLatest(obs3)
.combineLatest(obs4).combineLatest(obs5).combineLatest(obs6)
.map { case (((((t1, t2), t3), t4), t5), t6) => (t1, t2, t3, t4, t5, t6) }
}
/**
   * Given a sequence of observables, it combines them by means of
   * [[Observable!.combineLatest combineLatest]], returning a new observable
   * that emits sequences containing the latest element from each source.
*/
def combineLatestList[T](sources: Observable[T]*): Observable[Seq[T]] = {
if (sources.isEmpty) Observable.empty else {
val seed = sources.head.map(t => Vector(t))
sources.tail.foldLeft(seed) { (acc, obs) =>
acc.combineLatest(obs).map { case (seq, elem) => seq :+ elem }
}
}
}
/**
* Given a list of source Observables, emits all of the items from
   * the first of these Observables to emit an item and cancels the
* rest.
*/
def amb[T](source: Observable[T]*): Observable[T] =
builders.amb(source : _*)
/**
* Implicit conversion from Future to Observable.
*/
implicit def FutureIsObservable[T](future: Future[T]): Observable[T] =
Observable.fromFuture(future)
/**
* Implicit conversion from Observable to Publisher.
*/
implicit def ObservableIsReactive[T](source: Observable[T])(implicit s: Scheduler): RPublisher[T] =
source.toReactive
}
|
sergius/monifu
|
monifu/shared/src/main/scala/monifu/reactive/Observable.scala
|
Scala
|
apache-2.0
| 103,583
|
import sbt.File
import java.io.ByteArrayOutputStream
import java.io.PrintStream
import org.scalastyle._
import com.typesafe.config.ConfigFactory
object StyleChecker {
val maxResult = 100
class CustomTextOutput[T <: FileSpec](stream: PrintStream) extends Output[T] {
// Use the parent class loader because sbt runs our code in a class loader that does not
// contain the reference.conf file
private val messageHelper = new MessageHelper(ConfigFactory.load(getClass.getClassLoader.getParent))
var fileCount: Int = _
override def message(m: Message[T]): Unit = m match {
case StartWork() =>
case EndWork() =>
case StartFile(file) =>
stream.print("Checking file " + file + "...")
fileCount = 0
case EndFile(file) =>
if (fileCount == 0) stream.println(" OK!")
case StyleError(file, clazz, key, level, args, line, column, customMessage) =>
report(line, column, messageHelper.text(level.name),
Output.findMessage(messageHelper, key, args, customMessage))
case StyleException(file, clazz, message, stacktrace, line, column) =>
report(line, column, "error", message)
}
private def report(line: Option[Int], column: Option[Int], level: String, message: String) {
if (fileCount == 0) stream.println("")
fileCount += 1
stream.println(" " + fileCount + ". " + level + pos(line, column) + ":")
stream.println(" " + message)
}
private def pos(line: Option[Int], column: Option[Int]): String = line match {
case Some(lineNumber) => " at line " + lineNumber + (column match {
case Some(columnNumber) => " character " + columnNumber
case None => ""
})
case None => ""
}
}
def score(outputResult: OutputResult) = {
val penalties = outputResult.errors + outputResult.warnings
scala.math.max(maxResult - penalties, 0)
}
def assess(sources: Seq[File], styleSheetPath: String): (String, Int) = {
val configFile = new File(styleSheetPath).getAbsolutePath
val messages = new ScalastyleChecker().checkFiles(
ScalastyleConfiguration.readFromXml(configFile),
Directory.getFiles(None, sources))
val output = new ByteArrayOutputStream()
val outputResult = new CustomTextOutput(new PrintStream(output)).output(messages)
val msg = s"""${output.toString}
|Processed ${outputResult.files} file(s)
|Found ${outputResult.errors} errors
|Found ${outputResult.warnings} warnings
|""".stripMargin
(msg, score(outputResult))
}
}
|
giovannidoni/Scala-course-1
|
week6/project/StyleChecker.scala
|
Scala
|
gpl-3.0
| 2,616
|
import sbt._
class ShakesEMProject(info: ProjectInfo) extends DefaultProject(info)
{
override def compileOptions = Unchecked :: super.compileOptions.toList
}
|
jpate/ShakesEM
|
project/build/Project.scala
|
Scala
|
gpl-3.0
| 164
|
package org.jetbrains.plugins.scala
package lang.refactoring.extractTrait
import com.intellij.refactoring.actions.ExtractSuperActionBase
import com.intellij.lang.refactoring.RefactoringSupportProvider
import org.jetbrains.plugins.scala.lang.refactoring.ScalaRefactoringSupportProvider
/**
* Nikolay.Tropin
* 2014-05-20
*/
class ScalaExtractTraitAction extends ExtractSuperActionBase {
override def getRefactoringHandler(provider: RefactoringSupportProvider) = provider match {
case _: ScalaRefactoringSupportProvider => new ScalaExtractTraitHandler
case _ => null
}
}
|
consulo/consulo-scala
|
src/org/jetbrains/plugins/scala/lang/refactoring/extractTrait/ScalaExtractTraitAction.scala
|
Scala
|
apache-2.0
| 586
|
package price
import enumspckg.AddOnType.AddOnType
import enumspckg.ChargeType.ChargeType
import enumspckg.MediaType.MediaType
import enumspckg.OfferStatus.OfferStatus
import enumspckg.PriceStatus.PriceStatus
import org.joda.time.DateTime
import settings.OfferSetting
import utils.{UserMetaData, DateMetaData}
/**
* Created by harsh on 10/8/14.
*/
case class OtherCharge(amount:Double,charge_type:ChargeType)
case class BaseAddOn(id:Long,add_on_type:AddOnType,title:String,dates:DateMetaData,users:UserMetaData,precision:Int)
case class Offer(offer_id:Long,title:String,media:List[MediaType],
short_description:String,adspace:String,disclaimer:String,detail:String,startdate:DateTime,
enddate:DateTime,effective_startdate:DateTime,effective_enddate:DateTime,status:OfferStatus,
dates:DateMetaData,users:UserMetaData,offersettings:OfferSetting)
case class Price(id:Long,product_group_id:Option[Long],product_id:Long,cost:Double,unit:Double,retail:Double,
threshold:Double,lower_limit:Double,upper_limit:Double,dates:DateMetaData,users:UserMetaData,
status:PriceStatus)
|
hardmettle/slick-postgress-samples
|
tmp/price/GlobalPrice.scala
|
Scala
|
apache-2.0
| 1,161
|
/*
* Copyright 2015 LG CNS.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package scouter.server.db.status;
import java.io.IOException;
import java.util.Hashtable;
import scouter.server.db.io.RealDataFile;
import scouter.util.FileUtil;
import scouter.util.IClose;
object StatusWriter {
val table = new Hashtable[String, StatusWriter]();
def open(file: String): StatusWriter = {
table.synchronized {
var reader = table.get(file);
if (reader != null) {
reader.refrence += 1;
} else {
reader = new StatusWriter(file);
table.put(file, reader);
}
return reader;
}
}
}
class StatusWriter(file: String) extends IClose {
var refrence = 0;
val out = new RealDataFile(file + ".pshot");
def write(bytes: Array[Byte]): Long = {
this.synchronized {
val point = out.getOffset();
out.writeShort(bytes.length.toShort);
out.write(bytes);
out.flush();
return point;
}
}
override def close() {
StatusWriter.table.synchronized {
if (this.refrence == 0) {
StatusWriter.table.remove(this.file)
FileUtil.close(out);
} else {
this.refrence -= 1
}
}
}
}
|
jahnaviancha/scouter
|
scouter.server/src/scouter/server/db/status/StatusWriter.scala
|
Scala
|
apache-2.0
| 1,954
|
package gsd.linux.stats
import java.io.PrintStream
import gsd.linux.cnf.{SATBuilder, DimacsReader}
import gsd.linux.{Hierarchy, HierarchyAnalysis, KConfigParser}
object HierarchyMain {
def main(args: Array[String]) {
if (args.size < 2) {
System.err.println("Parameters: <exconfig file> <dimacs file> <output file>")
System exit 1
}
val out: PrintStream =
if (args.size > 2) new PrintStream(args(2))
else System.out
println("Reading extract...")
val k = KConfigParser.parseKConfigFile(args(0))
println("Reading dimacs...")
val header = DimacsReader.readHeaderFile(args(1))
val problem = DimacsReader.readFile(args(1))
val idMap = header.idMap
println("Initializing SAT solver...")
val sat = new SATBuilder(problem.cnf, problem.numVars, header.generated)
println("Finding hierarchy violating configs...")
val violating = HierarchyAnalysis.findViolatingConfigs(k, sat, idMap)
val parentMap = Hierarchy.mkParentMap(k)
violating foreach { c =>
out.println(c.name + "," + parentMap(c).name)
}
}
}
|
scas-mdd/linux-variability-analysis-tools.fm-translation
|
src/main/scala/gsd/linux/stats/HierarchyMain.scala
|
Scala
|
gpl-3.0
| 1,101
|
package scaffvis.client.components
import scaffvis.client.components.common.{GlyphIcon, ReusableCmps}
import scaffvis.client.store.model.Model
import scaffvis.shared.model.Scaffold
import diode.data.{Failed, Pending, Ready}
import japgolly.scalajs.react._
import japgolly.scalajs.react.vdom.prefix_<^._
object Footer {
case class Props(model: Model, currentScaffold: Scaffold)
class Backend($: BackendScope[Props, Unit]) {
def render(props: Props) = {
import props._
<.div(^.className := "footer",
<.div(^.className := "footer-text",
model.molecules match {
case Ready(molecules) => {
val datasetSize = molecules.molecules.size
val datasetSelectionSize = if(molecules.selected.isEmpty) None else Some(molecules.selected.size)
val subtreeSize = molecules.scaffoldMolecules(currentScaffold.id).size
val subtreeSelection = ReusableCmps.selectedMoleculesInSubtree(molecules, currentScaffold)
val subtreeSelectionSize = if(subtreeSelection.isEmpty) None else Some(subtreeSelection.size)
s"Dataset: $subtreeSize molecules in current subtree" +
subtreeSelectionSize.map(n => s" ($n selected)").getOrElse("") +
s", $datasetSize molecules total" +
datasetSelectionSize.map(n => s" ($n selected)").getOrElse("")
}
case Pending(_) => Seq[ReactNode]("Loading dataset ", GlyphIcon.refresh)
case Failed(e) => Seq[ReactNode](
<.span(^.color := "red", GlyphIcon.exclamationSign),
s" Loading dataset failed. You might be trying to use an unsupported file format. Error: ${e.getMessage}"
)
case _ => "No dataset loaded."
}
),
<.div(^.className := "footer-text", ^.float := "right",
<.a(^.href := "https://github.com/velkoborsky/scaffvis/issues", "Report a problem")
)
)
}
}
val component = ReactComponentB[Props]("Footer")
.renderBackend[Backend]
.build
def apply(props: Props) = component(props)
}
|
velkoborsky/scaffvis
|
client/src/main/scala/scaffvis/client/components/Footer.scala
|
Scala
|
gpl-3.0
| 2,132
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.sql.execution.exchange
import java.util.Random
import java.util.function.Supplier
import scala.concurrent.Future
import org.apache.spark._
import org.apache.spark.internal.config
import org.apache.spark.rdd.RDD
import org.apache.spark.serializer.Serializer
import org.apache.spark.shuffle.{ShuffleWriteMetricsReporter, ShuffleWriteProcessor}
import org.apache.spark.shuffle.sort.SortShuffleManager
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.errors._
import org.apache.spark.sql.catalyst.expressions.{Attribute, BoundReference, UnsafeProjection, UnsafeRow}
import org.apache.spark.sql.catalyst.expressions.codegen.LazilyGeneratedOrdering
import org.apache.spark.sql.catalyst.plans.logical.Statistics
import org.apache.spark.sql.catalyst.plans.physical._
import org.apache.spark.sql.execution._
import org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics, SQLShuffleReadMetricsReporter, SQLShuffleWriteMetricsReporter}
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.StructType
import org.apache.spark.util.MutablePair
import org.apache.spark.util.collection.unsafe.sort.{PrefixComparators, RecordComparator}
/**
* Common trait for all shuffle exchange implementations to facilitate pattern matching.
*/
trait ShuffleExchangeLike extends Exchange {
/**
* Returns the number of mappers of this shuffle.
*/
def numMappers: Int
/**
* Returns the shuffle partition number.
*/
def numPartitions: Int
/**
* Returns whether the shuffle partition number can be changed.
*/
def canChangeNumPartitions: Boolean
/**
* The asynchronous job that materializes the shuffle.
*/
def mapOutputStatisticsFuture: Future[MapOutputStatistics]
/**
* Returns the shuffle RDD with specified partition specs.
*/
def getShuffleRDD(partitionSpecs: Array[ShufflePartitionSpec]): RDD[_]
/**
* Returns the runtime statistics after shuffle materialization.
*/
def runtimeStatistics: Statistics
}
/**
* Performs a shuffle that will result in the desired partitioning.
*/
case class ShuffleExchangeExec(
override val outputPartitioning: Partitioning,
child: SparkPlan,
canChangeNumPartitions: Boolean = true) extends ShuffleExchangeLike {
private lazy val writeMetrics =
SQLShuffleWriteMetricsReporter.createShuffleWriteMetrics(sparkContext)
private[sql] lazy val readMetrics =
SQLShuffleReadMetricsReporter.createShuffleReadMetrics(sparkContext)
override lazy val metrics = Map(
"dataSize" -> SQLMetrics.createSizeMetric(sparkContext, "data size")
) ++ readMetrics ++ writeMetrics
override def nodeName: String = "Exchange"
private val serializer: Serializer =
new UnsafeRowSerializer(child.output.size, longMetric("dataSize"))
@transient lazy val inputRDD: RDD[InternalRow] = child.execute()
  // 'mapOutputStatisticsFuture' is only needed when AQE is enabled.
@transient override lazy val mapOutputStatisticsFuture: Future[MapOutputStatistics] = {
if (inputRDD.getNumPartitions == 0) {
Future.successful(null)
} else {
sparkContext.submitMapStage(shuffleDependency)
}
}
override def numMappers: Int = shuffleDependency.rdd.getNumPartitions
override def numPartitions: Int = shuffleDependency.partitioner.numPartitions
override def getShuffleRDD(partitionSpecs: Array[ShufflePartitionSpec]): RDD[InternalRow] = {
new ShuffledRowRDD(shuffleDependency, readMetrics, partitionSpecs)
}
override def runtimeStatistics: Statistics = {
val dataSize = metrics("dataSize").value
val rowCount = metrics(SQLShuffleWriteMetricsReporter.SHUFFLE_RECORDS_WRITTEN).value
Statistics(dataSize, Some(rowCount))
}
/**
* A [[ShuffleDependency]] that will partition rows of its child based on
* the partitioning scheme defined in `newPartitioning`. Those partitions of
* the returned ShuffleDependency will be the input of shuffle.
*/
@transient
lazy val shuffleDependency : ShuffleDependency[Int, InternalRow, InternalRow] = {
ShuffleExchangeExec.prepareShuffleDependency(
inputRDD,
child.output,
outputPartitioning,
serializer,
writeMetrics)
}
/**
* Caches the created ShuffleRowRDD so we can reuse that.
*/
private var cachedShuffleRDD: ShuffledRowRDD = null
protected override def doExecute(): RDD[InternalRow] = attachTree(this, "execute") {
// Returns the same ShuffleRowRDD if this plan is used by multiple plans.
if (cachedShuffleRDD == null) {
cachedShuffleRDD = new ShuffledRowRDD(shuffleDependency, readMetrics)
}
cachedShuffleRDD
}
}
object ShuffleExchangeExec {
/**
* Determines whether records must be defensively copied before being sent to the shuffle.
* Several of Spark's shuffle components will buffer deserialized Java objects in memory. The
* shuffle code assumes that objects are immutable and hence does not perform its own defensive
* copying. In Spark SQL, however, operators' iterators return the same mutable `Row` object. In
* order to properly shuffle the output of these operators, we need to perform our own copying
* prior to sending records to the shuffle. This copying is expensive, so we try to avoid it
* whenever possible. This method encapsulates the logic for choosing when to copy.
*
* In the long run, we might want to push this logic into core's shuffle APIs so that we don't
* have to rely on knowledge of core internals here in SQL.
*
* See SPARK-2967, SPARK-4479, and SPARK-7375 for more discussion of this issue.
*
* @param partitioner the partitioner for the shuffle
* @return true if rows should be copied before being shuffled, false otherwise
*/
private def needToCopyObjectsBeforeShuffle(partitioner: Partitioner): Boolean = {
// Note: even though we only use the partitioner's `numPartitions` field, we require it to be
// passed instead of directly passing the number of partitions in order to guard against
// corner-cases where a partitioner constructed with `numPartitions` partitions may output
// fewer partitions (like RangePartitioner, for example).
val conf = SparkEnv.get.conf
val shuffleManager = SparkEnv.get.shuffleManager
val sortBasedShuffleOn = shuffleManager.isInstanceOf[SortShuffleManager]
val bypassMergeThreshold = conf.get(config.SHUFFLE_SORT_BYPASS_MERGE_THRESHOLD)
val numParts = partitioner.numPartitions
if (sortBasedShuffleOn) {
if (numParts <= bypassMergeThreshold) {
// If we're using the original SortShuffleManager and the number of output partitions is
// sufficiently small, then Spark will fall back to the hash-based shuffle write path, which
// doesn't buffer deserialized records.
// Note that we'll have to remove this case if we fix SPARK-6026 and remove this bypass.
false
} else if (numParts <= SortShuffleManager.MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE) {
// SPARK-4550 and SPARK-7081 extended sort-based shuffle to serialize individual records
// prior to sorting them. This optimization is only applied in cases where shuffle
// dependency does not specify an aggregator or ordering and the record serializer has
// certain properties and the number of partitions doesn't exceed the limitation. If this
// optimization is enabled, we can safely avoid the copy.
//
// Exchange never configures its ShuffledRDDs with aggregators or key orderings, and the
// serializer in Spark SQL always satisfy the properties, so we only need to check whether
// the number of partitions exceeds the limitation.
false
} else {
// Spark's SortShuffleManager uses `ExternalSorter` to buffer records in memory, so we must
// copy.
true
}
} else {
// Catch-all case to safely handle any future ShuffleManager implementations.
true
}
}
/**
* Returns a [[ShuffleDependency]] that will partition rows of its child based on
* the partitioning scheme defined in `newPartitioning`. Those partitions of
* the returned ShuffleDependency will be the input of shuffle.
*/
def prepareShuffleDependency(
rdd: RDD[InternalRow],
outputAttributes: Seq[Attribute],
newPartitioning: Partitioning,
serializer: Serializer,
writeMetrics: Map[String, SQLMetric])
: ShuffleDependency[Int, InternalRow, InternalRow] = {
val part: Partitioner = newPartitioning match {
case RoundRobinPartitioning(numPartitions) => new HashPartitioner(numPartitions)
case HashPartitioning(_, n) =>
new Partitioner {
override def numPartitions: Int = n
// For HashPartitioning, the partitioning key is already a valid partition ID, as we use
// `HashPartitioning.partitionIdExpression` to produce partitioning key.
override def getPartition(key: Any): Int = key.asInstanceOf[Int]
}
case RangePartitioning(sortingExpressions, numPartitions) =>
        // Extract only fields used for sorting to avoid collecting large fields that do not
// affect sorting result when deciding partition bounds in RangePartitioner
val rddForSampling = rdd.mapPartitionsInternal { iter =>
val projection =
UnsafeProjection.create(sortingExpressions.map(_.child), outputAttributes)
val mutablePair = new MutablePair[InternalRow, Null]()
// Internally, RangePartitioner runs a job on the RDD that samples keys to compute
// partition bounds. To get accurate samples, we need to copy the mutable keys.
iter.map(row => mutablePair.update(projection(row).copy(), null))
}
// Construct ordering on extracted sort key.
val orderingAttributes = sortingExpressions.zipWithIndex.map { case (ord, i) =>
ord.copy(child = BoundReference(i, ord.dataType, ord.nullable))
}
implicit val ordering = new LazilyGeneratedOrdering(orderingAttributes)
new RangePartitioner(
numPartitions,
rddForSampling,
ascending = true,
samplePointsPerPartitionHint = SQLConf.get.rangeExchangeSampleSizePerPartition)
case SinglePartition =>
new Partitioner {
override def numPartitions: Int = 1
override def getPartition(key: Any): Int = 0
}
case _ => sys.error(s"Exchange not implemented for $newPartitioning")
// TODO: Handle BroadcastPartitioning.
}
def getPartitionKeyExtractor(): InternalRow => Any = newPartitioning match {
case RoundRobinPartitioning(numPartitions) =>
// Distributes elements evenly across output partitions, starting from a random partition.
var position = new Random(TaskContext.get().partitionId()).nextInt(numPartitions)
(row: InternalRow) => {
// The HashPartitioner will handle the `mod` by the number of partitions
position += 1
position
}
case h: HashPartitioning =>
val projection = UnsafeProjection.create(h.partitionIdExpression :: Nil, outputAttributes)
row => projection(row).getInt(0)
case RangePartitioning(sortingExpressions, _) =>
val projection = UnsafeProjection.create(sortingExpressions.map(_.child), outputAttributes)
row => projection(row)
case SinglePartition => identity
case _ => sys.error(s"Exchange not implemented for $newPartitioning")
}
val isRoundRobin = newPartitioning.isInstanceOf[RoundRobinPartitioning] &&
newPartitioning.numPartitions > 1
val rddWithPartitionIds: RDD[Product2[Int, InternalRow]] = {
// [SPARK-23207] Have to make sure the generated RoundRobinPartitioning is deterministic,
// otherwise a retry task may output different rows and thus lead to data loss.
//
      // Currently we follow the most straightforward approach: perform a local sort before
      // partitioning.
      //
      // Note that we don't perform a local sort if the new partitioning has only 1 partition,
      // since in that case all output rows go to the same partition anyway.
val newRdd = if (isRoundRobin && SQLConf.get.sortBeforeRepartition) {
rdd.mapPartitionsInternal { iter =>
val recordComparatorSupplier = new Supplier[RecordComparator] {
override def get: RecordComparator = new RecordBinaryComparator()
}
// The comparator for comparing row hashcode, which should always be Integer.
val prefixComparator = PrefixComparators.LONG
// The prefix computer generates row hashcode as the prefix, so we may decrease the
// probability that the prefixes are equal when input rows choose column values from a
// limited range.
val prefixComputer = new UnsafeExternalRowSorter.PrefixComputer {
private val result = new UnsafeExternalRowSorter.PrefixComputer.Prefix
override def computePrefix(row: InternalRow):
UnsafeExternalRowSorter.PrefixComputer.Prefix = {
// The hashcode generated from the binary form of a [[UnsafeRow]] should not be null.
result.isNull = false
result.value = row.hashCode()
result
}
}
val pageSize = SparkEnv.get.memoryManager.pageSizeBytes
val sorter = UnsafeExternalRowSorter.createWithRecordComparator(
StructType.fromAttributes(outputAttributes),
recordComparatorSupplier,
prefixComparator,
prefixComputer,
pageSize,
// We are comparing binary here, which does not support radix sort.
// See more details in SPARK-28699.
false)
sorter.sort(iter.asInstanceOf[Iterator[UnsafeRow]])
}
} else {
rdd
}
      // The round-robin function is order sensitive if we don't sort the input.
val isOrderSensitive = isRoundRobin && !SQLConf.get.sortBeforeRepartition
if (needToCopyObjectsBeforeShuffle(part)) {
newRdd.mapPartitionsWithIndexInternal((_, iter) => {
val getPartitionKey = getPartitionKeyExtractor()
iter.map { row => (part.getPartition(getPartitionKey(row)), row.copy()) }
}, isOrderSensitive = isOrderSensitive)
} else {
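        // No defensive copy is needed here (see needToCopyObjectsBeforeShuffle above), so a single
        // MutablePair is reused per partition to avoid allocating a tuple for every row.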
newRdd.mapPartitionsWithIndexInternal((_, iter) => {
val getPartitionKey = getPartitionKeyExtractor()
val mutablePair = new MutablePair[Int, InternalRow]()
iter.map { row => mutablePair.update(part.getPartition(getPartitionKey(row)), row) }
}, isOrderSensitive = isOrderSensitive)
}
}
    // Now, we manually create a ShuffleDependency. Because the pairs in rddWithPartitionIds
    // are of the form (partitionId, row) and every partitionId is already in the expected range
    // [0, part.numPartitions - 1], the partitioner of this dependency is a PartitionIdPassthrough.
val dependency =
new ShuffleDependency[Int, InternalRow, InternalRow](
rddWithPartitionIds,
new PartitionIdPassthrough(part.numPartitions),
serializer,
shuffleWriterProcessor = createShuffleWriteProcessor(writeMetrics))
dependency
}
/**
   * Creates a customized [[ShuffleWriteProcessor]] for SQL which wraps the default metrics reporter
   * with a [[SQLShuffleWriteMetricsReporter]] as the new reporter for the [[ShuffleWriteProcessor]].
*/
def createShuffleWriteProcessor(metrics: Map[String, SQLMetric]): ShuffleWriteProcessor = {
new ShuffleWriteProcessor {
override protected def createMetricsReporter(
context: TaskContext): ShuffleWriteMetricsReporter = {
new SQLShuffleWriteMetricsReporter(context.taskMetrics().shuffleWriteMetrics, metrics)
}
}
}
}
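// A minimal sketch (an illustrative assumption, not the actual Spark class) of a pass-through
// partitioner like the `PartitionIdPassthrough` referenced above: the shuffle keys produced by
// prepareShuffleDependency are already valid partition ids in [0, numPartitions), so getPartition
// simply returns the key itself. The real class may differ in name, location and visibility.
private class ExamplePartitionIdPassthrough(override val numPartitions: Int) extends Partitioner {
  override def getPartition(key: Any): Int = key.asInstanceOf[Int]
}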
|
dbtsai/spark
|
sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/ShuffleExchangeExec.scala
|
Scala
|
apache-2.0
| 16,706
|
/*
* Copyright 2012 The SIRIS Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* The SIRIS Project is a cooperation between Beuth University, Berlin and the
* HCI Group at the University of Würzburg. The project is funded by the German
* Federal Ministry of Education and Research (grant no. 17N4409).
*/
package simx.core.svaractor.synclayer
import simx.core.svaractor.TimedRingBuffer.ContentType
import simx.core.svaractor.{StateParticle, SVarActor}
import simx.core.entity.Entity
import collection.mutable
/**
* This class implements a three level cache for the sync groups.
*
* The first level saves the current active world state. A new world state is collected in the second level cache.
 * If a complete world step is saved in the second level cache, the next world step is collected in the third level cache.
 * If the third and second level caches both contain a complete world step, the new values in the third level cache are
 * "lifted" to the second level cache and a new world step is collected in the third level cache.
*
* Every time a new world state is available the cache calls a handler function.
*
 * The update method lifts the current second level cache to the first level cache. The observe functions of the sVars
 * are called at this step. The values in the current third level cache are lifted to the second level cache and the
 * third level cache is cleared.
*
* @author Stephan Rehfeld
*
* @param syncGroup The [[simx.core.svaractor.synclayer.SyncGroup]] of this cache.
* @param onWorldStepComplete The function that is called, if a new world step is available.
* @param actorContext The actor context (filled out by the compiler/runtime)
*/
private[synclayer] class Cache( val syncGroup : SyncGroup, onWorldStepComplete : (SyncGroup) => Unit )( implicit actorContext : SVarActor ) {
require( syncGroup != null, "The parameter 'syncGroup' must not be 'null'!" )
require( onWorldStepComplete != null, "The parameter 'onWorldStepComplete' must not be 'null'!" )
require( actorContext != null, "The parameter 'actorContext' must not be 'null'!" )
/**
* The first level cache.
*/
private val firstLevel = mutable.WeakHashMap[StateParticle[_],Option[ContentType[_]]]()
/**
* The second level cache.
*/
private val secondLevel = mutable.WeakHashMap[StateParticle[_],Option[ContentType[_]]]()
/**
* The third level cache.
*/
private val thirdLevel = mutable.WeakHashMap[StateParticle[_],Option[ContentType[_]]]()
/**
* The observe functions of the state variables.
*/
private val updateFunctions = mutable.WeakHashMap[StateParticle[_],(Any => Unit)]()
/**
* A flag, if the second level cache contains a complete world step.
*/
private var secondLevelCacheIsSealed = false
/**
* This method adds an entity to the cache. Memory is allocated for the variables that are synced by the
* [[simx.core.svaractor.synclayer.SyncGroup]]. The current values are read.
*
* @param e The entity that should be added to the cache.
*/
def add( e : Entity ) {
require( e != null, "The parameter 'e' must not be 'null'!" )
for( sVarDescription <- syncGroup.sVarDescriptions ) {
// val sVar = e.get(sVarDescription).head
// firstLevel = firstLevel + ( sVar -> None )
// secondLevel = secondLevel + (sVar -> None )
// thirdLevel = thirdLevel + (sVar -> None )
//
// sVar.get( (x) => {
// firstLevel = firstLevel + ( sVar -> Some(x) )
// secondLevel = secondLevel + (sVar -> Some(x) )
// })
}
}
/**
   * This function is called to signal the cache that a world step is completed. Depending on the current state, the
   * second level cache is either sealed or updated with the values of the third level cache. New values are then
   * saved in the third level cache.
*/
def worldStepComplete() {
if( secondLevelCacheIsSealed ) {
for( (sVar,data) <- secondLevel ) {
if( thirdLevel( sVar ).isDefined ) {
secondLevel.update(sVar, thirdLevel( sVar ) )
}
thirdLevel.update(sVar, None)
}
} else {
secondLevelCacheIsSealed = true
}
this.onWorldStepComplete( syncGroup )
}
/**
   * This method updates the cache. All values of the second level cache are written into the first level cache.
   * Registered observe functions are called. New values are saved in the second level cache and the third level cache
   * is cleared.
*/
def update() {
if( secondLevelCacheIsSealed ) {
val handlerToExecute = mutable.Map[StateParticle[_],(Any=>Unit)]()
for( (sVar,data) <- secondLevel ) {
if( data.isDefined ) {
firstLevel.update(sVar, data)
if( updateFunctions contains sVar ) handlerToExecute.update(sVar, updateFunctions(sVar) )
}
if( thirdLevel( sVar ).isDefined ) {
secondLevel.update(sVar, thirdLevel( sVar ) )
}
thirdLevel.update(sVar, None)
}
for( (sVar,function) <- handlerToExecute ) function( firstLevel( sVar ).get )
secondLevelCacheIsSealed = false
}
}
/**
   * This method returns whether the cache holds values for the given state variable.
*
* @param sVar The state variable.
* @return Returns 'true', if the cache caches data for this state variable.
*/
def doesCacheSVar( sVar : StateParticle[_] ) = firstLevel.contains( sVar )
/**
   * Returns whether the cache has any data for this sVar.
*
* @param sVar The state variable.
* @return Returns 'true', if the cache holds any data for the state variable.
*/
def hasDataFor( sVar : StateParticle[_] ) = doesCacheSVar( sVar ) && firstLevel( sVar ).isDefined
/**
* This method returns the value of the state variable that is saved in the first level cache.
*
* @param sVar The state variable.
* @tparam T The state type of the encapsulated data.
* @return The value of the state variable read from the cache.
*/
def getDataFor[T]( sVar : StateParticle[T] ) : ContentType[T] = firstLevel( sVar ).get.asInstanceOf[ContentType[T]]
/**
* This method sets the observe function of a state variable. This function is called on an update of the cache.
*
* @param sVar The state variable.
* @param f The update function.
*/
def addUpdateFunction[T]( sVar : StateParticle[T], f : (Any => Unit) ) {
require( sVar != null, "The parameter 'sVar' must not be 'null'!" )
require( f != null, "The parameter 'f' must not be 'null'!" )
updateFunctions += (sVar -> f)
}
/**
   * This method removes an observe function from a state variable.
*
* @param sVar The state variable.
*/
def removeUpdateFunction( sVar : StateParticle[_] ) {
require( sVar != null, "The parameter 'sVar' must not be 'null'!" )
updateFunctions.remove(sVar)
}
/**
* This method updates the cached data of a state variable.
*
* @param sVar The state variable.
   * @param v The new value to be cached.
* @return Nothing
*/
def updateData[T]( sVar : StateParticle[T] )( v : ContentType[Any] ) {
if( !secondLevelCacheIsSealed ) {
secondLevel.update(sVar, Some( v ) )
} else {
thirdLevel.update( sVar, Some( v ) )
}
}
/**
   * This method returns whether a new world step is available.
*
* @return Returns 'true' if a new world step is available.
*/
def isWorldNextWorldStepAvailable = this.secondLevelCacheIsSealed
// CURRENTLY NOT NEEDED, COMMENTED OUT TO PREVENT DEAD CODE
/*def remove( e : Entity ) {
for( sVarDescription <- syncGroup.sVarDescriptions ) {
val sVar = e.get(sVarDescription).get
firstLevel = firstLevel - sVar
secondLevel = secondLevel - sVar
thirdLevel = thirdLevel - sVar
sVar.ignore()
}
} */
}
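/**
 * A minimal, self-contained sketch of the three-level "lift" mechanism described above, using
 * plain keys and values. It is only an illustration and is not used by the cache itself: new
 * values land in the third level once the second level is sealed, `worldStepComplete()` seals or
 * refreshes the second level, and `update()` promotes the sealed second level into the first
 * (active) level.
 */
private[synclayer] class ThreeLevelLiftSketch[K, V] {
  private val firstLevel = mutable.Map[K, Option[V]]()
  private val secondLevel = mutable.Map[K, Option[V]]()
  private val thirdLevel = mutable.Map[K, Option[V]]()
  private var secondLevelSealed = false
  /** Writes a new value into the second level, or into the third level if the second is sealed. */
  def write(key: K, value: V): Unit = {
    if (!secondLevelSealed) secondLevel.update(key, Some(value))
    else thirdLevel.update(key, Some(value))
  }
  /** Seals the second level, or refreshes it from the third level if it is already sealed. */
  def worldStepComplete(): Unit = {
    if (secondLevelSealed) {
      for (key <- secondLevel.keys.toList) {
        val lifted = thirdLevel.getOrElse(key, None)
        if (lifted.isDefined) secondLevel.update(key, lifted)
      }
      thirdLevel.clear()
    } else secondLevelSealed = true
  }
  /** Promotes the sealed second level into the first level and unseals the second level again. */
  def update(): Unit = {
    if (secondLevelSealed) {
      for (key <- secondLevel.keys.toList) {
        val pending = secondLevel(key)
        if (pending.isDefined) firstLevel.update(key, pending)
        val lifted = thirdLevel.getOrElse(key, None)
        if (lifted.isDefined) secondLevel.update(key, lifted)
      }
      thirdLevel.clear()
      secondLevelSealed = false
    }
  }
  /** Returns the currently active value for the given key, if any. */
  def activeValue(key: K): Option[V] = firstLevel.getOrElse(key, None)
}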
|
simulator-x/core
|
src/simx/core/svaractor/synclayer/Cache.scala
|
Scala
|
apache-2.0
| 8,271
|
package iot.pood.integration.actors
import akka.actor.Actor.Receive
import com.typesafe.config.{Config, ConfigFactory}
import akka.actor.{Actor, ActorLogging, ActorRef, Props}
import iot.pood.base.actors.BaseActor
import iot.pood.base.integration.IntegrationConfig.IntegrationConfig
/**
* Created by rafik on 31.7.2017.
*/
object IntegrationGuardian {
val NAME = "integration"
sealed trait IntegrationMessage{
def messageId: Long
}
object RegisterMessages {
//register listener
case class RegisterDataListener(messageId: Long,actorRef: ActorRef) extends IntegrationMessage
case class RegisterCommandListener(messageId: Long, actorRef: ActorRef) extends IntegrationMessage
case class ListenerRegistered(messageId: Long) extends IntegrationMessage
//get producer
case class ProducerRequest(messageId: Long,actorRef: ActorRef) extends IntegrationMessage
case class ProducerSend(messageId: Long, actorRef: ActorRef) extends IntegrationMessage
}
def props(integrationConfig: IntegrationConfig): Props = Props(new KafkaGuardianActor(integrationConfig))
}
class KafkaGuardianActor(integrationConfig: IntegrationConfig) extends BaseActor {
import IntegrationGuardian.RegisterMessages._
import Consumer.SubscribeMessage._
val dataConsumer = context.actorOf(Consumer.propsData(integrationConfig),Consumer.DATA)
val commandConsumer = context.actorOf(Consumer.propsCommand(integrationConfig),Consumer.COMMAND)
val producer = context.actorOf(Producer.props(integrationConfig),Producer.NAME)
override def receive: Receive = {
case m: RegisterDataListener => {
log.info("Register actor: {} for data listener",m.actorRef)
dataConsumer ! SubscribeListener(m.messageId,m.actorRef)
sender() ! ListenerRegistered(m.messageId)
}
case m: RegisterCommandListener => {
      log.info("Register actor: {} for command listener",m.actorRef)
commandConsumer ! SubscribeListener(m.messageId,m.actorRef)
sender() ! ListenerRegistered(m.messageId)
}
case m: ProducerRequest =>{
log.info("Request for producer: {}",sender)
m.actorRef ! ProducerSend(m.messageId,producer)
}
}
}
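/**
 * A small usage sketch (an illustration, not referenced by the guardian above): how a caller
 * might start the guardian and register a listener for data messages. The `system`, `config`
 * and `listener` values are assumed to be provided by the caller; the message id is arbitrary.
 */
object IntegrationGuardianUsageSketch {
  import akka.actor.ActorSystem
  import IntegrationGuardian.RegisterMessages.RegisterDataListener
  def wire(system: ActorSystem, config: IntegrationConfig, listener: ActorRef): ActorRef = {
    // Create the guardian actor and subscribe the listener to incoming data messages.
    val guardian = system.actorOf(IntegrationGuardian.props(config), IntegrationGuardian.NAME)
    guardian ! RegisterDataListener(messageId = 1L, actorRef = listener)
    guardian
  }
}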
|
rafajpet/iot-pood
|
iot-pood-integration/src/main/scala/iot/pood/integration/actors/KafkaGuardian.scala
|
Scala
|
mit
| 2,193
|
package com.owtelse.models
import java.io.File
import scalaz.StreamT
/**
* Created by IntelliJ IDEA.
* User: robertk
*/
trait SplatConfig {
val propertyDirs:List[File]
val templatesDirs:List[File]
}
|
karlroberts/splat2
|
src/main/scala/com/owtelse/models/SplatConfig.scala
|
Scala
|
bsd-3-clause
| 208
|
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.flink.table.planner.plan.rules.physical.batch
import org.apache.flink.table.api.{TableConfig, TableException}
import org.apache.flink.table.data.binary.BinaryRowData
import org.apache.flink.table.functions.{AggregateFunction, UserDefinedFunction}
import org.apache.flink.table.planner.JArrayList
import org.apache.flink.table.planner.calcite.FlinkTypeFactory
import org.apache.flink.table.planner.functions.aggfunctions.DeclarativeAggregateFunction
import org.apache.flink.table.planner.functions.utils.UserDefinedFunctionUtils._
import org.apache.flink.table.planner.plan.nodes.physical.batch.{BatchPhysicalGroupAggregateBase, BatchPhysicalLocalHashAggregate, BatchPhysicalLocalSortAggregate}
import org.apache.flink.table.planner.plan.utils.{AggregateUtil, FlinkRelOptUtil}
import org.apache.flink.table.planner.utils.AggregatePhaseStrategy
import org.apache.flink.table.planner.utils.TableConfigUtils.getAggPhaseStrategy
import org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType
import org.apache.flink.table.types.DataType
import org.apache.flink.table.types.logical.LogicalType
import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
import org.apache.calcite.rel.`type`.RelDataType
import org.apache.calcite.rel.core.{Aggregate, AggregateCall}
import org.apache.calcite.rel.{RelCollation, RelCollations, RelFieldCollation, RelNode}
import org.apache.calcite.util.Util
import scala.collection.JavaConversions._
trait BatchPhysicalAggRuleBase {
protected def inferLocalAggType(
inputRowType: RelDataType,
agg: Aggregate,
groupSet: Array[Int],
auxGroupSet: Array[Int],
aggFunctions: Array[UserDefinedFunction],
aggBufferTypes: Array[Array[LogicalType]]): RelDataType = {
val typeFactory = agg.getCluster.getTypeFactory.asInstanceOf[FlinkTypeFactory]
val aggCallNames = Util.skip(
agg.getRowType.getFieldNames, groupSet.length + auxGroupSet.length).toList.toArray[String]
inferLocalAggType(
inputRowType, typeFactory, aggCallNames, groupSet, auxGroupSet, aggFunctions, aggBufferTypes)
}
protected def inferLocalAggType(
inputRowType: RelDataType,
typeFactory: FlinkTypeFactory,
aggCallNames: Array[String],
groupSet: Array[Int],
auxGroupSet: Array[Int],
aggFunctions: Array[UserDefinedFunction],
aggBufferTypes: Array[Array[LogicalType]]): RelDataType = {
val aggBufferFieldNames = new Array[Array[String]](aggFunctions.length)
var index = -1
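    // Each declarative aggregate contributes one buffer field per buffer attribute; a running
    // index is appended to the attribute name so that buffer fields of different calls stay unique.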
aggFunctions.zipWithIndex.foreach {
case (udf, aggIndex) =>
aggBufferFieldNames(aggIndex) = udf match {
case _: AggregateFunction[_, _] =>
Array(aggCallNames(aggIndex))
case agf: DeclarativeAggregateFunction =>
agf.aggBufferAttributes.map { attr =>
index += 1
s"${attr.getName}$$$index"
}
case _: UserDefinedFunction =>
throw new TableException(s"Don't get localAgg merge name")
}
}
// local agg output order: groupSet + auxGroupSet + aggCalls
val aggBufferSqlTypes = aggBufferTypes.flatten.map { t =>
val nullable = !FlinkTypeFactory.isTimeIndicatorType(t)
typeFactory.createFieldTypeFromLogicalType(t)
}
val localAggFieldTypes = (
groupSet.map(inputRowType.getFieldList.get(_).getType) ++ // groupSet
auxGroupSet.map(inputRowType.getFieldList.get(_).getType) ++ // auxGroupSet
aggBufferSqlTypes // aggCalls
).toList
val localAggFieldNames = (
groupSet.map(inputRowType.getFieldList.get(_).getName) ++ // groupSet
auxGroupSet.map(inputRowType.getFieldList.get(_).getName) ++ // auxGroupSet
aggBufferFieldNames.flatten.toArray[String] // aggCalls
).toList
typeFactory.createStructType(localAggFieldTypes, localAggFieldNames)
}
protected def isTwoPhaseAggWorkable(
aggFunctions: Array[UserDefinedFunction],
tableConfig: TableConfig): Boolean = {
getAggPhaseStrategy(tableConfig) match {
case AggregatePhaseStrategy.ONE_PHASE => false
case _ => doAllSupportMerge(aggFunctions)
}
}
protected def isOnePhaseAggWorkable(
agg: Aggregate,
aggFunctions: Array[UserDefinedFunction],
tableConfig: TableConfig): Boolean = {
getAggPhaseStrategy(tableConfig) match {
case AggregatePhaseStrategy.ONE_PHASE => true
case AggregatePhaseStrategy.TWO_PHASE => !doAllSupportMerge(aggFunctions)
case AggregatePhaseStrategy.AUTO =>
if (!doAllSupportMerge(aggFunctions)) {
true
} else {
          // If the NDV of the group key in the aggregate is unknown and all aggFunctions are
          // splittable, use two-phase agg.
          // Otherwise, the choice between one-phase agg and two-phase agg is left to the CBO.
val mq = agg.getCluster.getMetadataQuery
mq.getDistinctRowCount(agg.getInput, agg.getGroupSet, null) != null
}
}
}
protected def doAllSupportMerge(aggFunctions: Array[UserDefinedFunction]): Boolean = {
val supportLocalAgg = aggFunctions.forall {
case _: DeclarativeAggregateFunction => true
case a => ifMethodExistInFunction("merge", a)
}
    // it means grouping without aggregate functions
aggFunctions.isEmpty || supportLocalAgg
}
protected def isEnforceOnePhaseAgg(tableConfig: TableConfig): Boolean = {
getAggPhaseStrategy(tableConfig) == AggregatePhaseStrategy.ONE_PHASE
}
protected def isEnforceTwoPhaseAgg(tableConfig: TableConfig): Boolean = {
getAggPhaseStrategy(tableConfig) == AggregatePhaseStrategy.TWO_PHASE
}
protected def isAggBufferFixedLength(agg: Aggregate): Boolean = {
val (_, aggCallsWithoutAuxGroupCalls) = AggregateUtil.checkAndSplitAggCalls(agg)
val (_, aggBufferTypes, _) = AggregateUtil.transformToBatchAggregateFunctions(
FlinkTypeFactory.toLogicalRowType(agg.getInput.getRowType), aggCallsWithoutAuxGroupCalls)
isAggBufferFixedLength(aggBufferTypes.map(_.map(fromDataTypeToLogicalType)))
}
protected def isAggBufferFixedLength(aggBufferTypes: Array[Array[LogicalType]]): Boolean = {
val aggBuffAttributesTypes = aggBufferTypes.flatten
val isAggBufferFixedLength = aggBuffAttributesTypes.forall(
t => BinaryRowData.isMutable(t))
// it means grouping without aggregate functions
aggBuffAttributesTypes.isEmpty || isAggBufferFixedLength
}
protected def createRelCollation(groupSet: Array[Int]): RelCollation = {
val fields = new JArrayList[RelFieldCollation]()
for (field <- groupSet) {
fields.add(FlinkRelOptUtil.ofRelFieldCollation(field))
}
RelCollations.of(fields)
}
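  // In the local aggregate output, the grouping columns come first, followed by the auxiliary
  // grouping columns (see inferLocalAggType above), so the global aggregate addresses them
  // positionally. For example, localAggGroupSet = [0, 3] and localAggAuxGroupSet = [7] map to
  // globalGroupSet = [0, 1] and globalAuxGroupSet = [2].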
protected def getGlobalAggGroupSetPair(
localAggGroupSet: Array[Int], localAggAuxGroupSet: Array[Int]): (Array[Int], Array[Int]) = {
val globalGroupSet = localAggGroupSet.indices.toArray
val globalAuxGroupSet = (localAggGroupSet.length until localAggGroupSet.length +
localAggAuxGroupSet.length).toArray
(globalGroupSet, globalAuxGroupSet)
}
protected def createLocalAgg(
cluster: RelOptCluster,
traitSet: RelTraitSet,
input: RelNode,
originalAggRowType: RelDataType,
grouping: Array[Int],
auxGrouping: Array[Int],
aggBufferTypes: Array[Array[DataType]],
aggCallToAggFunction: Seq[(AggregateCall, UserDefinedFunction)],
isLocalHashAgg: Boolean): BatchPhysicalGroupAggregateBase = {
val inputRowType = input.getRowType
val aggFunctions = aggCallToAggFunction.map(_._2).toArray
val typeFactory = input.getCluster.getTypeFactory.asInstanceOf[FlinkTypeFactory]
val aggCallNames = Util.skip(
originalAggRowType.getFieldNames, grouping.length + auxGrouping.length).toList.toArray
val localAggRowType = inferLocalAggType(
inputRowType,
typeFactory,
aggCallNames,
grouping,
auxGrouping,
aggFunctions,
aggBufferTypes.map(_.map(fromDataTypeToLogicalType)))
if (isLocalHashAgg) {
new BatchPhysicalLocalHashAggregate(
cluster,
traitSet,
input,
localAggRowType,
inputRowType,
grouping,
auxGrouping,
aggCallToAggFunction)
} else {
new BatchPhysicalLocalSortAggregate(
cluster,
traitSet,
input,
localAggRowType,
inputRowType,
grouping,
auxGrouping,
aggCallToAggFunction)
}
}
}
|
apache/flink
|
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalAggRuleBase.scala
|
Scala
|
apache-2.0
| 9,278
|
package js.util
import java.io.File
import java.io.FileNotFoundException
abstract class JsApp extends App {
def processFile(file: File)
def init(): Unit = ()
case class Config(debug: Boolean = false,
eval: Boolean = true,
files: List[File] = Nil)
val usage = """JakartaScript interpreter 1.0
Usage: run [options] [<file>...]
-d | --debug
Print debug messages
-ne | --noeval
Only check types but do not evaluate
-h | --help
prints this usage text
<file>...
JakartaScript files to be interpreted
"""
val config = ((Some(Config()): Option[Config]) /: args) {
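    // Fold over the command-line arguments, threading an Option[Config]: flags update the config,
    // `-h`/`--help` collapses it to None (which then stays None), and any other argument is added
    // as an input file. If the final result is None, the usage text is printed and the program exits.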
case (Some(c), "-d") => Some(c.copy(debug = true))
case (Some(c), "--debug") => Some(c.copy(debug = true))
case (Some(c), "-ne") => Some(c.copy(eval = false))
case (Some(c), "--noeval") => Some(c.copy(eval = false))
case (Some(c), "-h") => None
case (Some(c), "--help") => None
case (Some(c), f) => Some(c.copy(files = c.files :+ new File(f)))
case (None, _) => None
} getOrElse {
println(usage)
System.exit(1)
Config()
}
var debug: Boolean = config.debug
var eval: Boolean = config.eval
var maxSteps: Option[Int] = None
var optFile: Option[File] = None
def handle[T](default: => T)(e: => T): T =
try e catch {
case ex: JsException =>
val fileName = optFile map (_.getName()) getOrElse "[eval]"
println(s"$fileName:$ex")
default
case ex: FileNotFoundException =>
optFile match {
case Some(f) =>
println("Error: cannot find module '" + f.getCanonicalPath + "'")
default
case None =>
ex.printStackTrace(System.out)
default
}
case ex: Throwable =>
ex.printStackTrace(System.out)
default
}
def fail(): Nothing = scala.sys.exit(1)
init()
for (f: File <- config.files) {
optFile = Some(f)
handle(fail())(processFile(f))
}
}
|
mpgarate/ProgLang-Assignments
|
HW6/src/main/scala/js/util/JsApp.scala
|
Scala
|
mit
| 2,049
|
package com.twitter.finagle
import org.jboss.netty.buffer.{ChannelBuffer => CB}
private object ThriftMuxUtil {
val role = Stack.Role("ProtocolRecorder")
def bufferToArray(buf: CB): Array[Byte] =
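    // Fast path: when the buffer's backing array exactly covers the readable bytes it can be
    // returned directly without copying; otherwise the readable bytes are copied into a new array.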
if (buf.hasArray && buf.arrayOffset == 0
&& buf.readableBytes == buf.array().length) {
buf.array()
} else {
val arr = new Array[Byte](buf.readableBytes)
buf.readBytes(arr)
arr
}
def classForName(name: String) =
try Class.forName(name) catch {
case cause: ClassNotFoundException =>
throw new IllegalArgumentException("Iface is not a valid thrift iface", cause)
}
val protocolRecorder: Stackable[ServiceFactory[mux.Request, mux.Response]] =
new Stack.Module1[param.Stats, ServiceFactory[mux.Request, mux.Response]] {
val role = ThriftMuxUtil.role
val description = "Record ThriftMux protocol usage"
def make(_stats: param.Stats, next: ServiceFactory[mux.Request, mux.Response]) = {
val param.Stats(stats) = _stats
stats.scope("protocol").provideGauge("thriftmux")(1)
next
}
}
}
|
lysu/finagle
|
finagle-thriftmux/src/main/scala/com/twitter/finagle/ThriftMuxUtil.scala
|
Scala
|
apache-2.0
| 1,105
|
package org.graphexecutor
import bench.BenchControl
import org.scalatest.FunSuite
import org.scalatest.matchers.ShouldMatchers
class BenchMarkTests extends FunSuite with ShouldMatchers {
test("adding and removing nodes from the singleton controller") {
BenchControl.clear()
BenchControl.listBenchers should be ('empty)
BenchControl.size should be (0)
BenchControl.numreports should be (0)
val nd1 = NodeRunner("nd1", new NoopWork() ,true)
BenchControl.listBenchers should include ("nd1")
BenchControl.size should be (1)
BenchControl.numreports should be (0)
BenchControl.clear
~nd1
BenchControl.listBenchers should be ('empty)
BenchControl.size should be (0)
BenchControl.numreports should be (0)
val n1 = NodeRunner("n1", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("n1")
BenchControl.size should be (1)
BenchControl.numreports should be (0)
val n2 = NodeRunner("n2", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("n2")
BenchControl.size should be (2)
BenchControl.numreports should be (0)
val n3 = NodeRunner("n3", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("n3")
BenchControl.size should be (3)
BenchControl.numreports should be (0)
val n4 = NodeRunner("n4", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("n4")
BenchControl.size should be (4)
BenchControl.numreports should be (0)
n1 -> n2 -> n3 -> n4
n1.solveAsync()
n4.blockUntilSolved(1)
val report1 = BenchControl.reportData()
BenchControl.numreports should be (4)
println(report1)
report1 should ( include ("n1") and include ("n2") and include ("n3") and include ("n4") )
n1.solveAsync()
n4.blockUntilSolved(2)
BenchControl.clear()
BenchControl.listBenchers should be ('empty)
BenchControl.size should be (0)
BenchControl.numreports should be (0)
~n1; ~n2; ~n3; ~n4
val x1 = NodeRunner("x1", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("x1")
BenchControl.size should be (1)
BenchControl.numreports should be (0)
val x2 = NodeRunner("x2", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("x2")
BenchControl.size should be (2)
BenchControl.numreports should be (0)
val x3 = NodeRunner("x3", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("x3")
BenchControl.size should be (3)
BenchControl.numreports should be (0)
val x4 = NodeRunner("x4", SSsystem(1000, 0.1), true )
BenchControl.listBenchers should include ("x4")
BenchControl.size should be (4)
BenchControl.numreports should be (0)
x1 -> x2 -> x3 -> x4
x1.solveAsync()
x4.blockUntilSolved(1)
val report2 = BenchControl.reportData()
BenchControl.numreports should be (4)
println(report2)
report2 should ( include ("x1") and include ("x2") and include ("x3") and include ("x4") )
~x1; ~x2; ~x3; ~x4
}
}
|
johanprinsloo/GraphExecutor
|
src/test/scala/org/graphexecutor/BenchMarkTests.scala
|
Scala
|
apache-2.0
| 3,083
|
package com.twitter.finagle.loadbalancer
import com.twitter.finagle._
import com.twitter.finagle.client.{StackClient, StringClient}
import com.twitter.finagle.param.Stats
import com.twitter.finagle.server.StringServer
import com.twitter.finagle.stats.{InMemoryHostStatsReceiver, InMemoryStatsReceiver}
import com.twitter.finagle.util.Rng
import com.twitter.util.{Await, Future, Var}
import java.net.{InetAddress, InetSocketAddress, SocketAddress}
import org.junit.runner.RunWith
import org.scalatest.concurrent.{Eventually, IntegrationPatience}
import org.scalatest.FunSuite
import org.scalatest.junit.JUnitRunner
@RunWith(classOf[JUnitRunner])
class LoadBalancerFactoryTest extends FunSuite
with StringClient
with StringServer
with Eventually
with IntegrationPatience {
val echoService = Service.mk[String, String](Future.value(_))
trait PerHostFlagCtx extends App {
val label = "myclient"
val client = stringClient.configured(param.Label(label))
val port = "localhost:8080"
val perHostStatKey = Seq(label, port, "available")
}
test("reports per-host stats when flag is true") {
new PerHostFlagCtx {
val sr = new InMemoryHostStatsReceiver
val sr1 = new InMemoryStatsReceiver
perHostStats.let(true) {
client.configured(LoadBalancerFactory.HostStats(sr))
.newService(port)
eventually {
assert(sr.self.gauges(perHostStatKey).apply == 1.0)
}
client.configured(LoadBalancerFactory.HostStats(sr1))
.newService(port)
eventually {
assert(sr1.gauges(perHostStatKey).apply == 1.0)
}
}
}
}
test("does not report per-host stats when flag is false") {
new PerHostFlagCtx {
val sr = new InMemoryHostStatsReceiver
val sr1 = new InMemoryStatsReceiver
perHostStats.let(false) {
client.configured(LoadBalancerFactory.HostStats(sr))
.newService(port)
assert(sr.self.gauges.contains(perHostStatKey) == false)
client.configured(LoadBalancerFactory.HostStats(sr1))
.newService(port)
assert(sr1.gauges.contains(perHostStatKey) == false)
}
}
}
test("make service factory stack") {
val addr1 = new InetSocketAddress(InetAddress.getLoopbackAddress, 0)
val server1 = stringServer.serve(addr1, echoService)
val addr2 = new InetSocketAddress(InetAddress.getLoopbackAddress, 0)
val server2 = stringServer.serve(addr2, echoService)
val sr = new InMemoryStatsReceiver
val client = stringClient
.configured(Stats(sr))
.newService(Name.bound(Address(server1.boundAddress.asInstanceOf[InetSocketAddress]), Address(server2.boundAddress.asInstanceOf[InetSocketAddress])), "client")
assert(sr.counters(Seq("client", "loadbalancer", "adds")) == 2)
    assert(Await.result(client("hello\n")) == "hello")
}
test("throws NoBrokersAvailableException with negative addresses") {
val next: Stack[ServiceFactory[String, String]] =
Stack.Leaf(Stack.Role("mock"), ServiceFactory.const[String, String](
Service.mk[String, String](req => Future.value(s"$req"))))
val stack = new LoadBalancerFactory.StackModule[String, String] {
val description = "mock"
}.toStack(next)
val addrs = Seq(Addr.Neg)
addrs.foreach { addr =>
val dest = LoadBalancerFactory.Dest(Var(addr))
val factory = stack.make(Stack.Params.empty + dest)
intercept[NoBrokersAvailableException](Await.result(factory()))
}
}
}
@RunWith(classOf[JUnitRunner])
class ConcurrentLoadBalancerFactoryTest extends FunSuite with StringClient with StringServer {
val echoService = Service.mk[String, String](Future.value(_))
test("makes service factory stack") {
val address = new InetSocketAddress(InetAddress.getLoopbackAddress, 0)
val server = stringServer.serve(address, echoService)
val sr = new InMemoryStatsReceiver
val clientStack =
StackClient.newStack.replace(
LoadBalancerFactory.role, ConcurrentLoadBalancerFactory.module[String, String])
val client = stringClient.withStack(clientStack)
.configured(Stats(sr))
.newService(Name.bound(Address(server.boundAddress.asInstanceOf[InetSocketAddress])), "client")
assert(sr.counters(Seq("client", "loadbalancer", "adds")) == 4)
    assert(Await.result(client("hello\n")) == "hello")
}
test("creates fixed number of service factories based on params") {
val addr1 = new InetSocketAddress(InetAddress.getLoopbackAddress, 0)
val server1 = stringServer.serve(addr1, echoService)
val addr2 = new InetSocketAddress(InetAddress.getLoopbackAddress, 0)
val server2 = stringServer.serve(addr2, echoService)
val sr = new InMemoryStatsReceiver
val clientStack =
StackClient.newStack.replace(
LoadBalancerFactory.role, ConcurrentLoadBalancerFactory.module[String, String])
val client = stringClient.withStack(clientStack)
.configured(Stats(sr))
.configured(ConcurrentLoadBalancerFactory.Param(3))
.newService(Name.bound(Address(server1.boundAddress.asInstanceOf[InetSocketAddress]), Address(server2.boundAddress.asInstanceOf[InetSocketAddress])), "client")
assert(sr.counters(Seq("client", "loadbalancer", "adds")) == 6)
}
}
|
sveinnfannar/finagle
|
finagle-core/src/test/scala/com/twitter/finagle/loadbalancer/LoadBalancerFactoryTest.scala
|
Scala
|
apache-2.0
| 5,273
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.sql.catalyst.expressions
import java.nio.charset.StandardCharsets
import java.time.{ZoneId, ZoneOffset}
import scala.collection.mutable.ArrayBuffer
import scala.language.implicitConversions
import org.apache.commons.codec.digest.DigestUtils
import org.scalatest.exceptions.TestFailedException
import org.apache.spark.SparkFunSuite
import org.apache.spark.sql.{RandomDataGenerator, Row}
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.encoders.{ExamplePointUDT, RowEncoder}
import org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection
import org.apache.spark.sql.catalyst.util.{ArrayBasedMapData, DateTimeUtils, GenericArrayData, IntervalUtils}
import org.apache.spark.sql.types.{ArrayType, StructType, _}
import org.apache.spark.unsafe.types.UTF8String
class HashExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
val random = new scala.util.Random
implicit def stringToUTF8Str(str: String): UTF8String = UTF8String.fromString(str)
test("md5") {
checkEvaluation(Md5(Literal("ABC".getBytes(StandardCharsets.UTF_8))),
"902fbdd2b1df0c4f70b4a5d23525e932")
checkEvaluation(Md5(Literal.create(Array[Byte](1, 2, 3, 4, 5, 6), BinaryType)),
"6ac1e56bc78f031059be7be854522c4c")
checkEvaluation(Md5(Literal.create(null, BinaryType)), null)
checkConsistencyBetweenInterpretedAndCodegen(Md5, BinaryType)
}
test("sha1") {
checkEvaluation(Sha1(Literal("ABC".getBytes(StandardCharsets.UTF_8))),
"3c01bdbb26f358bab27f267924aa2c9a03fcfdb8")
checkEvaluation(Sha1(Literal.create(Array[Byte](1, 2, 3, 4, 5, 6), BinaryType)),
"5d211bad8f4ee70e16c7d343a838fc344a1ed961")
checkEvaluation(Sha1(Literal.create(null, BinaryType)), null)
checkEvaluation(Sha1(Literal("".getBytes(StandardCharsets.UTF_8))),
"da39a3ee5e6b4b0d3255bfef95601890afd80709")
checkConsistencyBetweenInterpretedAndCodegen(Sha1, BinaryType)
}
test("sha2") {
checkEvaluation(Sha2(Literal("ABC".getBytes(StandardCharsets.UTF_8)), Literal(256)),
DigestUtils.sha256Hex("ABC"))
checkEvaluation(Sha2(Literal.create(Array[Byte](1, 2, 3, 4, 5, 6), BinaryType), Literal(384)),
DigestUtils.sha384Hex(Array[Byte](1, 2, 3, 4, 5, 6)))
// unsupported bit length
checkEvaluation(Sha2(Literal.create(null, BinaryType), Literal(1024)), null)
checkEvaluation(Sha2(Literal.create(null, BinaryType), Literal(512)), null)
checkEvaluation(Sha2(Literal("ABC".getBytes(StandardCharsets.UTF_8)),
Literal.create(null, IntegerType)), null)
checkEvaluation(Sha2(Literal.create(null, BinaryType), Literal.create(null, IntegerType)), null)
}
test("crc32") {
checkEvaluation(Crc32(Literal("ABC".getBytes(StandardCharsets.UTF_8))), 2743272264L)
checkEvaluation(Crc32(Literal.create(Array[Byte](1, 2, 3, 4, 5, 6), BinaryType)),
2180413220L)
checkEvaluation(Crc32(Literal.create(null, BinaryType)), null)
checkConsistencyBetweenInterpretedAndCodegen(Crc32, BinaryType)
}
def checkHiveHash(input: Any, dataType: DataType, expected: Long): Unit = {
// Note : All expected hashes need to be computed using Hive 1.2.1
val actual = HiveHashFunction.hash(input, dataType, seed = 0)
withClue(s"hash mismatch for input = `$input` of type `$dataType`.") {
assert(actual == expected)
}
}
def checkHiveHashForIntegralType(dataType: DataType): Unit = {
// corner cases
checkHiveHash(null, dataType, 0)
checkHiveHash(1, dataType, 1)
checkHiveHash(0, dataType, 0)
checkHiveHash(-1, dataType, -1)
checkHiveHash(Int.MaxValue, dataType, Int.MaxValue)
checkHiveHash(Int.MinValue, dataType, Int.MinValue)
// random values
for (_ <- 0 until 10) {
val input = random.nextInt()
checkHiveHash(input, dataType, input)
}
}
test("hive-hash for null") {
checkHiveHash(null, NullType, 0)
}
test("hive-hash for boolean") {
checkHiveHash(true, BooleanType, 1)
checkHiveHash(false, BooleanType, 0)
}
test("hive-hash for byte") {
checkHiveHashForIntegralType(ByteType)
}
test("hive-hash for short") {
checkHiveHashForIntegralType(ShortType)
}
test("hive-hash for int") {
checkHiveHashForIntegralType(IntegerType)
}
test("hive-hash for long") {
checkHiveHash(1L, LongType, 1L)
checkHiveHash(0L, LongType, 0L)
checkHiveHash(-1L, LongType, 0L)
checkHiveHash(Long.MaxValue, LongType, -2147483648)
    // Hive fails to parse this, but the hashing function itself can handle this input
checkHiveHash(Long.MinValue, LongType, -2147483648)
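    // Hive hashes a long by XOR-ing its upper 32 bits into its lower 32 bits, hence the expected
    // value `((input >>> 32) ^ input).toInt` for the random inputs below.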
for (_ <- 0 until 10) {
val input = random.nextLong()
checkHiveHash(input, LongType, ((input >>> 32) ^ input).toInt)
}
}
test("hive-hash for float") {
checkHiveHash(0F, FloatType, 0)
checkHiveHash(0.0F, FloatType, 0)
checkHiveHash(1.1F, FloatType, 1066192077L)
checkHiveHash(-1.1F, FloatType, -1081291571)
checkHiveHash(99999999.99999999999F, FloatType, 1287568416L)
checkHiveHash(Float.MaxValue, FloatType, 2139095039)
checkHiveHash(Float.MinValue, FloatType, -8388609)
}
test("hive-hash for double") {
checkHiveHash(0, DoubleType, 0)
checkHiveHash(0.0, DoubleType, 0)
checkHiveHash(1.1, DoubleType, -1503133693)
checkHiveHash(-1.1, DoubleType, 644349955)
checkHiveHash(1000000000.000001, DoubleType, 1104006509)
checkHiveHash(1000000000.0000000000000000000000001, DoubleType, 1104006501)
checkHiveHash(9999999999999999999.9999999999999999999, DoubleType, 594568676)
checkHiveHash(Double.MaxValue, DoubleType, -2146435072)
checkHiveHash(Double.MinValue, DoubleType, 1048576)
}
test("hive-hash for string") {
checkHiveHash(UTF8String.fromString("apache spark"), StringType, 1142704523L)
checkHiveHash(UTF8String.fromString("!@#$%^&*()_+=-"), StringType, -613724358L)
checkHiveHash(UTF8String.fromString("abcdefghijklmnopqrstuvwxyz"), StringType, 958031277L)
checkHiveHash(UTF8String.fromString("AbCdEfGhIjKlMnOpQrStUvWxYz012"), StringType, -648013852L)
// scalastyle:off nonascii
checkHiveHash(UTF8String.fromString("数据砖头"), StringType, -898686242L)
checkHiveHash(UTF8String.fromString("नमस्ते"), StringType, 2006045948L)
// scalastyle:on nonascii
}
test("hive-hash for date type") {
def checkHiveHashForDateType(dateString: String, expected: Long): Unit = {
checkHiveHash(
DateTimeUtils.stringToDate(UTF8String.fromString(dateString), ZoneOffset.UTC).get,
DateType,
expected)
}
// basic case
checkHiveHashForDateType("2017-01-01", 17167)
// boundary cases
checkHiveHashForDateType("0000-01-01", -719528)
checkHiveHashForDateType("9999-12-31", 2932896)
// epoch
checkHiveHashForDateType("1970-01-01", 0)
// before epoch
checkHiveHashForDateType("1800-01-01", -62091)
// Invalid input: bad date string. Hive returns 0 for such cases
intercept[NoSuchElementException](checkHiveHashForDateType("0-0-0", 0))
intercept[NoSuchElementException](checkHiveHashForDateType("-1212-01-01", 0))
intercept[NoSuchElementException](checkHiveHashForDateType("2016-99-99", 0))
// Invalid input: Empty string. Hive returns 0 for this case
intercept[NoSuchElementException](checkHiveHashForDateType("", 0))
// Invalid input: February 30th for a leap year. Hive supports this but Spark doesn't
intercept[NoSuchElementException](checkHiveHashForDateType("2016-02-30", 16861))
}
test("hive-hash for timestamp type") {
def checkHiveHashForTimestampType(
timestamp: String,
expected: Long,
zoneId: ZoneId = ZoneOffset.UTC): Unit = {
checkHiveHash(
DateTimeUtils.stringToTimestamp(UTF8String.fromString(timestamp), zoneId).get,
TimestampType,
expected)
}
// basic case
checkHiveHashForTimestampType("2017-02-24 10:56:29", 1445725271)
// with higher precision
checkHiveHashForTimestampType("2017-02-24 10:56:29.111111", 1353936655)
// with different timezone
checkHiveHashForTimestampType("2017-02-24 10:56:29", 1445732471,
DateTimeUtils.getZoneId("US/Pacific"))
// boundary cases
checkHiveHashForTimestampType("0001-01-01 00:00:00", 1645969984)
checkHiveHashForTimestampType("9999-01-01 00:00:00", -1081818240)
// epoch
checkHiveHashForTimestampType("1970-01-01 00:00:00", 0)
// before epoch
checkHiveHashForTimestampType("1800-01-01 03:12:45", -267420885)
// Invalid input: bad timestamp string. Hive returns 0 for such cases
intercept[NoSuchElementException](checkHiveHashForTimestampType("0-0-0 0:0:0", 0))
intercept[NoSuchElementException](checkHiveHashForTimestampType("-99-99-99 99:99:45", 0))
intercept[NoSuchElementException](checkHiveHashForTimestampType("555555-55555-5555", 0))
// Invalid input: Empty string. Hive returns 0 for this case
intercept[NoSuchElementException](checkHiveHashForTimestampType("", 0))
    // Invalid input: February 30th in a leap year. Hive supports this but Spark doesn't
intercept[NoSuchElementException](checkHiveHashForTimestampType("2016-02-30 00:00:00", 0))
    // Invalid input: Hive accepts up to 9 decimal places of precision but Spark uses up to 6
intercept[TestFailedException](checkHiveHashForTimestampType("2017-02-24 10:56:29.11111111", 0))
}
test("hive-hash for CalendarInterval type") {
def checkHiveHashForIntervalType(interval: String, expected: Long): Unit = {
checkHiveHash(IntervalUtils.stringToInterval(UTF8String.fromString(interval)),
CalendarIntervalType, expected)
}
// ----- MICROSEC -----
// basic case
checkHiveHashForIntervalType("interval 1 microsecond", 24273)
// negative
checkHiveHashForIntervalType("interval -1 microsecond", 22273)
// edge / boundary cases
checkHiveHashForIntervalType("interval 0 microsecond", 23273)
checkHiveHashForIntervalType("interval 999 microsecond", 1022273)
checkHiveHashForIntervalType("interval -999 microsecond", -975727)
// ----- MILLISEC -----
// basic case
checkHiveHashForIntervalType("interval 1 millisecond", 1023273)
// negative
checkHiveHashForIntervalType("interval -1 millisecond", -976727)
// edge / boundary cases
checkHiveHashForIntervalType("interval 0 millisecond", 23273)
checkHiveHashForIntervalType("interval 999 millisecond", 999023273)
checkHiveHashForIntervalType("interval -999 millisecond", -998976727)
// ----- SECOND -----
// basic case
checkHiveHashForIntervalType("interval 1 second", 23310)
// negative
checkHiveHashForIntervalType("interval -1 second", 23273)
// edge / boundary cases
checkHiveHashForIntervalType("interval 0 second", 23273)
checkHiveHashForIntervalType("interval 2147483647 second", -2147460412)
checkHiveHashForIntervalType("interval -2147483648 second", -2147460412)
// Out of range for both Hive and Spark
// Hive throws an exception. Spark overflows and returns wrong output
// checkHiveHashForIntervalType("interval 9999999999 second", 0)
// ----- MINUTE -----
// basic cases
checkHiveHashForIntervalType("interval 1 minute", 25493)
// negative
checkHiveHashForIntervalType("interval -1 minute", 25456)
// edge / boundary cases
checkHiveHashForIntervalType("interval 0 minute", 23273)
checkHiveHashForIntervalType("interval 2147483647 minute", 21830)
checkHiveHashForIntervalType("interval -2147483648 minute", 22163)
// Out of range for both Hive and Spark
// Hive throws an exception. Spark overflows and returns wrong output
// checkHiveHashForIntervalType("interval 9999999999 minute", 0)
// ----- HOUR -----
// basic case
checkHiveHashForIntervalType("interval 1 hour", 156473)
// negative
checkHiveHashForIntervalType("interval -1 hour", 156436)
// edge / boundary cases
checkHiveHashForIntervalType("interval 0 hour", 23273)
checkHiveHashForIntervalType("interval 2147483647 hour", -62308)
checkHiveHashForIntervalType("interval -2147483648 hour", -43327)
// Out of range for both Hive and Spark
// Hive throws an exception. Spark overflows and returns wrong output
// checkHiveHashForIntervalType("interval 9999999999 hour", 0)
// ----- DAY -----
// basic cases
checkHiveHashForIntervalType("interval 1 day", 3220073)
// negative
checkHiveHashForIntervalType("interval -1 day", 3220036)
// edge / boundary cases
checkHiveHashForIntervalType("interval 0 day", 23273)
checkHiveHashForIntervalType("interval 106751991 day", -451506760)
checkHiveHashForIntervalType("interval -106751991 day", -451514123)
// Hive supports `day` for a longer range but Spark's range is smaller
// The check for range is done at the parser level so this does not fail in Spark
// checkHiveHashForIntervalType("interval -2147483648 day", -1575127)
// checkHiveHashForIntervalType("interval 2147483647 day", -4767228)
// Out of range for both Hive and Spark
// Hive throws an exception. Spark overflows and returns wrong output
// checkHiveHashForIntervalType("interval 9999999999 day", 0)
// ----- MIX -----
checkHiveHashForIntervalType("interval 0 day 0 hour", 23273)
checkHiveHashForIntervalType("interval 0 day 0 hour 0 minute", 23273)
checkHiveHashForIntervalType("interval 0 day 0 hour 0 minute 0 second", 23273)
checkHiveHashForIntervalType("interval 0 day 0 hour 0 minute 0 second 0 millisecond", 23273)
checkHiveHashForIntervalType(
"interval 0 day 0 hour 0 minute 0 second 0 millisecond 0 microsecond", 23273)
checkHiveHashForIntervalType("interval 6 day 15 hour", 21202073)
checkHiveHashForIntervalType("interval 5 day 4 hour 8 minute", 16557833)
checkHiveHashForIntervalType("interval -23 day 56 hour -1111113 minute 9898989 second",
-2128468593)
checkHiveHashForIntervalType("interval 66 day 12 hour 39 minute 23 second 987 millisecond",
1199697904)
checkHiveHashForIntervalType(
"interval 66 day 12 hour 39 minute 23 second 987 millisecond 123 microsecond", 1199820904)
}
test("hive-hash for array") {
// empty array
checkHiveHash(
input = new GenericArrayData(Array[Int]()),
dataType = ArrayType(IntegerType, containsNull = false),
expected = 0)
// basic case
checkHiveHash(
input = new GenericArrayData(Array(1, 10000, Int.MaxValue)),
dataType = ArrayType(IntegerType, containsNull = false),
expected = -2147172688L)
// with negative values
checkHiveHash(
input = new GenericArrayData(Array(-1L, 0L, 999L, Int.MinValue.toLong)),
dataType = ArrayType(LongType, containsNull = false),
expected = -2147452680L)
// with nulls only
val arrayTypeWithNull = ArrayType(IntegerType, containsNull = true)
checkHiveHash(
input = new GenericArrayData(Array(null, null)),
dataType = arrayTypeWithNull,
expected = 0)
// mix with null
checkHiveHash(
input = new GenericArrayData(Array(-12221, 89, null, 767)),
dataType = arrayTypeWithNull,
expected = -363989515)
// nested with array
checkHiveHash(
input = new GenericArrayData(
Array(
new GenericArrayData(Array(1234L, -9L, 67L)),
new GenericArrayData(Array(null, null)),
new GenericArrayData(Array(55L, -100L, -2147452680L))
)),
dataType = ArrayType(ArrayType(LongType)),
expected = -1007531064)
// nested with map
checkHiveHash(
input = new GenericArrayData(
Array(
new ArrayBasedMapData(
new GenericArrayData(Array(-99, 1234)),
new GenericArrayData(Array(UTF8String.fromString("sql"), null))),
new ArrayBasedMapData(
new GenericArrayData(Array(67)),
new GenericArrayData(Array(UTF8String.fromString("apache spark"))))
)),
dataType = ArrayType(MapType(IntegerType, StringType)),
expected = 1139205955)
}
test("hive-hash for map") {
val mapType = MapType(IntegerType, StringType)
// empty map
checkHiveHash(
input = new ArrayBasedMapData(new GenericArrayData(Array()), new GenericArrayData(Array())),
dataType = mapType,
expected = 0)
// basic case
checkHiveHash(
input = new ArrayBasedMapData(
new GenericArrayData(Array(1, 2)),
new GenericArrayData(Array(UTF8String.fromString("foo"), UTF8String.fromString("bar")))),
dataType = mapType,
expected = 198872)
// with null value
checkHiveHash(
input = new ArrayBasedMapData(
new GenericArrayData(Array(55, -99)),
new GenericArrayData(Array(UTF8String.fromString("apache spark"), null))),
dataType = mapType,
expected = 1142704473)
    // nesting (only values can be nested, as keys have to be of a primitive datatype)
val nestedMapType = MapType(IntegerType, MapType(IntegerType, StringType))
checkHiveHash(
input = new ArrayBasedMapData(
new GenericArrayData(Array(1, -100)),
new GenericArrayData(
Array(
new ArrayBasedMapData(
new GenericArrayData(Array(-99, 1234)),
new GenericArrayData(Array(UTF8String.fromString("sql"), null))),
new ArrayBasedMapData(
new GenericArrayData(Array(67)),
new GenericArrayData(Array(UTF8String.fromString("apache spark"))))
))),
dataType = nestedMapType,
expected = -1142817416)
}
test("hive-hash for struct") {
// basic
val row = new GenericInternalRow(Array[Any](1, 2, 3))
checkHiveHash(
input = row,
dataType =
new StructType()
.add("col1", IntegerType)
.add("col2", IntegerType)
.add("col3", IntegerType),
expected = 1026)
// mix of several datatypes
val structType = new StructType()
.add("null", NullType)
.add("boolean", BooleanType)
.add("byte", ByteType)
.add("short", ShortType)
.add("int", IntegerType)
.add("long", LongType)
.add("arrayOfString", arrayOfString)
.add("mapOfString", mapOfString)
val rowValues = new ArrayBuffer[Any]()
rowValues += null
rowValues += true
rowValues += 1
rowValues += 2
rowValues += Int.MaxValue
rowValues += Long.MinValue
rowValues += new GenericArrayData(Array(
UTF8String.fromString("apache spark"),
UTF8String.fromString("hello world")
))
rowValues += new ArrayBasedMapData(
new GenericArrayData(Array(UTF8String.fromString("project"), UTF8String.fromString("meta"))),
new GenericArrayData(Array(UTF8String.fromString("apache spark"), null))
)
val row2 = new GenericInternalRow(rowValues.toArray)
checkHiveHash(
input = row2,
dataType = structType,
expected = -2119012447)
}
private val structOfString = new StructType().add("str", StringType)
private val structOfUDT = new StructType().add("udt", new ExamplePointUDT, false)
private val arrayOfString = ArrayType(StringType)
private val arrayOfNull = ArrayType(NullType)
private val mapOfString = MapType(StringType, StringType)
private val arrayOfUDT = ArrayType(new ExamplePointUDT, false)
testHash(
new StructType()
.add("null", NullType)
.add("boolean", BooleanType)
.add("byte", ByteType)
.add("short", ShortType)
.add("int", IntegerType)
.add("long", LongType)
.add("float", FloatType)
.add("double", DoubleType)
.add("bigDecimal", DecimalType.SYSTEM_DEFAULT)
.add("smallDecimal", DecimalType.USER_DEFAULT)
.add("string", StringType)
.add("binary", BinaryType)
.add("date", DateType)
.add("timestamp", TimestampType)
.add("udt", new ExamplePointUDT))
testHash(
new StructType()
.add("arrayOfNull", arrayOfNull)
.add("arrayOfString", arrayOfString)
.add("arrayOfArrayOfString", ArrayType(arrayOfString))
.add("arrayOfArrayOfInt", ArrayType(ArrayType(IntegerType)))
.add("arrayOfMap", ArrayType(mapOfString))
.add("arrayOfStruct", ArrayType(structOfString))
.add("arrayOfUDT", arrayOfUDT))
testHash(
new StructType()
.add("mapOfIntAndString", MapType(IntegerType, StringType))
.add("mapOfStringAndArray", MapType(StringType, arrayOfString))
.add("mapOfArrayAndInt", MapType(arrayOfString, IntegerType))
.add("mapOfArray", MapType(arrayOfString, arrayOfString))
.add("mapOfStringAndStruct", MapType(StringType, structOfString))
.add("mapOfStructAndString", MapType(structOfString, StringType))
.add("mapOfStruct", MapType(structOfString, structOfString)))
testHash(
new StructType()
.add("structOfString", structOfString)
.add("structOfStructOfString", new StructType().add("struct", structOfString))
.add("structOfArray", new StructType().add("array", arrayOfString))
.add("structOfMap", new StructType().add("map", mapOfString))
.add("structOfArrayAndMap",
new StructType().add("array", arrayOfString).add("map", mapOfString))
.add("structOfUDT", structOfUDT))
test("hive-hash for decimal") {
def checkHiveHashForDecimal(
input: String,
precision: Int,
scale: Int,
expected: Long): Unit = {
val decimalType = DataTypes.createDecimalType(precision, scale)
val decimal = {
val value = Decimal.apply(new java.math.BigDecimal(input))
if (value.changePrecision(precision, scale)) value else null
}
checkHiveHash(decimal, decimalType, expected)
}
checkHiveHashForDecimal("18", 38, 0, 558)
checkHiveHashForDecimal("-18", 38, 0, -558)
checkHiveHashForDecimal("-18", 38, 12, -558)
checkHiveHashForDecimal("18446744073709001000", 38, 19, 0)
checkHiveHashForDecimal("-18446744073709001000", 38, 22, 0)
checkHiveHashForDecimal("-18446744073709001000", 38, 3, 17070057)
checkHiveHashForDecimal("18446744073709001000", 38, 4, -17070057)
checkHiveHashForDecimal("9223372036854775807", 38, 4, 2147482656)
checkHiveHashForDecimal("-9223372036854775807", 38, 5, -2147482656)
checkHiveHashForDecimal("00000.00000000000", 38, 34, 0)
checkHiveHashForDecimal("-00000.00000000000", 38, 11, 0)
checkHiveHashForDecimal("123456.1234567890", 38, 2, 382713974)
checkHiveHashForDecimal("123456.1234567890", 38, 20, 1871500252)
checkHiveHashForDecimal("123456.1234567890", 38, 10, 1871500252)
checkHiveHashForDecimal("-123456.1234567890", 38, 10, -1871500234)
checkHiveHashForDecimal("123456.1234567890", 38, 0, 3827136)
checkHiveHashForDecimal("-123456.1234567890", 38, 0, -3827136)
checkHiveHashForDecimal("123456.1234567890", 38, 20, 1871500252)
checkHiveHashForDecimal("-123456.1234567890", 38, 20, -1871500234)
checkHiveHashForDecimal("123456.123456789012345678901234567890", 38, 0, 3827136)
checkHiveHashForDecimal("-123456.123456789012345678901234567890", 38, 0, -3827136)
checkHiveHashForDecimal("123456.123456789012345678901234567890", 38, 10, 1871500252)
checkHiveHashForDecimal("-123456.123456789012345678901234567890", 38, 10, -1871500234)
checkHiveHashForDecimal("123456.123456789012345678901234567890", 38, 20, 236317582)
checkHiveHashForDecimal("-123456.123456789012345678901234567890", 38, 20, -236317544)
checkHiveHashForDecimal("123456.123456789012345678901234567890", 38, 30, 1728235666)
checkHiveHashForDecimal("-123456.123456789012345678901234567890", 38, 30, -1728235608)
checkHiveHashForDecimal("123456.123456789012345678901234567890", 38, 31, 1728235666)
}
test("SPARK-18207: Compute hash for a lot of expressions") {
def checkResult(schema: StructType, input: InternalRow): Unit = {
val exprs = schema.fields.zipWithIndex.map { case (f, i) =>
BoundReference(i, f.dataType, true)
}
val murmur3HashExpr = Murmur3Hash(exprs, 42)
val murmur3HashPlan = GenerateMutableProjection.generate(Seq(murmur3HashExpr))
val murmursHashEval = Murmur3Hash(exprs, 42).eval(input)
assert(murmur3HashPlan(input).getInt(0) == murmursHashEval)
val xxHash64Expr = XxHash64(exprs, 42)
val xxHash64Plan = GenerateMutableProjection.generate(Seq(xxHash64Expr))
val xxHash64Eval = XxHash64(exprs, 42).eval(input)
assert(xxHash64Plan(input).getLong(0) == xxHash64Eval)
val hiveHashExpr = HiveHash(exprs)
val hiveHashPlan = GenerateMutableProjection.generate(Seq(hiveHashExpr))
val hiveHashEval = HiveHash(exprs).eval(input)
assert(hiveHashPlan(input).getInt(0) == hiveHashEval)
}
val N = 1000
val wideRow = new GenericInternalRow(
Seq.tabulate(N)(i => UTF8String.fromString(i.toString)).toArray[Any])
val schema = StructType((1 to N).map(i => StructField(i.toString, StringType)))
checkResult(schema, wideRow)
val nestedRow = InternalRow(wideRow)
val nestedSchema = new StructType().add("nested", schema)
checkResult(nestedSchema, nestedRow)
}
test("SPARK-22284: Compute hash for nested structs") {
val M = 80
val N = 10
val L = M * N
val O = 50
val seed = 42
val wideRow = new GenericInternalRow(Seq.tabulate(O)(k =>
new GenericInternalRow(Seq.tabulate(M)(j =>
new GenericInternalRow(Seq.tabulate(N)(i =>
new GenericInternalRow(Array[Any](
UTF8String.fromString((k * L + j * N + i).toString))))
.toArray[Any])).toArray[Any])).toArray[Any])
val inner = new StructType(
(0 until N).map(_ => StructField("structOfString", structOfString)).toArray)
val outer = new StructType(
(0 until M).map(_ => StructField("structOfStructOfString", inner)).toArray)
val schema = new StructType(
(0 until O).map(_ => StructField("structOfStructOfStructOfString", outer)).toArray)
val exprs = schema.fields.zipWithIndex.map { case (f, i) =>
BoundReference(i, f.dataType, true)
}
val murmur3HashExpr = Murmur3Hash(exprs, 42)
val murmur3HashPlan = GenerateMutableProjection.generate(Seq(murmur3HashExpr))
val murmursHashEval = Murmur3Hash(exprs, 42).eval(wideRow)
assert(murmur3HashPlan(wideRow).getInt(0) == murmursHashEval)
}
private def testHash(inputSchema: StructType): Unit = {
val inputGenerator = RandomDataGenerator.forType(inputSchema, nullable = false).get
val encoder = RowEncoder(inputSchema)
val seed = scala.util.Random.nextInt()
test(s"murmur3/xxHash64/hive hash: ${inputSchema.simpleString}") {
for (_ <- 1 to 10) {
val input = encoder.toRow(inputGenerator.apply().asInstanceOf[Row]).asInstanceOf[UnsafeRow]
val literals = input.toSeq(inputSchema).zip(inputSchema.map(_.dataType)).map {
case (value, dt) => Literal.create(value, dt)
}
// Only check that the interpreted version produces the same result as the codegen version.
checkEvaluation(Murmur3Hash(literals, seed), Murmur3Hash(literals, seed).eval())
checkEvaluation(XxHash64(literals, seed), XxHash64(literals, seed).eval())
checkEvaluation(HiveHash(literals), HiveHash(literals).eval())
}
}
}
}
|
jkbradley/spark
|
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
|
Scala
|
apache-2.0
| 28,182
|
package spire.laws.shadows
import spire.algebra.IsIntegral
import spire.laws.InvalidTestException
import spire.math.NumberTag
object Shadowing {
def apply[A, S](f: A => S, g: S => Option[A]): Shadowing[A, S] = new Shadowing[A, S] {
def toShadow(a: A): S = f(a)
def fromShadow(s: S): Option[A] = g(s)
}
def bigInt[A:IsIntegral:NumberTag](fromBigInt: BigInt => A): Shadowing[A, BigInt] =
new Shadowing[A, BigInt] {
def toShadow(a: A): BigInt = IsIntegral[A].toBigInt(a)
def fromShadow(s: BigInt): Option[A] = {
NumberTag[A].hasMinValue match {
case Some(m) if s < IsIntegral[A].toBigInt(m) => return None
case _ =>
}
NumberTag[A].hasMaxValue match {
case Some(m) if s > IsIntegral[A].toBigInt(m) => return None
case _ =>
}
Some(fromBigInt(s))
}
}
}
trait Shadowing[A, S] {
def toShadow(a: A): S
def fromShadow(s: S): Option[A]
def isValid(s: S): Boolean = fromShadow(s).nonEmpty
def checked(s: S): S =
if (!isValid(s)) throw new InvalidTestException else s
}
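// Hedged usage sketch (the object and values below are invented for illustration): a
// shadow mapping between Int and BigInt built with the explicit `apply` factory, so no
// extra type class instances are needed; out-of-range values are rejected by `fromShadow`.
object ShadowingUsageSketch {
  val intShadow: Shadowing[Int, BigInt] =
    Shadowing[Int, BigInt](BigInt(_), s => if (s.isValidInt) Some(s.toInt) else None)

  val inRange: Boolean = intShadow.isValid(BigInt(42))               // true
  val outOfRange: Boolean = intShadow.isValid(BigInt(Long.MaxValue)) // false, exceeds Int range
}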
|
adampingel/spire
|
laws/src/main/scala/spire/laws/shadows/Shadowing.scala
|
Scala
|
mit
| 1,093
|
package mesosphere.marathon
package storage.repository
import java.util.UUID
import akka.Done
import mesosphere.AkkaUnitTest
import mesosphere.marathon.core.storage.repository.{ Repository, VersionedRepository }
import mesosphere.marathon.core.storage.store.impl.cache.{ LazyCachingPersistenceStore, LazyVersionCachingPersistentStore, LoadTimeCachingPersistenceStore }
import mesosphere.marathon.core.storage.store.impl.memory.InMemoryPersistenceStore
import mesosphere.marathon.core.storage.store.impl.zk.ZkPersistenceStore
import mesosphere.marathon.integration.setup.ZookeeperServerTest
import mesosphere.marathon.state.{ AppDefinition, PathId, Timestamp, VersionInfo }
import mesosphere.marathon.stream.Sink
import org.scalatest.GivenWhenThen
import org.scalatest.time.{ Seconds, Span }
import scala.concurrent.duration._
class RepositoryTest extends AkkaUnitTest with ZookeeperServerTest with GivenWhenThen {
import PathId._
def randomAppId = UUID.randomUUID().toString.toRootPath
def randomApp = AppDefinition(randomAppId, versionInfo = VersionInfo.OnlyVersion(Timestamp.now()))
override implicit lazy val patienceConfig: PatienceConfig = PatienceConfig(timeout = Span(10, Seconds))
def basic(name: String, createRepo: () => Repository[PathId, AppDefinition]): Unit = {
s"$name:unversioned" should {
"get of a non-existent value should return nothing" in {
val repo = createRepo()
repo.get(randomAppId).futureValue should be('empty)
}
"delete should be idempotent" in {
val repo = createRepo()
val id = randomAppId
repo.delete(id).futureValue should be(Done)
repo.delete(id).futureValue should be(Done)
}
"ids should return nothing" in {
val repo = createRepo()
repo.ids().runWith(Sink.seq).futureValue should be('empty)
}
"retrieve the previously stored value for two keys" in {
val repo = createRepo()
val app1 = randomApp
val app2 = randomApp
repo.store(app1).futureValue
repo.store(app2).futureValue
repo.get(app1.id).futureValue.value should equal(app1)
repo.get(app2.id).futureValue.value should equal(app2)
}
"store with the same id should update the object" in {
val repo = createRepo()
val start = randomApp
val end = start.copy(cmd = Some("abcd"))
repo.store(start).futureValue
repo.store(end).futureValue
repo.get(end.id).futureValue.value should equal(end)
repo.get(start.id).futureValue.value should equal(end)
repo.all().runWith(Sink.seq).futureValue should equal(Seq(end))
}
"stored objects should list in the ids and all" in {
val repo = createRepo()
val app1 = randomApp
val app2 = randomApp
Given("Two objects")
repo.store(app1).futureValue
repo.store(app2).futureValue
Then("They should list in the ids and all")
repo.ids().runWith(Sink.seq).futureValue should contain theSameElementsAs Seq(app1.id, app2.id)
repo.all().runWith(Sink.seq).futureValue should contain theSameElementsAs Seq(app1, app2)
When("one of them is removed")
repo.delete(app2.id).futureValue
Then("it should no longer be in the ids")
repo.ids().runWith(Sink.seq).futureValue should contain theSameElementsAs Seq(app1.id)
repo.all().runWith(Sink.seq).futureValue should contain theSameElementsAs Seq(app1)
}
}
}
def versioned(name: String, createRepo: () => VersionedRepository[PathId, AppDefinition]): Unit = {
s"$name:versioned" should {
"list no versions when empty" in {
val repo = createRepo()
repo.versions(randomAppId).runWith(Sink.seq).futureValue should be('empty)
}
"list and retrieve the current and all previous versions up to the cap" in {
val repo = createRepo()
val app = randomApp.copy(versionInfo = VersionInfo.OnlyVersion(Timestamp(1)))
val lastVersion = app.copy(versionInfo = VersionInfo.OnlyVersion(Timestamp(4)))
// with the version cap, only two previous versions plus the current one are kept, so the original `app` is expected to be gone
val versions = Seq(
app,
app.copy(versionInfo = VersionInfo.OnlyVersion(Timestamp(2))),
app.copy(versionInfo = VersionInfo.OnlyVersion(Timestamp(3))),
lastVersion)
versions.foreach { v => repo.store(v).futureValue }
// New persistence stores are garbage collected, so they may retain extra versions...
versions.tail.map(_.version.toOffsetDateTime).toSet.diff(
repo.versions(app.id).runWith(Sink.set).futureValue) should be ('empty)
versions.tail.toSet.diff(repo.versions(app.id).mapAsync(Int.MaxValue)(repo.getVersion(app.id, _))
.collect { case Some(g) => g }
.runWith(Sink.set).futureValue) should be ('empty)
repo.get(app.id).futureValue.value should equal(lastVersion)
When("deleting the current version")
repo.deleteCurrent(app.id).futureValue
Then("The versions are still list-able, including the current one")
versions.tail.map(_.version.toOffsetDateTime).toSet.diff(
repo.versions(app.id).runWith(Sink.set).futureValue) should be('empty)
versions.tail.toSet.diff(
repo.versions(app.id).mapAsync(Int.MaxValue)(repo.getVersion(app.id, _))
.collect { case Some(g) => g }
.runWith(Sink.set).futureValue
) should be ('empty)
And("Get of the current will fail")
repo.get(app.id).futureValue should be('empty)
When("deleting all")
repo.delete(app.id).futureValue
Then("No versions remain")
repo.versions(app.id).runWith(Sink.seq).futureValue should be('empty)
}
"be able to store a specific version" in {
val repo = createRepo()
val app = randomApp
repo.storeVersion(app).futureValue
repo.versions(app.id).runWith(Sink.seq).futureValue should
contain theSameElementsAs Seq(app.version.toOffsetDateTime)
repo.get(app.id).futureValue should be ('empty)
repo.getVersion(app.id, app.version.toOffsetDateTime).futureValue.value should equal(app)
}
}
}
def createInMemRepo(): AppRepository = {
AppRepository.inMemRepository(new InMemoryPersistenceStore())
}
def createLoadTimeCachingRepo(): AppRepository = {
val cached = new LoadTimeCachingPersistenceStore(new InMemoryPersistenceStore())
cached.preDriverStarts.futureValue
AppRepository.inMemRepository(cached)
}
def createZKRepo(): AppRepository = {
val root = UUID.randomUUID().toString
val rootClient = zkClient(namespace = Some(root))
val store = new ZkPersistenceStore(rootClient, Duration.Inf)
AppRepository.zkRepository(store)
}
def createLazyCachingRepo(): AppRepository = {
AppRepository.inMemRepository(LazyCachingPersistenceStore(new InMemoryPersistenceStore()))
}
def createLazyVersionCachingRepo(): AppRepository = {
AppRepository.inMemRepository(LazyVersionCachingPersistentStore(new InMemoryPersistenceStore()))
}
behave like basic("InMemoryPersistence", createInMemRepo)
behave like basic("ZkPersistence", createZKRepo)
behave like basic("LoadTimeCachingPersistence", createLoadTimeCachingRepo)
behave like basic("LazyCachingPersistence", createLazyCachingRepo)
behave like versioned("InMemoryPersistence", createInMemRepo)
behave like versioned("ZkPersistence", createZKRepo)
behave like versioned("LoadTimeCachingPersistence", createLoadTimeCachingRepo)
behave like versioned("LazyCachingPersistence", createLazyCachingRepo)
behave like versioned("LazyVersionCachingPersistence", createLazyVersionCachingRepo)
}
|
natemurthy/marathon
|
src/test/scala/mesosphere/marathon/storage/repository/RepositoryTest.scala
|
Scala
|
apache-2.0
| 7,764
|
package com.sksamuel.elastic4s.testkit
import com.sksamuel.elastic4s.embedded.LocalNode
import com.sksamuel.elastic4s.http.{HttpClient, HttpExecutable}
import com.sksamuel.elastic4s.{ElasticsearchClientUri, Executable, JsonFormat, TcpClient}
import org.elasticsearch.{ElasticsearchException, ElasticsearchWrapperException}
import org.scalatest._
import org.slf4j.LoggerFactory
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
trait DualClient extends SuiteMixin {
this: Suite with DualElasticSugar =>
var node: LocalNode = getNode
var client: TcpClient = node.elastic4sclient(false)
private val logger = LoggerFactory.getLogger(getClass)
// Runs twice (once for HTTP and once for TCP)
protected def beforeRunTests(): Unit = {
}
var useHttpClient = true
val http = HttpClient(ElasticsearchClientUri("elasticsearch://" + node.ipAndPort))
def execute[T, R, Q1, Q2](request: T)(implicit tcpExec: Executable[T, R, Q1],
httpExec: HttpExecutable[T, Q2],
format: JsonFormat[Q2],
tcpConv: ResponseConverter[Q1, Q2]): Future[Q2] = {
if (useHttpClient) {
logger.debug("Using HTTP client...")
httpExec.execute(http.rest, request, format)
} else {
try {
logger.debug("Using TCP client...")
tcpExec(client.java, request).map(tcpConv.convert)
} catch {
case e: ElasticsearchException => Future.failed(e)
case e: ElasticsearchWrapperException => Future.failed(e)
}
}
}
override abstract def runTests(testName: Option[String], args: Args): Status = {
val httpStatus = runTestsOnce(testName, args)
// Get a new node for running the TCP tests
node = getNode
client = node.elastic4sclient(false)
useHttpClient = !useHttpClient
val tcpStatus = runTestsOnce(testName, args)
new CompositeStatus(Set(httpStatus, tcpStatus))
}
private def runTestsOnce(testName: Option[String], args: Args): Status = {
try {
beforeRunTests()
super.runTests(testName, args)
} finally {
node.stop(true)
}
}
def tcpOnly(block: => Unit): Unit = if (!useHttpClient) block
def httpOnly(block: => Unit): Unit = if (useHttpClient) block
}
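// Hedged usage sketch (kept entirely as a comment; the suite name and DSL call below are
// hypothetical): a spec mixing in DualClient runs once against the HTTP client and once
// against the TCP client, with `execute` picking the active client and `httpOnly`/`tcpOnly`
// guarding client-specific assertions.
//
//   class MyDualClientTest extends FlatSpec with Matchers with DualClient with DualElasticSugar {
//     "an index request" should "succeed with both clients" in {
//       execute(indexInto("people" / "person").fields("name" -> "sam")).futureValue
//       httpOnly { /* assertions on the HTTP response model */ }
//       tcpOnly { /* assertions on the converted TCP response */ }
//     }
//   }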
|
FabienPennequin/elastic4s
|
elastic4s-testkit/src/main/scala/com/sksamuel/elastic4s/testkit/DualClient.scala
|
Scala
|
apache-2.0
| 2,330
|
package java.io
class FilterInputStream protected (protected val in: InputStream)
extends InputStream {
override def read(): Int =
in.read()
override def read(b: Array[Byte]): Int =
read(b, 0, b.length) // this is spec! must not do in.read(b)
override def read(b: Array[Byte], off: Int, len: Int): Int =
in.read(b, off, len)
override def skip(n: Long): Long = in.skip(n)
override def available(): Int = in.available()
override def close(): Unit = in.close()
override def mark(readlimit: Int): Unit = in.mark(readlimit)
override def markSupported(): Boolean = in.markSupported()
override def reset(): Unit = in.reset()
}
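// Hedged illustration (kept as a comment; the subclass is hypothetical): because `read(b)`
// delegates to `read(b, off, len)` rather than calling `in.read(b)` directly, a filter only
// needs to override the three-argument variant for all bulk reads to pass through it.
//
//   class UpperCaseInputStream(in: InputStream) extends FilterInputStream(in) {
//     override def read(b: Array[Byte], off: Int, len: Int): Int = {
//       val n = super.read(b, off, len)
//       for (i <- off until off + math.max(n, 0))
//         b(i) = Character.toUpperCase(b(i).toChar).toByte
//       n
//     }
//   }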
|
cedricviaccoz/scala-native
|
javalib/src/main/scala/java/io/FilterInputStream.scala
|
Scala
|
bsd-3-clause
| 677
|
/*
* Copyright (C) 2015 Stratio (http://stratio.com)
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.sql.crossdata.execution.auth
import com.stratio.crossdata.security.{Action, _}
import org.apache.log4j.Logger
import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
import org.apache.spark.sql.catalyst.plans.logical.{InsertIntoTable, LogicalPlan}
import org.apache.spark.sql.catalyst.{TableIdentifier, plans}
import org.apache.spark.sql.crossdata.XDSQLConf
import org.apache.spark.sql.crossdata.catalyst.execution.{AddApp, AddJar, CreateExternalTable, CreateGlobalIndex, CreateTempView, CreateView, DropAllTables, DropExternalTable, DropTable, DropView, ExecuteApp, ImportTablesUsingWithOptions, InsertIntoTable => XDInsertIntoTable}
import org.apache.spark.sql.crossdata.catalyst.streaming._
import org.apache.spark.sql.crossdata.execution.XDQueryExecution
import org.apache.spark.sql.execution._
import org.apache.spark.sql.execution.datasources.{CreateTableUsing, CreateTableUsingAsSelect, RefreshTable, DescribeCommand => LogicalDescribeCommand}
class AuthDirectivesExtractor(crossdataInstances: Seq[String], catalogIdentifier: String) {
private lazy val logger = Logger.getLogger(classOf[XDQueryExecution])
def extractResourcesAndActions(parsedPlan: LogicalPlan): Seq[(Resource, Action)] = extResAndOps(parsedPlan)
private[auth] def extResAndOps =
createPlanToResourcesAndOps orElse
insertPlanToResourcesAndOps orElse
dropPlanToResourcesAndOps orElse
streamingPlanToResourcesAndOps orElse
insecurePlanToResourcesAndOps orElse
metadataPlanToResourcesAndOps orElse
cachePlanToResourcesAndOps orElse
configCommandPlanToResourcesAndOps orElse
queryPlanToResourcesAndOps
implicit def tupleToSeq(tuple: (Resource, Action)): Seq[(Resource, Action)] = Seq(tuple)
private[auth] def createPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case CreateTableUsing(tableIdent, _, _, isTemporary, _, _, _) =>
(catalogResource, Write)
case CreateView(viewIdentifier, selectPlan, _) =>
collectTableResources(selectPlan).map((_, Read)) :+ (catalogResource, Write)
case CreateTempView(viewIdentifier, selectPlan, _) =>
collectTableResources(selectPlan).map((_, Read)) :+ (catalogResource, Write)
case ImportTablesUsingWithOptions(datasource, _) =>
(catalogResource, Write)
case _: CreateExternalTable =>
(catalogResource, Write) :+ (allDatastoreResource, Write)
case CreateTableUsingAsSelect(tableIdent, _, isTemporary, _, _, _, selectPlan) =>
collectTableResources(selectPlan).map((_, Read)) :+ (catalogResource, Write) :+ (allDatastoreResource, Write)
}
private[auth] def insertPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case XDInsertIntoTable(tableIdentifier, _, _) =>
(tableResource(tableIdentifier), Write) :+ (allDatastoreResource, Write)
case InsertIntoTable(writePlan, _, selectPlan, _, _) =>
collectTableResources(writePlan).map((_, Write)) ++ collectTableResources(selectPlan).map((_, Read)) :+ (allDatastoreResource, Write)
}
private[auth] def dropPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case DropTable(tableIdentifier) =>
(catalogResource, Write) :+ (tableResource(tableIdentifier), Drop)
case DropView(viewIdentifier) =>
(catalogResource, Write) :+ (tableResource(viewIdentifier), Drop)
case DropExternalTable(tableIdentifier) =>
(catalogResource, Write) :+ (tableResource(tableIdentifier), Drop) :+ (allDatastoreResource, Drop)
case DropAllTables =>
(catalogResource, Write) :+ (allTableResource, Drop)
}
private[auth] def streamingPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case lPlan@ShowAllEphemeralStatuses => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan@DropAllEphemeralTables => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: CreateEphemeralTable => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: AddEphemeralQuery => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: DropEphemeralTable => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: DropAllEphemeralQueries => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: DescribeEphemeralTable => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: ShowEphemeralQueries => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: DropEphemeralQuery => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: ShowEphemeralStatus => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan@ShowEphemeralTables => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: StopProcess => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: StartProcess => throw new RuntimeException(s"Unauthorized command: $lPlan")
}
private[auth] def insecurePlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case lPlan: CreateGlobalIndex => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: AddApp => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: ExecuteApp => throw new RuntimeException(s"Unauthorized command: $lPlan")
case lPlan: AddJar => throw new RuntimeException(s"Unauthorized command: $lPlan")
}
private[auth] def metadataPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case ShowTablesCommand(databaseOpt) =>
(catalogResource, Describe)
case LogicalDescribeCommand(table, isExtended) =>
collectTableResources(table).map((_, Describe))
case plans.logical.DescribeFunction(functionName, _) =>
Seq.empty
case showFunctions: plans.logical.ShowFunctions =>
Seq.empty
}
private[auth] def configCommandPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case lPlan@SetCommand(Some((key, value))) if key == XDSQLConf.UserIdPropertyKey =>
throw new RuntimeException(s"Unauthorized command: $lPlan")
case SetCommand(Some((key, value))) =>
logger.info(s"Set command received: $key=$value") // TODO log
Seq.empty
}
private[auth] def cachePlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case CacheTableCommand(tableName, Some(toCachePlan), _) =>
collectTableResources(toCachePlan).map((_, Read)) :+ (catalogResource, Write) :+ (allTableResource, Cache)
case CacheTableCommand(tableName, None, _) =>
(tableResource(tableName), Cache)
case UncacheTableCommand(tableIdentifier) =>
(tableResource(tableIdentifier), Cache)
case ClearCacheCommand =>
(allTableResource, Cache)
case RefreshTable(tableIdentifier) =>
(tableResource(tableIdentifier), Cache)
}
private[auth] def queryPlanToResourcesAndOps: PartialFunction[LogicalPlan, Seq[(Resource, Action)]] = {
case queryWithUnresolvedAttributes =>
collectTableResources(queryWithUnresolvedAttributes).map((_, Read))
}
private[auth] def collectTableResources(parsedPlan: LogicalPlan) = parsedPlan.collect {
case UnresolvedRelation(tableIdentifier, _) =>
tableResource(tableIdentifier)
}
private lazy val catalogResource = Resource(crossdataInstances, CatalogResource, catalogIdentifier)
private lazy val allDatastoreResource = Resource(crossdataInstances, DatastoreResource, Resource.AllResourceName)
private def tableResource(tableIdentifier: TableIdentifier): Resource =
tableResource(tableIdentifier.unquotedString)
private def tableResource(tableResourceName: String): Resource =
Resource(crossdataInstances, TableResource, tableStr2ResourceName(tableResourceName))
private lazy val allTableResource: Resource = tableResource(Resource.AllResourceName)
private def tableStr2ResourceName(tableName: String): String = // TODO remove Spark 2.0 (required for Uncache plans)
Seq(catalogIdentifier, tableName) mkString "."
}
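// Hedged usage sketch (kept as a comment; instance names and the parsed plan are
// hypothetical): callers hand the extractor a parsed logical plan and receive the
// resource/action pairs that must be authorized before the plan can run.
//
//   val extractor = new AuthDirectivesExtractor(Seq("crossdata-cluster"), "globalCatalog")
//   val required: Seq[(Resource, Action)] = extractor.extractResourcesAndActions(parsedPlan)
//   // e.g. a plain SELECT over table "t" yields the table resource "globalCatalog.t" paired with Read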
|
darroyocazorla/crossdata
|
core/src/main/scala/org/apache/spark/sql/crossdata/execution/auth/AuthDirectivesExtractor.scala
|
Scala
|
apache-2.0
| 8,814
|
package generators.v24
import generators.Helper._
import org.specs2.mutable.Specification
import utils.WithApplication
import scala.xml.XML
/**
 * Tests to verify the generation of HTML
 */
class HtmlFunctionalCasesSpec extends Specification {
val version = "/0.24"
"The HTML generator should generate HTML from C3" should {
"Claim Functional cases" in new WithApplication {
for (i <- 1 to 15) {
val fileLocation = s"functionalTestCase${i}_testGeneratorResultIsSuccess.html"
val source = getClass.getResource(s"$version/claim/c3_functional$i.xml")
deleteFile(fileLocation)
generateHTML(fileLocation, XML.load(source))
}
}
"Change of circumstances Functional cases" in new WithApplication {
for (i <- 1 to 13) {
val fileLocation = s"functionalTestCase${i}_circs_testGeneratorResultIsSuccess.html"
val source = getClass.getResource(s"$version/circs/c3_functional${i}_circs.xml")
deleteFile(fileLocation)
generateHTML(fileLocation, XML.load(source))
}
for (i <- 20 to 28) {
val fileLocation = s"functionalTestCase${i}_circs_testGeneratorResultIsSuccess.html"
val source = getClass.getResource(s"$version/circs/c3_functional${i}_circs.xml")
deleteFile(fileLocation)
generateHTML(fileLocation, XML.load(source))
}
}
}
}
|
Department-for-Work-and-Pensions/RenderingService
|
test/generators/v24/HtmlFunctionalCasesSpec.scala
|
Scala
|
mit
| 1,396
|
package com.sfxcode.nosql.mongo.database
import com.sfxcode.nosql.mongo.database.MongoPoolOptions._
case class MongoPoolOptions(
maxConnectionIdleTime: Int = DefaultMaxConnectionIdleTime,
maxSize: Int = DefaultMaxSize,
minSize: Int = DefaultMinSize,
maintenanceInitialDelay: Int = DefaultMaintenanceInitialDelay
) {}
object MongoPoolOptions {
val DefaultMaxConnectionIdleTime = 60
val DefaultMaxSize = 50
val DefaultMinSize = 0
val DefaultMaintenanceInitialDelay = 0
}
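// Hedged usage sketch (the object name is invented for illustration): as a case class with
// defaults, callers usually override only the pool limits they care about.
object MongoPoolOptionsUsageSketch {
  // Larger pool; idle time and maintenance delay keep their defaults (60 and 0).
  val tuned: MongoPoolOptions = MongoPoolOptions(maxSize = 100, minSize = 5)
}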
|
sfxcode/simple-mongo
|
src/main/scala/com/sfxcode/nosql/mongo/database/MongoPoolOptions.scala
|
Scala
|
apache-2.0
| 530
|
package nodes.stats
import breeze.linalg.{*, DenseMatrix, DenseVector}
import breeze.stats.distributions._
import breeze.numerics.cos
import breeze.stats.distributions.Rand
import org.apache.spark.rdd.RDD
import utils.MatrixUtils
import org.apache.commons.math3.random.MersenneTwister
import workflow.Transformer
/**
 * Transformer that extracts random cosine features from a feature vector.
 *
 * @param numInputFeatures dimension of the input vectors
 * @param numOutputFeatures number of random cosine features to produce
 * @param seed seed used to deterministically generate the random projection
 * @param gamma scale factor applied to the random Gaussian matrix W
 *
 * The transformer maps a vector x to cos(x * transpose(W) + b), where W is a
 * (# output features) by (# input features) Gaussian matrix and b is a uniform offset
 * vector of dimension (# output features), both derived from the seed.
 * This random-features kernel trick lets a linear solver learn cosine interaction terms of the input.
 */
class SeededCosineRandomFeatures(numInputFeatures:Int, numOutputFeatures:Int, seed: Int, gamma: Double)
extends Transformer[DenseVector[Double], DenseVector[Double]] {
override def apply(in: RDD[DenseVector[Double]]): RDD[DenseVector[Double]] = {
in.mapPartitions { part =>
implicit val randBasis: RandBasis = new RandBasis(new ThreadLocalRandomGenerator(new MersenneTwister(seed)))
val gaussian = new Gaussian(0, 1)
val uniform = new Uniform(0, 1)
val W = DenseMatrix.rand(numOutputFeatures, numInputFeatures, gaussian) :* gamma
val b = DenseVector.rand(numOutputFeatures, uniform) :* (2*math.Pi)
val data = MatrixUtils.rowsToMatrix(part)
val features: DenseMatrix[Double] = data * W.t
features(*,::) :+= b
cos.inPlace(features)
MatrixUtils.matrixToRowArray(features).iterator
}
}
override def apply(in: DenseVector[Double]): DenseVector[Double] = {
implicit val randBasis: RandBasis = new RandBasis(new ThreadLocalRandomGenerator(new MersenneTwister(seed)))
val gaussian = new Gaussian(0, 1)
val uniform = new Uniform(0, 1)
val W = DenseMatrix.rand(numOutputFeatures, numInputFeatures, gaussian) :* gamma
val b = DenseVector.rand(numOutputFeatures, uniform) :* (2*math.Pi)
val features = (in.t * W.t).t
features :+= b
cos.inPlace(features)
features
}
}
/**
* Companion Object to generate random cosine features from various distributions
*/
object SeededCosineRandomFeatures {
/** Generate Random Cosine Features from the given distributions **/
def apply(
numInputFeatures: Int,
numOutputFeatures: Int,
gamma: Double,
seed: Int
) = {
new SeededCosineRandomFeatures(numInputFeatures, numOutputFeatures, seed, gamma)
}
}
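// Hedged usage sketch (dimensions and values invented for illustration): the same seed
// regenerates the same W and b on every partition and every call, so the features are
// reproducible for a fixed input.
object SeededCosineRandomFeaturesUsageSketch {
  val featurizer = SeededCosineRandomFeatures(
    numInputFeatures = 10, numOutputFeatures = 64, gamma = 0.5, seed = 42)
  // Featurize a single 10-dimensional vector; an RDD[DenseVector[Double]] works the same way.
  val features: DenseVector[Double] = featurizer(DenseVector.rand(10))
}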
|
Vaishaal/ckm
|
keystone_pipeline/src/main/scala/nodes/stats/SeededCosineRandomFeatures.scala
|
Scala
|
apache-2.0
| 2,515
|
/*-------------------------------------------------------------------------*\
** ScalaCheck **
** Copyright (c) 2007-2018 Rickard Nilsson. All rights reserved. **
** http://www.scalacheck.org **
** **
** This software is released under the terms of the Revised BSD License. **
** There is NO WARRANTY. See the file LICENSE for the full text. **
\*------------------------------------------------------------------------ */
package org.scalacheck.util
import scala.collection.{mutable, Map => _, _}
trait Buildable[T,C] extends Serializable {
def builder: mutable.Builder[T,C]
def fromIterable(it: Traversable[T]): C = {
val b = builder
b ++= it
b.result()
}
}
object Buildable extends BuildableVersionSpecific {
import java.util.ArrayList
implicit def buildableArrayList[T]: Buildable[T, ArrayList[T]] = new Buildable[T,ArrayList[T]] {
def builder = new ArrayListBuilder[T]
}
}
/*
object Buildable2 {
implicit def buildableMutableMap[T,U] = new Buildable2[T,U,mutable.Map] {
def builder = mutable.Map.newBuilder
}
implicit def buildableImmutableMap[T,U] = new Buildable2[T,U,immutable.Map] {
def builder = immutable.Map.newBuilder
}
implicit def buildableMap[T,U] = new Buildable2[T,U,Map] {
def builder = Map.newBuilder
}
implicit def buildableImmutableSortedMap[T: Ordering, U] = new Buildable2[T,U,immutable.SortedMap] {
def builder = immutable.SortedMap.newBuilder
}
implicit def buildableSortedMap[T: Ordering, U] = new Buildable2[T,U,SortedMap] {
def builder = SortedMap.newBuilder
}
}
*/
|
martijnhoekstra/scala
|
src/scalacheck/org/scalacheck/util/Buildable.scala
|
Scala
|
apache-2.0
| 1,775
|
package edu.cmu.dynet
class FastLstmBuilder private[dynet](private[dynet] val builder: internal.FastLSTMBuilder)
extends RnnBuilder(builder) {
def this() { this(new internal.FastLSTMBuilder()) }
def this(layers: Long, inputDim: Long, hiddenDim: Long, model: ParameterCollection) {
this(new internal.FastLSTMBuilder(layers, inputDim, hiddenDim, model.model))
}
}
|
xunzhang/dynet
|
contrib/swig/src/main/scala/edu/cmu/dynet/FastLstmBuilder.scala
|
Scala
|
apache-2.0
| 378
|
package net.gnmerritt.tetris.parser
import net.gnmerritt.tetris.engine.{Position, Piece}
import net.gnmerritt.tetris.player.GameState
/**
* Parses out round-by-round information
*/
object UpdateParser extends GameParser {
def update(curr: GameState, parts: Array[String]): GameState = {
parts(2) match {
case "round" =>
val round = parts(3).toInt
return curr.copy(round = curr.round.copy(roundNum = round))
case "this_piece_type" =>
Piece.bySymbol(parts(3)) match {
case Some(piece) => return curr.copy(round = curr.round.copy(thisPiece = piece))
case None => false
}
case "this_piece_position" =>
val pos = parts(3).split(",")
if (pos.length == 2) {
val newPos = new Position(pos(0).toInt, pos(1).toInt)
return curr.copy(round = curr.round.copy(thisPiecePosition = newPos))
}
case "next_piece_type" =>
Piece.bySymbol(parts(3)) match {
case Some(piece) => return curr.copy(round = curr.round.copy(nextPiece = piece))
case None => false
}
case "field" =>
return FieldParser.update(curr, parts)
case _ => false
}
curr
}
}
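// Hedged usage sketch (kept as a comment; the raw line is hypothetical but follows the
// "update game <setting> <value>" shape this parser expects, with parts(2) naming the
// setting and parts(3) carrying its value):
//
//   val next = UpdateParser.update(currentState, "update game round 12".split(" "))
//   // next.round.roundNum == 12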
|
gnmerritt/aig-tetris
|
src/main/scala/net/gnmerritt/tetris/parser/UpdateParser.scala
|
Scala
|
mit
| 1,216
|
package mesosphere.marathon
package core.deployment.impl
import mesosphere.UnitTest
import mesosphere.marathon.core.deployment.DeploymentPlan
import mesosphere.marathon.state.PathId._
import mesosphere.marathon.state._
import mesosphere.marathon.test.GroupCreation
class DeploymentPlanRevertTest extends UnitTest with GroupCreation {
case class Deployment(name: String, change: RootGroup => RootGroup)
/**
* An assert equals which provides better feedback about what's different for groups.
*/
private def assertEqualsExceptVersion(expectedOrig: RootGroup, actualOrig: RootGroup): Unit = {
val expected: RootGroup = expectedOrig.withNormalizedVersions
val actual: RootGroup = actualOrig.withNormalizedVersions
if (expected != actual) {
val actualGroupIds = actual.transitiveGroupsById.keySet
val expectedGroupIds = expected.transitiveGroupsById.keySet
val unexpectedGroupIds = actualGroupIds -- expectedGroupIds
val missingGroupIds = expectedGroupIds -- actualGroupIds
withClue(s"unexpected groups $unexpectedGroupIds, missing groups $missingGroupIds: ") {
actualGroupIds should equal(expectedGroupIds)
}
for (groupId <- expectedGroupIds) {
withClue(s"for group id $groupId") {
actual.group(groupId) should equal(expected.group(groupId))
}
}
val actualAppIds = actual.transitiveAppIds
val expectedAppIds = expected.transitiveAppIds
val unexpectedAppIds = actualAppIds.filter(appId => expected.app(appId).isEmpty)
val missingAppIds = expectedAppIds.filter(appId => actual.app(appId).isEmpty)
withClue(s"unexpected apps $unexpectedAppIds, missing apps $missingAppIds: ") {
actualAppIds should equal(expectedAppIds)
}
for (appId <- expectedAppIds) {
withClue(s"for app id $appId") {
actual.app(appId) should equal(expected.app(appId))
}
}
// just in case we missed differences
actual should equal(expected)
}
}
private[this] def removeApp(appId: String) = Deployment(s"remove app '$appId'", _.removeApp(appId.toRootPath))
private[this] def addApp(appId: String) = Deployment(s"add app '$appId'", _.updateApp(appId.toRootPath, _ => AppDefinition(appId.toRootPath, cmd = Some("sleep")), Timestamp.now()))
private[this] def addGroup(groupId: String) = Deployment(s"add group '$groupId'", _.makeGroup(groupId.toRootPath))
private[this] def removeGroup(groupId: String) = Deployment(s"remove group '$groupId'", _.removeGroup(groupId.toRootPath))
private[this] def testWithConcurrentChange(originalBeforeChanges: RootGroup, changesBeforeTest: Deployment*)(deployments: Deployment*)(expectedReverted: RootGroup): Unit = {
val firstDeployment = deployments.head
def performDeployments(orig: RootGroup, deployments: Seq[Deployment]): RootGroup = {
deployments.foldLeft(orig) {
case (last: RootGroup, deployment: Deployment) =>
deployment.change(last)
}
}
s"Reverting ${firstDeployment.name} after deploying ${deployments.tail.map(_.name).mkString(", ")}" in {
Given("an existing group with apps")
val original = performDeployments(originalBeforeChanges, changesBeforeTest.to[Seq])
When(s"performing a series of deployments (${deployments.map(_.name).mkString(", ")})")
val targetWithAllDeployments = performDeployments(original, deployments.to[Seq])
When("reverting the first one while we reset the versions before that")
val newVersion = Timestamp(1)
val deploymentReverterForFirst = DeploymentPlanReverter.revert(
original.withNormalizedVersions,
firstDeployment.change(original).withNormalizedVersions,
newVersion)
val reverted = deploymentReverterForFirst(targetWithAllDeployments.withNormalizedVersions)
Then("The result should only contain items with the prior or the new version")
for (app <- reverted.transitiveApps) {
withClue(s"version for app ${app.id} ") {
app.version.millis should be <= 1L
}
}
for (group <- reverted.transitiveGroupsById.values) {
withClue(s"version for group ${group.id} ") {
group.version.millis should be <= 1L
}
}
Then("the result should be the same as if we had only applied all the other deployments")
val targetWithoutFirstDeployment = performDeployments(original, deployments.tail.to[Seq])
withClue("while comparing reverted with targetWithoutFirstDeployment: ") {
assertEqualsExceptVersion(targetWithoutFirstDeployment, reverted)
}
Then("we should have the expected groups and apps")
withClue("while comparing reverted with expected: ") {
assertEqualsExceptVersion(expectedReverted, reverted)
}
}
}
private[this] def changeGroupDependencies(groupId: String, add: Seq[String] = Seq.empty, remove: Seq[String] = Seq.empty) = {
val addedIds = add.map(_.toRootPath)
val removedIds = remove.map(_.toRootPath)
val name = if (removedIds.isEmpty)
s"group '$groupId' add deps {${addedIds.mkString(", ")}}"
else if (addedIds.isEmpty)
s"group '$groupId' remove deps {${removedIds.mkString(", ")}}"
else
s"group '$groupId' change deps -{${removedIds.mkString(", ")}} +{${addedIds.mkString(", ")}}"
Deployment(name, _.updateDependencies(groupId.toRootPath, _ ++ addedIds -- removedIds, Timestamp.now()))
}
"RevertDeploymentPlan" should {
"Revert app addition" in {
Given("an unrelated group")
val unrelatedGroup = {
val id = "unrelated".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
val original = createRootGroup(groups = Set(unrelatedGroup))
When("we add an unrelated app and try to revert that without concurrent changes")
val target = RootGroup.fromGroup(original.updateApp("test".toRootPath, _ => AppDefinition("test".toRootPath), Timestamp.now()))
val plan = DeploymentPlan(original, target)
val revertToOriginal = plan.revert(target)
Then("we get back the original definitions")
assertEqualsExceptVersion(original, actualOrig = revertToOriginal)
}
"Revert app removal" in {
Given("an existing group with apps")
val changeme = {
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
val original = createRootGroup(groups = Set(changeme))
When("we remove an app and try to revert that without concurrent changes")
val appId = "/changeme/app1".toRootPath
val target = original.removeApp(appId)
target.app(appId) should be('empty)
val plan = DeploymentPlan(original, target)
val revertToOriginal = plan.revert(target)
Then("we get back the original definitions")
assertEqualsExceptVersion(original, actualOrig = revertToOriginal)
}
"Revert removing a group without apps" in {
Given("a group")
val original = createRootGroup(groups = Set(createGroup("changeme".toRootPath)))
When("we remove the group and try to revert that without concurrent changes")
val target = original.removeGroup("changeme".toRootPath)
val plan = DeploymentPlan(original, target)
val revertToOriginal = plan.revert(target)
Then("we get back the original definitions")
assertEqualsExceptVersion(original, actualOrig = revertToOriginal)
}
"Revert removing a group with apps" in {
Given("a group")
val changeme = {
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
val original = createRootGroup(groups = Set(changeme))
When("we remove the group and try to revert that without concurrent changes")
val target = original.removeGroup("changeme".toRootPath)
val plan = DeploymentPlan(original, target)
val revertToOriginal = plan.revert(target)
Then("we get back the original definitions")
assertEqualsExceptVersion(original, actualOrig = revertToOriginal)
}
"Revert group dependency changes" in {
Given("an existing group with apps")
val existingGroup = {
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
"othergroup1".toRootPath,
"othergroup2".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
val original = createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
existingGroup
)
)
When("we change the dependencies to the existing group")
val target = original.updateDependencies(
existingGroup.id,
_ => Set("othergroup2".toRootPath, "othergroup3".toRootPath),
original.version)
val plan = DeploymentPlan(original, target)
val revertToOriginal = plan.revert(target)
Then("we get back the original definitions")
assertEqualsExceptVersion(original, actualOrig = revertToOriginal)
}
val existingGroup = {
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
"othergroup1".toRootPath,
"othergroup2".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
val original = createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
existingGroup
)
)
testWithConcurrentChange(original)(
removeApp("/changeme/app1"),
// unrelated app changes
addApp("/changeme/app3"),
addApp("/other/app4"),
removeApp("/changeme/app2")
) {
createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
{
val id = "other".toRootPath
val app4 = AppDefinition(id / "app4", cmd = Some("sleep"))
createGroup(
id,
apps = Map(app4.id -> app4) // app4 was added
)
},
{
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app3 = AppDefinition(id / "app3", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
"othergroup1".toRootPath,
"othergroup2".toRootPath
),
apps = Map(
app1.id -> app1, // app1 was kept
// app2 was removed
app3.id -> app3 // app3 was added
)
)
}
)
)
}
testWithConcurrentChange(original)(
changeGroupDependencies("/withdeps", add = Seq("/a", "/b", "/c")),
// cannot delete /withdeps in revert
addGroup("/withdeps/some")
) {
// expected outcome after revert of first deployment
createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
{
val id = "withdeps".toRootPath // withdeps still exists because of the subgroup
createGroup(
id,
apps = Group.defaultApps,
groups = Set(createGroup(id / "some")),
dependencies = Set() // dependencies were introduced with the first deployment, so they should be gone now
)
},
{
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
"othergroup1".toRootPath,
"othergroup2".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
)
)
}
testWithConcurrentChange(original)(
changeGroupDependencies("/changeme", remove = Seq("/othergroup1"), add = Seq("/othergroup3")),
// "conflicting" dependency changes
changeGroupDependencies("/changeme", remove = Seq("/othergroup2"), add = Seq("/othergroup4"))
) {
// expected outcome after revert of first deployment
createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
{
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
// othergroup2 was removed and othergroup4 added
"othergroup1".toRootPath,
"othergroup4".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
)
)
}
testWithConcurrentChange(original)(
removeGroup("/othergroup3"),
// unrelated dependency changes
changeGroupDependencies("/changeme", remove = Seq("/othergroup2"), add = Seq("/othergroup4"))
) {
// expected outcome after revert of first deployment
createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
{
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
// othergroup2 was removed and othergroup4 added
"othergroup1".toRootPath,
"othergroup4".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
)
)
}
)
)
}
testWithConcurrentChange(
original,
addGroup("/changeme/some")
)(
// revert first
addGroup("/changeme/some/a"),
// concurrent deployments
addGroup("/changeme/some/b")
) {
// expected outcome after revert
createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
{
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
"othergroup1".toRootPath,
"othergroup2".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
),
groups = Set(
createGroup(
id / "some",
groups = Set(
createGroup(id / "some" / "b")
)
)
)
)
}
)
)
}
testWithConcurrentChange(
original
)(
// revert first
addApp("/changeme/some/a"),
// concurrent deployments
addApp("/changeme/some/b/a"),
addApp("/changeme/some/b/b"),
addApp("/changeme/some/b/c")
) {
// expected outcome after revert
createRootGroup(
groups = Set(
createGroup("othergroup1".toRootPath),
createGroup("othergroup2".toRootPath),
createGroup("othergroup3".toRootPath),
{
val id = "changeme".toRootPath
val app1 = AppDefinition(id / "app1", cmd = Some("sleep"))
val app2 = AppDefinition(id / "app2", cmd = Some("sleep"))
val appBA = AppDefinition(id / "some" / "b" / "a", cmd = Some("sleep"))
val appBB = AppDefinition(id / "some" / "b" / "b", cmd = Some("sleep"))
val appBC = AppDefinition(id / "some" / "b" / "c", cmd = Some("sleep"))
createGroup(
id,
dependencies = Set(
"othergroup1".toRootPath,
"othergroup2".toRootPath
),
apps = Map(
app1.id -> app1,
app2.id -> app2
),
groups = Set(
createGroup(
id / "some",
groups = Set(
createGroup(
id / "some" / "b",
apps = Map(
appBA.id -> appBA,
appBB.id -> appBB,
appBC.id -> appBC
)
)
)
)
)
)
}
)
)
}
}
}
|
janisz/marathon
|
src/test/scala/mesosphere/marathon/core/deployment/impl/DeploymentPlanRevertTest.scala
|
Scala
|
apache-2.0
| 18,908
|
package backend
object Currencies {
val all: Array[String] = Array(
"AUD/CAD", "AUD/CHF", "AUD/NZD", "AUD/USD", "CAD/CHF",
"EUR/GBP", "EUR/CHF", "EUR/USD", "GBP/AUD", "GBP/CAD",
"GBP/CHF", "GBP/USD", "USD/CAD", "USD/CHF", "NZD/USD")
}
|
intelix/reactiveservices-examples
|
reactivefx/legacyservice-api/src/main/scala/backend/Currencies.scala
|
Scala
|
apache-2.0
| 250
|
package scdbpf
import passera.unsigned._
import DbpfUtil.toHex
import Tgi._
/** Represents Type, Group, Instance identifiers of `DbpfEntries`.
* `Tgi` objects are immutable.
*
* Instances of this class may be obtained via the companion object's `apply`
* method, for example:
*
* {{{
* val tgi = Tgi(0, 0, 0x12345678)
* }}}
*
* Alternatively, the `copy` methods can be used to create modified copies.
*
* {{{
* tgi.copy(iid = 0x87654321)
* tgi.copy(Tgi.Sc4Path)
* }}}
*
* The [[matches]] method is used to test whether a `Tgi` matches another `Tgi`
* object or `TgiMask`.
*
* @define SELF `Tgi`
*/
sealed trait Tgi extends LabeledTgi {
type IdType = Int
type SelfType = Tgi
/** Creates a new `Tgi` from this object with the non-`None` parameters of
* `mask` replaced. For example, `copy(Tgi.Sc4Path)` would replace the `tid`.
*/
def copy(mask: TgiMask): Tgi = {
Tgi(mask.tid.getOrElse(this.tid),
mask.gid.getOrElse(this.gid),
mask.iid.getOrElse(this.iid))
}
def copy(tid: Int = tid, gid: Int = gid, iid: Int = iid): Tgi = Tgi(tid, gid, iid)
/** Tests if all the IDs match the non-masked IDs of `tgi`.
*/
def matches(tgi: TgiLike): Boolean = tgi match {
case tgi: Tgi => this.ids == tgi.ids
case mask: TgiMask => this.ids zip mask.ids forall {
case (a, bOpt) => bOpt forall (_ == a)
}
}
def label: String = {
val labeledTgiOpt = Tgi.LabeledTgis.values.find(this.matches(_))
labeledTgiOpt.get.label
}
override def toString: String = {
"T:" + toHex(tid) + ", G:" + toHex(gid) + ", I:" + toHex(iid)
}
}
/** Provides various masks that `Tgi`s can be matched against.
*/
object Tgi {
/** @define SELF `TgiLike` */
sealed trait TgiLike {
type IdType
type SelfType
val tid, gid, iid: IdType
/** Creates a new $SELF from this object with the specified parameters
* replaced.
*/
def copy(tid: IdType = tid, gid: IdType = gid, iid: IdType = iid): SelfType
private[scdbpf] def ids = Iterable(tid, gid, iid)
final override def equals(obj: Any): Boolean = obj match {
case that: TgiLike => this.ids == that.ids
case _ => false
}
final override def hashCode(): Int = {
val p = 4229
var result = 1
result = p * result + tid.hashCode
result = p * result + iid.hashCode
result = p * result + gid.hashCode
result
}
}
sealed trait LabeledTgi extends TgiLike {
/** a descriptive label specifying the general type like `Exemplar`, `S3D`
* or `Unknown`.
*/
def label: String
}
private class TgiImpl(val tid: Int, val gid: Int, val iid: Int) extends Tgi
def apply(tid: Int, gid: Int, iid: Int): Tgi = new TgiImpl(tid, gid, iid)
private[scdbpf] object LabeledTgis extends Enumeration {
import scala.language.implicitConversions
implicit def value2LabeledTgi(v: Value): LabeledTgi = v.asInstanceOf[LabeledTgi]
private[Tgi] class TgiValImpl(val tid: Int, val gid: Int, val iid: Int, override val label: String)
extends Val with Tgi
private[Tgi] class TgiMaskValImpl(val tid: Option[Int], val gid: Option[Int], val iid: Option[Int], val label: String)
extends Val with TgiMask with LabeledTgi
}
import LabeledTgis.{ TgiValImpl, TgiMaskValImpl }
private def TgiVal(tid: Int, gid: Int, iid: Int, label: String): Tgi =
new TgiValImpl(tid, gid, iid, label)
private def MaskVal(tid: Option[Int], gid: Option[Int], iid: Option[Int], label: String): TgiMask with LabeledTgi =
new TgiMaskValImpl(tid, gid, iid, label)
private def MaskVal(tid: Int, gid: Option[Int], iid: Option[Int], label: String): TgiMask with LabeledTgi =
new TgiMaskValImpl(Some(tid), gid, iid, label)
private def MaskVal(tid: Int, gid: Int, iid: Option[Int], label: String): TgiMask with LabeledTgi =
new TgiMaskValImpl(Some(tid), Some(gid), iid, label)
private implicit val uintOnIntOrdering = UIntOrdering.on[Int](UInt(_))
private val tupOrd = Ordering[(Int, Int, Int)]
/** the default implicit `Tgi` ordering that sorts by IID, TID, GID */
implicit val itgOrdering: Ordering[Tgi] = tupOrd.on(x => (x.iid, x.tid, x.gid))
/** a `Tgi` ordering that sorts by IID, GID, TID */
val igtOrdering: Ordering[Tgi] = tupOrd.on(x => (x.iid, x.gid, x.tid))
/** a `Tgi` ordering that sorts by TID, IID, GID */
val tigOrdering: Ordering[Tgi] = tupOrd.on(x => (x.tid, x.iid, x.gid))
val Blank = TgiVal (0, 0, 0, "-")
val Directory = TgiVal (0xe86b1eef, 0xe86b1eef, 0x286b1f03, "Directory")
val Ld = MaskVal(0x6be74c60, 0x6be74c60, None, "LD");
val S3dMaxis = MaskVal(0x5ad0e817, 0xbadb57f1, None, "S3D (Maxis)");
val S3d = MaskVal(0x5ad0e817, None, None, "S3D");
val Cohort = MaskVal(0x05342861, None, None, "Cohort")
val ExemplarRoad = MaskVal(0x6534284a, 0x2821ed93, None, "Exemplar (Road)");
val ExemplarStreet = MaskVal(0x6534284a, 0xa92a02ea, None, "Exemplar (Street)");
val ExemplarOnewayroad = MaskVal(0x6534284a, 0xcbe084cb, None, "Exemplar (One-Way Road)");
val ExemplarAvenue = MaskVal(0x6534284a, 0xcb730fac, None, "Exemplar (Avenue)");
val ExemplarHighway = MaskVal(0x6534284a, 0xa8434037, None, "Exemplar (Highway)");
val ExemplarGroundhighway = MaskVal(0x6534284a, 0xebe084d1, None, "Exemplar (Ground Highway)");
val ExemplarDirtroad = MaskVal(0x6534284a, 0x6be08658, None, "Exemplar (Dirtroad)");
val ExemplarRail = MaskVal(0x6534284a, 0xe8347989, None, "Exemplar (Rail)");
val ExemplarLightrail = MaskVal(0x6534284a, 0x2b79dffb, None, "Exemplar (Lightrail)");
val ExemplarMonorail = MaskVal(0x6534284a, 0xebe084c2, None, "Exemplar (Monorail)");
val ExemplarPowerpole = MaskVal(0x6534284a, 0x088e1962, None, "Exemplar (Power Pole)");
val ExemplarT21 = MaskVal(0x6534284a, 0x89ac5643, None, "Exemplar (T21)");
val Exemplar = MaskVal(0x6534284a, None, None, "Exemplar")
val FshMisc = MaskVal(0x7ab50e44, 0x1abe787d, None, "FSH (Misc)");
val FshBaseOverlay = MaskVal(0x7ab50e44, 0x0986135e, None, "FSH (Base/Overlay Texture)");
val FshShadow = MaskVal(0x7ab50e44, 0x2BC2759a, None, "FSH (Shadow Mask)");
val FshAnimProps = MaskVal(0x7ab50e44, 0x2a2458f9, None, "FSH (Animation Sprites (Props))");
val FshAnimNonprops = MaskVal(0x7ab50e44, 0x49a593e7, None, "FSH (Animation Sprites (Non Props))");
val FshTerrainFoundation = MaskVal(0x7ab50e44, 0x891b0e1a, None, "FSH (Terrain/Foundation)");
val FshUi = MaskVal(0x7ab50e44, 0x46a006b0, None, "FSH (UI Image)");
val Fsh = MaskVal(0x7ab50e44, None, None, "FSH")
val Sc4Path2d = MaskVal(0x296678f7, 0x69668828, None, "SC4Path (2D)");
val Sc4Path3d = MaskVal(0x296678f7, 0xa966883f, None, "SC4Path (3D)");
val Sc4Path = MaskVal(0x296678f7, None, None, "SC4Path");
val PngIcon = MaskVal(0x856ddbac, 0x6a386d26, None, "PNG (Icon)");
val Png = MaskVal(0x856ddbac, None, None, "PNG");
val Lua = MaskVal(0xca63e2a3, 0x4a5e8ef6, None, "Lua");
val LuaGen = MaskVal(0xca63e2a3, 0x4a5e8f3f, None, "Lua (Generators)");
val Wav = MaskVal(0x2026960b, 0xaa4d1933, None, "WAV");
val LText = MaskVal(0x2026960b, None, None, "LText");
val IniFont = TgiVal (0, 0x4a87bfe8, 0x2a87bffc, "INI (Font Table)");
val IniNetwork = TgiVal (0, 0x8a5971c5, 0x8a5993b9, "INI (Networks)");
val Ini = MaskVal(0, 0x8a5971c5, None, "INI");
val Rul = MaskVal(0x0a5bcf4b, 0xaa5bcf57, None, "RUL");
val EffDir = MaskVal(0xea5118b0, None, None, "EffDir");
val Null = MaskVal(None, None, None, "Unknown")
}
/** Represents masks of TGIs that are used for the `matches` method of [[Tgi]].
*
* Instances of this class may be obtained via the companion object's `apply`
* methods.
*
* @define SELF `TgiMask`
*/
sealed trait TgiMask extends TgiLike {
type IdType = Option[Int]
type SelfType = TgiMask
def copy(tid: Option[Int] = tid, gid: Option[Int] = gid, iid: Option[Int] = iid): TgiMask = TgiMask(tid, gid, iid)
/** Creates a `Tgi` from this mask. If one of its IDs is `None`, a
* `NoSuchElementException` is thrown.
*/
def toTgi: Tgi = Tgi(tid.get, gid.get, iid.get)
override def toString: String = {
val s = "__________"
"T:" + tid.map(toHex(_)).getOrElse(s) +
", G:" + gid.map(toHex(_)).getOrElse(s) +
", I:" + iid.map(toHex(_)).getOrElse(s)
}
}
/** Provides factory methods for creating `TgiMask`s.
*/
object TgiMask {
private class TgiMaskImpl(val tid: Option[Int], val gid: Option[Int], val iid: Option[Int]) extends TgiMask
def apply(tid: Int, gid: Int, iid: Int ): TgiMask = TgiMask(Some(tid), Some(gid), Some(iid))
def apply(tid: Int, gid: Int, iid: Option[Int]): TgiMask = TgiMask(Some(tid), Some(gid), iid)
def apply(tid: Int, gid: Option[Int], iid: Option[Int]): TgiMask = TgiMask(Some(tid), gid, iid)
def apply(tid: Option[Int], gid: Option[Int], iid: Option[Int]): TgiMask = new TgiMaskImpl(tid, gid, iid)
}
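// Hedged usage sketch (IDs invented for illustration): a mask leaves some IDs open, so a
// concrete Tgi matches it whenever every ID the mask does define agrees.
object TgiMaskUsageSketch {
  val roadExemplar: Tgi = Tgi(0x6534284a, 0x2821ed93, 0x12345678)
  val matchesExemplar: Boolean = roadExemplar.matches(Tgi.Exemplar)   // true: tid agrees, gid/iid are open
  val matchesDirectory: Boolean = roadExemplar.matches(Tgi.Directory) // false: the IDs differ
}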
|
memo33/scdbpf
|
src/main/scala/scdbpf/tgi.scala
|
Scala
|
mit
| 9,724
|
package fr.inria.spirals.actress.runtime
import org.scalatest.BeforeAndAfterAll
import org.scalatest.Matchers
import org.scalatest.WordSpecLike
import actress.sys.OSInfo
import actress.sys.OSInfoBinding
import akka.actor.ActorSystem
import akka.actor.actorRef2Scala
import akka.testkit.ImplicitSender
import akka.testkit.TestKit
import fr.inria.spirals.actress.runtime.protocol.Capabilities
import fr.inria.spirals.actress.runtime.protocol.GetCapabilities
import akka.actor.ActorRef
import fr.inria.spirals.actress.runtime.protocol.GetAttribute
import akka.testkit.TestActorRef
import fr.inria.spirals.actress.runtime.protocol.GetAttributes
import fr.inria.spirals.actress.runtime.protocol.Attributes
class ActressServerSpec(_system: ActorSystem) extends TestKit(_system) with ImplicitSender with WordSpecLike with Matchers with BeforeAndAfterAll {
def this() = this(ActorSystem("ActressServerSpec"))
override def afterAll {
TestKit.shutdownActorSystem(system)
}
"ServiceLocator" when {
"no services have been registered" should {
"report no services" in {
val server = new ActressServer
server.serviceLocator ! GetCapabilities()
expectMsg(Capabilities(Seq()))
}
}
"a service is registered" should {
"report its endpoint" in {
val bf = { _: String ⇒ new OSInfoBinding }
val server = new ActressServer
server.registerModel[OSInfo]("os", bf)
server.serviceLocator ! GetCapabilities()
val msg = receiveOne(remaining)
println(msg)
// expectMsg(Capabilities(_))
}
}
}
"NodeActor" should {
"get attributes" in {
val bf = { _: String ⇒ new OSInfoBinding }
val na = TestActorRef(new ModelActor(bf))
na ! GetAttributes
val r = expectMsgType[Attributes]
r.attributes should contain only("name")
}
"get an attribute value" in {
val bf = { _: String ⇒ new OSInfoBinding }
val server = new ActressServer
server.registerModel[OSInfo]("os", bf)
server.serviceLocator ! GetCapabilities()
receiveOne(remaining) match {
case Capabilities(Seq(("os", ref: ActorRef))) =>
ref ! GetAttribute("", "name")
println(receiveOne(remaining))
}
}
}
}
|
fikovnik/actress-mrt
|
src/test/scala/fr/inria/spirals/actress/runtime/ActressServerSpec.scala
|
Scala
|
apache-2.0
| 2,383
|
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package ly.stealth.mesos.kafka
import joptsimple.{BuiltinHelpFormatter, OptionException, OptionSet, OptionParser}
import java.net.{HttpURLConnection, URLEncoder, URL}
import scala.io.Source
import java.io._
import java.util
import scala.collection.JavaConversions._
import java.util.{Properties, Collections}
import ly.stealth.mesos.kafka.Util.{BindAddress, Str, Period}
object Cli {
var api: String = null
var out: PrintStream = System.out
var err: PrintStream = System.err
def main(args: Array[String]): Unit = {
try { exec(args) }
catch { case e: Error =>
err.println("Error: " + e.getMessage)
System.exit(1)
}
}
def exec(_args: Array[String]): Unit = {
var args = _args
if (args.length == 0) {
handleHelp(); out.println()
throw new Error("command required")
}
val command = args(0)
args = args.slice(1, args.length)
if (command == "scheduler" && !noScheduler) { handleScheduler(args); return }
if (command == "help") { handleHelp(if (args.length > 0) args(0) else null); return }
args = handleGenericOptions(args)
if (command == "status") { handleStatus(); return }
// rest of the commands require <argument>
if (args.length < 1) {
handleHelp(command); out.println()
throw new Error("argument required")
}
val arg = args(0)
args = args.slice(1, args.length)
command match {
case "add" | "update" => handleAddUpdateBroker(arg, args, command == "add")
case "remove" => handleRemoveBroker(arg)
case "start" | "stop" => handleStartStopBroker(arg, args, command == "start")
case "rebalance" => handleRebalance(arg, args)
case _ => throw new Error("unsupported command " + command)
}
}
private def handleHelp(command: String = null): Unit = {
command match {
case null =>
out.println("Usage: <command>\\n")
printCommands()
case "help" =>
out.println("Print general or command-specific help\\nUsage: help {command}")
case "scheduler" =>
if (noScheduler) throw new Error(s"unsupported command $command")
handleScheduler(null, help = true)
case "status" =>
handleStatus(help = true)
case "add" | "update" =>
handleAddUpdateBroker(null, null, command == "add", help = true)
case "remove" =>
handleRemoveBroker(null, help = true)
case "start" | "stop" =>
handleStartStopBroker(null, null, command == "start", help = true)
case "rebalance" =>
handleRebalance(null, null, help = true)
case _ =>
throw new Error(s"unsupported command $command")
}
}
private def handleScheduler(args: Array[String], help: Boolean = false): Unit = {
val parser = newParser()
parser.accepts("debug", "Debug mode. Default - " + Config.debug)
.withRequiredArg().ofType(classOf[java.lang.Boolean])
parser.accepts("storage",
"""Storage for cluster state. Examples:
| - file:kafka-mesos.json
| - zk:/kafka-mesos
|Default - """.stripMargin + Config.storage)
.withRequiredArg().ofType(classOf[String])
parser.accepts("master",
"""Master connection settings. Examples:
| - master:5050
| - master:5050,master2:5050
| - zk://master:2181/mesos
| - zk://username:password@master:2181
| - zk://master:2181,master2:2181/mesos""".stripMargin)
.withRequiredArg().ofType(classOf[String])
parser.accepts("user", "Mesos user to run tasks. Default - none")
.withRequiredArg().ofType(classOf[String])
parser.accepts("principal", "Principal (username) used to register framework. Default - none")
.withRequiredArg().ofType(classOf[String])
parser.accepts("secret", "Secret (password) used to register framework. Default - none")
.withRequiredArg().ofType(classOf[String])
parser.accepts("framework-name", "Framework name. Default - " + Config.frameworkName)
.withRequiredArg().ofType(classOf[String])
parser.accepts("framework-role", "Framework role. Default - " + Config.frameworkRole)
.withRequiredArg().ofType(classOf[String])
parser.accepts("framework-timeout", "Framework timeout (30s, 1m, 1h). Default - " + Config.frameworkTimeout)
.withRequiredArg().ofType(classOf[String])
parser.accepts("api", "Api url. Example: http://master:7000")
.withRequiredArg().ofType(classOf[String])
parser.accepts("bind-address", "Scheduler bind address (master, 0.0.0.0, 192.168.50.*, if:eth1). Default - all")
.withRequiredArg().ofType(classOf[String])
parser.accepts("zk",
"""Kafka zookeeper.connect. Examples:
| - master:2181
| - master:2181,master2:2181""".stripMargin)
.withRequiredArg().ofType(classOf[String])
parser.accepts("jre", "JRE zip-file (jre-7-openjdk.zip). Default - none.")
.withRequiredArg().ofType(classOf[String])
parser.accepts("log", "Log file to use. Default - stdout.")
.withRequiredArg().ofType(classOf[String])
val configArg = parser.nonOptions()
if (help) {
out.println("Start scheduler \\nUsage: scheduler [options] [config.properties]\\n")
parser.printHelpOn(out)
return
}
var options: OptionSet = null
try { options = parser.parse(args: _*) }
catch {
case e: OptionException =>
parser.printHelpOn(out)
out.println()
throw new Error(e.getMessage)
}
var configFile = if (options.valueOf(configArg) != null) new File(options.valueOf(configArg)) else null
if (configFile != null && !configFile.exists()) throw new Error(s"config-file $configFile not found")
if (configFile == null && Config.DEFAULT_FILE.exists()) configFile = Config.DEFAULT_FILE
if (configFile != null) {
out.println("Loading config defaults from " + configFile)
Config.load(configFile)
}
val debug = options.valueOf("debug").asInstanceOf[java.lang.Boolean]
if (debug != null) Config.debug = debug
val storage = options.valueOf("storage").asInstanceOf[String]
if (storage != null) Config.storage = storage
val provideOption = "Provide either cli option or config default value"
val master = options.valueOf("master").asInstanceOf[String]
if (master != null) Config.master = master
else if (Config.master == null) throw new Error(s"Undefined master. $provideOption")
val user = options.valueOf("user").asInstanceOf[String]
if (user != null) Config.user = user
val principal = options.valueOf("principal").asInstanceOf[String]
if (principal != null) Config.principal = principal
val secret = options.valueOf("secret").asInstanceOf[String]
if (secret != null) Config.secret = secret
val frameworkName = options.valueOf("framework-name").asInstanceOf[String]
if (frameworkName != null) Config.frameworkName = frameworkName
val frameworkRole = options.valueOf("framework-role").asInstanceOf[String]
if (frameworkRole != null) Config.frameworkRole = frameworkRole
val frameworkTimeout = options.valueOf("framework-timeout").asInstanceOf[String]
if (frameworkTimeout != null)
try { Config.frameworkTimeout = new Period(frameworkTimeout) }
catch { case e: IllegalArgumentException => throw new Error("Invalid framework-timeout") }
val api = options.valueOf("api").asInstanceOf[String]
if (api != null) Config.api = api
else if (Config.api == null) throw new Error(s"Undefined api. $provideOption")
val bindAddress = options.valueOf("bind-address").asInstanceOf[String]
if (bindAddress != null)
try { Config.bindAddress = new BindAddress(bindAddress) }
catch { case e: IllegalArgumentException => throw new Error("Invalid bind-address") }
val zk = options.valueOf("zk").asInstanceOf[String]
if (zk != null) Config.zk = zk
else if (Config.zk == null) throw new Error(s"Undefined zk. $provideOption")
val jre = options.valueOf("jre").asInstanceOf[String]
if (jre != null) Config.jre = new File(jre)
    if (Config.jre != null && !Config.jre.exists()) throw new Error("JRE file doesn't exist")
val log = options.valueOf("log").asInstanceOf[String]
if (log != null) Config.log = new File(log)
if (Config.log != null) out.println(s"Logging to ${Config.log}")
Scheduler.start()
}
private def handleStatus(help: Boolean = false): Unit = {
if (help) {
out.println("Print cluster status\\nUsage: status [options]\\n")
handleGenericOptions(null, help = true)
return
}
var json: Map[String, Object] = null
try { json = sendRequest("/brokers/status", Collections.emptyMap()) }
catch { case e: IOException => throw new Error("" + e) }
val cluster: Cluster = new Cluster()
cluster.fromJson(json)
printLine("Cluster status received\\n")
printLine("cluster:")
printCluster(cluster)
}
private def handleAddUpdateBroker(id: String, args: Array[String], add: Boolean, help: Boolean = false): Unit = {
val parser = newParser()
parser.accepts("cpus", "cpu amount (0.5, 1, 2)").withRequiredArg().ofType(classOf[java.lang.Double])
parser.accepts("mem", "mem amount in Mb").withRequiredArg().ofType(classOf[java.lang.Long])
parser.accepts("heap", "heap amount in Mb").withRequiredArg().ofType(classOf[java.lang.Long])
parser.accepts("port", "port or range (31092, 31090..31100). Default - auto").withRequiredArg().ofType(classOf[java.lang.String])
parser.accepts("bind-address", "broker bind address (broker0, 192.168.50.*, if:eth1). Default - auto").withRequiredArg().ofType(classOf[java.lang.String])
parser.accepts("stickiness-period", "stickiness period to preserve same node for broker (5m, 10m, 1h)").withRequiredArg().ofType(classOf[String])
parser.accepts("options", "options or file. Examples:\\n log.dirs=/tmp/kafka/$id,num.io.threads=16\\n file:server.properties").withRequiredArg()
parser.accepts("log4j-options", "log4j options or file. Examples:\\n log4j.logger.kafka=DEBUG\\\\, kafkaAppender\\n file:log4j.properties").withRequiredArg()
parser.accepts("jvm-options", "jvm options string (-Xms128m -XX:PermSize=48m)").withRequiredArg()
parser.accepts("constraints", "constraints (hostname=like:master,rack=like:1.*). See below.").withRequiredArg()
parser.accepts("failover-delay", "failover delay (10s, 5m, 3h)").withRequiredArg().ofType(classOf[String])
parser.accepts("failover-max-delay", "max failover delay. See failoverDelay.").withRequiredArg().ofType(classOf[String])
parser.accepts("failover-max-tries", "max failover tries. Default - none").withRequiredArg().ofType(classOf[String])
if (help) {
val command = if (add) "add" else "update"
out.println(s"${command.capitalize} brokers\\nUsage: $command <id-expr> [options]\\n")
parser.printHelpOn(out)
out.println()
handleGenericOptions(null, help = true)
out.println()
printIdExprExamples()
out.println()
printConstraintExamples()
if (!add) out.println("\\nNote: use \\"\\" arg to unset an option")
return
}
var options: OptionSet = null
try { options = parser.parse(args: _*) }
catch {
case e: OptionException =>
parser.printHelpOn(out)
out.println()
throw new Error(e.getMessage)
}
val cpus = options.valueOf("cpus").asInstanceOf[java.lang.Double]
val mem = options.valueOf("mem").asInstanceOf[java.lang.Long]
val heap = options.valueOf("heap").asInstanceOf[java.lang.Long]
val port = options.valueOf("port").asInstanceOf[String]
val bindAddress = options.valueOf("bind-address").asInstanceOf[String]
val stickinessPeriod = options.valueOf("stickiness-period").asInstanceOf[String]
val constraints = options.valueOf("constraints").asInstanceOf[String]
val options_ = options.valueOf("options").asInstanceOf[String]
val log4jOptions = options.valueOf("log4j-options").asInstanceOf[String]
val jvmOptions = options.valueOf("jvm-options").asInstanceOf[String]
val failoverDelay = options.valueOf("failover-delay").asInstanceOf[String]
val failoverMaxDelay = options.valueOf("failover-max-delay").asInstanceOf[String]
val failoverMaxTries = options.valueOf("failover-max-tries").asInstanceOf[String]
val params = new util.LinkedHashMap[String, String]
params.put("id", id)
if (cpus != null) params.put("cpus", "" + cpus)
if (mem != null) params.put("mem", "" + mem)
if (heap != null) params.put("heap", "" + heap)
if (port != null) params.put("port", port)
if (bindAddress != null) params.put("bindAddress", bindAddress)
if (stickinessPeriod != null) params.put("stickinessPeriod", stickinessPeriod)
if (options_ != null) params.put("options", optionsOrFile(options_))
if (constraints != null) params.put("constraints", constraints)
if (log4jOptions != null) params.put("log4jOptions", optionsOrFile(log4jOptions))
if (jvmOptions != null) params.put("jvmOptions", jvmOptions)
if (failoverDelay != null) params.put("failoverDelay", failoverDelay)
if (failoverMaxDelay != null) params.put("failoverMaxDelay", failoverMaxDelay)
if (failoverMaxTries != null) params.put("failoverMaxTries", failoverMaxTries)
var json: Map[String, Object] = null
try { json = sendRequest("/brokers/" + (if (add) "add" else "update"), params) }
catch { case e: IOException => throw new Error("" + e) }
val brokerNodes: List[Map[String, Object]] = json("brokers").asInstanceOf[List[Map[String, Object]]]
val addedUpdated = if (add) "added" else "updated"
val brokers = "broker" + (if (brokerNodes.length > 1) "s" else "")
printLine(s"${brokers.capitalize} $addedUpdated\\n")
printLine(s"$brokers:")
for (brokerNode <- brokerNodes) {
val broker: Broker = new Broker()
broker.fromJson(brokerNode)
printBroker(broker, 1)
printLine()
}
}
private def handleRemoveBroker(id: String, help: Boolean = false): Unit = {
if (help) {
out.println("Remove brokers\\nUsage: remove <id-expr> [options]\\n")
handleGenericOptions(null, help = true)
out.println()
printIdExprExamples()
return
}
var json: Map[String, Object] = null
try { json = sendRequest("/brokers/remove", Collections.singletonMap("id", id)) }
catch { case e: IOException => throw new Error("" + e) }
val ids = json("ids").asInstanceOf[String]
val brokers = "Broker" + (if (ids.contains(",")) "s" else "")
printLine(s"$brokers $ids removed")
}
private def handleStartStopBroker(id: String, args: Array[String], start: Boolean, help: Boolean = false): Unit = {
val parser = newParser()
parser.accepts("timeout", "timeout (30s, 1m, 1h). 0s - no timeout").withRequiredArg().ofType(classOf[String])
if (!start) parser.accepts("force", "forcibly stop").withOptionalArg().ofType(classOf[String])
if (help) {
val command = if (start) "start" else "stop"
out.println(s"${command.capitalize} brokers\\nUsage: $command <id-expr> [options]\\n")
parser.printHelpOn(out)
out.println()
handleGenericOptions(null, help = true)
out.println()
printIdExprExamples()
return
}
var options: OptionSet = null
try { options = parser.parse(args: _*) }
catch {
case e: OptionException =>
parser.printHelpOn(out)
out.println()
throw new Error(e.getMessage)
}
val command: String = if (start) "start" else "stop"
val timeout: String = options.valueOf("timeout").asInstanceOf[String]
val force: Boolean = options.has("force")
val params = new util.LinkedHashMap[String, String]()
params.put("id", id)
if (timeout != null) params.put("timeout", timeout)
if (force) params.put("force", null)
var json: Map[String, Object] = null
try { json = sendRequest("/brokers/" + command, params) }
catch { case e: IOException => throw new Error("" + e) }
val status = json("status").asInstanceOf[String]
val ids = json("ids").asInstanceOf[String]
val brokers = "Broker" + (if (ids.contains(",")) "s" else "")
val startStop = if (start) "start" else "stop"
// started|stopped|scheduled|timeout
if (status == "timeout") throw new Error(s"$brokers $ids scheduled to $startStop. Got timeout")
else if (status == "scheduled") printLine(s"$brokers $ids scheduled to $startStop")
else printLine(s"$brokers $ids $status")
}
private def handleRebalance(arg: String, args: Array[String], help: Boolean = false): Unit = {
val parser = newParser()
parser.accepts("topics", "<topic-expr>. Default - *. See below.").withRequiredArg().ofType(classOf[String])
parser.accepts("timeout", "timeout (30s, 1m, 1h). 0s - no timeout").withRequiredArg().ofType(classOf[String])
if (help) {
out.println("Rebalance topics\\nUsage: rebalance <id-expr>|status [options]\\n")
parser.printHelpOn(out)
out.println()
handleGenericOptions(null, help = true)
out.println()
printTopicExprExamples()
out.println()
printIdExprExamples()
return
}
var options: OptionSet = null
try { options = parser.parse(args: _*) }
catch {
case e: OptionException =>
parser.printHelpOn(out)
out.println()
throw new Error(e.getMessage)
}
val topics: String = options.valueOf("topics").asInstanceOf[String]
val timeout: String = options.valueOf("timeout").asInstanceOf[String]
val params = new util.LinkedHashMap[String, String]()
if (arg != "status") params.put("id", arg)
if (topics != null) params.put("topics", topics)
if (timeout != null) params.put("timeout", timeout)
var json: Map[String, Object] = null
try { json = sendRequest("/brokers/rebalance", params) }
catch { case e: IOException => throw new Error("" + e) }
val status = json("status").asInstanceOf[String]
val error = if (json.contains("error")) json("error").asInstanceOf[String] else ""
val state: String = json("state").asInstanceOf[String]
val is: String = if (status == "idle" || status == "running") "is " else ""
val colon: String = if (state.isEmpty && error.isEmpty) "" else ":"
// started|completed|failed|running|idle|timeout
if (status == "timeout") throw new Error("Rebalance timeout:\\n" + state)
printLine(s"Rebalance $is$status$colon $error")
if (error.isEmpty && !state.isEmpty) printLine(state)
}
private[kafka] def handleGenericOptions(args: Array[String], help: Boolean = false): Array[String] = {
val parser = newParser()
parser.accepts("api", "Api url. Example: http://master:7000").withRequiredArg().ofType(classOf[java.lang.String])
parser.allowsUnrecognizedOptions()
if (help) {
out.println("Generic Options")
parser.printHelpOn(out)
return args
}
var options: OptionSet = null
try { options = parser.parse(args: _*) }
catch {
case e: OptionException =>
parser.printHelpOn(out)
out.println()
throw new Error(e.getMessage)
}
resolveApi(options.valueOf("api").asInstanceOf[String])
options.nonOptionArguments().toArray(new Array[String](0))
}
private def optionsOrFile(value: String): String = {
if (!value.startsWith("file:")) return value
val file = new File(value.substring("file:".length))
    if (!file.exists()) throw new Error(s"File $file does not exist")
val props: Properties = new Properties()
val reader = new FileReader(file)
try { props.load(reader) }
finally { reader.close() }
val map = new util.HashMap[String, String](props.toMap)
Util.formatMap(map)
}
private def newParser(): OptionParser = {
val parser: OptionParser = new OptionParser()
parser.formatHelpWith(new BuiltinHelpFormatter(Util.terminalWidth, 2))
parser
}
private def printCommands(): Unit = {
printLine("Commands:")
printLine("help {cmd} - print general or command-specific help", 1)
if (!noScheduler) printLine("scheduler - start scheduler", 1)
printLine("status - print cluster status", 1)
printLine("add - add brokers", 1)
printLine("update - update brokers", 1)
printLine("remove - remove brokers", 1)
printLine("start - start brokers", 1)
printLine("stop - stop brokers", 1)
printLine("rebalance - rebalance topics", 1)
}
private def printCluster(cluster: Cluster): Unit = {
printLine("brokers:", 1)
for (broker <- cluster.getBrokers) {
printBroker(broker, 2)
printLine()
}
}
private def printBroker(broker: Broker, indent: Int): Unit = {
printLine("id: " + broker.id, indent)
printLine("active: " + broker.active, indent)
printLine("state: " + broker.state(), indent)
printLine("resources: " + "cpus:" + "%.2f".format(broker.cpus) + ", mem:" + broker.mem + ", heap:" + broker.heap + ", port:" + (if (broker.port != null) broker.port else "auto"), indent)
if (broker.bindAddress != null) printLine("bind-address: " + broker.bindAddress, indent)
if (!broker.constraints.isEmpty) printLine("constraints: " + Util.formatMap(broker.constraints), indent)
if (!broker.options.isEmpty) printLine("options: " + Util.formatMap(broker.options), indent)
if (!broker.log4jOptions.isEmpty) printLine("log4j-options: " + Util.formatMap(broker.log4jOptions), indent)
if (broker.jvmOptions != null) printLine("jvm-options: " + broker.jvmOptions, indent)
var failover = "failover:"
failover += " delay:" + broker.failover.delay
failover += ", max-delay:" + broker.failover.maxDelay
if (broker.failover.maxTries != null) failover += ", max-tries:" + broker.failover.maxTries
printLine(failover, indent)
var stickiness = "stickiness:"
stickiness += " period:" + broker.stickiness.period
if (broker.stickiness.hostname != null) stickiness += ", hostname:" + broker.stickiness.hostname
if (broker.stickiness.stopTime != null) stickiness += ", expires:" + Str.dateTime(broker.stickiness.expires)
printLine(stickiness, indent)
val task = broker.task
if (task != null) {
printLine("task: ", indent)
printLine("id: " + broker.task.id, indent + 1)
printLine("state: " + task.state, indent + 1)
if (task.endpoint != null) printLine("endpoint: " + task.endpoint + (if (broker.bindAddress != null) " (" + task.hostname + ")" else ""), indent + 1)
if (!task.attributes.isEmpty) printLine("attributes: " + Util.formatMap(task.attributes), indent + 1)
}
}
private def printIdExprExamples(): Unit = {
printLine("id-expr examples:")
printLine("0 - broker 0", 1)
printLine("0,1 - brokers 0,1", 1)
printLine("0..2 - brokers 0,1,2", 1)
printLine("0,1..2 - brokers 0,1,2", 1)
printLine("* - any broker", 1)
}
private def printConstraintExamples(): Unit = {
printLine("constraint examples:")
printLine("like:master - value equals 'master'", 1)
printLine("unlike:master - value not equals 'master'", 1)
printLine("like:slave.* - value starts with 'slave'", 1)
printLine("unique - all values are unique", 1)
printLine("cluster - all values are the same", 1)
printLine("cluster:master - value equals 'master'", 1)
printLine("groupBy - all values are the same", 1)
printLine("groupBy:3 - all values are within 3 different groups", 1)
}
private def printTopicExprExamples(): Unit = {
printLine("topic-expr examples:")
printLine("t0 - topic t0 with default RF (replication-factor)", 1)
printLine("t0,t1 - topics t0, t1 with default RF", 1)
printLine("t0:3 - topic t0 with RF=3", 1)
printLine("t0,t1:2 - topic t0 with default RF, topic t1 with RF=2", 1)
printLine("* - all topics with default RF", 1)
printLine("*:2 - all topics with RF=2", 1)
printLine("t0:1,*:2 - all topics with RF=2 except topic t0 with RF=1", 1)
}
private def printLine(s: Object = "", indent: Int = 0): Unit = out.println(" " * indent + s)
private[kafka] def resolveApi(apiOption: String): Unit = {
if (api != null) return
if (apiOption != null) {
api = apiOption
return
}
if (System.getenv("KM_API") != null) {
api = System.getenv("KM_API")
return
}
if (Config.DEFAULT_FILE.exists()) {
val props: Properties = new Properties()
val stream: FileInputStream = new FileInputStream(Config.DEFAULT_FILE)
props.load(stream)
stream.close()
api = props.getProperty("api")
if (api != null) return
}
throw new Error("Undefined api. Provide either cli option or config default value")
}
private[kafka] def noScheduler: Boolean = System.getenv("KM_NO_SCHEDULER") != null
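  // Illustrative note (assumed, not part of the original source): a call like
  //   sendRequest("/brokers/status", Collections.emptyMap())
  // POSTs a form-encoded body to "<api>/api/brokers/status" and returns the parsed JSON response as a Map.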
private[kafka] def sendRequest(uri: String, params: util.Map[String, String]): Map[String, Object] = {
def queryString(params: util.Map[String, String]): String = {
var s = ""
for ((name, value) <- params) {
if (!s.isEmpty) s += "&"
s += URLEncoder.encode(name, "utf-8")
if (value != null) s += "=" + URLEncoder.encode(value, "utf-8")
}
s
}
val qs: String = queryString(params)
val url: String = api + (if (api.endsWith("/")) "" else "/") + "api" + uri
val connection: HttpURLConnection = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
var response: String = null
try {
connection.setRequestMethod("POST")
connection.setDoOutput(true)
val data = qs.getBytes("utf-8")
connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded; charset=utf-8")
connection.setRequestProperty("Content-Length", "" + data.length)
connection.getOutputStream.write(data)
try { response = Source.fromInputStream(connection.getInputStream).getLines().mkString}
catch {
case e: IOException =>
if (connection.getResponseCode != 200) throw new IOException(connection.getResponseCode + " - " + connection.getResponseMessage)
else throw e
}
} finally {
connection.disconnect()
}
if (response.trim().isEmpty) return null
var node: Map[String, Object] = null
try { node = Util.parseJson(response)}
catch { case e: IllegalArgumentException => throw new IOException(e) }
node
}
class Error(message: String) extends java.lang.Error(message) {}
}
|
sujeetv/kafka
|
src/scala/ly/stealth/mesos/kafka/Cli.scala
|
Scala
|
apache-2.0
| 27,418
|
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.flink.runtime.messages
import java.util
import org.apache.flink.runtime.deployment.{InputChannelDeploymentDescriptor, TaskDeploymentDescriptor}
import org.apache.flink.runtime.executiongraph.{ExecutionAttemptID, PartitionInfo}
import org.apache.flink.runtime.jobgraph.IntermediateDataSetID
import org.apache.flink.runtime.taskmanager.TaskExecutionState
/**
* A set of messages that control the deployment and the state of Tasks executed
* on the TaskManager.
*/
object TaskMessages {
/**
* Marker trait for task messages.
*/
trait TaskMessage
// --------------------------------------------------------------------------
// Starting and stopping Tasks
// --------------------------------------------------------------------------
/**
   * Submits a task to the task manager. The result of this message is a
* [[TaskOperationResult]] message.
*
* @param tasks Descriptor which contains the information to start the task.
*/
case class SubmitTask(tasks: TaskDeploymentDescriptor)
extends TaskMessage with RequiresLeaderSessionID
/**
* Cancels the task associated with [[attemptID]]. The result is sent back to the sender as a
* [[TaskOperationResult]] message.
*
* @param attemptID The task's execution attempt ID.
*/
case class CancelTask(attemptID: ExecutionAttemptID)
extends TaskMessage with RequiresLeaderSessionID
/**
* Stops the task associated with [[attemptID]]. The result is sent back to the sender as a
* [[TaskOperationResult]] message.
*
* @param attemptID The task's execution attempt ID.
*/
case class StopTask(attemptID: ExecutionAttemptID)
extends TaskMessage with RequiresLeaderSessionID
/**
   * Triggers a failure of the specified task from the outside (as opposed to the task throwing
* an exception itself) with the given exception as the cause.
*
* @param executionID The task's execution attempt ID.
* @param cause The reason for the external failure.
*/
case class FailTask(executionID: ExecutionAttemptID, cause: Throwable)
extends TaskMessage
/**
* Notifies the TaskManager that the task has reached its final state,
* either FINISHED, CANCELED, or FAILED.
*
* @param executionID The task's execution attempt ID.
*/
case class TaskInFinalState(executionID: ExecutionAttemptID)
extends TaskMessage
// --------------------------------------------------------------------------
// Updates to Intermediate Results
// --------------------------------------------------------------------------
/**
* Base class for messages that update the information about location of input partitions
*/
abstract sealed class UpdatePartitionInfo extends TaskMessage with RequiresLeaderSessionID {
def executionID: ExecutionAttemptID
}
/**
*
* @param executionID The task's execution attempt ID.
* @param resultId The input reader to update.
* @param partitionInfo The partition info update.
*/
case class UpdateTaskSinglePartitionInfo(
executionID: ExecutionAttemptID,
resultId: IntermediateDataSetID,
partitionInfo: InputChannelDeploymentDescriptor)
extends UpdatePartitionInfo
/**
*
* @param executionID The task's execution attempt ID.
* @param partitionInfos List of input gates with channel descriptors to update.
*/
case class UpdateTaskMultiplePartitionInfos(
executionID: ExecutionAttemptID,
partitionInfos: java.lang.Iterable[PartitionInfo])
extends UpdatePartitionInfo
/**
* Fails (and releases) all intermediate result partitions identified by
* [[executionID]] from the task manager.
*
* @param executionID The task's execution attempt ID.
*/
case class FailIntermediateResultPartitions(executionID: ExecutionAttemptID)
extends TaskMessage with RequiresLeaderSessionID
// --------------------------------------------------------------------------
// Report Messages
// --------------------------------------------------------------------------
/**
* Denotes a state change of a task at the JobManager. The update success is acknowledged by a
* boolean value which is sent back to the sender.
*
* @param taskExecutionState The changed task state
*/
case class UpdateTaskExecutionState(taskExecutionState: TaskExecutionState)
extends TaskMessage with RequiresLeaderSessionID
// --------------------------------------------------------------------------
// Utility Functions
// --------------------------------------------------------------------------
def createUpdateTaskMultiplePartitionInfos(
executionID: ExecutionAttemptID,
resultIDs: java.util.List[IntermediateDataSetID],
partitionInfos: java.util.List[InputChannelDeploymentDescriptor])
: UpdateTaskMultiplePartitionInfos = {
require(resultIDs.size() == partitionInfos.size(),
"ResultIDs must have the same length as partitionInfos.")
val partitionInfoList = new util.ArrayList[PartitionInfo](resultIDs.size())
for (i <- 0 until resultIDs.size()) {
partitionInfoList.add(new PartitionInfo(resultIDs.get(i), partitionInfos.get(i)))
}
new UpdateTaskMultiplePartitionInfos(
executionID,
partitionInfoList)
}
}
|
oscarceballos/flink-1.3.2
|
flink-runtime/src/main/scala/org/apache/flink/runtime/messages/TaskControlMessages.scala
|
Scala
|
apache-2.0
| 6,086
|
object i0 {
def i1(i2: Any) = i2 match {
case i3: i4 => () => i3
case _ => i2
case _ => throw new Exception(
(new _)
}
}
|
som-snytt/dotty
|
tests/fuzzy/4a5fb957ddf1b97d1e06db42848a3bceeb1e4b74.scala
|
Scala
|
apache-2.0
| 120
|
// Databricks notebook source
// MAGIC %md
// MAGIC ScaDaMaLe Course [site](https://lamastex.github.io/scalable-data-science/sds/3/x/) and [book](https://lamastex.github.io/ScaDaMaLe/index.html)
// COMMAND ----------
// MAGIC %md
// MAGIC # Generate random graphs
// MAGIC Here random graphs are generated, first using Erdös-Renyi method and then using R-MAT.
// COMMAND ----------
import org.apache.spark.graphx.util.GraphGenerators
import scala.util.Random
import org.apache.spark.sql.{Row, DataFrame}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.{functions => F}
import org.apache.spark.sql.types.{IntegerType, LongType, DoubleType, StringType, StructField, StructType}
// COMMAND ----------
// Values taken from the Ethereum graph
val numNodes = 1520925
val numEdges = 2152835
// COMMAND ----------
// MAGIC %md
// MAGIC
// MAGIC ## Function for making a canonical ordering for the edges of a graph
// MAGIC - Input is a dataframe with rows of "src" and "dst" node numbers
// MAGIC - A new node id is computed such that the nodes have ids 0,1,2,...
// MAGIC - The canonical ordering is made such that each edge will point from lower to higher index
// COMMAND ----------
def makeEdgesCanonical (edgeDF : org.apache.spark.sql.DataFrame): org.apache.spark.sql.DataFrame = {
// Remove self-loops
val edgeDFClean = edgeDF.distinct().where(F.col("src") =!= F.col("dst"))
// Provide each node with an index id
val nodes = edgeDFClean.select(F.col("src").alias("node")).union(edgeDFClean.select(F.col("dst").alias("node"))).distinct()
val nodes_window = Window.orderBy("node")
val nodesWithids = nodes.withColumn("id", F.row_number().over(nodes_window))
// Add the canonical node ids to the edgeDF and drop the old ids
val dstNodes = nodesWithids.withColumnRenamed("node", "dst").withColumnRenamed("id", "dst__")
val srcNodes = nodesWithids.withColumnRenamed("node", "src").withColumnRenamed("id", "src__")
val edgesWithBothIds = edgeDFClean.join(dstNodes, dstNodes("dst") === edgeDFClean("dst"))
.join(srcNodes, srcNodes("src") === edgeDFClean("src"))
.drop("src").drop("dst")
val edgesWithCanonicalIds = edgesWithBothIds.withColumn("src",
F.when(F.col("dst__") > F.col("src__"), F.col("src__")).otherwise(F.col("dst__"))
).withColumn("dst",
F.when(F.col("dst__") > F.col("src__"), F.col("dst__")).otherwise(F.col("src__"))
).drop("src__").drop("dst__").distinct().where(F.col("src") =!= F.col("dst"))
val edges_window = Window.orderBy(F.col("src"), F.col("dst"))
val GroupedCanonicalEdges = edgesWithCanonicalIds.withColumn("id", F.row_number().over(edges_window))
return GroupedCanonicalEdges
}
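// A minimal usage sketch (assumed, not part of the original notebook): feed a tiny edge
// DataFrame with the expected "src"/"dst" columns through makeEdgesCanonical. Self-loops are
// dropped, duplicate/reversed edges collapse, and each remaining edge points from the lower
// to the higher node id, numbered 1, 2, ...
//
//   import spark.implicits._  // for toDF on a local Seq
//   val demoEdges = Seq((10L, 3L), (3L, 10L), (7L, 7L)).toDF("src", "dst")
//   makeEdgesCanonical(demoEdges).show()  // single row: src=1, dst=2, id=1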
// COMMAND ----------
// MAGIC %md
// MAGIC ## Generate Erdös-Renyi graph (uniform edge sampling)
// COMMAND ----------
// MAGIC %md
// MAGIC #### Function for sampling an Erdös-Renyi graph
// MAGIC The resulting graph will have at most the number of nodes given by numNodes and at most numEdges edges.
// MAGIC The number of nodes is less than numNodes if some nodes did not have an edge to another node.
// MAGIC The number of edges is less than numEdges if some edges are duplicates or if some edges are self-loops.
// COMMAND ----------
def sampleERGraph (numNodes : Int, numEdges : Int, iter : Int): org.apache.spark.sql.DataFrame = {
val randomEdges = sc.parallelize(0 until numEdges).map {
idx =>
val random = new Random(42 + iter * numEdges + idx)
val src = random.nextInt(numNodes)
val dst = random.nextInt(numNodes)
if (src > dst) Row(dst, src) else Row(src, dst)
}
val schema = new StructType()
.add(StructField("src", IntegerType, true))
.add(StructField("dst", IntegerType, true))
val groupedCanonicalEdges = makeEdgesCanonical(spark.createDataFrame(randomEdges, schema))
return groupedCanonicalEdges
}
// COMMAND ----------
// MAGIC %md
// MAGIC #### Sample and save 10 different Erdös-Renyi graphs with different seeds and save each to parquet
// COMMAND ----------
for(i <- 0 to 9) {
val groupedCanonicalEdges = sampleERGraph(numNodes, numEdges, iter=i)
groupedCanonicalEdges.write.format("parquet").mode("overwrite").save("/projects/group21/uniform_random_graph" + i)
}
// COMMAND ----------
// MAGIC %md
// MAGIC ## Generate R-MAT graph
// COMMAND ----------
// MAGIC %md
// MAGIC #### The default parameters for R-MAT generation
// COMMAND ----------
println("RMAT a: " + GraphGenerators.RMATa)
println("RMAT b: " + GraphGenerators.RMATb)
println("RMAT c: " + GraphGenerators.RMATc)
println("RMAT d: " + GraphGenerators.RMATd)
// COMMAND ----------
// MAGIC %md
// MAGIC #### Function for generating a R-MAT graph, storing the edges as a Dataframe and applying makeEdgesCanonical
// COMMAND ----------
def sampleRMATGraph (numNodes : Int, numEdges : Int): org.apache.spark.sql.DataFrame = {
val rmatGraphraw = GraphGenerators.rmatGraph(sc=spark.sparkContext, requestedNumVertices=numNodes, numEdges=numEdges)
val rmatedges = rmatGraphraw.edges.map{
edge => Row(edge.srcId, edge.dstId)
}
val schema = new StructType()
.add(StructField("src", LongType, true))
.add(StructField("dst", LongType, true))
val rmatGroupedCanonicalEdges = makeEdgesCanonical(spark.createDataFrame(rmatedges, schema))
return rmatGroupedCanonicalEdges
}
// COMMAND ----------
// MAGIC %md
// MAGIC #### Sample 10 R-MAT graphs and save each to parquet
// COMMAND ----------
for(i <- 0 to 9) {
val groupedCanonicalEdges = sampleRMATGraph(numNodes, numEdges)
groupedCanonicalEdges.write.format("parquet").mode("overwrite").save("/projects/group21/rmat_random_graph" + i)
}
// COMMAND ----------
|
lamastex/scalable-data-science
|
dbcArchives/2021/000_0-sds-3-x-projects/student-project-21_group-GraphSpectralAnalysis/02_generate_graphs.scala
|
Scala
|
unlicense
| 5,806
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark
import java.io.{ByteArrayInputStream, File, FileInputStream, FileOutputStream}
import java.net.{URI, URL}
import java.nio.charset.StandardCharsets
import java.nio.file.Paths
import java.util.Arrays
import java.util.concurrent.{CountDownLatch, TimeUnit}
import java.util.jar.{JarEntry, JarOutputStream}
import scala.collection.JavaConverters._
import scala.collection.mutable
import scala.collection.mutable.ArrayBuffer
import com.google.common.io.{ByteStreams, Files}
import javax.tools.{JavaFileObject, SimpleJavaFileObject, ToolProvider}
import org.apache.spark.executor.TaskMetrics
import org.apache.spark.scheduler._
import org.apache.spark.util.Utils
/**
* Utilities for tests. Included in main codebase since it's used by multiple
* projects.
*
* TODO: See if we can move this to the test codebase by specifying
* test dependencies between projects.
*/
private[spark] object TestUtils {
/**
* Create a jar that defines classes with the given names.
*
* Note: if this is used during class loader tests, class names should be unique
* in order to avoid interference between tests.
*/
def createJarWithClasses(
classNames: Seq[String],
toStringValue: String = "",
classNamesWithBase: Seq[(String, String)] = Seq(),
classpathUrls: Seq[URL] = Seq()): URL = {
val tempDir = Utils.createTempDir()
val files1 = for (name <- classNames) yield {
createCompiledClass(name, tempDir, toStringValue, classpathUrls = classpathUrls)
}
val files2 = for ((childName, baseName) <- classNamesWithBase) yield {
createCompiledClass(childName, tempDir, toStringValue, baseName, classpathUrls)
}
val jarFile = new File(tempDir, "testJar-%s.jar".format(System.currentTimeMillis()))
createJar(files1 ++ files2, jarFile)
}
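  // Illustrative usage (assumed, not part of the original source):
  //   TestUtils.createJarWithClasses(Seq("test.Foo", "test.Bar"))
  // returns a file: URL to a temporary jar containing the compiled Foo.class and Bar.class.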
/**
* Create a jar file containing multiple files. The `files` map contains a mapping of
* file names in the jar file to their contents.
*/
def createJarWithFiles(files: Map[String, String], dir: File = null): URL = {
val tempDir = Option(dir).getOrElse(Utils.createTempDir())
val jarFile = File.createTempFile("testJar", ".jar", tempDir)
val jarStream = new JarOutputStream(new FileOutputStream(jarFile))
files.foreach { case (k, v) =>
val entry = new JarEntry(k)
jarStream.putNextEntry(entry)
ByteStreams.copy(new ByteArrayInputStream(v.getBytes(StandardCharsets.UTF_8)), jarStream)
}
jarStream.close()
jarFile.toURI.toURL
}
/**
* Create a jar file that contains this set of files. All files will be located in the specified
* directory or at the root of the jar.
*/
def createJar(files: Seq[File], jarFile: File, directoryPrefix: Option[String] = None): URL = {
val jarFileStream = new FileOutputStream(jarFile)
val jarStream = new JarOutputStream(jarFileStream, new java.util.jar.Manifest())
for (file <- files) {
// The `name` for the argument in `JarEntry` should use / for its separator. This is
// ZIP specification.
val prefix = directoryPrefix.map(d => s"$d/").getOrElse("")
val jarEntry = new JarEntry(prefix + file.getName)
jarStream.putNextEntry(jarEntry)
val in = new FileInputStream(file)
ByteStreams.copy(in, jarStream)
in.close()
}
jarStream.close()
jarFileStream.close()
jarFile.toURI.toURL
}
// Adapted from the JavaCompiler.java doc examples
private val SOURCE = JavaFileObject.Kind.SOURCE
private def createURI(name: String) = {
URI.create(s"string:///${name.replace(".", "/")}${SOURCE.extension}")
}
private[spark] class JavaSourceFromString(val name: String, val code: String)
extends SimpleJavaFileObject(createURI(name), SOURCE) {
override def getCharContent(ignoreEncodingErrors: Boolean): String = code
}
/** Creates a compiled class with the source file. Class file will be placed in destDir. */
def createCompiledClass(
className: String,
destDir: File,
sourceFile: JavaSourceFromString,
classpathUrls: Seq[URL]): File = {
val compiler = ToolProvider.getSystemJavaCompiler
// Calling this outputs a class file in pwd. It's easier to just rename the files than
// build a custom FileManager that controls the output location.
val options = if (classpathUrls.nonEmpty) {
Seq("-classpath", classpathUrls.map { _.getFile }.mkString(File.pathSeparator))
} else {
Seq()
}
compiler.getTask(null, null, null, options.asJava, null, Arrays.asList(sourceFile)).call()
val fileName = className + ".class"
val result = new File(fileName)
assert(result.exists(), "Compiled file not found: " + result.getAbsolutePath())
val out = new File(destDir, fileName)
// renameTo cannot handle in and out files in different filesystems
// use google's Files.move instead
Files.move(result, out)
assert(out.exists(), "Destination file not moved: " + out.getAbsolutePath())
out
}
/** Creates a compiled class with the given name. Class file will be placed in destDir. */
def createCompiledClass(
className: String,
destDir: File,
toStringValue: String = "",
baseClass: String = null,
classpathUrls: Seq[URL] = Seq()): File = {
val extendsText = Option(baseClass).map { c => s" extends ${c}" }.getOrElse("")
val sourceFile = new JavaSourceFromString(className,
"public class " + className + extendsText + " implements java.io.Serializable {" +
" @Override public String toString() { return \\"" + toStringValue + "\\"; }}")
createCompiledClass(className, destDir, sourceFile, classpathUrls)
}
/**
* Run some code involving jobs submitted to the given context and assert that the jobs spilled.
*/
def assertSpilled[T](sc: SparkContext, identifier: String)(body: => T): Unit = {
val spillListener = new SpillListener
sc.addSparkListener(spillListener)
body
assert(spillListener.numSpilledStages > 0, s"expected $identifier to spill, but did not")
}
/**
* Run some code involving jobs submitted to the given context and assert that the jobs
* did not spill.
*/
def assertNotSpilled[T](sc: SparkContext, identifier: String)(body: => T): Unit = {
val spillListener = new SpillListener
sc.addSparkListener(spillListener)
body
assert(spillListener.numSpilledStages == 0, s"expected $identifier to not spill, but did")
}
}
/**
* A `SparkListener` that detects whether spills have occurred in Spark jobs.
*/
private class SpillListener extends SparkListener {
private val stageIdToTaskMetrics = new mutable.HashMap[Int, ArrayBuffer[TaskMetrics]]
private val spilledStageIds = new mutable.HashSet[Int]
private val stagesDone = new CountDownLatch(1)
def numSpilledStages: Int = {
// Long timeout, just in case somehow the job end isn't notified.
// Fails if a timeout occurs
assert(stagesDone.await(10, TimeUnit.SECONDS))
spilledStageIds.size
}
override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
stageIdToTaskMetrics.getOrElseUpdate(
taskEnd.stageId, new ArrayBuffer[TaskMetrics]) += taskEnd.taskMetrics
}
override def onStageCompleted(stageComplete: SparkListenerStageCompleted): Unit = {
val stageId = stageComplete.stageInfo.stageId
val metrics = stageIdToTaskMetrics.remove(stageId).toSeq.flatten
val spilled = metrics.map(_.memoryBytesSpilled).sum > 0
if (spilled) {
spilledStageIds += stageId
}
}
override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
stagesDone.countDown()
}
}
|
sh-cho/cshSpark
|
TestUtils.scala
|
Scala
|
apache-2.0
| 8,422
|
package models
import java.util._;
import javax.persistence._;
import javax.validation.constraints._;
import com.avaje.ebean.Model;
import play.data.format._;
import play.data.validation._;
import models.user.UserSession
import play.api.libs.json.Json
import play.api.libs.json._
@Entity
class User extends Model {
@Id
var id:Int = _
@NotNull
var login:String = _
@NotNull
var name:String = _
@NotNull
var password:String = _
@ManyToOne()
var role:UserRole = _
@OneToMany(mappedBy = "user")
var sessions:List[UserSession] = _
}
object User {
implicit object UserFormat extends Format[User] {
def writes(user: User): JsValue = {
val loginSeq = Seq(
"id" -> JsNumber(user.id),
"login" -> JsString(user.login),
"name" -> JsString(user.name),
"role" -> JsString(if (user.role != null) user.role.symbol else "")
)
JsObject(loginSeq)
}
def reads(json: JsValue): JsResult[User] = {
JsSuccess(new User())
}
}
def finder:Model.Finder[Long, User] = new Model.Finder[Long, User](classOf[User]);
}
|
marcin-lawrowski/felicia
|
app/models/User.scala
|
Scala
|
gpl-3.0
| 1,139
|
//
// Copyright 2013, Martin Pokorny <martin@truffulatree.org>
//
// This Source Code Form is subject to the terms of the Mozilla Public License,
// v. 2.0. If a copy of the MPL was not distributed with this file, You can
// obtain one at http://mozilla.org/MPL/2.0/.
//
package org.truffulatree.scampi3
import scala.collection.mutable
import org.bridj.Pointer
trait GroupComponent {
mpi3: Scampi3 with Mpi3LibraryComponent =>
sealed class Group protected () {
protected final val handlePtr: Pointer[mpi3.lib.MPI_Group] = {
val result = allocateGroup()
result.set(mpi3.lib.MPI_GROUP_NULL)
result
}
protected[scampi3] final def handle = handlePtr(0)
override def equals(other: Any): Boolean = {
other.isInstanceOf[Group] &&
other.asInstanceOf[Group].handle == handle
}
override def hashCode: Int = handle.##
override def finalize() {
mpi3.lifecycleSync { if (!mpi3.finalized) free() }
super.finalize()
}
def free() {
if (!isNull) {
mpi3.mpiCall(mpi3.lib.MPI_Group_free(handlePtr))
}
}
final lazy val size: Int = withOutVar { size: Pointer[Int] =>
mpi3.mpiCall(mpi3.lib.MPI_Group_size(handle, size))
size(0)
}
final lazy val rank: Option[Int] = withOutVar { rank: Pointer[Int] =>
mpi3.mpiCall(mpi3.lib.MPI_Group_rank(handle, rank))
if (rank(0) != mpi3.lib.MPI_UNDEFINED) Some(rank(0))
else None
}
def translateRanks(ranks: Seq[Int], other: Group): Seq[Int] = {
val result = Pointer.allocateInts(ranks.size).as(classOf[Int])
try {
mpi3.mpiCall(
mpi3.lib.MPI_Group_translate_ranks(
handle,
ranks.size,
Pointer.pointerToInts(ranks:_*).as(classOf[Int]),
other.handle,
result))
result.getInts
} finally result.release()
}
def compare(other: Group): mpi3.Comparison.Comparison =
withOutVar { comp: Pointer[Int] =>
mpi3.mpiCall(mpi3.lib.MPI_Group_compare(handle, other.handle, comp))
mpi3.Comparison(comp(0))
}
def union(other: Group): Group =
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(mpi3.lib.MPI_Group_union(handle, other.handle, newGroup))
Group(newGroup(0))
}
def intersection(other: Group): Group =
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(
mpi3.lib.MPI_Group_intersection(handle, other.handle, newGroup))
Group(newGroup(0))
}
def difference(other: Group): Group =
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(
mpi3.lib.MPI_Group_difference(handle, other.handle, newGroup))
Group(newGroup(0))
}
def incl(ranks: Seq[Int]): Group = {
require(
ranks.forall(r => 0 <= r && r < size),
"All elements of 'ranks' are not valid ranks in group")
require(
ranks.distinct.forall(r => ranks.count(_ == r) == 1),
"All elements of 'ranks' are not distinct")
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(
mpi3.lib.MPI_Group_incl(
handle,
ranks.size,
Pointer.pointerToInts(ranks:_*).as(classOf[Int]),
newGroup))
Group(newGroup(0))
}
}
def excl(ranks: Seq[Int]): Group = {
require(ranks.forall(r => 0 <= r && r < size),
"All elements of 'ranks' are not valid ranks in group")
require(ranks.distinct.forall(r => ranks.count(_ == r) == 1),
"All elements of 'ranks' are not distinct")
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(
mpi3.lib.MPI_Group_excl(
handle,
ranks.size,
Pointer.pointerToInts(ranks:_*).as(classOf[Int]),
newGroup))
Group(newGroup(0))
}
}
def rangeIncl(ranges: Seq[(Int, Int, Int)]): Group = {
val ranks = ranges flatMap {
case (first, last, stride) => first.to(last, stride)
}
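      // e.g. (illustration, not in the original) the range triple (0, 9, 3) expands to ranks 0, 3, 6, 9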
require(
ranks.forall(r => 0 <= r && r < size),
"All computed ranks are not valid ranks in group")
require(
ranks.distinct.forall(r => ranks.count(_ == r) == 1),
"All computed ranks are not distinct")
val flatRanges = ranges flatMap {
case (first, last, stride) => Seq(first, last, stride)
}
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(
mpi3.lib.MPI_Group_range_incl(
handle,
ranges.size,
Pointer.pointerToInts(flatRanges:_*).as(classOf[Int]),
newGroup))
Group(newGroup(0))
}
}
def rangeExcl(ranges: Seq[(Int, Int, Int)]): Group = {
val ranks = ranges flatMap {
case (first, last, stride) => first.to(last, stride)
}
require(ranks.forall(r => 0 <= r && r < size),
"All computed ranks are not valid ranks in group")
require(ranks.distinct.forall(r => ranks.count(_ == r) == 1),
"All computed ranks are not distinct")
val flatRanges = ranges flatMap {
case (first, last, stride) => Seq(first, last, stride)
}
withOutVar { newGroup: Pointer[mpi3.lib.MPI_Group] =>
mpi3.mpiCall(
mpi3.lib.MPI_Group_range_excl(
handle,
ranges.size,
Pointer.pointerToInts(flatRanges:_*).as(classOf[Int]),
newGroup))
Group(newGroup(0))
}
}
def isNull: Boolean = handle == mpi3.lib.MPI_GROUP_NULL
}
object GroupEmpty extends Group {
handlePtr.set(mpi3.lib.MPI_GROUP_EMPTY)
override def free() {}
}
object Group {
protected[scampi3] def apply(grp: mpi3.lib.MPI_Group): Group = {
if (grp == GroupEmpty.handle) GroupEmpty
else if (grp != mpi3.lib.MPI_GROUP_NULL) {
val result = new Group
result.handlePtr.set(grp)
result
} else throw new mpi3.Exception("Null group cannot be instantiated")
}
}
}
|
mpokorny/scampi
|
src/main/scala/org/truffulatree/scampi3/GroupComponent.scala
|
Scala
|
mpl-2.0
| 6,088
|
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
**/
package kafka.api
import java.util.Properties
import kafka.server.{DynamicConfig, KafkaConfig, KafkaServer}
import org.apache.kafka.common.security.auth.KafkaPrincipal
import org.apache.kafka.common.utils.Sanitizer
import org.junit.Before
class ClientIdQuotaTest extends BaseQuotaTest {
override def producerClientId = "QuotasTestProducer-!@#$%^&*()"
override def consumerClientId = "QuotasTestConsumer-!@#$%^&*()"
@Before
override def setUp() {
this.serverConfig.setProperty(KafkaConfig.ProducerQuotaBytesPerSecondDefaultProp, defaultProducerQuota.toString)
this.serverConfig.setProperty(KafkaConfig.ConsumerQuotaBytesPerSecondDefaultProp, defaultConsumerQuota.toString)
super.setUp()
}
override def createQuotaTestClients(topic: String, leaderNode: KafkaServer): QuotaTestClients = {
val producer = createProducer()
val consumer = createConsumer()
new QuotaTestClients(topic, leaderNode, producerClientId, consumerClientId, producer, consumer) {
override def userPrincipal: KafkaPrincipal = KafkaPrincipal.ANONYMOUS
override def quotaMetricTags(clientId: String): Map[String, String] = {
Map("user" -> "", "client-id" -> clientId)
}
override def overrideQuotas(producerQuota: Long, consumerQuota: Long, requestQuota: Double) {
val producerProps = new Properties()
producerProps.put(DynamicConfig.Client.ProducerByteRateOverrideProp, producerQuota.toString)
producerProps.put(DynamicConfig.Client.RequestPercentageOverrideProp, requestQuota.toString)
updateQuotaOverride(producerClientId, producerProps)
val consumerProps = new Properties()
consumerProps.put(DynamicConfig.Client.ConsumerByteRateOverrideProp, consumerQuota.toString)
consumerProps.put(DynamicConfig.Client.RequestPercentageOverrideProp, requestQuota.toString)
updateQuotaOverride(consumerClientId, consumerProps)
}
override def removeQuotaOverrides() {
val emptyProps = new Properties
updateQuotaOverride(producerClientId, emptyProps)
updateQuotaOverride(consumerClientId, emptyProps)
}
private def updateQuotaOverride(clientId: String, properties: Properties) {
adminZkClient.changeClientIdConfig(Sanitizer.sanitize(clientId), properties)
}
}
}
}
|
KevinLiLu/kafka
|
core/src/test/scala/integration/kafka/api/ClientIdQuotaTest.scala
|
Scala
|
apache-2.0
| 2,888
|
package debop4s.core.utils
import scala.annotation.varargs
/**
 * Hash-related utilities
*
* @author 배성혁 sunghyouk.bae@gmail.com
 * @since 2013. 12. 12. 4:57 PM
*/
object Hashs {
/** The constant NULL_VALUE. */
val NULL_VALUE: Int = 0
/** The constant ONE_VALUE. */
val ONE_VALUE: Int = 1
/** The constant FACTOR. */
val FACTOR: Int = 31
/**
   * Generates a hash code.
*
   * @param x the object to compute a hash code for
   * @return the hash code
*/
private def computeInternal(x: Any): Int = if (x == null) NULL_VALUE else x.hashCode()
/**
   * Generates a hash code by combining the hash codes of the given objects.
*
   * @param objs the objects whose hash codes will be combined
   * @return the combined hash code
*/
@varargs
def compute(objs: Any*): Int = {
if (Arrays.isEmpty(objs))
return NULL_VALUE
var hash = NULL_VALUE
objs foreach { x =>
hash = hash * FACTOR + computeInternal(x)
}
hash
}
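  // A minimal usage sketch (assumed, not part of the original source):
  //   Hashs.compute("abc", 123, null)
  // folds the hash codes left to right with FACTOR = 31, i.e. it evaluates to
  //   ("abc".hashCode * FACTOR + 123) * FACTOR + NULL_VALUE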
}
|
debop/debop4s
|
debop4s-core/src/main/scala/debop4s/core/utils/Hashs.scala
|
Scala
|
apache-2.0
| 989
|
/*
* Copyright 2022 HM Revenue & Customs
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package services
import common.enums.VatRegStatus
import connectors._
import models.api._
import models.{TurnoverEstimates, _}
import play.api.libs.json.{Format, JsObject, JsValue}
import play.api.mvc.Request
import uk.gov.hmrc.http.HttpReads.Implicits._
import uk.gov.hmrc.http.{HeaderCarrier, HttpResponse}
import java.time.LocalDate
import javax.inject.{Inject, Singleton}
import scala.concurrent.{ExecutionContext, Future}
@Singleton
class VatRegistrationService @Inject()(val s4LService: S4LService,
vatRegConnector: VatRegistrationConnector,
registrationApiConnector: RegistrationApiConnector,
val sessionService: SessionService
)(implicit ec: ExecutionContext) {
// -- New Registrations API methods --
def getVatScheme(implicit profile: CurrentProfile, hc: HeaderCarrier): Future[VatScheme] =
vatRegConnector.getRegistration(profile.registrationId)
// TODO update structure of VatScheme so that all header information (IDs, creation date, status) can be accessed using the Sections API
def upsertVatScheme(vatScheme: VatScheme)(implicit profile: CurrentProfile, hc: HeaderCarrier): Future[VatScheme] =
vatRegConnector.upsertRegistration(profile.registrationId, vatScheme)
def getAllRegistrations(implicit hc: HeaderCarrier): Future[List[VatSchemeHeader]] =
vatRegConnector.getAllRegistrations
def getSection[T](regId: String)(implicit hc: HeaderCarrier, format: Format[T], apiKey: ApiKey[T]): Future[Option[T]] =
registrationApiConnector.getSection[T](regId)
def upsertSection[T](regId: String, data: T)(implicit hc: HeaderCarrier, format: Format[T], apiKey: ApiKey[T]): Future[T] =
registrationApiConnector.replaceSection[T](regId, data)
// -- End new Registrations API methods --
def getVatSchemeJson(regId: String)(implicit hc: HeaderCarrier): Future[JsValue] =
vatRegConnector.getRegistrationJson(regId)
def getAckRef(regId: String)(implicit hc: HeaderCarrier): Future[String] = vatRegConnector.getAckRef(regId)
def getTaxableThreshold(date: LocalDate = LocalDate.now())(implicit hc: HeaderCarrier): Future[String] = {
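    // Formats the threshold with thousands separators, e.g. (assumed value) 85000 -> "85,000"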
vatRegConnector.getTaxableThreshold(date) map { taxableThreshold =>
"%,d".format(taxableThreshold.threshold.toInt)
}
}
def deleteVatScheme(registrationId: String)(implicit hc: HeaderCarrier): Future[Boolean] =
vatRegConnector.deleteVatScheme(registrationId)
def createRegistrationFootprint(implicit hc: HeaderCarrier): Future[VatScheme] = {
logger.info("[createRegistrationFootprint] Creating registration footprint")
vatRegConnector.createNewRegistration
}
def getStatus(regId: String)(implicit hc: HeaderCarrier): Future[VatRegStatus.Value] = vatRegConnector.getStatus(regId)
def getEligibilityData(implicit hc: HeaderCarrier, cp: CurrentProfile): Future[JsObject] = vatRegConnector.getEligibilityData
def submitRegistration()(implicit hc: HeaderCarrier, profile: CurrentProfile, request: Request[_]): Future[DESResponse] = {
vatRegConnector.submitRegistration(profile.registrationId, request.headers.toSimpleMap)
} recover {
case _ => SubmissionFailedRetryable
}
def getThreshold(regId: String)(implicit hc: HeaderCarrier): Future[Threshold] =
vatRegConnector.getThreshold(regId) map (_.getOrElse(throw new IllegalStateException(s"No threshold block found in the back end for regId: $regId")))
def fetchTurnoverEstimates(implicit hc: HeaderCarrier, profile: CurrentProfile): Future[Option[TurnoverEstimates]] = {
vatRegConnector.getTurnoverEstimates
}
def submitHonestyDeclaration(regId: String, honestyDeclaration: Boolean)(implicit hc: HeaderCarrier): Future[HttpResponse] = {
vatRegConnector.submitHonestyDeclaration(regId, honestyDeclaration)
}
def storePartialVatScheme(regId: String, partialVatScheme: JsValue)(implicit hc: HeaderCarrier): Future[JsValue] =
vatRegConnector.upsertVatScheme(regId, partialVatScheme)
def getEligibilitySubmissionData(implicit profile: CurrentProfile, hc: HeaderCarrier): Future[EligibilitySubmissionData] =
registrationApiConnector.getSection[EligibilitySubmissionData](profile.registrationId).map(optData =>
optData.getOrElse(throw new IllegalStateException(s"No EligibilitySubmissionData block found in the backend for regId: ${profile.registrationId}"))
)
def partyType(implicit profile: CurrentProfile, hc: HeaderCarrier): Future[PartyType] =
getEligibilitySubmissionData.map(_.partyType)
def isTransactor(implicit profile: CurrentProfile, hc: HeaderCarrier): Future[Boolean] =
getEligibilitySubmissionData.map(_.isTransactor)
}
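// Illustrative sketch (not part of the upstream service): how a caller might combine
// getSection/upsertSection from the new Registrations API. The helper object and its name are
// hypothetical; it works for any T with implicit Format[T] and ApiKey[T] instances in scope
// (e.g. EligibilitySubmissionData above).
object VatRegistrationServiceUsageSketch {
  def roundTripSection[T](service: VatRegistrationService, regId: String)
                         (implicit hc: HeaderCarrier,
                          ec: ExecutionContext,
                          format: Format[T],
                          apiKey: ApiKey[T]): Future[Option[T]] =
    // Read the section if it exists, then write it straight back via the Registrations API.
    service.getSection[T](regId).flatMap {
      case Some(section) => service.upsertSection(regId, section).map(Some(_))
      case None          => Future.successful(None)
    }
}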
|
hmrc/vat-registration-frontend
|
app/services/VatRegistrationService.scala
|
Scala
|
apache-2.0
| 5,341
|
/*
* Copyright (C) 2009-2017 Lightbend Inc. <https://www.lightbend.com>
*/
package play.it.libs
import java.io.File
import java.nio.ByteBuffer
import java.nio.charset.{ Charset, StandardCharsets }
import java.util
import java.util.concurrent.TimeUnit
import akka.stream.scaladsl.{ FileIO, Sink, Source }
import akka.util.ByteString
import play.shaded.ahc.org.asynchttpclient.{ RequestBuilderBase, SignatureCalculator }
import play.api.http.Port
import play.api.libs.json.JsString
import play.api.libs.oauth._
import play.api.libs.streams.Accumulator
import play.api.libs.ws.WSBody
import play.api.mvc.Results.Ok
import play.api.mvc._
import play.api.test._
import play.core.server.Server
import play.it._
import play.it.tools.HttpBinApplication
import play.mvc.Http
import scala.concurrent.duration._
import scala.concurrent.{ Await, Future }
class NettyWSSpec extends WSSpec with NettyIntegrationSpecification
class AkkaHttpWSSpec extends WSSpec with AkkaHttpIntegrationSpecification
trait WSSpec extends PlaySpecification with ServerIntegrationSpecification {
import scala.concurrent.ExecutionContext.Implicits.global
"Web service client" title
sequential
def app = HttpBinApplication.app
val foldingSink = Sink.fold[ByteString, ByteString](ByteString.empty)((state, bs) => state ++ bs)
val isoString = {
    // Converts the String "Hello €" to its Windows-1252 counterpart
val sourceCharset = StandardCharsets.UTF_8
val buffer = ByteBuffer.wrap("Hello €".getBytes(sourceCharset))
val data = sourceCharset.decode(buffer)
val targetCharset = Charset.forName("Windows-1252")
new String(targetCharset.encode(data).array(), targetCharset)
}
"WS@java" should {
def withServer[T](block: play.libs.ws.WSClient => T) = {
Server.withApplication(app) { implicit port =>
withClient(block)
}
}
def withEchoServer[T](block: play.libs.ws.WSClient => T) = {
def echo = BodyParser { req =>
Accumulator.source[ByteString].mapFuture { source =>
Future.successful(source).map(Right.apply)
}
}
Server.withRouterFromComponents()(components => {
case _ => components.defaultActionBuilder(echo) { req =>
Ok.chunked(req.body)
}
}) { implicit port =>
withClient(block)
}
}
def withResult[T](result: Result)(block: play.libs.ws.WSClient => T) = {
Server.withRouterFromComponents() { components =>
{
case _ => components.defaultActionBuilder(result)
}
} { implicit port =>
withClient(block)
}
}
def withClient[T](block: play.libs.ws.WSClient => T)(implicit port: Port): T = {
val wsClient = play.test.WSTestClient.newClient(port.value)
try {
block(wsClient)
} finally {
wsClient.close()
}
}
def withHeaderCheck[T](block: play.libs.ws.WSClient => T) = {
Server.withRouterFromComponents() { components =>
{
case _ => components.defaultActionBuilder { req =>
val contentLength = req.headers.get(CONTENT_LENGTH)
val transferEncoding = req.headers.get(TRANSFER_ENCODING)
Ok(s"Content-Length: ${contentLength.getOrElse(-1)}; Transfer-Encoding: ${transferEncoding.getOrElse(-1)}")
}
}
} { implicit port =>
withClient(block)
}
}
def withXmlServer[T](block: play.libs.ws.WSClient => T) = {
Server.withRouterFromComponents() { components =>
{
case _ => components.defaultActionBuilder { req =>
val elem = <name>{ isoString }</name>.toString()
Ok(elem).as("application/xml;charset=Windows-1252")
}
}
} { implicit port =>
withClient(block)
}
}
import play.libs.ws.WSSignatureCalculator
"make GET Requests" in withServer { ws =>
val req = ws.url("/get").get
      val rep = req.toCompletableFuture.get(10, TimeUnit.SECONDS) // await result
rep.getStatus aka "status" must_== 200 and (
rep.asJson.path("origin").textValue must not beNull)
}
"use queryString in url" in withServer { ws =>
val rep = ws.url("/get?foo=bar").get().toCompletableFuture.get(10, TimeUnit.SECONDS)
rep.getStatus aka "status" must_== 200 and (
rep.asJson().path("args").path("foo").textValue() must_== "bar")
}
"use user:password in url" in Server.withApplication(app) { implicit port =>
withClient { ws =>
val rep = ws.url(s"http://user:password@localhost:$port/basic-auth/user/password").get()
.toCompletableFuture.get(10, TimeUnit.SECONDS)
rep.getStatus aka "status" must_== 200 and (
rep.asJson().path("authenticated").booleanValue() must beTrue)
}
}
"reject invalid query string" in withServer { ws =>
import java.net.MalformedURLException
ws.url("/get?=&foo").
aka("invalid request") must throwA[RuntimeException].like {
case e: RuntimeException =>
e.getCause must beAnInstanceOf[MalformedURLException]
}
}
"reject invalid user password string" in withServer { ws =>
import java.net.MalformedURLException
ws.url("http://@localhost/get").
aka("invalid request") must throwA[RuntimeException].like {
case e: RuntimeException =>
e.getCause must beAnInstanceOf[MalformedURLException]
}
}
"consider query string in JSON conversion" in withServer { ws =>
val empty = ws.url("/get?foo").get.toCompletableFuture.get(10, TimeUnit.SECONDS)
val bar = ws.url("/get?foo=bar").get.toCompletableFuture.get(10, TimeUnit.SECONDS)
empty.asJson.path("args").path("foo").textValue() must_== "" and (
bar.asJson.path("args").path("foo").textValue() must_== "bar")
}
"get a streamed response" in withResult(
Results.Ok.chunked(Source(List("a", "b", "c")))) { ws =>
val res = ws.url("/get").stream().toCompletableFuture.get()
await(res.getBody().runWith(foldingSink, app.materializer)).decodeString("utf-8").
aka("streamed response") must_== "abc"
}
"streaming a request body" in withEchoServer { ws =>
val source = Source(List("a", "b", "c").map(ByteString.apply)).asJava
val res = ws.url("/post").setMethod("POST").setBody(source).execute()
val body = res.toCompletableFuture.get().getBody
body must_== "abc"
}
"streaming a request body with manual content length" in withHeaderCheck { ws =>
val source = Source.single(ByteString("abc")).asJava
val res = ws.url("/post").setMethod("POST").setHeader(CONTENT_LENGTH, "3").setBody(source).execute()
val body = res.toCompletableFuture.get().getBody
body must_== s"Content-Length: 3; Transfer-Encoding: -1"
}
"sending a simple multipart form body" in withServer { ws =>
val source = Source.single(new Http.MultipartFormData.DataPart("hello", "world")).asJava
val res = ws.url("/post").post(source)
val body = res.toCompletableFuture.get().asJson()
body.path("form").path("hello").textValue() must_== "world"
}
"sending a multipart form body" in withServer { ws =>
val file = new File(this.getClass.getResource("/testassets/bar.txt").toURI).toPath
val dp = new Http.MultipartFormData.DataPart("hello", "world")
val fp = new Http.MultipartFormData.FilePart("upload", "bar.txt", "text/plain", FileIO.fromPath(file).asJava)
val source = akka.stream.javadsl.Source.from(util.Arrays.asList(dp, fp))
val res = ws.url("/post").post(source)
val body = res.toCompletableFuture.get().asJson()
body.path("form").path("hello").textValue() must_== "world"
body.path("file").textValue() must_== "This is a test asset."
}
"response asXml with correct contentType" in withXmlServer { ws =>
val body = ws.url("/xml").get().toCompletableFuture.get().asXml()
new String(body.getElementsByTagName("name").item(0).getTextContent.getBytes("Windows-1252")) must_== isoString
}
"send a multipart request body via setMultipartBody" in withServer { ws =>
val file = new File(this.getClass.getResource("/testassets/bar.txt").toURI)
val dp = new Http.MultipartFormData.DataPart("hello", "world")
val fp = new Http.MultipartFormData.FilePart("upload", "bar.txt", "text/plain", FileIO.fromPath(file.toPath).asJava)
val source = akka.stream.javadsl.Source.from(util.Arrays.asList(dp, fp))
val res = ws.url("/post").setMultipartBody(source).setMethod("POST").execute()
val body = res.toCompletableFuture.get().asJson()
body.path("form").path("hello").textValue() must_== "world"
body.path("file").textValue() must_== "This is a test asset."
}
class CustomSigner extends WSSignatureCalculator with play.shaded.ahc.org.asynchttpclient.SignatureCalculator {
def calculateAndAddSignature(request: play.shaded.ahc.org.asynchttpclient.Request, requestBuilder: play.shaded.ahc.org.asynchttpclient.RequestBuilderBase[_]) = {
// do nothing
}
}
"not throw an exception while signing requests" in withServer { ws =>
val key = "12234"
val secret = "asbcdef"
val token = "token"
val tokenSecret = "tokenSecret"
(ConsumerKey(key, secret), RequestToken(token, tokenSecret))
val calc: WSSignatureCalculator = new CustomSigner
ws.url("/").sign(calc).
aka("signed request") must not(throwA[Exception])
}
}
"WS@scala" should {
import play.api.libs.ws.{ StreamedBody, WSSignatureCalculator }
implicit val materializer = app.materializer
val foldingSink = Sink.fold[ByteString, ByteString](ByteString.empty)((state, bs) => state ++ bs)
def withServer[T](block: play.api.libs.ws.WSClient => T) = {
Server.withApplication(app) { implicit port =>
WsTestClient.withClient(block)
}
}
def withEchoServer[T](block: play.api.libs.ws.WSClient => T) = {
def echo = BodyParser { req =>
Accumulator.source[ByteString].mapFuture { source =>
Future.successful(source).map(Right.apply)
}
}
Server.withRouterFromComponents() { components =>
{
case _ => components.defaultActionBuilder(echo) { req =>
Ok.chunked(req.body)
}
}
} { implicit port =>
WsTestClient.withClient(block)
}
}
def withResult[T](result: Result)(block: play.api.libs.ws.WSClient => T) = {
Server.withRouterFromComponents() { c =>
{
case _ => c.defaultActionBuilder(result)
}
} { implicit port =>
WsTestClient.withClient(block)
}
}
def withHeaderCheck[T](block: play.api.libs.ws.WSClient => T) = {
Server.withRouterFromComponents() { c =>
{
case _ => c.defaultActionBuilder { req =>
val contentLength = req.headers.get(CONTENT_LENGTH)
val transferEncoding = req.headers.get(TRANSFER_ENCODING)
Ok(s"Content-Length: ${contentLength.getOrElse(-1)}; Transfer-Encoding: ${transferEncoding.getOrElse(-1)}")
}
}
} { implicit port =>
WsTestClient.withClient(block)
}
}
"make GET Requests" in withServer { ws =>
val req = ws.url("/get").get()
Await.result(req, Duration(1, SECONDS)).status aka "status" must_== 200
}
"Get 404 errors" in withServer { ws =>
val req = ws.url("/post").get()
Await.result(req, Duration(1, SECONDS)).status aka "status" must_== 404
}
"get a streamed response" in withResult(
Results.Ok.chunked(Source(List("a", "b", "c")))) { ws =>
val res = ws.url("/get").stream()
val body = await(res).body
await(body.runWith(foldingSink)).decodeString("utf-8").
aka("streamed response") must_== "abc"
}
"streaming a request body" in withEchoServer { ws =>
val source = Source(List("a", "b", "c").map(ByteString.apply))
val res = ws.url("/post").withMethod("POST").withBody(StreamedBody(source)).execute()
val body = await(res).body
body must_== "abc"
}
"streaming a request body with manual content length" in withHeaderCheck { ws =>
val source = Source.single(ByteString("abc"))
val res = ws.url("/post").withMethod("POST").withHeaders(CONTENT_LENGTH -> "3").withBody(StreamedBody(source)).execute()
val body = await(res).body
body must_== s"Content-Length: 3; Transfer-Encoding: -1"
}
"send a multipart request body" in withServer { ws =>
val file = new File(this.getClass.getResource("/testassets/foo.txt").toURI).toPath
val dp = MultipartFormData.DataPart("hello", "world")
val fp = MultipartFormData.FilePart("upload", "foo.txt", None, FileIO.fromPath(file))
val source = Source(List(dp, fp))
val res = ws.url("/post").post(source)
val body = await(res).json
(body \\ "form" \\ "hello").toOption must beSome(JsString("world"))
(body \\ "file").toOption must beSome(JsString("This is a test asset."))
}
"send a multipart request body via withBody" in withServer { ws =>
val file = new File(this.getClass.getResource("/testassets/foo.txt").toURI)
val dp = MultipartFormData.DataPart("hello", "world")
val fp = MultipartFormData.FilePart("upload", "foo.txt", None, FileIO.fromPath(file.toPath))
val source = Source(List(dp, fp))
val res = ws.url("/post").withBody(source).withMethod("POST").execute()
val body = await(res).json
(body \\ "form" \\ "hello").toOption must beSome(JsString("world"))
(body \\ "file").toOption must beSome(JsString("This is a test asset."))
}
class CustomSigner extends WSSignatureCalculator with SignatureCalculator {
def calculateAndAddSignature(request: play.shaded.ahc.org.asynchttpclient.Request, requestBuilder: RequestBuilderBase[_]) = {
// do nothing
}
}
"not throw an exception while signing requests" >> {
val calc = new CustomSigner
"without query string" in withServer { ws =>
ws.url("/").sign(calc).get().
aka("signed request") must not(throwA[NullPointerException])
}
"with query string" in withServer { ws =>
ws.url("/").withQueryString("lorem" -> "ipsum").
sign(calc) aka "signed request" must not(throwA[Exception])
}
}
}
}
|
ktoso/playframework
|
framework/src/play-integration-test/src/test/scala/play/it/libs/WSSpec.scala
|
Scala
|
apache-2.0
| 14,505
|
/**
* Skinny framework for rapid web app development in Scala.
*
* Skinny is a full-stack web app framework, which is built on Scalatra and additional components are integrated.
* To put it simply, Skinny framework's concept is Scala on Rails. Skinny is highly inspired by Ruby on Rails and it is optimized for sustainable productivity for ordinary Servlet-based app development.
*/
package object skinny {
type SkinnyLifeCycle = bootstrap.SkinnyLifeCycle
type ServletContext = javax.servlet.ServletContext
type SkinnyControllerBase = skinny.controller.SkinnyControllerBase
type SkinnyController = skinny.controller.SkinnyController
type SkinnyApiController = skinny.controller.SkinnyApiController
type SkinnyResource = skinny.controller.SkinnyResource
type SkinnyResourceWithId[Id] = skinny.controller.SkinnyResourceWithId[Id]
type SkinnyServlet = skinny.controller.SkinnyServlet
type SkinnyApiServlet = skinny.controller.SkinnyApiServlet
type Params = skinny.controller.Params
val Params = skinny.controller.Params
type MultiParams = skinny.controller.MultiParams
val MultiParams = skinny.controller.MultiParams
type Flash = skinny.controller.Flash
val Flash = skinny.controller.Flash
type KeyAndErrorMessages = skinny.controller.KeyAndErrorMessages
val KeyAndErrorMessages = skinny.controller.KeyAndErrorMessages
type Routes = skinny.routing.Routes
type SkinnyNoIdMapper[A] = skinny.orm.SkinnyNoIdMapper[A]
type SkinnyCRUDMapper[A] = skinny.orm.SkinnyCRUDMapper[A]
type SkinnyCRUDMapperWithId[Id, A] = skinny.orm.SkinnyCRUDMapperWithId[Id, A]
type SkinnyMapper[A] = skinny.orm.SkinnyMapper[A]
type SkinnyMapperWithId[Id, A] = skinny.orm.SkinnyMapperWithId[Id, A]
type SkinnyJoinTable[A] = skinny.orm.SkinnyJoinTable[A]
@deprecated("Use SkinnyMapper or SkinnyCRUDMapper instead because this mapper has ID.", since = "1.0.14")
type SkinnyJoinTableWithId[Id, A] = skinny.orm.SkinnyJoinTableWithId[Id, A]
type TypeConverter[A, B] = org.scalatra.util.conversion.TypeConverter[A, B]
type TypeConverterSupport = org.scalatra.util.conversion.TypeConverterSupport
val TypeConverterSupport = org.scalatra.util.conversion.TypeConverterSupport
type Logging = skinny.logging.Logging
}
|
BlackPrincess/skinny-framework
|
framework/src/main/scala/skinny/package.scala
|
Scala
|
mit
| 2,257
|
import scala.quoted.*
object Macro {
def impl[A : Type](using Quotes) = {
import quotes.reflect.*
val tpe/*: Type[? <: AnyKind]*/ = TypeRepr.of[A].asType
'{ f[$tpe] } // error
}
def f[T <: AnyKind]: Unit = ()
}
|
dotty-staging/dotty
|
tests/neg-macros/i8871b.scala
|
Scala
|
apache-2.0
| 229
|
package org.falcon.streaming.filter
import twitter4j.FilterQuery
import org.falcon.util.Util
/**
* Project: falcon
* Package: org.falcon.streaming.filter
*
* Author: Sergio Álvarez
* Date: 02/2014
*/
object FilterFactory {
def createFilterQuery: FilterQuery = new FilterQuery().language(Util.language).track(Util.keywords)
}
|
sergio-alvarez/falcon
|
src/main/scala/org/falcon/streaming/filter/FilterFactory.scala
|
Scala
|
apache-2.0
| 335
|
package com.github.gtache.lsp.requests
/**
* An object containing the Timeout for the various requests
*/
object Timeout {
import Timeouts._
private var timeouts: Map[Timeouts, Int] = Timeouts.values().map(t => t -> t.getDefaultTimeout).toMap
def getTimeoutsJava: java.util.Map[Timeouts, Integer] = {
import scala.collection.JavaConverters._
timeouts.map(t => (t._1, t._2.asInstanceOf[Integer])).asJava
}
def setTimeouts(timeouts: Map[Timeouts, Int]): Unit = {
this.timeouts = timeouts
}
def setTimeouts(timeouts: java.util.Map[Timeouts, Integer]): Unit = {
import scala.collection.JavaConverters._
this.timeouts = timeouts.asScala.map(entry => (entry._1, entry._2.toInt)).toMap
}
def CODEACTION_TIMEOUT: Int = timeouts(CODEACTION)
def CODELENS_TIMEOUT: Int = timeouts(CODELENS)
def COMPLETION_TIMEOUT: Int = timeouts(COMPLETION)
def DEFINITION_TIMEOUT: Int = timeouts(DEFINITION)
def DOC_HIGHLIGHT_TIMEOUT: Int = timeouts(DOC_HIGHLIGHT)
def EXECUTE_COMMAND_TIMEOUT: Int = timeouts(EXECUTE_COMMAND)
def FORMATTING_TIMEOUT: Int = timeouts(FORMATTING)
def HOVER_TIMEOUT: Int = timeouts(HOVER)
def INIT_TIMEOUT: Int = timeouts(INIT)
def PREPARE_RENAME_TIMEOUT: Int = timeouts(PREPARE_RENAME)
def REFERENCES_TIMEOUT: Int = timeouts(REFERENCES)
def SIGNATURE_TIMEOUT: Int = timeouts(SIGNATURE)
def SHUTDOWN_TIMEOUT: Int = timeouts(SHUTDOWN)
def SYMBOLS_TIMEOUT: Int = timeouts(SYMBOLS)
def WILLSAVE_TIMEOUT: Int = timeouts(WILLSAVE)
}
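// Illustrative sketch (not part of the plugin): overriding a single timeout while keeping the
// defaults, built the same way as the initial map above. The helper object and the 5000 ms value
// are hypothetical; COMPLETION is assumed to be a Timeouts value, as used by COMPLETION_TIMEOUT.
object TimeoutUsageSketch {
  import Timeouts._
  def raiseCompletionTimeout(millis: Int = 5000): Unit = {
    // Rebuild the default map, then replace just the completion entry.
    val defaults = Timeouts.values().map(t => t -> t.getDefaultTimeout).toMap
    Timeout.setTimeouts(defaults + (COMPLETION -> millis))
  }
}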
|
gtache/intellij-lsp
|
intellij-lsp/src/com/github/gtache/lsp/requests/Timeout.scala
|
Scala
|
apache-2.0
| 1,522
|
/*
* Scala LZW
* Copyright (C) 2012, Wilfred Springer
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package nl.flotsam.lzw
trait Node {
def decode[T](i: Int, fn: (Byte) => T): Node
def encode[T](b: Byte, fn: (Int, Int) => T): Node
  def apply[T](fn: (Byte) => T): Unit
def root: Node
def bitsRequired: Int
  def terminate[T](fn: (Int, Int) => T): Unit
def first: Byte
}
|
jaimeguzman/scala-lzw
|
src/main/scala/nl/flotsam/lzw/Node.scala
|
Scala
|
gpl-3.0
| 986
|
package me.axiometry.blocknet.entity
import me.axiometry.blocknet._
trait Entity extends WorldLocatable with PreciseLocatable {
def id: Int
def x: Double
def y: Double
def z: Double
def yaw: Double
def pitch: Double
def rider: Entity
def riding: Entity
  def x_=(x: Double): Unit
  def y_=(y: Double): Unit
  def z_=(z: Double): Unit
  def yaw_=(yaw: Double): Unit
  def pitch_=(pitch: Double): Unit
  def rider_=(rider: Entity): Unit
  def riding_=(riding: Entity): Unit
override def location = Location.Precise(x, y, z)
def boundingBox: BoundingBox
def location_=(location: Location.Precise) = {
x = location.x
y = location.y
z = location.z
}
  def boundingBox_=(boundingBox: BoundingBox): Unit
}
|
Axiometry/Blocknet
|
blocknet-api/src/main/scala/me/axiometry/blocknet/entity/Entity.scala
|
Scala
|
bsd-2-clause
| 691
|
/*
* Copyright (c) 2014-2020 by The Monix Project Developers.
* See the project homepage at: https://monix.io
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package monix.reactive.internal.operators
import monix.reactive.Observable
import scala.concurrent.duration.Duration.Zero
import scala.concurrent.duration._
object WithLatestFrom3Suite extends BaseOperatorSuite {
def count(sourceCount: Int) = sourceCount
def sum(sourceCount: Int): Long =
sourceCount.toLong * (sourceCount + 1) / 2 + sourceCount * 30
def createObservable(sourceCount: Int) = {
require(sourceCount > 0, "sourceCount should be strictly positive")
Some {
val other = Observable.fromIterable(0 to 10)
val o =
if (sourceCount == 1)
Observable
.now(1L)
.delayExecution(1.second)
.withLatestFrom3(other, other, other)(_ + _ + _ + _)
else
Observable
.range(1, sourceCount.toLong + 1, 1)
.delayExecution(1.second)
.withLatestFrom3(other, other, other)(_ + _ + _ + _)
Sample(o, count(sourceCount), sum(sourceCount), 1.second, Zero)
}
}
def observableInError(sourceCount: Int, ex: Throwable) = Some {
val other = Observable.fromIterable(0 to 10)
val o =
if (sourceCount == 1)
Observable
.now(1L)
.delayExecution(1.second)
.endWithError(ex)
.withLatestFrom3(other, other, other)(_ + _ + _ + _)
else
Observable
.range(1, sourceCount.toLong + 1, 1)
.delayExecution(1.second)
.endWithError(ex)
.withLatestFrom3(other, other, other)(_ + _ + _ + _)
Sample(o, count(sourceCount), sum(sourceCount), 1.second, Zero)
}
def brokenUserCodeObservable(sourceCount: Int, ex: Throwable) = {
require(sourceCount > 0, "sourceCount should be strictly positive")
Some {
val other = Observable.fromIterable(0 to 10)
val o =
if (sourceCount == 1)
Observable
.now(1L)
.delayExecution(1.second)
.withLatestFrom3(other, other, other)((x1, x2, x3, x4) => throw ex)
else
Observable
.range(1, sourceCount.toLong + 1, 1)
.delayExecution(1.second)
.withLatestFrom3(other, other, other) { (x1, x2, x3, x4) =>
if (x1 == sourceCount)
throw ex
else
x1 + x2 + x3 + x4
}
Sample(o, count(sourceCount - 1), sum(sourceCount - 1), 1.second, Zero)
}
}
override def cancelableObservables(): Seq[Sample] = {
val other = Observable.now(1).delayExecution(1.second)
val sample = Observable
.now(1L)
.delayExecution(2.seconds)
.withLatestFrom3(other, other, other)(_ + _ + _ + _)
Seq(
Sample(sample, 0, 0, 0.seconds, 0.seconds),
Sample(sample, 0, 0, 1.seconds, 0.seconds)
)
}
}
|
alexandru/monifu
|
monix-reactive/shared/src/test/scala/monix/reactive/internal/operators/WithLatestFrom3Suite.scala
|
Scala
|
apache-2.0
| 3,460
|
package slate
package app
import cats.implicits._
import japgolly.scalajs.react.extra.Reusability
import qq.data.JSON
import qq.data.JSON.{JSONModification, ModifiedJSON}
import slate.app.refresh.BootRefreshPolicy
case class SlateProgramConfig(input: JSON, bootRefreshPolicy: BootRefreshPolicy)
// SlateProgramConfigModification is the type of modifications to a SlateProgramConfig.
// That is to say, any individual modification you want to make to a field of a SlateProgramConfig
// is expressible as a SlateProgramConfigModification. You can represent an arbitrary number of these
// actions with an arbitrary collection like List[SlateProgramConfigModification].
// This will constitute a monoid action on SlateProgramConfig.
sealed trait SlateProgramConfigModification {
def changesShape: Boolean
}
case class InputModification(jsonModification: JSONModification) extends SlateProgramConfigModification {
override def changesShape: Boolean = jsonModification.changesShape.value
}
case class BootRefreshPolicyModification(newPolicy: BootRefreshPolicy) extends SlateProgramConfigModification {
override def changesShape: Boolean = false
}
case class ModifiedSlateProgramConfig(input: ModifiedJSON, bootRefreshPolicy: BootRefreshPolicy, bootRefreshPolicyModified: Boolean) {
def commit: SlateProgramConfig = SlateProgramConfig(JSON.commit(input), bootRefreshPolicy)
}
object ModifiedSlateProgramConfig {
def unmodified(config: SlateProgramConfig): ModifiedSlateProgramConfig =
ModifiedSlateProgramConfig(JSON.unmodified(config.input), config.bootRefreshPolicy, bootRefreshPolicyModified = false)
}
object SlateProgramConfigModification {
implicit final class slateProgramModificationsOps(mods: List[SlateProgramConfigModification]) {
def apply(config: SlateProgramConfig): Option[ModifiedSlateProgramConfig] =
mods.foldM[Option, ModifiedSlateProgramConfig](ModifiedSlateProgramConfig.unmodified(config))((c, m) => m(c))
}
implicit final class slateProgramModificationOps(mod: SlateProgramConfigModification) {
def apply(config: ModifiedSlateProgramConfig): Option[ModifiedSlateProgramConfig] = mod match {
case InputModification(jm) => jm(config.input, JSON.Null, "" -> JSON.Null).map(newInput => config.copy(input = newInput))
case BootRefreshPolicyModification(newPolicy) => Some(config.copy(bootRefreshPolicy = newPolicy))
}
}
}
object SlateProgramConfig {
implicit val configReusability: Reusability[SlateProgramConfig] =
Reusability.byRefOr_==
// I would only like to re-render if the shape changes
implicit val modificationReusability: Reusability[List[SlateProgramConfigModification]] =
Reusability.byRefOr_==[List[JSONModification]]
.contramap[List[SlateProgramConfigModification]](_.collect { case InputModification(m) => m }.filter(_.changesShape.value))
}
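// Illustrative sketch (not part of the app): the monoid action described above, folding a
// List[SlateProgramConfigModification] over a SlateProgramConfig and committing the result.
// The helper object is hypothetical; it only uses the ops defined in SlateProgramConfigModification.
object SlateProgramConfigModificationSketch {
  import SlateProgramConfigModification._
  def applyAll(config: SlateProgramConfig,
               mods: List[SlateProgramConfigModification]): Option[SlateProgramConfig] =
    // None means some modification could not be applied to the current shape of the config.
    mods(config).map(_.commit)
}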
|
edmundnoble/slate
|
ui/src/main/scala/slate/app/SlateProgramConfig.scala
|
Scala
|
mit
| 2,849
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.blazedb.ml.util
import org.apache.spark.{SparkConf, SparkContext}
/**
* TestingSparkContext
*
*/
class TestingSparkContext(master: String) {
@transient var sc: SparkContext = _
val conf = new SparkConf().setMaster(master).setAppName("TestingSC")
sc = new SparkContext(conf)
sys.addShutdownHook {
if (sc != null) {
sc.stop()
}
}
}
|
javadba/mlpoc
|
src/main/scala/com/blazedb/ml/util/TestingSparkContext.scala
|
Scala
|
apache-2.0
| 1,176
|
package inloopio.math
import java.math.BigDecimal
/**
* Utilities for comparing numbers.
*
* Ported by Caoyuan Deng from Java version at org.apache.commons.math3.util
*/
object Precision {
/**
* Smallest positive number such that {@code 1 - EPSILON} is not
* numerically equal to 1: {@value}.
*/
val EPSILON: Double = math.pow(2, -53) //0x1.0p-53
/**
* Safe minimum, such that {@code 1 / SAFE_MIN} does not overflow.
* In IEEE 754 arithmetic, this is also the smallest normalized
* number 2<sup>-1022</sup>: {@value}.
*/
val SAFE_MIN: Double = math.pow(2, -1022) //0x1.0p-1022
/** Offset to order signed Double numbers lexicographically. */
val SGN_MASK = 0x8000000000000000L
/** Offset to order signed Double numbers lexicographically. */
val SGN_MASK_FLOAT = 0x80000000
/**
* Compares two numbers given some amount of allowed error.
*
* @param x the first number
* @param y the second number
* @param eps the amount of error to allow when checking for equality
* @return <ul><li>0 if {@link #equals(Double, Double, Double) equals(x, y, eps)}</li>
* <li>< 0 if !{@link #equals(Double, Double, Double) equals(x, y, eps)} && x < y</li>
* <li>> 0 if !{@link #equals(Double, Double, Double) equals(x, y, eps)} && x > y</li></ul>
*/
def compare(x: Double, y: Double, eps: Double): Int = {
if (equals(x, y, eps)) {
0
} else if (x < y) {
-1
} else {
1
}
}
/**
* Compares two numbers given some amount of allowed error.
* Two Float numbers are considered equal if there are {@code (maxUlps - 1)}
* (or fewer) floating point numbers between them, i.e. two adjacent floating
* point numbers are considered equal.
* Adapted from <a
* href="http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm">
* Bruce Dawson</a>
*
* @param x first value
* @param y second value
* @param maxUlps {@code (maxUlps - 1)} is the number of floating point
* values between {@code x} and {@code y}.
* @return <ul><li>0 if {@link #equals(Double, Double, int) equals(x, y, maxUlps)}</li>
* <li>< 0 if !{@link #equals(Double, Double, int) equals(x, y, maxUlps)} && x < y</li>
* <li>> 0 if !{@link #equals(Double, Double, int) equals(x, y, maxUlps)} && x > y</li></ul>
*/
def compare(x: Double, y: Double, maxUlps: Int): Int = {
if (equals(x, y, maxUlps)) {
0
} else if (x < y) {
-1
} else {
1
}
}
/**
* Returns true iff they are equal as defined by
* {@link #equals(Float,Float,int) equals(x, y, 1)}.
*
* @param x first value
* @param y second value
* @return {@code true} if the values are equal.
*/
def equals(x: Float, y: Float): Boolean = {
equals(x, y, 1)
}
/**
* Returns true if both arguments are NaN or neither is NaN and they are
* equal as defined by {@link #equals(Float,Float) equals(x, y, 1)}.
*
* @param x first value
* @param y second value
* @return {@code true} if the values are equal or both are NaN.
* @since 2.2
*/
def equalsIncludingNaN(x: Float, y: Float): Boolean = {
(java.lang.Float.isNaN(x) && java.lang.Float.isNaN(y)) || equals(x, y, 1)
}
/**
* Returns true if both arguments are equal or within the range of allowed
* error (inclusive).
*
* @param x first value
* @param y second value
* @param eps the amount of absolute error to allow.
* @return {@code true} if the values are equal or within range of each other.
* @since 2.2
*/
def equals(x: Float, y: Float, eps: Float): Boolean = {
equals(x, y, 1) || math.abs(y - x) <= eps
}
/**
* Returns true if both arguments are NaN or are equal or within the range
* of allowed error (inclusive).
*
* @param x first value
* @param y second value
* @param eps the amount of absolute error to allow.
* @return {@code true} if the values are equal or within range of each other,
* or both are NaN.
* @since 2.2
*/
def equalsIncludingNaN(x: Float, y: Float, eps: Float): Boolean = {
equalsIncludingNaN(x, y) || (math.abs(y - x) <= eps)
}
/**
* Returns true if both arguments are equal or within the range of allowed
* error (inclusive).
* Two Float numbers are considered equal if there are {@code (maxUlps - 1)}
* (or fewer) floating point numbers between them, i.e. two adjacent floating
* point numbers are considered equal.
* Adapted from <a
* href="http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm">
* Bruce Dawson</a>
*
* @param x first value
* @param y second value
* @param maxUlps {@code (maxUlps - 1)} is the number of floating point
* values between {@code x} and {@code y}.
* @return {@code true} if there are fewer than {@code maxUlps} floating
* point values between {@code x} and {@code y}.
* @since 2.2
*/
def equals(x: Float, y: Float, maxUlps: Int): Boolean = {
var xInt = java.lang.Float.floatToIntBits(x)
var yInt = java.lang.Float.floatToIntBits(y)
// Make lexicographically ordered as a two's-complement integer.
if (xInt < 0) {
xInt = SGN_MASK_FLOAT - xInt
}
if (yInt < 0) {
yInt = SGN_MASK_FLOAT - yInt
}
val isEqual = math.abs(xInt - yInt) <= maxUlps
isEqual && !java.lang.Float.isNaN(x) && !java.lang.Float.isNaN(y)
}
/**
* Returns true if both arguments are NaN or if they are equal as defined
* by {@link #equals(Float,Float,int) equals(x, y, maxUlps)}.
*
* @param x first value
* @param y second value
* @param maxUlps {@code (maxUlps - 1)} is the number of floating point
* values between {@code x} and {@code y}.
* @return {@code true} if both arguments are NaN or if there are less than
* {@code maxUlps} floating point values between {@code x} and {@code y}.
* @since 2.2
*/
def equalsIncludingNaN(x: Float, y: Float, maxUlps: Int): Boolean = {
(java.lang.Float.isNaN(x) && java.lang.Float.isNaN(y)) || equals(x, y, maxUlps)
}
/**
* Returns true iff they are equal as defined by
* {@link #equals(Double,Double,int) equals(x, y, 1)}.
*
* @param x first value
* @param y second value
* @return {@code true} if the values are equal.
*/
  def equals(x: Double, y: Double): Boolean = {
    equals(x, y, 1)
  }
/**
* Returns true if both arguments are NaN or neither is NaN and they are
* equal as defined by {@link #equals(Double,Double) equals(x, y, 1)}.
*
* @param x first value
* @param y second value
* @return {@code true} if the values are equal or both are NaN.
* @since 2.2
*/
def equalsIncludingNaN(x: Double, y: Double): Boolean = {
(java.lang.Double.isNaN(x) && java.lang.Double.isNaN(y)) || equals(x, y, 1)
}
/**
* Returns {@code true} if there is no Double value strictly between the
* arguments or the difference between them is within the range of allowed
* error (inclusive).
*
* @param x First value.
* @param y Second value.
* @param eps Amount of allowed absolute error.
* @return {@code true} if the values are two adjacent floating point
* numbers or they are within range of each other.
*/
def equals(x: Double, y: Double, eps: Double): Boolean = {
equals(x, y, 1) || math.abs(y - x) <= eps
}
/**
* Returns true if both arguments are NaN or are equal or within the range
* of allowed error (inclusive).
*
* @param x first value
* @param y second value
* @param eps the amount of absolute error to allow.
* @return {@code true} if the values are equal or within range of each other,
* or both are NaN.
* @since 2.2
*/
def equalsIncludingNaN(x: Double, y: Double, eps: Double): Boolean = {
equalsIncludingNaN(x, y) || (math.abs(y - x) <= eps)
}
/**
* Returns true if both arguments are equal or within the range of allowed
* error (inclusive).
* Two Float numbers are considered equal if there are {@code (maxUlps - 1)}
* (or fewer) floating point numbers between them, i.e. two adjacent floating
* point numbers are considered equal.
* Adapted from <a
* href="http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm">
* Bruce Dawson</a>
*
* @param x first value
* @param y second value
* @param maxUlps {@code (maxUlps - 1)} is the number of floating point
* values between {@code x} and {@code y}.
* @return {@code true} if there are fewer than {@code maxUlps} floating
* point values between {@code x} and {@code y}.
*/
def equals(x: Double, y: Double, maxUlps: Int): Boolean = {
var xInt = java.lang.Double.doubleToLongBits(x)
var yInt = java.lang.Double.doubleToLongBits(y)
// Make lexicographically ordered as a two's-complement integer.
if (xInt < 0) {
xInt = SGN_MASK - xInt
}
if (yInt < 0) {
yInt = SGN_MASK - yInt
}
val isEqual = math.abs(xInt - yInt) <= maxUlps
isEqual && !java.lang.Double.isNaN(x) && !java.lang.Double.isNaN(y)
}
/**
* Returns true if both arguments are NaN or if they are equal as defined
* by {@link #equals(Double,Double,int) equals(x, y, maxUlps)}.
*
* @param x first value
* @param y second value
* @param maxUlps {@code (maxUlps - 1)} is the number of floating point
* values between {@code x} and {@code y}.
* @return {@code true} if both arguments are NaN or if there are less than
* {@code maxUlps} floating point values between {@code x} and {@code y}.
* @since 2.2
*/
def equalsIncludingNaN(x: Double, y: Double, maxUlps: Int): Boolean = {
(java.lang.Double.isNaN(x) && java.lang.Double.isNaN(y)) || equals(x, y, maxUlps)
}
/**
* Rounds the given value to the specified number of decimal places.
* The value is rounded using the {@link BigDecimal#ROUND_HALF_UP} method.
*
* @param x Value to round.
* @param scale Number of digits to the right of the decimal point.
* @return the rounded value.
* @since 1.1 (previously in {@code MathUtils}, moved as of version 3.0)
*/
def round(x: Double, scale: Int): Double = {
round(x, scale, BigDecimal.ROUND_HALF_UP)
}
/**
* Rounds the given value to the specified number of decimal places.
* The value is rounded using the given method which is any method defined
* in {@link BigDecimal}.
* If {@code x} is infinite or {@code NaN}, then the value of {@code x} is
* returned unchanged, regardless of the other parameters.
*
* @param x Value to round.
* @param scale Number of digits to the right of the decimal point.
* @param roundingMethod Rounding method as defined in {@link BigDecimal}.
* @return the rounded value.
* @throws ArithmeticException if {@code roundingMethod == ROUND_UNNECESSARY}
* and the specified scaling operation would require rounding.
* @throws IllegalArgumentException if {@code roundingMethod} does not
* represent a valid rounding mode.
* @since 1.1 (previously in {@code MathUtils}, moved as of version 3.0)
*/
def round(x: Double, scale: Int, roundingMethod: Int): Double = {
try {
(new BigDecimal(java.lang.Double.toString(x)).setScale(scale, roundingMethod)).doubleValue
} catch {
case ex: NumberFormatException =>
if (java.lang.Double.isInfinite(x)) {
x
} else {
Double.NaN
}
}
}
/**
* Rounds the given value to the specified number of decimal places.
* The value is rounded using the {@link BigDecimal#ROUND_HALF_UP} method.
*
* @param x Value to round.
* @param scale Number of digits to the right of the decimal point.
* @return the rounded value.
* @since 1.1 (previously in {@code MathUtils}, moved as of version 3.0)
*/
def round(x: Float, scale: Int): Float = {
round(x, scale, BigDecimal.ROUND_HALF_UP)
}
/**
* Rounds the given value to the specified number of decimal places.
* The value is rounded using the given method which is any method defined
* in {@link BigDecimal}.
*
* @param x Value to round.
* @param scale Number of digits to the right of the decimal point.
* @param roundingMethod Rounding method as defined in {@link BigDecimal}.
* @return the rounded value.
* @since 1.1 (previously in {@code MathUtils}, moved as of version 3.0)
*/
def round(x: Float, scale: Int, roundingMethod: Int): Float = {
val sign = java.lang.Math.copySign(1f, x)
val factor = math.pow(10.0f, scale).toFloat * sign
roundUnscaled(x * factor, sign, roundingMethod).toFloat / factor
}
/**
* Rounds the given non-negative value to the "nearest" integer. Nearest is
* determined by the rounding method specified. Rounding methods are defined
* in {@link BigDecimal}.
*
* @param unscaled Value to round.
* @param sign Sign of the original, scaled value.
* @param roundingMethod Rounding method, as defined in {@link BigDecimal}.
* @return the rounded value.
* @throws MathIllegalArgumentException if {@code roundingMethod} is not a valid rounding method.
* @since 1.1 (previously in {@code MathUtils}, moved as of version 3.0)
*/
private def roundUnscaled(unscaled: Double, sign: Double, roundingMethod: Int): Double = {
val roundedUnscaled = roundingMethod match {
case BigDecimal.ROUND_CEILING =>
if (sign == -1) {
math.floor(java.lang.Math.nextAfter(unscaled, Double.NegativeInfinity))
} else {
math.ceil(java.lang.Math.nextAfter(unscaled, Double.PositiveInfinity))
}
case BigDecimal.ROUND_DOWN =>
math.floor(java.lang.Math.nextAfter(unscaled, Double.NegativeInfinity))
case BigDecimal.ROUND_FLOOR =>
if (sign == -1) {
math.ceil(java.lang.Math.nextAfter(unscaled, Double.PositiveInfinity))
} else {
math.floor(java.lang.Math.nextAfter(unscaled, Double.NegativeInfinity))
}
case BigDecimal.ROUND_HALF_DOWN =>
val unscaledTmp = java.lang.Math.nextAfter(unscaled, Double.NegativeInfinity)
val fraction = unscaledTmp - math.floor(unscaledTmp)
if (fraction > 0.5) {
math.ceil(unscaledTmp)
} else {
math.floor(unscaledTmp)
}
case BigDecimal.ROUND_HALF_EVEN =>
val fraction = unscaled - math.floor(unscaled)
if (fraction > 0.5) {
math.ceil(unscaled)
} else if (fraction < 0.5) {
math.floor(unscaled)
} else {
// The following equality test is intentional and needed for rounding purposes
if (math.floor(unscaled) / 2.0 == math.floor(math.floor(unscaled) / 2.0)) { // even
math.floor(unscaled)
} else { // odd
math.ceil(unscaled)
}
}
case BigDecimal.ROUND_HALF_UP =>
val unscaledTmp = java.lang.Math.nextAfter(unscaled, Double.PositiveInfinity)
val fraction = unscaledTmp - math.floor(unscaledTmp)
if (fraction >= 0.5) {
math.ceil(unscaled)
} else {
math.floor(unscaled)
}
case BigDecimal.ROUND_UNNECESSARY =>
if (unscaled != math.floor(unscaled)) {
throw new ArithmeticException()
} else unscaled
case BigDecimal.ROUND_UP =>
math.ceil(java.lang.Math.nextAfter(unscaled, Double.PositiveInfinity))
case _ =>
throw new IllegalArgumentException("invalid rounding method {0}, valid methods: {1} ({2}), {3} ({4}), {5} ({6}), {7} ({8}), {9} ({10}), {11} ({12}), {13} ({14}), {15} ({16})".format(
roundingMethod,
"ROUND_CEILING", BigDecimal.ROUND_CEILING,
"ROUND_DOWN", BigDecimal.ROUND_DOWN,
"ROUND_FLOOR", BigDecimal.ROUND_FLOOR,
"ROUND_HALF_DOWN", BigDecimal.ROUND_HALF_DOWN,
"ROUND_HALF_EVEN", BigDecimal.ROUND_HALF_EVEN,
"ROUND_HALF_UP", BigDecimal.ROUND_HALF_UP,
"ROUND_UNNECESSARY", BigDecimal.ROUND_UNNECESSARY,
"ROUND_UP", BigDecimal.ROUND_UP))
}
roundedUnscaled
}
/**
* Computes a number {@code delta} close to {@code originalDelta} with
* the property that <pre><code>
* x + delta - x
* </code></pre>
* is exactly machine-representable.
* This is useful when computing numerical derivatives, in order to reduce
* roundoff errors.
*
* @param x Value.
* @param originalDelta Offset value.
* @return a number {@code delta} so that {@code x + delta} and {@code x}
* differ by a representable floating number.
*/
def representableDelta(x: Double, originalDelta: Double): Double = {
x + originalDelta - x
}
// --- simple test
def main(args: Array[String]) {
println("EPSILON = " + Precision.EPSILON)
println("SAFE_MIN = " + Precision.SAFE_MIN)
println("Math.hypot(8.5, 11.19) = " + java.lang.Math.hypot(8.5, 11.19))
}
}
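// Illustrative sketch (not part of the original port): the ULP- and epsilon-based comparisons
// and the rounding helper above, exercised on the classic 0.1 + 0.2 example. The demo object
// is hypothetical.
object PrecisionUsageSketch {
  def demo(): Unit = {
    val a = 0.1 + 0.2 // 0.30000000000000004, one ULP above 0.3
    val b = 0.3
    println(a == b)                       // false: plain IEEE 754 comparison
    println(Precision.equals(a, b, 1))    // true: within 1 ULP
    println(Precision.equals(a, b, 1e-9)) // true: within an absolute epsilon
    println(Precision.round(a, 2))        // 0.3, rounded with ROUND_HALF_UP
  }
}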
|
dcaoyuan/inloopio-libs
|
inloopio-math/src/main/scala/inloopio/math/Precision.scala
|
Scala
|
bsd-3-clause
| 16,908
|
package org.phillipgreenii.codedependencytracker
import org.scalatest.{FeatureSpec, GivenWhenThen}
import org.scalatest.concurrent.Timeouts
import org.scalatest.time.SpanSugar._
class AppFunSpec extends FeatureSpec with GivenWhenThen with Timeouts {
feature("App") {
scenario("app should run") {
Given("an app")
When("the app runs")
App.run()
Then("no errors occur")
}
}
}
|
phillipgreenii/code-dependency-tracker
|
src/it/scala/org/phillipgreenii/codedependencytracker/AppFunSpec.scala
|
Scala
|
mit
| 415
|
import scala.reflect.runtime.universe._
import scala.tools.reflect.Eval
object Test extends App {
reify {
List(1, 2, 3) match {
case foo :: bar :: _ => println(foo * bar)
case _ => println("this is getting out of hand!")
}
}.eval
}
|
som-snytt/dotty
|
tests/disabled/macro/run/t5273_1_oldpatmat.scala
|
Scala
|
apache-2.0
| 257
|
package verdandi.ui.summary
import verdandi.ui.swing.RichGridBagPanel
import scala.swing.GridBagPanel._
import verdandi.model.SummaryModel
import verdandi.ui.WidthStoringTable
import scala.swing.BorderPanel
import scala.swing.ScrollPane
import verdandi.ui.swing.RichBorderPanel
import scala.swing.Reactor
import verdandi.event.WorkRecordEvent
import verdandi.event.EventBroadcaster
import verdandi.ui.swing.RichBoxPanel
import scala.swing.Orientation
import scala.swing.ComboBox
import scala.swing.Action
import scala.swing.event.SelectionChanged
import verdandi.ui.swing.Spinner
import verdandi.event.SummaryPeriodChanged
import scala.swing.Label
import java.awt.Font
import verdandi.ui.TextResources
import javax.swing.BorderFactory
import java.util.prefs.Preferences
import verdandi.Prefs
import verdandi.Pref
import verdandi.StringPref
import scala.swing.GridBagPanel
import java.awt.Color
import verdandi.event.SummaryPeriodTypeChanged
import verdandi.model.DefaultListener
class SummaryPanel extends RichBorderPanel {
val summaryModel = new SummaryModel()
val summaryTable = new WidthStoringTable(summaryModel)
reactions += {
case evt: SummaryPeriodTypeChanged => {
revalidate()
}
}
object PeriodSelectionPanel extends RichGridBagPanel with Prefs {
val c = RichGridbagConstraints.default.withInsets(2, 2, 2, 2).withFill(Fill.Vertical)
override val prefs = Map("selectedType" -> StringPref(PeriodType.CalendarWeek.id))
load()
val periodSelector = new ComboBox(List(PeriodType.Day, PeriodType.CalendarWeek, PeriodType.Month)) {
selection.item = PeriodType(prefs("selectedType").value)
summaryModel.periodTypeChanged(selection.item)
}
listenTo(periodSelector.selection)
val spinner = new Spinner {
model = summaryModel.PeriodSpinnerModel
}
add(new Label(TextResources("summarypanel.periodselector.label")), c)
add(periodSelector, c.withGridXInc)
add(spinner, c.withGridXInc)
add(new Label(""), c.withGridXInc.withFill(Fill.Horizontal).withWeightX(1.0))
reactions += {
case sel: SelectionChanged => {
summaryModel.periodTypeChanged(periodSelector.selection.item)
}
}
border = BorderFactory.createEmptyBorder(5, 0, 5, 0)
override def storePrefs() {
prefs("selectedType").value = periodSelector.selection.item.id
}
}
object SumTotalPanel extends RichBoxPanel(Orientation.Horizontal) {
val labelPrefix = TextResources("summarypanel.sumtotal.sumlabel")
val sum = new Label(labelPrefix + summaryModel.sumTotal().format) {
peer.setFont(boldFont)
def boldFont = {
new Font(peer.getFont.getFontName, Font.BOLD, peer.getFont.getSize);
}
}
reactions += {
case evt: SummaryPeriodChanged => sum.text = labelPrefix + summaryModel.sumTotal().format
}
contents += createHorizontalGlue()
contents += sum
border = BorderFactory.createEmptyBorder(3, 0, 3, 0)
}
add(new ScrollPane(summaryTable), BorderPanel.Position.Center);
add(PeriodSelectionPanel, BorderPanel.Position.North)
add(SumTotalPanel, BorderPanel.Position.South)
}
|
osebelin/verdandi
|
src/main/scala/verdandi/ui/summary/SummaryPanel.scala
|
Scala
|
gpl-3.0
| 3,139
|
/**
 * In Java, a class member that is not declared
 * public, protected or private is visible within
 * the package that contains the class. In Scala,
 * you can get the same effect with access qualifiers.
 * The following method is visible within its own package.
 */
package capitulo7.com.hortsmaan.impatient.people
class Person {
private[people] def description = "Uma pessoa com nome ..."
}
/**
 * The visibility can be extended to the enclosing
 * package:
 * {{{
 * class Person {
 *   private[impatient] def description = "Uma pessoa com nome ..."
 * }
 * }}}
 */
|
celioeduardo/scala-impatient
|
src/test/scala/capitulo07/VisibilidadeDaPackage.scala
|
Scala
|
mit
| 554
|
// Copyright (c) 2016 pishen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package annoy4s
import com.sun.jna._
trait AnnoyLibrary extends Library {
def createAngular(f: Int): Pointer
def createEuclidean(f: Int): Pointer
def createManhattan(f: Int): Pointer
def createHamming(f: Int): Pointer
def deleteIndex(ptr: Pointer): Unit
def addItem(ptr: Pointer, item: Int, w: Array[Float]): Unit
def build(ptr: Pointer, q: Int): Unit
def save(ptr: Pointer, filename: String): Boolean
def unload(ptr: Pointer): Unit
def load(ptr: Pointer, filename: String): Boolean
def getDistance(ptr: Pointer, i: Int, j: Int): Float
def getNnsByItem(ptr: Pointer, item: Int, n: Int, searchK: Int, result: Array[Int], distances: Array[Float]): Unit
def getNnsByVector(ptr: Pointer, w: Array[Float], n: Int, searchK: Int, result: Array[Int], distances: Array[Float]): Unit
def getNItems(ptr: Pointer): Int
def verbose(ptr: Pointer, v: Boolean): Unit
def getItem(ptr: Pointer, item: Int, v: Array[Float]): Unit
}
|
pishen/annoy4s
|
src/main/scala/annoy4s/AnnoyLibrary.scala
|
Scala
|
apache-2.0
| 1,535
|
// goseumdochi: experiments with incarnation
// Copyright 2016 John V. Sichi
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package org.goseumdochi.android.watchdog
import org.goseumdochi.android._
import org.goseumdochi.android.R
import org.goseumdochi.android.lib._
import android.os._
import android.text.method._
import android.widget._
abstract class WatchdogErrorActivity(
viewId : Int, linkView : Option[TypedResource[TextView]] = None)
extends ErrorActivity(viewId, classOf[SetupActivity])
with WatchdogMainMenuActivityBase
{
override protected def onCreate(savedInstanceState : Bundle)
{
super.onCreate(savedInstanceState)
linkView.foreach(
findView(_).setMovementMethod(LinkMovementMethod.getInstance))
}
}
class WatchdogBumpActivity extends WatchdogErrorActivity(R.layout.bump)
class WatchdogLostActivity extends WatchdogErrorActivity(R.layout.lost)
class WatchdogUnfoundActivity extends WatchdogErrorActivity(R.layout.unfound)
class WatchdogBluetoothErrorActivity extends WatchdogErrorActivity(
R.layout.bluetooth, Some(TR.bluetooth_error_content))
{
override protected def getSubject =
"Need Help with Watchdog for Sphero Connection"
}
|
lingeringsocket/goseumdochi
|
watchdog/src/main/scala/org/goseumdochi/android/watchdog/WatchdogErrorActivity.scala
|
Scala
|
apache-2.0
| 1,707
|
/*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
package com.flyberrycapital.slack.Methods
import com.flyberrycapital.slack.{HttpClient, SlackIM, SlackMessage}
import org.joda.time.DateTime
import play.api.libs.json.{JsObject, JsValue}
/**
* The container for Slack's 'im' methods (https://api.slack.com/methods).
*
* <i>Note: This is a partial implementation, and some (i.e. most) methods are unimplemented.</i>
*/
class IM(httpClient: HttpClient, apiToken: String) {
import com.flyberrycapital.slack.Responses._
/**
* https://api.slack.com/methods/im.close
*
* @param channel The channel ID for the direct message history to close.
* @return IMCloseResponse
*/
def close(channel: String): IMCloseResponse = {
val params = Map("channel" -> channel, "token" -> apiToken)
val responseDict = httpClient.get("im.close", params)
IMCloseResponse(
(responseDict \ "ok").as[Boolean],
(responseDict \ "no_op").asOpt[Boolean].getOrElse(false),
(responseDict \ "already_closed").asOpt[Boolean].getOrElse(false)
)
}
/**
* https://api.slack.com/methods/im.history
*
* The format is exactly the same as channels.history, with the exception that we call im.history.
* Code copied from [[com.flyberrycapital.slack.Methods.Channels]]
*
* @param channel The channel ID of the IM to get history for.
* @param params A map of optional parameters and their values.
* @return ChannelHistoryResponse
*/
def history(channel: String, params: Map[String, String] = Map()): ChannelHistoryResponse = {
val cleanedParams = params + ("channel" -> channel, "token" -> apiToken)
val responseDict = httpClient.get("im.history", cleanedParams)
val messages = (responseDict \ "messages").as[List[JsObject]] map { (x) =>
      val user = (x \ "user").asOpt[String]
SlackMessage(
(x \ "type").as[String],
(x \ "ts").as[String],
user,
(x \ "text").asOpt[String],
(x \ "is_starred").asOpt[Boolean].getOrElse(false),
(x \ "attachments").asOpt[List[JsValue]].getOrElse(List()),
new DateTime(((x \ "ts").as[String].toDouble * 1000).toLong)
)
}
ChannelHistoryResponse(
(responseDict \ "ok").as[Boolean],
messages,
(responseDict \ "has_more").asOpt[Boolean].getOrElse(false),
(responseDict \ "is_limited").asOpt[Boolean].getOrElse(false)
)
}
/**
* A wrapper around the im.history method that allows users to stream through a channel's past messages
* seamlessly without having to worry about pagination and multiple queries.
*
* @param channel The channel ID to fetch history for.
* @param params A map of optional parameters and their values.
* @return Iterator of SlackMessages, ordered by time in descending order.
*/
def historyStream(channel: String, params: Map[String, String] = Map()): Iterator[SlackMessage] = {
new Iterator[SlackMessage] {
var hist = history(channel, params = params)
var messages = hist.messages
def hasNext = messages.nonEmpty
def next() = {
val m = messages.head
messages = messages.tail
if (messages.isEmpty && hist.hasMore) {
hist = history(channel, params = params + ("latest" -> m.ts))
messages = hist.messages
}
m
}
}
}
/**
* https://api.slack.com/methods/im.list
*
* @return IMListResponse of all open IM channels
*/
def list(): IMListResponse = {
val params = Map("token" -> apiToken)
val responseDict = httpClient.get("im.list", params)
val ims = (responseDict \ "ims").as[List[JsObject]] map { (im) =>
SlackIM(
(im \ "id").as[String],
(im \ "user").as[String],
(im \ "created").as[Int],
(im \ "is_user_deleted").as[Boolean]
)
}
IMListResponse(
(responseDict \ "ok").as[Boolean],
ims
)
}
/**
* https://api.slack.com/methods/im.mark
*
* @param channel The channel ID for the direct message history to set reading cursor in.
* @param params ts Timestamp of the most recently seen message.
* @return IMMarkResponse
*/
def mark(channel: String, ts: String): IMMarkResponse = {
val params = Map("channel" -> channel, "ts" -> ts, "token" -> apiToken)
val responseDict = httpClient.get("im.mark", params)
IMMarkResponse(
(responseDict \ "ok").as[Boolean]
)
}
/**
* https://api.slack.com/methods/im.open
*
* @param user The user ID for the user to open a direct message channel with.
* @return IMOpenResponse
*/
def open(user: String): IMOpenResponse = {
val params = Map("user" -> user, "token" -> apiToken)
val responseDict = httpClient.get("im.open", params)
IMOpenResponse(
(responseDict \ "ok").as[Boolean],
(responseDict \ "channel" \ "id").as[String],
(responseDict \ "no_op").asOpt[Boolean].getOrElse(false),
(responseDict \ "already_open").asOpt[Boolean].getOrElse(false)
)
}
}
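// Illustrative sketch (not part of the library): using historyStream to walk back through a DM
// channel without handling pagination by hand. The helper object, channel ID and message count
// are placeholders.
object IMUsageSketch {
  def latestTexts(httpClient: HttpClient, apiToken: String, channel: String, n: Int): List[String] = {
    val im = new IM(httpClient, apiToken)
    // historyStream yields SlackMessages newest-first; keep the text of the first n of them.
    im.historyStream(channel).take(n).flatMap(_.text.toList).toList
  }
}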
|
flyberry-capital/scala-slack
|
src/main/scala/com/flyberrycapital/slack/Methods/IM.scala
|
Scala
|
mit
| 6,411
|
package models.database.alias.service
import org.squeryl.annotations._
case class NapsterArtist(@Column("id_napster_artist") id:Long, @Column("is_analysed") isAnalysed:Boolean)
|
haffla/stream-compare
|
app/models/database/alias/service/NapsterArtist.scala
|
Scala
|
gpl-3.0
| 178
|
package com.dzegel.DynamockServer.service
import com.dzegel.DynamockServer.types.Response
import org.scalatest.{BeforeAndAfterEach, FunSuite, Matchers}
class ResponseStoreTests extends FunSuite with BeforeAndAfterEach with Matchers {
private var responseStore: ResponseStore = _
private val expectationId1 = "1"
private val expectationId2 = "2"
private val expectationIds = Set(expectationId1, expectationId2)
private val response1 = Response(100, "content 1", Map("1" -> "1"))
private val response2 = Response(200, "content 2", Map("2" -> "2"))
override protected def beforeEach(): Unit = {
responseStore = new DefaultResponseStore()
}
test("registerResponse and getResponses work") {
responseStore.registerResponse(expectationId1, response1) shouldBe false
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response1)
responseStore.registerResponse(expectationId2, response2) shouldBe false
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response1, expectationId2 -> response2)
}
test("registerResponse and getResponses work for overwritten response") {
responseStore.getResponses(expectationIds) shouldBe empty
responseStore.registerResponse(expectationId1, response1) shouldBe false
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response1)
responseStore.registerResponse(expectationId1, response1) shouldBe false
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response1)
responseStore.registerResponse(expectationId1, response2) shouldBe true
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response2)
}
test("getResponses safely handles non-registered expectationIds") {
val getResult1 = responseStore.getResponses(expectationIds)
responseStore.registerResponse(expectationId1, response1)
val getResult2 = responseStore.getResponses(expectationIds)
getResult1 shouldBe empty
getResult2 shouldBe Map(expectationId1 -> response1)
}
test("deleteResponses") {
val expectationIdToResponse = Map(expectationId1 -> response1, expectationId2 -> response2)
responseStore.getResponses(expectationIds) shouldBe empty
responseStore.registerResponse(expectationId1, response1)
responseStore.registerResponse(expectationId2, response2)
responseStore.getResponses(expectationIds) shouldBe expectationIdToResponse
responseStore.deleteResponses(expectationIds)
responseStore.getResponses(expectationIds) shouldBe empty
responseStore.registerResponse(expectationId1, response1)
responseStore.registerResponse(expectationId2, response2)
responseStore.getResponses(expectationIds) shouldBe expectationIdToResponse
responseStore.deleteResponses(Set(expectationId2))
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response1)
}
test("clearAllResponses") {
responseStore.registerResponse(expectationId1, response1)
responseStore.registerResponse(expectationId2, response2)
responseStore.getResponses(expectationIds) shouldBe Map(expectationId1 -> response1, expectationId2 -> response2)
responseStore.clearAllResponses()
responseStore.getResponses(expectationIds) shouldBe empty
}
}
|
dzegel/DynamockServer
|
src/test/scala/com/dzegel/DynamockServer/service/ResponseStoreTests.scala
|
Scala
|
apache-2.0
| 3,310
|
package julienrf.forms
import org.scalacheck.Properties
object PresenterTest extends Properties("Presenter") {
val p = new Presenter[Int, Option[Seq[String]]] {
def render(field: Field[Int]): Option[Seq[String]] = field.value
}
property("transform") = {
def transform[A, B](presenter: Presenter[A, B], field: Field[A], f: B => B): Boolean =
presenter.transform(f).render(field) == f(presenter.render(field))
transform(p, Field("foo", codecs.Codec.int, Some(Seq("bar", "baz")), Nil), (_: FieldData).map(_.reverse))
}
property("defaultValue") = {
val defaultValue = 42
val filledField = Field("foo", codecs.Codec.int, Some(Seq("bar", "baz")), Nil)
val emptyField = filledField.copy(value = Option.empty[Seq[String]])
val pd = p.defaultValue(defaultValue)
pd.render(emptyField) == Some(Seq("42")) && pd.render(filledField) == Some(Seq("bar", "baz"))
}
}
|
julienrf/play-forms
|
play-forms/src/test/scala/julienrf/forms/PresenterTest.scala
|
Scala
|
mit
| 908
|
package cpup.poland.parser
object TokenMatching {
trait Matcher {
def check(tok: Lexer.Token): Boolean
}
object MNone extends Matcher {
def check(tok: Lexer.Token) = false
}
object MAll extends Matcher {
def check(tok: Lexer.Token) = true
}
case class MToken(matchToken: Lexer.Token) extends Matcher {
def check(tok: Lexer.Token) = tok == matchToken
}
case class MTokenType(matchType: Lexer.TokenType) extends Matcher {
def check(tok: Lexer.Token) = tok.tokenType == matchType
}
case class MAnd(a: Matcher, b: Matcher) extends Matcher {
def check(tok: Lexer.Token) = a.check(tok) && b.check(tok)
}
case class MOr(a: Matcher, b: Matcher) extends Matcher {
def check(tok: Lexer.Token) = a.check(tok) || b.check(tok)
}
case class MNot(matcher: Matcher) extends Matcher {
def check(tok: Lexer.Token) = !matcher.check(tok)
}
case class MMOr(matchers: Seq[Matcher]) extends Matcher {
def check(tok: Lexer.Token) = matchers.exists(_.check(tok))
}
case class MMAnd(matchers: Seq[Matcher]) extends Matcher {
def check(tok: Lexer.Token) = matchers.forall(_.check(tok))
}
def all = MAll
def none = MNone
def not(matcher: Matcher) = MNot(matcher)
def token(tok: Lexer.Token) = MToken(tok)
def tokenType(tokenType: Lexer.TokenType) = MTokenType(tokenType)
def or(a: Matcher, b: Matcher) = MOr(a, b)
def mor(matchers: Matcher*) = MMOr(matchers)
def and(a: Matcher, b: Matcher) = MAnd(a, b)
def mand(matchers: Matcher*) = MMAnd(matchers)
}
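// Composition sketch (not part of the original file; `identType`, `openParen` and
// `tok` are hypothetical values supplied by the surrounding Lexer):
//
//   import cpup.poland.parser.TokenMatching._
//   val m = and(tokenType(identType), not(token(openParen)))
//   m.check(tok)   // true only when both sub-matchers accept the token
//
// mor/mand are the variadic counterparts of or/and, implemented with
// exists/forall over the supplied matchers.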
|
CoderPuppy/poland-scala
|
src/main/scala/cpup/poland/parser/TokenMatching.scala
|
Scala
|
mit
| 1,487
|
import org.fusesource.scalate._
class ScalateSample {
def layout(): String = {
val bindings = Map(
"name" -> List("Scala", "Java")
)
val engine = new TemplateEngine()
engine.layout("sample.jade", bindings)
}
}
|
grimrose/shibuya-java-05-5
|
src/test/scala/ScalateSample.scala
|
Scala
|
mit
| 240
|
package org.openurp.edu.grade.course.domain.impl
import java.util.Date
import org.openurp.base.model.Semester
import org.openurp.edu.base.model.{ Course, Student }
import org.openurp.edu.grade.course.domain.{ CourseGradeProvider, GpaPolicy, GpaStatService, MultiStdGpa }
import org.openurp.edu.grade.course.model.{ CourseGrade, StdGpa, StdSemesterGpa, StdYearGpa }
class DefaultGpaStatService extends GpaStatService {
var courseGradeProvider: CourseGradeProvider = _
var gpaPolicy: GpaPolicy = _
def statGpa(std: Student, grades: Iterable[CourseGrade]): StdGpa = {
val gradesMap = new collection.mutable.HashMap[Semester, collection.mutable.ListBuffer[CourseGrade]]
val courseMap = new collection.mutable.HashMap[Course, CourseGrade]
for (grade <- grades) {
val semesterGrades = gradesMap.getOrElseUpdate(grade.semester, new collection.mutable.ListBuffer[CourseGrade])
courseMap.get(grade.course) match {
case Some(exist) => if (!exist.passed) courseMap.put(grade.course, grade)
case None => courseMap.put(grade.course, grade)
}
semesterGrades += grade
}
val stdGpa = new StdGpa(std)
val yearGradeMap = new collection.mutable.HashMap[String, collection.mutable.ListBuffer[CourseGrade]]
for (semester <- gradesMap.keySet) {
val stdTermGpa = new StdSemesterGpa()
stdTermGpa.semester = semester
stdGpa.add(stdTermGpa)
val semesterGrades = gradesMap(semester)
val yearGrades = yearGradeMap.getOrElseUpdate(semester.schoolYear, new collection.mutable.ListBuffer[CourseGrade])
yearGrades ++= semesterGrades
stdTermGpa.gpa = gpaPolicy.calcGpa(semesterGrades)
stdTermGpa.ga = gpaPolicy.calcGa(semesterGrades)
stdTermGpa.count = semesterGrades.size
val stats = statCredits(semesterGrades)
stdTermGpa.credits = stats(0)
stdTermGpa.obtainedCredits = stats(1)
}
for (year <- yearGradeMap.keySet) {
val stdYearGpa = new StdYearGpa()
stdYearGpa.schoolYear = year
stdGpa.add(stdYearGpa)
val yearGrades = yearGradeMap(year)
stdYearGpa.gpa = gpaPolicy.calcGpa(yearGrades)
stdYearGpa.ga = gpaPolicy.calcGa(yearGrades)
stdYearGpa.count = yearGrades.size
val stats = statCredits(yearGrades)
stdYearGpa.credits = stats(0)
stdYearGpa.obtainedCredits = stats(1)
}
stdGpa.gpa = gpaPolicy.calcGpa(grades)
stdGpa.ga = gpaPolicy.calcGa(grades)
stdGpa.count = courseMap.size
val totalStats = statCredits(courseMap.values)
stdGpa.credits = totalStats(0)
stdGpa.obtainedCredits = totalStats(1)
val now = new Date()
stdGpa.updatedAt = now
stdGpa
}
def statGpa(std: Student, semesters: Semester*): StdGpa = {
statGpa(std, courseGradeProvider.getPublished(std, semesters: _*))
}
def statGpas(stds: Iterable[Student], semesters: Semester*): MultiStdGpa = {
val semesterGpas = new collection.mutable.ListBuffer[StdGpa]
for (std <- stds) {
semesterGpas += statGpa(std, semesters: _*)
}
new MultiStdGpa(stds, semesterGpas)
}
/**
   * Computes credits: index 0 is the credits attempted, index 1 is the credits earned.
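   * e.g. grades worth 3, 2 and 4 credits, with only the 3- and 4-credit courses passed, yield Array(9.0f, 7.0f).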
*/
private def statCredits(grades: Iterable[CourseGrade]): Array[Float] = {
var credits = 0f
var all = 0f
for (grade <- grades) {
if (grade.passed) credits += grade.course.credits
all += grade.course.credits
}
Array(all, credits)
}
}
|
openurp/edu-core
|
grade/core/src/main/scala/org/openurp/edu/grade/course/domain/impl/DefaultGpaStatService.scala
|
Scala
|
gpl-3.0
| 3,422
|
package genetic.string
import java.util.Random
import genetic.util.Util
import java.util
import genetic.genetic.{Genetic, Metric}
class GeneticString(targetString: Array[Char],
heuristic: Array[Char] => Double,
crossover: (Array[Char], Array[Char], Random) => Array[Char],
rand: Random) extends Genetic[Array[Char]] {
def fitness(gene: Array[Char]): Double = heuristic(gene)
@inline
def mate(x: Array[Char], y: Array[Char]): Array[Char] = {
crossover(x, y, rand)
}
def mutate(s: Array[Char]): Array[Char] = {
val i = rand.nextInt(s.length)
s(i) = (rand.nextInt(91) + 32).toChar
s
}
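  // Note (not part of the original file): mutate changes one random position to a
  // character in the ASCII range 32..122 (space through 'z') and mutates the array
  // in place, returning the same instance.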
override def metric(): Metric[Array[Char]] = new Metric[Array[Char]] {
override def distance(x: Array[Char], y: Array[Char]): Double = {
StringHeuristics.heuristic2(x, y)
}
}
override def randomElement(rand: Random): Array[Char] = Util.randString(targetString.length, rand)
override def show(charArr: Array[Char]): String = charArr.mkString
override def showScientific(): Boolean = false
override def hash(gene: Array[Char]): Int = util.Arrays.hashCode(gene)
}
|
NightRa/AILab
|
Genetic/src/main/scala/genetic/string/GeneticString.scala
|
Scala
|
apache-2.0
| 1,200
|
package tholowka.diz.terms
import org.scalatest.FunSpec
import tholowka.diz.loader.ResourceLoader
import java.io.FileNotFoundException
class ResourceLoaderTest extends FunSpec {
describe("** A ResourceLoader") {
it("can load a file as a Stream") {
      // note: relative paths are resolved from the project root
val json = ResourceLoader.fromFile("build.sbt")
expect(true) {
json.isDefined
}
}
it("loads nothing if the file does not exist") {
//note: the root is the root of the project
intercept[FileNotFoundException] {
ResourceLoader.fromFile("_build.sbt")
}
}
}
}
|
tholowka/diz
|
src/test/scala/tholowka/diz/ResourceLoaderTest.scala
|
Scala
|
mit
| 710
|
package scalacookbook.chapter09
/**
* Created by liguodong on 2016/7/24.
*/
object AReadExample {
def main(args: Array[String]) {
driver
}
/**
* A "driver" function to test Newton's method.
   * Start with (a) the desired f(x) and f'(x) equations,
   * (b) an initial guess and (c) tolerance values.
*/
def driver {
// the f(x) and f'(x) functions
val fx = (x: Double) => 3*x + math.sin(x) - math.pow(math.E, x)
    // derivative of the equation above
val fxPrime = (x: Double) => 3 + math.cos(x) - math.pow(Math.E, x)
    val initialGuess = 0.0 // initial guess
    val tolerance = 0.00005 // tolerance value
// pass f(x) and f'(x) to the Newton's Method function, along with
// the initial guess and tolerance
val answer = newtonsMethod(fx, fxPrime, initialGuess, tolerance)
println("answer : "+answer)
}
/**
   * Newton's Method for solving equations.
   *
   * The first two parameters carry the original equation f(x) and its derivative f'(x).
* @todo check that |f(xNext)| is greater than a second tolerance value
* @todo check that f'(x) != 0
*/
def newtonsMethod(
fx: Double => Double,
fxPrime: Double => Double,
x: Double,
tolerance: Double)
: Double = {
var x1 = x
var xNext = newtonsMethodHelper(fx, fxPrime, x1)
while (math.abs(xNext - x1) > tolerance) {
x1 = xNext
println(xNext) // debugging (intermediate values)
xNext = newtonsMethodHelper(fx, fxPrime, x1)
}
xNext
}
/**
* This is the "x2 = x1 - f(x1)/f'(x1)" calculation
*/
def newtonsMethodHelper(
fx: Double => Double,
fxPrime: Double => Double,
x: Double)
: Double = {
x - fx(x) / fxPrime(x)
}
}
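// A minimal companion sketch (not part of the original file): reusing the public
// newtonsMethod above to approximate sqrt(2) by solving f(x) = x^2 - 2 = 0.
object ANewtonSqrtExample {
  def main(args: Array[String]): Unit = {
    val fx = (x: Double) => x * x - 2
    val fxPrime = (x: Double) => 2 * x
    // start away from 0 so that f'(x) is non-zero on the first step
    val root = AReadExample.newtonsMethod(fx, fxPrime, 1.0, 0.000001)
    println(s"sqrt(2) is approximately $root")
  }
}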
|
liguodongIOT/java-scala-mix-sbt
|
src/main/scala/scalacookbook/chapter09/AReadExample.scala
|
Scala
|
apache-2.0
| 1,815
|
package me.yingrui.segment.hmm
import org.junit.Assert
import org.junit.Test
import me.yingrui.segment.util.SerializeHandler
import java.io.File
class TrieNodeTest {
var filename = "test_trie.dat"
@Test
def should_insert_ngram_and_add_count() {
val trie = new Trie()
trie.insert(List[Int](1, 2).toArray)
trie.insert(List[Int](1, 2).toArray)
trie.insert(List[Int](1, 2).toArray)
Assert.assertEquals(3, trie.searchNode(List[Int](1,2).toArray).getCount())
trie.buildIndex(1)
Assert.assertTrue(trie.searchNode(List[Int](1,2).toArray).getProb() - 0.75 < 0.0000001)
}
@Test
def should_save_Trie_into_a_file_and_load_it_correctly() {
val f = new File(filename)
f.deleteOnExit()
val root = create()
val writeHandler = SerializeHandler(f, SerializeHandler.WRITE_ONLY)
root.save(writeHandler)
writeHandler.close()
val readHandler = SerializeHandler(f, SerializeHandler.READ_ONLY)
val copy = new Trie()
copy.load(readHandler)
readHandler.close()
Assert.assertEquals(copy.key, 0)
Assert.assertEquals(copy.count, 0)
Assert.assertEquals(copy.descendant.length, 2)
val child1 = copy.descendant(0)
Assert.assertEquals(child1.key, 1)
Assert.assertEquals(child1.count, 1)
Assert.assertEquals(child1.descendant.length, 1)
val grandChild1 = child1.descendant(0)
Assert.assertEquals(grandChild1.key, 3)
Assert.assertEquals(grandChild1.count, 3)
assert (grandChild1.prob - 0.3 > -0.0000001 && grandChild1.prob - 0.3 < 0.0000001)
Assert.assertNull(grandChild1.descendant)
val child2 = copy.descendant(1)
Assert.assertEquals(child2.key, 2)
Assert.assertEquals(child2.count, 2)
Assert.assertNull(child2.descendant)
}
private def create(): Trie = {
val root = create(0, 0, 0.01)
val child1 = create(1, 1, 0.1)
val child2 = create(2, 2, 0.2)
val grandChild1 = create(3, 3, 0.3)
child1.add(grandChild1)
root.add(child1)
root.add(child2)
return root
}
private def create(key: Int, count: Int, prob: Double): Trie = {
val trie = new Trie()
trie.key = key
trie.count = count
trie.prob = prob
return trie
}
}
|
yingrui/mahjong
|
lib-segment/src/test/scala/me/yingrui/segment/hmm/TrieNodeTest.scala
|
Scala
|
gpl-3.0
| 2,402
|
/*
* Copyright 2014–2020 SlamData Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package slamdata
import sbt.util.FileBasedStore
import sbt.internal.util.codec.JValueFormats
import sjsonnew.{BasicJsonProtocol, IsoString}
import sjsonnew.shaded.scalajson.ast.unsafe.{JField, JObject, JString, JValue}
import sjsonnew.support.scalajson.unsafe.{Converter, Parser, PrettyPrinter}
import java.nio.file.Path
final class ManagedVersions private (path: Path) extends BasicJsonProtocol with JValueFormats {
private[this] val store: FileBasedStore[JValue] =
new FileBasedStore(
path.toFile,
Converter)(
IsoString.iso(PrettyPrinter.apply, Parser.parseUnsafe))
def apply(key: String): String =
get(key).getOrElse(sys.error(s"unable to find string -> string mapping for key '$key'"))
def get(key: String): Option[String] = {
safeRead() match {
case JObject(values) =>
values.find(_.field == key) match {
case Some(JField(_, JString(value))) => Some(value)
case _ => None
}
case _ =>
sys.error(s"unable to parse managed versions store at $path")
}
}
def update(key: String, version: String): Unit = {
safeRead() match {
case JObject(values) =>
var i = 0
var done = false
while (i < values.length && !done) {
if (values(i).field == key) {
values(i) = JField(key, JString(version))
done = true
}
i += 1
}
val values2 = if (!done) {
val values2 = new Array[JField](values.length + 1)
System.arraycopy(values, 0, values2, 0, values.length)
values2(values.length) = JField(key, JString(version))
values2
} else {
values
}
store.write(JObject(values2))
case _ =>
sys.error(s"unable to parse managed versions store at $path")
}
}
private[this] def safeRead(): JValue = {
try {
store.read[JValue]()
} catch {
case _: sbt.internal.util.EmptyCacheError =>
val back = JObject(Array[JField]())
store.write(back)
back
}
}
}
object ManagedVersions {
def apply(path: Path): ManagedVersions =
new ManagedVersions(path)
}
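// Usage sketch (not part of the original file; the path and keys are hypothetical):
//
//   val versions = ManagedVersions(java.nio.file.Paths.get("managed-versions.json"))
//   versions.update("quasar", "45.0.0")   // adds or replaces the "quasar" entry on disk
//   versions("quasar")                    // "45.0.0"; get("missing") returns None
//
// An empty store is initialised to {} on first read (see safeRead above).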
|
slamdata/sbt-slamdata
|
core/src/main/scala/slamdata/ManagedVersions.scala
|
Scala
|
apache-2.0
| 2,787
|
package ddd.support.domain.protocol
import ddd.support.domain.event.DomainEvent
case class ViewUpdated(event: DomainEvent) extends Receipt
|
pawelkaczor/ddd-leaven-akka
|
src/main/scala/ddd/support/domain/protocol/ViewUpdated.scala
|
Scala
|
mit
| 141
|
/**
* This file is part of the TA Buddy project.
* Copyright (c) 2014 Alexey Aksenov ezh@ezh.msk.ru
*
* This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License version 3
* as published by the Free Software Foundation with the addition of the
* following permission added to Section 15 as permitted in Section 7(a):
* FOR ANY PART OF THE COVERED WORK IN WHICH THE COPYRIGHT IS OWNED
* BY Limited Liability Company «MEZHGALAKTICHESKIJ TORGOVYJ ALIANS»,
* Limited Liability Company «MEZHGALAKTICHESKIJ TORGOVYJ ALIANS» DISCLAIMS
* THE WARRANTY OF NON INFRINGEMENT OF THIRD PARTY RIGHTS.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
* or FITNESS FOR A PARTICULAR PURPOSE.
 * See the GNU Affero General Public License for more details.
 * You should have received a copy of the GNU Affero General Public License
* along with this program; if not, see http://www.gnu.org/licenses or write to
* the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
* Boston, MA, 02110-1301 USA, or download the license from the following URL:
* http://www.gnu.org/licenses/agpl.html
*
* The interactive user interfaces in modified source and object code versions
* of this program must display Appropriate Legal Notices, as required under
 * Section 5 of the GNU Affero General Public License.
*
 * In accordance with Section 7(b) of the GNU Affero General Public License,
* you must retain the producer line in every report, form or document
* that is created or manipulated using TA Buddy.
*
* You can be released from the requirements of the license by purchasing
* a commercial license. Buying such a license is mandatory as soon as you
* develop commercial activities involving the TA Buddy software without
* disclosing the source code of your own applications.
* These activities include: offering paid services to customers,
* serving files in a web or/and network application,
* shipping TA Buddy with a closed source product.
*
* For more information, please contact Digimead Team at this
* address: ezh@ezh.msk.ru
*/
package org.digimead.tabuddy.desktop.logic.payload.marker.serialization.signature.api
import java.security.PublicKey
import java.util.UUID
import org.digimead.tabuddy.desktop.core.keyring.storage.api.XStorage
/**
* Serialization validator interface.
*/
trait XValidator extends Product1[UUID] with java.io.Serializable {
/** Unique ID. */
val id: UUID
/** Validator name. */
val name: Symbol
/** Validator description. */
val description: String
/** A projection of element 1 of this Product. */
def _1: UUID = id
/** Get validator rule. */
def rule: XValidator.Rule
/** Validation routine. */
def validator: Option[PublicKey] ⇒ Boolean
override def canEqual(that: Any) = that.isInstanceOf[XValidator]
override def equals(that: Any): Boolean = that match {
case that: XValidator ⇒ that.canEqual(this) && that.id.equals(this.id)
case _ ⇒ false
}
override def hashCode = id.##
}
object XValidator {
/**
* Validator rule
*/
case class Rule(val implicitlyAccept: Boolean, val acceptUnsigned: Boolean, keys: Seq[XStorage.Key])
}
|
digimead/digi-TABuddy-desktop
|
part-logic/src/main/scala/org/digimead/tabuddy/desktop/logic/payload/marker/serialization/signature/api/XValidator.scala
|
Scala
|
agpl-3.0
| 3,328
|
package org.dbpedia.lookup.inputformat
import org.semanticweb.yars.nx.parser.NxParser
import java.io.InputStream
import org.dbpedia.lookup.lucene.LuceneConfig
import com.typesafe.config.ConfigFactory
/**
 * Class to iterate over a DBpedia NTriples dataset, mapping known predicates to Lucene index fields.
*
*
* NOTICE: this file has been changed by Paolo Albano, Politecnico di Bari
*/
class DBpediaNTriplesInputFormat(val dataSet: InputStream, val redirects: scala.collection.Set[String]) extends InputFormat {
private val it = new NxParser(dataSet)
val conf = ConfigFactory.load("configuration.conf")
val predicate2field = Map(
//"http://lexvo.org/ontology#label" -> LuceneConfig.Fields.SURFACE_FORM_KEYWORD, // no DBpedia dataset, has to be created
conf.getString("lookup.nTriple.label.baseUri") -> LuceneConfig.Fields.SURFACE_FORM_KEYWORD, // no DBpedia dataset, has to be created
conf.getString("lookup.nTriple.refCount.baseUri") -> LuceneConfig.Fields.REFCOUNT, // no DBpedia dataset, has to be created
conf.getString("lookup.nTriple.description.baseUri") -> LuceneConfig.Fields.DESCRIPTION,
//conf.getString("lookup.label.baseUri") -> LuceneConfig.Fields.DESCRIPTION,
conf.getString("lookup.nTriple.class.baseUri") -> LuceneConfig.Fields.CLASS,
conf.getString("lookup.nTriple.category.baseUri") -> LuceneConfig.Fields.CATEGORY,
conf.getString("lookup.nTriple.template.baseUri") -> LuceneConfig.Fields.TEMPLATE, // not really necessary
conf.getString("lookup.nTriple.redirect.baseUri") -> LuceneConfig.Fields.REDIRECT // not really necessary
)
override def foreach[U](f: ((String,String,String)) => U) {
while(it.hasNext) {
val triple = it.next
val uri = triple(0).toString
val pred = triple(1).toString
val obj = triple(2).toString
predicate2field.get(pred) match {
case Some(field: String) if(redirects.isEmpty || !redirects.contains(uri)) => {
if(field == LuceneConfig.Fields.REDIRECT) {
f( (obj, field, uri) ) // make it a "hasRedirect" relation
}
else {
f( (uri, field, obj) )
}
}
case _ =>
}
}
}
}
|
sisinflab/X-LOD-Lookup
|
src/main/scala/org/dbpedia/lookup/inputformat/DBpediaNTriplesInputFormat.scala
|
Scala
|
apache-2.0
| 2,353
|
package temportalist.origin.foundation.common.modTraits
import temportalist.origin.foundation.common.registers.OptionRegister
/**
*
* Created by TheTemportalist on 4/9/2016.
*
* @author TheTemportalist
*/
trait IHasOptions {
def getOptions: OptionRegister
}
|
TheTemportalist/Origin
|
src/foundation/scala/temportalist/origin/foundation/common/modTraits/IHasOptions.scala
|
Scala
|
apache-2.0
| 273
|
/*
* MilmSearch is a mailing list searching system.
*
* Copyright (C) 2013 MilmSearch Project.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 3
* of the License, or any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public
* License along with this program.
* If not, see <http://www.gnu.org/licenses/>.
*
* You can contact MilmSearch Project at mailing list
* milm-search-public@lists.sourceforge.jp.
*/
package org.milmsearch.core.api
import java.net.URL
import org.milmsearch.core.domain.ML
import org.milmsearch.core.domain.MLArchiveType
import org.milmsearch.core.service.MLService
import org.milmsearch.core.test.util.DateUtil.newDateTime
import org.milmsearch.core.test.util.MockCreatable
import org.milmsearch.core.{ComponentRegistry => CR}
import org.scalamock.scalatest.MockFactory
import org.scalamock.ProxyMockFactory
import org.scalatest.matchers.ShouldMatchers
import org.scalatest.FeatureSpec
import org.scalatest.GivenWhenThen
import javax.ws.rs.GET
import javax.ws.rs.Path
import org.milmsearch.core.domain.Page
import org.milmsearch.core.domain.MLSearchResult
import org.milmsearch.core.domain.ML
import org.milmsearch.core.test.util.DateUtil
/**
 * Tests for MLResource
*/
class MLResourceSpec extends FeatureSpec
with MockFactory with ProxyMockFactory
with MockCreatable with ShouldMatchers with GivenWhenThen {
feature("MLResource クラス") {
scenario("存在するML情報の詳細を取得する") {
given("存在するML情報の ID を用いて")
val mlID = "1"
val m = createMock[MLService] {
_ expects 'find withArgs(mlID.toLong) returning Some(newSampleML)
}
CR.mlService.doWith(m) {
when("/mls/{id} に GET リクエストをすると")
val response = new MLResource().show(mlID)
then("ステータスコードは 200 が返る")
response.getStatus should equal (200)
and("リクエストボディは検索した ML 情報の JSON 表現を返す")
response.getEntity should equal (
"""{"id":1,
|"title":"ML タイトル",
|"archiveType":"mailman",
|"archiveURL":"http://localhost/path/to/archive/",
|"lastMailedAt":"2013-01-01T00:00:00+09:00",
|"approvedAt":"2013-01-05T00:00:00+09:00"
|}""".stripMargin.filter(_ != '\\n'))
}
}
scenario("存在しないML情報の詳細を取得する") {
given("存在しないML情報の ID を用いて")
val mlID = "0"
val m = createMock[MLService] {
_ expects 'find withArgs(mlID.toLong) returning None
}
CR.mlService.doWith(m) {
when("/mls/{id} に GET リクエストをすると")
val response = new MLResource().show(mlID)
then("ステータスコードは 404 が返る")
response.getStatus should equal (404)
}
}
scenario("不正なIDでML情報の詳細を取得する") {
given("アルファベットの ID を用いて")
val mlID = "abc"
val m = createMock[MLService] { x => () }
CR.mlService.doWith(m) {
when("/mls/{id} に GET リクエストをすると")
val response = new MLResource().show(mlID)
then("ステータスコードは 400 が返る")
response.getStatus should equal (400)
}
}
scenario("ML一覧を取得する") {
      // expected result of the mocked service method
val result = MLSearchResult(
totalResults = 10,
startIndex = 1,
itemsPerPage = 10,
items = (1 to 10) map { i => ML(
i,
"MLタイトル" + i,
MLArchiveType.Mailman,
new URL("http://localhost/path/to/archive/"),
Some(newDateTime(2013, 1, 1)),
newDateTime(2013, 1, 1))
} toList)
val m = createMock[MLService] {
_ expects 'search withArgs(
Page(1L, 10L),
None, // sort
None // filter
) returning result
}
CR.mlService.doWith(m) {
given("デフォルトの一覧条件を用いて")
when("/mls に GET リクエストをすると")
val response = new MLResource().list(
filterBy = null,
filterValue = null,
startPage = null,
count = null,
sortBy = null,
sortOrder = null)
then("ステータスコードは 200 が返る")
response.getStatus should equal (200)
and("リクエストボディは検索した ML 情報の JSON 表現を返す")
response.getEntity should equal (
"""{
|"totalResults":10,
|"startIndex":1,
|"itemsPerPage":10,
|"items":[%s]
|}""".stripMargin format (
1 to 10 map { i =>
"""{
|"id":%s,
|"title":"MLタイトル%s",
|"archiveType":"mailman",
|"archiveURL":"http://localhost/path/to/archive/",
|"lastMailedAt":"2013-01-01T00:00:00+09:00",
|"approvedAt":"2013-01-01T00:00:00+09:00"
|}""".stripMargin format (i, i)
} mkString ",") replaceAll ("\\n", "")
)
}
}
}
/**
   * Generates a sample ML record
*/
private def newSampleML = ML(
id = 1L,
title = "ML タイトル",
archiveType = MLArchiveType.Mailman,
archiveURL = new URL("http://localhost/path/to/archive/"),
lastMailedAt = Some(newDateTime(2013, 1, 1)),
approvedAt = newDateTime(2013, 1, 5))
}
|
mzkrelx/milm-search-core
|
src/test/scala/org/milmsearch/core/api/MLResourceSpec.scala
|
Scala
|
gpl-3.0
| 6,025
|
package fla
package walks
import scalaz._
import Scalaz._
import spire.math.Interval
import spire.implicits._
import cilib._
object RandomProgressiveManhattanWalk {
def apply(domain: NonEmptyList[Interval[Double]], steps: Int, stepSize: Double) =
walk(domain, steps, stepSize)
val walk: WalkGenerator =
(domain, steps, stepSize) => {
def doWalk: StateT[RVar, (StartingZones, WalkStep), WalkStep] =
for {
state <- S.get
(zones, awalk) = state
r <- hoist.liftM(Dist.uniformInt(Interval(0, domain.size - 1)))
w = (zones zip awalk.pos zip domain).zipWithIndex.map {
case (((si, wsi), interval), i) =>
if (i =/= r) (si, wsi)
else {
val inc = if (si) -1 else 1
val wsRD = wsi + (inc * stepSize)
if (interval.contains(wsRD)) (si, wsRD)
else (!si, wsi - (inc * stepSize))
}
}
newZone = w.map(_._1)
newPosList = w.map(_._2)
newPos = Point(newPosList, domain)
_ <- S.put((newZone, newPos))
} yield newPos
Helpers.initialState(domain) flatMap { state =>
doWalk
.replicateM(steps - 1)
.eval(state)
.map(walks => NonEmptyList.nel(state._2, walks.toIList))
}
}
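  // Behaviour note (not part of the original file): each step perturbs exactly one
  // randomly chosen dimension by stepSize in that dimension's current direction;
  // if the move would leave the search interval, the direction flag is flipped and
  // the step is taken the other way instead.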
}
|
cirg-up/fla
|
walks/src/main/scala/fla/walks/RandomProgressiveManhattanWalk.scala
|
Scala
|
apache-2.0
| 1,362
|
package lmxml
package template
package test
import java.io.File
import org.scalatest.FlatSpec
import org.scalatest.matchers.ShouldMatchers
class TemplateTest extends FlatSpec with ShouldMatchers {
val base =
"""
html
head title [title]
body
div #content [rest]
      div #footer p "© Philip Cali" is unescaped
"""
val extension =
"""
[base]
---
[title]:
"Guaranteed Victory"
[rest]:
h1 "This is a test"
div .pages
pages
div.page
div .page-header page-header
div .page-body page-body
"""
def writeContents(name: String, contents: String) {
val writer = new java.io.FileWriter(name)
writer.write(contents)
writer.close()
}
case class Page(header: String, body: String)
val parser = new PlainLmxmlParser(2) with FileTemplates {
val working = new File(".")
}
"FileTemplate" should "dynamically find other templates" in {
import scala.io.Source.{fromFile => open}
import transforms._
writeContents("base.lmxml", base)
writeContents("extension.lmxml", extension)
    val contents = open("extension.lmxml").getLines.mkString("\n")
val nodes = parser.parseNodes(contents)
val pages = List(
Page("Test title", "This is the body"),
Page("Second Page", "Do this, that, and that.")
)
val trans = Transform(
"pages" -> Foreach(pages) { page => Seq(
"page-header" -> Value(page.header),
"page-body" -> Value(page.body)
) }
)
val fullOutput = trans andThen XmlConvert andThen XmlFormat(200, 2)
val expected =
<html>
<head>
<title>Guaranteed Victory</title>
</head>
<body>
<div id="content">
<h1>This is a test</h1>
<div class="pages">
<div class="page">
<div class="page-header">Test title</div>
<div class="page-body">This is the body</div>
</div>
<div class="page">
<div class="page-header">Second Page</div>
<div class="page-body">Do this, that, and that.</div>
</div>
</div>
</div>
<div id="footer">
<p>© Philip Cali</p>
</div>
</body>
</html>
fullOutput(nodes).toString() should be === expected.toString()
new File("base.lmxml").delete()
new File("extension.lmxml").delete()
}
}
|
philcali/lmxml
|
template/src/test/scala/template.scala
|
Scala
|
mit
| 2,292
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.sql.kafka010
import java.{util => ju}
import java.util.concurrent.TimeoutException
import org.apache.kafka.clients.consumer.{ConsumerRecord, OffsetOutOfRangeException}
import org.apache.kafka.common.TopicPartition
import org.apache.spark.TaskContext
import org.apache.spark.internal.Logging
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.UnsafeRow
import org.apache.spark.sql.kafka010.KafkaSourceProvider.{INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_FALSE, INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_TRUE}
import org.apache.spark.sql.sources.v2.reader._
import org.apache.spark.sql.sources.v2.reader.streaming._
import org.apache.spark.sql.types.StructType
/**
* A [[ContinuousReadSupport]] for data from kafka.
*
* @param offsetReader a reader used to get kafka offsets. Note that the actual data will be
* read by per-task consumers generated later.
* @param kafkaParams String params for per-task Kafka consumers.
* @param sourceOptions The [[org.apache.spark.sql.sources.v2.DataSourceOptions]] params which
* are not Kafka consumer params.
* @param metadataPath Path to a directory this reader can use for writing metadata.
* @param initialOffsets The Kafka offsets to start reading data at.
* @param failOnDataLoss Flag indicating whether reading should fail in data loss
* scenarios, where some offsets after the specified initial ones can't be
* properly read.
*/
class KafkaContinuousReadSupport(
offsetReader: KafkaOffsetReader,
kafkaParams: ju.Map[String, Object],
sourceOptions: Map[String, String],
metadataPath: String,
initialOffsets: KafkaOffsetRangeLimit,
failOnDataLoss: Boolean)
extends ContinuousReadSupport with Logging {
private val pollTimeoutMs = sourceOptions.getOrElse("kafkaConsumer.pollTimeoutMs", "512").toLong
override def initialOffset(): Offset = {
val offsets = initialOffsets match {
case EarliestOffsetRangeLimit => KafkaSourceOffset(offsetReader.fetchEarliestOffsets())
case LatestOffsetRangeLimit => KafkaSourceOffset(offsetReader.fetchLatestOffsets())
case SpecificOffsetRangeLimit(p) => offsetReader.fetchSpecificOffsets(p, reportDataLoss)
}
logInfo(s"Initial offsets: $offsets")
offsets
}
override def fullSchema(): StructType = KafkaOffsetReader.kafkaSchema
override def newScanConfigBuilder(start: Offset): ScanConfigBuilder = {
new KafkaContinuousScanConfigBuilder(fullSchema(), start, offsetReader, reportDataLoss)
}
override def deserializeOffset(json: String): Offset = {
KafkaSourceOffset(JsonUtils.partitionOffsets(json))
}
override def planInputPartitions(config: ScanConfig): Array[InputPartition] = {
val startOffsets = config.asInstanceOf[KafkaContinuousScanConfig].startOffsets
startOffsets.toSeq.map {
case (topicPartition, start) =>
KafkaContinuousInputPartition(
topicPartition, start, kafkaParams, pollTimeoutMs, failOnDataLoss)
}.toArray
}
override def createContinuousReaderFactory(
config: ScanConfig): ContinuousPartitionReaderFactory = {
KafkaContinuousReaderFactory
}
/** Stop this source and free any resources it has allocated. */
def stop(): Unit = synchronized {
offsetReader.close()
}
override def commit(end: Offset): Unit = {}
override def mergeOffsets(offsets: Array[PartitionOffset]): Offset = {
val mergedMap = offsets.map {
case KafkaSourcePartitionOffset(p, o) => Map(p -> o)
}.reduce(_ ++ _)
KafkaSourceOffset(mergedMap)
}
override def needsReconfiguration(config: ScanConfig): Boolean = {
val knownPartitions = config.asInstanceOf[KafkaContinuousScanConfig].knownPartitions
offsetReader.fetchLatestOffsets().keySet != knownPartitions
}
override def toString(): String = s"KafkaSource[$offsetReader]"
/**
* If `failOnDataLoss` is true, this method will throw an `IllegalStateException`.
* Otherwise, just log a warning.
*/
private def reportDataLoss(message: String): Unit = {
if (failOnDataLoss) {
throw new IllegalStateException(message + s". $INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_TRUE")
} else {
logWarning(message + s". $INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_FALSE")
}
}
}
/**
* An input partition for continuous Kafka processing. This will be serialized and transformed
* into a full reader on executors.
*
* @param topicPartition The (topic, partition) pair this task is responsible for.
* @param startOffset The offset to start reading from within the partition.
* @param kafkaParams Kafka consumer params to use.
* @param pollTimeoutMs The timeout for Kafka consumer polling.
* @param failOnDataLoss Flag indicating whether data reader should fail if some offsets
* are skipped.
*/
case class KafkaContinuousInputPartition(
topicPartition: TopicPartition,
startOffset: Long,
kafkaParams: ju.Map[String, Object],
pollTimeoutMs: Long,
failOnDataLoss: Boolean) extends InputPartition
object KafkaContinuousReaderFactory extends ContinuousPartitionReaderFactory {
override def createReader(partition: InputPartition): ContinuousPartitionReader[InternalRow] = {
val p = partition.asInstanceOf[KafkaContinuousInputPartition]
new KafkaContinuousPartitionReader(
p.topicPartition, p.startOffset, p.kafkaParams, p.pollTimeoutMs, p.failOnDataLoss)
}
}
class KafkaContinuousScanConfigBuilder(
schema: StructType,
startOffset: Offset,
offsetReader: KafkaOffsetReader,
reportDataLoss: String => Unit)
extends ScanConfigBuilder {
override def build(): ScanConfig = {
val oldStartPartitionOffsets = KafkaSourceOffset.getPartitionOffsets(startOffset)
val currentPartitionSet = offsetReader.fetchEarliestOffsets().keySet
val newPartitions = currentPartitionSet.diff(oldStartPartitionOffsets.keySet)
val newPartitionOffsets = offsetReader.fetchEarliestOffsets(newPartitions.toSeq)
val deletedPartitions = oldStartPartitionOffsets.keySet.diff(currentPartitionSet)
if (deletedPartitions.nonEmpty) {
reportDataLoss(s"Some partitions were deleted: $deletedPartitions")
}
val startOffsets = newPartitionOffsets ++
oldStartPartitionOffsets.filterKeys(!deletedPartitions.contains(_))
KafkaContinuousScanConfig(schema, startOffsets)
}
}
case class KafkaContinuousScanConfig(
readSchema: StructType,
startOffsets: Map[TopicPartition, Long])
extends ScanConfig {
// Created when building the scan config builder. If this diverges from the partitions at the
// latest offsets, we need to reconfigure the kafka read support.
def knownPartitions: Set[TopicPartition] = startOffsets.keySet
}
/**
* A per-task data reader for continuous Kafka processing.
*
* @param topicPartition The (topic, partition) pair this data reader is responsible for.
* @param startOffset The offset to start reading from within the partition.
* @param kafkaParams Kafka consumer params to use.
* @param pollTimeoutMs The timeout for Kafka consumer polling.
* @param failOnDataLoss Flag indicating whether data reader should fail if some offsets
* are skipped.
*/
class KafkaContinuousPartitionReader(
topicPartition: TopicPartition,
startOffset: Long,
kafkaParams: ju.Map[String, Object],
pollTimeoutMs: Long,
failOnDataLoss: Boolean) extends ContinuousPartitionReader[InternalRow] {
private val consumer = KafkaDataConsumer.acquire(topicPartition, kafkaParams, useCache = false)
private val converter = new KafkaRecordToUnsafeRowConverter
private var nextKafkaOffset = startOffset
private var currentRecord: ConsumerRecord[Array[Byte], Array[Byte]] = _
override def next(): Boolean = {
var r: ConsumerRecord[Array[Byte], Array[Byte]] = null
while (r == null) {
if (TaskContext.get().isInterrupted() || TaskContext.get().isCompleted()) return false
// Our consumer.get is not interruptible, so we have to set a low poll timeout, leaving
// interrupt points to end the query rather than waiting for new data that might never come.
try {
r = consumer.get(
nextKafkaOffset,
untilOffset = Long.MaxValue,
pollTimeoutMs,
failOnDataLoss)
} catch {
// We didn't read within the timeout. We're supposed to block indefinitely for new data, so
// swallow and ignore this.
case _: TimeoutException | _: org.apache.kafka.common.errors.TimeoutException =>
// This is a failOnDataLoss exception. Retry if nextKafkaOffset is within the data range,
// or if it's the endpoint of the data range (i.e. the "true" next offset).
case e: IllegalStateException if e.getCause.isInstanceOf[OffsetOutOfRangeException] =>
val range = consumer.getAvailableOffsetRange()
if (range.latest >= nextKafkaOffset && range.earliest <= nextKafkaOffset) {
// retry
} else {
throw e
}
}
}
nextKafkaOffset = r.offset + 1
currentRecord = r
true
}
override def get(): UnsafeRow = {
converter.toUnsafeRow(currentRecord)
}
override def getOffset(): KafkaSourcePartitionOffset = {
KafkaSourcePartitionOffset(topicPartition, nextKafkaOffset)
}
override def close(): Unit = {
consumer.release()
}
}
|
michalsenkyr/spark
|
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReadSupport.scala
|
Scala
|
apache-2.0
| 10,256
|
package com.cloudray.scalapress.plugin.form.section
import javax.persistence.{ManyToOne, Entity, Table, JoinColumn}
import com.cloudray.scalapress.plugin.form.{FormDao, Form}
import com.cloudray.scalapress.section.Section
import scala.beans.BeanProperty
import com.cloudray.scalapress.plugin.form.controller.renderer.FormRenderer
import com.cloudray.scalapress.framework.{ScalapressRequest, ScalapressContext}
/** @author Stephen Samuel */
@Entity
@Table(name = "blocks_forms")
class FormSection extends Section {
@ManyToOne
@JoinColumn(name = "form")
@BeanProperty
var form: Form = _
def desc: String = "For showing a form on a folder or object page"
override def backoffice: String = "/backoffice/plugin/form/section/" + id
def render(req: ScalapressRequest): Option[String] = {
val rendered = FormRenderer.render(form, req)
Option(rendered)
}
override def _init(context: ScalapressContext) {
form = context.bean[FormDao].findAll.head
}
}
|
vidyacraghav/scalapress
|
src/main/scala/com/cloudray/scalapress/plugin/form/section/FormSection.scala
|
Scala
|
apache-2.0
| 978
|
/*
* #%L
* Core runtime for OOXOO
* %%
* Copyright (C) 2006 - 2017 Open Design Flow
* %%
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package com.idyria.osi.ooxoo.model.writers
import com.idyria.osi.ooxoo.model._
import java.io._
import org.odfi.tea.file.DirectoryUtilities
/**
*
* Write outputs to a base folder
*
*/
class FileWriters(
var baseFolder: File) extends PrintStreamWriter(null) {
override def cleanOutput(path: String) = {
DirectoryUtilities.deleteDirectoryContent(new File(baseFolder, path))
}
override def file(path: String) = {
// Close actual output
//---------------
if (this.out != null) {
this.out.close()
}
// To File
//---------------
var file = new File(baseFolder, path)
// Prepare Folder
//---------------------
file.getParentFile.mkdirs
// Set to current output
//----------------
this.out = new PrintStream(new FileOutputStream(file))
// Save as file written
//----------
filesWritten = path :: filesWritten
println(s"Opened File to : $file")
}
override def finish = {
if (this.out != null) {
this.out.close()
}
}
override def getWriterForFile(f: String) = {
var w = new FileWriters(baseFolder)
w.file(f)
w
}
}
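// Usage sketch (not part of the original file; paths are hypothetical and the output
// methods are assumed to be inherited from PrintStreamWriter via the `out` stream):
//
//   val writers = new FileWriters(new File("target/generated"))
//   writers.file("com/example/Model.scala")  // opens target/generated/com/example/Model.scala
//   ...                                      // emit generated source through the writer
//   writers.finish                           // closes the last opened stream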
|
richnou/ooxoo-core
|
ooxoo-core/src/main/scala/com/idyria/osi/ooxoo/model/writers/FilesWriter.scala
|
Scala
|
agpl-3.0
| 1,992
|
package com.rasterfoundry.api.project
import com.rasterfoundry.akkautil.PaginationDirectives
import com.rasterfoundry.akkautil.{Authentication, CommonHandlers}
import com.rasterfoundry.api.utils.queryparams.QueryParametersCommon
import com.rasterfoundry.common._
import com.rasterfoundry.common.color._
import com.rasterfoundry.database._
import com.rasterfoundry.datamodel._
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server._
import cats.Applicative
import cats.effect._
import de.heikoseeberger.akkahttpcirce.ErrorAccumulatingCirceSupport._
import doobie.implicits._
import doobie.{ConnectionIO, Transactor}
import java.util.UUID
trait ProjectLayerRoutes
extends Authentication
with CommonHandlers
with PaginationDirectives
with QueryParametersCommon
with ProjectSceneQueryParameterDirective
with ProjectAuthorizationDirectives {
implicit val xa: Transactor[IO]
val BULK_OPERATION_MAX_LIMIT = 100
def createProjectLayer(projectId: UUID): Route = authenticate { user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Create, None), user) {
entity(as[ProjectLayer.Create]) { newProjectLayer =>
authorizeAuthResultAsync {
ProjectDao
.authorized(user, ObjectType.Project, projectId, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
onSuccess(
ProjectLayerDao
.insertProjectLayer(newProjectLayer.toProjectLayer)
.transact(xa)
.unsafeToFuture
) { projectLayer =>
complete(StatusCodes.Created, projectLayer)
}
}
}
}
}
def listProjectLayers(projectId: UUID): Route = authenticateAllowAnonymous {
user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
(authorizeAsync(
ProjectDao
.authorized(user, ObjectType.Project, projectId, ActionType.Edit)
.transact(xa)
.unsafeToFuture
.map(_.toBoolean)
) | projectIsPublic(projectId)) {
(withPagination) { (page) =>
complete {
ProjectLayerDao
.listProjectLayersForProject(page, projectId)
.transact(xa)
.unsafeToFuture
}
}
}
}
}
def getProjectLayer(projectId: UUID, layerId: UUID): Route = authenticate {
user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
authorizeAuthResultAsync {
ProjectDao
.authorized(user, ObjectType.Project, projectId, ActionType.View)
.transact(xa)
.unsafeToFuture
} {
rejectEmptyResponse {
complete {
ProjectLayerDao
.getProjectLayer(projectId, layerId)
.transact(xa)
.unsafeToFuture
}
}
}
}
}
def updateProjectLayer(projectId: UUID, layerId: UUID): Route = authenticate {
user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Update, None), user) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
entity(as[ProjectLayer]) { updatedProjectLayer =>
onSuccess(
ProjectLayerDao
.updateProjectLayer(updatedProjectLayer, layerId)
.transact(xa)
.unsafeToFuture
) {
completeSingleOrNotFound
}
}
}
}
}
def deleteProjectLayer(projectId: UUID, layerId: UUID): Route = authenticate {
user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Delete, None), user) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
rejectEmptyResponse {
complete {
ProjectLayerDao
.deleteProjectLayer(layerId)
.transact(xa)
.unsafeToFuture
}
}
}
}
}
def getProjectLayerMosaicDefinition(projectId: UUID, layerId: UUID): Route =
authenticate { user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.View)
.transact(xa)
.unsafeToFuture
} {
rejectEmptyResponse {
complete {
SceneToLayerDao
.getMosaicDefinition(layerId)
.transact(xa)
.unsafeToFuture
}
}
}
}
}
def getProjectLayerSceneColorCorrectParams(
projectId: UUID,
layerId: UUID,
sceneId: UUID
): Route =
authenticate { user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.View)
.transact(xa)
.unsafeToFuture
} {
complete {
SceneToLayerDao
.getColorCorrectParams(layerId, sceneId)
.transact(xa)
.unsafeToFuture
}
}
}
}
def setProjectLayerSceneColorCorrectParams(
projectId: UUID,
layerId: UUID,
sceneId: UUID
): Route =
authenticate { user =>
authorizeScope(
ScopedAction(Domain.Projects, Action.ColorCorrect, None),
user
) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
entity(as[ColorCorrect.Params]) { ccParams =>
onSuccess(
SceneToLayerDao
.setColorCorrectParams(layerId, sceneId, ccParams)
.transact(xa)
.unsafeToFuture
) { _ =>
complete(StatusCodes.NoContent)
}
}
}
}
}
def setProjectLayerScenesColorCorrectParams(
projectId: UUID,
layerId: UUID
): Route =
authenticate { user =>
authorizeScope(
ScopedAction(Domain.Projects, Action.ColorCorrect, None),
user
) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
entity(as[BatchParams]) { params =>
onSuccess(
SceneToLayerDao
.setColorCorrectParamsBatch(layerId, params)
.transact(xa)
.unsafeToFuture
) { _ =>
complete(StatusCodes.NoContent)
}
}
}
}
}
def setProjectLayerSceneOrder(projectId: UUID, layerId: UUID): Route =
authenticate { user =>
authorizeScope(
ScopedAction(Domain.Projects, Action.ColorCorrect, None),
user
) {
authorizeAsync {
ProjectDao
.authProjectLayerExist(projectId, layerId, user, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
entity(as[List[UUID]]) { sceneIds =>
            if (sceneIds.length > BULK_OPERATION_MAX_LIMIT) {
              complete(StatusCodes.PayloadTooLarge)
            } else {
              onSuccess(
                SceneToLayerDao
                  .setManualOrder(layerId, sceneIds)
                  .transact(xa)
                  .unsafeToFuture
              ) { _ =>
                complete(StatusCodes.NoContent)
              }
            }
}
}
}
}
def listLayerScenes(projectId: UUID, layerId: UUID): Route = authenticate {
user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
authorizeAuthResultAsync {
ProjectDao
.authorized(user, ObjectType.Project, projectId, ActionType.View)
.transact(xa)
.unsafeToFuture
} {
(withPagination & projectSceneQueryParameters) {
(page, sceneParams) =>
complete {
ProjectLayerScenesDao
.listLayerScenes(layerId, page, sceneParams)
.transact(xa)
.unsafeToFuture
}
}
}
}
}
def listLayerDatasources(projectId: UUID, layerId: UUID): Route =
authenticate { user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
(projectQueryParameters) { projectQueryParams =>
authorizeAsync {
val authorized = for {
authProject <- ProjectDao.authorized(
user,
ObjectType.Project,
projectId,
ActionType.View
)
authResult <- (authProject, projectQueryParams.analysisId) match {
case (AuthFailure(), Some(analysisId: UUID)) =>
ToolRunDao
.authorizeReferencedProject(user, analysisId, projectId)
case (_, _) =>
Applicative[ConnectionIO].pure(authProject.toBoolean)
}
} yield authResult
authorized.transact(xa).unsafeToFuture
} {
complete {
ProjectLayerDatasourcesDao
.listProjectLayerDatasources(layerId)
.transact(xa)
.unsafeToFuture
}
}
}
}
}
def getProjectLayerSceneCounts(projectId: UUID): Route =
authenticate { user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Read, None), user) {
authorizeAuthResultAsync {
ProjectDao
.authorized(user, ObjectType.Project, projectId, ActionType.View)
.transact(xa)
.unsafeToFuture
} {
complete {
ProjectLayerScenesDao
.countLayerScenes(projectId)
.transact(xa)
.map(Map(_: _*))
.unsafeToFuture
}
}
}
}
def setProjectLayerColorMode(projectId: UUID, layerId: UUID) =
authenticate { user =>
authorizeScope(ScopedAction(Domain.Projects, Action.Update, None), user) {
authorizeAuthResultAsync {
ProjectDao
.authorized(user, ObjectType.Project, projectId, ActionType.Edit)
.transact(xa)
.unsafeToFuture
} {
entity(as[ProjectColorModeParams]) { colorBands =>
val setProjectLayerColorBandsIO = for {
rowsAffected <- SceneToLayerDao
.setProjectLayerColorBands(layerId, colorBands)
} yield {
rowsAffected
}
onSuccess(setProjectLayerColorBandsIO.transact(xa).unsafeToFuture) {
_ =>
complete(StatusCodes.NoContent)
}
}
}
}
}
}
|
raster-foundry/raster-foundry
|
app-backend/api/src/main/scala/project/ProjectLayerRoutes.scala
|
Scala
|
apache-2.0
| 11,298
|
package mm4s.examples.status
import java.util.UUID
import akka.actor.{Actor, ActorLogging, ActorRef}
import mm4s.api.{Post, Posted}
import mm4s.bots.api.{Bot, BotID, Ready}
import mm4s.examples.status.StatusBot._
import net.codingwell.scalaguice.ScalaModule
import scala.collection.mutable
class StatusBot extends Actor with Bot with ActorLogging {
def receive: Receive = {
case Ready(api, id) => context.become(ready(api, id))
}
def ready(api: ActorRef, id: BotID): Receive = {
val jobs = mutable.Map[String, Job]()
log.debug("StatusBot [{}] ready", id.username)
api ! Post("StatusBot ready!")
{
case Posted(t) if t.startsWith("@status") =>
log.debug("{} received {}", self.path.name, t)
t match {
case rmock(t) =>
t.split(",").map(_.trim).map(_.toLong)
.foreach(v => context.self ! JobRequest(v * 1000))
case rcheck(uid) =>
jobs.get(uid) match {
case Some(j) => api ! Post(s"Job `$uid` is ${jobPct(j)}% complete")
case None => api ! Post(s"unknown job `$uid`")
}
case rdone(uid) =>
jobs.get(uid) match {
case Some(j) => api ! Post(s"`${jobDone(j)}`")
case None => api ! Post(s"unknown job `$uid`")
}
case rlist() =>
val t = System.currentTimeMillis()
val status = jobs.foldLeft(
"""
|| ID | Status |
||------|------|""".stripMargin) {
(acc, e) =>
acc ++ s"\n|${e._1}|${mark(e._2, t)}|"
}
api ! Post(status)
case _ => api ! Post(s"Sorry I don't understand `$t`, try `help`")
}
case JobRequest(l) =>
val id = jobId()
val start = System.currentTimeMillis()
jobs(id) = Job(id, l, start + l)
api ! Post(s"job started, `$id`")
}
}
}
object StatusBot {
val rmock = """mock\s*?([\d]+[\s*,\s*\d]*)""".r.unanchored
val rcheck = """check\s*?(\w+)""".r.unanchored
val rdone = """isdone\s*?(\w+)""".r.unanchored
val rlist = """@status list$""".r.anchored
case class JobRequest(t: Long)
case class Job(id: String, length: Long, stop: Long)
def jobId() = UUID.randomUUID().toString.take(5)
def jobDone(j: Job) = j.stop < System.currentTimeMillis()
def jobPct(j: Job, curr: Long = System.currentTimeMillis()) = {
if (jobDone(j)) 100
else 100 - ((Math.abs(curr - j.stop) / j.length.toDouble) * 100).toInt
}
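  // e.g. a Job with length 10000 ms and 2500 ms left until `stop` reports
  // jobPct == 75; once `stop` has passed, jobDone is true and jobPct is 100.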
def mark(j: Job, curr: Long = System.currentTimeMillis()) = {
jobPct(j, curr) match {
case 100 => ":white_check_mark:"
case p => s"$p%"
}
}
}
class StatusBotModule extends ScalaModule {
def configure() = bind[Bot].to[StatusBot]
}
object StatusBotBoot4dev extends App {
mm4s.bots.Boot.main(Array.empty)
}
|
jw3/mm4s-examples
|
statusbot/src/main/scala/mm4s/examples/status/StatusBot.scala
|
Scala
|
apache-2.0
| 2,873
|
package ru.biocad.ig.alicont.algorithms
/**
* Created with IntelliJ IDEA.
* User: pavel
* Date: 27.11.13
* Time: 23:16
*/
object AlgorithmType extends Enumeration {
type AlgorithmType = Value
val GLOBAL, LOCAL, SEMIGLOBAL = Value
val AFFINE_GLOBAL, AFFINE_LOCAL, AFFINE_SEMIGLOBAL = Value
def affine : List[AlgorithmType] = AFFINE_GLOBAL :: AFFINE_LOCAL :: AFFINE_SEMIGLOBAL :: Nil
def simple : List[AlgorithmType] = GLOBAL :: LOCAL :: SEMIGLOBAL :: Nil
}
|
zmactep/igcat
|
lib/ig-alicont/src/main/scala/ru/biocad/ig/alicont/algorithms/AlgorithmType.scala
|
Scala
|
bsd-2-clause
| 474
|
package uk.gov.gds.ier.step
import uk.gov.gds.ier.controller.routes._
import play.api.mvc.Call
import play.api.mvc.Results.Redirect
case class GoTo[T](redirectCall:Call) extends Step[T] {
def isStepComplete(currentState: T): Boolean = false
def nextStep(currentState: T): Step[T] = this
val routing = Routes(
get = redirectCall,
post = redirectCall,
editGet = redirectCall,
editPost = redirectCall
)
}
|
michaeldfallen/ier-frontend
|
app/uk/gov/gds/ier/step/GoTo.scala
|
Scala
|
mit
| 430
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.openwhisk.core.database
import java.util.Base64
import akka.NotUsed
import akka.http.scaladsl.model.{ContentType, Uri}
import akka.stream.Materializer
import akka.stream.scaladsl.{Sink, Source}
import akka.util.ByteString
import spray.json.DefaultJsonProtocol
import org.apache.openwhisk.common.TransactionId
import org.apache.openwhisk.core.database.AttachmentSupport.MemScheme
import org.apache.openwhisk.core.entity.Attachments.Attached
import org.apache.openwhisk.core.entity.{ByteSize, DocId, DocInfo, UUID}
import scala.concurrent.{ExecutionContext, Future}
object AttachmentSupport {
/**
* Scheme name for attachments which are inlined
*/
val MemScheme: String = "mem"
}
case class InliningConfig(maxInlineSize: ByteSize)
/**
* Provides support for inlining small attachments. Inlined attachment contents are encoded as part of attachment
* name itself.
*/
trait AttachmentSupport[DocumentAbstraction <: DocumentSerializer] extends DefaultJsonProtocol {
/** Materializer required for stream processing */
protected[core] implicit val materializer: Materializer
protected def executionContext: ExecutionContext
/**
* Attachment scheme name to use for non inlined attachments
*/
protected def attachmentScheme: String
protected def inliningConfig: InliningConfig
/**
   * Attachments smaller than this size will be inlined
*/
def maxInlineSize: ByteSize = inliningConfig.maxInlineSize
/**
* See {{ ArtifactStore#put }}
*/
protected[database] def put(d: DocumentAbstraction)(implicit transid: TransactionId): Future[DocInfo]
/**
   * Given a ByteString source, determines whether the source can be inlined by returning an
   * Either: Left(ByteString) containing all the bytes from the source when it fits within
   * maxInlineSize, or Right(Source[ByteString, _]) when the source is too large to inline
*/
protected[database] def inlineOrAttach(
docStream: Source[ByteString, _],
previousPrefix: ByteString = ByteString.empty): Future[Either[ByteString, Source[ByteString, _]]] = {
implicit val ec = executionContext
docStream.prefixAndTail(1).runWith(Sink.head).flatMap {
case (Nil, _) =>
Future.successful(Left(previousPrefix))
case (Seq(prefix), tail) =>
val completePrefix = previousPrefix ++ prefix
if (completePrefix.size < maxInlineSize.toBytes) {
inlineOrAttach(tail, completePrefix)
} else {
Future.successful(Right(tail.prepend(Source.single(completePrefix))))
}
}
}
/**
* Constructs a URI for the attachment
*
* @param bytesOrSource either byteString or byteString source
   * @param path function to generate the attachment name for the non-inlined case
   * @return constructed uri. For an inlined attachment the uri contains the base64-encoded attachment content
*/
protected[database] def uriOf(bytesOrSource: Either[ByteString, Source[ByteString, _]], path: => String): Uri = {
bytesOrSource match {
case Left(bytes) => Uri.from(scheme = MemScheme, path = encode(bytes))
case Right(_) => uriFrom(scheme = attachmentScheme, path = path)
}
}
//Not using Uri.from due to https://github.com/akka/akka-http/issues/2080
protected[database] def uriFrom(scheme: String, path: String): Uri = Uri(s"$scheme:$path")
/**
* Constructs a source from inlined attachment contents
*/
protected[database] def memorySource(uri: Uri): Source[ByteString, NotUsed] = {
require(uri.scheme == MemScheme, s"URI $uri scheme is not $MemScheme")
Source.single(ByteString(decode(uri)))
}
protected[database] def isInlined(uri: Uri): Boolean = uri.scheme == MemScheme
/**
* Computes digest for passed bytes as hex encoded string
*/
protected[database] def digest(bytes: TraversableOnce[Byte]): String = {
val digester = StoreUtils.emptyDigest()
digester.update(bytes.toArray)
StoreUtils.encodeDigest(digester.digest())
}
/**
* Attaches the passed source content to an {{ AttachmentStore }}
*
* @param doc document with attachment
* @param update function to update the `Attached` state with attachment metadata
* @param contentType contentType of the attachment
* @param docStream attachment source
* @param oldAttachment old attachment in case of update. Required for deleting the old attachment
* @param attachmentStore attachmentStore where attachment needs to be stored
*
* @return a tuple of updated document info and attachment metadata
*/
protected[database] def attachToExternalStore[A <: DocumentAbstraction](
doc: A,
update: (A, Attached) => A,
contentType: ContentType,
docStream: Source[ByteString, _],
oldAttachment: Option[Attached],
attachmentStore: AttachmentStore)(implicit transid: TransactionId): Future[(DocInfo, Attached)] = {
val asJson = doc.toDocumentRecord
val id = asJson.fields("_id").convertTo[String].trim
implicit val ec = executionContext
for {
bytesOrSource <- inlineOrAttach(docStream)
uri = uriOf(bytesOrSource, UUID().asString)
attached <- {
// Upload if cannot be inlined
bytesOrSource match {
case Left(bytes) =>
Future.successful(Attached(uri.toString, contentType, Some(bytes.size), Some(digest(bytes))))
case Right(source) =>
attachmentStore
.attach(DocId(id), uri.path.toString, contentType, source)
.map(r => Attached(uri.toString, contentType, Some(r.length), Some(r.digest)))
}
}
i1 <- put(update(doc, attached))
//Remove old attachment if it was part of attachmentStore
_ <- oldAttachment
.map { old =>
val oldUri = Uri(old.attachmentName)
if (oldUri.scheme == attachmentStore.scheme) {
attachmentStore.deleteAttachment(DocId(id), oldUri.path.toString)
} else {
Future.successful(true)
}
}
.getOrElse(Future.successful(true))
} yield (i1, attached)
}
private def encode(bytes: Seq[Byte]): String = {
Base64.getUrlEncoder.encodeToString(bytes.toArray)
}
private def decode(uri: Uri): Array[Byte] = {
Base64.getUrlDecoder.decode(uri.path.toString())
}
}
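// Illustrative sketch, not part of the original file: the two URI shapes produced
// by `uriOf` above, relying on the Base64 and Uri imports at the top of this file.
// The "s3" scheme and the attachment name are made-up placeholders; the real values
// come from the abstract `attachmentScheme` member and the `path` argument.
object AttachmentUriSketch {
  // Small payloads are inlined: "mem:" followed by URL-safe base64 of the bytes.
  val inlined: Uri = Uri(s"$MemScheme:${Base64.getUrlEncoder.encodeToString("hello".getBytes)}")
  // Larger payloads are stored externally as "<attachmentScheme>:<attachment name>".
  val external: Uri = Uri("s3:0cf85287-4d23-4a7f-90a5-b0d7d21ea6e5")
}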
|
starpit/openwhisk
|
common/scala/src/main/scala/org/apache/openwhisk/core/database/AttachmentSupport.scala
|
Scala
|
apache-2.0
| 7,065
|
/*
* Copyright (C) 2015 Stratio (http://stratio.com)
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.stratio.sparta.sdk.pipeline.filter
import akka.event.slf4j.SLF4JLogging
import com.stratio.sparta.sdk.pipeline.schema.TypeOp._
import com.stratio.sparta.sdk.pipeline.schema.TypeOp
import com.stratio.sparta.sdk.properties.JsoneyStringSerializer
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.StructType
import org.json4s.jackson.JsonMethods._
import org.json4s.{DefaultFormats, Formats}
import scala.util.{Failure, Success, Try}
trait Filter extends SLF4JLogging {
@transient
implicit val json4sJacksonFormats: Formats = DefaultFormats + new JsoneyStringSerializer()
def filterInput: Option[String]
val schema: StructType
def defaultCastingFilterType: TypeOp
val filters = filterInput match {
case Some(jsonFilters) => parse(jsonFilters).extract[Seq[FilterModel]]
case None => Seq()
}
def applyFilters(row: Row): Option[Map[String, Any]] = {
val mapRow = schema.fieldNames.zip(row.toSeq).toMap
if (mapRow.map(inputField => doFiltering(inputField, mapRow)).forall(result => result))
Option(mapRow)
else None
}
private def doFiltering(inputField: (String, Any),
inputFields: Map[String, Any]): Boolean = {
filters.map(filter =>
if (inputField._1 == filter.field && (filter.fieldValue.isDefined || filter.value.isDefined)) {
val filterType = filterCastingType(filter.fieldType)
val inputValue = TypeOp.transformAnyByTypeOp(filterType, inputField._2)
val filterValue = filter.value.map(value => TypeOp.transformAnyByTypeOp(filterType, value))
val fieldValue = filter.fieldValue.flatMap(fieldValue =>
inputFields.get(fieldValue).map(value => TypeOp.transformAnyByTypeOp(filterType, value)))
applyFilterCauses(filter, inputValue, filterValue, fieldValue)
}
else true
).forall(result => result)
}
private def filterCastingType(fieldType: Option[String]): TypeOp =
fieldType match {
case Some(typeName) => getTypeOperationByName(typeName, defaultCastingFilterType)
case None => defaultCastingFilterType
}
//scalastyle:off
private def applyFilterCauses(filter: FilterModel,
value: Any,
filterValue: Option[Any],
dimensionValue: Option[Any]): Boolean = {
val valueOrdered = value
val filterValueOrdered = filterValue.map(filterVal => filterVal)
val dimensionValueOrdered = dimensionValue.map(dimensionVal => dimensionVal)
Seq(
if (filter.value.isDefined && filterValue.isDefined && filterValueOrdered.isDefined)
Try(doFilteringType(filter.`type`, valueOrdered, filterValueOrdered.get)) match {
case Success(filterResult) =>
filterResult
case Failure(e) =>
log.error(e.getLocalizedMessage)
true
}
else true,
if (filter.fieldValue.isDefined && dimensionValue.isDefined && dimensionValueOrdered.isDefined)
Try(doFilteringType(filter.`type`, valueOrdered, dimensionValueOrdered.get)) match {
case Success(filterResult) =>
filterResult
case Failure(e) =>
log.error(e.getLocalizedMessage)
true
}
else true
).forall(result => result)
}
private def doFilteringType(filterType: String, value: Any, filterValue: Any): Boolean = {
import OrderingAny._
filterType match {
case "=" => value equiv filterValue
case "!=" => !(value equiv filterValue)
case "<" => value < filterValue
case "<=" => value <= filterValue
case ">" => value > filterValue
case ">=" => value >= filterValue
}
}
}
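// Illustrative sketch, not part of the original file: a hypothetical `filterInput`
// JSON value for this trait. The FilterModel field names (field, type, value,
// fieldValue, fieldType) are inferred from their usage above and may not match the
// real model exactly.
object FilterInputSketch {
  val filterInput: Option[String] = Some(
    """[{"field": "price", "type": ">=", "value": 10},
      | {"field": "origin", "type": "!=", "fieldValue": "destination"}]""".stripMargin)
}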
|
fjsc/sparta
|
sdk/src/main/scala/com/stratio/sparta/sdk/pipeline/filter/Filter.scala
|
Scala
|
apache-2.0
| 4,346
|
package edu.gemini.too.event.service
import edu.gemini.pot.sp._
import edu.gemini.pot.spdb.{ProgramEvent, ProgramEventListener, IDBTriggerAction, IDBDatabaseService}
import edu.gemini.spModel.core.Site
import edu.gemini.spModel.gemini.obscomp.SPProgram
import edu.gemini.spModel.obs.{ObservationStatus, ObsSchedulingReport}
import ObservationStatus.{READY, ON_HOLD}
import edu.gemini.spModel.too.{Too, TooType}
import edu.gemini.too.event.api.{TooEvent, TooService => TooServiceApi, TooPublisher, TooTimestamp}
import edu.gemini.util.security.permission.ProgramPermission
import edu.gemini.util.security.policy.ImplicitPolicy
import scala.collection.JavaConverters._
import scala.concurrent.ops.spawn
import java.security.Principal
object TooService {
val DefaultEventRetentionTime = 30 * 60 * 1000
}
/**
* The TooService is notified by the database whenever the TooCondition matches
 * a change event. It creates a corresponding TooEvent, publishes it to any local
* subscribers and holds on to it (for a limited time) in case remote clients
* should poll for updates.
*
 * @param eventRetentionTime minimum time that ToO events will be kept
*/
class TooService(db: IDBDatabaseService, val site: Site, val eventRetentionTime: Long = TooService.DefaultEventRetentionTime) extends IDBTriggerAction with ProgramEventListener[ISPProgram] with TooPublisher { outer =>
private var timestamp = TooTimestamp.now
private var recentEvents: List[TooEvent] = Nil
def lastEventTimestamp: TooTimestamp = synchronized { timestamp }
def serviceApi(ps: java.util.Set[Principal]): TooServiceApi =
new TooServiceApi {
def events(since: TooTimestamp): java.util.List[TooEvent] = {
def isVisible(evt: TooEvent): Boolean =
ImplicitPolicy.forJava.hasPermission(db, ps, new ProgramPermission.Read(evt.report.getObservationId.getProgramID))
(recentEvents takeWhile { _.timestamp > since} filter { isVisible }).reverse.asJava
}
def lastEventTimestamp: TooTimestamp =
outer.lastEventTimestamp
def eventRetentionTime: Long =
outer.eventRetentionTime
}
private def trigger(obsList: List[ISPObservation]) {
val time = TooTimestamp.now
val events = obsList map { obs =>
val report = new ObsSchedulingReport(obs, site, time.value)
TooEvent(report, Too.get(obs), time)
}
val cutoff = time.less(eventRetentionTime)
synchronized {
recentEvents = events ++ (recentEvents filter { _.timestamp > cutoff })
timestamp = time
}
if (obsList.size > 0) spawn {
events foreach { evt => publish(evt) }
}
}
def doTriggerAction(change: SPCompositeChange, handback: Object) {
// This solution assumes you will never be able to process multiple events in
// the same millisecond. If you could and a client happened to poll in the
// middle of doing that, it would miss subsequent events in the same
// millisecond.
val obs = handback.asInstanceOf[ISPObservation]
trigger(List(obs).filter(o => Option(o.getObservationID).isDefined))
}
def programReplaced(pme: ProgramEvent[ISPProgram]) {
def isTooProgram(p: ISPProgram) =
if (Option(p.getProgramID).isEmpty) false
else {
val dObj = p.getDataObject.asInstanceOf[SPProgram]
dObj.isActive && dObj.getTooType != TooType.none
}
def obsStatus(n: ISPProgram): Seq[(SPNodeKey, ObservationStatus)] = {
val obsList = n.getAllObservations.asScala
obsList.map { o => o.getNodeKey -> ObservationStatus.computeFor(o) }
}
def obsList(n: ISPProgram, ks: Set[SPNodeKey]): List[ISPObservation] =
if (ks.size == 0) Nil // just a shortcut ...
else n.getAllObservations.asScala.filter(o => ks.contains(o.getNodeKey)).toList
val oldProg = pme.getOldProgram
val newProg = pme.getNewProgram
if (isTooProgram(oldProg) && isTooProgram(newProg)) {
val oldStatuses = obsStatus(oldProg)
val newStatuses = obsStatus(newProg)
def keySet(tups: Seq[(SPNodeKey, ObservationStatus)], status: Option[ObservationStatus] = None): Set[SPNodeKey] =
status.fold(tups)(s => tups.filter(_._2 == s)).map(_._1).toSet
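      // Trigger on observations that moved from ON_HOLD to READY, plus READY
      // observations that did not exist in the old program version.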
val allOldKeys = keySet(oldStatuses)
val oldOnHold = keySet(oldStatuses, Some(ON_HOLD))
val newReady = keySet(newStatuses, Some(READY))
val transitionTrigger = oldOnHold & newReady
val creationTrigger = newReady.filterNot(allOldKeys.contains)
trigger(obsList(newProg, transitionTrigger ++ creationTrigger))
}
}
def programAdded(pme: ProgramEvent[ISPProgram]) { /* ignore */ }
def programRemoved(pme: ProgramEvent[ISPProgram]) { /* ignore */ }
}
|
arturog8m/ocs
|
bundle/edu.gemini.too.event/src/main/scala/edu/gemini/too/event/service/TooService.scala
|
Scala
|
bsd-3-clause
| 4,718
|
package org.opennetworkinsight
import org.opennetworkinsight.netflow.FlowWordCreation
import org.scalatest.{FlatSpec, Matchers}
class FlowWordCreationTest extends FlatSpec with Matchers {
// Replace ports in index 10 and 11
val rowSrcIPLess = Array("2016-05-05 12:59:32", "2016", "5", "5", "12", "59", "32", "0", "10.0.2.115", "172.16.0.107",
"-", "-", "TCP", ".AP...", "0", "0", "32", "46919", "0", "0", "2", "3", "0", "0", "0", "0",
"10.219.100.251", "12.99222222", "7", "4", "7")
val rowDstIPLess = Array("2016-05-05 12:59:32", "2016", "5", "5", "12", "59", "32", "0", "172.16.0.107", "10.0.2.115",
"-", "-", "TCP", ".AP...", "0", "0", "32", "46919", "0", "0", "2", "3", "0", "0", "0", "0",
"10.219.100.251", "12.99222222", "7", "4", "7")
// 1. Test when sip is less than dip and sip is not 0 and dport is <= 1024 & sport > 1024 and min(dport, sport) !=0 +
"adjustPort" should "create word with ip_pair as sourceIp-destIp, port is dport and dest_word direction is -1" in {
rowSrcIPLess(10) = "2132"
rowSrcIPLess(11) = "23"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "23.0"
result(3) shouldBe "-1_23.0_7.0_7.0_4.0"
result(2) shouldBe "23.0_7.0_7.0_4.0"
}
// 2. Test when sip is less than dip and sip is not 0 and sport is <= 1024 & dport > 1024 and min(dport, sport) !=0 +
it should "create word with ip_pair as sourceIp-destIp, port is sport and src_word direction is -1" in {
rowSrcIPLess(10) = "23"
rowSrcIPLess(11) = "2132"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "23.0"
result(3) shouldBe "23.0_7.0_7.0_4.0"
result(2) shouldBe "-1_23.0_7.0_7.0_4.0"
}
// 3. Test when sip is less than dip and sip is not 0 and dport and sport are > 1024 +
it should "create word with ip_pair as sourceIp-destIp, port is 333333.0 and both words direction is 1 (not showing)" in {
rowSrcIPLess(10) = "8392"
rowSrcIPLess(11) = "9874"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "333333.0"
result(3) shouldBe "333333.0_7.0_7.0_4.0"
result(2) shouldBe "333333.0_7.0_7.0_4.0"
}
// 4. Test when sip is less than dip and sip is not 0 and dport is 0 but sport is not +
it should "create word with ip_pair as sourceIp-destIp, port is sport and source_word direction is -1" in {
rowSrcIPLess(10) = "80"
rowSrcIPLess(11) = "0"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "80.0"
result(3) shouldBe "80.0_7.0_7.0_4.0"
result(2) shouldBe "-1_80.0_7.0_7.0_4.0"
}
// 5. Test when sip is less than dip and sip is not 0 and sport is 0 but dport is not +
it should "create word with ip_pair as sourceIp-destIp, port is dport and dest_word direction is -1 II" in {
rowSrcIPLess(10) = "0"
rowSrcIPLess(11) = "43"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "43.0"
result(3) shouldBe "-1_43.0_7.0_7.0_4.0"
result(2) shouldBe "43.0_7.0_7.0_4.0"
}
// 6. Test when sip is less than dip and sip is not 0 and sport and dport are less or equal than 1024 +
it should "create word with ip_pair as sourceIp-destIp, port is 111111.0 and both words direction is 1 (not showing)" in {
rowSrcIPLess(10) = "1024"
rowSrcIPLess(11) = "80"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "111111.0"
result(3) shouldBe "111111.0_7.0_7.0_4.0"
result(2) shouldBe "111111.0_7.0_7.0_4.0"
}
// 7. Test when sip is less than dip and sip is not 0 and sport and dport are 0+
it should "create word with ip_pair as sourceIp-destIp, port is max(0,0) and both words direction is 1 (not showing)" in {
rowSrcIPLess(10) = "0"
rowSrcIPLess(11) = "0"
val result = FlowWordCreation.adjustPort(rowSrcIPLess(8), rowSrcIPLess(9), rowSrcIPLess(11).toInt, rowSrcIPLess(10).toInt,
rowSrcIPLess(29).toDouble, rowSrcIPLess(28).toDouble, rowSrcIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "0.0"
result(3) shouldBe "0.0_7.0_7.0_4.0"
result(2) shouldBe "0.0_7.0_7.0_4.0"
}
// 8. Test when sip is not less than dip and dport is <= 1024 & sport > 1024 and min(dport, sport) !=0+
it should "create word with ip_pair as destIp-sourceIp, port is dport and dest_word direction is -1" in {
rowDstIPLess(10) = "3245"
rowDstIPLess(11) = "43"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "43.0"
result(3) shouldBe "-1_43.0_7.0_7.0_4.0"
result(2) shouldBe "43.0_7.0_7.0_4.0"
}
// 9. Test when sip is not less than dip and sport is <= 1024 & dport > 1024 and min(dport, sport) !=0 +
it should "create word with ip_pair as destIp-sourceIp, port is sport and src_word direction is -1" in {
rowDstIPLess(10) = "80"
rowDstIPLess(11) = "2435"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "80.0"
result(3) shouldBe "80.0_7.0_7.0_4.0"
result(2) shouldBe "-1_80.0_7.0_7.0_4.0"
}
// 10. Test when sip is not less than dip and dport and sport are > 1024 +
it should "create word with ip_pair as destIp-sourceIp, port is 333333.0 and both words direction is 1 (not showing)" in {
rowDstIPLess(10) = "2354"
rowDstIPLess(11) = "2435"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "333333.0"
result(3) shouldBe "333333.0_7.0_7.0_4.0"
result(2) shouldBe "333333.0_7.0_7.0_4.0"
}
// 11. Test when sip is not less than dip and dport is 0 but sport is not +
it should "create word with ip_pair as destIp-sourceIp, port is sport and src_word direction is -1 II" in {
rowDstIPLess(10) = "80"
rowDstIPLess(11) = "0"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "80.0"
result(3) shouldBe "80.0_7.0_7.0_4.0"
result(2) shouldBe "-1_80.0_7.0_7.0_4.0"
}
// 12. Test when sip is not less than dip and sport is 0 but dport is not +
it should "create word with ip_pair as destIp-sourceIp, port is dport and dest_word direction is -1 II" in {
rowDstIPLess(10) = "0"
rowDstIPLess(11) = "2435"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "2435.0"
result(3) shouldBe "-1_2435.0_7.0_7.0_4.0"
result(2) shouldBe "2435.0_7.0_7.0_4.0"
}
// 13. Test when sip is not less than dip and sport and dport are less or equal than 1024
it should "create word with ip_pair as destIp-sourceIp, port 111111.0 and both words direction is 1 (not showing)" in {
rowDstIPLess(10) = "80"
rowDstIPLess(11) = "1024"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "111111.0"
result(3) shouldBe "111111.0_7.0_7.0_4.0"
result(2) shouldBe "111111.0_7.0_7.0_4.0"
}
// 14. Test when sip is not less than dip and sport and dport are 0
it should "create word with ip_pair as destIp-sourceIp, port is max(0,0) and both words direction is 1 (not showing)" in {
rowDstIPLess(10) = "0"
rowDstIPLess(11) = "0"
val result = FlowWordCreation.adjustPort(rowDstIPLess(8), rowDstIPLess(9), rowDstIPLess(11).toInt, rowDstIPLess(10).toInt,
rowDstIPLess(29).toDouble, rowDstIPLess(28).toDouble, rowDstIPLess(30).toDouble)
result.length shouldBe 4
result(1) shouldBe "10.0.2.115 172.16.0.107"
result(0) shouldBe "0.0"
result(3) shouldBe "0.0_7.0_7.0_4.0"
result(2) shouldBe "0.0_7.0_7.0_4.0"
}
}
|
Open-Network-Insight/oni-ml
|
src/test/scala/org/opennetworkinsight/FlowWordCreationTest.scala
|
Scala
|
apache-2.0
| 10,363
|
/*
* Copyright (C) 2005, The Beangle Software.
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published
* by the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package org.beangle.webmvc.view.tag
import jakarta.servlet.http.HttpServletRequest
import org.beangle.commons.collection.page.Page
import org.beangle.web.action.context.ActionContext
import org.beangle.webmvc.execution.MappingHandler
import org.beangle.template.api.{AbstractModels, ComponentContext}
import org.beangle.webmvc.dispatch.ActionUriRender
import org.beangle.commons.text.escape.JavascriptEscaper
import java.io.StringWriter
import java.util as ju
class CoreModels(context: ComponentContext, request: HttpServletRequest) extends AbstractModels(context) {
def url(url: String): String = {
val mapping = ActionContext.current.handler.asInstanceOf[MappingHandler].mapping
this.context.services("uriRender").asInstanceOf[ActionUriRender].render(mapping, url)
}
def base: String = {
request.getContextPath
}
def now = new ju.Date
  /**
   * Rebuilds the request parameters as a query string (name=value pairs joined by '&'),
   * JS-escaping each value and skipping the x-requested-with parameter.
   */
def paramstring: String = {
val sw = new StringWriter()
val em = request.getParameterNames()
while (em.hasMoreElements()) {
val attr = em.nextElement()
val value = request.getParameter(attr)
if (!attr.equals("x-requested-with")) {
sw.write(attr)
sw.write('=')
sw.write(JavascriptEscaper.escape(value,false))
if (em.hasMoreElements()) sw.write('&')
}
}
sw.toString()
}
def isPage(data: Object) = data.isInstanceOf[Page[_]]
def text(name: String): String = {
context.textProvider(name, name)
}
def text(name: String, arg0: Object): String = {
context.textProvider(name, name, arg0)
}
def text(name: String, arg0: Object, arg1: Object): String = {
context.textProvider(name, name, arg0, arg1)
}
}
|
beangle/webmvc
|
core/src/main/scala/org/beangle/webmvc/view/tag/CoreModels.scala
|
Scala
|
lgpl-3.0
| 2,439
|
/*
* Copyright (C) 2005, The Beangle Software.
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published
* by the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package org.beangle.webmvc.view.tag
import org.beangle.commons.lang.Strings
import org.beangle.template.api.ComponentContext
import java.io.Writer
import org.beangle.template.api.{UIBean,ClosingUIBean,Themes}
class Head(context: ComponentContext) extends ActionClosingUIBean(context) {
var loadui = true
var compressed = true
override def evaluateParams(): Unit = {
val devMode = requestParameter("devMode")
if (null != devMode) compressed = !("true".equals(devMode) || "on".equals(devMode))
}
}
class Foot(context: ComponentContext) extends ClosingUIBean(context)
object Anchor {
val ReservedTargets: Set[String] = Set("_blank", "_top", "_self", "_parent")
}
class Anchor(context: ComponentContext) extends ActionClosingUIBean(context) {
var href: String = _
var target: String = _
var onclick: String = _
def reserved: Boolean = Anchor.ReservedTargets.contains(target)
override def evaluateParams(): Unit = {
this.href = render(this.href)
if (!reserved) {
if (null == onclick) {
if (null != target) {
onclick = Strings.concat("return bg.Go(this,'", target, "')")
target = null
} else {
onclick = "return bg.Go(this,null)"
}
}
}
}
override def doEnd(writer: Writer, body: String): Boolean = {
if (context.theme == Themes.Default) {
try {
writer.append("<a href=\"")
writer.append(href).append("\"")
if (null != id) {
writer.append(" id=\"").append(id).append("\"")
}
if (null != target) {
writer.append(" target=\"").append(target).append("\"")
}
if (null != onclick) {
writer.append(" onclick=\"").append(onclick).append("\"")
}
if (null != cssClass) {
writer.append(" class=\"").append(cssClass).append("\"")
}
writer.append(parameterString)
writer.append(">").append(body).append("</a>")
} catch {
case e: Exception =>
e.printStackTrace()
}
false
} else {
super.doEnd(writer, body)
}
}
}
|
beangle/webmvc
|
core/src/main/scala/org/beangle/webmvc/view/tag/html.scala
|
Scala
|
lgpl-3.0
| 2,831
|
package xyz.discretezoo.web.db.v1
import xyz.discretezoo.web.db.ZooPostgresProfile.api._
case class GraphVT(zooid: Int, vtIndex: Option[Int])
class GraphsVT(tag: Tag) extends Table[GraphVT](tag, "graph_cvt") {
def zooid: Rep[Int] = column[Int]("zooid", O.PrimaryKey)
def vtIndex = column[Option[Int]]("vt_index")
def * = (
zooid,
vtIndex
) <> ((GraphVT.apply _).tupled, GraphVT.unapply)
}
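// Illustrative sketch, not part of the original file: a typical Slick query against
// this table, relying on the ZooPostgresProfile.api._ import above.
object GraphVTQueriesSketch {
  val graphsVT = TableQuery[GraphsVT]
  // DBIO action selecting all rows whose vt_index column is present.
  val withIndex = graphsVT.filter(_.vtIndex.isDefined).result
}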
|
DiscreteZOO/DiscreteZOO-web
|
src/main/scala/xyz/discretezoo/web/db/v1/GraphVT.scala
|
Scala
|
mit
| 410
|
/*
* Copyright 2016 The BigDL Authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.intel.analytics.bigdl.dllib.nn
import com.intel.analytics.bigdl.dllib.tensor.Tensor
import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.dllib.utils.serializer.ModuleSerializationTest
import org.scalatest.{FlatSpec, Matchers}
import scala.util.Random
@com.intel.analytics.bigdl.tags.Parallel
class SpatialCrossMapLRNSpec extends FlatSpec with Matchers {
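  // The reference implementation below follows the across-channel LRN formula:
  // scale = 1 + (alpha / size) * sum of squared inputs over a window of `size`
  // neighbouring channels (clamped to valid channel indices), and
  // output = input * scale^(-beta).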
private def referenceLRNForwardAcrossChannels
(input: Tensor[Double], alpha: Double, beta: Double, size: Int): Tensor[Double] = {
val output = Tensor[Double]()
output.resizeAs(input)
val batch = input.size(1)
val channel = input.size(2)
val height = input.size(3)
val width = input.size(4)
for (n <- 0 until batch) {
for (c <- 0 until channel) {
for (h <- 0 until height) {
for (w <- 0 until width) {
var cStart = c - (size - 1) / 2
val cEnd = math.min(cStart + size, channel)
cStart = math.max(cStart, 0)
var scale = 1.0
for (i <- cStart until cEnd) {
val value = input.valueAt(n + 1, i + 1, h + 1, w + 1)
scale += value * value * alpha / size
}
output.setValue(n + 1, c + 1, h + 1, w + 1,
input.valueAt(n + 1, c + 1, h + 1, w + 1) * math.pow(scale, -beta))
}
}
}
}
output
}
private def referenceLRNForwardAcrossChannels
(input: Tensor[Float], alpha: Float, beta: Float, size: Int): Tensor[Float] = {
val output = Tensor[Float]()
output.resizeAs(input)
val batch = input.size(1)
val channel = input.size(2)
val height = input.size(3)
val width = input.size(4)
for (n <- 0 until batch) {
for (c <- 0 until channel) {
for (h <- 0 until height) {
for (w <- 0 until width) {
var cStart = c - (size - 1) / 2
val cEnd = math.min(cStart + size, channel)
cStart = math.max(cStart, 0)
var scale = 1.0f
for (i <- cStart until cEnd) {
val value = input.valueAt(n + 1, i + 1, h + 1, w + 1)
scale += value * value * alpha / size
}
output.setValue(n + 1, c + 1, h + 1, w + 1,
input.valueAt(n + 1, c + 1, h + 1, w + 1) * math.pow(scale, -beta).toFloat)
}
}
}
}
output
}
"LocalNormalizationAcrossChannels Forward Double" should "be correct" in {
val layer = new SpatialCrossMapLRN[Double](5, 0.0001, 0.75, 1.0)
val input = Tensor[Double](2, 7, 3, 3)
input.rand()
val outputRef = referenceLRNForwardAcrossChannels(input, 0.0001, 0.75, 5)
layer.forward(input)
val output = layer.forward(input)
output should be(outputRef)
}
"LocalNormalizationAcrossChannels Backward Double" should "be correct" in {
val layer = new SpatialCrossMapLRN[Double](5, 0.0001, 0.75, 1.0)
val input = Tensor[Double](2, 7, 3, 3)
input.rand()
val checker = new GradientChecker(1e-2, 1e-2)
checker.checkLayer(layer, input) should be(true)
}
"LocalNormalizationAcrossChannels Backward Float" should "be correct" in {
val layer = new SpatialCrossMapLRN[Float](5, 0.0001, 0.75, 1.0)
val input = Tensor[Float](2, 7, 3, 3)
input.rand()
val checker = new GradientChecker(1e-2, 1e-2)
checker.checkLayer[Float](layer, input) should be(true)
}
"LocalNormalizationAcrossChannels with Large Region Backward Double" should "be correct" in {
val layer = new SpatialCrossMapLRN[Double](15, 0.0001, 0.75, 1.0)
val input = Tensor[Double](2, 7, 3, 3)
input.rand()
val checker = new GradientChecker(1e-2, 1e-2)
checker.checkLayer(layer, input) should be(true)
}
"LocalNormalizationAcrossChannels with Large Region Backward Float" should "be correct" in {
val layer = new SpatialCrossMapLRN[Float](15, 0.0001, 0.75, 1.0)
val input = Tensor[Float](2, 7, 3, 3)
input.rand()
val checker = new GradientChecker(1e-2, 1e-2)
checker.checkLayer(layer, input) should be(true)
}
"LocalNormalizationAcrossChannels with Large Region Forward Double" should "be correct" in {
val layer = new SpatialCrossMapLRN[Double](15, 0.0001, 0.75, 1.0)
val input = Tensor[Double](2, 7, 3, 3)
input.rand()
val outputRef = referenceLRNForwardAcrossChannels(input, 0.0001, 0.75, 15)
val output = layer.forward(input)
output should be(outputRef)
}
"LocalNormalizationAcrossChannels Forward Float" should "be correct" in {
val layer = new SpatialCrossMapLRN[Float](5, 0.0001f, 0.75f, 1.0f)
val input = Tensor[Float](2, 7, 3, 3)
input.rand()
val outputRef = referenceLRNForwardAcrossChannels(input, 0.0001f, 0.75f, 5)
val output = layer.forward(input)
output should be(outputRef)
}
"LocalNormalizationAcrossChannels with Large Region Forward Float" should "be correct" in {
val layer = new SpatialCrossMapLRN[Float](15, 0.0001f, 0.75f, 1.0f)
val input = Tensor[Float](2, 7, 3, 3)
input.rand()
val outputRef = referenceLRNForwardAcrossChannels(input, 0.0001f, 0.75f, 15)
val output = layer.forward(input)
output should be(outputRef)
}
}
class SpatialCrossMapLRNSerialTest extends ModuleSerializationTest {
override def test(): Unit = {
val spatialCrossMapLRN = SpatialCrossMapLRN[Float](5, 0.01, 0.75, 1.0).
setName("spatialCrossMapLRN")
val input = Tensor[Float](2, 2, 2, 2).apply1( e => Random.nextFloat())
runSerializationTest(spatialCrossMapLRN, input)
}
}
|
intel-analytics/BigDL
|
scala/dllib/src/test/scala/com/intel/analytics/bigdl/dllib/nn/SpatialCrossMapLRNSpec.scala
|
Scala
|
apache-2.0
| 6,137
|
import annotation.experimental
@main
@experimental
def run(): Unit = f
@experimental
def f = 2
|
lampepfl/dotty
|
tests/pos-custom-args/no-experimental/i13848.scala
|
Scala
|
apache-2.0
| 97
|
package com.owtelse.parsers
import org.specs2.{ScalaCheck, Specification}
import org.specs2.matcher.ThrownExpectations
import java.lang.String
import org.scalacheck.Gen
/**
* Created by IntelliJ IDEA.
* User: robertk
*/
trait CliParserTest extends Specification with ScalaCheck with ThrownExpectations {
def recogniseShortFlagNames = clips.recogniseShortFlagNames
def recogniseLongFlagNames = clips.recogniseLongFlagNames
def parseShortFlags = clips.parseShorFlags
def parseLongFlags = clips.parseLongFlags
def recogniseShortArgFlagNames = clips.recogniseShortArgFlagNames
def parseShortArgFlags = clips.parseShortArgFlags
def oops = clips.oops
/* enable fixtures, and actually run the tests */
object clips {
import parseTestHelper._
def recogniseShortFlagNames = check { propRecogniseShortFlagNames }
def parseShorFlags = check { propParseShortFlag }
def recogniseLongFlagNames = check { propRecogniseLongFlagNames }
def parseLongFlags = check { propParseLongFlag }
def recogniseShortArgFlagNames = check { propRecogniseShortArgFlagNames }
def parseShortArgFlags = check { propParseShortArgFlag }
def oops = oops1
}
}
trait CliParserFixture {
import com.owtelse.knownFlags._;
val flagPrefix = "-"
//val knownShortFlags = Set("p", "t", "d")
//val knownLongFlags = Set("lax")
}
object parseTestHelper extends CLIParser with FlagGenerator {
def parse(p: Parser[Any])(i: String) = {
parseAll(p, i)
}
import org.scalacheck.Prop._
import com.owtelse.knownFlags._;
def propParseLongFlag = propParse(genLongFlag)(longFlg)
def propParseShortFlag = propParse(genShortFlag)(shortFlg)
def propParseShortArgFlag = propParse(genShortArgFlag)(shortFlgArg)
def propRecogniseLongFlagNames = propKnownLongFlagnameParses && propNotKnownLongFlagnameFailsParse
def propRecogniseShortFlagNames = propKnownShortFlagnameParses && propNotKnownShortFlagnameFailsParse
def propRecogniseShortArgFlagNames = propKnownShortArgFlagnameParses && propNotKnownShortArgFlagnameParses
def propKnownLongFlagnameParses = propKnownFlagnameParses(genLongFlagName)(longFlagName)
def propKnownShortFlagnameParses = propKnownFlagnameParses(genShortFlagName)(shortFlgName)
def propKnownShortArgFlagnameParses = propKnownFlagnameParses(genShortArgFlagName)(shortArgFlgName)
//oops list of specific chars I try to fail the parse with.
def oops1 = {
val supplimentaryCharString = new String(Character.toChars(0x495f))
parse(longFlagName)(supplimentaryCharString) match {
case x: Failure => {
// println("--- OOoooPS good parse fname(" + supplimentaryCharString.size + "):"+supplimentaryCharString + " :- " + stringCodePointschars(supplimentaryCharString))
!knownShortFlags.contains(supplimentaryCharString)
}
case _ => {
println("--- OOoooPS fname(\\"+fname.size+\\"):"+supplimentaryCharString + " :- " + stringCodePointschars(supplimentaryCharString))
false
}
}
}
/**
* Simple test for parse Success or fail
*/
def propParse[T](flagGen: Gen[String])(p: Parser[T]) = forAll(flagGen) {
flag: String =>
parse(p)(flag) match {
case _: Success[_] => true
case _ => false
}
}
def propKnownFlagnameParses(flagNameGen: Gen[String])(p: Parser[Any]) = forAll(flagNameGen) {
fname: String =>
val parseResult = parse(p)(fname)
println("------>>> WTF parsed "+parseResult)
parseResult match {
case x: Success[_] => {
println("--->> parse val =" + x.get)
true
}
case _ => false
}
}
  //limit the generated arbitrary non-flag strings to 2 chars, i.e. up to and slightly bigger than the known strings but not so big as to waste generation time
def propNotKnownShortFlagnameFailsParse = forAll(genSizedNotShortFlagName(2)) {
fname: String =>
parse(shortFlgName)(fname) match {
case x: Failure => {
//println("--- good, parse fail fname("+fname.size+"):"+fname + " :- " + stringCodePointschars(fname))
!knownShortFlags.contains(fname)
}
case _ => {
println("--- Aaarh propNotKnownFlagnameParses fname(" + fname.size + "):"+fname + " :- " + stringCodePointschars(fname))
false
}
}
}
def propNotKnownShortArgFlagnameParses = forAll(genSizedNotShortArgFlagName(2)) {
fname: String =>
parse(shortFlgName)(fname) match {
case x: Failure => {
//println("--- good, parse fail fname("+fname.size+"):"+fname + " :- " + stringCodePointschars(fname))
!knownShortFlags.contains(fname)
}
case _ => {
println("--- Aaarh propNotKnownFlagnameParses fname(" + fname.size + "):"+fname + " :- " + stringCodePointschars(fname))
false
}
}
}
  //limit the generated arbitrary non-flag strings to 3 chars, i.e. up to and slightly bigger than the known strings but not so big as to waste generation time
def propNotKnownLongFlagnameFailsParse = forAll(genSizedNotLongFlagName(3)) {
fname: String =>
parse(longFlagName)(fname) match {
case x: Failure => {
// println("--- good, parse fail fname("+fname.size+"):"+fname + " :- " + stringCodePointschars(fname))
!knownLongFlags.contains(fname)
}
case _ => {
println("--- Aaarh propNotKnownLongFlagnameFailsParse fname("+ fname.size +"):"+fname + " :- " + stringCodePointschars(fname))
false
}
}
}
// debug function. show me the unicode of chars
def stringCodePointschars(s: String): String = {
val ret = Predef.augmentString(s).flatMap{ c =>
val codepoint = Character.codePointAt(Array(c),0)
val cs: Array[Char] = Character.toChars(codepoint)
cs match {
        case Array(one) => cs.map(c => "\\u%s ".format(one.toInt.toHexString))
case _ => " what the fek?---->" + cs + "<----"
}
}
ret.mkString
}
}
/**
* Generates arbitrary Flags
*/
trait FlagGenerator extends FlagNameGen {
import org.scalacheck.Gen
def genLongFlag = genFlag(flagPrefix+flagPrefix)(genLongFlagName)
def genShortFlag = genFlag(flagPrefix)(genShortFlagName)
def genShortArgFlag = genArgFlag(flagPrefix)(genShortArgFlagName)
def genFlag(flagPrefix: String)(flagNameGen: Gen[String]) = for {
name <- flagNameGen
flag = flagPrefix ++ name
} yield flag
def genArgFlag(flagPrefix: String)(flagNameGen: Gen[String]) = for {
name <- flagNameGen
//TODO gen the args too
flag = flagPrefix ++ name ++ " " ++ "a:b:c"
} yield flag
}
/**
* Generates arbitrary Flagnames and !Flagnames
*/
trait FlagNameGen extends CliParserFixture {
import scalaz._
import Scalaz._
import org.scalacheck.{Gen, Arbitrary}
import Arbitrary.arbitrary
import com.owtelse.knownFlags._;
def genShortFlagName = genFlagName(knownShortFlags.values.toSeq)
def genLongFlagName = genFlagName(knownLongFlags.values.toSeq)
//knownShortArgFlagName is a Map[String, (List[String] => Flag[String])]
//ie a container of Functions... mmm sounds like an applicative..
  //genFlagName expects a Map[String, Flag[String]]
// if I can use Applicative functor to apply the funcs in container to a List of Strings then I'll have a Container of
// flags which is what I need, but Map is kind ** I need * ie M[A] not M[A,B] so a little type lambda should fix that up
//Then I can applic it
//do type lambda to make M[A,B] look like M[B] with A fixed.
var sArgFlags = knownShortArgFlags.values.toList
var theArgFlags = sArgFlags ∘ (f => f(List("dummy arg")))
def genShortArgFlagName = genFlagName(theArgFlags)
def genSizedNotShortFlagName(n: Int) = Gen.resize(n, genNotFlagName(knownShortFlags.values.toSeq))
def genSizedNotLongFlagName(n: Int) = Gen.resize(n, genNotFlagName(knownLongFlags.values.toSeq))
def genSizedNotShortArgFlagName(n: Int) = Gen.resize(n, genNotFlagName(theArgFlags))
def genFlagName(knownFlagValues: Seq[Flag[String]]): Gen[String] = for {
s <- Gen.oneOf(knownFlagValues)
} yield { knownFlagValues.foreach(x => print(" " + x.symbol + " :")); println("----->>>> Generated known flag..." + s.symbol); s.symbol}
//Not a known flag for negative testing
def genNotFlagName(knownFlags: Seq[Flag[String]]): Gen[String] = Gen.sized {
size => for {
s <- arbitrary[String]
cleaned = s.filter{c => val x = char2Character(c)
!knownFlags.contains (new String(x.toString)) && x != null }
} yield cleaned
}
}
|
karlroberts/splat2
|
src/test/scala/com/owtelse/parsers/CliParserTest.scala
|
Scala
|
bsd-3-clause
| 8,604
|
/*
* InputNeuron.scala
* textnoise
*
* Created by Илья Михальцов on 2014-05-13.
* Copyright 2014 Илья Михальцов. All rights reserved.
*/
package com.morphe.noise.backend.neurals
import scala.collection._
class InputNeuron (input: Double, connections: GenSeq[Double], currentLayer: Int, index: Int)
extends Neuron(connections, currentLayer, index) {
override lazy val y: Double = input
override lazy val v: Double = 0.0
override def sameNeuronWithConnections (conns: GenSeq[Double]) = {
val n = new InputNeuron(input, conns, currentLayer, index)
n.state = state
n
}
def basicNeuronWithSameConnections () = {
new Neuron(connections, currentLayer, index)
}
}
|
morpheby/textnoise
|
src/main/scala/com/morphe/noise/backend/neurals/InputNeuron.scala
|
Scala
|
gpl-2.0
| 759
|
package org.thp.cortex.controllers
import javax.inject.{ Inject, Singleton }
import scala.collection.immutable
import scala.concurrent.{ ExecutionContext, Future }
import scala.concurrent.duration.{ DurationLong, FiniteDuration }
import scala.util.Random
import play.api.{ Configuration, Logger }
import play.api.http.Status
import play.api.libs.json.Json
import play.api.mvc.{ AbstractController, Action, AnyContent, ControllerComponents }
import akka.actor.{ ActorSystem, Props }
import akka.util.Timeout
import akka.pattern.ask
import org.thp.cortex.models.Roles
import org.thp.cortex.services.StreamActor
import org.thp.cortex.services.StreamActor.StreamMessages
import org.elastic4play.Timed
import org.elastic4play.controllers._
import org.elastic4play.services.{ AuxSrv, EventSrv, MigrationSrv }
@Singleton
class StreamCtrl(
cacheExpiration: FiniteDuration,
refresh: FiniteDuration,
nextItemMaxWait: FiniteDuration,
globalMaxWait: FiniteDuration,
authenticated: Authenticated,
renderer: Renderer,
eventSrv: EventSrv,
auxSrv: AuxSrv,
migrationSrv: MigrationSrv,
components: ControllerComponents,
implicit val system: ActorSystem,
implicit val ec: ExecutionContext) extends AbstractController(components) with Status {
@Inject() def this(
configuration: Configuration,
authenticated: Authenticated,
renderer: Renderer,
eventSrv: EventSrv,
auxSrv: AuxSrv,
migrationSrv: MigrationSrv,
components: ControllerComponents,
system: ActorSystem,
ec: ExecutionContext) =
this(
configuration.getMillis("stream.longpolling.cache").millis,
configuration.getMillis("stream.longpolling.refresh").millis,
configuration.getMillis("stream.longpolling.nextItemMaxWait").millis,
configuration.getMillis("stream.longpolling.globalMaxWait").millis,
authenticated,
renderer,
eventSrv,
auxSrv,
migrationSrv,
components,
system,
ec)
private[StreamCtrl] lazy val logger = Logger(getClass)
/**
   * Create a new stream entry: generate a random id, spawn a StreamActor named "stream-<id>" and return the id to the client
*/
@Timed("controllers.StreamCtrl.create")
def create: Action[AnyContent] = authenticated(Roles.read) {
val id = generateStreamId()
system.actorOf(Props(
classOf[StreamActor],
cacheExpiration,
refresh,
nextItemMaxWait,
globalMaxWait,
eventSrv,
auxSrv), s"stream-$id")
Ok(id)
}
val alphanumeric: immutable.IndexedSeq[Char] = ('a' to 'z') ++ ('A' to 'Z') ++ ('0' to '9')
private[controllers] def generateStreamId() = Seq.fill(10)(alphanumeric(Random.nextInt(alphanumeric.size))).mkString
private[controllers] def isValidStreamId(streamId: String): Boolean = {
streamId.length == 10 && streamId.forall(alphanumeric.contains)
}
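  // For example, a generated id like "aZ3k9QwB2x" (10 alphanumeric characters) is
  // accepted, while shorter ids or ids containing other characters are rejected.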
/**
* Get events linked to the identified stream entry
* This call waits up to "refresh", if there is no event, return empty response
*/
@Timed("controllers.StreamCtrl.get")
def get(id: String): Action[AnyContent] = Action.async { implicit request ⇒
implicit val timeout: Timeout = Timeout(refresh + globalMaxWait + 1.second)
if (!isValidStreamId(id)) {
Future.successful(BadRequest("Invalid stream id"))
}
else {
val futureStatus = authenticated.expirationStatus(request) match {
case ExpirationError if !migrationSrv.isMigrating ⇒ authenticated.getFromApiKey(request).map(_ ⇒ OK)
case _: ExpirationWarning ⇒ Future.successful(220)
case _ ⇒ Future.successful(OK)
}
futureStatus.flatMap { status ⇒
(system.actorSelection(s"/user/stream-$id") ? StreamActor.GetOperations) map {
case StreamMessages(operations) ⇒ renderer.toOutput(status, operations)
case m ⇒ InternalServerError(s"Unexpected message : $m (${m.getClass})")
}
}
}
}
@Timed("controllers.StreamCtrl.status")
def status = Action { implicit request ⇒
val status = authenticated.expirationStatus(request) match {
case ExpirationWarning(duration) ⇒ Json.obj("remaining" → duration.toSeconds, "warning" → true)
case ExpirationError ⇒ Json.obj("remaining" → 0, "warning" → true)
case ExpirationOk(duration) ⇒ Json.obj("remaining" → duration.toSeconds, "warning" → false)
}
Ok(status)
}
}
|
CERT-BDF/Cortex
|
app/org/thp/cortex/controllers/StreamCtrl.scala
|
Scala
|
agpl-3.0
| 4,476
|
package chandu0101.scalajs.react.components.util
object InputTypes {
val NUMBER = "number"
val TEXT = "text"
val SEARCH = "search"
val EMAIL = "email"
val TEL = "tel"
val DATE = "date"
}
|
mproch/scalajs-react-components
|
core/src/main/scala/chandu0101/scalajs/react/components/util/InputTypes.scala
|
Scala
|
apache-2.0
| 207
|