Q: How do I send an email attachment using the designated client, programmatically from Java? I'd like to encourage users of our RCP application to send problem details to our support department. To this end, I've added a "Contact support" widget to our standard error dialogue. I've managed to use URI headers to send a stacktrace using Java 6's JDIC call: Desktop.getDesktop().mail(java.net.URI). This fires up the user's mail client, ready for them to add their comments and hit send. I like firing up the email client, because it's what the user is used to, it tells support a whole lot about the user (sigs, contact details etc.), and I don't really want to ship with JavaMail. What I'd like to do is attach the log file and the stacktrace as a file, so there is no maximum length requirement, the user sees a nice, clean-looking email, and the support department has a lot more information to work with. Can I do this with the approach I'm taking? Or is there a better way?

Edit: I'm in an OSGi context, so bundling JDIC would be necessary. If possible, I'd like to ship with as few dependencies as possible, and bundling up JDIC for multiple platforms does not sound fun, especially for such a small feature. JavaMail may be suitable, but for the fact that this will be on the desktops of our corporate clients. The setup/discovery of configuration would have to be transparent, automatic and reliable. Regarding JavaMail, configuration seems to be manual only. Is this the case?

The answer I like most is using Desktop.open() on an *.eml file. Unfortunately Outlook Express (rather than Outlook) opens EML files. I have no idea whether it is usual or default for Windows to be configured to open EML files like this. Is this usual? Or is there another text-based format that a) is easy to generate, and b) opens by default in the same email client the users are already using?

A: You could save a temporary .eml file and call Desktop.getDesktop().open(emlFile). Edit: As you point out, this will unfortunately open Outlook Express instead of Outlook. However, if you have Windows Live Mail installed, it will use that.

A: If you're using JDK 6 (you really should), the Desktop API is now part of the JRE. See http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/desktop_api/ for more information.

A: As a completely different way of handling the same problem, we use a bug tracker with an XML-RPC interface, and our (RCP also, btw) app talks to that using a custom submission dialogue. It means we can send the log files to help diagnose the problem without the user having to find them. I'm sure most bug trackers have something like this available. We use Jira, and it works great (apparently, they've just released a free Personal version that makes it easy to try).

A: Using that method, you can set the subject line and body text with a URI like mailto:me@here.com?SUBJECT=Support mail&BODY=This is a support mail. However, the length of the subject and body text will have some limitations, and there is no way I can think of to attach a file using this method or anything similar (without adding JavaMail to your app).

A: JDIC may not always be available on your user's platform. A good way to do this is to use the JavaMail API.
You can send a multi-part e-mail message as explained in this tutorial by SUN: Sending Attachments (a minimal sketch of this approach appears after the final answer below).

A:
// Opens the default Windows mail client via rundll32's FileProtocolHandler;
// subject, cc, bcc, body and attachment are passed in the mailto URI.
public class TestMail {
    public static void main(String[] args) {
        try {
            Runtime.getRuntime().exec(
                new String[] {"rundll32", "url.dll,FileProtocolHandler",
                    "mailto:a@a.de?subject=someSubject&cc=a@a.de&bcc=a@a.de&body=someBodyText&Attach=c:\\test\\test.doc"},
                null
            );
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
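Here is that minimal sketch of the JavaMail multipart approach. It is illustrative only: the SMTP host, addresses, and attachment path are placeholder assumptions, and, as the question notes, the mail session configuration must be supplied manually.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

public class SupportMailSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // assumed SMTP host

        Session session = Session.getInstance(props);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("user@example.com")); // placeholder
        message.setRecipients(Message.RecipientType.TO, "support@example.com"); // placeholder
        message.setSubject("Problem report");

        // First part: the visible body text.
        MimeBodyPart body = new MimeBodyPart();
        body.setText("Stack trace and log file attached.");

        // Second part: the log file as an attachment.
        MimeBodyPart attachment = new MimeBodyPart();
        attachment.attachFile("C:/logs/app.log"); // placeholder path

        MimeMultipart multipart = new MimeMultipart();
        multipart.addBodyPart(body);
        multipart.addBodyPart(attachment);
        message.setContent(multipart);

        Transport.send(message);
    }
}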
{ "language": "en", "url": "https://stackoverflow.com/questions/81862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it possible to print a variable's type in standard C++? For example: int a = 12; cout << typeof(a) << endl; Expected output: int A: The other answers involving RTTI (typeid) are probably what you want, as long as: * *you can afford the memory overhead (which can be considerable with some compilers) *the class names your compiler returns are useful The alternative, (similar to Greg Hewgill's answer), is to build a compile-time table of traits. template <typename T> struct type_as_string; // declare your Wibble type (probably with definition of Wibble) template <> struct type_as_string<Wibble> { static const char* const value = "Wibble"; }; Be aware that if you wrap the declarations in a macro, you'll have trouble declaring names for template types taking more than one parameter (e.g. std::map), due to the comma. To access the name of the type of a variable, all you need is template <typename T> const char* get_type_as_string(const T&) { return type_as_string<T>::value; } A: C++11 update to a very old question: Print variable type in C++. The accepted (and good) answer is to use typeid(a).name(), where a is a variable name. Now in C++11 we have decltype(x), which can turn an expression into a type. And decltype() comes with its own set of very interesting rules. For example decltype(a) and decltype((a)) will generally be different types (and for good and understandable reasons once those reasons are exposed). Will our trusty typeid(a).name() help us explore this brave new world? No. But the tool that will is not that complicated. And it is that tool which I am using as an answer to this question. I will compare and contrast this new tool to typeid(a).name(). And this new tool is actually built on top of typeid(a).name(). The fundamental issue: typeid(a).name() throws away cv-qualifiers, references, and lvalue/rvalue-ness. For example: const int ci = 0; std::cout << typeid(ci).name() << '\n'; For me outputs: i and I'm guessing on MSVC outputs: int I.e. the const is gone. This is not a QOI (Quality Of Implementation) issue. The standard mandates this behavior. What I'm recommending below is: template <typename T> std::string type_name(); which would be used like this: const int ci = 0; std::cout << type_name<decltype(ci)>() << '\n'; and for me outputs: int const <disclaimer> I have not tested this on MSVC. </disclaimer> But I welcome feedback from those who do. The C++11 Solution I am using __cxa_demangle for non-MSVC platforms as recommend by ipapadop in his answer to demangle types. But on MSVC I'm trusting typeid to demangle names (untested). And this core is wrapped around some simple testing that detects, restores and reports cv-qualifiers and references to the input type. #include <type_traits> #include <typeinfo> #ifndef _MSC_VER # include <cxxabi.h> #endif #include <memory> #include <string> #include <cstdlib> template <class T> std::string type_name() { typedef typename std::remove_reference<T>::type TR; std::unique_ptr<char, void(*)(void*)> own ( #ifndef _MSC_VER abi::__cxa_demangle(typeid(TR).name(), nullptr, nullptr, nullptr), #else nullptr, #endif std::free ); std::string r = own != nullptr ? 
own.get() : typeid(TR).name(); if (std::is_const<TR>::value) r += " const"; if (std::is_volatile<TR>::value) r += " volatile"; if (std::is_lvalue_reference<T>::value) r += "&"; else if (std::is_rvalue_reference<T>::value) r += "&&"; return r; } The Results With this solution I can do this: int& foo_lref(); int&& foo_rref(); int foo_value(); int main() { int i = 0; const int ci = 0; std::cout << "decltype(i) is " << type_name<decltype(i)>() << '\n'; std::cout << "decltype((i)) is " << type_name<decltype((i))>() << '\n'; std::cout << "decltype(ci) is " << type_name<decltype(ci)>() << '\n'; std::cout << "decltype((ci)) is " << type_name<decltype((ci))>() << '\n'; std::cout << "decltype(static_cast<int&>(i)) is " << type_name<decltype(static_cast<int&>(i))>() << '\n'; std::cout << "decltype(static_cast<int&&>(i)) is " << type_name<decltype(static_cast<int&&>(i))>() << '\n'; std::cout << "decltype(static_cast<int>(i)) is " << type_name<decltype(static_cast<int>(i))>() << '\n'; std::cout << "decltype(foo_lref()) is " << type_name<decltype(foo_lref())>() << '\n'; std::cout << "decltype(foo_rref()) is " << type_name<decltype(foo_rref())>() << '\n'; std::cout << "decltype(foo_value()) is " << type_name<decltype(foo_value())>() << '\n'; } and the output is: decltype(i) is int decltype((i)) is int& decltype(ci) is int const decltype((ci)) is int const& decltype(static_cast<int&>(i)) is int& decltype(static_cast<int&&>(i)) is int&& decltype(static_cast<int>(i)) is int decltype(foo_lref()) is int& decltype(foo_rref()) is int&& decltype(foo_value()) is int Note (for example) the difference between decltype(i) and decltype((i)). The former is the type of the declaration of i. The latter is the "type" of the expression i. (expressions never have reference type, but as a convention decltype represents lvalue expressions with lvalue references). Thus this tool is an excellent vehicle just to learn about decltype, in addition to exploring and debugging your own code. In contrast, if I were to build this just on typeid(a).name(), without adding back lost cv-qualifiers or references, the output would be: decltype(i) is int decltype((i)) is int decltype(ci) is int decltype((ci)) is int decltype(static_cast<int&>(i)) is int decltype(static_cast<int&&>(i)) is int decltype(static_cast<int>(i)) is int decltype(foo_lref()) is int decltype(foo_rref()) is int decltype(foo_value()) is int I.e. Every reference and cv-qualifier is stripped off. C++14 Update Just when you think you've got a solution to a problem nailed, someone always comes out of nowhere and shows you a much better way. :-) This answer from Jamboree shows how to get the type name in C++14 at compile time. It is a brilliant solution for a couple reasons: * *It's at compile time! *You get the compiler itself to do the job instead of a library (even a std::lib). This means more accurate results for the latest language features (like lambdas). Jamboree's answer doesn't quite lay everything out for VS, and I'm tweaking his code a little bit. But since this answer gets a lot of views, take some time to go over there and upvote his answer, without which, this update would never have happened. 
#include <cstddef> #include <stdexcept> #include <cstring> #include <ostream> #ifndef _MSC_VER # if __cplusplus < 201103 # define CONSTEXPR11_TN # define CONSTEXPR14_TN # define NOEXCEPT_TN # elif __cplusplus < 201402 # define CONSTEXPR11_TN constexpr # define CONSTEXPR14_TN # define NOEXCEPT_TN noexcept # else # define CONSTEXPR11_TN constexpr # define CONSTEXPR14_TN constexpr # define NOEXCEPT_TN noexcept # endif #else // _MSC_VER # if _MSC_VER < 1900 # define CONSTEXPR11_TN # define CONSTEXPR14_TN # define NOEXCEPT_TN # elif _MSC_VER < 2000 # define CONSTEXPR11_TN constexpr # define CONSTEXPR14_TN # define NOEXCEPT_TN noexcept # else # define CONSTEXPR11_TN constexpr # define CONSTEXPR14_TN constexpr # define NOEXCEPT_TN noexcept # endif #endif // _MSC_VER class static_string { const char* const p_; const std::size_t sz_; public: typedef const char* const_iterator; template <std::size_t N> CONSTEXPR11_TN static_string(const char(&a)[N]) NOEXCEPT_TN : p_(a) , sz_(N-1) {} CONSTEXPR11_TN static_string(const char* p, std::size_t N) NOEXCEPT_TN : p_(p) , sz_(N) {} CONSTEXPR11_TN const char* data() const NOEXCEPT_TN {return p_;} CONSTEXPR11_TN std::size_t size() const NOEXCEPT_TN {return sz_;} CONSTEXPR11_TN const_iterator begin() const NOEXCEPT_TN {return p_;} CONSTEXPR11_TN const_iterator end() const NOEXCEPT_TN {return p_ + sz_;} CONSTEXPR11_TN char operator[](std::size_t n) const { return n < sz_ ? p_[n] : throw std::out_of_range("static_string"); } }; inline std::ostream& operator<<(std::ostream& os, static_string const& s) { return os.write(s.data(), s.size()); } template <class T> CONSTEXPR14_TN static_string type_name() { #ifdef __clang__ static_string p = __PRETTY_FUNCTION__; return static_string(p.data() + 31, p.size() - 31 - 1); #elif defined(__GNUC__) static_string p = __PRETTY_FUNCTION__; # if __cplusplus < 201402 return static_string(p.data() + 36, p.size() - 36 - 1); # else return static_string(p.data() + 46, p.size() - 46 - 1); # endif #elif defined(_MSC_VER) static_string p = __FUNCSIG__; return static_string(p.data() + 38, p.size() - 38 - 7); #endif } This code will auto-backoff on the constexpr if you're still stuck in ancient C++11. And if you're painting on the cave wall with C++98/03, the noexcept is sacrificed as well. C++17 Update In the comments below Lyberta points out that the new std::string_view can replace static_string: template <class T> constexpr std::string_view type_name() { using namespace std; #ifdef __clang__ string_view p = __PRETTY_FUNCTION__; return string_view(p.data() + 34, p.size() - 34 - 1); #elif defined(__GNUC__) string_view p = __PRETTY_FUNCTION__; # if __cplusplus < 201402 return string_view(p.data() + 36, p.size() - 36 - 1); # else return string_view(p.data() + 49, p.find(';', 49) - 49); # endif #elif defined(_MSC_VER) string_view p = __FUNCSIG__; return string_view(p.data() + 84, p.size() - 84 - 7); #endif } I've updated the constants for VS thanks to the very nice detective work by Jive Dadson in the comments below. Update: Be sure to check out this rewrite or this rewrite below which eliminate the unreadable magic numbers in my latest formulation. A: Don't forget to include <typeinfo> I believe what you are referring to is runtime type identification. You can achieve the above by doing . #include <iostream> #include <typeinfo> using namespace std; int main() { int i; cout << typeid(i).name(); return 0; } A: As I challenge I decided to test how far can one go with platform-independent (hopefully) template trickery. 
The names are assembled completely at compilation time. (Which means typeid(T).name() couldn't be used, thus you have to explicitly provide names for non-compound types. Otherwise placeholders will be displayed instead.) Example usage: TYPE_NAME(int) TYPE_NAME(void) // You probably should list all primitive types here. TYPE_NAME(std::string) int main() { // A simple case std::cout << type_name<void(*)(int)> << '\n'; // -> `void (*)(int)` // Ugly mess case // Note that compiler removes cv-qualifiers from parameters and replaces arrays with pointers. std::cout << type_name<void (std::string::*(int[3],const int, void (*)(std::string)))(volatile int*const*)> << '\n'; // -> `void (std::string::*(int *,int,void (*)(std::string)))(volatile int *const*)` // A case with undefined types // If a type wasn't TYPE_NAME'd, it's replaced by a placeholder, one of `class?`, `union?`, `enum?` or `??`. std::cout << type_name<std::ostream (*)(int, short)> << '\n'; // -> `class? (*)(int,??)` // With appropriate TYPE_NAME's, the output would be `std::string (*)(int,short)`. } Code: #include <type_traits> #include <utility> static constexpr std::size_t max_str_lit_len = 256; template <std::size_t I, std::size_t N> constexpr char sl_at(const char (&str)[N]) { if constexpr(I < N) return str[I]; else return '\0'; } constexpr std::size_t sl_len(const char *str) { for (std::size_t i = 0; i < max_str_lit_len; i++) if (str[i] == '\0') return i; return 0; } template <char ...C> struct str_lit { static constexpr char value[] {C..., '\0'}; static constexpr int size = sl_len(value); template <typename F, typename ...P> struct concat_impl {using type = typename concat_impl<F>::type::template concat_impl<P...>::type;}; template <char ...CC> struct concat_impl<str_lit<CC...>> {using type = str_lit<C..., CC...>;}; template <typename ...P> using concat = typename concat_impl<P...>::type; }; template <typename, const char *> struct trim_str_lit_impl; template <std::size_t ...I, const char *S> struct trim_str_lit_impl<std::index_sequence<I...>, S> { using type = str_lit<S[I]...>; }; template <std::size_t N, const char *S> using trim_str_lit = typename trim_str_lit_impl<std::make_index_sequence<N>, S>::type; #define STR_LIT(str) ::trim_str_lit<::sl_len(str), ::str_lit<STR_TO_VA(str)>::value> #define STR_TO_VA(str) STR_TO_VA_16(str,0),STR_TO_VA_16(str,16),STR_TO_VA_16(str,32),STR_TO_VA_16(str,48) #define STR_TO_VA_16(str,off) STR_TO_VA_4(str,0+off),STR_TO_VA_4(str,4+off),STR_TO_VA_4(str,8+off),STR_TO_VA_4(str,12+off) #define STR_TO_VA_4(str,off) ::sl_at<off+0>(str),::sl_at<off+1>(str),::sl_at<off+2>(str),::sl_at<off+3>(str) template <char ...C> constexpr str_lit<C...> make_str_lit(str_lit<C...>) {return {};} template <std::size_t N> constexpr auto make_str_lit(const char (&str)[N]) { return trim_str_lit<sl_len((const char (&)[N])str), str>{}; } template <std::size_t A, std::size_t B> struct cexpr_pow {static constexpr std::size_t value = A * cexpr_pow<A,B-1>::value;}; template <std::size_t A> struct cexpr_pow<A,0> {static constexpr std::size_t value = 1;}; template <std::size_t N, std::size_t X, typename = std::make_index_sequence<X>> struct num_to_str_lit_impl; template <std::size_t N, std::size_t X, std::size_t ...Seq> struct num_to_str_lit_impl<N, X, std::index_sequence<Seq...>> { static constexpr auto func() { if constexpr (N >= cexpr_pow<10,X>::value) return num_to_str_lit_impl<N, X+1>::func(); else return str_lit<(N / cexpr_pow<10,X-1-Seq>::value % 10 + '0')...>{}; } }; template <std::size_t N> using num_to_str_lit = 
decltype(num_to_str_lit_impl<N,1>::func()); using spa = str_lit<' '>; using lpa = str_lit<'('>; using rpa = str_lit<')'>; using lbr = str_lit<'['>; using rbr = str_lit<']'>; using ast = str_lit<'*'>; using amp = str_lit<'&'>; using con = str_lit<'c','o','n','s','t'>; using vol = str_lit<'v','o','l','a','t','i','l','e'>; using con_vol = con::concat<spa, vol>; using nsp = str_lit<':',':'>; using com = str_lit<','>; using unk = str_lit<'?','?'>; using c_cla = str_lit<'c','l','a','s','s','?'>; using c_uni = str_lit<'u','n','i','o','n','?'>; using c_enu = str_lit<'e','n','u','m','?'>; template <typename T> inline constexpr bool ptr_or_ref = std::is_pointer_v<T> || std::is_reference_v<T> || std::is_member_pointer_v<T>; template <typename T> inline constexpr bool func_or_arr = std::is_function_v<T> || std::is_array_v<T>; template <typename T> struct primitive_type_name {using value = unk;}; template <typename T, typename = std::enable_if_t<std::is_class_v<T>>> using enable_if_class = T; template <typename T, typename = std::enable_if_t<std::is_union_v<T>>> using enable_if_union = T; template <typename T, typename = std::enable_if_t<std::is_enum_v <T>>> using enable_if_enum = T; template <typename T> struct primitive_type_name<enable_if_class<T>> {using value = c_cla;}; template <typename T> struct primitive_type_name<enable_if_union<T>> {using value = c_uni;}; template <typename T> struct primitive_type_name<enable_if_enum <T>> {using value = c_enu;}; template <typename T> struct type_name_impl; template <typename T> using type_name_lit = std::conditional_t<std::is_same_v<typename primitive_type_name<T>::value::template concat<spa>, typename type_name_impl<T>::l::template concat<typename type_name_impl<T>::r>>, typename primitive_type_name<T>::value, typename type_name_impl<T>::l::template concat<typename type_name_impl<T>::r>>; template <typename T> inline constexpr const char *type_name = type_name_lit<T>::value; template <typename T, typename = std::enable_if_t<!std::is_const_v<T> && !std::is_volatile_v<T>>> using enable_if_no_cv = T; template <typename T> struct type_name_impl { using l = typename primitive_type_name<T>::value::template concat<spa>; using r = str_lit<>; }; template <typename T> struct type_name_impl<const T> { using new_T_l = std::conditional_t<type_name_impl<T>::l::size && !ptr_or_ref<T>, spa::concat<typename type_name_impl<T>::l>, typename type_name_impl<T>::l>; using l = std::conditional_t<ptr_or_ref<T>, typename new_T_l::template concat<con>, con::concat<new_T_l>>; using r = typename type_name_impl<T>::r; }; template <typename T> struct type_name_impl<volatile T> { using new_T_l = std::conditional_t<type_name_impl<T>::l::size && !ptr_or_ref<T>, spa::concat<typename type_name_impl<T>::l>, typename type_name_impl<T>::l>; using l = std::conditional_t<ptr_or_ref<T>, typename new_T_l::template concat<vol>, vol::concat<new_T_l>>; using r = typename type_name_impl<T>::r; }; template <typename T> struct type_name_impl<const volatile T> { using new_T_l = std::conditional_t<type_name_impl<T>::l::size && !ptr_or_ref<T>, spa::concat<typename type_name_impl<T>::l>, typename type_name_impl<T>::l>; using l = std::conditional_t<ptr_or_ref<T>, typename new_T_l::template concat<con_vol>, con_vol::concat<new_T_l>>; using r = typename type_name_impl<T>::r; }; template <typename T> struct type_name_impl<T *> { using l = std::conditional_t<func_or_arr<T>, typename type_name_impl<T>::l::template concat<lpa, ast>, typename type_name_impl<T>::l::template concat< ast>>; using r = 
std::conditional_t<func_or_arr<T>, rpa::concat<typename type_name_impl<T>::r>, typename type_name_impl<T>::r>; }; template <typename T> struct type_name_impl<T &> { using l = std::conditional_t<func_or_arr<T>, typename type_name_impl<T>::l::template concat<lpa, amp>, typename type_name_impl<T>::l::template concat< amp>>; using r = std::conditional_t<func_or_arr<T>, rpa::concat<typename type_name_impl<T>::r>, typename type_name_impl<T>::r>; }; template <typename T> struct type_name_impl<T &&> { using l = std::conditional_t<func_or_arr<T>, typename type_name_impl<T>::l::template concat<lpa, amp, amp>, typename type_name_impl<T>::l::template concat< amp, amp>>; using r = std::conditional_t<func_or_arr<T>, rpa::concat<typename type_name_impl<T>::r>, typename type_name_impl<T>::r>; }; template <typename T, typename C> struct type_name_impl<T C::*> { using l = std::conditional_t<func_or_arr<T>, typename type_name_impl<T>::l::template concat<lpa, type_name_lit<C>, nsp, ast>, typename type_name_impl<T>::l::template concat< type_name_lit<C>, nsp, ast>>; using r = std::conditional_t<func_or_arr<T>, rpa::concat<typename type_name_impl<T>::r>, typename type_name_impl<T>::r>; }; template <typename T> struct type_name_impl<enable_if_no_cv<T[]>> { using l = typename type_name_impl<T>::l; using r = lbr::concat<rbr, typename type_name_impl<T>::r>; }; template <typename T, std::size_t N> struct type_name_impl<enable_if_no_cv<T[N]>> { using l = typename type_name_impl<T>::l; using r = lbr::concat<num_to_str_lit<N>, rbr, typename type_name_impl<T>::r>; }; template <typename T> struct type_name_impl<T()> { using l = typename type_name_impl<T>::l; using r = lpa::concat<rpa, typename type_name_impl<T>::r>; }; template <typename T, typename P1, typename ...P> struct type_name_impl<T(P1, P...)> { using l = typename type_name_impl<T>::l; using r = lpa::concat<type_name_lit<P1>, com::concat<type_name_lit<P>>..., rpa, typename type_name_impl<T>::r>; }; #define TYPE_NAME(t) template <> struct primitive_type_name<t> {using value = STR_LIT(#t);}; A: As explained by Scott Meyers in Effective Modern C++, Calls to std::type_info::name are not guaranteed to return anything sensible. 
The best solution is to let the compiler generate an error message during the type deduction, for example: template<typename T> class TD; int main(){ const int theAnswer = 32; auto x = theAnswer; auto y = &theAnswer; TD<decltype(x)> xType; TD<decltype(y)> yType; return 0; } The result will be something like this, depending on the compilers: test4.cpp:10:21: error: aggregate ‘TD<int> xType’ has incomplete type and cannot be defined TD<decltype(x)> xType; test4.cpp:11:21: error: aggregate ‘TD<const int *> yType’ has incomplete type and cannot be defined TD<decltype(y)> yType; Hence, we get to know that x's type is int and y's type is const int* A: I like Nick's method, A complete form might be this (for all basic data types): template <typename T> const char* typeof(T&) { return "unknown"; } // default template<> const char* typeof(int&) { return "int"; } template<> const char* typeof(short&) { return "short"; } template<> const char* typeof(long&) { return "long"; } template<> const char* typeof(unsigned&) { return "unsigned"; } template<> const char* typeof(unsigned short&) { return "unsigned short"; } template<> const char* typeof(unsigned long&) { return "unsigned long"; } template<> const char* typeof(float&) { return "float"; } template<> const char* typeof(double&) { return "double"; } template<> const char* typeof(long double&) { return "long double"; } template<> const char* typeof(std::string&) { return "String"; } template<> const char* typeof(char&) { return "char"; } template<> const char* typeof(signed char&) { return "signed char"; } template<> const char* typeof(unsigned char&) { return "unsigned char"; } template<> const char* typeof(char*&) { return "char*"; } template<> const char* typeof(signed char*&) { return "signed char*"; } template<> const char* typeof(unsigned char*&) { return "unsigned char*"; } A: A more generic solution without function overloading than my previous one: template<typename T> std::string TypeOf(T){ std::string Type="unknown"; if(std::is_same<T,int>::value) Type="int"; if(std::is_same<T,std::string>::value) Type="String"; if(std::is_same<T,MyClass>::value) Type="MyClass"; return Type;} Here MyClass is user defined class. More conditions can be added here as well. 
Example: #include <iostream> class MyClass{}; template<typename T> std::string TypeOf(T){ std::string Type="unknown"; if(std::is_same<T,int>::value) Type="int"; if(std::is_same<T,std::string>::value) Type="String"; if(std::is_same<T,MyClass>::value) Type="MyClass"; return Type;} int main(){; int a=0; std::string s=""; MyClass my; std::cout<<TypeOf(a)<<std::endl; std::cout<<TypeOf(s)<<std::endl; std::cout<<TypeOf(my)<<std::endl; return 0;} Output: int String MyClass A: #include <iostream> #include <typeinfo> using namespace std; #define show_type_name(_t) \ system(("echo " + string(typeid(_t).name()) + " | c++filt -t").c_str()) int main() { auto a = {"one", "two", "three"}; cout << "Type of a: " << typeid(a).name() << endl; cout << "Real type of a:\n"; show_type_name(a); for (auto s : a) { if (string(s) == "one") { cout << "Type of s: " << typeid(s).name() << endl; cout << "Real type of s:\n"; show_type_name(s); } cout << s << endl; } int i = 5; cout << "Type of i: " << typeid(i).name() << endl; cout << "Real type of i:\n"; show_type_name(i); return 0; } Output: Type of a: St16initializer_listIPKcE Real type of a: std::initializer_list<char const*> Type of s: PKc Real type of s: char const* one two three Type of i: i Real type of i: int A: For anyone still visiting, I've recently had the same issue and decided to write a small library based on answers from this post. It provides constexpr type names and type indices und is is tested on Mac, Windows and Ubuntu. The library code is here: https://github.com/TheLartians/StaticTypeInfo A: Note that the names generated by the RTTI feature of C++ is not portable. For example, the class MyNamespace::CMyContainer<int, test_MyNamespace::CMyObject> will have the following names: // MSVC 2003: class MyNamespace::CMyContainer[int,class test_MyNamespace::CMyObject] // G++ 4.2: N8MyNamespace8CMyContainerIiN13test_MyNamespace9CMyObjectEEE So you can't use this information for serialization. But still, the typeid(a).name() property can still be used for log/debug purposes A: Try: #include <typeinfo> // … std::cout << typeid(a).name() << '\n'; You might have to activate RTTI in your compiler options for this to work. Additionally, the output of this depends on the compiler. It might be a raw type name or a name mangling symbol or anything in between. A: You can use templates. template <typename T> const char* typeof(T&) { return "unknown"; } // default template<> const char* typeof(int&) { return "int"; } template<> const char* typeof(float&) { return "float"; } In the example above, when the type is not matched it will print "unknown". A: As mentioned, typeid().name() may return a mangled name. 
In GCC (and some other compilers) you can work around it with the following code: #include <cxxabi.h> #include <iostream> #include <typeinfo> #include <cstdlib> namespace some_namespace { namespace another_namespace { class my_class { }; } } int main() { typedef some_namespace::another_namespace::my_class my_type; // mangled std::cout << typeid(my_type).name() << std::endl; // unmangled int status = 0; char* demangled = abi::__cxa_demangle(typeid(my_type).name(), 0, 0, &status); switch (status) { case -1: { // could not allocate memory std::cout << "Could not allocate memory" << std::endl; return -1; } break; case -2: { // invalid name under the C++ ABI mangling rules std::cout << "Invalid name" << std::endl; return -1; } break; case -3: { // invalid argument std::cout << "Invalid argument to demangle()" << std::endl; return -1; } break; } std::cout << demangled << std::endl; free(demangled); return 0; } A: For something different, here's a "To English" conversion of the type, deconstructing every qualifier, extent, argument, and so on, recursively building the string describing the type I think the "deduced this" proposal would help cut down many of the specializations. In any case, this was a fun morning exercise, regardless of excessive bloat. :) struct X { using T = int *((*)[10]); T f(T, const unsigned long long * volatile * ); }; int main() { std::cout << describe<decltype(&X::f)>() << std::endl; } Output: pointer to member function of class 1X taking (pointer to array[10] of pointer to int, pointer to volatile pointer to const unsigned long long), and returning pointer to array[10] of pointer to int Here's the code: https://godbolt.org/z/7jKK4or43 Note: most current version is in my github: https://github.com/cuzdav/type_to_string // Print types as strings, including functions, member #include <type_traits> #include <typeinfo> #include <string> #include <utility> namespace detail { template <typename T> struct Describe; template <typename T, class ClassT> struct Describe<T (ClassT::*)> { static std::string describe(); }; template <typename RetT, typename... ArgsT> struct Describe<RetT(ArgsT...)> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...)> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) volatile> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const volatile> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) volatile noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const volatile noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... 
ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...)&> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const &> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) volatile &> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) & noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const volatile &> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const & noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) volatile & noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const volatile & noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) &&> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const &&> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) volatile &&> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) && noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const volatile &&> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) const && noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) volatile && noexcept> { static std::string describe(); }; template <typename RetT, class ClassT, typename... ArgsT> struct Describe<RetT(ClassT::*)(ArgsT...) 
const volatile && noexcept> { static std::string describe(); }; template <typename T> std::string describe() { using namespace std::string_literals; auto terminal = [&](char const * desc) { return desc + " "s + typeid(T).name(); }; if constexpr(std::is_const_v<T>) { return "const " + describe<std::remove_const_t<T>>(); } else if constexpr(std::is_volatile_v<T>) { return "volatile " + describe<std::remove_volatile_t<T>>(); } else if constexpr (std::is_same_v<bool, T>) { return "bool"; } else if constexpr(std::is_same_v<char, T>) { return "char"; } else if constexpr(std::is_same_v<signed char, T>) { return "signed char"; } else if constexpr(std::is_same_v<unsigned char, T>) { return "unsigned char"; } else if constexpr(std::is_unsigned_v<T>) { return "unsigned " + describe<std::make_signed_t<T>>(); } else if constexpr(std::is_void_v<T>) { return "void"; } else if constexpr(std::is_integral_v<T>) { if constexpr(std::is_same_v<short, T>) return "short"; else if constexpr(std::is_same_v<int, T>) return "int"; else if constexpr(std::is_same_v<long, T>) return "long"; else if constexpr(std::is_same_v<long long, T>) return "long long"; } else if constexpr(std::is_same_v<float, T>) { return "float"; } else if constexpr(std::is_same_v<double, T>) { return "double"; } else if constexpr(std::is_same_v<long double, T>) { return "long double"; } else if constexpr(std::is_same_v<std::nullptr_t, T>) { return "nullptr_t"; } else if constexpr(std::is_class_v<T>) { return terminal("class"); } else if constexpr(std::is_union_v<T>) { return terminal("union"); } else if constexpr(std::is_enum_v<T>) { std::string result; if (!std::is_convertible_v<T, std::underlying_type_t<T>>) { result += "scoped "; } return result + terminal("enum"); } else if constexpr(std::is_pointer_v<T>) { return "pointer to " + describe<std::remove_pointer_t<T>>(); } else if constexpr(std::is_lvalue_reference_v<T>) { return "lvalue-ref to " + describe<std::remove_reference_t<T>>(); } else if constexpr(std::is_rvalue_reference_v<T>) { return "rvalue-ref to " + describe<std::remove_reference_t<T>>(); } else if constexpr(std::is_bounded_array_v<T>) { return "array[" + std::to_string(std::extent_v<T>) + "] of " + describe<std::remove_extent_t<T>>(); } else if constexpr(std::is_unbounded_array_v<T>) { return "array[] of " + describe<std::remove_extent_t<T>>(); } else if constexpr(std::is_function_v<T>) { return Describe<T>::describe(); } else if constexpr(std::is_member_object_pointer_v<T>) { return Describe<T>::describe(); } else if constexpr(std::is_member_function_pointer_v<T>) { return Describe<T>::describe(); } } template <typename RetT, typename... ArgsT> std::string Describe<RetT(ArgsT...)>::describe() { std::string result = "function taking ("; ((result += detail::describe<ArgsT>(", ")), ...); return result + "), returning " + detail::describe<RetT>(); } template <typename T, class ClassT> std::string Describe<T (ClassT::*)>::describe() { return "pointer to member of " + detail::describe<ClassT>() + " of type " + detail::describe<T>(); } struct Comma { char const * sep = ""; std::string operator()(std::string const& str) { return std::exchange(sep, ", ") + str; } }; enum Qualifiers {NONE=0, CONST=1, VOLATILE=2, NOEXCEPT=4, LVREF=8, RVREF=16}; template <typename RetT, typename ClassT, typename... 
ArgsT> std::string describeMemberPointer(Qualifiers q) { std::string result = "pointer to "; if (NONE != (q & CONST)) result += "const "; if (NONE != (q & VOLATILE)) result += "volatile "; if (NONE != (q & NOEXCEPT)) result += "noexcept "; if (NONE != (q & LVREF)) result += "lvalue-ref "; if (NONE != (q & RVREF)) result += "rvalue-ref "; result += "member function of " + detail::describe<ClassT>() + " taking ("; Comma comma; ((result += comma(detail::describe<ArgsT>())), ...); return result + "), and returning " + detail::describe<RetT>(); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...)>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(NONE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(CONST); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) volatile>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(VOLATILE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) volatile noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(VOLATILE | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const volatile>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(CONST | VOLATILE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(CONST | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const volatile noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(CONST | VOLATILE | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) &>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const &>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | CONST); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) & noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) volatile &>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | VOLATILE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) volatile & noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | VOLATILE | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const volatile &>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | CONST | VOLATILE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) 
const & noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | CONST | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const volatile & noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(LVREF | CONST | VOLATILE | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...)&&>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const &&>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | CONST); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) && noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) volatile &&>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | VOLATILE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) volatile && noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | VOLATILE | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const volatile &&>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | CONST | VOLATILE); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const && noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | CONST | NOEXCEPT); } template <typename RetT, class ClassT, typename... ArgsT> std::string Describe<RetT(ClassT::*)(ArgsT...) const volatile && noexcept>::describe() { return describeMemberPointer<RetT, ClassT, ArgsT...>(RVREF | CONST | VOLATILE | NOEXCEPT); } } // detail /////////////////////////////////// // Main function /////////////////////////////////// template <typename T> std::string describe() { return detail::describe<T>(); } /////////////////////////////////// // Sample code /////////////////////////////////// #include <iostream> struct X { using T = int *((*)[10]); T f(T, const unsigned long long * volatile * ); }; int main() { std::cout << describe<decltype(&X::f)>() << std::endl; } A: Howard Hinnant used magic numbers to extract type name. 康桓瑋 suggested string prefix and suffix. But prefix/suffix keep changing. With “probe_type” type_name automatically calculates prefix and suffix sizes for “probe_type” to extract type name: #include <string_view> using namespace std; namespace typeName { template <typename T> constexpr string_view wrapped_type_name () { #ifdef __clang__ return __PRETTY_FUNCTION__; #elif defined(__GNUC__) return __PRETTY_FUNCTION__; #elif defined(_MSC_VER) return __FUNCSIG__; #endif } class probe_type; constexpr string_view probe_type_name ("typeName::probe_type"); constexpr string_view probe_type_name_elaborated ("class typeName::probe_type"); constexpr string_view probe_type_name_used (wrapped_type_name<probe_type> ().find (probe_type_name_elaborated) != -1 ? 
probe_type_name_elaborated : probe_type_name); constexpr size_t prefix_size () { return wrapped_type_name<probe_type> ().find (probe_type_name_used); } constexpr size_t suffix_size () { return wrapped_type_name<probe_type> ().length () - prefix_size () - probe_type_name_used.length (); } template <typename T> string_view type_name () { constexpr auto type_name = wrapped_type_name<T> (); return type_name.substr (prefix_size (), type_name.length () - prefix_size () - suffix_size ()); } } #include <iostream> using typeName::type_name; using typeName::probe_type; class test; int main () { cout << type_name<class test> () << endl; cout << type_name<const int*&> () << endl; cout << type_name<unsigned int> () << endl; const int ic = 42; const int* pic = &ic; const int*& rpic = pic; cout << type_name<decltype(ic)> () << endl; cout << type_name<decltype(pic)> () << endl; cout << type_name<decltype(rpic)> () << endl; cout << type_name<probe_type> () << endl; } Output gcc 10.2: test const int *& unsigned int const int const int * const int *& typeName::probe_type clang 11.0.0: test const int *& unsigned int const int const int * const int *& typeName::probe_type VS 2019 version 16.7.6: class test const int*& unsigned int const int const int* const int*& class typeName::probe_type A: Another take on @康桓瑋's answer (originally ), making less assumptions about the prefix and suffix specifics, and inspired by @Val's answer - but without polluting the global namespace; without any conditions; and hopefully easier to read. The popular compilers provide a macro with the current function's signature. Now, functions are templatable; so the signature contains the template arguments. So, the basic approach is: Given a type, be in a function with that type as a template argument. Unfortunately, the type name is wrapped in text describing the function, which is different between compilers. For example, with GCC, the signature of template <typename T> int foo() with type double is: int foo() [T = double]. So, how do you get rid of the wrapper text? @HowardHinnant's solution is the shortest and most "direct": Just use per-compiler magic numbers to remove a prefix and a suffix. But obviously, that's very brittle; and nobody likes magic numbers in their code. It turns out, however, that given the macro value for a type with a known name, you can determine what prefix and suffix constitute the wrapping. 
#include <string_view> template <typename T> constexpr std::string_view type_name(); template <> constexpr std::string_view type_name<void>() { return "void"; } namespace detail { using type_name_prober = void; template <typename T> constexpr std::string_view wrapped_type_name() { #ifdef __clang__ return __PRETTY_FUNCTION__; #elif defined(__GNUC__) return __PRETTY_FUNCTION__; #elif defined(_MSC_VER) return __FUNCSIG__; #else #error "Unsupported compiler" #endif } constexpr std::size_t wrapped_type_name_prefix_length() { return wrapped_type_name<type_name_prober>().find(type_name<type_name_prober>()); } constexpr std::size_t wrapped_type_name_suffix_length() { return wrapped_type_name<type_name_prober>().length() - wrapped_type_name_prefix_length() - type_name<type_name_prober>().length(); } } // namespace detail template <typename T> constexpr std::string_view type_name() { constexpr auto wrapped_name = detail::wrapped_type_name<T>(); constexpr auto prefix_length = detail::wrapped_type_name_prefix_length(); constexpr auto suffix_length = detail::wrapped_type_name_suffix_length(); constexpr auto type_name_length = wrapped_name.length() - prefix_length - suffix_length; return wrapped_name.substr(prefix_length, type_name_length); } See it on GodBolt. This should be working with MSVC as well. A: According to Howard's solution, if you don't like the magic number, I think this is a good way to represent and it looks intuitive: #include <string_view> template <typename T> constexpr auto type_name() { std::string_view name, prefix, suffix; #ifdef __clang__ name = __PRETTY_FUNCTION__; prefix = "auto type_name() [T = "; suffix = "]"; #elif defined(__GNUC__) name = __PRETTY_FUNCTION__; prefix = "constexpr auto type_name() [with T = "; suffix = "]"; #elif defined(_MSC_VER) name = __FUNCSIG__; prefix = "auto __cdecl type_name<"; suffix = ">(void)"; #endif name.remove_prefix(prefix.size()); name.remove_suffix(suffix.size()); return name; } Demo. A: You could use a traits class for this. Something like: #include <iostream> using namespace std; template <typename T> class type_name { public: static const char *name; }; #define DECLARE_TYPE_NAME(x) template<> const char *type_name<x>::name = #x; #define GET_TYPE_NAME(x) (type_name<typeof(x)>::name) DECLARE_TYPE_NAME(int); int main() { int a = 12; cout << GET_TYPE_NAME(a) << endl; } The DECLARE_TYPE_NAME define exists to make your life easier in declaring this traits class for all the types you expect to need. This might be more useful than the solutions involving typeid because you get to control the output. For example, using typeid for long long on my compiler gives "x". A: Very ugly but does the trick if you only want compile time info (e.g. for debugging): auto testVar = std::make_tuple(1, 1.0, "abc"); decltype(testVar)::foo= 1; Returns: Compilation finished with errors: source.cpp: In function 'int main()': source.cpp:5:19: error: 'foo' is not a member of 'std::tuple<int, double, const char*>' A: In C++11, we have decltype. There is no way in standard c++ to display exact type of variable declared using decltype. We can use boost typeindex i.e type_id_with_cvr (cvr stands for const, volatile, reference) to print type like below. 
#include <iostream> #include <boost/type_index.hpp> using namespace std; using boost::typeindex::type_id_with_cvr; int main() { int i = 0; const int ci = 0; cout << "decltype(i) is " << type_id_with_cvr<decltype(i)>().pretty_name() << '\n'; cout << "decltype((i)) is " << type_id_with_cvr<decltype((i))>().pretty_name() << '\n'; cout << "decltype(ci) is " << type_id_with_cvr<decltype(ci)>().pretty_name() << '\n'; cout << "decltype((ci)) is " << type_id_with_cvr<decltype((ci))>().pretty_name() << '\n'; cout << "decltype(std::move(i)) is " << type_id_with_cvr<decltype(std::move(i))>().pretty_name() << '\n'; cout << "decltype(std::static_cast<int&&>(i)) is " << type_id_with_cvr<decltype(static_cast<int&&>(i))>().pretty_name() << '\n'; return 0; } A: You may also use c++filt with option -t (type) to demangle the type name: #include <iostream> #include <typeinfo> #include <string> using namespace std; int main() { auto x = 1; string my_type = typeid(x).name(); system(("echo " + my_type + " | c++filt -t").c_str()); return 0; } Tested on linux only. A: Copying from this answer: https://stackoverflow.com/a/56766138/11502722 I was able to get this somewhat working for C++ static_assert(). The wrinkle here is that static_assert() only accepts string literals; constexpr string_view will not work. You will need to accept extra text around the typename, but it works: template<typename T> constexpr void assertIfTestFailed() { #ifdef __clang__ static_assert(testFn<T>(), "Test failed on this used type: " __PRETTY_FUNCTION__); #elif defined(__GNUC__) static_assert(testFn<T>(), "Test failed on this used type: " __PRETTY_FUNCTION__); #elif defined(_MSC_VER) static_assert(testFn<T>(), "Test failed on this used type: " __FUNCSIG__); #else static_assert(testFn<T>(), "Test failed on this used type (see surrounding logged error for details)."); #endif } } MSVC Output: error C2338: Test failed on this used type: void __cdecl assertIfTestFailed<class BadType>(void) ... continued trace of where the erroring code came from ... A: Building on a number of the previous answers, I made this solution which does not store the result of __PRETTY_FUNCTION__ in the binary. It uses a static array to hold the string representation of the type name. It requires C++23. #include <iostream> #include <string_view> #include <array> template <typename T> constexpr auto type_name() { auto gen = [] <class R> () constexpr -> std::string_view { return __PRETTY_FUNCTION__; }; constexpr std::string_view search_type = "float"; constexpr auto search_type_string = gen.template operator()<float>(); constexpr auto prefix = search_type_string.find(search_type); constexpr auto suffix = search_type_string.size() - prefix - search_type.size(); constexpr auto str = gen.template operator()<T>(); constexpr int size = str.size() - prefix - suffix; constexpr auto static arr = [&]<std::size_t... I>(std::index_sequence<I...>) constexpr { return std::array<char, size>{str[prefix + I]...}; } (std::make_index_sequence<size>{}); return std::string_view(arr.data(), size); } A: C++ Data type resolve in Compile-Time using Template and Runtime using TypeId. Compile time solution. 
template <std::size_t...Idxs> constexpr auto substring_as_array(std::string_view str, std::index_sequence<Idxs...>) { return std::array{str[Idxs]..., '\n'}; } template <typename T> constexpr auto type_name_array() { #if defined(__clang__) constexpr auto prefix = std::string_view{"[T = "}; constexpr auto suffix = std::string_view{"]"}; constexpr auto function = std::string_view{__PRETTY_FUNCTION__}; #elif defined(__GNUC__) constexpr auto prefix = std::string_view{"with T = "}; constexpr auto suffix = std::string_view{"]"}; constexpr auto function = std::string_view{__PRETTY_FUNCTION__}; #elif defined(_MSC_VER) constexpr auto prefix = std::string_view{"type_name_array<"}; constexpr auto suffix = std::string_view{">(void)"}; constexpr auto function = std::string_view{__FUNCSIG__}; #else # error Unsupported compiler #endif constexpr auto start = function.find(prefix) + prefix.size(); constexpr auto end = function.rfind(suffix); static_assert(start < end); constexpr auto name = function.substr(start, (end - start)); return substring_as_array(name, std::make_index_sequence<name.size()>{}); } template <typename T> struct type_name_holder { static inline constexpr auto value = type_name_array<T>(); }; template <typename T> constexpr auto type_name() -> std::string_view { constexpr auto& value = type_name_holder<T>::value; return std::string_view{value.data(), value.size()}; } Runtime solution. template <typename T> void PrintDataType(T type) { auto name = typeid(type).name(); string cmd_str = "echo '" + string(name) + "' | c++filt -t"; system(cmd_str.c_str()); } Main Code #include <iostream> #include <map> #include <string> #include <typeinfo> #include <string_view> #include <array> // std::array #include <utility> // std::index_sequence using std::string; int main() { //Dynamic resolution. std::map<int, int> iMap; PrintDataType(iMap); //Compile type resolution. std::cout << type_name<std::list<int>>() << std::endl; return 0; } Code Snippet A: Consider this code: #include <iostream> int main() { int a = 2; // Declare type "int" std::string b = "Hi"; // Declare type "string" long double c = 3438; // Declare type "long double" if(typeid(a) == typeid(int)) { std::cout<<"int\n"; } if(typeid(b) == typeid(std::string)) { std::cout<<"string\n"; } if(typeid(c) == typeid(long double)) { std::cout<<"long double"; } return 0; } I believe you want the whole word (rather than only printing the short form of int (which is i), you want int), that is why I did the if. For some of the variables (string,long double etc... which do not print the expected result comparing their short forms), you need to compare the result of applying the typeid operator with the typeid of a specific type. From cppreference: Returns an implementation defined null-terminated character string containing the name of the type. No guarantees are given; in particular, the returned string can be identical for several types and change between invocations of the same program. IMO, Python is better than C++ in this case. Python has built-in type function to directly access the data type of the variable.
{ "language": "en", "url": "https://stackoverflow.com/questions/81870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "602" }
Q: How to start a Facebook app? I just want to know: what is the best way to get started developing a Facebook app? Any tutorial recommendations? And which is better to start with - PHP or Rails?

A: From my experience, there is a much better support focus on PHP than on anything else. That said, there'd be no point learning PHP just to take advantage of the superior support. Two other general points:
* The official support community is really awful. The community has no expert voices and the FB staff only interject when their reputation is at stake. Your best friend is Google and your ability to extrapolate from tutorials.
* The FB style of interaction doesn't really lend itself to an MVC framework. One might still save you time, but I find they get under my feet. If you need convincing on this point, may I refer you to the many cases where JSON responses are required or where FBML needs to be 'set' for the profile.
The Facebook platform isn't a whole lot of fun and your users won't thank you for your work. But it's a massive audience and a very useful learning experience. Good luck!

A: Btw, you can also use ASP.NET, in which case here is how to get started: http://www.stevetrefethen.com/wiki/Facebook%20application%20development%20in%20ASP.NET.ashx The link includes a VS.NET starter kit which makes it very easy to get started quickly.

A: Start with their docs: http://developer.facebook.com/get_started.php?tab=tutorial There are libraries floating around for lots of different languages and frameworks, so I say: whatever you're happiest with is where you should start.

A: I've seen pretty complete FB wrapper libraries for both PHP and Ruby. Which one you should choose really depends on which language/framework you're more comfortable with. I will say that when I was evaluating Ruby libraries recently, Facebooker seemed to be superior in terms of active development and tutorial content on the web. (Be sure to use the Facebooker project on GitHub, not the deprecated one on RubyForge.)

A: Can I put a shout out for Ruby on Rails with the Koala gem? I have built a Facebook app in the last two months, learning Ruby on Rails from scratch (the last programming of any kind I did was mathematical modeling for my Physics degree project in 1995, in Fortran!). Ruby on Rails was very simple to pick up and there is a ton of help out there. There is also a lot of work already done for you in the way of Ruby gems. For Facebook I looked through them all and I found Koala the easiest to use, personally. http://github.com/arsduo/koala/

A: Use the Get Started tutorial on developers.facebook.com. This will suggest you use the sample code button, which will give you some PHP to list your friends. Then you can start playing with the PHP, using the wiki for reference on FQL and FBML. PHP will be easier to start with, as there are lots of samples in PHP. Rails may have advantages in the long term, though.

A: Re: Ruby on Rails vs. PHP - whichever you're currently competent in. If neither, whichever you'd like to become competent in. Both can do what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/81874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Why is the Content Header 'application/javascript' causing a 500 Error? I have a script that works fine on my test server (using IIS6). The script processes an ajax request and sends a response with the following line: header( 'application/javascript' ); But on my live server, this line crashes the page and causes a 500 error. Do I need to allow PHP to send different MIME types in IIS7? If so, how do I do this? I can't find any way on the interface. A: take a look at http://en.wikipedia.org/wiki/Mime_type There it says you should use application/javascript instead of text/javascript. A: The header is incorrect, try this instead: header('Content-Type: application/javascript');
{ "language": "en", "url": "https://stackoverflow.com/questions/81896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How to find and kill running Win-Processes from within Java? I need a Java way to find a running Windows process of which I know the name of the executable. I want to check whether it is running right now, and I need a way to kill the process if I found it. A: private static final String TASKLIST = "tasklist"; private static final String KILL = "taskkill /F /IM "; public static boolean isProcessRunning(String serviceName) throws Exception { Process p = Runtime.getRuntime().exec(TASKLIST); BufferedReader reader = new BufferedReader(new InputStreamReader( p.getInputStream())); String line; while ((line = reader.readLine()) != null) { System.out.println(line); if (line.contains(serviceName)) { return true; } } return false; } public static void killProcess(String serviceName) throws Exception { Runtime.getRuntime().exec(KILL + serviceName); } EXAMPLE: public static void main(String args[]) throws Exception { String processName = "WINWORD.EXE"; //System.out.print(isProcessRunning(processName)); if (isProcessRunning(processName)) { killProcess(processName); } } A: There is a little API providing the desired functionality: https://github.com/kohsuke/winp Windows Process Library A: Here's a groovy way of doing it: final Process jpsProcess = "cmd /c jps".execute() final BufferedReader reader = new BufferedReader(new InputStreamReader(jpsProcess.getInputStream())); def jarFileName = "FileName.jar" def processId = null reader.eachLine { if (it.contains(jarFileName)) { def args = it.split(" ") if (processId != null) { throw new IllegalStateException("Multiple processes found executing ${jarFileName} ids: ${processId} and ${args[0]}") } else { processId = args[0] } } } if (processId != null) { def killCommand = "cmd /c TASKKILL /F /PID ${processId}" def killProcess = killCommand.execute() def stdout = new StringBuilder() def stderr = new StringBuilder() killProcess.consumeProcessOutput(stdout, stderr) println(killCommand) def errorOutput = stderr.toString() if (!errorOutput.empty) { println(errorOutput) } def stdOutput = stdout.toString() if (!stdOutput.empty) { println(stdOutput) } killProcess.waitFor() } else { System.err.println("Could not find process for jar ${jarFileName}") } A: You could use a command line tool for killing processes like SysInternals PsKill and SysInternals PsList. You could also use the built-in tasklist.exe and taskkill.exe, but those are only available on Windows XP Professional and later (not in the Home Edition). Use java.lang.Runtime.exec to execute the program. A: You can use the command-line Windows tools tasklist and taskkill and call them from Java using Runtime.exec(). A: Use the following class to kill a Windows process (if it is running). I'm using the force command line argument /F to make sure that the process specified by the /IM argument will be terminated. import java.io.BufferedReader; import java.io.InputStreamReader; public class WindowsProcess { private String processName; public WindowsProcess(String processName) { this.processName = processName; } public void kill() throws Exception { if (isRunning()) { getRuntime().exec("taskkill /F /IM " + processName); } } private boolean isRunning() throws Exception { Process listTasksProcess = getRuntime().exec("tasklist"); BufferedReader tasksListReader = new BufferedReader( new InputStreamReader(listTasksProcess.getInputStream())); String tasksLine; while ((tasksLine = tasksListReader.readLine()) != null) { if (tasksLine.contains(processName)) { return true; } } return false; } private Runtime getRuntime() { return Runtime.getRuntime(); } } A: You will have to call some native code, since IMHO there is no library that does it. Since JNI is cumbersome and hard you might try to use JNA (Java Native Access). https://jna.dev.java.net/ A: A small change to the answer written by Super kakes: private static final String KILL = "taskkill /IMF "; changed to: private static final String KILL = "taskkill /IM "; The /IMF option does not work (it does not kill Notepad), while the /IM option actually works.
{ "language": "en", "url": "https://stackoverflow.com/questions/81902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Quicker way to create duplicate Virtual PC images? I use Virtual PC to create fresh environments for testing my installer. But I must be doing something wrong because a VPC image with Vista or XP inside is taking around 15GB of disk space (that includes VS2005/VS2008 installed in them). To create a new copy for testing I copy and paste the folder that has the .vhd, .vmc and .vsv files inside. After using the new VPC image for testing I then delete that copied folder. This works but it takes a looong time to copy 15GB each time. Is there some faster/more efficient approach? A: Use differencing/undo disks. This means when you shut down your VPC you'll be asked if you want to save changes, simply answer no and you'll be back to where you started. A: Doesn't VirtualPC have a fake-write/snapshot mode? That way it should not write to your original disk at all unless you say so at the end of the session. If it doesn't, you might seriously want to consider VMWare or VirtualBox as these do have this feature and it's REALLY useful for things like this. Edit: it looks like VPC does have a feature like this called differencing disks. Have a look at this: http://www.andrewconnell.com/blog/articles/UseVirtualPCsDifferencingDisksToYourAdvantage.aspx A: VPC has a so-called undo disk. You create something similar to a "restore point", and in VPC you can roll back to that version. Ideal for testing setups. A: Sounds like you need to use differencing virtual hard disks rather than creating a new copy every time. Instructions here A: Another option: you can use Microsoft's ImageX to store VHDs in WIM format. If you have multiple images you are constantly reusing, this is an incredible way to manage VMs. I have a slew of Windows XP and 2003 images I keep in compressed WIM format. You can capture the VMs by mounting them in Windows PE and then capturing them to a network drive. A: Also, you mentioned cut & paste; this is not the best way to copy large amounts of data within Windows. At least use xcopy; robocopy is even faster. A: Also, another option if you are looking to duplicate the images for use on other real machines: you can convert the disk to a dynamically expanding disk, which will reduce the size of the vdisk, making it easier to copy. This also allows for a more rapid backup, which looks to be part of what your testing does by default. The problem with dynamic disks is they tend to be slightly slower performance-wise than fixed-size disks. However, if all you are doing is using it for testing on the same machine, see the answers above. Differencing is the way to go.
{ "language": "en", "url": "https://stackoverflow.com/questions/81904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Transact-SQL to sum up elapsed time I have a table in my database that records start and stop times for a specific task. Here is a sample of the data: Start Stop 9/15/2008 5:59:46 PM 9/15/2008 6:26:28 PM 9/15/2008 6:30:45 PM 9/15/2008 6:40:49 PM 9/16/2008 8:30:45 PM 9/15/2008 9:20:29 PM 9/16/2008 12:30:45 PM 12/31/9999 12:00:00 AM I would like to write a script that totals up the elapsed minutes for these time frames, and wherever there is a 12/31/9999 date, I want it to use the current date and time, as this is still in progress. How would I do this using Transact-SQL? A: I think this is cleaner: SELECT SUM( DATEDIFF(mi, Start, ISNULL(NULLIF(Stop,'99991231'), GetDate())) ) AS ElapsedTime FROM Table A: Try: Select Sum( DateDiff( Minute, IsNull((Select Start where Start != '9999.12.31'), GetDate()), IsNull((Select End where End != '9999.12.31'), GetDate()) ) ) from *tableName* A: SELECT SUM( CASE WHEN Stop = '31 dec 9999' THEN DateDiff(mi, Start, GetDate()) ELSE DateDiff(mi, Start, Stop) END ) AS TotalMinutes FROM task However, a better solution would be to make the Stop field nullable, and make it null when the task is still running. That way, you could do this: SELECT SUM( DateDiff( mi, Start, IsNull(Stop, GetDate()) ) ) AS TotalMinutes FROM task A: The following will work for SQL Server; other databases use different functions for date calculation and getting the current time. Select Case When (Stop <> '31 Dec 9999') Then DateDiff(mi, Start, Stop) Else DateDiff(mi, Start, GetDate()) End From ATable A: The datediff function can display the elapsed minutes. The if statement for the 12/31/9999 check I'll leave as an exercise for the reader ;-) A: --you can play with the datediff using mi for minutes -- this gives you the seconds of each task select Start, Stop, CASE WHEN Stop = '9999-12-31' THEN datediff(ss, start,getdate()) ELSE datediff(ss, start,stop) END duration_in_seconds from mytable -- sum Select Sum(duration_in_seconds) from ( select Start, Stop, CASE WHEN Stop = '9999-12-31' THEN datediff(ss, start,getdate()) ELSE datediff(ss, start,stop) END duration_in_seconds from mytable)x A: Datediff becomes more difficult to use as you have more dateparts in your difference (i.e. in your case, looks like minutes and seconds; occasionally hours). Fortunately, in most variations of TSQL, you can simply perform math on the dates. Assuming this is a date field, you can probably just query: select duration = stop - start For a practical example, let's select the difference between two datetimes without bothering with a table: select convert(datetime,'2008-09-17 04:56:45.030') - convert(datetime,'2008-09-17 04:53:05.920') which returns "1900-01-01 00:03:39.110", indicating there are zero years/months/days; 3 mins, 39.11 seconds between these two datetimes. From there, your code can TimeSpan.Parse this value. A: Using help from AJ's answer, BelowNinety's answer and Nerdfest's answer, I came up with the following: Select Sum( Case When End = '12/31/9999 12:00:00 AM' Then DateDiff(mi, Start, Getdate()) Else DateDiff(mi, Start, End) End) As ElapsedTime From Table Thanks for the help!
{ "language": "en", "url": "https://stackoverflow.com/questions/81905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ASP.NET website does not rebuild I have a solution with several projects in Visual Studio 2008, let's say SuggestionProcessor (a class library) and Suggestions (a website) with a webhandler GetSuggestions.ashx. I changed a method in SuggestionProcessor which is used in the webhandler. The call in the webhandler has been adjusted to the changed method. But now when I try to execute the webhandler after a rebuild I get an error that the method I changed is missing, displaying the old method signature. When I try to rebuild the entire project it seems that the website does not rebuild properly and the code I changed in the webhandler does not seem to be included in the rebuild. I made sure that the website is last in the build order. What I tried is removing the DLLs that the build process should rebuild from the bin folder (not the ones referenced from outside the website). When rebuilding I now get a: 'could not load type Suggestions.global'. Duh, that is what the build process should create. What is going wrong here? A: I solved this one by reverting to a previous state when it still worked. Thanks for the suggestions, I'm sorry they didn't work in my situation. Shall I delete this question now that it doesn't really have a clear use for someone else? A: I would check your web.config file, there may be references there that are causing the error since they are missing. A: Maybe try and right-click on your solution and select "Clean solution" and then try and rebuild all. If that doesn't work, check your solution's build configuration and make sure all your projects are getting built A: Try "Clean Solution", then building SuggestionProcessor, and after that clean and rebuild the web solution. A: Visual Studio creates a copy of all your DLLs and sometimes these copies are not refreshed. Just execute iisreset and delete all folders in: C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\ Of course, change the Windows installation folder and framework folder to your version! A: I don't think so... I've seen similar issues in Visual Studio 2008 working on web projects where the build and rebuild would fail time after time. I knew that my changes shouldn't have affected the build so I just kept cleaning and building each of the individual projects in my solution until finally (and I do mean finally, as in it took up to 10 builds) my web project would build correctly. I have no idea why, but it feels like some sort of caching issue. A: From my answer at "Could not load type [Namespace].Global" causing me grief: It seems that VS 2008 does not always add the .asax(.cs) files correctly by default. In this case, refreshing, rebuilding, removing and re-adding, etc. will not fix the problem. Instead: Check the Build Action of Global.asax.cs. It should be set to Compile. In Solution Explorer, right-click Global.asax.cs and go to Properties. In the Properties pane, set the Build Action (while not debugging).
{ "language": "en", "url": "https://stackoverflow.com/questions/81911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Easy way to export a SQL table without access to the server or phpMyAdmin I need a way to easily export and then import data in a MySQL table from a remote server to my home server. I don't have direct access to the server, and no utilities such as phpMyAdmin are installed. I do, however, have the ability to put PHP scripts on the server. How do I get at the data? I ask this question purely to record my way to do it A: WORKING SOLUTION (latest version at: Export.php + Import.php ) EXPORT_TABLES("localhost","user","pass","db_name"); CODE: //https://github.com/tazotodua/useful-php-scripts function EXPORT_TABLES($host,$user,$pass,$name, $tables=false, $backup_name=false ){ $mysqli = new mysqli($host,$user,$pass,$name); $mysqli->select_db($name); $mysqli->query("SET NAMES 'utf8'"); $queryTables = $mysqli->query('SHOW TABLES'); while($row = $queryTables->fetch_row()) { $target_tables[] = $row[0]; } if($tables !== false) { $target_tables = array_intersect( $target_tables, $tables); } foreach($target_tables as $table){ $result = $mysqli->query('SELECT * FROM '.$table); $fields_amount=$result->field_count; $rows_num=$mysqli->affected_rows; $res = $mysqli->query('SHOW CREATE TABLE '.$table); $TableMLine=$res->fetch_row(); $content = (!isset($content) ? '' : $content) . "\n\n".$TableMLine[1].";\n\n"; for ($i = 0, $st_counter = 0; $i < $fields_amount; $i++, $st_counter=0) { while($row = $result->fetch_row()) { //when started (and after every 100th command cycle): if ($st_counter%100 == 0 || $st_counter == 0 ) {$content .= "\nINSERT INTO ".$table." VALUES";} $content .= "\n("; for($j=0; $j<$fields_amount; $j++) { $row[$j] = str_replace("\n","\\n", addslashes($row[$j]) ); if (isset($row[$j])){$content .= '"'.$row[$j].'"' ; }else {$content .= '""';} if ($j<($fields_amount-1)){$content.= ',';} } $content .=")"; //after every 100th command cycle [or at the last line] ....p.s. but should be inserted 1 cycle earlier if ( (($st_counter+1)%100==0 && $st_counter!=0) || $st_counter+1==$rows_num) {$content .= ";";} else {$content .= ",";} $st_counter=$st_counter+1; } } $content .="\n\n\n"; } $backup_name = $backup_name ? $backup_name : $name."___(".date('H-i-s')."_".date('d-m-Y').")__rand".rand(1,11111111).".sql"; header('Content-Type: application/octet-stream'); header("Content-Transfer-Encoding: Binary"); header("Content-disposition: attachment; filename=\"".$backup_name."\""); echo $content; exit; } A: You could use SQL for this: $file = 'backups/mytable.sql'; $result = mysql_query("SELECT * INTO OUTFILE '$file' FROM `##table##`"); Then just point a browser or FTP client at the directory/file (backups/mytable.sql). This is also a nice way to do incremental backups, giving the filename a timestamp, for example. To get it back into your database from that file you can use: $file = 'backups/mytable.sql'; $result = mysql_query("LOAD DATA INFILE '$file' INTO TABLE `##table##`"); The other option is to use PHP to invoke a system command on the server and run 'mysqldump': $file = 'backups/mytable.sql'; system("mysqldump --opt -h ##databaseserver## -u ##username## -p ##password## ##database## | gzip > ".$file); A: If you have FTP/SFTP access you could just go ahead and upload phpMyAdmin yourself. I'm using this little package to make automated MySQL backups from a server I only have FTP access to: http://www.taw24.de/download/pafiledb.php?PHPSESSID=b48001ea004aacd86f5643a72feb2829&action=viewfile&fid=43&id=1 The site is in German but the download has some English documentation as well.
A quick google also turns up this, but I have not used it myself: http://snipplr.com/view/173/mysql-dump/ A: You might consider looking at: http://www.webyog.com This is a great GUI admin tool, and they have a really neat HTTP-Tunneling feature (I'm not sure if this is only in the Enterprise edition, which costs a few bucks). Basically you upload a script they provide into your webspace (a PHP script) and point SQLyog manager to it, and you can access the database(s). It uses this script to tunnel/proxy the requests/queries between your home client and the server. I know at least 1 person who uses this method with great results. A: Here is a PHP script I made which will back up all tables in your database. It is based on this http://davidwalsh.name/backup-mysql-database-php with some improvements. First of all it will correctly set up foreign key restrictions. In my setup the script will run on a certain day of the week, let's say Monday. In case it did not run on Monday, it will still run on Tuesday (for example), creating the .sql file with the date of the previous Monday, when it was supposed to run. It will erase the .sql file from 4 weeks ago, so it always keeps the last 4 backups. Here's the code: <?php backup_tables(); // backup all tables in db function backup_tables() { $day_of_backup = 'Monday'; //possible values: `Monday` `Tuesday` `Wednesday` `Thursday` `Friday` `Saturday` `Sunday` $backup_path = 'databases/'; //make sure it ends with "/" $db_host = 'localhost'; $db_user = 'root'; $db_pass = ''; $db_name = 'movies_database_1'; //set the correct date for filename if (date('l') == $day_of_backup) { $date = date("Y-m-d"); } else { //set $date to the date when the last backup should have occurred $date = date("Y-m-d", strtotime($day_of_backup.' -7 days')); } if (!file_exists($backup_path.$date.'-backup'.'.sql')) { //connect to db $link = mysqli_connect($db_host,$db_user,$db_pass); mysqli_set_charset($link,'utf8'); mysqli_select_db($link,$db_name); //get all of the tables $tables = array(); $result = mysqli_query($link, 'SHOW TABLES'); while($row = mysqli_fetch_row($result)) { $tables[] = $row[0]; } //disable foreign keys (to avoid errors) $return = 'SET FOREIGN_KEY_CHECKS=0;' . "\r\n"; $return.= 'SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";' . "\r\n"; $return.= 'SET AUTOCOMMIT=0;' . "\r\n"; $return.= 'START TRANSACTION;' . "\r\n"; //cycle through foreach($tables as $table) { $result = mysqli_query($link, 'SELECT * FROM '.$table); $num_fields = mysqli_num_fields($result); $num_rows = mysqli_num_rows($result); $i_row = 0; //$return.= 'DROP TABLE '.$table.';'; $row2 = mysqli_fetch_row(mysqli_query($link,'SHOW CREATE TABLE '.$table)); $return.= "\n\n".$row2[1].";\n\n"; if ($num_rows !== 0) { $row3 = mysqli_fetch_fields($result); $return.= 'INSERT INTO '.$table.'( '; foreach ($row3 as $th) { $return.= '`'.$th->name.'`, '; } $return = substr($return, 0, -2); $return.= ' ) VALUES'; for ($i = 0; $i < $num_fields; $i++) { while($row = mysqli_fetch_row($result)) { $return.="\n("; for($j=0; $j<$num_fields; $j++) { $row[$j] = addslashes($row[$j]); $row[$j] = preg_replace("#\n#","\\n",$row[$j]); if (isset($row[$j])) { $return.= '"'.$row[$j].'"' ; } else { $return.= '""'; } if ($j<($num_fields-1)) { $return.= ','; } } if (++$i_row == $num_rows) { $return.= ");"; // last row } else { $return.= "),"; // not last row } } } } $return.="\n\n\n"; } // enable foreign keys $return .= 'SET FOREIGN_KEY_CHECKS=1;' . "\r\n"; $return.= 'COMMIT;'; //set file path if (!is_dir($backup_path)) { mkdir($backup_path, 0755, true); } //delete old file $old_date = date("Y-m-d", strtotime('-4 weeks', strtotime($date))); $old_file = $backup_path.$old_date.'-backup'.'.sql'; if (file_exists($old_file)) unlink($old_file); //save file $handle = fopen($backup_path.$date.'-backup'.'.sql','w+'); fwrite($handle,$return); fclose($handle); } } ?> A: I did it by exporting to CSV, and then importing with whatever utility is available. I quite like the use of the php://output stream. $result = $db_con->query('SELECT * FROM `some_table`'); $fp = fopen('php://output', 'w'); if ($fp && $result) { header('Content-Type: text/csv'); header('Content-Disposition: attachment; filename="export.csv"'); while ($row = $result->fetch_array(MYSQLI_NUM)) { fputcsv($fp, array_values($row)); } die; } A: You should also consider phpMinAdmin, which is only one file, so it's easy to upload and set up. A: I found that I didn't have enough permissions for SELECT * INTO OUTFILE. But I was able to use enough PHP (iterating and imploding) to really cut down on the nested loops compared to other approaches. $dbfile = tempnam(sys_get_temp_dir(),'sql'); // array_chunk, but for an iterable function iter_chunk($iterable,$chunksize) { $ret = array(); foreach ( $iterable as $item ) { $ret[] = $item; if ( count($ret) >= $chunksize ) { yield $ret; $ret = array(); } } if ( count($ret) > 0 ) { yield $ret; } } function tupleFromArray($assocArr) { return '('.implode(',',array_map(function($val) { return '"'.addslashes($val).'"'; },array_values($assocArr))).')'; } file_put_contents($dbfile,"\n-- Table $table --\n/*\n"); $description = $db->query("DESCRIBE `$table`"); $row = $description->fetch_assoc(); file_put_contents($dbfile,implode("\t",array_keys($row))."\n",FILE_APPEND); foreach ( $description as $row ) { file_put_contents($dbfile,implode("\t",array_values($row))."\n",FILE_APPEND); } file_put_contents($dbfile,"*/\n",FILE_APPEND); file_put_contents($dbfile,"DROP TABLE IF EXISTS `$table`;\n",FILE_APPEND); file_put_contents($dbfile,array_pop($db->query("SHOW CREATE TABLE `$table`")->fetch_row()),FILE_APPEND); $ret = $db->query("SELECT * FROM `$table`"); $chunkedData = iter_chunk($ret,1023); foreach ( $chunkedData as $chunk ) { file_put_contents($dbfile, "\n\nINSERT INTO `$table` VALUES " . implode(',',array_map('tupleFromArray',$chunk)) . ";\n", FILE_APPEND ); } readfile($dbfile); unlink($dbfile); If you have tables with foreign keys, this approach can still work if you drop them in the correct order and then recreate them in the correct (reverse) order. The CREATE statement will create the foreign key dependency for you. Go through SELECT * FROM information_schema.referential_constraints to determine that order. If your foreign keys have a circular dependency, then there is no possible order to drop or create. In that case, you might be able to follow the lead of phpMyAdmin, which creates all of the foreign keys at the end. But this also means that you have to adjust the CREATE statements. A: I use mysqldump via the command line: exec("mysqldump sourceDatabase -uUsername -p'password' > outputFilename.sql"); Then you just download the resulting file and you're done.
{ "language": "en", "url": "https://stackoverflow.com/questions/81934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Hide google Toolbar by javascript Is there a way to hide the google toolbar in my browser programmatically? A: You haven't said which browser you are using so I'm going to assume Internet Explorer* and answer No. If JavaScript on a web page could manipulate the browser, it would be a serious security hole and could create a lot of confusion for users. So no... for a good reason: Security. *. If you were using Firefox, and were talking about JavaScript within an extension to manipulate and theme the window chrome, then this would be a different story. A: I really think that it is impossible to do that with JavaScript. This is because JavaScript is designed to control the behaviour of the site, and the browser is not part of the site. Of course, maybe you are talking about some other Google toolbar than the plugin in the browser. A: As far as I know, you cannot access these parts of the browser due to security issues. But you can load new browser windows without toolbars as such. I don't know exactly how (hopefully other users will help you out), but maybe start here: http://www.experts-exchange.com/Web/Web_Languages/JavaScript/Q_20782379.html (PS: I know, it's experts-exchange, but I'm not going to copy someone else's work, even if it's posted on EE).
{ "language": "en", "url": "https://stackoverflow.com/questions/81945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unicode debug visualizer in Visual Studio 2008 Is there a Unicode debug visualizer in Visual Studio 2008? I have an XML file that I'm pretty sure is in Unicode. When I open it in WordPad, it shows the Japanese characters correctly. When I read the file into a string using File.ReadAllText (UTF8), all the Japanese characters show up as blocks in the string visualizer. If I use the XML visualizer, the characters show up correctly. A: If you're getting square blocks, rather than complete garbage, you probably just need to specify a more suitable font in Visual Studio (in Tools | Options | Fonts and Colors). Try MS Gothic or MS Mincho (both Japanese fonts); I am guessing your issue can be resolved by tweaking the settings for [Watch, Locals and Autos Tool Windows], but it could be somewhere else. Not all applications magically font-link to a font that contains the characters you want to display. A: You say it's Unicode, so why not use File.ReadAllText(Encoding.Unicode) then?
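A short sketch of that last suggestion (assuming the file really is UTF-16, which .NET calls Encoding.Unicode; the file name here is a placeholder):

using System;
using System.IO;
using System.Text;

class ReadUnicodeXml
{
    static void Main()
    {
        // Encoding.Unicode is UTF-16 little-endian. Reading a UTF-16 file as
        // UTF-8 misinterprets the 2-byte code units, which is one way to end
        // up with blocks or garbage in the debugger.
        string text = File.ReadAllText("my.xml", Encoding.Unicode);
        Console.WriteLine(text);
    }
}

Note that if the file begins with a byte-order mark, the File.ReadAllText(path) overload without an encoding argument will usually detect UTF-16 on its own.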
{ "language": "en", "url": "https://stackoverflow.com/questions/81949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Setting Remote Launch in DCOM I can use DCOMCNFG to disable remote launch on my DCOM application, but I would like to do this programmatically. I looked at CoInitializeSecurity, but that does not seem to do it. Anyone done this? I am using Delphi BTW. A: The binary data is simply a security descriptor structure (PSecurityDescriptor). I mean it is a copy of the memory of this structure. And, of course, the security descriptor is self-relative. JWSCL can create such a structure easily. The LaunchPermission and AccessPermission values list, for every user, access rights that also cover remote and local access. A: The permissions for Remote/Local Activation/Launch are stored in the registry under the AppID for the object. I'm not sure how to edit it programmatically. A: This is very similar to "change Access Permissions in Component Services > COM Security with script/api?", for which I posted a response.
{ "language": "en", "url": "https://stackoverflow.com/questions/81963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: 'Looser' typing in C# by casting down the inheritance tree The question I want to ask is thus: Is casting down the inheritance tree (ie. towards a more specialised class) from inside an abstract class excusable, or even a good thing, or is it always a poor choice with better options available? Now, the example of why I think it can be used for good. I recently implemented Bencoding from the BitTorrent protocol in C#. A simple enough problem, how to represent the data. I chose to do it this way: We have an abstract BItem class, which provides some basic functionality, including the static BItem Decode(string) that is used to decode a Bencoded string into the necessary structure. There are also four derived classes, BString, BInteger, BList and BDictionary, representing the four different data types that can be encoded. Now, here is the tricky part. BList and BDictionary have this[int] and this[string] accessors respectively to allow access to the array-like qualities of these data types. The potentially horrific part is coming now: BDictionary torrent = (BDictionary) BItem.DecodeFile("my.torrent"); int filelength = (BInteger)((BDictionary)((BList)((BDictionary) torrent["info"])["files"])[0])["length"]; Well, you get the picture... Ouch, that's hard on the eyes, not to mention the brain. So, I introduced something extra into the abstract class: public BItem this[int index] { get { return ((BList)this)[index]; } } public BItem this[string index] { get { return ((BDictionary)this)[index]; } } Now we could rewrite that old code as: BDictionary torrent = (BDictionary)BItem.DecodeFile("my.torrent"); int filelength = (BInteger)torrent["info"]["files"][0]["length"]; Wow, hey presto, MUCH more readable code. But did I just sell part of my soul for implying knowledge of subclasses into the abstract class? EDIT: In response to some of the answers coming in, you're completely off track for this particular question since the structure is variable; for instance my example of torrent["info"]["files"][0]["length"] is valid, but so is torrent["announce-list"][0][0], and both would be in 90% of torrent files out there. Generics isn't the way to go with this problem, at least :(. Have a click through to the spec I linked, it's only 4 small dot-points large. A: I think I would make the this[int] and this[string] accessors virtual and override them in BList/BDictionary. Classes where the accessors do not make sense should throw a NotSupportedException() (perhaps by having a default implementation in BItem). That makes your code work in the same way and gives you a more readable error in case you should write (BInteger)torrent["info"][0]["files"]["length"]; by mistake. (See the sketch at the end of this thread.) A: You really should not access any derived classes from the base class, as it pretty much breaks the idea of OOP. Readability certainly goes a long way, but I wouldn't trade it for reusability. Consider the case when you'll need to add another subclass - you'll also need to update the base class accordingly. A: If file length is something you retrieve often, why not implement a property in the BDictionary (?) class... so that your code becomes: BDictionary torrent = BItem.DecodeFile("my.torrent"); int filelength = torrent.FileLength; That way the implementation details are hidden from the user. A: The way I see it, not all BItems are collections, thus not all BItems have indexers, so the indexer shouldn't be in BItem. I would derive another abstract class from BItem, let's name it BCollection, and put the indexers there, something like: abstract class BCollection : BItem { public BItem this[int index] {get;} public BItem this[string index] {get;} } and make BList and BDictionary inherit from BCollection. Or you could go the extra mile and make BCollection a generic class. A: My recommendation would be to introduce more abstractions. I find it confusing that a BItem has a DecodeFile() which returns a BDictionary. This may be a reasonable thing to do in the torrent domain, I don't know. However, I would find an API like the following more reasonable: BFile torrent = BFile.DecodeFile("my.torrent"); int filelength = torrent.Length; A: Did you consider parsing a simple "path" so you could write it this way: BDictionary torrent = BItem.DecodeFile("my.torrent"); int filelength = (int)torrent.Fetch("info.files.0.length"); Perhaps not the best way, but the readability increases (a little) A: * *If you have complete control of your codebase and your thought-process, by all means do. *If not, you'll regret this the day some new person injects a BItem derivation that you didn't see coming into your BList or BDictionary. If you have to do this, at least wrap it (control access to the list) in a class which has strongly typed method signatures. BString GetString(BInteger); SetString(BInteger, BString); Accept and return BStrings even though you internally store it in a BList of BItems. (let me split before I make my 2 B or not 2 B) A: Hmm. I would actually argue that the first line of code is more readable than the second - it takes a little longer to figure out what's going on in it, but it's more apparent that you're treating objects as BList or BDictionary. Applying the methods to the abstract class hides that detail, which can make it harder to figure out what your method is actually doing. A: If you introduce generics, you can avoid casting. class DecodedTorrent : BDictionary<BDictionary<BList<BDictionary<BInteger>>>> { } DecodedTorrent torrent = BItem.DecodeFile("mytorrent"); int x = torrent["info"]["files"][0]["length"]; Hmm, but that probably won't work, as the types may depend on the path you take through the structure. A: Is it just me? BDictionary torrent = BItem.DecodeFile("my.torrent"); int filelength = (BInteger)((BDictionary)((BList)((BDictionary) torrent["info"])["files"])[0])["length"]; You don't need the BDictionary cast; 'torrent' is declared as a BDictionary. public BItem this[int index] { get { return ((BList)this)[index]; } } public BItem this[string index] { get { return ((BDictionary)this)[index]; } } These don't achieve the desired result, as the return type is still the abstract version, so you still have to cast. The rewritten code would have to be BDictionary torrent = BItem.DecodeFile("my.torrent"); int filelength = (BInteger)((BList)((BDictionary)torrent["info"]["files"])[0])["length"]; Which is just as bad as the first lot.
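To make the virtual-indexer idea from the first answer concrete, here is a minimal sketch (the backing collections and exception messages are my own assumptions, not part of the original design):

using System;
using System.Collections.Generic;

public abstract class BItem
{
    // Non-collection items fail with a descriptive error instead of an InvalidCastException.
    public virtual BItem this[int index]
    {
        get { throw new NotSupportedException(GetType().Name + " is not indexable by position."); }
    }

    public virtual BItem this[string key]
    {
        get { throw new NotSupportedException(GetType().Name + " is not indexable by key."); }
    }
}

public class BList : BItem
{
    private readonly List<BItem> items = new List<BItem>();

    public override BItem this[int index]
    {
        get { return items[index]; }
    }
}

public class BDictionary : BItem
{
    private readonly Dictionary<string, BItem> items = new Dictionary<string, BItem>();

    public override BItem this[string key]
    {
        get { return items[key]; }
    }
}

With this in place the chained torrent["info"]["files"][0]["length"] lookup still reads cleanly, but a mistaken index into, say, a BInteger produces a clear NotSupportedException rather than a cast failure.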
{ "language": "en", "url": "https://stackoverflow.com/questions/81972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Issues with using ruby (on rails) in 64-bit multiple platforms Has anyone used ruby in 64-bit environments on various platforms (HP-UX, Solaris, AIX etc.) in a commercial production environment that heavily relies on a database? Have you faced any issues / bugs during these times? I know that overall things look ok. Compilation, deployment etc. I would like to know if you encountered any 'gotcha's A: I have no issues with Debian on a 64 bit platform. The only issues I've had with 64 bit Linux environments are related to things like the Flash plugin for Firefox. Edit: I used Debian on a server and a laptop. The Firefox problem was only on the laptop. (For obvious reasons) A: We use it on 64-bit FreeBSD (MySQL database server). Ruby itself has been fine. There was an issue with Phusion Passenger a while ago, but it's since been fixed, and we've had some issues with C extensions (notably RMagick), but we've been able to overcome them all. RMagick didn't crash, but had a bug where it wouldn't produce valid output when compositing TIFF files with clipping paths. If you don't rely on any obscure C extensions I'd say you'll be fine. A: I had to use 32-bit MySQL on my 64-bit MacBook Pro with Rails because mysql.gem couldn't handle 64-bit MySQL. A: I'm sorry, I have no experience with Ruby on anything else but Linux. As epochwolf has written, I have also had no troubles with Debian, Postgres, or Rails (neither with Apache and Passenger nor with a Mongrel cluster). So I'm using probably the most widely used platform for Ruby, so I'd expect that there are fewer problems. I've done my share of AIX administration, but at that time Ruby was not even known. So I can't tell if Ruby is that stable on other Unices. However, it seems one can get around this in two ways: 1) just try it on systems other than Linux (or some BSD, be it Free, Open, or Net); 2) if you encounter problems, use a server under Linux and/or some BSD which is known to work. Regards Friedrich A: I run both 32 bit and 64 bit ruby on Solaris 10. Compiling extensions for 64-bit AMD64 can be a little tricky. There exists a Sybase driver, which works but has a couple of bugs. The Oracle driver is a little better. It's not the most common setup, so finding help can be a bit difficult. I'm running Ruby 1.8.6-p287. Later versions have caused issues. I usually compile 32 bit ruby with gcc and 64 bit with Sun C 5.8.
{ "language": "en", "url": "https://stackoverflow.com/questions/81973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: A potentially dangerous Request.Form value was detected from the client Every time a user posts something containing < or > in a page in my web application, I get this exception thrown. I don't want to go into the discussion about the smartness of throwing an exception or crashing an entire web application because somebody entered a character in a text box, but I am looking for an elegant way to handle this. Trapping the exception and showing An error has occurred please go back and re-type your entire form again, but this time please do not use < doesn't seem professional enough to me. Disabling post validation (validateRequest="false") will definitely avoid this error, but it will leave the page vulnerable to a number of attacks. Ideally: When a post back occurs containing HTML restricted characters, that posted value in the Form collection will be automatically HTML encoded. So the .Text property of my text-box will be something&lt;html&gt; Is there a way I can do this from a handler? A: You could also use JavaScript's escape(string) function to replace the special characters. Then server side use Server.URLDecode(string) to switch it back. This way you don't have to turn off input validation and it will be more clear to other programmers that the string may have HTML content. A: I was getting this error too. In my case, a user entered an accented character á in a Role Name (regarding the ASP.NET membership provider). I pass the role name to a method to grant Users to that role and the $.ajax post request was failing miserably... I did this to solve the problem: Instead of data: { roleName: '@Model.RoleName', users: users } Do this data: { roleName: '@Html.Raw(@Model.RoleName)', users: users } @Html.Raw did the trick. I was getting the Role name as HTML value roleName="Cadastro b&#225;s". This value with HTML entity &#225; was being blocked by ASP.NET MVC. Now I get the roleName parameter value the way it should be: roleName="Cadastro Básico" and ASP.NET MVC engine won't block the request anymore. A: The previous answers are great, but nobody said how to exclude a single field from being validated for HTML/JavaScript injections. I don't know about previous versions, but in MVC3 Beta you can do this: [HttpPost, ValidateInput(true, Exclude = "YourFieldName")] public virtual ActionResult Edit(int id, FormCollection collection) { ... } This still validates all the fields except for the excluded one. The nice thing about this is that your validation attributes still validate the field, but you just don't get the "A potentially dangerous Request.Form value was detected from the client" exceptions. I've used this for validating a regular expression. I've made my own ValidationAttribute to see if the regular expression is valid or not. As regular expressions can contain something that looks like a script I applied the above code - the regular expression is still being checked if it's valid or not, but not if it contains scripts or HTML.
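As a sketch of the ValidationAttribute idea mentioned in that last answer (the attribute name and the exact rule are hypothetical; the original poster's implementation is not shown):

using System;
using System.ComponentModel.DataAnnotations;
using System.Text.RegularExpressions;

// Accepts a field only if it parses as a regular expression.
public class IsValidRegexAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        var pattern = value as string;
        if (string.IsNullOrEmpty(pattern)) return true; // leave emptiness to [Required]
        try
        {
            new Regex(pattern); // throws ArgumentException for a malformed pattern
            return true;
        }
        catch (ArgumentException)
        {
            return false;
        }
    }
}

Combined with [ValidateInput(true, Exclude = "YourFieldName")] on the action as shown above, the excluded field skips request validation but is still checked for being a parseable regex.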
A: I ended up using JavaScript before each postback to check for the characters you didn't want, such as: <asp:Button runat="server" ID="saveButton" Text="Save" CssClass="saveButton" OnClientClick="return checkFields()" /> function checkFields() { var tbs = document.getElementsByTagName("input"); var isValid = true; for (var i = 0; i < tbs.length; i++) { if (tbs[i].type == 'text') { if (tbs[i].value.indexOf('<') != -1 || tbs[i].value.indexOf('>') != -1) { alert('<> symbols not allowed.'); isValid = false; } } } return isValid; } Granted my page is mostly data entry, and there are very few elements that do postbacks, but at least their data is retained. A: You can use something like: var nvc = Request.Unvalidated().Form; Later, nvc["yourKey"] should work. A: In ASP.NET MVC you need to set requestValidationMode="2.0" and validateRequest="false" in web.config, and apply a ValidateInput attribute to your controller action: <configuration> <system.web> <httpRuntime requestValidationMode="2.0" /> <pages validateRequest="false" /> </system.web> </configuration> and [HttpPost, ValidateInput(false)] public ActionResult Edit(string message) { ... } A: There's a different solution to this error if you're using ASP.NET MVC: * *ASP.NET MVC – pages validateRequest=false doesn’t work? *Why is ValidateInput(False) not working? *ASP.NET MVC RC1, VALIDATEINPUT, A POTENTIAL DANGEROUS REQUEST AND THE PITFALL C# sample: [HttpPost, ValidateInput(false)] public ActionResult Edit(FormCollection collection) { // ... } Visual Basic sample: <AcceptVerbs(HttpVerbs.Post), ValidateInput(False)> _ Function Edit(ByVal collection As FormCollection) As ActionResult ... End Function A: You can HTML encode text box content, but unfortunately that won't stop the exception from happening. In my experience there is no way around it, and you have to disable page validation. By doing that you're saying: "I'll be careful, I promise." A: As long as these are only "<" and ">" (and not the double quote itself) characters and you're using them in a context like <input value="this" />, you're safe (while for <textarea>this one</textarea> you would be vulnerable, of course). That may simplify your situation, but for anything more use one of the other posted solutions. A: If you're just looking to tell your users that < and > are not to be used, BUT you don't want the entire form processed/posted back (and all the input lost) beforehand, could you not simply put a validator around the field to screen for those (and maybe other potentially dangerous) characters? A: You can automatically HTML-encode fields in a custom model binder. My solution is somewhat different: I put an error in ModelState and display the error message near the field.
It's easy to modify this code to encode automatically. public class AppModelBinder : DefaultModelBinder { protected override object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType) { try { return base.CreateModel(controllerContext, bindingContext, modelType); } catch (HttpRequestValidationException e) { HandleHttpRequestValidationException(bindingContext, e); return null; // Encode here } } protected override object GetPropertyValue(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor, IModelBinder propertyBinder) { try { return base.GetPropertyValue(controllerContext, bindingContext, propertyDescriptor, propertyBinder); } catch (HttpRequestValidationException e) { HandleHttpRequestValidationException(bindingContext, e); return null; // Encode here } } protected void HandleHttpRequestValidationException(ModelBindingContext bindingContext, HttpRequestValidationException ex) { var valueProviderCollection = bindingContext.ValueProvider as ValueProviderCollection; if (valueProviderCollection != null) { ValueProviderResult valueProviderResult = valueProviderCollection.GetValue(bindingContext.ModelName, skipValidation: true); bindingContext.ModelState.SetModelValue(bindingContext.ModelName, valueProviderResult); } string errorMessage = string.Format(CultureInfo.CurrentCulture, "{0} contains invalid symbols: <, &", bindingContext.ModelMetadata.DisplayName); bindingContext.ModelState.AddModelError(bindingContext.ModelName, errorMessage); } } In Application_Start: ModelBinders.Binders.DefaultBinder = new AppModelBinder(); Note that it works only for form fields. A dangerous value is not passed to the controller model; instead it is stored in ModelState and can be redisplayed on the form with an error message. Dangerous chars in a URL may be handled this way: private void Application_Error(object sender, EventArgs e) { Exception exception = Server.GetLastError(); HttpContext httpContext = HttpContext.Current; HttpException httpException = exception as HttpException; if (httpException != null) { RouteData routeData = new RouteData(); routeData.Values.Add("controller", "Error"); var httpCode = httpException.GetHttpCode(); switch (httpCode) { case (int)HttpStatusCode.BadRequest /* 400 */: if (httpException.Message.Contains("Request.Path")) { httpContext.Response.Clear(); RequestContext requestContext = new RequestContext(new HttpContextWrapper(Context), routeData); requestContext.RouteData.Values["action"] = "InvalidUrl"; requestContext.RouteData.Values["controller"] = "Error"; IControllerFactory factory = ControllerBuilder.Current.GetControllerFactory(); IController controller = factory.CreateController(requestContext, "Error"); controller.Execute(requestContext); httpContext.Server.ClearError(); Response.StatusCode = (int)HttpStatusCode.BadRequest /* 400 */; } break; } } } ErrorController: public class ErrorController : Controller { public ActionResult InvalidUrl() { return View(); } } A: For those who are not using model binding, who are extracting each parameter from the Request.Form, who are sure the input text will cause no harm, there is another way. Not a great solution but it will do the job. From the client side, encode it as a URI, then send it. e.g.: encodeURIComponent($("#MsgBody").val()); From the server side, accept it and decode it as a URI. e.g.: string temp = !string.IsNullOrEmpty(HttpContext.Current.Request.Form["MsgBody"]) ?
System.Web.HttpUtility.UrlDecode(HttpContext.Current.Request.Form["MsgBody"]) : null; or string temp = !string.IsNullOrEmpty(HttpContext.Current.Request.Form["MsgBody"]) ? System.Uri.UnescapeDataString(HttpContext.Current.Request.Form["MsgBody"]) : null; Please look up the differences between UrlDecode and UnescapeDataString A: In my case, using an asp:TextBox control (ASP.NET 4.5), instead of setting validateRequest="false" for the whole page, I used <asp:TextBox runat="server" ID="mainTextBox" ValidateRequestMode="Disabled" ></asp:TextBox> on the TextBox that caused the exception. A: The answer to this question is simple: var varname = Request.Unvalidated["parameter_name"]; This would disable validation for the particular request. A: You can catch that error in Global.asax. I still want to validate, but show an appropriate message. On the blog listed below, a sample like this was available. void Application_Error(object sender, EventArgs e) { Exception ex = Server.GetLastError(); if (ex is HttpRequestValidationException) { Response.Clear(); Response.StatusCode = 200; Response.Write(@"[html]"); Response.End(); } } Redirecting to another page also seems like a reasonable response to the exception. http://www.romsteady.net/blog/2007/06/how-to-catch-httprequestvalidationexcep.html A: In ASP.NET MVC (starting in version 3), you can add the AllowHtml attribute to a property on your model. It allows a request to include HTML markup during model binding by skipping request validation for the property. [AllowHtml] public string Description { get; set; } A: For MVC, ignore input validation by adding [ValidateInput(false)] above each Action in the Controller. A: None of the suggestions worked for me. I did not want to turn off this feature for the whole website anyhow, because 99% of the time I do not want my users placing HTML on web forms. I just created my own workaround method since I'm the only one using this particular application. I HTML-encode the input in the code-behind and insert it into my database. A: As indicated in my comment to Sel's answer, this is our extension to a custom request validator. public class SkippableRequestValidator : RequestValidator { protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex) { if (collectionKey != null && collectionKey.EndsWith("_NoValidation")) { validationFailureIndex = 0; return true; } return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex); } } A: Please bear in mind that some .NET controls will automatically HTML encode the output. For instance, setting the .Text property on a TextBox control will automatically encode it. That specifically means converting < into &lt;, > into &gt; and & into &amp;. So be wary of doing this... myTextBox.Text = Server.HtmlEncode(myStringFromDatabase); // Pseudo code However, the .Text property for HyperLink, Literal and Label won't HTML encode things, so wrapping Server.HtmlEncode(); around anything being set on these properties is a must if you want to prevent <script> window.location = "http://www.google.com"; </script> from being output into your page and subsequently executed. Do a little experimenting to see what gets encoded and what doesn't. A: You should use the Server.HtmlEncode method to protect your site from dangerous input. More info here A: A solution: I don't like turning off the post validation (validateRequest="false").
On the other hand, it is not acceptable that the application crashes just because an innocent user happens to type <x or something. Therefore I wrote a client-side JavaScript function (xssCheckValidates) that makes a preliminary check. This function is called when there is an attempt to post the form data, like this: <form id="form1" runat="server" onsubmit="return xssCheckValidates();"> The function is quite simple and could be improved, but it is doing its job. Please notice that the purpose of this is not to protect the system from hacking; it is meant to protect the users from a bad experience. The request validation done at the server is still turned on, and that is (part of) the protection of the system (to the extent it is capable of doing that). The reason I say "part of" here is because I have heard that the built-in request validation might not be enough, so other complementary means might be necessary to have full protection. But, again, the JavaScript function I present here has nothing to do with protecting the system. It is only meant to make sure the users will not have a bad experience. You can try it out here: function xssCheckValidates() { var valid = true; var inp = document.querySelectorAll( "input:not(:disabled):not([readonly]):not([type=hidden])" + ",textarea:not(:disabled):not([readonly])"); for (var i = 0; i < inp.length; i++) { if (!inp[i].readOnly) { if (inp[i].value.indexOf('<') > -1) { valid = false; break; } if (inp[i].value.indexOf('&#') > -1) { valid = false; break; } } } if (valid) { return true; } else { alert('In one or more of the text fields, you have typed\r\nthe character "<" or the character sequence "&#".\r\n\r\nThis is unfortunately not allowed since\r\nit can be used in hacking attempts.\r\n\r\nPlease edit the field and try again.'); return false; } } <form onsubmit="return xssCheckValidates();" > Try to type < or &# <br/> <input type="text" /><br/> <textarea></textarea> <input type="submit" value="Send" /> </form> A: In the web.config file, within the <system.web> tags, insert the httpRuntime element with the attribute requestValidationMode="2.0". Also add the validateRequest="false" attribute in the pages element. Example: <configuration> <system.web> <httpRuntime requestValidationMode="2.0" /> <pages validateRequest="false"> </pages> </system.web> </configuration> A: If you don't want to disable ValidateRequest, you need to implement a JavaScript function in order to avoid the exception. It is not the best option, but it works. function AlphanumericValidation(evt) { var charCode = (evt.charCode) ? evt.charCode : ((evt.keyCode) ? evt.keyCode : ((evt.which) ? evt.which : 0)); // User typed the Enter key if (charCode == 13) { // Do something, set controls focus or do anything return false; } // User can not type non-alphanumeric characters if ( (charCode < 48) || (charCode > 122) || ((charCode > 57) && (charCode < 65)) || ((charCode > 90) && (charCode < 97)) ) { // Show a message or do something return false; } } Then in the code-behind, on the PageLoad event, add the attribute to your control with the following code: Me.TextBox1.Attributes.Add("OnKeyPress", "return AlphanumericValidation(event);") A: If you are on .NET 4.0 make sure you add this in your web.config file inside the <system.web> tags: <httpRuntime requestValidationMode="2.0" /> In .NET 2.0, request validation only applied to aspx requests. In .NET 4.0 this was expanded to include all requests.
You can revert to only performing XSS validation when processing .aspx by specifying: requestValidationMode="2.0" You can disable request validation entirely by specifying: validateRequest="false" A: It seems no one has mentioned the below yet, but it fixes the issue for me. And before anyone says yeah it's Visual Basic... yuck. <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Example.aspx.vb" Inherits="Example.Example" ValidateRequest="false" %> I don't know if there are any downsides, but for me this worked amazingly. A: Another solution is: protected void Application_Start() { ... RequestValidator.Current = new MyRequestValidator(); } public class MyRequestValidator: RequestValidator { protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex) { bool result = base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex); if (!result) { // Write your validation here if (requestValidationSource == RequestValidationSource.Form || requestValidationSource == RequestValidationSource.QueryString) return true; // Suppress error message } return result; } } A: I see there's a lot written about this...and I didn't see this mentioned. This has been available since .NET Framework 4.5. The ValidateRequestMode setting for a control is a great option. This way the other controls on the page are still protected. No web.config changes needed. protected void Page_Load(object sender, EventArgs e) { txtMachKey.ValidateRequestMode = ValidateRequestMode.Disabled; } A: I know this question is about form posting, but I would like to add some details for people who received this error in other circumstances. It could also occur on a handler used to implement a web service. Suppose your web client sends POST or PUT requests using ajax and sends either JSON or XML text or raw data (file content) to your web service. Because your web service does not need to get any information from a Content-Type header, your JavaScript code did not set this header on your ajax request. But if you do not set this header on a POST/PUT ajax request, Safari may add this header: "Content-Type: application/x-www-form-urlencoded". I observed that on Safari 6 on iPhone, but other Safari versions/OSes or Chrome may do the same. So when it receives this Content-Type header, some part of the .NET Framework assumes the request body structure corresponds to an HTML form posting, while it does not, and raises an HttpRequestValidationException. The first thing to do is obviously to always set the Content-Type header to anything but a form MIME type on a POST/PUT ajax request, even if it is useless to your web service. I also discovered this detail: In these circumstances, the HttpRequestValidationException is raised when your code tries to access the HttpRequest.Params collection. But surprisingly, this exception is not raised when it accesses the HttpRequest.ServerVariables collection. This shows that while these two collections seem to be nearly identical, one accesses request data through security checks and the other one does not. A: I guess you could do it in a module; but that leaves open some questions: what if you want to save the input to a database? Suddenly, because you're saving encoded data to the database, you end up trusting input from it, which is probably a bad idea. Ideally you store raw unencoded data in the database and encode every time.
Disabling the protection on a per-page level and then encoding each time is a better option. Rather than using Server.HtmlEncode you should look at the newer, more complete Anti-XSS library from the Microsoft ACE team. A: If you're using framework 4.0, then use this entry in the web.config (<pages validateRequest="false" />): <configuration> <system.web> <pages validateRequest="false" /> </system.web> </configuration> If you're using framework 4.5, then use this entry in the web.config (requestValidationMode="2.0"): <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" requestValidationMode="2.0"/> </system.web> If you want it for only a single page, then in your aspx file you should put this in the first line: <%@ Page ValidateRequest="false" %> If you already have something like <%@ Page, just add the rest => ValidateRequest="false" %> I recommend not doing it. A: In ASP.NET, you can catch the exception and do something about it, such as displaying a friendly message or redirecting to another page... Also there is a possibility that you can handle the validation by yourself... Display a friendly message: protected override void OnError(EventArgs e) { base.OnError(e); var ex = Server.GetLastError().GetBaseException(); if (ex is System.Web.HttpRequestValidationException) { Response.Clear(); Response.Write("Invalid characters."); // Response.Write(HttpUtility.HtmlEncode(ex.Message)); Response.StatusCode = 200; Response.End(); } } A: For ASP.NET 4.0, you can allow markup as input for specific pages instead of the whole site by putting it all in a <location> element. This will make sure all your other pages are safe. You do NOT need to put ValidateRequest="false" in your .aspx page. <configuration> ... <location path="MyFolder/.aspx"> <system.web> <pages validateRequest="false" /> <httpRuntime requestValidationMode="2.0" /> </system.web> </location> ... </configuration> It is safer to control this inside your web.config, because you can see at a site level which pages allow markup as input. You still need to programmatically validate input on pages where request validation is disabled. A: I found a solution that uses JavaScript to encode the data, which is decoded in .NET (and doesn't require jQuery). * *Make the textbox an HTML element (like textarea) instead of an ASP one. *Add a hidden field. *Add the following JavaScript function to your header. function boo() { targetText = document.getElementById("HiddenField1"); sourceText = document.getElementById("userbox"); targetText.value = escape(sourceText.innerText); } In your textarea, include an onchange that calls boo(): <textarea id="userbox" onchange="boo();"></textarea> Finally, in .NET, use string val = Server.UrlDecode(HiddenField1.Value); I am aware that this is one-way - if you need two-way you'll have to get creative, but this provides a solution if you cannot edit the web.config. Here's an example I (MC9000) came up with and use via jQuery: $(document).ready(function () { $("#txtHTML").change(function () { var currentText = $("#txtHTML").val(); currentText = escape(currentText); // Escapes the HTML including quotations, etc $("#hidHTML").val(currentText); // Set the hidden field }); // Intercept the postback $("#btnMyPostbackButton").click(function () { $("#txtHTML").val(""); // Clear the textarea before POSTing // If you don't clear it, it will give you // the error due to the HTML in the textarea.
return true; // Post back }); }); And the markup: <asp:HiddenField ID="hidHTML" runat="server" /> <textarea id="txtHTML"></textarea> <asp:Button ID="btnMyPostbackButton" runat="server" Text="Post Form" /> This works great. If a hacker tries to post via bypassing JavaScript, they will just see the error. You can save all this data encoded in a database as well, then unescape it (on the server side), and parse & check for attacks before displaying it elsewhere. A: Cause: ASP.NET by default validates all input controls for potentially unsafe content that can lead to cross-site scripting (XSS) and SQL injection. Thus it disallows such content by throwing the above exception. By default it is recommended to allow this check to happen on each postback. Solution: On many occasions you need to submit HTML content to your page through rich textboxes or rich text editors. In that case you can avoid this exception by setting the ValidateRequest flag in the @Page directive to false. <%@ Page Language="C#" AutoEventWireup="true" ValidateRequest = "false" %> This will disable the validation of requests for the page where you have set the ValidateRequest flag to false. If you want to disable this check throughout your web application, you'll need to set it to false in your web.config <system.web> section: <pages validateRequest ="false" /> For .NET 4.0 or higher frameworks you will also need to add the following line in the <system.web> section to make the above work: <httpRuntime requestValidationMode = "2.0" /> That's it. I hope this helps you in getting rid of the above issue. Reference: ASP.Net Error: A potentially dangerous Request.Form value was detected from the client A: I think you are attacking it from the wrong angle by trying to encode all posted data. Note that a "<" could also come from other outside sources, like a database field, a configuration, a file, a feed and so on. Furthermore, "<" is not inherently dangerous. It's only dangerous in a specific context: when writing strings that haven't been encoded to HTML output (because of XSS). In other contexts different sub-strings are dangerous; for example, if you write a user-provided URL into a link, the sub-string "javascript:" may be dangerous. The single quote character, on the other hand, is dangerous when interpolating strings in SQL queries, but perfectly safe if it is a part of a name submitted from a form or read from a database field. The bottom line is: you can't filter random input for dangerous characters, because any character may be dangerous under the right circumstances. You should encode at the point where some specific characters may become dangerous, because they cross into a different sub-language where they have special meaning. When you write a string to HTML, you should encode characters that have special meaning in HTML, using Server.HtmlEncode. If you pass a string to a dynamic SQL statement, you should encode different characters (or better, let the framework do it for you by using prepared statements or the like). When you are sure you HTML-encode everywhere you pass strings to HTML, then set ValidateRequest="false" in the <%@ Page ... %> directive in your .aspx file(s). In .NET 4 you may need to do a little more. Sometimes it's necessary to also add <httpRuntime requestValidationMode="2.0" /> to web.config (reference). A: Disable the page validation if you really need the special characters like >, <, etc. Then ensure that when the user input is displayed, the data is HTML-encoded.
There is a security vulnerability with the page validation, so it can be bypassed. Also, the page validation shouldn't be solely relied on. See: http://web.archive.org/web/20080913071637/http://www.procheckup.com:80/PDFs/bypassing-dot-NET-ValidateRequest.pdf A: The other solutions here are nice, however it's a bit of a royal pain in the rear to have to apply [AllowHtml] to every single Model property, especially if you have over 100 models on a decent sized site. If, like me, you want to turn this (IMHO pretty pointless) feature off site-wide, you can override the Execute() method in your base controller (if you don't already have a base controller I suggest you make one, they can be pretty useful for applying common functionality). protected override void Execute(RequestContext requestContext) { // Disable request validation (security) across the whole site ValidateRequest = false; base.Execute(requestContext); } Just make sure that you are HTML-encoding everything that is pumped out to the views that came from user input (it's the default behaviour in ASP.NET MVC 3 with Razor anyway, so unless for some bizarre reason you are using Html.Raw() you shouldn't require this feature). A: Use Server.HtmlEncode("yourtext"); A: For those of us still stuck on WebForms, I found the following solution that enables you to disable the validation on only one field! (I would hate to disable it for the whole page.) VB.NET: Public Class UnvalidatedTextBox Inherits TextBox Protected Overrides Function LoadPostData(postDataKey As String, postCollection As NameValueCollection) As Boolean Return MyBase.LoadPostData(postDataKey, System.Web.HttpContext.Current.Request.Unvalidated.Form) End Function End Class C#: public class UnvalidatedTextBox : TextBox { protected override bool LoadPostData(string postDataKey, NameValueCollection postCollection) { return base.LoadPostData(postDataKey, System.Web.HttpContext.Current.Request.Unvalidated.Form); } } Now just use <prefix:UnvalidatedTextBox id="test" runat="server" /> instead of <asp:TextBox, and it should allow all characters (this is perfect for password fields!) A: Last but not least, please note that ASP.NET data binding controls automatically encode values during data binding. This changes the default behavior of all ASP.NET controls (TextBox, Label etc.) contained in the ItemTemplate. The following sample demonstrates (ValidateRequest is set to false): aspx <%@ Page Language="C#" ValidateRequest="false" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication17._Default" %> <html> <body> <form runat="server"> <asp:FormView ID="FormView1" runat="server" ItemType="WebApplication17.S" SelectMethod="FormView1_GetItem"> <ItemTemplate> <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> <asp:Button ID="Button1" runat="server" Text="Button" OnClick="Button1_Click" /> <asp:Label ID="Label1" runat="server" Text="<%#: Item.Text %>"></asp:Label> <asp:TextBox ID="TextBox2" runat="server" Text="<%#: Item.Text %>"></asp:TextBox> </ItemTemplate> </asp:FormView> </form> code behind public partial class _Default : Page { S s = new S(); protected void Button1_Click(object sender, EventArgs e) { s.Text = ((TextBox)FormView1.FindControl("TextBox1")).Text; FormView1.DataBind(); } public S FormView1_GetItem(int?
id) { return s; } } public class S { public string Text { get; set; } } * *Case submit value: &#39; Label1.Text value: &#39; TextBox2.Text value: &amp;#39; *Case submit value: <script>alert('attack!');</script> Label1.Text value: <script>alert('attack!');</script> TextBox2.Text value: &lt;script&gt;alert(&#39;attack!&#39;);&lt;/script&gt; A: In .NET 4.0 and onwards, which is the usual case, put the following setting in system.web: <system.web> <httpRuntime requestValidationMode="2.0" /> </system.web> A: How to fix this issue for AjaxExtControls in ASP.NET 4.6.2: We had the same problem with the AjaxExtControls rich text editor. This issue started right after upgrading from .NET 2.0 to .NET 4.5. I looked at all the SOF answers but could not find a solution that does not compromise the security provided by .NET 4.5. Fix 1 (not recommended, as it can degrade application security): I tested after changing this attribute to requestValidationMode="2.0" and it worked, but I was concerned about the security features. So this fix is like degrading the security of the entire application. Fix 2 (recommended): Since this issue was only occurring with one of the AjaxExtControls, I was finally able to solve it using the simple code below: editorID.value = editorID.value.replace(/>/g, "&gt;"); editorID.value = editorID.value.replace(/</g, "&lt;"); This code is executed on the client side (JavaScript) before sending the request to the server. Note that the editorID is not the ID that we have on our html/aspx pages, but the id of the rich text editor that AjaxExtControl internally uses. A: I have a Web Forms application that has had this issue for a text box comments field, where users sometimes pasted email text, and the "<" and ">" characters from email header info would creep in there and throw this exception. I addressed the issue from another angle... I was already using the Ajax Control Toolkit, so I was able to use a FilteredTextBoxExtender to prevent those two characters from being entered in the text box. A user copy-pasting text will then get what they were expecting, minus those characters. <asp:TextBox ID="CommentsTextBox" runat="server" TextMode="MultiLine"></asp:TextBox> <ajaxToolkit:FilteredTextBoxExtender ID="ftbe" runat="server" TargetControlID="CommentsTextBox" filterMode="InvalidChars" InvalidChars="<>" /> A: None of the answers worked for me. Then I discovered that if I removed the following code I could get it to work: //Register action filter via Autofac rather than GlobalFilters to allow dependency injection builder.RegisterFilterProvider(); builder.RegisterType<OfflineActionFilter>() .AsActionFilterFor<Controller>() .InstancePerLifetimeScope(); I can only conclude that something in Autofac's RegisterFilterProvider breaks or overrides the validateRequest attribute. A: Even after adding <httpRuntime requestValidationMode="2.0"> to web.config I still kept getting the error in an application that uses WIF for authentication. What solved it for me was adding <sessionState mode="InProc" cookieless="UseUri"/> inside the <system.web> element. A: Try Server.HtmlEncode and Server.HtmlDecode when sending and receiving.
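To round off the thread, here is a minimal sketch of the encode-on-output approach that several answers above recommend. The control names (txtComment, litComment) and the SaveComment helper are hypothetical, and Request.Unvalidated requires .NET 4.5 or later:

using System;
using System.Web.UI;

public partial class CommentPage : Page
{
    // txtComment (TextBox) and litComment (Literal) are assumed to be defined in the markup.
    protected void btnSave_Click(object sender, EventArgs e)
    {
        // Read the raw value without triggering request validation (.NET 4.5+).
        string raw = Request.Unvalidated.Form[txtComment.UniqueID];
        SaveComment(raw); // hypothetical helper: store the raw, unencoded text
    }

    protected void RenderComment(string storedComment)
    {
        // Encode at the point of output, so markup is displayed rather than executed.
        litComment.Text = Server.HtmlEncode(storedComment);
    }

    private void SaveComment(string text) { /* persistence omitted */ }
}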
{ "language": "en", "url": "https://stackoverflow.com/questions/81991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1577" }
Q: How to get login credentials by OpenID? Is it possible to get login credentials such as name/id if the user logs in via OpenID? A: There are two accepted methods for retrieving these kinds of things by OpenID: SReg and Attribute Exchange (AX). Both of these are extensions to the standard OpenID specification; SReg is the older of the two and specifies a set of fields that can be requested and sent with authentication, whereas AX allows requesting of any attribute. Both of the specification documents are pretty concise on how they work, although it's difficult to gauge what the standard "names" are for attributes to be requested from AX. Usually, servers tend to implement the SReg names. OpenID Simple Registration Extension Specification 1.0 OpenID Attribute Exchange Specification 1.0 Final A: You will not get their actual username (or password), but you will get their OpenID, which is unique.
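As an illustration, a rough sketch of requesting SReg fields from a relying party using the DotNetOpenAuth library; the API names here are recalled from memory, so treat them as assumptions to verify against the version you use, and the provider URL is a placeholder:

using DotNetOpenAuth.OpenId.RelyingParty;
using DotNetOpenAuth.OpenId.Extensions.SimpleRegistration;

class OpenIdLoginSketch
{
    // Step 1: outgoing request - ask the provider for a nickname and e-mail.
    static void BeginLogin(OpenIdRelyingParty rp)
    {
        IAuthenticationRequest request = rp.CreateRequest("https://openid.example.org/user");
        request.AddExtension(new ClaimsRequest { Nickname = DemandLevel.Request, Email = DemandLevel.Request });
        request.RedirectToProvider();
    }

    // Step 2: on the return URL, read the response.
    static void FinishLogin(OpenIdRelyingParty rp)
    {
        IAuthenticationResponse response = rp.GetResponse();
        if (response != null && response.Status == AuthenticationStatus.Authenticated)
        {
            string uniqueId = response.ClaimedIdentifier.ToString(); // always available
            var sreg = response.GetExtension<ClaimsResponse>();      // null if the provider sent nothing
            string name = sreg != null ? sreg.Nickname : null;
        }
    }
}

Note that the provider is free to ignore the SReg request, so the only value you can rely on is the claimed identifier.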
{ "language": "en", "url": "https://stackoverflow.com/questions/81994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why wouldn't DB2 let me have a column in WHERE clause? I have a remote DB2 database that I'm accessing through ODBC. When I have a query like SELECT t.foo, t.bar, t.problemcolumn FROM problemtable t WHERE t.bar < 60; it works like a charm, so the table and columns obviously exist. But if I specify the problem column in the WHERE clause SELECT t.foo, t.bar, t.problemcolumn FROM problemtable t WHERE t.problemcolumn = 'x' AND t.bar < 60; it gives me an error Table "problemtable" does not exist. What could possibly be the reason for this? I've double-checked the spellings and I can trigger the problem just by including the problemcolumn in the WHERE clause. A: Sorry for the obvious answer, but does the problemtable exist? Your code looks like pseudo code because of the table/column names, but be sure to double-check your spelling. Is it perhaps a view, which might even consist of tables joined across different databases/servers? A: What is the actual SQL you're using? I don't see anything wrong with the example you put up. Try looking for misplaced commas and/or quotes that could be triggering the error. A: Does it work with just: SELECT t.foo, t.bar, t.problemcolumn FROM problemtable t WHERE t.problemcolumn = 'x' A: Please run the following SQL statements. They work fine for me. If you still get this strange error, it may be a DB2 bug. I had some problems once with copying code from UNIX editors into Windows and vice versa. The SQL would not run, although it looked OK. Retyping the statement fixed my problem then. create table problemtable ( foo varchar(10), bar int, problemcolumn varchar(10) ); SELECT t.foo, t.bar, t.problemcolumn FROM problemtable t WHERE t.bar < 60; SELECT t.foo, t.bar, t.problemcolumn FROM problemtable t WHERE t.problemcolumn = 'x' AND t.bar < 60; A: I think it should work in DB2. What is your front-end software? A: DB2 sometimes gives misleading errors. You can try these troubleshooting steps: *Try executing the code through DBArtisan or DB2 Control Center and see if you get a proper result/error message. *Try using schema_name.problemtable instead of just problemtable *Make sure that problemcolumn is of the same data type that you are comparing it with.
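A short C# sketch combining the last two suggestions over ODBC (schema-qualify the table and let a bound parameter handle the comparison). The schema name "MYSCHEMA" and the connection string are placeholders:

using System;
using System.Data.Odbc;

class Db2QuerySketch
{
    static void Main()
    {
        using (var conn = new OdbcConnection("DSN=mydb2;UID=user;PWD=pass"))
        using (var cmd = new OdbcCommand(
            "SELECT t.foo, t.bar, t.problemcolumn " +
            "FROM MYSCHEMA.problemtable t " +        // schema-qualified table name
            "WHERE t.problemcolumn = ? AND t.bar < 60", conn))
        {
            cmd.Parameters.AddWithValue("@p1", "x"); // bound instead of inlined in the SQL text
            conn.Open();
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["foo"]);
            }
        }
    }
}

If the unqualified name resolves differently depending on the statement, the schema-qualified form should at least produce a more meaningful error message.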
{ "language": "en", "url": "https://stackoverflow.com/questions/82003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Non-unicode XML representation I have xml where some of the element values are unicode characters. Is it possible to represent this in an ANSI encoding? E.g. <?xml version="1.0" encoding="utf-8"?> <xml> <value>受</value> </xml> to <?xml version="1.0" encoding="Windows-1252"?> <xml> <value>&#27544;</value> </xml> I deserialize the XML and then attempt to serialize it using XmlTextWriter specifying the Default encoding (Default is Windows-1252). All the unicode characters end up as question marks. I'm using VS 2008, C# 3.5 A: Okay I tested it with the following code: string xml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><xml><value>受</value></xml>"; XmlWriterSettings settings = new XmlWriterSettings { Encoding = Encoding.Default }; MemoryStream ms = new MemoryStream(); using (XmlWriter writer = XmlTextWriter.Create(ms, settings)) XElement.Parse(xml).WriteTo(writer); string value = Encoding.Default.GetString(ms.ToArray()); And it correctly escaped the unicode character thus: <?xml version="1.0" encoding="Windows-1252"?><xml><value>&#x53D7;</value></xml> I must be doing something wrong somewhere else. Thanks for the help. A: If I understand the question, then yes. You just need a ; after the 27544: <?xml version="1.0" encoding="Windows-1252"?> <xml> <value>&#27544;</value> </xml> Or are you wondering how to generate this XML programmatically? If so, what language/environment are you working in?
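Following up on the test above, a short sketch of the full round trip (the file path is illustrative): writing with a Windows-1252 XmlWriter escapes the character as a numeric reference, and parsing the file back restores the original Unicode string.

using System;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Linq;

class RoundTrip
{
    static void Main()
    {
        var settings = new XmlWriterSettings { Encoding = Encoding.GetEncoding(1252) };
        using (XmlWriter writer = XmlWriter.Create("test.xml", settings))
            XElement.Parse("<xml><value>受</value></xml>").WriteTo(writer);

        // On disk the value is stored as the character reference &#x53D7;
        Console.WriteLine(File.ReadAllText("test.xml", Encoding.GetEncoding(1252)));

        // Parsing it back yields the original Unicode character again.
        XElement root = XElement.Load("test.xml");
        Console.WriteLine(root.Element("value").Value == "受"); // True
    }
}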
{ "language": "en", "url": "https://stackoverflow.com/questions/82008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Are .NET languages really making any kind of dent in consumer desktop applications? Do you write consumer desktop applications with .NET languages? If so what type? My impression is that most consumer desktop applications are still native compiled applications in C, C++ and the like. Whilst .NET languages are growing in uptake and popularity, does this new breed of applications ever break out of the enterprise & web domain to become high street consumer applications? For example, look at your desktop now: how many applications are written in .NET languages? Firefox? Microsoft Office? Thunderbird? iTunes? Microsoft Visual Studio? My company develops high-end CAD/CAE applications; we leverage new technology, but our core development is still done in C++. A: I built and maintain a big desktop application written in .NET (1.1, 2.0 now). The application is for dentists and it works by making use of the Ink technology found in the Microsoft.Ink namespace in the TabletPC SDK. Some dentists use Tablet PCs to make things easier and leverage the power of that technology. On the other hand, since I find the Windows UI (XP/Vista) not good looking and find that every application looks the same and inconsistent, I wrote my own GDI+ library of controls and, while respecting more or less the "Windows UI guidelines", I came up with very nice buttons and other UI elements that make my app look "way better" than any other "normal" Windows application. We run at full screen (maximized, no controls, no app bar), but we do this because it's a very specific application used in machines dedicated to the task. Dental clinics don't use Microsoft Excel and ALT-TAB to our application. The application works like an "ATM", touch touch, done. Very simple. It has been a success in Europe where I am. So I have to say that I am glad that the app is not a web application, because when we started, the .NET GDI+ for Windows Forms was way superior to anything that the web could have offered; even today, Ajax is not able to reproduce the full desktop experience (not that it should but…). Java had an ugly UI back then (don't know now) so we elected to go with .NET and have used C# ever since. Desktop applications are not going to die anytime soon; some things still cannot be reproduced inside a web browser. I considered Java, C++, Delphi among others before starting with this six years ago. None offered the simplicity and power of C#/.NET, with few disadvantages (like the Framework that nobody had back then). Now, every Windows box will surely have the .NET Framework 2.0. Again, my consumer application is very specific and targeted towards a closed market, but we don't have anything against .NET. A: As mentioned, I know of Tomboy, Beagle, and in addition, F-Spot. All come as part of most Linux distros. Paint.NET is another app. A: Maybe you are seeing this because many of the popular desktop apps have a code base older than 2001? Edit: I should probably have said older than 2003 or 2004... I doubt anyone would have started a major desktop app in the first year or two after the .NET release. A: Intuit's TurboTax 2007 and 2008 are both written in .NET. Unlike the demo of a niche-market video edit tool I griped about in a comment to another answer, it actually installed completely cleanly and without incident (including its self-updater trick) on my slightly aging XP box here at home. This year's UI is substantially different from past years, and for the most part it's better.
Since the transition to .NET seems to have happened last year without changing the UI much at all, the new UI can't be blamed on (or credited to) the switch to .NET. I'm just a user, and have no idea what motivated their dev team to switch. I do think that is the first retail software package I've caught in the wild that was clearly based on .NET. A: That's a shame. The only reason to hold back on desktop development with .NET is the requirement of the .NET Framework on the desktop machine, but IMHO that is a small price to pay for the benefits you get from being able to work in the .NET environment. A: As long as you don't need über performance, I can't see any reason not to use .NET. With the new super small redistributables you can include a .NET installer that takes up a couple hundred KB. I would say that the productivity gains of a modern, garbage-collected language mean C++ is only a good option if you already have developers who are proficient in that language, or there are specific technical requirements which make it necessary, or the clients' machines are locked down such that the .NET platform cannot be used. While I'm not part of the workforce yet (i.e. I am a student), everything I can get away with I write in C#. Nothing else I've tried comes close to the level of efficiency and cleanness afforded by this language (and which provides all the productivity features of Visual Studio). A: I've noticed that in Process Explorer more and more of my desktop apps are being highlighted in yellow (meaning they're .NET). As mentioned above, ATI's Catalyst is, as is Windows Live Mesh; many games have .NET update or config engines, as do most of the bits I write that haven't quite made it into the public arena yet (because I don't have as much time as I'd like for coding & testing). Also, large parts of Visual Studio ARE .NET - at least according to Process Explorer. I think that, as somebody mentioned above, there are a lot of desktop apps already out there that have older code-bases which their owners won't convert unless there's some fantastic value in doing so. A: Visual Studio (at least 2008) IS written in .NET A: Well there are apps such as Tomboy and Beagle which are available as part of some Linux distros, so I'm not sure if they count as high street consumer applications. Come to think of it, I'm not really that aware of any other "non-enterprise" applications written in .NET languages. A: Not the traditional desktop app, but the ATI Catalyst Control Center is .NET based. A: Actually, I have found some applications that require .NET on my desktop. The most famous is Paint.NET, but also amongst them is "Catalyst Control Center", delivered with my ATI graphics card. And naturally, our company is writing our own desktop .NET application. Our target audience is business users. A: There probably won't be a whole lot of WinForms apps in the traditional sense being written, but the next version of Windows Live Messenger will be written in Windows Presentation Foundation, and I think this is what the trend will be towards. Windows Media Centre was written in C#, which is pretty impressive, but having said that, it's not your traditional WinForms app either. A: TechSmith's Jing is .NET, and in fact it is WPF so it is 3.5, bleeding edge .NET. A: Almost all the client programs written here where I work are in .NET; it's a terrific platform for business applications.
Having said that, most of the programs out there that .NET would be a good target for are being deployed as web applications instead; the rest are typically graphical and CPU-intensive applications that are implemented in C++ for performance reasons. For the same reason, you don't see too many desktop applications written in Java, either. A: Most of you refer to open source. I agree, there are some projects using .NET (I'm using RSSBandit for example) but they don't matter (mostly). But what about enterprise apps? Recently I've written an app which is like MS Surface, for advertising purposes. Before this I had to write an app to maintain a warehouse, for example. Something different? In the WinForms days I wrote an app to support an eBay-like page. Do you need any more? Personally, I think that .NET is widely used in business (which you don't see every day) and it's not used by open source (why? I don't know, maybe contributors hate MS?). However, I also think that it will be changing towards .NET, especially with the next releases of the Windows platform. And, I almost forgot - installing the .NET Framework is not a problem; be serious, users are not that stupid and lazy! And it's true that the desktop is losing its mojo to the web environment, but it will never die ;) A: Microsoft InfoPath, part of Microsoft Office, is also written in .NET.
{ "language": "en", "url": "https://stackoverflow.com/questions/82022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Validating XML files against schema in Oracle PL/SQL I have a requirement to validate an incoming file against an XSD. Both will be on the server file system. I've looked at dbms_xmlschema, but have had issues getting it to work. Could it be easier to do it with some Java? What's the simplest class I could put in the database? Here's a simple example: DECLARE v_schema_url VARCHAR2(200) := 'http://www.example.com/schema.xsd'; v_blob BLOB; v_clob CLOB; v_xml XMLTYPE; BEGIN begin dbms_xmlschema.deleteschema(v_schema_url); exception when others then null; end; dbms_xmlschema.registerSchema(schemaURL => v_schema_url, schemaDoc => ' <xs:schema targetNamespace="http://www.example.com" xmlns:ns="http://www.example.com" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified" version="3.0"> <xs:element name="something" type="xs:string"/> </xs:schema>', local => TRUE); v_xml := XMLTYPE.createxml('<something xmlns="http://www.xx.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.example.com/schema.xsd"> data </something>'); IF v_xml.isschemavalid(v_schema_url) = 1 THEN dbms_output.put_line('valid'); ELSE dbms_output.put_line('not valid'); END IF; END; This generates the following error: ORA-01031: insufficient privileges ORA-06512: at "XDB.DBMS_XDBZ0", line 275 ORA-06512: at "XDB.DBMS_XDBZ", line 7 ORA-06512: at line 1 ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 3 ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 14 ORA-06512: at line 12 A: Update: XML schema registration requires the following privileges: grant alter session to <USER>; grant create type to <USER>; /* required when gentypes => true */ grant create table to <USER>; /* required when gentables => true */ For some reason it's not enough if those privileges are granted indirectly via roles; the privileges need to be granted directly to the schema/user. Original answer: I have also noticed that the default values of the gentables and gentypes parameters raise an insufficient privileges exception. Probably I'm just lacking some privileges to use those features, but at the moment I don't have a good understanding of what they do. I'm just happy to disable them, and validation seems to work fine.
I'm running on Oracle Database 11g Release 11.2.0.1.0 gentypes => true, gentables => true dbms_xmlschema.registerschema(schemaurl => name, schemadoc => xmltype(schema), local => true --gentypes => false, --gentables => false ); ORA-01031: insufficient privileges ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 55 ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 159 ORA-06512: at "JANI.XML_VALIDATOR", line 38 ORA-06512: at line 7 gentypes => false, gentables => true dbms_xmlschema.registerschema(schemaurl => name, schemadoc => xmltype(schema), local => true, gentypes => false --gentables => false ); ORA-31084: error while creating table "JANI"."example873_TAB" for element "example" ORA-01031: insufficient privileges ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 55 ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 159 ORA-06512: at "JANI.XML_VALIDATOR", line 38 ORA-06512: at line 7 gentypes => true, gentables => false dbms_xmlschema.registerschema(schemaurl => name, schemadoc => xmltype(schema), local => true, --gentypes => false gentables => false ); ORA-01031: insufficient privileges ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 55 ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 159 ORA-06512: at "JANI.XML_VALIDATOR", line 38 ORA-06512: at line 7 gentypes => false, gentables => false dbms_xmlschema.registerschema(schemaurl => name, schemadoc => xmltype(schema), local => true, gentypes => false, gentables => false ); PL/SQL procedure successfully completed. A: You must have ALTER SESSION privilege granted in order to register a schema. A: Here is a piece of code that works for me. user272735's answer is right; I wrote another answer since I can't fit all the code in a comment (too long). /* Formatted on 21/08/2012 12:52:47 (QP5 v5.115.810.9015) */ DECLARE -- Local variables here res BOOLEAN; tempXML XMLTYPE; xmlDoc XMLTYPE; xmlSchema XMLTYPE; schemaURL VARCHAR2 (256) := 'testcase.xsd'; BEGIN dbms_xmlSchema.deleteSchema (schemaURL, 4); -- Test statements here xmlSchema := xmlType('<?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" elementFormDefault="qualified" attributeFormDefault="unqualified"> <xs:element name="root" xdb:defaultTable="ROOT_TABLE"> <xs:complexType> <xs:sequence> <xs:element name="child1"/> <xs:element name="child2"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> '); -- http://stackoverflow.com/questions/82047/validating-xml-files-against-schema-in-oracle-pl-sql dbms_xmlschema.registerschema(schemaurl => schemaURL, schemadoc => xmlSchema, local => true, gentypes => false, gentables => false ); xmlDoc := xmltype('<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="' || schemaURL || '"><child1>foo</child1><child2>bar</child2></root>'); xmlDoc.schemaValidate (); -- if we are here, xml is valid DBMS_OUTPUT.put_line ('OK'); exception when others then DBMS_OUTPUT.put_line (SQLErrm); END; A: Once you get past the install issues, there are challenges in some Oracle versions when the schemas get big, particularly when you have schemas that include other schemas. I know we had that issue in 9.2, not sure about 10.2 or 11. For small schemas like your example, though, it should just work. A: Registering the XSD leads to the creation of tables, types and triggers.
Therefore you need the following grants: grant create table to <user>; grant create type to <user>; grant create trigger to <user>; A: If I remember correctly, that error message is given when XDB (Oracle's XML Database package) is not properly installed. Have the DBA check this out.
{ "language": "en", "url": "https://stackoverflow.com/questions/82047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Deserializing Client-Side AJAX JSON Dates Given the following JSON Date representation: "\/Date(1221644506800-0700)\/" How do you deserialize this into its JavaScript Date-type form? I've tried using the MS AJAX JavaScriptSerializer as shown below: Sys.Serialization.JavaScriptSerializer.deserialize("\/Date(1221644506800-0700)\/") However, all I get back is the literal string date. A: The regular expression used in the ASP.NET AJAX deserialize method looks for a string that looks like "/Date(1234)/" (the string itself actually needs to contain the quotes and slashes). To get such a string, you will need to escape the quote and backslash characters, so the JavaScript code to create the string looks like "\"\/Date(1234)\/\"". This will work. Sys.Serialization.JavaScriptSerializer.deserialize("\"\\/Date(1221644506800)\\/\"") It's kind of weird, but I found I had to serialize a date, then serialize the string returned from that, then deserialize on the client side once. Something like this. Script.Serialization.JavaScriptSerializer jss = new Script.Serialization.JavaScriptSerializer(); string script = string.Format("alert(Sys.Serialization.JavaScriptSerializer.deserialize({0}));", jss.Serialize(jss.Serialize(DateTime.Now))); Page.ClientScript.RegisterStartupScript(this.GetType(), "ClientScript", script, true); A: Provided you know the string is definitely a date, I prefer to do this: new Date(parseInt(value.replace("/Date(", "").replace(")/",""), 10)) A: For those who don't want to use Microsoft Ajax, simply add a prototype function to the string class. E.g. String.prototype.dateFromJSON = function () { return eval(this.replace(/\/Date\((\d+)\)\//gi, "new Date($1)")); }; Don't want to use eval? Try something simple like var date = new Date(parseInt(jsonDate.substr(6))); As a side note, I used to think Microsoft was being misleading by using this format. However, the JSON specification is not very clear when it comes to defining a way to describe dates in JSON. A: Actually, moment.js supports this kind of format; you might do something like: var momentValue = moment(value); momentValue.toDate(); This returns the value as a JavaScript Date. A: Bertrand LeRoy, who worked on ASP.NET Atlas/AJAX, described the design of the JavaScriptSerializer DateTime output and revealed the origin of the mysterious leading and trailing forward slashes. He made this recommendation: run a simple search for "\/Date((\d+))\/" and replace with "new Date($1)" before the eval (but after validation) I implemented that as: var serializedDateTime = "\/Date(1271389496563)\/"; document.writeln("Serialized: " + serializedDateTime + "<br />"); var toDateRe = new RegExp("^/Date\\((\\d+)\\)/$"); function toDate(s) { if (!s) { return null; } var constructor = s.replace(toDateRe, "new Date($1)"); if (constructor == s) { throw 'Invalid serialized DateTime value: "' + s + '"'; } return eval(constructor); } document.writeln("Deserialized: " + toDate(serializedDateTime) + "<br />"); This is very close to many of the other answers: * *Use an anchored RegEx as Sjoerd Visscher did -- don't forget the ^ and $. *Avoid string.replace, and the 'g' or 'i' options on your RegEx. "/Date(1271389496563)//Date(1271389496563)/" shouldn't work at all. A: A JSON value is a string, number, object, array, true, false or null. So this is just a string. There is no official way to represent dates in JSON. This syntax is from the ASP.NET AJAX implementation. Others use the ISO 8601 format.
You can parse it like this: var s = "\/Date(1221644506800-0700)\/"; var m = s.match(/^\/Date\((\d+)([-+]\d\d)(\d\d)\)\/$/); var date = null; if (m) date = new Date(1*m[1] + 3600000*m[2] + 60000*(m[2].charAt(0) + m[3])); (Note that the minutes part of the offset has to take its sign from the hours part.) A: The big number is the standard JS time: new Date(1221644506800) Wed Sep 17 2008 19:41:46 GMT+1000 (EST)
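For completeness, a small server-side C# sketch (assuming a reference to System.Web.Extensions) showing that the same JavaScriptSerializer that produces this wire format can also read it back:

using System;
using System.Web.Script.Serialization;

class WireFormatDemo
{
    static void Main()
    {
        var jss = new JavaScriptSerializer();

        // Produces something like "\/Date(1221644506800)\/" (a quoted JSON string)
        string json = jss.Serialize(new DateTime(2008, 9, 17, 9, 41, 46, DateTimeKind.Utc));
        Console.WriteLine(json);

        // Round-trips back to a DateTime
        DateTime parsed = jss.Deserialize<DateTime>(json);
        Console.WriteLine(parsed.ToUniversalTime());
    }
}

Note that JavaScriptSerializer emits only the UTC milliseconds; an offset suffix like "-0700" is produced by other serializers (e.g. DataContractJsonSerializer for local times), not by JavaScriptSerializer.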
{ "language": "en", "url": "https://stackoverflow.com/questions/82058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: A regex for version number parsing I have a version number of the following form: version.release.modification where version, release and modification are either a set of digits or the '*' wildcard character. Additionally, any of these numbers (and any preceding .) may be missing. So the following are valid and parse as: 1.23.456 = version 1, release 23, modification 456 1.23 = version 1, release 23, any modification 1.23.* = version 1, release 23, any modification 1.* = version 1, any release, any modification 1 = version 1, any release, any modification * = any version, any release, any modification But these are not valid: *.12 *123.1 12* 12.*.34 Can anyone provide me with a not-too-complex regex to validate and retrieve the release, version and modification numbers? A: Don't know what platform you're on, but in .NET there's the System.Version class that will parse "n.n.n.n" version numbers for you. A: I've seen a lot of answers, but... I have a new one. It works for me at least. I've added a new restriction: version components (major, minor or patch) can't have leading zeros. 01.0.0 is not valid 1.0.0 is valid 10.0.10 is valid 1.0.0000 is not valid ^(?:(0\\.|([1-9]+\\d*)\\.))+(?:(0\\.|([1-9]+\\d*)\\.))+((0|([1-9]+\\d*)))$ It's based on a previous one, but I see this solution as better... for me ;) Enjoy!!! A: I tend to agree with the split suggestion. I've created a "tester" for your problem in Perl #!/usr/bin/perl -w @strings = ( "1.2.3", "1.2.*", "1.*","*" ); %regexp = ( svrist => qr/(?:(\d+)\.(\d+)\.(\d+)|(\d+)\.(\d+)|(\d+))?(?:\.\*)?/, onebyone => qr/^(\d+\.)?(\d+\.)?(\*|\d+)$/, greg => qr/^(\*|\d+(\.\d+){0,2}(\.\*)?)$/, vonc => qr/^((?:\d+(?!\.\*)\.)+)(\d+)?(\.\*)?$|^(\d+)\.\*$|^(\*|\d+)$/, ajb => qr/^(?:(\d+)\.)?(?:(\d+)\.)?(\*|\d+)$/, jrudolph => qr/^(((\d+)\.)?(\d+)\.)?(\d+|\*)$/ ); foreach my $r (keys %regexp){ my $reg = $regexp{$r}; print "Using $r regexp\n"; foreach my $s (@strings){ print "$s : "; if ($s =~m/$reg/){ my ($main, $maj, $min,$rev,$ex1,$ex2,$ex3) = ("any","any","any","any","any","any","any"); $main = $1 if ($1 && $1 ne "*") ; $maj = $2 if ($2 && $2 ne "*") ; $min = $3 if ($3 && $3 ne "*") ; $rev = $4 if ($4 && $4 ne "*") ; $ex1 = $5 if ($5 && $5 ne "*") ; $ex2 = $6 if ($6 && $6 ne "*") ; $ex3 = $7 if ($7 && $7 ne "*") ; print "$main $maj $min $rev $ex1 $ex2 $ex3\n"; }else{ print " nomatch\n"; } } print "------------------------\n"; } Current output: > perl regex.pl Using onebyone regexp 1.2.3 : 1. 2. 3 any any any any 1.2.* : 1. 2. any any any any any 1.* : 1. any any any any any any * : any any any any any any any ------------------------ Using svrist regexp 1.2.3 : 1 2 3 any any any any 1.2.* : any any any 1 2 any any 1.* : any any any any any 1 any * : any any any any any any any ------------------------ Using vonc regexp 1.2.3 : 1.2. 3 any any any any any 1.2.* : 1. 2 .* any any any any 1.* : any any any 1 any any any * : any any any any any any any ------------------------ Using ajb regexp 1.2.3 : 1 2 3 any any any any 1.2.* : 1 2 any any any any any 1.* : 1 any any any any any any * : any any any any any any any ------------------------ Using jrudolph regexp 1.2.3 : 1.2. 1. 1 2 3 any any 1.2.* : 1.2. 1. 1 2 any any any 1.* : 1.
any any 1 any any any * : any any any any any any any ------------------------ Using greg regexp 1.2.3 : 1.2.3 .3 any any any any any 1.2.* : 1.2.* .2 .* any any any any 1.* : 1.* any .* any any any any * : any any any any any any any ------------------------ A: ^(?:(\d+)\.)?(?:(\d+)\.)?(\*|\d+)$ Perhaps a more concise one could be: ^(?:(\d+)\.){0,2}(\*|\d+)$ This can then be enhanced to 1.2.3.4.5.* or restricted exactly to X.Y.Z using * or {2} instead of {0,2} A: This should work for what you stipulated. It hinges on the wildcard position and is a nested regex: ^((\*)|([0-9]+(\.((\*)|([0-9]+(\.((\*)|([0-9]+)))?)))?))$ A: For parsing version numbers that follow these rules: - Are only digits and dots - Cannot start or end with a dot - Cannot be two dots together This one did the trick for me. ^(\d+)((\.{1}\d+)*)(\.{0})$ Valid cases are: 1, 0.1, 1.2.1 A: Use regex and now you have two problems. I would split the thing on dots ("."), then make sure that each part is either a wildcard or a set of digits (a regex is perfect for that). If the thing is valid, you just return the correct chunk of the split. A: Another try: ^(((\d+)\.)?(\d+)\.)?(\d+|\*)$ This gives the three parts in groups 4,5,6 BUT: They are aligned to the right. So the first non-null one of 4,5 or 6 gives the version field. *1.2.3 gives 1,2,3 *1.2.* gives 1,2,* *1.2 gives null,1,2 ** gives null,null,* *1.* gives null,1,* A: My take on this, as a good exercise - vparse, which has a tiny source, with a simple function: function parseVersion(v) { var m = v.match(/\d*\.|\d+/g) || []; v = { major: +m[0] || 0, minor: +m[1] || 0, patch: +m[2] || 0, build: +m[3] || 0 }; v.isEmpty = !v.major && !v.minor && !v.patch && !v.build; v.parsed = [v.major, v.minor, v.patch, v.build]; v.text = v.parsed.join('.'); return v; } A: Sometimes version numbers might contain alphanumeric minor information (e.g. 1.2.0b or 1.2.0-beta). In this case I am using this regex: ([0-9]{1,4}(\.[0-9a-z]{1,6}){1,5}) A: Keep in mind regexps are greedy, so if you are just searching within the version number string and not within a bigger text, use ^ and $ to mark the start and end of your string. The regexp from Greg seems to work fine (just gave it a quick try in my editor), but depending on your library/language the first part can still match the "*" within the wrong version numbers. Maybe I am missing something, as I haven't used regexps for a year or so. This should make sure you can only find correct version numbers: ^(\*|\d+(\.\d+)*(\.\*)?)$ edit: actually greg added them already and even improved his solution, I am too slow :) A: (?ms)^((?:\d+(?!\.\*)\.)+)(\d+)?(\.\*)?$|^(\d+)\.\*$|^(\*|\d+)$ This exactly matches your first 6 examples, and rejects the other 4. *group 1: major or major.minor or '*' *group 2 if exists: minor or * *group 3 if exists: * You can remove '(?ms)'; I used it to indicate that this regexp is applied over multiple lines through QuickRex. A: This matches 1.2.3.* too ^(*|\d+(.\d+){0,2}(.*)?)$ I would propose the less elegant: (*|\d+(.\d+)?(.*)?)|\d+.\d+.\d+) A: It seems pretty hard to have a regex that does exactly what you want (i.e. accept only the cases that you need and reject all others and return some groups for the three components). I've given it a try and come up with this: ^(\*|(\d+(\.(\d+(\.(\d+|\*))?|\*))?))$ IMO (I've not tested extensively) this should work fine as a validator for the input, but the problem is that this regex doesn't offer a way of retrieving the components. For that you still have to do a split on period.
This solution is not all-in-one, but most times in programming it doesn't need to be. Of course this depends on other restrictions that you might have in your code. A: Specifying XSD elements: <xs:simpleType> <xs:restriction base="xs:string"> <xs:pattern value="[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}(\..*)?"/> </xs:restriction> </xs:simpleType> A: One more solution: ^[1-9][\d]*(.[1-9][\d]*)*(.\*)?|\*$ A: Thanks for all the responses! This is ace :) Based on OneByOne's answer (which looked the simplest to me), I added some non-capturing groups (the '(?:' parts - thanks to VonC for introducing me to non-capturing groups!), so the groups that do capture only contain the digits or * character. ^(?:(\d+)\.)?(?:(\d+)\.)?(\*|\d+)$ Many thanks everyone! A: This might work: ^(\*|\d+(\.\d+){0,2}(\.\*)?)$ At the top level, "*" is a special case of a valid version number. Otherwise, it starts with a number. Then there are zero, one, or two ".nn" sequences, followed by an optional ".*". This regex would accept 1.2.3.* which may or may not be permitted in your application. The code for retrieving the matched sequences, especially the (\.\d+){0,2} part, will depend on your particular regex library. A: I'd express the format as: "1-3 dot-separated components, each numeric except that the last one may be *" As a regexp, that's: ^(\d+\.)?(\d+\.)?(\*|\d+)$ [Edit to add: this solution is a concise way to validate, but it has been pointed out that extracting the values requires extra work. It's a matter of taste whether to deal with this by complicating the regexp, or by processing the matched groups. In my solution, the groups capture the "." characters. This can be dealt with using non-capturing groups as in ajborley's answer. Also, the rightmost group will capture the last component, even if there are fewer than three components, and so for example a two-component input results in the first and last groups capturing and the middle one undefined. I think this can be dealt with by non-greedy groups where supported. Perl code to deal with both issues after the regexp could be something like this: @version = (); @groups = ($1, $2, $3); foreach (@groups) { next if !defined; s/\.//; push @version, $_; } ($major, $minor, $mod) = (@version, "*", "*"); Which isn't really any shorter than splitting on "." ] A: My 2 cents: I had this scenario: I had to parse version numbers out of a string literal. (I know this is very different from the original question, but googling to find a regex for parsing version numbers showed this thread at the top, so adding this answer here) So the string literal would be something like: "Service version 1.2.35.564 is running!" I had to parse the 1.2.35.564 out of this literal. Taking a cue from @ajborley, my regex is as follows: (?:(\d+)\.)?(?:(\d+)\.)?(?:(\d+)\.\d+) A small C# snippet to test this looks like this: void Main() { Regex regEx = new Regex(@"(?:(\d+)\.)?(?:(\d+)\.)?(?:(\d+)\.\d+)", RegexOptions.Compiled); Match version = regEx.Match("The Service SuperService 2.1.309.0) is Running!"); version.Value.Dump("Version using RegEx"); // Prints 2.1.309.0 } A: I had a requirement to search/match for version numbers that follow the Maven convention, or even just a single digit. But no qualifier in any case. It was peculiar; it took me some time, but then I came up with this: '^[0-9][0-9.]*$' This makes sure the version: *Starts with a digit *Can have any number of digits *Contains only digits and '.' One drawback is that the version can even end with '.'
But it can handle versions of indefinite length (crazy versioning, if you want to call it that) Matches: * *1.2.3 *1.09.5 *3.4.4.5.7.8.8. *23.6.209.234.3 If you are not happy with the trailing '.', maybe you can combine this with endsWith logic A: I found this, and it works for me: /(\^|\~?)(\d|x|\*)+\.(\d|x|\*)+\.(\d|x|\*)+ A: /^([1-9]{1}\d{0,3})(\.)([0-9]|[1-9]\d{1,3})(\.)([0-9]|[1-9]\d{1,3})(\-(alpha|beta|rc|HP|CP|SP|hp|cp|sp)[1-9]\d*)?(\.C[0-9a-zA-Z]+(-U[1-9]\d*)?)?(\.[0-9a-zA-Z]+)?$/ *A normal version: ([1-9]{1}\d{0,3})(\.)([0-9]|[1-9]\d{1,3})(\.)([0-9]|[1-9]\d{1,3}) *A pre-release or patched version: (\-(alpha|beta|rc|EP|HP|CP|SP|ep|hp|cp|sp)[1-9]\d*)? (Extension Pack, Hotfix Pack, Coolfix Pack, Service Pack) *Customized version: (\.C[0-9a-zA-Z]+(-U[1-9]\d*)?)? *Internal version: (\.[0-9a-zA-Z]+)?
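To close the loop on the original question, a small C# sketch using the accepted pattern from this thread; the capture groups fill left to right, so missing trailing components are treated as "any":

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class VersionParser
{
    static readonly Regex Pattern = new Regex(@"^(?:(\d+)\.)?(?:(\d+)\.)?(\*|\d+)$");

    static void Main()
    {
        foreach (string s in new[] { "1.23.456", "1.23", "1.*", "1", "*", "*.12", "12.*.34" })
        {
            Match m = Pattern.Match(s);
            if (!m.Success) { Console.WriteLine(s + ": not valid"); continue; }

            // Collect whichever groups matched, then pad: a missing trailing
            // component means "any", which we represent with "*".
            var parts = new List<string>();
            for (int g = 1; g <= 3; g++)
                if (m.Groups[g].Success) parts.Add(m.Groups[g].Value);
            while (parts.Count < 3) parts.Add("*");

            Console.WriteLine("{0}: version={1}, release={2}, modification={3}",
                s, parts[0], parts[1], parts[2]);
        }
    }
}

For example, "1.23" prints version=1, release=23, modification=*, matching the semantics given in the question, while "*.12" and "12.*.34" are rejected.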
{ "language": "en", "url": "https://stackoverflow.com/questions/82064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Design Pattern for multithreaded observers In a digital signal acquisition system, often data is pushed into an observer in the system by one thread. Example from Wikipedia/Observer_pattern: foreach (IObserver observer in observers) observer.Update(message); When, for example, a user action from a GUI thread requires the data to stop flowing, you want to break the subject-observer connection, and even dispose of the observer altogether. One may argue: you should just stop the data source, and wait for a sentinel value to dispose of the connection. But that would incur more latency in the system. Of course, if the data pumping thread has just asked for the address of the observer, it might find it's sending a message to a destroyed object. Has someone created an 'official' design pattern countering this situation? Shouldn't they? A: If you want the data source to always be on the safe side of concurrency, you should have at least one pointer that is always safe for it to use. So the Observer object should have a lifetime that isn't ended before that of the data source. This can be done by only adding observers, but never removing them. You could have each observer not do the core implementation itself, but have it delegate this task to an ObserverImpl object. You lock access to this impl object. This is no big deal; it just means the GUI unsubscriber would be blocked for a little while in case the observer is busy using the ObserverImpl object. If GUI responsiveness were an issue, you could use some kind of concurrent job-queue mechanism with an unsubscription job pushed onto it (like PostMessage in Windows). When unsubscribing, you just substitute the core implementation for a dummy implementation. Again, this operation should grab the lock. This would indeed introduce some waiting for the data source, but since it's just a [lock - pointer swap - unlock] you could say that this is fast enough for real-time applications. If you want to avoid stacking Observer objects that just contain a dummy, you have to do some kind of bookkeeping, but this could boil down to something trivial like an object holding a pointer to the Observer object it needs from the list. Optimization: if you also keep the implementations (the real one + the dummy) alive as long as the Observer itself, you can do this without an actual lock, and use something like InterlockedExchangePointer to swap the pointers. Worst case scenario: a delegating call is going on while the pointer is swapped --> no big deal, all objects stay alive and delegating can continue. The next delegating call will be to the new implementation object. (Barring any new swaps, of course.) A: You could send a message to all observers informing them the data source is terminating and let the observers remove themselves from the list. In response to the comment, the implementation of the subject-observer pattern should allow for dynamic addition/removal of observers. In C#, the event system is a subject/observer pattern where observers are added using event += observer and removed using event -= observer.
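A minimal C# sketch of one common compromise: copy-on-write snapshots of the observer list, using the IObserver/Update names from the question's pseudocode (a hypothetical interface, not System.IObserver<T>). The trade-off, as the answer above notes, is that an observer may still receive one in-flight notification after unsubscribing, so its lifetime must outlast the publish that is currently running:

using System.Collections.Generic;

interface IObserver { void Update(object message); }

class Subject
{
    private readonly object gate = new object();
    private volatile List<IObserver> observers = new List<IObserver>(); // treated as immutable

    public void Subscribe(IObserver o)
    {
        lock (gate) { observers = new List<IObserver>(observers) { o }; }
    }

    public void Unsubscribe(IObserver o)
    {
        lock (gate)
        {
            var copy = new List<IObserver>(observers);
            copy.Remove(o);
            observers = copy; // in-flight notifications keep using their old snapshot
        }
    }

    public void Publish(object message)
    {
        List<IObserver> snapshot = observers; // one read, no lock on the hot path
        foreach (IObserver o in snapshot)
            o.Update(message);
    }
}

This keeps the data-pumping thread lock-free while it iterates, which is the same idea as the InterlockedExchangePointer suggestion above, expressed with managed references.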
{ "language": "en", "url": "https://stackoverflow.com/questions/82074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Y-Modem Implementation for .Net Is there a ready and free Y-Modem implementation for .NET, preferably in C#? I found only C/C++ solutions. A: There is a library for XModem that you could adapt to Y-Modem without much effort.
{ "language": "en", "url": "https://stackoverflow.com/questions/82093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I handle message failure in MSMQ bindings for WCF I have created a WCF service and am utilising the netMsmqBinding binding. This is a simple service that passes a Dto to my service method and does not expect a response. The message is placed in an MSMQ, and once picked up, inserted into a database. What is the best method to make sure no data is being lost? I have tried the 2 following methods: *Throw an exception This places the message in a dead letter queue for manual perusal. I can process this when my service starts *set the receiveRetryCount="3" on the binding After 3 tries - which happen instantaneously - this seems to leave the message in the queue, but fault my service. Restarting my service repeats this process. Ideally I would like to do the following: Try to process the message * *If this fails, wait 5 minutes for that message and try again. *If that process fails 3 times, move the message to a dead letter queue. *Restarting the service will push all messages from the dead letter queue back into the queue so that it can be processed. Can I achieve this? If so how? Can you point me to any good articles on how best to utilize WCF and MSMQ for my given scenario? Any help would be much appreciated. Thanks! Some additional information I am using MSMQ 3.0 on Windows XP and Windows Server 2003. Unfortunately I can't use the built-in poison message support targeted at MSMQ 4.0 and Vista/2008. A: There's a sample in the SDK that might be useful in your case. Basically, what it does is attach an IErrorHandler implementation to your service that will catch the error when WCF declares the message to be "poison" (i.e. when all configured retries have been exhausted). What the sample does is move the message to another queue and then restart the ServiceHost associated with the message (since it will have faulted when the poison message was found). It's not a very pretty sample, but it can be useful. There are a couple of limitations, though: 1- If you have multiple endpoints associated with your service (i.e. exposed through several queues), there's no way to know which queue the poison message arrived in. If you only have a single queue, this won't be a problem. I haven't seen any official workaround for this, but I've experimented with one possible alternative which I've documented here: http://winterdom.com/weblog/2008/05/27/NetMSMQAndPoisonMessages.aspx 2- Once the problem message is moved to another queue, it becomes your responsibility, so it's up to you to move it back to the processing queue once the timeout is done (or attach a new service to that queue to handle it). To be honest, in either case, you're looking at some "manual" work here that WCF just doesn't cover on its own. I've been recently working on a different project where I have a requirement to explicitly control how often retries happen, and my current solution was to create a set of retry queues and manually move messages between the retry queues and the main processing queue based on a set of timers and some heuristics, just using the raw System.Messaging stuff to handle the MSMQ queues. It seems to work pretty nicely, though there are a couple of gotchas if you go this way. A: If you're using SQL Server then you should use a distributed transaction, since both MSMQ and SQL Server support it. What happens is you wrap your database write in a TransactionScope block and call scope.Complete() only if it succeeds. If it fails, then when your WCF method returns, the message will be placed back into the queue to be tried again.
Here's a trimmed version of code I use: [OperationBehavior(TransactionScopeRequired=true, TransactionAutoComplete=true)] public void InsertRecord(RecordType record) { try { using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required)) { SqlConnection InsertConnection = new SqlConnection(ConnectionString); InsertConnection.Open(); // Insert statements go here InsertConnection.Close(); // Vote to commit the transaction if there were no failures scope.Complete(); } } catch (Exception ex) { logger.WarnException(string.Format("Distributed transaction failure for {0}", Transaction.Current.TransactionInformation.DistributedIdentifier.ToString()), ex); } } I test this by queueing up a large but known number of records, letting WCF start lots of threads to handle many of them simultaneously (it reaches 16 threads--16 messages off the queue at once), then killing the process in the middle of operations. When the program is restarted the messages are read back from the queue and processed again as if nothing happened, and at the conclusion of the test the database is consistent and has no missing records. The Distributed Transaction Manager has an ambient presence, and when you create a new instance of TransactionScope it automatically searches for the current transaction within the scope of the method invocation--which should have been created already by WCF when it popped the message off the queue and invoked your method. A: I think with MSMQ 4.0 (available only on Vista) you might be able to do it like this: <bindings> <netMsmqBinding> <binding name="PoisonMessageHandling" receiveRetryCount="3" retryCycleDelay="00:05:00" maxRetryCycles="3" receiveErrorHandling="Move" /> </netMsmqBinding> </bindings> WCF will immediately retry ReceiveRetryCount times after the first call failure. After the batch has failed, the message is moved to the retry queue. After a delay of RetryCycleDelay minutes, the message is moved from the retry queue to the endpoint queue and the batch is retried. This will be repeated MaxRetryCycles times. If all that fails, the message is handled according to receiveErrorHandling, which can be Move (to the poison queue), Reject, Drop or Fault. By the way, a good text about WCF and MSMQ is chapter 9 of the Programming WCF Services book by Juval Lowy. A: Unfortunately I'm stuck on Windows XP and Windows Server 2003 so that isn't an option for me. - (I will re-clarify that in my question as I found this solution after posting and realised I couldn't use it) I found that one solution was to set up a custom handler which would move my message onto another queue or a poison queue and restart my service. This seemed crazy to me. Imagine how often the service would be restarted if my SQL Server was down. So what I've ended up doing is allowing the service to fault and leave messages on the queue. I also log a fatal message to my system logging service that this has happened. Once our issue is resolved, I restart the service and all the messages start getting processed again. I realised that re-processing this message or any other would all fail, so why move this message and the others to another queue? I may as well stop my service, and start it again when all is operating as expected. aogan, you had the perfect answer for MSMQ 4.0, but unfortunately not for me
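For the record, a rough sketch of the "move to a retry queue by hand" idea mentioned in the retry-queue answer above, using raw System.Messaging on MSMQ 3.0. The queue paths are hypothetical, and both queues must be transactional so the receive and the send commit or roll back together:

using System;
using System.Messaging;

class RetryMover
{
    static void MoveToRetry()
    {
        using (var main = new MessageQueue(@".\private$\orders"))
        using (var retry = new MessageQueue(@".\private$\orders_retry"))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            Message msg = main.Receive(TimeSpan.FromSeconds(5), tx);
            retry.Send(msg, tx); // receive + send commit atomically, so the message can't be lost
            tx.Commit();
        }
    }
}

A timer (or a small service watching the retry queue) can later move the message back the same way once the retry delay has elapsed.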
{ "language": "en", "url": "https://stackoverflow.com/questions/82099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Should a Log4J logger be declared as transient?

I am using Java 1.4 with Log4J. Some of my code involves serializing and deserializing value objects (POJOs). Each of my POJOs declares a logger with

private final Logger log = Logger.getLogger(getClass());

The serializer complains of org.apache.log4j.Logger not being Serializable. Should I use

private final transient Logger log = Logger.getLogger(getClass());

instead?

A: If you really want to go the transient approach you will need to reset the log when your object is deserialized. The way to do that is to implement the method:

private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException;

The javadocs for Serializable have information on this method. Your implementation of it will look something like:

private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException {
    log = Logger.getLogger(...);
    in.defaultReadObject();
}

If you do not do this then log will be null after deserializing your object.

A: Either declare your logger field as static or as transient. Both ways ensure the writeObject() method will not attempt to write the field to the output stream during serialization. Usually logger fields are declared static, but if you need it to be an instance field just declare it transient, as is usually done for any non-serializable field. Upon deserialization the logger field will be null, though, so you have to implement a readObject() method to initialize it properly.

A: How about using a static logger? Or do you need a different logger reference for each instance of the class? Static fields are not serialized by default; you can explicitly declare fields to serialize with a private, static, final array of ObjectStreamField named serialPersistentFields. See the Oracle documentation.

Added content: As you use getLogger(getClass()), you will use the same logger in each instance. If you want to use a separate logger for each instance you have to differentiate on the name of the logger in the getLogger() method, e.g. getLogger(getClass().getName() + hashCode()). You should then use the transient attribute to make sure that the logger is not serialized.

A: Try making the Logger static instead. Then you don't have to care about serialization, because it is handled by the class loader.

A: These kinds of cases, particularly in EJB, are generally best handled via thread local state. Usually the use case is something like this: you have a particular transaction which is encountering a problem and you need to elevate logging to debug for that operation so you can generate detailed logging on the problem operation. Carry some thread local state across the transaction and use that to select the correct logger. Frankly I don't know where it would be beneficial to set the level on an INSTANCE in this environment, because the mapping of instances into the transaction should be a container level function; you won't actually have control of which instance is used in a given transaction anyway. Even in cases where you're dealing with a DTO it is not generally a good idea to design your system in such a way that a given specific instance is required, because the design can easily evolve in ways that make that a bad choice. You could come along a month from now and decide that efficiency considerations (caching or some other life cycle changing optimization) will break your assumption about the mapping of instances into units of work.

A: The logger should be static; that way it is never touched by serialization at all.
There's no reason to make the logger non-static, unless you have a strong reason to do so.

A: If you want the Logger to be per-instance then yes, you would want to make it transient if you're going to serialize your objects. Log4J Loggers aren't serializable, not in the version of Log4J that I'm using anyway, so if you don't make your Logger fields transient you'll get exceptions on serialization.

A: Loggers are not serializable, so you must use transient when storing them in instance fields. If you want to restore the logger after deserialization you can store the Level (a String) inside your object, which does get serialized.

A: There are good reasons to use an instance logger. One very good use case is so you can declare the logger in a super-class and use it in all sub-classes (the only downside is that logs from the super-class are attributed to the sub-class, but it is usually easy to see that). (Like others have mentioned, use static or transient.)
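Putting the advice above together, a minimal sketch of the transient approach might look like this (note the field can no longer be final if you reassign it in readObject; the class name is just an example):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;
import org.apache.log4j.Logger;

public class Order implements Serializable {

    private static final long serialVersionUID = 1L;

    // transient: skipped by serialization, so it must be rebuilt on read
    private transient Logger log = Logger.getLogger(Order.class);

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // restore the logger that was not serialized
        log = Logger.getLogger(Order.class);
    }
}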
{ "language": "en", "url": "https://stackoverflow.com/questions/82109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Do Delphi class vars have global or thread local storage?

My guess is that class variables ("class var") are truly global in storage (that is, one instance for the entire application). But I am wondering whether this is the case, or whether they are thread-local in storage (e.g. similar to a "threadvar") - one instance per thread. Anyone know?

Edit: changed "scope" to "storage" as this is in fact the correct terminology, and what I am after (thanks Barry)

A: Yes, class variables are globally scoped. Have a look in the RTL source for details of how threadvars are implemented. Under Win32 each thread can have a block of memory allocated automatically to it on thread creation. This extra data area is what is used to contain your threadvars.

A: Class variables are scoped according to their member visibility attributes, and have global storage, not thread storage. Scope is a syntactic concept, and relates to what identifiers are visible from where. It is the storage of the variable that is of concern here.

A: Class variables are just like classes: global and unique for the application.
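For reference, a minimal sketch of the two declarations (illustrative names):

type
  TCounter = class
  public
    class var Count: Integer;   // one copy for the entire application
  end;

threadvar
  ThreadCount: Integer;         // one copy per thread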
{ "language": "en", "url": "https://stackoverflow.com/questions/82113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: AssertionError with BIRT Runtime Engine API

I'm new to BIRT and I'm trying to get the Report Engine running. I'm using the code snippets provided in http://www.eclipse.org/birt/phoenix/deploy/reportEngineAPI.php

But I have a strange exception:

java.lang.AssertionError at org.eclipse.birt.core.framework.Platform.startup(Platform.java:86)

and nothing in the log file. Maybe I missed something in the configuration? Could somebody give me a hint about what I can try to get it running?

Here is the code I'm using:

public static void executeReport() {
    IReportEngine engine = null;
    EngineConfig config = null;
    try {
        config = new EngineConfig();
        config.setBIRTHome("D:\\birt-runtime-2_3_0\\ReportEngine");
        config.setLogConfig("d:/temp", Level.FINEST);
        Platform.startup(config);
        IReportEngineFactory factory = (IReportEngineFactory) Platform
                .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
        engine = factory.createReportEngine(config);

        IReportRunnable design = null;
        // Open the report design
        design = engine.openReportDesign("D:\\birt-runtime-2_3_0\\ReportEngine\\samples\\hello_world.rptdesign");
        IRunAndRenderTask task = engine.createRunAndRenderTask(design);

        HTMLRenderOption options = new HTMLRenderOption();
        options.setOutputFileName("output/resample/Parmdisp.html");
        options.setOutputFormat("html");

        task.setRenderOption(options);
        task.run();
        task.close();
        engine.destroy();
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        Platform.shutdown();
    }
}

A: I had the same error a couple of months ago. I'm not quite sure what actually fixed it, but my code looks like the following:

IDesignEngine engine = null;
DesignConfig dConfig = new DesignConfig();
EngineConfig config = new EngineConfig();
IDesignEngineFactory factory = null;
config.setLogConfig(LOG_DIRECTORY, Level.FINE);
HttpServletRequest servletRequest = (HttpServletRequest) FacesContext.getCurrentInstance()
        .getExternalContext().getRequest();
String u = servletRequest.getSession().getServletContext().getRealPath("/");
File f = new File(u + PATH_TO_ENGINE_HOME);
log.debug("setting engine home to:" + f.getAbsolutePath());
config.setEngineHome(f.getAbsolutePath());
Platform.startup(config);
factory = (IDesignEngineFactory) Platform.createFactoryObject(IDesignEngineFactory.EXTENSION_DESIGN_ENGINE_FACTORY);
engine = factory.createDesignEngine(dConfig);
SessionHandle session = engine.newSessionHandle(null);
this.design = session.openDesign(u + PATH_TO_MAIN_DESIGN);

Perhaps you can solve your problem by comparing this code snippet and your own code. By the way, my PATH_TO_ENGINE_HOME is "/WEB-INF/platform".

[edit]I used the complete "platform" folder from the WebViewerExample of birt-runtime-2_1_1. At the moment birt-runtime-2_3_0 is current.[/edit]

If this doesn't help please give a few more details (for example a code snippet).

A: Just a thought, but I wonder if your use of a forward slash when setting the logger is causing a problem? Instead of

config.setLogConfig("d:/temp", Level.FINEST);

you should use

config.setLogConfig("/temp", Level.FINEST);

or

config.setLogConfig("d:\\temp", Level.FINEST);

Finally, I realize that this is just some sample code, but you will certainly want to split your platform startup code out from your run and render task. The platform startup is very expensive and should only be done once per session.
I have a couple of Eclipse projects that are set up in a Subversion server that demonstrate how to use the Report Engine API (REAPI) and the Design Engine API (DEAPI) that you may find useful as your code gets more complicated. To get the examples you will need either the Subclipse or the Subversive plugins, and then you will need to connect to the following repository: http://longlake.minnovent.com/repos/birt_example

The projects that you need are:

birt_api_example
birt_runtime_lib
script.lib

You may need to adjust some of the file locations in the BirtUtil class, but I think that most file locations are relative paths. There is more information about how to use the example projects on my blog at http://birtworld.blogspot.com. In particular this article should help: Testing And Debug of Reports
{ "language": "en", "url": "https://stackoverflow.com/questions/82123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Displaying build times in Visual Studio

Our build server is taking too long to build one of our C++ projects. It uses Visual Studio 2008, running devenv.com MyApp.sln /Build -- see devenv command-line switches (although that's for a newer version of VS). Is there a way to get devenv.com to log the time taken to build each project in the solution, so that I know where to focus my efforts? Improved hardware is not an option in this case.

I've tried setting the output verbosity (under menu Tools → Options → Projects and Solutions → Build and Run → MSBuild project build output verbosity). This doesn't seem to have any effect in the IDE. When running MSBuild from the command line (and, for Visual Studio 2008, it needs to be MSBuild v3.5), it displays the total time elapsed at the end, but not in the IDE.

I really wanted a time-taken report for each project in the solution, so that I could figure out where the build process was taking its time.

A: For Visual Studio 2012 you could use the Build Monitor extension.

A: Menu Tools → Options → Projects and Solutions → Build and Run → Set "MSBuild project build output verbosity" from "Minimal" to "Normal".

A: Since your question involves using DevEnv from the command line, I would also suggest using MSBuild (which can build .sln files without modification).

msbuild /fl /flp:Verbosity=diagnostic Your.sln

msbuild /? will show you other useful options for the file logger.

A: If you're stuck on VS2005 you could use the vs-build-timer plugin. At the completion of a build it shows the total time taken and an (optional) summary of each of the project durations. Disclaimer: I wrote it. And yes, I need to create an installer...one day!

A: If you want to visualize your build, you can use Incredibuild. Incredibuild is now available in standalone mode (not distributed, but for use only on 8 cores on your local machine) for free as part of Visual Studio 2015 Update 1. Disclaimer: I work for Incredibuild

A: Visual Studio 2012 - 2019

* *For MSBuild Projects (e.g., all .NET projects): Click Tools → Options and then select Projects and Solutions → Build and Run. Change MSBuild project build output verbosity to Normal. It will then display Time Elapsed for every solution project it builds. But there is unfortunately no elapsed-time sum across all projects. You will also see the "Build started" timestamp.
*For C/C++ Projects: Click Tools → Options and then select Projects and Solutions → VC++ Project Settings. Change Build Timing to Yes.

A: I ended up here because I just wanted the date and time included in the build output. Should others be searching for something similar, it's as simple as adding echo %date% %time% to the Pre-build and/or Post-build events under project, Properties → Compile → Build Events.

A: Do a build first and see which project appears first in the build output (Ctrl + Home in the output window). Right click that project → Project Properties → Compile → Build Events → Pre-build. And echo ###########%date% %time%#############. So every time you see build results (or during the build) do Ctrl + Home in the output window. And somewhere in that area the time and date stare you in the face!

Oh, and you might end up adding these details to many projects as the build order can change :)

I found a better solution! Tools → Options → Projects & Solutions → Build and Run → MSBuild project build output verbosity = Normal (or above Minimal). This adds the time at the beginning/top of the output window. Ctrl + Home in the output window should do.
If we want to see how much time each project takes, then Projects & Solutions → VC++ Project Settings → Build Timing = yes. It is applicable to all projects; "VC++" is misleading.

A: I have created an extension to measure the build times and present the order of events in a graph: Visual Studio Build Timer. It is available on the Visual Studio marketplace and works for Visual Studio 2015, Visual Studio 2017 and Visual Studio 2019. Apart from showing which projects take longer, the chart displays effective dependencies between them, i.e., projects that need to wait for others, which helps in figuring out which dependencies to break to increase the parallelization of your build.

A: Menu Tools → Options → Projects and Solutions → VC++ Project Settings → Build Timing should work.

A: Go to menu Tools → Options → Projects and Solutions → Build and Run → MSBuild project build output verbosity. Set to "Normal" or "Detailed", and the build time will appear in the output window.

A: If you want to invoke an external program that can track your total build times, you can use the following solution for Visual Studio 2010 (and maybe older). The code below uses CTime by Casey Muratori. Of course, you can also use it to simply print the build time. Open up the macro explorer, and paste the following before End Module:

Dim buildStart As Date

Private Sub RunCtime(ByVal StartRatherThanEnd As Boolean)
    Dim Arg As String
    Dim psi As New System.Diagnostics.ProcessStartInfo("ctime.exe")
    If StartRatherThanEnd Then
        psi.Arguments = "-begin"
    Else
        psi.Arguments = "-end"
    End If
    psi.Arguments += " c:\my\path\build.ctm"
    psi.RedirectStandardOutput = True ' must be True so the output can be read below
    psi.WindowStyle = ProcessWindowStyle.Hidden
    psi.UseShellExecute = False
    psi.CreateNoWindow = True

    Dim process As System.Diagnostics.Process
    process = System.Diagnostics.Process.Start(psi)
    Dim myOutput As System.IO.StreamReader = process.StandardOutput
    process.WaitForExit(2000)
    If process.HasExited Then
        Dim output As String = myOutput.ReadToEnd
        WriteToBuildWindow("CTime output: " + output)
    End If
End Sub

Private Sub BuildEvents_OnBuildBegin(ByVal Scope As EnvDTE.vsBuildScope, ByVal Action As EnvDTE.vsBuildAction) Handles BuildEvents.OnBuildBegin
    WriteToBuildWindow("Build started!")
    buildStart = Date.Now
    RunCtime(True)
End Sub

Private Sub BuildEvents_OnBuildDone(ByVal Scope As EnvDTE.vsBuildScope, ByVal Action As EnvDTE.vsBuildAction) Handles BuildEvents.OnBuildDone
    Dim buildTime = Date.Now - buildStart
    WriteToBuildWindow(String.Format("Total build time: {0}", buildTime.ToString))
    RunCtime(False)
End Sub

Private Sub WriteToBuildWindow(ByVal message As String)
    Dim win As Window = DTE.Windows.Item(EnvDTE.Constants.vsWindowKindOutput)
    Dim ow As OutputWindow = CType(win.Object, OutputWindow)
    If (Not message.EndsWith(vbCrLf)) Then
        message = message + vbCrLf
    End If
    ow.OutputWindowPanes.Item("Build").OutputString(message)
End Sub

The answer was taken from here and here.

A: Options -> Projects and Solutions -> VC++ Project Settings -> Build Timing

A: Parallel Builds Monitor is a nice extension for Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/82128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "199" }
Q: producing 2 or more short sounds when a web page loads

I have 6 sound files (1.wav, 2.wav, etc.) of which 3 different ones have to be heard each time the web page opens. The numbers are selected randomly. I have tried multiple "embeds" but only the last sound selected gets produced. I have also tried javascript routines that fiddle the bgsound attribute; however, I was not able to produce more than one sound at a time. The sounds are required to play either automatically on page open, or they can be triggered by a click on a button or link; however, only one click is acceptable for the three sounds. Is there another way to do this? Suggestions very welcome.

A: A simple Flash would do the trick better than anything else. However, please consider that unless you develop your page for an Intranet application and the feature was specifically requested by the users, it will most likely go against the best usability practices for the web. Most users consider pages which produce sounds to be very distracting, and if the sound is produced on page load they most likely will not be able to turn it off. If you want to embed some sound in your page you may allow the user to turn it on explicitly.

A: I would use Flash if I'm trying to add sound into a webpage; you can embed a flash document with no width or height so it will be invisible but still play sound.

A: Check out Sound Manager 2, an invisible flash movie that you can use to play sounds. It allows you to load and play multiple sounds. To do what you wish to accomplish I would re-encode the wav files as mp3s (so that they download faster and Sound Manager can play them). Then use javascript to get Sound Manager to create the sounds and play them in a random order. You can listen to the onfinish event of each sound to start playing the next sound.

A: I've got a good idea: DON'T! I hate web sites that play sounds without my telling them to. I use a multi-tabbed browser, and a multi-tasking operating system, and you don't have control of my computer, so don't assume you can play a sound without interfering with other things I'm doing.

A: If you're not against using a JavaScript framework to play a sound, scriptaculous provides an API for playing sounds. http://github.com/madrobby/scriptaculous/wikis/sound

A: A browser will split page loading into multiple items and thus it's likely to load all sounds at once using multiple threads. I think what you're trying to accomplish is impossible.

A: I know this is a bit exorbitant, but the number of combinations is not overly excessive. If you pre-blend the wav files into a series of files and just name them as follows:

1_2_3, 1_2_4, 1_2_5, 1_2_6,
1_3_4, 1_3_5, 1_3_6,
1_4_5, 1_4_6,
1_5_6,
2_3_4, 2_3_5, 2_3_6, 2_4_5, 2_4_6, 2_5_6,
3_4_5, 3_4_6, 3_5_6, 4_5_6

(fortunately only 20 combinations) and then do something along these lines:

@n = sort { $a <=> $b } (random_number(), random_number(), random_number());
$file = "$n[0]_$n[1]_$n[2].wav";

that should get it working. Note that many people are opposed to sound, and depending on what technique you use to play it, it may or may not work for everyone; but that's probably a feature we should get browsers to enforce, because some people like being able to hear sounds (shocking, but true).

A: Adding Sounds - HTML Lessons
HTML MUSIC / MEDIA CODE - Sound
http://html-lesson.blogspot.com/2008/06/music-media-code-sound.html
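If you go the pre-blended route, the client-side selection could be as simple as this untested sketch (it picks 3 distinct numbers out of 6, sorts them, and builds the filename of one of the combined files named above):

function pickCombinedFile() {
    var pool = [1, 2, 3, 4, 5, 6];
    var picked = [];
    for (var i = 0; i < 3; i++) {
        // remove a random element from the pool so the numbers stay distinct
        var idx = Math.floor(Math.random() * pool.length);
        picked.push(pool.splice(idx, 1)[0]);
    }
    picked.sort(function (a, b) { return a - b; });
    return picked.join("_") + ".wav"; // e.g. "1_3_5.wav"
}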
{ "language": "en", "url": "https://stackoverflow.com/questions/82141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a fast, accurate Highlighter for Lucene?

I've been using the (Java) Highlighter for Lucene (in the Sandbox package) for some time. However, this isn't really very accurate when it comes to matching the correct terms in search results - it works well for simple queries, for example searching for two separate words will highlight both words in the result fragments. However, it doesn't act well with more complicated queries. In the simplest case, phrase queries such as "Stack Overflow" will match all occurrences of Stack or Overflow in the highlighting, which gives the impression to the user that it isn't working very well.

I tried applying the fix here, but that came with a lot of performance caveats and at the end of the day was just plain unusable. The performance is especially an issue on wildcard queries. This is due to the way that the highlighting works; instead of just working on the query string and the text, it parses the query as Lucene would and then looks for all the matches that Lucene has made; unfortunately this means that for certain wildcard queries it can be looking for matches to 2000+ clauses on large documents, and it's simply not fast enough.

Is there any faster implementation of an accurate highlighter?

A: There is a new faster highlighter (needs to be patched in but will be part of release 2.9) https://issues.apache.org/jira/browse/LUCENE-1522 and a back-reference to this question

A: You could look into using Solr. http://lucene.apache.org/solr Solr is a sort of generic search application that uses Lucene and supports highlighting. It's possible that the highlighting in Solr is usable as an API outside of Solr. You could also look at how Solr does it for inspiration.

A: I've been reading on the subject and came across SpanQuery, which would return to you the span of the matched term or terms in the field that matched.
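For reference, basic use of the contrib Highlighter looks roughly like this (a sketch against the Lucene 2.x API; the plain QueryScorer shown here is exactly what marks every occurrence of Stack or Overflow for a phrase query, which is the inaccuracy the 2.9 work addresses):

// text is the stored field content; analyzer should match the one used at index time
Query query = new QueryParser("content", analyzer).parse("\"Stack Overflow\"");
Highlighter highlighter = new Highlighter(new QueryScorer(query));
TokenStream tokens = analyzer.tokenStream("content", new StringReader(text));
String fragment = highlighter.getBestFragment(tokens, text);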
{ "language": "en", "url": "https://stackoverflow.com/questions/82151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Build setup project with NAnt

I've already got a NAnt build script that builds/runs tests/zips the web project together, etc., but I'm working on a basic desktop application. How would I go about building the setup project using NAnt so I can include it with the build report on TeamCity?

Edit: The setup is the basic Setup Project supplied with Visual Studio. It's for internal use within a company, so it doesn't do anything fancy.

A: The only way to build a Visual Studio setup project is through Visual Studio. You will need to have a copy of VS installed on the build machine and run it as a command line tool (exec devenv.exe) with the appropriate parameters (which should be the build mode (release or debug) and the project name to build; there might be a few others, but you can run devenv /? to get a list of the different command line options).

A: It's been a few years, but the last time I had to do this, I used a tool called WiX, which has utilities named Candle and Light. I used these tools in my NAnt script to create an MSI installer.

A: Instead of trying to build using MSBUILD (assumption), build the solution or project using DEVENV.EXE. The command line is something along the lines of:

DEVENV MySolutionFile.sln /build DEBUG /project SetupProject.vdproj

You can change the DEBUG to RELEASE or any other build configuration you've set up. You can also leave out the /project... part to build the whole solution.
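The devenv approach can be wired into the NAnt script with an <exec> task; a sketch (the solution, project and devenv paths are placeholders and will vary by Visual Studio version):

<target name="build-setup">
    <exec program="C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com">
        <arg value="MySolution.sln" />
        <arg value="/build" />
        <arg value="Release" />
        <arg value="/project" />
        <arg value="MySetup.vdproj" />
    </exec>
</target>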
{ "language": "en", "url": "https://stackoverflow.com/questions/82169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PyQt and PyCairo

I know it's possible to place a PyCairo surface inside a Gtk Drawing Area. But I think Qt is a lot better to work with, so I've been wondering if there's any way to place a PyCairo surface inside some Qt component?

A: Qt's own OpenGL based surfaces (using QPainter) are known to be much faster than Cairo. Might you explain why you want specifically Cairo in Qt? For the basics of using QPainter see this excerpt from the book "C++ GUI Programming with Qt4"; while it's C++ code, the PyQt implementation will be parallel. As for joining Cairo with Qt... This article in Ars Technica sheds some light - it seems nothing that could help you exists currently (in other words, nobody has tried such a marriage).

A: For plotting you should also consider matplotlib, which provides a higher level API and integrates well with PyQt.
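One commonly cited recipe is to let cairo render into an ImageSurface and then wrap the raw pixel buffer in a QImage; a rough, untested sketch using PyQt4 names:

import cairo
from PyQt4 import QtGui

WIDTH, HEIGHT = 200, 200
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.set_source_rgb(0.2, 0.4, 0.8)
ctx.rectangle(20, 20, 160, 160)
ctx.fill()

# Wrap cairo's pixel buffer in a QImage; both sides use premultiplied ARGB32
image = QtGui.QImage(surface.get_data(), WIDTH, HEIGHT,
                     surface.get_stride(),
                     QtGui.QImage.Format_ARGB32_Premultiplied)
# In a QWidget.paintEvent you could then do:
#     QtGui.QPainter(self).drawImage(0, 0, image)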
{ "language": "en", "url": "https://stackoverflow.com/questions/82180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way of speccing plugins with RSpec?

I'm creating a plugin, and am looking to use RSpec so I can build it using BDD. Is there a recommended method of doing this?

A: OK, I think I have a solution:

* *Generate the plugin via script/generate plugin
*Change the Rakefile, and add:

require 'spec/rake/spectask'

desc 'Test the PLUGIN_NAME plugin.'
Spec::Rake::SpecTask.new(:spec) do |t|
  t.libs << 'lib'
  t.verbose = true
end

* *Create a spec directory, and begin adding specs in *_spec.rb files, as normal

You can also modify the default task to run spec instead of test.

A: For an example of an existing plugin that uses RSpec, check out the restful_authentication plugin. Maybe it will help.
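A first spec might look something like this (the plugin and method names are placeholders):

# spec/my_plugin_spec.rb
require File.dirname(__FILE__) + '/../lib/my_plugin'

describe MyPlugin do
  it "does something useful" do
    MyPlugin.new.frobnicate.should == :frobnicated
  end
end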
{ "language": "en", "url": "https://stackoverflow.com/questions/82191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How do you customize the RSS feeds in SharePoint

In the early days of the SharePoint 2007 beta, I came across the ability to customize the template used to emit the RSS feeds from lists. I can't find it again. Anybody know where it is?

A: Ah, found it, based on a subtle hint from Jan Tielens. It's on the Settings page for the list, under Communications -> RSS settings.

/_layouts/listsyndication.aspx?List=<list id>

I could have sworn there was more, like an actual template file you could customize.

A: In my search, I also came across Customize RSS for the Content Query Web Part: "After you customize the Content Query Web Part to display the fields and content you want, you can set up the Web Part to emit a Really Simple Syndication (RSS) feed of that content."
{ "language": "en", "url": "https://stackoverflow.com/questions/82214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use a calculated date value in a SharePoint list field to find a date + 30 days?

I have a list I've built in SharePoint, where one of the fields is a date that the user enters. I want to add another field, which is a calculated value field that needs to be the date provided by the user + 30 days. What formula do I need to pass to the calculated value field to achieve that?

A: Try this:

* *Create a new Calculated column
*In the Formula box, enter something like this: =TEXT([existing date column]+30,"yyyy-mm-dd") You can use any date format string you like instead of "yyyy-mm-dd"
*Make the data type "Date and Time"
*Make the date and time format "Date Only"
{ "language": "en", "url": "https://stackoverflow.com/questions/82220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is it worth the effort to move from a hand crafted hibernate mapping file to annotations?

I've got a webapp whose original code base was developed with a hand crafted hibernate mapping file. Since then, I've become fairly proficient at 'coding' my hbm.xml file. But all the cool kids are using annotations these days. So, the question is: Is it worth the effort to refactor my code to use hibernate annotations? Will I gain anything, other than being hip and modern? Will I lose any of the control I have in my existing hand coded mapping file?

A sub-question is, how much effort will it be? I like my databases lean and mean. The mapping covers only a dozen domain objects, including two sets, some subclassing, and about 8 tables.

Thanks, dear SOpedians, in advance for your informed opinions.

A: "If it ain't broke - don't fix it!" I'm an old fashioned POJO/POCO kind of guy anyway, but why change to annotations just to be cool? To the best of my knowledge you can do most of the stuff as annotations, but the more complex mappings are sometimes expressed more clearly as XML.

A: One thing you'll gain from using annotations instead of an external mapping file is that your mapping information will be on classes and fields, which improves maintainability. You add a field, you immediately add the annotation. You remove one, you also remove the annotation. You rename a class or a field, the annotation is right there and you can rename the table or column as well. You make changes in class inheritance, it's taken into account. You don't have to go and edit an external file some time later. This makes the whole thing more efficient and less error prone. On the other hand, you'll lose the global view your mapping file used to give you.

A: I've recently done both in a project and found:

* *I prefer writing annotations to XML (plays well with the static typing of Java, auto-complete in the IDE, refactoring, etc). I like seeing the stuff all woven together rather than going back and forth between code and XML.
*It encodes DB information in your classes. Some people find that gross and unacceptable. I can't say it bothered me. It has to go somewhere and we're going to rebuild the WAR for a change regardless.
*We actually went all the way to JPA annotations, but there are definitely cases where the JPA annotations are not enough, so we then had to use either Hibernate annotations or config to tweak.
*Note that you can actually use both annotations AND hbm files. Might be a nice hybrid that specifies the O part in annotations and the R part in hbm files, but it sounds like more trouble than it's worth.

A: As much as I like to move on to new and potentially better things, I need to remember not to mess with things that aren't broken. So if having the hibernate mappings in a separate file is working for you now, I wouldn't change it.

A: I definitely prefer annotations, having used both. They are much more maintainable, and since you aren't dealing with that many classes to re-map, I would say it's worth it. The annotations make refactoring much easier.

A: All the features are supported both in the XML and in annotations. You will still be able to override your annotations with XML declarations. As for the effort, I think it is worth it, as you will be able to see everything in one place and not switch between your code and the XML file (unless of course you are using two monitors ;) )

A: "The only thing you'll gain from using annotations"

I would probably argue that this is the thing you want to gain from using annotations.
Because you don't get compile-time safety with NHibernate, this is the next best thing.

A: "If it ain't broke - don't fix it!"

@Macka - Thanks, I needed to hear that. And thanks to everyone for your answers. While I am in the very fortunate position of having an insane amount of professional and creative control over my work, and can bring in just about any technology, library, or tool for just about any reason (barring expensive stuff) including "because all the cool kids are using it"... it does not really make sense to port what amounts to a significant portion of the core of an existing project.

I'll try out Hibernate or JPA annotations with a green-field project some time. Unfortunately, I rarely get new completely independent projects.
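To make the comparison concrete, the kind of hbm.xml mapping discussed here typically collapses into something like this (a sketch with illustrative names, not your actual model):

@Entity
@Table(name = "T_ORDER")
public class Order implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(name = "CUSTOMER_NAME", nullable = false)
    private String customerName;

    // a <set> element in hbm.xml becomes an annotated collection
    @OneToMany(mappedBy = "order", cascade = CascadeType.ALL)
    private Set<OrderLine> lines = new HashSet<OrderLine>();

    // getters and setters omitted
}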
{ "language": "en", "url": "https://stackoverflow.com/questions/82223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Force IIS7 to suggest downloading *.exe files in "Local intranet" zone

Problem:

* *html file on a local server (inside our organization) with a link to an exe on the same server.
*clicking the link runs the exe on the client. Instead I want it to offer downloading it.

Tried so far:

* *Changed permissions on the exe's virtual directory to be read and script.
*Added a Content-Disposition header on the exe's directory.

I can't change settings in the browser. It's intended for a lot of people to consume.

A: You need to set Content-Disposition in the HTTP header. This Microsoft Knowledge Base entry has more detail on how to do this.

A: Runs them where: on the server, or on the client? If on the server: set the handler mappings of the file so that CGI-exe is disabled. If on the client: then this is a web browser issue - it shouldn't be running EXEs directly! What browser is it? As Dave Webb mentions, you could use the Content-Disposition HTTP header: these can be added using HTTP Response Headers in IIS7 for that directory/file.

A: Whether a file is downloaded or opened automatically is a browser-side, not a server-side, setting. The other way of doing it would be to change the MIME type for the file to something like application/octet-stream or similar to try and force your browser to download it.
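On IIS7 the Content-Disposition header can also be set declaratively in web.config, scoped to a single file; a sketch (the path and file name are placeholders):

<configuration>
  <location path="downloads/MyApp.exe">
    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <add name="Content-Disposition" value="attachment; filename=MyApp.exe" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>
  </location>
</configuration>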
{ "language": "en", "url": "https://stackoverflow.com/questions/82232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a Problem with JPA Entities, Oracle 10g and Calendar Type properties?

I'm experiencing the following very annoying behaviour when using JPA entities in conjunction with Oracle 10g. Suppose you have the following entity:

@Entity
@Table(name = "T_Order")
public class TOrder implements Serializable {

    private static final long serialVersionUID = 2235742302377173533L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    @Column(name = "activationDate")
    private Calendar activationDate;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public Calendar getActivationDate() {
        return activationDate;
    }

    public void setActivationDate(Calendar activationDate) {
        this.activationDate = activationDate;
    }
}

This entity is mapped to Oracle 10g, so in the DB there will be a table T_ORDER with a primary key NUMBER column ID and a TIMESTAMP column activationDate.

Let's suppose I create an instance of this class with the activation date 15. Sep 2008 00:00AM. My local timezone is CEST, which is GMT+02:00. When I persist this object and select the data from the table T_ORDER using sqlplus, I find out that 14. Sep 2008 22:00 is actually stored in the table, which is OK so far, because the Oracle DB timezone is GMT.

But now the annoying part. When I read this entity back into my Java program, I find out that the Oracle timezone is ignored and I get 14. Sep 2008 22:00 CEST, which is definitely wrong. So basically, when writing to the DB the timezone information is used, but when reading it is ignored.

Is there any solution for this out there? The simplest solution I guess would be to set the Oracle DB's timezone to GMT+02, but unfortunately I can't do this because there are other applications using the same server.

We use the following technologies: MyEclipse 6.5, JPA with Hibernate 3.2, Oracle 10g thin JDBC driver.

A: You should not use a Calendar for accessing dates from the database, for this exact reason. You should use java.util.Date as so:

@Temporal(TemporalType.TIMESTAMP)
@Column(name="activationDate")
public Date getActivationDate() {
    return this.activationDate;
}

java.util.Date points to a moment in time, irrespective of any timezones. Calendar can be used to format a date for a particular timezone or locale.

A: I already had my share of problems with JPA and timestamps. I've been reading in the Oracle forums; please check the following:

* *The field in the database should be TIMESTAMP_TZ and not just TIMESTAMP
*Try adding the annotation @Temporal(value = TemporalType.TIMESTAMP)
*If you don't really need the timezone, put in a date or timestamp field.
{ "language": "en", "url": "https://stackoverflow.com/questions/82235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Testing StarTeam operations

In a Java application I need to check out files from Borland StarTeam 2006 R2 using the StarTeam API by various parameters (date, label). Is there any framework that helps to write automatic tests for such functionality?

A: I'm not aware of any; the approach I'd take is a project which has sample files you can check out by various criteria, and then verify everything you expected arrived, and that it is the right file (hash matches). You're aware that they ship a command line client (stcmd) too, right? For a lot of things, you don't need to use the API at all.
{ "language": "en", "url": "https://stackoverflow.com/questions/82245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I use sudo to redirect output to a location I don't have permission to write to?

I've been given sudo access on one of our development RedHat linux boxes, and I seem to find myself quite often needing to redirect output to a location I don't normally have write access to. The trouble is, this contrived example doesn't work:

sudo ls -hal /root/ > /root/test.out

I just receive the response:

-bash: /root/test.out: Permission denied

How can I get this to work?

A: A trick I figured out myself was

sudo ls -hal /root/ | sudo dd of=/root/test.out

A: The problem is that the command gets run under sudo, but the redirection gets run under your user. This is done by the shell and there is very little you can do about it.

sudo command > /some/file.log
`-----v-----'`-------v-------'
   command       redirection

The usual ways of bypassing this are:

* *Wrap the commands in a script which you call under sudo. If the commands and/or log file changes, you can make the script take these as arguments. For example:

sudo log_script command /log/file.txt

*Call a shell and pass the command line as a parameter with -c. This is especially useful for one-off compound commands. For example:

sudo bash -c "{ command1 arg; command2 arg; } > /log/file.txt"

*Arrange a pipe/subshell with required rights (i.e. sudo):

# Read and append to a file
cat ./'file1.txt' | sudo tee -a '/log/file.txt' > '/dev/null';

# Store both stdout and stderr streams in a file
{ command1 arg; command2 arg; } |& sudo tee -a '/log/file.txt' > '/dev/null';

A: How about writing a script?

Filename: myscript

#!/bin/sh
/bin/ls -lah /root > /root/test.out
# end script

Then use sudo to run the script:

sudo ./myscript

A: Whenever I have to do something like this I just become root:

# sudo -s
# ls -hal /root/ > /root/test.out
# exit

It's probably not the best way, but it works.

A: I would do it this way:

sudo su -c 'ls -hal /root/ > /root/test.out'

A: Yet another variation on the theme:

sudo bash <<EOF
ls -hal /root/ > /root/test.out
EOF

Or of course:

echo 'ls -hal /root/ > /root/test.out' | sudo bash

They have the (tiny) advantage that you don't need to remember any arguments to sudo or sh/bash.

A: This is based on the answer involving tee. To make things easier I wrote a small script (I call it suwrite) and put it in /usr/local/bin/ with +x permission:

#! /bin/sh
if [ $# = 0 ] ; then
    echo "USAGE: <command writing to stdout> | suwrite [-a] <output file 1> ..." >&2
    exit 1
fi

for arg in "$@" ; do
    if [ ${arg#/dev/} != ${arg} ] ; then
        echo "Found dangerous argument ‘$arg’. Will exit."
        exit 2
    fi
done

sudo tee "$@" > /dev/null

As shown in the USAGE in the code, all you have to do is pipe the output to this script followed by the desired superuser-accessible filename, and it will automatically prompt you for your password if needed (since it includes sudo).

echo test | suwrite /root/test.txt

Note that since this is a simple wrapper for tee, it will also accept tee's -a option to append, and it also supports writing to multiple files at the same time.

echo test2 | suwrite -a /root/test.txt
echo test-multi | suwrite /root/test-a.txt /root/test-b.txt

It also has some simplistic protection against writing to /dev/ devices, which was a concern mentioned in one of the comments on this page.
A: Clarifying a bit on why the tee option is preferable

Assuming you have appropriate permission to execute the command that creates the output, if you pipe the output of your command to tee, you only need to elevate tee's privileges with sudo and direct tee to write (or append) to the file in question.

In the example given in the question that would mean:

ls -hal /root/ | sudo tee /root/test.out

For a couple more practical examples:

# kill off one source of annoying advertisements
echo 127.0.0.1 ad.doubleclick.net | sudo tee -a /etc/hosts

# configure eth4 to come up on boot, set IP and netmask (centos 6.4)
echo -e "ONBOOT=\"YES\"\nIPADDR=10.42.84.168\nPREFIX=24" | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth4

In each of these examples you are taking the output of a non-privileged command and writing to a file that is usually only writable by root, which is the origin of your question.

It is a good idea to do it this way because the command that generates the output is not executed with elevated privileges. It doesn't seem to matter here with echo, but when the source command is a script that you don't completely trust, it is crucial.

Note you can use the -a option to tee to append (like >>) to the target file rather than overwrite it (like >).

A: Make sudo run a shell, like this:

sudo sh -c "echo foo > ~root/out"

A: sudo at now

at> echo test > /tmp/test.out
at> <EOT>
job 1 at Thu Sep 21 10:49:00 2017

A: The way I would go about this issue is:

If you need to write/replace the file:

echo "some text" | sudo tee /path/to/file

If you need to append to the file:

echo "some text" | sudo tee -a /path/to/file

A: Your command does not work because the redirection is performed by your shell, which does not have the permission to write to /root/test.out. The redirection of the output is not performed by sudo.

There are multiple solutions:

* *Run a shell with sudo and give the command to it by using the -c option:

sudo sh -c 'ls -hal /root/ > /root/test.out'

*Create a script with your commands and run that script with sudo:

#!/bin/sh
ls -hal /root/ > /root/test.out

Run sudo ls.sh. See Steve Bennett's answer if you don't want to create a temporary file.

*Launch a shell with sudo -s then run your commands:

[nobody@so]$ sudo -s
[root@so]# ls -hal /root/ > /root/test.out
[root@so]# ^D
[nobody@so]$

*Use sudo tee (if you have to escape a lot when using the -c option):

sudo ls -hal /root/ | sudo tee /root/test.out > /dev/null

The redirect to /dev/null is needed to stop tee from outputting to the screen. To append instead of overwriting the output file (>>), use tee -a or tee --append (the last one is specific to GNU coreutils).

Thanks go to Jd, Adam J. Forster and Johnathan for the second, third and fourth solutions.

A: Someone here has just suggested sudoing tee:

sudo ls -hal /root/ | sudo tee /root/test.out > /dev/null

This could also be used to redirect any command to a directory that you do not have access to. It works because the tee program is effectively an "echo to a file" program, and the redirect to /dev/null is to stop it also outputting to the screen, to keep it the same as the original contrived example above.

A: Don't mean to beat a dead horse, but there are too many answers here that use tee, which means you have to redirect stdout to /dev/null unless you want to see a copy on the screen.
A simpler solution is to just use cat like this:

sudo ls -hal /root/ | sudo bash -c "cat > /root/test.out"

Notice how the redirection is put inside quotes, so that it is evaluated by a shell started by sudo instead of the one running it.

A: Maybe you've been given sudo access to only some programs/paths? Then there is no way to do what you want (unless you hack it somehow). If that is not the case then maybe you can write a bash script:

cat > myscript.sh
#!/bin/sh
ls -hal /root/ > /root/test.out

Press Ctrl + D:

chmod a+x myscript.sh
sudo ./myscript.sh

Hope it helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/82256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1081" }
Q: HTML drag and drop sortable tables

Ever wanted to have an HTML drag and drop sortable table in which you could sort both rows and columns? I know it's something I'd die for. There are a lot of sortable lists going around, but a sortable table seems impossible to find. I know that you can get pretty close with the tools that script.aculo.us provides, but I ran into some cross-browser issues with them.

A: I recommend Sortables in jQuery. You can use it on list items or pretty much anything, including tables. jQuery is very cross-browser friendly and I recommend it all the time.

A: I've used dhtmlxGrid in the past. Among other things it offers drag-and-drop of rows/columns, client-side sorting (string, integer, date, custom) and multi-browser support.

Response to comment: No, not found anything better - just moved on from that project. :-)

A: David Heggie's answer was the most useful to me. It can be slightly more concise:

var sort = function(event, ui) {
    var url = "/myReorderFunctionURL/" + $(this).sortable('serialize');
    $.post(url, null, null, "script"); // sortable("refresh") is automatic
}

$(".sort").sortable({
    cursor: 'move',
    axis: 'y',
    stop: sort
});

works for me, with the same markup.

A: I've used jQuery UI's sortable plugin with good results. Markup similar to this:

<table id="myTable">
    <thead>
        <tr><th>ID</th><th>Name</th><th>Details</th></tr>
    </thead>
    <tbody class="sort">
        <tr id="1"><td>1</td><td>Name1</td><td>Details1</td></tr>
        <tr id="2"><td>2</td><td>Name1</td><td>Details2</td></tr>
        <tr id="3"><td>3</td><td>Name1</td><td>Details3</td></tr>
        <tr id="4"><td>4</td><td>Name1</td><td>Details4</td></tr>
    </tbody>
</table>

and then in the javascript:

$('.sort').sortable({
    cursor: 'move',
    axis: 'y',
    update: function(e, ui) {
        href = '/myReorderFunctionURL/';
        $(this).sortable("refresh");
        sorted = $(this).sortable("serialize", 'id');
        $.ajax({
            type: 'POST',
            url: href,
            data: sorted,
            success: function(msg) {
                // do something with the sorted data
            }
        });
    }
});

This POSTs a serialized version of the items' IDs to the URL given. This function (PHP in my case) then updates the items' orders in the database.

A: Most frameworks (YUI, MooTools, jQuery, Prototype/Scriptaculous, etc.) have sortable list functionality. Do a little research into each and pick the one that suits your needs most.

A: If you don't mind Java, there is a very handy library for GWT called GWT-DND; check out the online demo to see how powerful it is.

A: If you find .serialize() returning null in David Heggie's solution, then set the id values for the TRs as 'id_1' instead of simply '1'. Example:

<tr id="id_1"><td>1</td><td>Name1</td><td>Details1</td></tr>
<tr id="id_2"><td>2</td><td>Name1</td><td>Details2</td></tr>
<tr id="id_3"><td>3</td><td>Name1</td><td>Details3</td></tr>
<tr id="id_4"><td>4</td><td>Name1</td><td>Details4</td></tr>
A: I am using JQuery Sortable to do so but in case, you are using Vue.js like me, here is a solution that creates a custom Vue directive to encapsulate the Sortable functionality, I am aware of Vue draggable but it doesnt sort table columns as per the issue HERE To see this in action, CHECK THIS JS Code Vue.directive("draggable", { //adapted from https://codepen.io/kminek/pen/pEdmoo inserted: function(el, binding, a) { Sortable.create(el, { draggable: ".draggable", onEnd: function(e) { /* vnode.context is the context vue instance: "This is not documented as it's not encouraged to manipulate the vm from directives in Vue 2.0 - instead, directives should be used for low-level DOM manipulation, and higher-level stuff should be solved with components instead. But you can do this if some usecase needs this. */ // fixme: can this be reworked to use a component? // https://github.com/vuejs/vue/issues/4065 // https://forum.vuejs.org/t/how-can-i-access-the-vm-from-a-custom-directive-in-2-0/2548/3 // https://github.com/vuejs/vue/issues/2873 "directive interface change" // `binding.expression` should be the name of your array from vm.data // set the expression like v-draggable="items" var clonedItems = a.context[binding.expression].filter(function(item) { return item; }); clonedItems.splice(e.newIndex, 0, clonedItems.splice(e.oldIndex, 1)[0]); a.context[binding.expression] = []; Vue.nextTick(function() { a.context[binding.expression] = clonedItems; }); } }); } }); const cols = [ {name: "One", id: "one", canMove: false}, {name: "Two", id: "two", canMove: true}, {name: "Three", id: "three", canMove: true}, {name: "Four", id: "four", canMove: true}, ] const rows = [ {one: "Hi there", two: "I am so excited to test", three: "this column that actually drags and replaces", four: "another column in its place only if both can move"}, {one: "Hi", two: "I", three: "am", four: "two"}, {one: "Hi", two: "I", three: "am", four: "three"}, {one: "Hi", two: "I", three: "am", four: "four"}, {one: "Hi", two: "I", three: "am", four: "five"}, {one: "Hi", two: "I", three: "am", four: "six"}, {one: "Hi", two: "I", three: "am", four: "seven"} ] Vue.component("datatable", { template: "#datatable", data() { return { cols: cols, rows: rows } } }) new Vue({ el: "#app" }) CSS .draggable { cursor: move; } table.table tbody td { white-space: nowrap; } Pug Template HTML #app datatable script(type="text/x-template" id="datatable") table.table thead(v-draggable="cols") template(v-for="c in cols") th(:class="{draggable: c.canMove}") b-dropdown#ddown1.m-md-2(:text='c.name') b-dropdown-item First Action b-dropdown-item Second Action b-dropdown-item Third Action b-dropdown-divider b-dropdown-item Something else here... b-dropdown-item(disabled='') Disabled action tbody template(v-for="row in rows") tr template(v-for="(col, index) in cols") td {{row[col.id]}} A: How about sorttable? That would seem to fit your requirements nicely. It's rather easy to use - load the sorttable Javascript file, then, for each table you want it to make sortable, apply class="sortable" to the <table> tag. It will immediately understand how to sort most types of data, but if there's something it doesn't, you can add a custom sort key to tell it how to sort. The documentation explains it all pretty well.
{ "language": "en", "url": "https://stackoverflow.com/questions/82259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: pseudo-streaming of wmv files

Is it possible to do pseudo-streaming (e.g. start playback at any point) with WMV files and Silverlight? This is possible using Flash in a progressive download setup, but can it be done on the Microsoft track?

A: You can use Windows Media Services 2008. It enables you to actually stream WMV to a Silverlight interface.

A: No reason you couldn't stream it like any other HTTP video; it basically just expects the file to be a correct WMV file. You would need to have a server that supports the seeking, though.

A: Since asking this question, Microsoft has released Smooth Streaming, which is exactly what I was asking for and more. Clearly this is the best Silverlight solution, although you do need Windows Server 2008 on the backend.
{ "language": "en", "url": "https://stackoverflow.com/questions/82264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Launch Local Mail Client from App Hosted on Citrix I have a desktop application (Windows Forms) which my client hosts on a Citrix server. I would like to launch the user's locally configured mail client to send mail from my application. How do I do this? In addition to this, I will need to attach a file to the email before it is sent. A: I'm not at all sure how the Citrix client would handle mailto: links (or if you can configure that), but if you haven't tried them, I suggest you do. Example: mailto:someone@example.com?subject=hello&body=see+attachment&attachment=\\host\path-to\file.foo Also note that not all email clients support the attachment parameter in mailto URLs.
{ "language": "en", "url": "https://stackoverflow.com/questions/82266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tools for command line file parsing in cygwin

I have to deal with text files in a motley selection of formats. Here's an example (Columns A and B are tab delimited):

A   B
a   Name1=Val1, Name2=Val2, Name3=Val3
b   Name1=Val4, Name3=Val5
c   Name1=Val6, Name2=Val7, Name3=Val8

The files could have headers or not, have mixed delimiting schemes, have columns with name/value pairs as above, etc.

I often have the ad-hoc need to extract data from such files in various ways. For example, from the above data I might want the value associated with Name2 where it is present, i.e.

A   B
a   Val2
c   Val7

What tools/techniques are there for performing such manipulations as one-line commands, using the above as an example but extensible to other cases?

A: You have all the basic bash shell commands, for example grep, cut, sed and awk at your disposal. You can also use Perl or Ruby for more complex things.

A: I don't like sed too much, but it works for such things:

var="Name2"; sed -n "1p;s/\([^ ]*\) .*$var=\([^ ,]*\).*/\1 \2/p" < filename

Gives you:

A B
a Val2
c Val7

A: From what I've seen I'd start with Awk for this sort of thing, and then if you need something more complex, I'd progress to Python.

A: I would use sed:

# print section of file between two regular expressions (inclusive)
sed -n '/Iowa/,/Montana/p' # case sensitive

A: Since you have cygwin, I'd go with Perl. It's the easiest to learn (check out the O'Reilly book: Learning Perl) and widely applicable.

A: I would use Perl. Write a small module (or more than one) for dealing with the different formats. You could then run perl one-liners using that library. An example of what it would look like:

perl -e 'use Parser;' -e 'parser("in.input").get("Name2");'

Don't quote me on the syntax, but that's the general idea. Abstract the task at hand to allow you to think in terms of what you need to do, not how you need to do it. Ruby would be another option; it tends to have a cleaner syntax, but either language would work.
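For the Name2 example in the question, a single awk invocation would do it (assumes tab-delimited input with a header row, as in the sample):

awk -F'\t' 'NR==1 { print "A\tB"; next }
            match($2, /Name2=[^,]*/) {
                # RSTART/RLENGTH are set by match(); skip the 6 chars of "Name2="
                print $1 "\t" substr($2, RSTART+6, RLENGTH-6)
            }' input.txt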
{ "language": "en", "url": "https://stackoverflow.com/questions/82268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I use the same field type in multiple lists on SharePoint?

I have a SharePoint site with multiple lists, some of which have the same fields - a choice of products or countries. How can I build the lists in a way that I configure the choice field once and use it in multiple lists, so that in the future, if I add a value to the choice, I add it only once?

A: If you go to Site Settings, under Galleries there is an option for Site Columns. You can create your choice list there. Then, under the Library Settings there is an option to Add From Existing Site Columns. You should be able to see and select your newly created column there.

A: You should create a list which contains the countries. Then in the lists where you want to reuse the countries lookup, create a column of type Lookup and select the countries list in the "Get information from" dropdown. Here is a link to a more visual guide: http://blog.phase2int.com/?p=101
{ "language": "en", "url": "https://stackoverflow.com/questions/82269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Placing Share Documents subfolder as a webpart in SharePoint

I want to place a Webpart on a page that holds a subfolder of the Document Library in SharePoint, but somehow, the only thing I get is the root folder of the document library. Is there a Webpart that fills this need?

A: Here is how to do it in SharePoint 2010 with only JavaScript, no SharePoint Designer necessary.

* *create a document library web part on your web part page
*change the view to show all items without folders and set the item limit to a sufficiently large number so that there are no batches
*add a Content Editor web part below the document library web part
*Add the following javascript and change the first variable to meet your needs

Note: If you have more than one Document Library web part, you will need to add to this code.

<script type="text/javascript" language="javascript">
// change this to meet your needs
var patt = /FOLDER%20TO%20SEARCH/gi;

var x = document.getElementsByTagName("TD"); // find all of the TDs
var i = 0;
for (i = 0; i < x.length; i++) {
    if (x[i].className == "ms-vb-title") // find the TDs styled for documents
    {
        var y = x[i].getElementsByTagName("A"); // this gets the URL linked to the name field
        // conveniently the URL is the first variable in the array. YMMV.
        var title = y[0];
        // search for pattern
        var result = patt.test(title);
        // If the pattern isn't in that row, do not display the row
        if (!result) {
            x[i].parentNode.style.display = "none"; // and hide the row
        }
    }
}
</script>

A: By default I don't think that is possible. The list web part that would show the Shared Documents understands how to render the library, but doesn't understand how to filter to only show the contents of one subfolder. It would be nice to create a Filter Web Part and to provide that filter to the List web part, so that it filters according to the sub folder defined within the fileref field of the document library. However, the filters it appears to be able to consume are Type, Modified and Modified By. So you could filter it to just the documents you touched, but not the ones in a given location. End result: roll your own web part.

A: The reason is that the folder selected by the webpart is not controlled by the webpart itself, but by a querystring parameter, e.g.

"?RootFolder=%2fDocuments%2fMyFolder1&FolderCTID="

So folders are not "real" folders as such, despite the "lie" that is the webdav interface, e.g. \\sharepointsite\documents. There should be a way of including the desired RootFolder parameter, like linking to the page with the querystring included (far from ideal). I do not know of any webparts that do this.

A: I was able to do this by creating a new Column and specifying a keyword for the entire Shared Documents list. Then I had to add metadata. Add the WebPart again to the page. Create a View that enabled the display of the files as a flat list, and filter on the new Column (i.e. where Keyword is/contains ----). Then I get the list I want on the page with the web part.

A: I have a workaround I've used that doesn't require Designer. Not as elegant, but achievable by any power user. After you've added the library web part, go to the page and click down to the folder you want to be the default. See that the page link now shows something like:

www.mysite.com/sharepoint/default.aspx?RootFolder=%2Fsubfoldername&FolderCTID=...

Copy that link. Delete &FolderCTID and everything that follows.
In this case what remains is: www.mysite.com/sharepoint/default.aspx?RootFolder=%2Fsubfoldername Use this link for navigation to the page and the library will display as you want within that page. Be aware it does not replace the default view for that page. A: Another way to face this issue would be to just use the Content Search WebPart (CSWP) and filter the results based on: * *folder path *url depth You will need a UrlDepth value that matches your requirement. The best thing is to use a high value, like 10, and then reduce it until it shows just the files you need. Regarding the folder path, remove the quotes ("); this way the query will perform a "contains" lookup instead of an "equal to": The result will be something like this: path:[your site]/Docs/our_team UrlDepth:7 If the folder name contains spaces, you may need to wrap it in quotes, something like: path:[your site]/Docs/"our team" A: One alternative I've used is to drop a Page Viewer Web Part on the page and choose "Folder" as the type of thing to view. Then specify the webdav UNC to the folder such as "\\some_sharepoint-site\some_site\shared documents\some_folder\" A: Place the document library list view web part on any page. Edit the web part. From the filter, select the column "Content Type" and the value "Folder". Save and you are done. By doing that it will show you the root folder's files only.
{ "language": "en", "url": "https://stackoverflow.com/questions/82286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it possible to build a Linux/Motif Eclipse RCP application? I am trying to build an Eclipse application that would work with a linux/motif installation target. However, this seems not to be possible even though the export option is available in the product export wizard. I've checked the content of the delta pack and indeed, the packages for linux/motif are missing. After checking the downloads page for eclipse 3.4 at: http://download.eclipse.org/eclipse/downloads/drops/R-3.4-200806172000/index.php I see that even though there is an Eclipse version marked for Linux/motif, it is marked as Testing only. Additionally, there is no delta pack for this target. Has anyone been successful building an RCP application targeting linux/motif? Would it work if I download this testing only version of eclipse and copy the missing plugins? A: We have a similar issue. We are building Eclipse applications and one of our platforms is Solaris 10 x86 which was supported for a short time as an early access build in 3.2 and dropped. I believe 3.2 and 3.3 supported motif so your best bet may be to revert to an older version of Eclipse. I develop in 3.4 and when we do the Solaris specific release we switch back to 3.2, it is usually about 10 minutes of changes to fix everything for the prior version. Usually it is removing @overides in a few locations and changing a function or two that Eclipse no longer uses. The other thing you can do is get the Linux/Motif package for Eclipse, and install it on a Linux box running Motif. Check out your project on that Eclipse machine and export it there. I tried out VirtualBox (a free Virtual Machine from Sun Microsystems) it should make this easy for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/82305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I determine the length (i.e. duration) of a .wav file in C#? In the uncompressed situation I know I need to read the wav header, pull out the number of channels, bits, and sample rate and work it out from there: (channels) * (bits) * (samples/s) * (seconds) = (filesize) Is there a simpler way - a free library, or something in the .net framework perhaps? How would I do this if the .wav file is compressed (with the mpeg codec for example)? A: Yes, there is a free library that can be used to get the duration of an audio file. This library also provides many more features. TagLib TagLib is distributed under the GNU Lesser General Public License (LGPL) and Mozilla Public License (MPL). I implemented the code below, which returns the duration in seconds. using TagLib.Mpeg; public static double GetSoundLength(string FilePath) { AudioFile ObjAF = new AudioFile(FilePath); return ObjAF.Properties.Duration.TotalSeconds; } A: I had difficulties with the MediaPlayer-class example above: it can take some time before the player has opened the file. In the "real world" you have to register for the MediaOpened event; only after that has fired is the NaturalDuration valid. In a console app you just have to wait a few seconds after the open. using System; using System.Text; using System.Windows.Media; using System.Windows; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { if (args.Length == 0) return; Console.Write(args[0] + ": "); MediaPlayer player = new MediaPlayer(); Uri path = new Uri(args[0]); player.Open(path); TimeSpan maxWaitTime = TimeSpan.FromSeconds(10); DateTime end = DateTime.Now + maxWaitTime; while (DateTime.Now < end) { System.Threading.Thread.Sleep(100); Duration duration = player.NaturalDuration; if (duration.HasTimeSpan) { Console.WriteLine(duration.TimeSpan.ToString()); break; } } player.Close(); } } } A: Not to take anything away from the answer already accepted, but I was able to get the duration of an audio file (several different formats, including AC3, which is what I needed at the time) using the Microsoft.DirectX.AudioVideoPlayBack namespace. This is part of DirectX 9.0 for Managed Code. Adding a reference to that made my code as simple as this... Public Shared Function GetDuration(ByVal Path As String) As Integer If File.Exists(Path) Then Return CInt(New Audio(Path, False).Duration) Else Throw New FileNotFoundException("Audio File Not Found: " & Path) End If End Function And it's pretty fast, too! Here's a reference for the Audio class. A: Download NAudio.dll from the link https://www.dll-files.com/naudio.dll.html and then use this function: public static TimeSpan GetWavFileDuration(string fileName) { using (WaveFileReader wf = new WaveFileReader(fileName)) { return wf.TotalTime; } } and you will get the duration. (The using block disposes the reader when done.) A: In the .net framework there is a mediaplayer class: http://msdn.microsoft.com/en-us/library/system.windows.media.mediaplayer_members.aspx Here is an example: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2667714&SiteID=1&pageid=0#2685871 A: You may consider using the mciSendString(...)
function (error checking is omitted for clarity): using System; using System.Text; using System.Runtime.InteropServices; namespace Sound { public static class SoundInfo { [DllImport("winmm.dll")] private static extern uint mciSendString( string command, StringBuilder returnValue, int returnLength, IntPtr winHandle); public static int GetSoundLength(string fileName) { StringBuilder lengthBuf = new StringBuilder(32); mciSendString(string.Format("open \"{0}\" type waveaudio alias wave", fileName), null, 0, IntPtr.Zero); mciSendString("status wave length", lengthBuf, lengthBuf.Capacity, IntPtr.Zero); mciSendString("close wave", null, 0, IntPtr.Zero); int length = 0; int.TryParse(lengthBuf.ToString(), out length); return length; } } } A: Try the code below, from How to determine the length of a .wav file in C#: string path = @"c:\test.wav"; WaveReader wr = new WaveReader(File.OpenRead(path)); int durationInMS = wr.GetDurationInMS(); wr.Close(); A: I have tested that the code below fails when the file path is a UNC path like "\\ip\dir\*.wav": public static class SoundInfo { [DllImport("winmm.dll")] private static extern uint mciSendString ( string command, StringBuilder returnValue, int returnLength, IntPtr winHandle ); public static int GetSoundLength(string fileName) { StringBuilder lengthBuf = new StringBuilder(32); mciSendString(string.Format("open \"{0}\" type waveaudio alias wave", fileName), null, 0, IntPtr.Zero); mciSendString("status wave length", lengthBuf, lengthBuf.Capacity, IntPtr.Zero); mciSendString("close wave", null, 0, IntPtr.Zero); int length = 0; int.TryParse(lengthBuf.ToString(), out length); return length; } } while NAudio works: public static int GetSoundLength(string fileName) { using (WaveFileReader wf = new WaveFileReader(fileName)) { return (int)wf.TotalTime.TotalMilliseconds; } } A: You might find that the XNA library has some support for working with WAV's etc. if you are willing to go down that route. It is designed to work with C# for game programming, so might just take care of what you need. A: There's a bit of a tutorial (with - presumably - working code you can leverage) over at CodeProject. The only thing you have to be a little careful of is that it's perfectly "normal" for a WAV file to be composed of multiple chunks - so you have to scoot over the entire file to ensure that all chunks are accounted for. A: What exactly is your application doing with compressed WAVs? Compressed WAV files are always tricky - I always try and use an alternative container format in this case such as OGG or WMA files. The XNA libraries tend to be designed to work with specific formats - although it is possible that within XACT you'll find a more generic wav playback method. A possible alternative is to look into the SDL C# port, although I've only ever used it to play uncompressed WAVs - once opened you can query the number of samples to determine the length. A: I'm gonna have to say MediaInfo, I have been using it for over a year with an audio/video encoding application I'm working on. It gives all the information for wav files along with almost every other format. MediaInfoDll comes with sample C# code on how to get it working. A: time = FileLength / (Sample Rate * Channels * Bits per sample /8) A: I'm going to assume that you're somewhat familiar with the structure of a .WAV file: it contains a WAVEFORMATEX header struct, followed by a number of other structs (or "chunks") containing various kinds of information. See Wikipedia for more info on the file format.
First, iterate through the .wav file and add up the unpadded lengths of the "data" chunks (the "data" chunk contains the audio data for the file; usually there is only one of these, but it's possible that there could be more than one). You now have the total size, in bytes, of the audio data. Next, get the "average bytes per second" member of the WAVEFORMATEX header struct of the file. Finally, divide the total size of the audio data by the average bytes per second - this will give you the duration of the file, in seconds. This works reasonably well for uncompressed and compressed files. A: Imports System.IO Imports System.Text Imports System.Math Imports System.BitConverter Public Class PulseCodeModulation ' Pulse Code Modulation WAV (RIFF) file layout ' Header chunk ' Type Byte Offset Description ' Dword 0 Always ASCII "RIFF" ' Dword 4 Number of bytes in the file after this value (= File Size - 8) ' Dword 8 Always ASCII "WAVE" ' Format Chunk ' Type Byte Offset Description ' Dword 12 Always ASCII "fmt " ' Dword 16 Number of bytes in this chunk after this value ' Word 20 Data format PCM = 1 (i.e. Linear quantization) ' Word 22 Channels Mono = 1, Stereo = 2 ' Dword 24 Sample Rate per second e.g. 8000, 44100 ' Dword 28 Byte Rate per second (= Sample Rate * Channels * (Bits Per Sample / 8)) ' Word 32 Block Align (= Channels * (Bits Per Sample / 8)) ' Word 34 Bits Per Sample e.g. 8, 16 ' Data Chunk ' Type Byte Offset Description ' Dword 36 Always ASCII "data" ' Dword 40 The number of bytes of sound data (Samples * Channels * (Bits Per Sample / 8)) ' Buffer 44 The sound data Dim HeaderData(43) As Byte Private AudioFileReference As String Public Sub New(ByVal AudioFileReference As String) Try Me.HeaderData = Read(AudioFileReference, 0, Me.HeaderData.Length) Catch Exception As Exception Throw End Try 'Validate file format Dim Encoder As New UTF8Encoding() If "RIFF" <> Encoder.GetString(BlockCopy(Me.HeaderData, 0, 4)) Or _ "WAVE" <> Encoder.GetString(BlockCopy(Me.HeaderData, 8, 4)) Or _ "fmt " <> Encoder.GetString(BlockCopy(Me.HeaderData, 12, 4)) Or _ "data" <> Encoder.GetString(BlockCopy(Me.HeaderData, 36, 4)) Or _ 16 <> ToUInt32(BlockCopy(Me.HeaderData, 16, 4), 0) Or _ 1 <> ToUInt16(BlockCopy(Me.HeaderData, 20, 2), 0) _ Then Throw New InvalidDataException("Invalid PCM WAV file") End If Me.AudioFileReference = AudioFileReference End Sub ReadOnly Property Channels() As Integer Get Return ToUInt16(BlockCopy(Me.HeaderData, 22, 2), 0) 'mono = 1, stereo = 2 End Get End Property ReadOnly Property SampleRate() As Integer Get Return ToUInt32(BlockCopy(Me.HeaderData, 24, 4), 0) 'per second End Get End Property ReadOnly Property ByteRate() As Integer Get Return ToUInt32(BlockCopy(Me.HeaderData, 28, 4), 0) 'sample rate * channels * (bits per channel / 8) End Get End Property ReadOnly Property BlockAlign() As Integer Get Return ToUInt16(BlockCopy(Me.HeaderData, 32, 2), 0) 'channels * (bits per sample / 8) End Get End Property ReadOnly Property BitsPerSample() As Integer Get Return ToUInt16(BlockCopy(Me.HeaderData, 34, 2), 0) End Get End Property ReadOnly Property Duration() As Integer Get Dim Size As Double = ToUInt32(BlockCopy(Me.HeaderData, 40, 4), 0) Dim ByteRate As Double = ToUInt32(BlockCopy(Me.HeaderData, 28, 4), 0) Return Ceiling(Size / ByteRate) End Get End Property Public Sub Play() Try My.Computer.Audio.Play(Me.AudioFileReference, AudioPlayMode.Background) Catch Exception As Exception Throw End Try End Sub Public Sub Play(playMode As AudioPlayMode) Try
My.Computer.Audio.Play(Me.AudioFileReference, playMode) Catch Exception As Exception Throw End Try End Sub Private Function Read(AudioFileReference As String, ByVal Offset As Long, ByVal Bytes As Long) As Byte() Dim inputFile As System.IO.FileStream Try inputFile = IO.File.Open(AudioFileReference, IO.FileMode.Open) Catch Exception As FileNotFoundException Throw New FileNotFoundException("PCM WAV file not found") Catch Exception As Exception Throw End Try Dim BytesRead As Long Dim Buffer(Bytes - 1) As Byte Try BytesRead = inputFile.Read(Buffer, Offset, Bytes) Catch Exception As Exception Throw Finally Try inputFile.Close() Catch Exception As Exception 'Eat the second exception so as to not mask the previous exception End Try End Try If BytesRead < Bytes Then Throw New InvalidDataException("PCM WAV file read failed") End If Return Buffer End Function Private Function BlockCopy(ByRef Source As Byte(), ByVal Offset As Long, ByVal Bytes As Long) As Byte() Dim Destination(Bytes - 1) As Byte Try Buffer.BlockCopy(Source, Offset, Destination, 0, Bytes) Catch Exception As Exception Throw End Try Return Destination End Function End Class
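To make the header math from the answers above concrete, here is a minimal C# sketch (my own, not from any answer) that walks the RIFF chunks of an uncompressed PCM file, pulls the average-bytes-per-second field out of the "fmt " chunk, and divides the "data" chunk size by it. It assumes a well-formed file and does no error handling:

using System;
using System.IO;
using System.Text;

public static class WavLength
{
    public static double GetDurationSeconds(string path)
    {
        using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
        {
            reader.ReadBytes(12); // skip "RIFF", the file size and "WAVE"
            int byteRate = 0;
            long dataBytes = 0;
            while (reader.BaseStream.Position <= reader.BaseStream.Length - 8)
            {
                string chunkId = Encoding.ASCII.GetString(reader.ReadBytes(4));
                int chunkSize = reader.ReadInt32();
                if (chunkId == "fmt ")
                {
                    reader.ReadBytes(8);              // format tag, channels, sample rate
                    byteRate = reader.ReadInt32();    // average bytes per second
                    reader.ReadBytes(chunkSize - 12); // rest of the fmt chunk
                }
                else
                {
                    if (chunkId == "data") dataBytes += chunkSize;
                    // chunks are word-aligned, so skip the pad byte on odd sizes
                    reader.BaseStream.Seek(chunkSize + (chunkSize % 2), SeekOrigin.Current);
                }
            }
            return byteRate > 0 ? (double)dataBytes / byteRate : 0.0;
        }
    }
}

Calling WavLength.GetDurationSeconds(@"c:\test.wav") then gives the same number as the formula answer: data size divided by (sample rate * channels * bits per sample / 8).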
{ "language": "en", "url": "https://stackoverflow.com/questions/82319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: How to Set Grid Column MaxWidth depending on Window or Screen Size in XAML I have a 3 column grid in a window with a GridSplitter on the first column. I want to set the MaxWidth of the first column to a third of the parent Window or Page Width (or ActualWidth) and I would prefer to do this in XAML if possible. This is some sample XAML to play with in XamlPad (or similar) which shows what I'm doing. <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:sys="clr-namespace:System;assembly=mscorlib" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Grid> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="Column1" Width="200"/> <ColumnDefinition x:Name="Column2" MinWidth="50" /> <ColumnDefinition x:Name="Column3" Width="{ Binding ElementName=Column1, Path=Width }"/> </Grid.ColumnDefinitions> <Label Grid.Column="0" Background="Green" /> <GridSplitter Grid.Column="0" Width="5" /> <Label Grid.Column="1" Background="Yellow" /> <Label Grid.Column="2" Background="Red" /> </Grid> </Page> As you can see, the right column width is bound to the width of the first column, so when you slide the left column using the splitter, the right column does the same :) If you slide the left column to the right, eventually it will slide over half the page/window and over to the right side of the window, pushing away column 2 and 3. I want to prevent this by setting the MaxWidth of column 1 to a third of the window width (or something like that). I can do this in code behind quite easily, but how to do it in "XAML Only"? EDIT: David Schmitt suggested to use SharedSizeGroup instead of binding, which is an excellent suggestion. My sample code would look like this then: <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:sys="clr-namespace:System;assembly=mscorlib" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Grid IsSharedSizeScope="True"> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="Column1" SharedSizeGroup="ColWidth" Width="40"/> <ColumnDefinition x:Name="Column2" MinWidth="50" Width="*" /> <ColumnDefinition x:Name="Column3" SharedSizeGroup="ColWidth"/> </Grid.ColumnDefinitions> <Label Grid.Column="0" Background="Green" /> <GridSplitter Grid.Column="0" Width="5" /> <Label Grid.Column="1" Background="Yellow" /> <Label Grid.Column="2" Background="Red" /> </Grid> </Page> A: I think the XAML-only approach is somewhat circuitous, but here is a way to do it. 
<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:sys="clr-namespace:System;assembly=mscorlib" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <!-- This contains our real grid, and a reference grid for binding the layout--> <Grid x:Name="Container"> <!-- hidden because it's behind the grid below --> <Grid x:Name="LayoutReference"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <!-- We need the border, because the column doesn't have an ActualWidth --> <Border x:Name="ReferenceBorder" Background="Black" /> <Border Background="White" Grid.Column="1" /> <Border Background="Black" Grid.Column="2" /> </Grid> <!-- I made this transparent, so we can see the reference --> <Grid Opacity="0.9"> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="Column1" MaxWidth="{Binding ElementName=ReferenceBorder,Path=ActualWidth}"/> <ColumnDefinition x:Name="Column2" MinWidth="50" /> <ColumnDefinition x:Name="Column3" Width="{ Binding ElementName=Column1, Path=Width }"/> </Grid.ColumnDefinitions> <Label Grid.Column="0" Background="Green"/> <GridSplitter Grid.Column="0" Width="5" /> <Label Grid.Column="1" Background="Yellow" /> <Label Grid.Column="2" Background="Red" /> </Grid> </Grid> </Page> A: Too lazy to actually write it up myself, but you should be able to use a mathematical converter and bind to your parent window's width (either by name, or with a RelativeSource ancestor search). //I know I borrowed this from someone, sorry I forgot to add a comment from whom public class ScaledValueConverter : IValueConverter { public Object Convert(Object value, Type targetType, Object parameter, System.Globalization.CultureInfo culture) { Double scalingFactor = 0; if (parameter != null) { Double.TryParse((String)(parameter), out scalingFactor); } if (scalingFactor == 0.0d) { return Double.NaN; } return (Double)value * scalingFactor; } public Object ConvertBack(Object value, Type targetType, Object parameter, System.Globalization.CultureInfo culture) { throw new Exception("The method or operation is not implemented."); } }
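For completeness, here is one hedged way to wire that converter up from code-behind (types from System.Windows.Data and System.Windows.Controls); the names Container and Column1 refer to the samples above, and the parameter is just the "one third" from the question:

// Bind Column1.MaxWidth to a third of the containing grid's ActualWidth
// via the ScaledValueConverter shown above.
Binding maxWidthBinding = new Binding("ActualWidth")
{
    Source = Container,                  // the outer Grid from the XAML above
    Converter = new ScaledValueConverter(),
    ConverterParameter = "0.3333"        // roughly a third of the available width
};
BindingOperations.SetBinding(Column1, ColumnDefinition.MaxWidthProperty, maxWidthBinding);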
{ "language": "en", "url": "https://stackoverflow.com/questions/82323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Using CreateItemFromTemplate to process an olEmbeddeditem Outlook attachment I am using C# to process a message in my Outlook inbox that contains attachments. One of the attachments is of type olEmbeddeditem. I need to be able to process the contents of that attachment. From what I can tell I need to save the attachment to disk and use CreateItemFromTemplate which would return an object. The issue I have is that an olEmbeddeditem can be any of the Outlook object types MailItem, ContactItem, MeetingItem, etc. How do you know which object type a particular olEmbeddeditem attachment is going to be so that you know the object that will be returned by CreateItemFromTemplate? Alternatively, if there is a better way to get olEmbeddeditem attachment contents into an object for processing I'd be open to that too. A: I found the following code on Google Groups for determining the type of an Outlook object: Type t = SomeOutlookObject.GetType(); string messageClass = t.InvokeMember("MessageClass", BindingFlags.Public | BindingFlags.GetField | BindingFlags.GetProperty, null, SomeOutlookObject, new object[]{}).ToString(); Console.WriteLine("\tType: " + messageClass); I don't know if that helps with an olEmbeddeditem, but it seems to identify regular messages, calendar items, etc. A: When working with email attachments that are themselves emails, which in turn contain user-defined properties that I want to access, I perform the following steps: Outlook.Application mailApplication = new Outlook.Application(); Outlook.NameSpace mailNameSpace = mailApplication.GetNamespace("mapi"); // make sure it is an embedded item if (myAttachment.Type == Outlook.OlAttachmentType.olEmbeddeditem) { myAttachment.SaveAsFile("temp.msg"); Outlook.MailItem attachedEmail = (Outlook.MailItem)mailNameSpace.OpenSharedItem("temp.msg"); String customProperty = attachedEmail.PropertyAccessor.GetProperty( "http://schemas.microsoft.com/mapi/string/{00020329-0000-0000-c000-000000000046}/myProp"); } If you open the attached message with CreateItemFromTemplate instead, you will not have access to the properties mentioned above: Outlook.MailItem attachedEmail = (Outlook.MailItem)mailApplication.CreateItemFromTemplate("temp.msg");
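Putting the two answers together, a sketch of the dispatch might look like this (my own glue code, not tested against Outlook; IPM.Note and IPM.Contact are the standard message classes for mail and contact items, and System.Reflection is assumed to be imported for BindingFlags):

// Assumes "temp.msg" was written with attachment.SaveAsFile as shown above.
object item = mailNameSpace.OpenSharedItem("temp.msg");
string messageClass = item.GetType().InvokeMember("MessageClass",
    BindingFlags.Public | BindingFlags.GetField | BindingFlags.GetProperty,
    null, item, new object[] { }).ToString();

if (messageClass.StartsWith("IPM.Note"))
{
    Outlook.MailItem mail = (Outlook.MailItem)item;
    // process the embedded mail item...
}
else if (messageClass.StartsWith("IPM.Contact"))
{
    Outlook.ContactItem contact = (Outlook.ContactItem)item;
    // process the embedded contact item...
}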
{ "language": "en", "url": "https://stackoverflow.com/questions/82332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to specify a different session store with Tomcat? Tomcat (version 5 here) stores session information in memory. When clustering, this information is periodically broadcast to other servers in the cluster to keep things in sync. You can use a database store to make sessions persistent but this information is only written periodically as well and is only really used for failure-recovery rather than actually replacing the in-memory sessions. If you don't want to use sticky sessions (our configuration doesn't allow it unfortunately) this raises the problem of the sessions getting out of sync. In other languages, web frameworks tend to allow you to use a database as the primary session store. Whilst this introduces a potential scaling issue it does make session management very straightforward. I'm wondering if there's a way to get tomcat to use a database for sessions in this way (technically this would also remove the need for any clustering configuration in the tomcat server.xml). A: There definitely is a way. Though I'd strongly vote for sticky sessions - saves so much load for your servers/database (unless something fails)... http://tomcat.apache.org/tomcat-5.5-doc/config/manager.html has information about SessionManager configuration and setup for Tomcat. Depending on your exact requirements you might have to implement your own session manager, but this starting point should provide some help. A: Take a look at Terracotta, I think it can address your scaling issues without a major application redesign. A: I've always been a fan of the Rails sessions technique: store the sessions (zipped+encrypted+signed) in the user's cookie. That way you can do load balancing to your heart's content, and not have to worry about sticky sessions, or hitting the database for your session data, etc. I'm just not sure you could implement that easily in a java app without some sort of rewriting of your session-access code. Anyway just a thought. A: Another alternative would be the memcached-session-manager, a memcached based session failover and session replication solution for tomcat 6.x / 7.x. It supports both sticky sessions and non-sticky sessions. I created this project to get the best of performance and reliability and to be able to scale out by just adding more tomcat and memcached nodes.
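For reference, a PersistentManager with a JDBCStore is configured in the web application's context; the following is an untested sketch based on the Tomcat 5.5 manager documentation linked above, with an illustrative connection URL and table/column names (the table must be created by hand). Note this still keeps active sessions in memory and swaps/backs them up to the database, so it is a failure-recovery mechanism rather than the database-as-primary-store model the question asks for:

<Context>
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleBackup="1" saveOnRestart="true">
    <Store className="org.apache.catalina.session.JDBCStore"
           driverName="com.mysql.jdbc.Driver"
           connectionURL="jdbc:mysql://localhost/tomcat?user=app&amp;password=secret"
           sessionTable="tomcat_sessions"
           sessionIdCol="session_id"
           sessionAppCol="app_name"
           sessionDataCol="session_data"
           sessionValidCol="valid_session"
           sessionMaxInactiveCol="max_inactive"
           sessionLastAccessedCol="last_access"/>
  </Manager>
</Context>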
{ "language": "en", "url": "https://stackoverflow.com/questions/82340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Charting in web-based applications What are the various charting tools that are available for displaying charts on a web page using ASP.NET? I know about commercial tools such as Dundas and Infragistics. I could have "googled" this but I want to know the various tools that SO participants have used. Any free charting tools that are available are also welcome to be mentioned. A: If you do not mind using Flash to display your graphs, Open Flash Charts supports a lot of languages. This was also the choice used for the Stackoverflow reputation tracker piece as mentioned in this question A: I like google charts, but check the license before using. A: Hey - don't know if this works for ASP.NET but I've used the ZedGraph tool for my winforms apps and it is really nice. A: ZedGraph works superbly in ASP .NET, and is a superb charting package. Really flexible, and makes attractive graphs. The graphs are generated as static images (PNG by default) and it automatically deletes old ones. Also, it is widely supported, has a great wiki, and a decent code-project tutorial (http://www.codeproject.com/KB/graphics/zedgraph.aspx). A: I used Chart Director for a medium sized project, and loved it. It's incredibly feature-rich, has pretty good documentation, and an amazingly good support forum -- it's one of those ones where you ask a question, and a guy who works for the company that produces the software almost invariably answers it within a few hours. I used it with PHP and MySQL, but as far as I know it works with ASP.NET as well. A: You might like to take a look at the new Google Visualization API. Saw a presentation on this at yesterday's Google Dev. Day in London and it looked very interesting. While it is currently only able to work with data retrieved from Google Spreadsheets, expanding it to handle data retrieval from other sources is a high priority for the Viz. team. HTH. cheers, Rob A: What about using Flotr? The syntax is pretty clean and you can produce some pretty nifty graphs (Check out some examples) with minimal effort. A: If you need to build charts FAST then have a look at this rocket: dsec.com/csp_charts.png You can call the chart server from your ASP.Net scripts. A: If you use SQL Server, then SQL Server reporting services is not bad. It includes a free version of Dundas chart controls which allows you to do basic charting. There are a couple of issues with presentation and making it Firefox friendly but it's a pretty simple solution. - If you've SQL Server of course! A: We have used Telerik's RadChart and MSSQL Reporting Services. A: I would look no farther than Dundas if you have the cha-ching to pay for it. I've used it on several projects and not found a better option. Cheaper with better licensing, yes, but not better in terms of functionality. A: For free flash charting, you may look at FusionCharts Free. Or, if you want something more professional and are ready to shell out $$$, look at FusionCharts v3
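As a footnote to the google charts suggestion above: a chart there is just an image URL, so no server-side code is needed at all. An illustrative request (the data and labels are made up) looks like:

<img src="http://chart.apis.google.com/chart?cht=p&chs=300x150&chd=t:60,40&chl=Served|Cached" />

where cht selects the chart type (p = pie), chs the size in pixels, chd the data and chl the labels.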
{ "language": "en", "url": "https://stackoverflow.com/questions/82345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I read only x number of bytes of the body using Net::HTTP? It seems like the methods of Ruby's Net::HTTP are all or nothing when it comes to reading the body of a web page. How can I read, say, just the first 100 bytes of the body? I am trying to read from a content server that returns a short error message in the body of the response if the file requested isn't available. I need to read enough of the body to determine whether the file is there. The files are huge, so I don't want to get the whole body just to check if the file is available. A: I wanted to do this once, and the only thing that I could think of is monkey patching the Net::HTTP#read_body and Net::HTTP#read_body_0 methods to accept a length parameter, and then in the former just pass the length parameter to the read_body_0 method, where you can read only as much as length bytes. A: Are you sure the content server only returns a short error message in the body if the file requested isn't available? Doesn't it also set the HTTPResponse to something appropriate like 404? In which case you can trap the HTTPClientError derived exception (most likely HTTPNotFound) which is raised when accessing Net::HTTP.value(). If you get an error then your file wasn't there; if you get 200, the file is starting to download and you can close the connection. A: To read the body of an HTTP request in chunks, you'll need to use Net::HTTPResponse#read_body like this: http.request_get('/large_resource') do |response| response.read_body do |segment| print segment end end A: This is an old thread, but the question of how to read only a portion of a file via HTTP in Ruby is still a mostly unanswered one according to my research. Here's a solution I came up with by monkey-patching Net::HTTP a bit: require 'net/http' # provide access to the actual socket class Net::HTTPResponse attr_reader :socket end uri = URI("http://www.example.com/path/to/file") begin Net::HTTP.start(uri.host, uri.port) do |http| request = Net::HTTP::Get.new(uri.request_uri) # calling request with a block prevents body from being read http.request(request) do |response| # do whatever limited reading you want to do with the socket x = response.socket.read(100); # be sure to call finish before exiting the block http.finish end end rescue IOError # ignore end The rescue catches the IOError that's thrown when you call HTTP.finish prematurely. FYI, the socket within the HTTPResponse object isn't a true IO object (it's an internal class called BufferedIO), but it's pretty easy to monkey-patch that, too, to mimic the IO methods you need. For example, another library I was using (exifr) needed the readchar method, which was easy to add: class Net::BufferedIO def readchar read(1)[0].ord end end A: Shouldn't you just use an HTTP HEAD request (Ruby Net::HTTP::Head method) to see if the resource is there, and only proceed if you get a 2xx or 3xx response? This presumes your server is configured to return a 4xx error code if the document is not available. I would argue this was the correct solution. An alternative is to request the HTTP head and look at the content-length header value in the result: if your server is correctly configured, you should easily be able to tell the difference in length between a short message and a long document. Another alternative: set the content-range header field in the request (which again assumes that the server is behaving correctly WRT the HTTP spec).
I don't think that solving the problem in the client after you've sent the GET request is the way to go: by that time, the network has done the heavy lifting, and you won't really save any wasted resources. Reference: http header definitions A: You can't. But why do you need to? Surely if the page just says that the file isn't available then it won't be a huge page (i.e. by definition, the file won't be there)?
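A minimal Ruby sketch of the HEAD approach suggested above (the URL is illustrative); a 2xx response means the file is there, and the content-length header even tells you how big it is without transferring any of the body:

require 'net/http'

uri = URI.parse('http://www.example.com/path/to/huge_file')
Net::HTTP.start(uri.host, uri.port) do |http|
  response = http.head(uri.path)
  if response.is_a?(Net::HTTPSuccess)
    puts "available, #{response['content-length']} bytes"
  else
    puts "not available: #{response.code}"
  end
end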
{ "language": "en", "url": "https://stackoverflow.com/questions/82349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Way to Alt-Tab through all applications in Virtual and Host PC I am using VirtualPc 2007 more and more; the one annoyance is "Alt-Tab". What I want is to be able to use alt-tab to iterate through all of the applications in the host pc and virtual(s) as if they were one long list. Is there a setting or program to do this? A: No. If the VPC is in full screen mode, alt+tab works only within the context of the VPC. If the VPC screen is not focused, you will get the collection of applications that are in the Host (including the instance of VPC). A: Sadly, no. This is possible in Parallels on the Mac, in what they call "Coherence" mode. But Virtual PC does not support that kind of integration. A: No. If the VPC is in full screen mode, alt+tab works only within the context of the VPC. However, you may find the "Host key+DOWN ARROW" shortcut useful. This minimizes the virtual machine and allows you to alt-tab on the host again.
{ "language": "en", "url": "https://stackoverflow.com/questions/82350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: In Struts 1.3, what's the best way for the Controller to fill the View with variables? I've just inherited some old Struts code. If Struts (1.3) follows the MVC pattern, how do the Action classes fill the View with variables to render in HTML? So far, I've seen the Action classes push variables in (1) the HTTP request with request.setAttribute("name", user.getName()) (2) in ActionForm classes, using methods specific to the application: UserForm form = (UserForm) actionForm; form.setUserName(user.getName()); and (3) a requestScope variable that I see in the JSP layer (the view uses JSP), but I can't see in the Action classes. <p style='color: red'><c:out value='${requestScope.userName}' /></p> So, which of these is considered old-school, and what's the recommended way of pushing variables into the View in Struts? A: My Struts days are long over, but as far as I remember we used to place one view-specific bean (which would work as a holder for finer-grained beans or collections of beans) into the request scope within our Action.perform() implementation. This view-specific bean would then be rendered by the view. A: As Struts 1.3 is considered old-school, I'd recommend going with the flow and using the style that already is used throughout the application you inherited. If all different styles are already used, pick the most used one. After that, pick your personal favourite. Mine would be 1 or 3 - the form (2) is usually best suited for data that will eventually be rendered inside some form controls. If this is the case - use the form, otherwise - don't.
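As a sketch of that first answer's view-bean idea under the Struts 1.3 execute() signature (the bean name, forward name and sample value are made up for illustration):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.*;

public class ShowUserAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // one view-specific bean holding everything the JSP needs
        UserViewBean view = new UserViewBean();   // hypothetical holder bean
        view.setUserName("Jane Doe");             // in practice, pulled from your model layer
        request.setAttribute("userView", view);
        return mapping.findForward("success");
    }
}

The JSP then reads it with <c:out value='${requestScope.userView.userName}' />, matching style (3) from the question.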
{ "language": "en", "url": "https://stackoverflow.com/questions/82359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Web Site or Web Application in ASP.NET Which Visual Studio template should be used for an ASP.NET web site, the Web Site template or the Project | Web Application template? A: you'd better read this: http://msdn.microsoft.com/en-us/library/aa730880(VS.80).aspx in my opinion it depends on what you are developing A: Both function and perform similarly, but still differ in the following ways: Web application: * *We can't include C# and VB pages in a single web application. *We can set up dependencies between multiple projects. *Can not edit individual files after deployment without recompiling. *Right choice for enterprise environments where multiple developers work together on creating, testing and deployment. Web site: * *Can mix VB and C# pages in a single website. *Can not establish dependencies. *Edit individual files after deployment. *Right choice when one developer will be responsible for creating and managing the entire website. A: Web application projects work more like a traditional VS project, which has a project file, is compiled in one step and so on. Web site projects work more like classic ASP or PHP-sites. There is no project file (references are stored in the solution file), and pages are recompiled dynamically on the server. The nice thing with web sites is that you can just ftp to the server and change a file in a text editor. You don't need VS. Some may hate that, though. It probably depends on your background. If you are used to ASP or PHP style development, web site projects will seem more natural to you. If you have a traditional application developer background, web application projects will seem more natural. A: If you're using Team Foundation Server for source control, you'll probably have to use a Web Application Project, as you need a .csproj file. There are more details from Jeff Atwood himself: Web Site Projects vs. Web Application Projects Web Site web projects are particularly painful in Team System due to the lack of a physical file that contains project information and metadata. For example, it's impossible to check in code analysis rules on Web Site projects, because the code analysis rules are stored entirely on the client! A: I prefer a website. A website is a collection of files in a directory. It becomes more portable and deployable. A web application clouds the issue with a project file. A: Personally I use web application projects exclusively now. I actually converted a rather large web site to a web application because of compilation times for the web site. I also use pre-build events to move configuration-specific files around; pre-build and post-build events are not available in web sites. A: In Visual Studio 2015, I've come to somewhat prefer web site projects over web app projects. I still use visual studio though because you get Nuget Packaging, you can install nuget packages to both types of projects. However a WebSite Project does not have a project file, you are literally just adding a folder to your solution. However you can still have code, but I prefer to put it in a separate project. In WebApp projects you have your Assets, Css, Views (razor, aspx etc), Controllers/Code Behinds etc all in one project and it just mashes together. I prefer to work with websites in two halves. The front end (css, js, images, "html/cshtml/aspx/ashx/.master/etc") and the back end (all the code). So I create a Web Site project and a class Library to accompany it (in visual studio you can add references to the web site project).
I add my class Library as a dependency and all code is in the class Library. You can still have a global.asax, you just have to tell it that the code behind is in another dll (not the one that the site will compile to). MVC views, you just specify the namespaces like normal (the dll is referenced so the namespaces are there). And in WebForms you just have to remember to include the assembly name with your type references that the code is in. It's a little tedious to get used to, but when you do you have isolated structure, everything is in a place that makes sense and modularized in an easy to maintain way. And the PLUS side is that because the Web Site is just a folder (no project file) it can be opened in Visual Studio Code easily, and other popular text editors, making it easy for designers to work on the css/js/images etc (which are not in the code project). Keeping the layers separated, the designer sees just what they need to see. Now structure wise. I keep my code local on my machine checked into a subversion repository using Tortoise SVN and Visual SVN (java/.net shop). To test locally I install IIS and I set the website project up in IIS locally just like I would on the dev/prod servers. Then I install MSDeploy on the dev/prod servers and I use the Publish web app feature via MSDeploy in visual studio and I use web.config transformations. So I have web.config transformations for dev and prod and the main web.config without transformations is for local testing (so it works for all devs on the project). To previously stated cons: Having a WebSite Project vs a WebApp Project doesn't mean multiple developers can't work on it; that's only if your WebSite Project is on some server somewhere and you are loading it directly from there, which would be bad practice. You can treat a WebSite Project just like any other Visual Studio project: local code, source control, multiple developers. As a final note, an added benefit of separating your code is you can put all of your code in a shared project. Then you can create a Class Library for each port you might do, say one on straight .net 4.6 and another on .net core 5 and link in your shared project. As long as your code is compatible with both, it will build and you don't have any duplicated code files.
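To illustrate the global.asax point above: with the application class compiled into the companion class library (the namespace here is made up), the .asax file shrinks to a single directive, as long as the assembly ends up in the site's Bin folder:

<%@ Application Inherits="MyCompany.Web.Core.Global" %>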
{ "language": "en", "url": "https://stackoverflow.com/questions/82361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Smooth ProgressBar in WPF I'm using the ProgressBar control in a WPF application and I'm getting this old, Windows 3.1 ProgressBlocks thing. In VB6, there was a property to show a smooth ProgressBar. Is there such a thing for WPF? A: This KB article seems to explain what you are looking for... there is a link to a VB version of the article too. A: I was not able to find a direct solution for this. But I found something even better. In WPF, you can use Windows Themes. I am using Windows XP, and I have the Vista-Aero theme on my WPF application, making all controls look like Vista-Aero. Here's the code... Go to Application.xaml.vb and write... Enum appThemes Aero Luna LunaMettalic LunaHomestead Royale End Enum Private Sub Application_Startup(ByVal sender As Object, ByVal e As System.Windows.StartupEventArgs) Handles Me.Startup setTheme(appThemes.Aero) End Sub ''' <summary> ''' Function to set the default theme of this application ''' </summary> ''' <param name="Theme"> ''' Theme of type appThemes ''' </param> ''' <remarks></remarks> Public Sub setTheme(ByVal Theme As appThemes) Dim uri As Uri Select Case Theme Case appThemes.Aero ' Vista Aero Theme uri = New Uri("PresentationFramework.Aero;V3.0.0.0;31bf3856ad364e35;component\\themes/Aero.NormalColor.xaml", UriKind.Relative) Case appThemes.Luna ' Luna Theme uri = New Uri("PresentationFramework.Luna;V3.0.0.0;31bf3856ad364e35;component\\themes/Luna.NormalColor.xaml", UriKind.Relative) Case appThemes.LunaMettalic ' Luna Metallic uri = New Uri("PresentationFramework.Luna;V3.0.0.0;31bf3856ad364e35;component\\themes/Luna.Metallic.xaml", UriKind.Relative) Case appThemes.LunaHomestead ' Luna Homestead uri = New Uri("PresentationFramework.Luna;V3.0.0.0;31bf3856ad364e35;component\\themes/Luna.Homestead.xaml", UriKind.Relative) Case appThemes.Royale ' Royale Theme uri = New Uri("PresentationFramework.Royale;V3.0.0.0;31bf3856ad364e35;component\\themes/Royale.NormalColor.xaml", UriKind.Relative) End Select ' Set the Theme Resources.MergedDictionaries.Add(Application.LoadComponent(uri)) End Sub (I hope you can convert it to C#) A: I am not sure what you want to do. If you simply want a progress bar that "sweeps" from side to side like on starting Vista you could use: IsIndeterminate = true. If you actually want to go from 0% to 100% you have to either animate over the value as shown in this example on msdn: http://msdn.microsoft.com/en-us/library/system.windows.controls.progressbar.aspx or set the value explicitly either in code-behind (most likely from a background worker) or through a binding to a changing value. Nevertheless the WPF ProgressBar should always be "smooth"; there is the possibility that the UI will default to a simpler version through a RemoteDesktop connection. A: I recently was annoyed by the appearance of my progress bars on XP after developing on Vista. I didn't want to try suggestions I'd seen for loading the Vista styles out of dll's, but this article gave me just what I was looking for. vista appearance - no new classes. Plus it has the glassy highlight on a timer. No pictures on the article, but it looks just like Vista's ProgressBar.
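For the animate-the-value option mentioned above, a minimal code-behind sketch (myProgressBar is a hypothetical element name; DoubleAnimation lives in System.Windows.Media.Animation and RangeBase in System.Windows.Controls.Primitives):

// Smoothly animates the bar's Value from 0 to 100 over five seconds.
DoubleAnimation animation = new DoubleAnimation(0, 100, new Duration(TimeSpan.FromSeconds(5)));
myProgressBar.BeginAnimation(RangeBase.ValueProperty, animation);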
{ "language": "en", "url": "https://stackoverflow.com/questions/82365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best way to handle URLs in a multilingual site in ASP.net I need to do a multilingual website, with urls like www.domain.com/en/home.aspx for english www.domain.com/es/home.aspx for spanish In the past, I would set up two virtual directories in IIS, and then detect the URL in global.asax and change the language according to the URL Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs) Dim lang As String If HttpContext.Current.Request.Path.Contains("/en/") Then lang = "en" Else lang = "es" End If Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo(lang) Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(lang) End Sub The solution is more like a hack. I'm thinking about using Routing for a new website. Do you know a better or more elegant way to do it? edit: The question is about the URL handling, not about resources, etc. A: I decided to go with the new ASP.net Routing. Why not urlRewriting? Because I don't want to change the clean URL that routing gives to you. Here is the code: Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs) ' Code that runs on application startup RegisterRoutes(RouteTable.Routes) End Sub Public Sub RegisterRoutes(ByVal routes As RouteCollection) Dim reportRoute As Route Dim DefaultLang As String = "es" reportRoute = New Route("{lang}/{page}", New LangRouteHandler) '* if you want, you can constrain the values 'reportRoute.Constraints = New RouteValueDictionary(New With {.lang = "[a-z]{2}"}) reportRoute.Defaults = New RouteValueDictionary(New With {.lang = DefaultLang, .page = "home"}) routes.Add(reportRoute) End Sub Then LangRouteHandler.vb class: Public Class LangRouteHandler Implements IRouteHandler Public Function GetHttpHandler(ByVal requestContext As System.Web.Routing.RequestContext) As System.Web.IHttpHandler _ Implements System.Web.Routing.IRouteHandler.GetHttpHandler 'Fill the context with the route data, just in case some page needs it For Each value In requestContext.RouteData.Values HttpContext.Current.Items(value.Key) = value.Value Next Dim VirtualPath As String VirtualPath = "~/" + requestContext.RouteData.Values("page") + ".aspx" Dim redirectPage As IHttpHandler redirectPage = BuildManager.CreateInstanceFromVirtualPath(VirtualPath, GetType(Page)) Return redirectPage End Function End Class Finally I use the default.aspx in the root to redirect to the default lang used in the browser list. Maybe this can be done with the route.Defaults, but it doesn't work inside Visual Studio (maybe it works on the server) Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Dim DefaultLang As String = "es" Dim SupportedLangs As String() = {"en", "es"} Dim BrowserLang As String = Mid(Request.UserLanguages(0).ToString(), 1, 2).ToLower If SupportedLangs.Contains(BrowserLang) Then DefaultLang = BrowserLang Response.Redirect(DefaultLang + "/") End Sub Some sources: * Mike Ormond's blog * Chris Cavanagh’s Blog * MSDN A: * *Use urlrewriting.net for asp.net webforms, or routing with mvc. Rewrite www.site.com/en/something.aspx to url: page.aspx?lang=en. UrlRewriting.net can be easily configured via regex in web.config. You can also use routing with webforms now, it's probably similar... *with webforms, let every aspx page inherit from a BasePage class, which then inherits from the Page class. In the BasePage class override "InitializeCulture()" and set the culture info on the thread, like you described in the question. It's good to do that in this order: 1. check url for Lang param, 2. check cookie, 3.
set default lang *For static content (text, pic urls) on pages use LocalResources, or Global if content repeats across the site. You can watch a videocast on using global/local resources on www.asp.net *Prepare the db for multiple languages. But that's another story. A: I personally use resource files. Very efficient, very simple. A: UrlRewriting is the way to go. There is a good article on MSDN on the best ways to do it. http://msdn.microsoft.com/en-us/library/ms972974.aspx A: Kind of a tangent, but I'd actually avoid doing this with different paths unless the different languages are completely separate in content from each other. For Google rank, or for users sharing URLs (one of the big advantages of ‘clean’ URLs), you want the address to stay as constant as possible. You can find users’ language preferences from their browser settings: CultureInfo.CurrentUICulture Then your URL for English or Spanish: www.domain.com/products/newproduct Same address for any language, but the user gets the page in their chosen language. We use this in Canada to provide systems in English and French at the same time. A: To do this with URL Routing, refer to this post: Friendly URLS with URL Routing A: Also, watch out for the new IIS 7.0 URL Rewriting. Excellent article here: http://learn.iis.net/page.aspx/496/iis-url-rewriting-and-aspnet-routing/ I liked this part: Which Option Should You Use? * *If you are developing a new ASP.NET Web application that uses either ASP.NET MVC or ASP.NET Dynamic Data technologies, use ASP.NET routing. Your application will benefit from native support for clean URLs, including generation of clean URLs for the links in your Web pages. Note that ASP.NET routing does not support standard Web Forms applications yet, although there are plans to support it in the future. *If you already have a legacy ASP.NET Web application and do not want to change it, use the URL-rewrite module. The URL-rewrite module allows you to translate search-engine-friendly URLs into a format that your application currently uses. Also, it allows you to create redirect rules that can be used to redirect search-engine crawlers to clean URLs. http://learn.iis.net/page.aspx/496/iis-url-rewriting-and-aspnet-routing/ Thanks, Maulik.
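A C# sketch of that BasePage idea, following the 1-2-3 order given above (url parameter, then cookie, then default; the parameter and cookie names are made up):

public class BasePage : System.Web.UI.Page
{
    protected override void InitializeCulture()
    {
        // 1. check the url for a lang param, 2. check the cookie, 3. fall back
        string lang = Request.QueryString["lang"];
        if (string.IsNullOrEmpty(lang) && Request.Cookies["lang"] != null)
            lang = Request.Cookies["lang"].Value;
        if (string.IsNullOrEmpty(lang))
            lang = "en";

        System.Threading.Thread.CurrentThread.CurrentUICulture =
            System.Globalization.CultureInfo.GetCultureInfo(lang);
        System.Threading.Thread.CurrentThread.CurrentCulture =
            System.Globalization.CultureInfo.CreateSpecificCulture(lang);

        base.InitializeCulture();
    }
}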
{ "language": "en", "url": "https://stackoverflow.com/questions/82380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Should Tables be avoided in HTML at any cost? Is it advisable to use tables in HTML pages (now that we have CSS)? What are the applications of tables? What features/abilities do tables have that are not in CSS? Related Questions * *Tables instead of DIVs *DIV vs TABLE * *DIVs vs. TABLEs a rebuttal please A: No - not at all. But use tables for tabular data. Just don't use them for general layout. But if you display tabular data, like results or maybe even a form, go ahead and use tables! A: I guess I'm not in the majority here, but I still see a lot of use for tables in HTML. Of course, yes, for things like forms, you simply can't beat a table. Trying to line up the labels and accompanying form fields would certainly be possible using DIV's, but what a PITA that would be, and resizing the page would be ugly in some cases. A table works wonders here. But also consider larger issues. For example, have you tried to create a standard 3 column layout with header and footer using only DIV's and CSS? Again, it is very possible, and there are about 1000 websites with examples, but they all tend to suffer from the same problems. The first problem is that the columns never really like to all become the same height without some sort of javascript assistance. The second problem is that changing the layout is often a tricky thing, because it is a bit of a balancing act to get everything "just right" to begin with, so then you don't want to mess with it. Finally, and this goes back to that last point - it ain't easy. You have to lay out all your DIV's, then go back and create the magic CSS that forces those DIV's into the proper position, and then spend a few hours tweaking it until it is right.... ugh. And then looking at the HTML without a viewer really gives you NO idea what the page looks like because it is ALL reinterpreted by the CSS in the end. Now, option B. Use a table. Spend about 30 seconds typing out the <tr> and <td> tags, probably no CSS at all, no javascript 'fixit', and know EXACTLY what it will look like just by looking at the HTML. Certainly there are arguments for and against tables, and purists seem to favor DIV's for layout, but don't use DIV's for religious reasons just because someone told you tables are evil. Use what works best for you, your page, and the way your viewers are going to interact with the page. If that's a table, then use it. If not, don't. A: If only the CSS equivalent were as expedient in setting up a "table-like" layout, then I would love to use CSS. I find the time that it takes to mimic the things that others have listed here (equal heights on cells, auto-growing rows, etc.) is simply not worth the effort. I don't get a return on my investment as opposed to quickly throwing together a table in most cases. All browsers can agree on exactly how a table should be laid out. Bam. Done. Here's how my CSS follies usually go: Try setting up a table-like layout in CSS. Spend 20 minutes or more getting everything just so in Mozilla and then open it up in IE. Spend another 30 minutes tweaking between those two browsers. Pretend like there are only two browsers in the world because I actually need to get some work done. I believe in the promise of CSS: Separation of concerns. The problem is that for those who need to be productive, CSS is not ready for prime time. Or perhaps it's the rendering engines of the browsers - whichever. A: The benefit of CSS is that it separates design and layout from content. If you have tabular data then it makes sense to use a <TABLE> tag.
If you want to lay out different blocks of content then you should use <DIV> or <SPAN> and CSS. A: Tables are for outputting tabular data. Anything that you might display in a spreadsheet, columns of results, that kind of thing. The suggestion of using CSS rather than tables is for columnar layouts, which weren't really actual tables. It was never intended to suggest that tables should be removed completely. A: Tables have no equivalent in CSS2, and they aren't that easy to duplicate using css. The particular part of tables that is hard to reproduce is the auto-sizing of the columns. While it's easy to let one row grow to the same size across the page, it's hard for the next row to match up the column size or each cell size dynamically, and in fact can't be done without using other scripting languages such as javascript, php or others. You can use max and min widths, as well as set percentages for cell sizes, or hard-code cell widths, but dynamically growing cells work fine for one row; it's the next row below it that won't match up. A: Scanning over these answers, I didn't see the one real-world case where tables are necessary for layout: html emails. I work for a very, very large financial company and we track around 600 different jobs a month....many of which are multiple email campaigns. You cannot use css for layout for any mail reader. There are a few inline css specs you can use that relate to color, font face and size, and even line-height is fairly widely available, but for layout you have to use tables, as all the major and most minor email readers (Outlook 97/03, Entourage, MSN etc) read them just fine. The issues with tables come into play when you have td's that do not have height/width specified, whether they contain data or not. So, 'broken' table layouts are usually fixed by paying attention to attributes and yes... whitespace in the html....yes, whitespace that isn't supposed to matter. So, if you ever work for a large company/corporation or you land a very large client, be ready to throw all the current technology out the door and get your html table hat on! A: I think it all depends on time/effort. While a purist might say "only use tables for tabular data," I've used tables to ease cross-browser layouts in the past. For me, it's a matter of time utilization. I can either spend my time cranking away on the CSS to get it right or I can toss it in a table and spend far less time on it. I tend to go this route until things are up and running. Once the functionality is there, I go back and polish the CSS/HTML. A: I have to go with the tables approach here. The reason for this comes simply down to cost. Until a well supported CSS-centric approach to layout comes out, and I am talking about at the macro level...not micro within containers, trying to shoehorn CSS positioning into a generalized approach to layout is inefficient. You have to approach this from the perspective of the person writing the check for the development. A few years ago I contracted to develop and maintain a site for a major hotel chain. We went with a table driven layout; your basic header, body, footer with left/right columns. We also used tables for some of the finer elements like non-graphic buttons. The chain's parent company maintained their own site and went with a pure CSS approach to layout. When IE7 came out our site worked perfectly without any changes. The parent company's site was a mess. Overall they spent about 1000 total hours (between meetings/development/QA/rollout) fixing the issues.
When you are paid to develop a site, part of your responsibility is to mitigate future risk and minimize future development costs, particularly if those costs do not add value to the site (your product). A: Everyone so far has said how tables should only be used for tabular data, and this is true. Take a look at the HTML source of any page on SO and you'll see that they have a different idea... I think their rationale is that sometimes using a table is just so much simpler. Though, there are a lot of really good usability reasons to avoid them. A: Other than for tabular data, tables are unfortunately still necessary if you want to create flexible grid layouts such as complex forms in a cross-browser compatible manner. CSS2 has support for creating flexible grid layouts without using the table element via the display property, but this is not supported in IE6 or IE7. However, for most basic form layouts, CSS should be sufficient. A: My very simple and basic opinion on this is that tables are there for tabular data - not for positioning one thing on top or next to another element because you happen to like it being there. So - if you want to display a table of data: do so (with a table). If you want to position content on the page: use css. A: I believe and hope the era of using tables for layout is gone. Simply put: a table is a table, nothing else.
For one or two page web projects I still use html tables because what they do is obvious. Using CSS is more abstract and some mornings I just haven't had that much coffee. HTML tables can get messy, fast. CSS takes a little more time to get messy but will too. A: Although for some layouts using tables may seem simpler at first, when maintenance time comes, using css pays off. Especially if you ever want to change the position of something, or if you want to use the same layout in several places. IMHO, tables should be used only when presenting tabular data. Never for layout purposes. A: Tables are used for displaying tabular data. There's not much else to add :-D A: If there is a reason for using a table in a semantic way (for example for showing tabular data) use tables. Just don't use tables for layout. There are a lot of search-engine and accessibility benefits that you get with semantic markup. A: Reserving tables for use only for tabular data works most of the time. There are certain layout problems that are much easier to achieve with tables than anything else. For example, vertical alignment and equal-height columns. On the other side of the fence, there have been some complex data tables that I've used floated divs for because tables weren't fitting the bill. Like anything else, use the right tool for the right job, and sometimes the most pragmatic tool isn't what you think it would be. That said, I rarely use tables for layout, even in the examples I gave, because inevitably I hit some problem that makes me have to do it with CSS anyway. Your mileage may vary. A: I also don't like the idea of using tables to arrange items on a page. However, 'tables are bad' should not become a religion. I've encountered one example where I have a three-column layout and needed the center column to grow with the content. In the end the solution that was the simplest and worked well on all needed browsers was to use a table. IMHO CSS still has some shortcomings, and sometimes it's better to use something simple that works rather than just adhere blindly to some idea ('tables are bad...'). A: you can even use them for layout, if your layout is somewhat tabular. I like the feature of tables and cells dynamically adjusting to whatever width the browser window is. Never use any fixed widths or heights; that just messes things up. A: It's easy. Think of your webpage as a real page on paper. You should use tables on a webpage only when you would draw a table on your paper. Another tip is to use tables only with border > 0. This way, you use tables only when you mean you want a table (and not for layout). A: @toddhd, advocating the use of tables for layout simply to save time is a cop-out. If time is the issue, then you can quickly create a columnar layout without tables using a framework like Blueprint CSS or 960 Grid. A: Seems to depend on what browsers were capable of when you started designing. If you began back when only tables worked, then that is what you will probably always want to use, because you know it. Or if that is what you liked to use even after browsers had advanced to be able to render CSS... I just don't like to keep doing the same things over and over because they are easy and I know how to do them. That is boring. I would much rather learn new techniques than dismiss something because I don't know how to use it or I think I don't have time to learn. I don't like to use tables, I don't understand them, they seem alien and offensive, but that is because I began in 2005.
I also don't use design programs to spit out templates because I prefer hand-coding. A: if you need to use a table, your layout probably sucks. Show me a single well-designed site that uses tables, and I'll show you a thousand that don't. A lot of people are throwing around the term "purist". This is rubbish. Would you print newspaper copy in a book? Would you design a brochure with Excel? No? Then why would you use a table to display non-tabular data? Most times the difficulties people face in letting go of tables for layout are a result of their own ignorance of HTML/CSS/Good Design Principles. While you may find it difficult to make multi-column layouts extend evenly in the vertical direction, you might think to try a different method. DIVs are not TABLEs. You can make faux columns using wide borders, page backgrounds, etc. Spend some time actually learning to do what you're setting out to do, or forever be seen as someone without any kind of useful skill. (http://www.alistapart.com would be a good start) I'm honestly surprised that this question is cropping up in 2008. DIVs came around when I was in high school. Stop being a noob. And as was mentioned above, tables fail hard on mobile devices and screen readers. A: I am going to assume that your question is actually: "Should I avoid using HTML tables for layout purposes?". The answer is: No. In fact, tables are specifically meant to be used to control layout. The drawback of using tables to control the layout of an entire page is that the source file rapidly becomes difficult to manually understand and edit. A (possible) advantage of using tables to control the entire layout is if there are specific concerns about cross-browser compatibility and rendering. I prefer to work directly with the HTML source file, and for me using CSS makes that easier than using tables for the entire layout. A: Tables make sense only when you're trying to display tabular data and that's it. For layouts, you should always use DIVs and play with CSS positioning.
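To make that concrete, here is a minimal sketch of the kind of two-column layout that usually tempts people into tables, done with floated divs instead (the ids and widths are made up for illustration; truly equal-height columns would additionally need the faux-columns background trick mentioned above):

<div id="wrap">
  <div id="nav">...navigation links...</div>
  <div id="content">...main content...</div>
</div>

/* accompanying CSS */
#wrap    { width: 960px; margin: 0 auto; }
#nav     { float: left; width: 200px; }
#content { margin-left: 220px; }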
{ "language": "en", "url": "https://stackoverflow.com/questions/82391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: How to match URIs in text? How would one go about spotting URIs in a block of text? The idea is to turn such runs of text into links. This is pretty simple to do if one only considers the http(s) and ftp(s) schemes; however, I am guessing the general problem (considering tel, mailto and other URI schemes) is much more complicated (if it is even possible). I would prefer a solution in C# if possible. Thank you. A: Regexes may prove a good starting point for this, though URIs and URLs are notoriously difficult to match with a single pattern. To illustrate, the simplest of patterns looks fairly complicated (in Perl 5 notation): \w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))* This would match http://example.com/foo/bar-baz and ftp://192.168.0.1/foo/file.txt but would cause problems for at least these:
* mailto:support@stackoverflow.com (no match - no //, but @ present)
* ftp://192.168.0.1.2 (match, but too many numbers, so it's not a valid URI)
* ftp://1000.120.0.1 (match, but the IP address needs numbers between 0 and 255, so it's not a valid URI)
* nonexistantscheme://obvious.false.positive
* http://www.google.com/search?q=uri+regular+expression (match, but query isn't
I think this is a case of the 80:20 rule. If you want to catch most things, then I would do as suggested and find a decent regular expression if you can't write one yourself. If you're looking at text pulled from fairly controlled sources (e.g. machine generated), then this will be the best course of action. If you absolutely positively have to catch every URI that you encounter, and you're looking at text from the wild, then I think I would look for any word with a colon in it e.g. \s(\w:\S+)\s. Once you have a suitable candidate for a URI, then pass it to a real URI parser in the URI class of whatever library you're using. If you're interested in why it's so hard to write a URI pattern, then I guess it would be that the definition of a URI is done with a Type-2 grammar, while regular expressions can only parse languages from Type-3 grammars. A: Whether or not something is a URI is context-dependent. In general the only thing they always have in common is that they start "scheme_name:". The scheme name can be anything (subject to legal characters). But other strings also contain colons without being URIs. So you need to decide what schemes you're interested in. Generally you can get away with searching for "scheme_name:", followed by characters up to a space, for each scheme you care about. Unfortunately URIs can contain spaces, so if they're embedded in text they are potentially ambiguous. There's nothing you can do to resolve the ambiguity - the person who wrote the text would have to fix it. URIs can optionally be enclosed in <>. Most people don't do that, though, so recognising that format will only occasionally help. The Wikipedia article for URI lists the relevant RFCs. [Edit to add: using regular expressions to fully validate URIs is a nightmare - even if you somehow find or create one that's correct, it will be very large and difficult to comment and maintain. Fortunately, if all you're doing is highlighting links, you probably don't care about the odd false positive, so you don't need to validate. Just look for "http://", "mailto:\S*@", etc] A: For a lot of the protocols you could just search for "://" without the quotes. Not sure about the others though.
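To sketch that "find candidates, then let a real parser validate them" approach in C# (the scheme list here is illustrative, not exhaustive):

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static IEnumerable<Uri> FindUris(string text)
{
    // Candidate = one of the schemes we care about, a colon, then a run of non-whitespace.
    foreach (Match m in Regex.Matches(text, @"\b(?:https?|ftps?|mailto|tel):\S+"))
    {
        Uri uri;
        // Hand each candidate to the framework's URI parser instead of
        // trying to validate it with the regex itself.
        if (Uri.TryCreate(m.Value, UriKind.Absolute, out uri))
            yield return uri;
    }
}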
A: Here is a code snippet with regular expressions for various needs: http://snipplr.com/view/6889/regular-expressions-for-uri-validationparsing/ A: That is not easy to do if you want to also match "something.tld", because normal text will have many instances of that pattern, but if you want to match only URIs that begin with a scheme, you can try this regular expression (sorry, I don't know how to plug it into C#) (http|https|ftp|mailto|tel):\S+[/a-zA-Z0-9] You can add more schemes there, and it will match the scheme until the next whitespace character, taking into account that the last character is not invalid (for example as in the very common string "http://www.example.com.") A: the URL Tool for Ubiquity does the following:

findURLs: function(text) {
  var urls = [];
  var matches = text.match(/(\S+\.{1}[^\s\,\.\!]+)/g);
  if (matches) {
    // 'for each' is the old JavaScript 1.x extension available to Firefox add-ons
    for each (var match in matches) {
      urls.push(match);
    }
  }
  return urls;
},

A: The following perl regexp should do the trick. Does c# have perl regexps? /\w+:\/\/[\w][\w\.\/]*/
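For what it's worth, yes: .NET's System.Text.RegularExpressions engine accepts Perl-style patterns, so the expression above carries over almost unchanged (you just drop the /.../ delimiters):

using System;
using System.Text.RegularExpressions;

// Same pattern as the Perl version, minus the surrounding slashes.
var urlPattern = new Regex(@"\w+://[\w][\w./]*");
foreach (Match m in urlPattern.Matches("see http://example.com/foo and ftp://host/file"))
    Console.WriteLine(m.Value); // prints http://example.com/foo, then ftp://host/file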
{ "language": "en", "url": "https://stackoverflow.com/questions/82398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: WAITFOR command Given the problem that a stored procedure on SQL Server 2005, which is looping through a cursor, must be run once an hour and it takes about 5 minutes to run, but it takes up a large chunk of processor time: edit: I'd remove the cursor if I could; unfortunately, I have to be doing a bunch of processing and running other stored procs/queries based on the row. Can I use WAITFOR DELAY '0:0:0.1' before each fetch to act as SQL's version of .Net's Thread.Sleep? Thus allowing the other processes to complete faster at the cost of this procedure's execution time. Or is there another solution I'm not seeing? Thanks A: Putting the WAITFOR inside the loop would indeed slow it down and allow other things to go faster. You might also consider a WHILE loop instead of a cursor - in my experience it runs faster. You might also consider moving your cursor to a fast-forward, read-only cursor - that can limit how much memory it takes up.

declare @minid int, @maxid int, @somevalue int
select @minid = 1, @maxid = 5

while @minid <= @maxid
begin
    set @somevalue = null

    select @somevalue = somefield
    from sometable
    where id = @minid

    print @somevalue

    set @minid = @minid + 1
    waitfor delay '00:00:00.1'
end

A: I'm not sure if that would solve the problem. IMHO the performance problem with cursors is around the amount of memory you use to keep the dataset resident and loop through it; if you then add a waitfor inside the loop you're hogging resources for longer. But I may be wrong here; what I would suggest is to use perfmon to check the server's performance under both conditions, and then make a decision whether it is worth it or not to add the wait. Looking at the tag, I'm assuming you're using MS SQL Server, and not any of the other flavours. A: You could delay the procedure, but that might or might not help you. It depends on how the procedure works. Is it in a transaction, why a cursor (horribly inefficient in SQL Server), where is the slowdown, etc. Perhaps reworking the procedure would make more sense. A: Ever since SQL 2005 included windowing functions and other neat features, I've been able to eliminate cursors in almost all instances. Perhaps your problem would best be served by eliminating the cursor itself? Definitely check out Ranking functions http://msdn.microsoft.com/en-us/library/ms189798.aspx and Aggregate window functions http://msdn.microsoft.com/en-us/library/ms189461.aspx A: I'm guessing that whatever code you have means that the other processes can't access the table your cursor is derived from. Provided that you make the cursor READ_ONLY FAST_FORWARD you should not lock the tables the cursor is derived from. If, however, you need to write, then WAITFOR wouldn't help. Once you've locked the table, it's locked. An option would be to snapshot the tables into a temp table, then cursor/loop through that instead. You would then not be locking the underlying tables, but equally the tables could change while you're processing the snapshot... Dems
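For reference, a minimal sketch of a FAST_FORWARD cursor (which is forward-only and read-only by definition) with the WAITFOR inside the fetch loop; the table, column, and proc names here are made up:

declare @id int;

declare c cursor local fast_forward for
    select id from sometable;

open c;
fetch next from c into @id;

while @@fetch_status = 0
begin
    exec dbo.ProcessRow @id;      -- the per-row processing/other procs go here
    waitfor delay '00:00:00.1';   -- give other processes some breathing room
    fetch next from c into @id;
end

close c;
deallocate c;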
{ "language": "en", "url": "https://stackoverflow.com/questions/82404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a way to override the empty constructor in a class generated by LINQtoSQL? If I have a table in my database called 'Users', there will be a class generated by LINQtoSQL called 'User' with an already declared empty constructor. What is the best practice if I want to override this constructor and add my own logic to it? A: It doesn't look like you can override the empty constructor. Instead, I would create a method that performs the functionality that you need in the empty constructor and returns the new object.

// Add new partial class to extend functionality
public partial class User
{
    // Add additional constructor
    public User(int id)
    {
        ID = id;
    }

    // Add static method to initialize new object
    public static User GetNewUser()
    {
        // functionality
        User user = new User();
        user.Name = "NewName";
        return user;
    }
}

Then elsewhere in your code, instead of using the default empty constructor, do one of the following:

User user1 = new User(1);
User user2 = User.GetNewUser();

A: The default constructor which is generated by the O/R-Designer calls a partial method called OnCreated, so the best practice is not to override the default constructor, but instead implement the partial method OnCreated in MyDataClasses.cs to initialize items:

partial void OnCreated()
{
    Name = "";
}

If you are implementing other constructors, always take care to call the default constructor so the classes will be initialized properly - for example, entity sets (relations) are constructed in the default constructor. A: Setting the DataContext Connection property to 'None' worked for me. Steps below. Open the dbml -> Right Click Properties -> Update Connection in DataContext properties to 'None'. This will remove the empty constructor from the generated code file. -> Create a new partial class for the DataContext with an empty constructor like below:

Partial Class MyDataContext
    Public Sub New()
        MyBase.New(ConfigurationManager.ConnectionStrings("MyConnectionString").ConnectionString, mappingSource)
        OnCreated()
    End Sub
End Class

A: Here's the C# version:

public partial class PENCILS_LinqToSql_DataClassesDataContext
{
    public PENCILS_LinqToSql_DataClassesDataContext() : base(ConnectionString(), mappingSource)
    {
    }

    public static String ConnectionString()
    {
        String CS;
        String Key;

        Key = System.Configuration.ConfigurationManager.AppSettings["DefaultConnectionString"].ToString();
        // Get the actual connection string.
        CS = System.Configuration.ConfigurationManager.ConnectionStrings[Key].ToString();

        return CS;
    }
}
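Putting the two idioms together, a small sketch (the Name default is illustrative): chain any extra constructor to the generated parameterless one so that OnCreated() and the entity sets are still initialized, and put per-instance defaults in OnCreated:

public partial class User
{
    // Extra constructor: ': this()' chains to the generated default
    // constructor, so entity sets and OnCreated() still run.
    public User(int id) : this()
    {
        ID = id;
    }

    // Hook called by the generated default constructor.
    partial void OnCreated()
    {
        Name = "";
    }
}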
{ "language": "en", "url": "https://stackoverflow.com/questions/82409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Prefetch instructions on ARM Newer ARM processors include the PLD and PLI instructions. I'm writing tight inner loops (in C++) which have a non-sequential memory access pattern, but a pattern that, naturally, my code fully understands. I would anticipate a substantial speedup if I could prefetch the next location whilst processing the current memory location, and I would expect this to be quick enough to try out to be worth the experiment! I'm using new expensive compilers from ARM, and they don't seem to be including PLD instructions anywhere, let alone in this particular loop that I care about. How can I include explicit prefetch instructions in my C++ code? A: There should be some compiler-specific features for this. There is no standard way to do it for C/C++. Check out your compiler's Compiler Reference Guide; for the RealView compiler, the prefetch support is documented there. A: If you are trying to extract truly maximum performance from these loops, then I would recommend writing the entire looping construct in assembler. You should be able to use inline assembly depending on the data structures involved in your loop. Even better if you can unroll any piece of your loop (like the parts involved in making the access non-sequential). A: At the risk of asking the obvious: have you verified the compiler's target architecture? For example (humor me), if by default the compiler is targeted to ARM7, you're never going to see the PLD instruction. A: It is not outside the realm of possibility that other optimizations like software pipelining and loop unrolling may achieve the same effect as your prefetching idea (hiding the latency of the loads by overlapping it with useful computation), but without the extra instruction-cache pressure caused by the extra instructions. I would even go so far as to say that this is the case more often than not, for tight inner loops that tend to have few instructions and little control flow. Is your compiler doing these types of traditional optimizations instead? If so, it may be worth looking at the pipeline diagram to develop a more detailed cost model of how your processor works, and evaluate more quantitatively whether prefetching would help.
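If GCC is an option alongside ARM's own compiler, it exposes a portable prefetch intrinsic, __builtin_prefetch(addr, rw, locality), which emits PLD on ARM targets that support it. A rough sketch for a loop whose access order is known in advance (the Item type and process function are made up):

// Prefetch the next element while processing the current one.
void process_all(Item* items, const int* order, int n)
{
    for (int i = 0; i < n; ++i) {
        if (i + 1 < n)
            __builtin_prefetch(&items[order[i + 1]], 0, 1); // 0 = read, low temporal locality
        process(items[order[i]]);
    }
}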
{ "language": "en", "url": "https://stackoverflow.com/questions/82415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Change report data visibility based on rendering format in Reporting Services Is it possible to hide or exclude certain data from a report if it's being rendered in a particular format (csv, xml, excel, pdf, html)? The problem is that I want hyperlinks to other reports to not be rendered when the report is generated in Excel format - but they should be there when the report is rendered in HTML format. A: The way I did this w/SSRS 2005 for a web app using the ReportViewer control is I had a hidden boolean report parameter which was used in the report to decide whether to render text as hyperlinks or not. Then the trick was how to send that parameter value depending on the rendering format. The way I did that was by disabling the ReportViewer export controls (by setting its ShowExportControls property to false) and making my own ASP.NET buttons for each format I wanted to be exportable. The code for those buttons first set the hidden boolean parameter and refreshed the report:

ReportViewer1.ServerReport.SetParameters(New ReportParameter() {New ReportParameter("ExportView", "True")})
ReportViewer1.ServerReport.Refresh()

Then you need to programmatically export the report. See this page for an example of how to do that (ignore the first few lines of code that create and initialize a ReportViewer). A: I don't think this is possible in the 2000 version, but might be in later versions. If I remember right, we ended up making two versions of the report.
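On the report side, one hedged sketch of how such a parameter can switch the links off: drive the textbox's 'Jump to report' action with an expression so that it returns no target when exporting (the parameter and report names here are illustrative):

=IIF(Parameters!ExportView.Value, Nothing, "DetailReport")

Later versions make this easier; from SSRS 2008 R2 onwards there is a built-in RenderFormat global, so the same idea can be written without a custom parameter, e.g. =IIF(Globals!RenderFormat.Name = "EXCEL", Nothing, "DetailReport").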
{ "language": "en", "url": "https://stackoverflow.com/questions/82417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: When Hibernate flushes a Session, how does it decide which objects in the session are dirty? My understanding of Hibernate is that as objects are loaded from the DB they are added to the Session. At various points, depending on your configuration, the session is flushed. At this point, modified objects are written to the database. How does Hibernate decide which objects are 'dirty' and need to be written? Do the proxies generated by Hibernate intercept assignments to fields, and add the object to a dirty list in the Session? Or does Hibernate look at each object in the Session and compare it with the object's original state? Or something completely different? A: Hibernate takes a snapshot of the state of each object that gets loaded into the Session. On flush, each object in the Session is compared with its corresponding snapshot to determine which ones are dirty. SQL statements are issued as required, and the snapshots are updated to reflect the state of the (now clean) Session objects. A: Hibernate does/can use bytecode generation (CGLIB) so that it knows a field is dirty as soon as you call the setter (or even assign to the field, afaict). This immediately marks that field/object as dirty, but doesn't reduce the number of objects that need to be dirty-checked during flush. All it does is impact the implementation of org.hibernate.engine.EntityEntry.requiresDirtyCheck(). It still does a field-by-field comparison to check for dirtiness. I say the above based on a recent trawl through the source code (3.2.6GA), with whatever credibility that adds. Points of interest are:
* SessionImpl.flush() triggers an onFlush() event.
* SessionImpl.list() calls autoFlushIfRequired(), which triggers an onAutoFlush() event (on the tables-of-interest). That is, queries can invoke a flush. Interestingly, no flush occurs if there is no transaction.
* Both those events eventually end up in AbstractFlushingEventListener.flushEverythingToExecutions(), which ends up (amongst other interesting locations) at flushEntities().
* That loops over every entity in the session (source.getPersistenceContext().getEntityEntries()) calling DefaultFlushEntityEventListener.onFlushEntity().
* You eventually end up at dirtyCheck(). That method does make some optimizations wrt the CGLIB dirty flags, but we've still ended up looping over every entity.
A: Take a look at org.hibernate.event.def.DefaultFlushEntityEventListener.dirtyCheck. Every element in the session goes to this method to determine if it is dirty or not, by comparing with an untouched version (one from the cache or one from the database). A: Hibernate's default dirty-checking mechanism traverses the currently attached entities and matches all properties against their initial load-time values. A: These answers are incomplete (at best -- I am not an expert here). If you have a Hibernate-managed entity in your session and you do NOTHING to it, you can still get an update issued when you call save() on it. When? When another session updates that object between your load() and save(). Here is my example of this: hibernate sets dirty flag (and issues update) even though client did not change value
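To see the snapshot comparison from the first answer in action, note that no explicit update() call is needed; a fragment assuming a mapped Item entity with a price property:

// Loading the entity stores a snapshot of its state in the Session.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Item item = (Item) session.get(Item.class, 42L);

// A plain setter call; nothing is written yet, and nothing is
// explicitly marked dirty.
item.setPrice(new BigDecimal("12.99"));

// The flush triggered by commit compares each entity with its
// snapshot, finds 'price' changed, and issues a single UPDATE.
tx.commit();
session.close();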
{ "language": "en", "url": "https://stackoverflow.com/questions/82429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Hosting a website on your own server Is there a detailed guide which explains how to host a website on your own server on linux? I have currently hosted it on one of the commercial web hosts. Also, the domain is registered to a different vendor. Thanks A: This guide is probably more info than you really requested, but webserver information is in there. It's Gentoo-specific, but you can apply the same information with minor translations to any other distro. A: I think it depends on how familiar you are with linux. Certainly, many people do this for hobbyist websites. There are many aspects involved - you should begin with something simple like getting apache running and visible to the outside world. A: I would look into installing apache. 99% of linux distributions will have a package for it. On ubuntu you can run:

sudo apt-get install apache2

Are you considering hosting a web page locally for the internet? Or is this just for development etc.? If it's for an internet server, you will need a stable internet connection with a good upstream. You may also need a static IP address so you can set up DNS to point to the right place. A: While I don't have a url to a good tutorial in english, I would just warn you that this is not something you should take lightly. Administering a server involves getting your hands dirty in linux stuff, and dealing with security can be pretty complex depending on your knowledge and requirements. So if you know nothing about it, you should be very careful, and if the website you host is of any commercial importance you are probably better off hiring a server admin. A: Just to point out: if this is a personal (home) server, as opposed to one in a corporate environment, then it's better not to bother hosting it - you won't necessarily have the bandwidth, and your ISP may not allow it. As mentioned above, you will also need a static IP address, and you'll need to set up DNS records to point to the correct location, which your domain vendor may or may not help you with.
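Once Apache is installed, the main remaining step on the server side is a virtual host that maps your domain to a directory; a minimal sketch (the domain, paths, and log names are placeholders):

<VirtualHost *:80>
    ServerName   www.example.com
    ServerAlias  example.com
    DocumentRoot /var/www/example
    ErrorLog     /var/log/apache2/example-error.log
    CustomLog    /var/log/apache2/example-access.log combined
</VirtualHost>

On Debian/Ubuntu you would save this under /etc/apache2/sites-available/, enable it with a2ensite, reload Apache, and then point your domain's A record at your static IP.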
{ "language": "en", "url": "https://stackoverflow.com/questions/82431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why is it impossible to override a getter-only property and add a setter? Why is the following C# code not allowed:

public abstract class BaseClass
{
    public abstract int Bar { get; }
}

public class ConcreteClass : BaseClass
{
    public override int Bar
    {
        get { return 0; }
        set { }
    }
}

CS0546 'ConcreteClass.Bar.set': cannot override because 'BaseClass.Bar' does not have an overridable set accessor A: I think the main reason is simply that the syntax is too explicit for this to work any other way. This code:

public override int MyProperty { get { ... } set { ... } }

is quite explicit that both the get and the set are overrides. There is no set in the base class, so the compiler complains. Just like you can't override a method that's not defined in the base class, you can't override a setter either. You might say that the compiler should guess your intention and only apply the override to the method that can be overridden (i.e. the getter in this case), but this goes against one of the C# design principles - that the compiler must not guess your intentions, because it may guess wrong without you knowing. I think the following syntax might do nicely, but as Eric Lippert keeps saying, implementing even a minor feature like this is still a major amount of effort...

public int MyProperty
{
    override get { ... } // not valid C#
    set { ... }
}

or, for autoimplemented properties,

public int MyProperty { override get; set; } // not valid C#

A: You could perhaps go around the problem by creating a new property:

public new int Bar
{
    get { return 0; }
    set { }
}

int IBase.Bar
{
    get { return Bar; }
}

A: It's possible. tl;dr: You can override a get-only method with a setter if you want. It's basically just:
* Create a new property that has both a get and a set using the same name.
* override the prior get to alias the new get.
This enables us to override properties with get/set even if they lacked a setter in their base definition. Situation: Pre-existing get-only property. You have some class structure that you can't modify. Maybe it's just one class, or it's a pre-existing inheritance tree. Whatever the case, you want to add a set method to a property, but can't.

public abstract class A // Pre-existing class; can't modify
{
    public abstract int X { get; } // You want a setter, but can't add it.
}

public class B : A // Pre-existing class; can't modify
{
    public override int X { get { return 0; } }
}

Problem: Can't override the get-only with get/set. You want to override with a get/set property, but it won't compile.

public class C : B
{
    private int _x;
    public override int X
    {
        get { return _x; }
        set { _x = value; } // Won't compile
    }
}

Solution: Use an abstract intermediate layer. While you can't directly override with a get/set property, you can:
* Create a new get/set property with the same name.
* override the old get method with an accessor to the new get method to ensure consistency.
So, first you write the abstract intermediate layer:

public abstract class C : B
{
    // Seal off the old getter. From now on, its only job
    // is to alias the new getter in the base classes.
    public sealed override int X { get { return this.XGetter; } }

    protected abstract int XGetter { get; }
}

Then, you write the class that wouldn't compile earlier. It'll compile this time because you're not actually override'ing the get-only property; instead, you're replacing it using the new keyword.
public class D : C
{
    private int _x;

    public new virtual int X
    {
        get { return this._x; }
        set { this._x = value; }
    }

    // Ensure base classes (A,B,C) use the new get method.
    protected sealed override int XGetter { get { return this.X; } }
}

Result: Everything works!

var d = new D();
var a = d as A;
var b = d as B;
var c = d as C;

Print(a.X); // Prints "0", the default value of an int.
Print(b.X); // Prints "0", the default value of an int.
Print(c.X); // Prints "0", the default value of an int.
Print(d.X); // Prints "0", the default value of an int.

// a.X = 7; // Won't compile: A.X doesn't have a setter.
// b.X = 7; // Won't compile: B.X doesn't have a setter.
// c.X = 7; // Won't compile: C.X doesn't have a setter.
d.X = 7; // Compiles, because D.X does have a setter.

Print(a.X); // Prints "7", because 7 was set through D.X.
Print(b.X); // Prints "7", because 7 was set through D.X.
Print(c.X); // Prints "7", because 7 was set through D.X.
Print(d.X); // Prints "7", because 7 was set through D.X.

Discussion. This method allows you to add set methods to get-only properties. You can also use it to do stuff like:
* Change any property into a get-only, set-only, or get-and-set property, regardless of what it was in a base class.
* Change the return type of a method in derived classes.
The main drawbacks are that there's more coding to do and an extra abstract class in the inheritance tree. This can be a bit annoying with constructors that take parameters because those have to be copy/pasted in the intermediate layer. Bonus: You can change the property's return type. As a bonus, you can also change the return type if you want.
* If the base definition was get-only, then you can use a more-derived return type.
* If the base definition was set-only, then you can use a less-derived return type.
* If the base definition was already get/set, then: you can use a more-derived return type if you make it set-only; you can use a less-derived return type if you make it get-only.
In all cases, you can keep the same return type if you want. A: I stumbled across the very same problem today and I think I have a very valid reason for wanting this. First I'd like to argue that having a get-only property doesn't necessarily translate into read-only. I interpret it as "From this interface/abstract class you can get this value"; that doesn't mean that some implementation of that interface/abstract class won't need the user/program to set this value explicitly. Abstract classes serve the purpose of implementing part of the needed functionality. I see absolutely no reason why an inherited class couldn't add a setter without violating any contracts. The following is a simplified example of what I needed today. I ended up having to add a setter in my interface just to get around this. The reason for adding the setter and not adding, say, a SetProp method is that one particular implementation of the interface used DataContract/DataMember for serialization of Prop, which would have been made needlessly complicated if I had to add another property just for the purpose of serialization.

interface ITest
{
    // Other stuff
    string Prop { get; }
}

// Implements other stuff
abstract class ATest : ITest
{
    abstract public string Prop { get; }
}

// This implementation of ITest needs the user to set the value of Prop
class BTest : ATest
{
    string foo = "BTest";
    public override string Prop
    {
        get { return foo; }
        set { foo = value; } // Not allowed:
        // 'BTest.Prop.set': cannot override because 'ATest.Prop' does not have an overridable set accessor
    }
}

// This implementation of ITest generates the value for Prop itself
class CTest : ATest
{
    string foo = "CTest";
    public override string Prop
    {
        get { return foo; }
        // set; // Not needed
    }
}

I know this is just a "my 2 cents" post, but I feel with the original poster, and trying to rationalize that this is a good thing seems odd to me, especially considering that the same limitations don't apply when inheriting directly from an interface. Also, the mention of using new instead of override does not apply here; it simply doesn't work, and even if it did it wouldn't give you the result wanted, namely a virtual getter as described by the interface. A: I can understand all your points, but effectively, C# 3.0's automatic properties become useless in that case. You can't do anything like that:

public class ConcreteClass : BaseClass
{
    public override int Bar { get; private set; }
}

IMO, C# should not restrict such scenarios. It's the responsibility of the developer to use it accordingly. A: I agree that not being able to override a getter in a derived type is an anti-pattern. Read-only specifies lack of implementation, not a contract of a pure function (implied by the top-voted answer). I suspect Microsoft had this limitation either because the same misconception was promoted, or perhaps because of simplifying grammar; though, now that scope can be applied to get or set individually, perhaps we can hope override can be too. The misconception indicated by the top-voted answer, that a read-only property should somehow be more "pure" than a read/write property, is ridiculous. Simply look at many common read-only properties in the framework; the value is not a constant / purely functional; for example, DateTime.Now is read-only, but anything but a pure functional value. An attempt to 'cache' a value of a read-only property assuming it will return the same value next time is risky. In any case, I've used one of the following strategies to overcome this limitation; both are less than perfect, but will allow you to limp beyond this language deficiency:

class BaseType
{
    public virtual T LastRequest { get { ... } }
}

class DerivedTypeStrategy1 : BaseType
{
    /// Get or set the value returned by the LastRequest property.
    public T LastRequestValue { get; set; }

    public override T LastRequest { get { return LastRequestValue; } }
}

class DerivedTypeStrategy2 : BaseType
{
    /// Set the value returned by the LastRequest property.
    public void SetLastRequest(T value) { this._x = value; }

    public override T LastRequest { get { return _x; } }

    private T _x;
}

A: The problem is that for whatever reason Microsoft decided that there should be three distinct types of properties: read-only, write-only, and read-write, only one of which may exist with a given signature in a given context; properties may only be overridden by identically-declared properties. To do what you want it would be necessary to create two properties with the same name and signature - one of which was read-only, and one of which was read-write. Personally, I wish that the whole concept of "properties" could be abolished, except that property-ish syntax could be used as syntactic sugar to call "get" and "set" methods. This would not only facilitate the 'add set' option, but would also allow for 'get' to return a different type from 'set'.
While such an ability wouldn't be used terribly often, it could sometimes be useful to have a 'get' method return a wrapper object while the 'set' could accept either a wrapper or actual data. A: Here is a work-around in order to achieve this using Reflection:

var UpdatedGiftItem = // object value to update;

foreach (var proInfo in UpdatedGiftItem.GetType().GetProperties())
{
    var updatedValue = proInfo.GetValue(UpdatedGiftItem, null);
    var targetpropInfo = this.GiftItem.GetType().GetProperty(proInfo.Name);
    targetpropInfo.SetValue(this.GiftItem, updatedValue, null);
}

This way we can set a value on a property that is read-only. Might not work in all scenarios though! A: You should alter your question title to either detail that your question is solely in regards to overriding an abstract property, or that your question is in regards to generally overriding a class's get-only property. If the former (overriding an abstract property): That code is useless. A base class alone shouldn't tell you that you're forced to override a get-only property (perhaps an interface should). A base class provides common functionality which may require specific input from an implementing class. Therefore, the common functionality may make calls to abstract properties or methods. In the given case, the common functionality methods should be asking for you to override an abstract method such as:

public int GetBar() { ... }

But if you have no control over that, and the functionality of the base class reads from its own public property (weird), then just do this:

public abstract class BaseClass
{
    public abstract int Bar { get; }
}

public class ConcreteClass : BaseClass
{
    private int _bar;

    public override int Bar
    {
        get { return _bar; }
    }

    public void SetBar(int value)
    {
        _bar = value;
    }
}

I want to point out the (weird) comment: I would say a best practice is for a class to not use its own public properties, but to use its private/protected fields when they exist. So this is a better pattern:

public abstract class BaseClass
{
    protected int _bar;

    public int Bar
    {
        get { return _bar; }
    }

    protected void DoBaseStuff()
    {
        SetBar();
        // Do something with _bar;
    }

    protected abstract void SetBar();
}

public class ConcreteClass : BaseClass
{
    protected override void SetBar()
    {
        _bar = 5;
    }
}

If the latter (overriding a class's get-only property): Every non-abstract property has a setter. Otherwise it's useless and you shouldn't care to use it. Microsoft doesn't have to allow you to do what you want. Reason being: the setter exists in some form or another, and you can accomplish what you want Veerryy easily. The base class, or any class where you can read a property with { get; }, has SOME sort of exposed setter for that property.
The metadata will look like this:

public abstract class BaseClass
{
    public int Bar { get; }
}

But the implementation will have two ends of the spectrum of complexity. Least Complex:

public abstract class BaseClass
{
    private int _bar;

    public int Bar
    {
        get { return _bar; }
    }

    public void SetBar(int value)
    {
        _bar = value;
    }
}

Most Complex:

public abstract class BaseClass
{
    private int _foo;
    private int _baz;
    private int _wtf;
    private int _kthx;
    private int _lawl;

    public int Bar
    {
        get { return _foo * _baz + _kthx; }
    }

    public bool TryDoSomethingBaz(MyEnum whatever, int input)
    {
        switch (whatever)
        {
            case MyEnum.lol:
                _baz = _lawl + input;
                return true;
            case MyEnum.wtf:
                _baz = _wtf * input;
                break;
        }
        return false;
    }

    public void TryBlowThingsUp(DateTime when)
    {
        // Some Crazy Madeup Code
        _kthx = DaysSinceEaster(when);
    }

    public int DaysSinceEaster(DateTime when)
    {
        return 2; // <-- calculations
    }
}

public enum MyEnum
{
    lol,
    wtf,
}

My point being, either way, you have the setter exposed. In your case, you may want to override int Bar because you don't want the base class to handle it, don't have access to review how it's handling it, or were tasked to hax some code real quick'n'dirty against your will. In both Latter and Former (Conclusion): Long story short, it isn't necessary for Microsoft to change anything. You can choose how your implementing class is set up and, sans the constructor, use all or none of the base class. A: Solution for only a small subset of use cases, but nevertheless: in C# 6.0 a "readonly" setter is automatically added for overridden getter-only properties.

public abstract class BaseClass
{
    public abstract int Bar { get; }
}

public class ConcreteClass : BaseClass
{
    public override int Bar { get; }

    public ConcreteClass(int bar)
    {
        Bar = bar;
    }
}

A: Because the writer of BaseClass has explicitly declared that Bar has to be a read-only property. It doesn't make sense for derivations to break this contract and make it read-write. I'm with Microsoft on this one. Let's say I'm a new programmer who has been told to code against the BaseClass derivation. I write something that assumes that Bar cannot be written to (since the BaseClass explicitly states that it is a get-only property). Now with your derivation, my code may break. e.g.

public class BarProvider
{
    BaseClass _source;
    Bar _currentBar;

    public void setSource(BaseClass b)
    {
        _source = b;
        _currentBar = b.Bar;
    }

    public Bar getBar()
    {
        return _currentBar;
    }
}

Since Bar cannot be set as per the BaseClass interface, BarProvider assumes that caching is a safe thing to do, since Bar cannot be modified. But if set was possible in a derivation, this class could be serving stale values if someone modified the _source object's Bar property externally. The point being 'Be Open, avoid doing sneaky things and surprising people'. Update: Ilya Ryzhenkov asks 'Why don't interfaces play by the same rules then?' Hmm.. this gets muddier as I think about it. An interface is a contract that says 'expect an implementation to have a read property named Bar.' Personally I'm much less likely to make that assumption of read-only if I saw an interface. When I see a get-only property on an interface, I read it as 'Any implementation would expose this attribute Bar'... on a base class it clicks as 'Bar is a read-only property'. Of course technically you're not breaking the contract.. you're doing more. So you're right in a sense.. I'd close by saying 'make it as hard as possible for misunderstandings to crop up'.
A: Because at the IL level, a read/write property translates into two (getter and setter) methods. When overriding, you have to keep supporting the underlying interface. If you could add a setter, you would effectively be adding a new method, which would remain invisible to the outside world as far as your class's interface was concerned. True, adding a new method would not be breaking compatibility per se, but since it would remain hidden, the decision to disallow this makes perfect sense. A: Because a class that has a read-only property (no setter) probably has a good reason for it. There might not be any underlying datastore, for example. Allowing you to create a setter breaks the contract set forth by the class. It's just bad OOP. A: A read-only property in the base class indicates that this property represents a value that can always be determined from within the class (for example an enum value matching the (db-)context of an object). So the responsibility of determining the value stays within the class. Adding a setter would cause an awkward issue here: a validation error should occur if you set the value to anything other than the single possible value it already has. Rules often have exceptions, though. It is very well possible that, for example, in one derived class the context narrows the possible enum values down to 3 out of 10, yet the user of this object still needs to decide which one is correct. The derived class needs to delegate the responsibility of determining the value to the user of this object. Important to realize is that the user of this object should be well aware of this exception and assume the responsibility to set the correct value. My solution in these kinds of situations would be to leave the property read-only and add a new read-write property to the derived class to support the exception. The override of the original property will simply return the value of the new property. The new property can have a proper name indicating the context of this exception properly. This also supports the valid remark: "make it as hard as possible for misunderstandings to crop up" by Gishu. A: Because that would break the concept of encapsulation and implementation hiding. Consider the case when you create a class, ship it, and then the consumer of your class makes himself able to set a property for which you originally provide a getter only. It would effectively disrupt any invariants of your class which you can depend on in your implementation. A: This is not impossible. You simply have to use the "new" keyword in your property. For example,

namespace Example
{
    public class Base
    {
        private int _baseProperty = 0;

        public virtual int BaseProperty
        {
            get { return _baseProperty; }
        }
    }

    public class Test : Base
    {
        private int _testBaseProperty = 5;

        public new int BaseProperty
        {
            get { return _testBaseProperty; }
            set { _testBaseProperty = value; }
        }
    }
}

It appears as if this approach satisfies both sides of this discussion. Using "new" breaks the contract between the base class implementation and the subclass implementation. This is necessary when a class can have multiple contracts (either via interface or base class). Hope this helps
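One more option worth noting if you control the base class: declare both accessors up front with asymmetric accessibility, and derived classes can then override both (a minimal sketch):

public abstract class BaseClass
{
    // Public getter; the setter exists but is restricted to derived classes.
    public abstract int Bar { get; protected set; }
}

public class ConcreteClass : BaseClass
{
    private int _bar;

    public override int Bar
    {
        get { return _bar; }
        protected set { _bar = value; }
    }
}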
{ "language": "en", "url": "https://stackoverflow.com/questions/82437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "157" }
Q: Big PDF file when language is PL (Polish) I converted a Smart Form output into PDF using the function module SX_OBJECT_CONVERT_OTF_PDF. My problem is that when the language is PL (Polish) the PDF file is 10 times bigger compared to the EN language. Why? A: Gunstick's answer is probably right. SAP note 843480 discusses this issue. As of release 620 onward, there are support patches that enable pdf elements (such as fonts) to be compressed. The resulting pdf will be larger than the English-only one, but it will probably be less than 10 times larger. A: This may be because Polish uses a specific font (special characters) which is not installed by default on an OS. So the pdf converter includes the complete font in the document in order to render it correctly at the destination. This is just speculation though. A: You may try this one: http://lucattelli.com/blog/?page_id=478 This FM can take the binary PDF and convert it to BASE 64 and send it as a mail attachment. See if it helps
{ "language": "en", "url": "https://stackoverflow.com/questions/82441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Extension Methods not working for an interface Inspired by the MVC storefront, the latest project I'm working on is using extension methods on IQueryable to filter results. I have this interface:

interface IPrimaryKey
{
    int ID { get; }
}

and I have this extension method:

public static IPrimaryKey GetByID(this IQueryable<IPrimaryKey> source, int id)
{
    return source.SingleOrDefault(obj => obj.ID == id);
}

Let's say I have a class, SimpleObj, which implements IPrimaryKey. When I have an IQueryable of SimpleObj the GetByID method doesn't exist, unless I explicitly cast it as an IQueryable of IPrimaryKey, which is less than ideal. Am I missing something here? A: Edit: Konrad's solution is better because it's far simpler. The below solution works but is only required in situations similar to ObjectDataSource where a method of a class is retrieved through reflection without walking up the inheritance hierarchy. Obviously that's not happening here. This is possible; I've had to implement a similar pattern when I designed a custom entity framework solution for working with ObjectDataSource:

public interface IPrimaryKey<T> where T : IPrimaryKey<T>
{
    int Id { get; }
}

public static class IPrimaryKeyTExtension
{
    public static IPrimaryKey<T> GetById<T>(this IQueryable<T> source, int id) where T : IPrimaryKey<T>
    {
        return source.Where(pk => pk.Id == id).SingleOrDefault();
    }
}

public class Person : IPrimaryKey<Person>
{
    public int Id { get; set; }
}

Snippet of use:

var people = new List<Person>
{
    new Person { Id = 1 },
    new Person { Id = 2 },
    new Person { Id = 3 }
};

var personOne = people.AsQueryable().GetById(1);

A: This cannot work due to the fact that generics don't have the ability to follow inheritance patterns. i.e. IQueryable<SimpleObj> is not in the inheritance tree of IQueryable<IPrimaryKey>. A: It works, when done right. cfeduke's solution works. However, you don't have to make the IPrimaryKey interface generic; in fact, you don't have to change your original definition at all:

public static IPrimaryKey GetByID<T>(this IQueryable<T> source, int id) where T : IPrimaryKey
{
    return source.SingleOrDefault(obj => obj.ID == id);
}
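A quick usage sketch of that last version; type inference picks T = SimpleObj, so no cast is needed (SimpleObj here is assumed to expose a settable ID for the sake of the example):

public class SimpleObj : IPrimaryKey
{
    public int ID { get; set; }
}

var objs = new List<SimpleObj>
{
    new SimpleObj { ID = 1 },
    new SimpleObj { ID = 2 }
}.AsQueryable();

var match = objs.GetByID(2); // compiles; T is inferred as SimpleObj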
{ "language": "en", "url": "https://stackoverflow.com/questions/82442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What impact (if any) does Delphi 2009's switch to Unicode(/UTF16) have on executable size and memory footprint? Here's one from the "No question's too dumb" department: Well, as the subject says: Is there an impact? If so, how much? Will all the string literals I have in my code and in my DFM resources now take up twice as much space inside the compiled binaries? What about runtime memory usage of compiled applications? Will all the string variables now take up twice as much RAM? Should I even bother? I remember something like this being asked during one of the early pre-release webcasts but I can't remember the answer. And as the trial is only 14 days I'm not going to just try it myself before the third-party libraries I need have been updated (supposedly in about a month). A: D2009 uses UTF-16 for the default string type, although you can make variables UTF-8 if you need to. Jan Goyvaerts discusses the size/speed tradeoff in a good blog post. String literals in DFMs have been UTF-8 since at least D7. Hence, there will be no increase in size due to strings in DFMs with D2009. A: I have now finally gotten my hands on Delphi 2009 and after making the necessary adjustments my project now compiles and runs just fine. :) To get results quickly I initially had to comment out one slightly more complex module of the app, so it's not 100% comparable yet, but it already seems safe enough to predict that despite a significant amount of string literals in our source code (excessive debug log messages) the size of the binary compiled with Delphi 2009 will probably be roughly the same as before - if not actually less! I wonder, does the Delphi compiler actually perform any kind of compression on the binaries or at least its resource sections in any way? I really would have expected the change to UTF-16 string literals to have a bigger impact in this particular app. Are the literals really stored as (uncompressed) UTF-16 inside the binary? I haven't had time to investigate differences in the memory footprint yet. EDIT: Not directly Unicode-related but definitely related: Andreas Hausladen recently posted an interesting bit about the (significant) impact of the {$STRINGCHECKS} compiler option (BTW: turned on by default) on compiled executable size: http://andy.jgknet.de/blog/?p=487 A: I haven't used Delphi in years, but it probably depends on what Unicode encoding they use. UTF8 will be exactly the same for the regular ASCII character set (it only uses more than one byte when you get into the exotic characters). UTF16 might be a bit bloated. A: I have been waiting for a Unicode VCL for too many years; finally we see it. I don't think most applications need to worry about the size issues, as they don't have that many string literals anyway or store massive amounts of data in memory. Usability concerns carry more weight, which justifies using Unicode as much as possible. If some developer wants to create tiny exes, they can hand-optimize using AnsiString (if i18n is not an issue).
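A quick way to observe the per-character cost yourself: in D2007 and earlier, Char is AnsiChar (1 byte), while in D2009 Char is WideChar (2 bytes, UTF-16 code units):

program CharSize;
{$APPTYPE CONSOLE}
var
  S: string;
begin
  S := 'hello';
  Writeln(SizeOf(Char));             // 1 on D2007, 2 on D2009
  Writeln(Length(S) * SizeOf(Char)); // payload bytes of the string data
end.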
{ "language": "en", "url": "https://stackoverflow.com/questions/82454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Perforce. Getting the file status in the sandbox How can I figure out the state of the files in my client? I want to know if a file needs to be updated, or is patched, modified, etc. In CVS, I used to simply run "cvs -n -q update . > file" and later look for the M, U, P, C attributes to get the current status of each file. In perforce, "p4 sync -n" doesn't give output like "cvs -n -q update". How can I see the current status of files in the case of Perforce? A: To my knowledge, there isn't a command that will give you exactly what you want. In looking at what the update command does, there is no single alternative in Perforce. I think that the closest that you will come will be to use the 'p4 fstat' command and parse the output from there to get the information that you need. You might find this page helpful. I also found this link to a p4wrapper that claims to wrap some CVS commands (including update) into a script. There might be others like this one around as well. I also wanted to comment that the answer to this question is like many with Perforce when asking "how do I do...". The answer usually comes down to writing a script to take the output from perforce commands to get the results that you need. Their philosophy is to provide bare-bones commands and have developers build off of the basic functionality. Love it or hate it, that's the basic model. Many good scripts can be found in the Perforce Public Depot here. A: Not sure if this is what you're looking for, but the p4 diff command has a few useful options. From the usage:

-sa Opened files that are different from the revision in the depot, or missing.
-sb Opened for integrate files that have been resolved but have been modified after being resolved.
-sd Unopened files that are missing on the client.
-se Unopened files that are different from the revision in the depot.
-sl Every unopened file, along with the status of 'same', 'diff', or 'missing' as compared to its revision in the depot.
-sr Opened files that are the same as the revision in the depot.

A: Full disclosure: I work for Perforce. There will be 2 new commands, "p4 status" and "p4 reconcile", in the upcoming 2012.1 release. See the following for more details: http://www.perforce.com/blog/120126/new-20121-p4reconcile-p4status A: Not quite sure what you mean. If you are talking about seeing what files need "resolving" (in perforce language) then you can use:

p4 resolve -n

See the p4 command line manual website here: http://www.perforce.com/perforce/doc.current/manuals/cmdref/resolve.html#1040665 Also, P4V has a nice feature to highlight unsubmitted and dirty files, if you use that client. Right-click on a folder in the workspace view, and select "reconcile offline work." After a bit of processing you'll get a list of files that are out of sync with the depot. Hope this helps.
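Putting those pieces together, a small shell sketch that approximates the categories "cvs -n -q update" reports (run from inside your client workspace):

# U: files where the depot has a newer revision than your client
p4 sync -n

# M: unopened files that differ from their depot revision
p4 diff -se ...

# missing: unopened files deleted or lost locally
p4 diff -sd ...

# plus anything you have opened for edit/add/delete
p4 opened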
{ "language": "en", "url": "https://stackoverflow.com/questions/82468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to catch ALL exceptions/crashes in a .NET app Possible Duplicate: .NET - What's the best way to implement a "catch all exceptions handler" I have a .NET console app that is crashing and displaying a message to the user. All of my code is in a try{<code>} catch(Exception e){<stuff>} block, but still errors are occasionally displayed. In a Win32 app, you can capture all possible exceptions/crashes by installing various exception handlers:

/* C++ exc handlers */
_set_se_translator
SetUnhandledExceptionFilter
_set_purecall_handler
set_terminate
set_unexpected
_set_invalid_parameter_handler

What is the equivalent in the .NET world so I can handle/log/quiet all possible error cases? A: Be aware that some exceptions are dangerous to catch - or mostly uncatchable:
* OutOfMemoryException: anything you do in the catch handler might allocate memory (in the managed or unmanaged side of the CLR) and thus trigger another OOM
* StackOverflowException: depending on whether the CLR detected it sufficiently early, you might get notified. Worst case scenario, it simply kills the process.
A: You can add an event handler to the AppDomain.UnhandledException event, and it'll be called when an exception is thrown and not caught. A: You can use AppDomain.CurrentDomain.UnhandledException to get an event. A: Although catching all exceptions without the plan to properly handle them is surely a bad practice, I think that an application should fail in some graceful way. A crash should not scare the user to death, and at least it should display a description of the error, some information to report to the tech support staff, and ideally a button to close the application and restart it. In an ideal world, the application should be able to dump the user data to disk, and then try to recover it (but I see that this is asking too much). Anyway, I usually use: AppDomain.CurrentDomain.UnhandledException A: You may also go with the Application.ThreadException event. Once I was developing a .NET app running inside a COM-based application; this event was very useful, as AppDomain.CurrentDomain.UnhandledException didn't work in this case. A: Contrary to what some others have posted, there's nothing wrong with catching all exceptions. The important thing is to handle them all appropriately. If you have a stack overflow or out-of-memory condition, the app should shut down for them. Also, keep in mind that OOM conditions can prevent your exception handler from running correctly. For example, if your exception handler displays a dialog with the exception message, if you're out of memory, there may not be enough left for the dialog display. Best to log it and shut down immediately. As others mentioned, there are the UnhandledException and ThreadException events that you can handle to collect exceptions that might otherwise get missed. Then simply throw an exception handler around your main loop (assuming a winforms app). Also, you should be aware that OutOfMemoryExceptions aren't always thrown for out-of-memory conditions. An OOM condition can trigger all sorts of exceptions, in your code or in the framework, that don't necessarily have anything to do with the fact that the real underlying condition is out of memory. I've frequently seen InvalidOperationException or ArgumentException when the underlying cause is actually out of memory. A: This article on codeproject by our host Jeff Atwood is what you need. It includes the code to catch unhandled exceptions and best practices for showing information about the crash to the user.
A: The Global.asax class is your last line of defense. Look at the protected void Application_Error(Object sender, EventArgs e) method. A: I think you should rather not catch all exceptions, but instead let them be shown to the user. The reason for this is that you should only catch exceptions which you can actually handle. If you run into some exception which would stop the program but catch it anyway, this might cause much more severe problems. Also read FAQ: Why does FxCop warn against catch(Exception)?. A: Be aware that catching these unhandled exceptions can change the security requirements of your application. Your application may stop running correctly under certain contexts (when run from a network share, etc.). Be sure to test thoroughly. A: It doesn't hurt to use both AppDomain.CurrentDomain.UnhandledException and Application.ThreadException, but keep in mind that exceptions on secondary threads are not caught by these handlers; use SafeThread for secondary threads if needed.
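Pulling the answers above together, a minimal WinForms sketch wiring up both events; MainForm and the logging sink are placeholders, not anything from the original question:

using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Route UI-thread exceptions through Application.ThreadException.
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
        Application.ThreadException +=
            delegate(object sender, ThreadExceptionEventArgs e) { Log(e.Exception); };

        // Last-chance handler for otherwise-unhandled exceptions on any thread.
        AppDomain.CurrentDomain.UnhandledException +=
            delegate(object sender, UnhandledExceptionEventArgs e)
            { Log(e.ExceptionObject as Exception); };

        Application.Run(new MainForm()); // MainForm is hypothetical
    }

    static void Log(Exception ex)
    {
        // Keep this minimal: under OOM conditions even logging can fail.
        Console.Error.WriteLine(ex);
    }
}

Note that SetUnhandledExceptionMode must be called before any forms are created, and, as the answers above warn, the handlers should do as little work as possible before shutting down.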
{ "language": "en", "url": "https://stackoverflow.com/questions/82483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Has anyone tried transactional memory for C++? I was checking out Intel's "whatif" site and their Transactional Memory compiler (each thread has to make atomic commits or roll back the system's memory, like a database would). It seems like a promising way to replace locks and mutexes, but I can't find many testimonials. Does anyone here have any input? A: I have not used Intel's compiler, however, Herb Sutter had some interesting comments on it... From Sutter Speaks: The Future of Concurrency Do you see a lot of interest in and usage of transactional memory, or is the concept too difficult for most developers to grasp? It's not yet possible to answer who's using it because it hasn't been brought to market yet. Intel has a software transactional memory compiler prototype. But if the question is "Is it too hard for developers to use?" the answer is that I certainly hope not. The whole point is it's way easier than locks. It is the only major thing on the research horizon that holds out hope of greatly reducing our use of locks. It will never replace locks completely, but it's our only big hope to replacing them partially. There are some limitations. In particular, some I/O is inherently not transactional—you can't take an atomic block that prompts the user for his name and read the name from the console, and just automatically abort and retry the block if it conflicts with another transaction; the user can tell the difference if you prompt him twice. Transactional memory is great for stuff that is only touching memory, though. Every major hardware and software vendor I know of has multiple transactional memory tools in R&D. There are conferences and academic papers on theoretical answers to basic questions. We're not at the Model T stage yet where we can ship it out. You'll probably see early, limited prototypes where you can't do unbounded transactional memory—where you can only read and write, say, 100 memory locations. That's still very useful for enabling more lock-free algorithms, though. A: Dr. Dobb's had an article on the concept last year: Transactional Programming by Calum Grant -- http://www.ddj.com/cpp/202802978 It includes some examples, comparisons, and conclusions using his example library. A: I've built a combinatorial STM library on top of some functional programming ideas. It doesn't require any compiler support (though it uses C++17) and doesn't introduce new syntax. In general, it adopts the interface of the STM library from Haskell. So, my library has several nice properties: * *Monadically combinatorial. Every transaction is a computation inside the custom monad named STML. You can combine monadic transactions into bigger monadic transactions. *Transactions are separated from the data model. You construct your concurrent data model with transactional variables (TVars) and run transactions over it. *There is a retry combinator. It allows you to rerun the transaction. Very useful for building short and understandable transactions. *There are different monadic combinators to express computations concisely. *There is a Context. Every computation should be run in some context, not in the global runtime. So you can have many different contexts if you need several independent STM clusters. *The implementation is quite simple conceptually. At least the reference implementation in Haskell is, but I had to reinvent several approaches for the C++ implementation due to the lack of good support for functional programming.
The library shows very nice stability and robustness, even though we consider it experimental. Moreover, my approach opens a lot of possibilities to improve the library in performance, features, comprehensiveness, etc. To demonstrate its work, I've solved the Dining Philosophers task. You can find the code in the links below. Sample transaction:

STML<bool> takeFork(const TVar<Fork>& tFork)
{
    STML<bool> alreadyTaken = withTVar(tFork, isForkTaken);
    STML<Unit> takenByUs    = modifyTVar(tFork, setForkTaken);
    STML<bool> success      = sequence(takenByUs, pure(true));
    STML<bool> fail         = pure(false);
    STML<bool> result       = ifThenElse(alreadyTaken, fail, success);
    return result;
}

UPDATE I've written a tutorial; you can find it here. * *Dining Philosophers task *My C++ STM library A: Sun Microsystems have announced that they're releasing a new processor next year, codenamed Rock, that has hardware support for transactional memory. It will have some limitations, but it's a good first step that should make it easier for programmers to replace locks/mutexes with transactions and expect good performance out of it. For an interesting talk on the subject, given by Mark Moir, one of the researchers at Sun working on Transactional Memory and Rock, check out this link. For more information and announcements from Sun about Rock and Transactional Memory in general, see this link. The obligatory wikipedia entry :) Finally, this link, at the University of Wisconsin-Madison, contains a bibliography of most of the research that has been and is being done about Transactional Memory, whether it's hardware related or software related. A: In some cases I can see this as being useful and even necessary. However, even if the processor has special instructions that make this process easier, there is still a large overhead compared to a mutex or semaphore. Depending on how it's implemented it may also impact realtime performance (you have to either stop interrupts or prevent them from writing into your shared areas). My expectation is that if this was implemented, it would only be needed for portions of a given memory space, though, and so the impact could be limited. -Adam
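For a feel of what language-level TM looks like, here is a minimal sketch using GCC's -fgnu-tm support (GCC 4.7 and later); note this is a different implementation from the Intel prototype discussed above, and the counter example is purely illustrative:

// Build with: g++ -std=c++11 -fgnu-tm -pthread tm_counter.cpp
#include <iostream>
#include <thread>
#include <vector>

static long counter = 0;

void work() {
    for (int i = 0; i < 100000; ++i) {
        // The block commits atomically; conflicting transactions are retried.
        __transaction_atomic { ++counter; }
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(work);
    for (std::thread& t : threads) t.join();
    std::cout << counter << std::endl; // expect 400000, with no explicit lock
    return 0;
}

The syntactic shape (an atomic block instead of a lock/unlock pair) is the point Sutter makes above about TM being "way easier than locks".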
{ "language": "en", "url": "https://stackoverflow.com/questions/82495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Is it possible to drag and drop from/to outside a Flash applet with JavaScript? Let's say I want a web page that contains a Flash applet, and I'd like to drag and drop some objects from or to the rest of the web page. Is this at all possible? Bonus if you know a website somewhere that does that! A: DISCLAIMER I haven't tested this code at all, but the idea should work. Also, this only handles dragging to a Flash movie. Here's some ActionScript 3.0 code which makes use of the ExternalInterface class.

import flash.display.Sprite;
import flash.display.Loader;
import flash.external.ExternalInterface;
import flash.net.URLRequest;

if (ExternalInterface.available) {
    ExternalInterface.addCallback("handleDroppedImage", myDroppedImageHandler);
}

private function myDroppedImageHandler(url:String, x:Number, y:Number):void {
    var container:Sprite = new Sprite();
    container.x = x;
    container.y = y;
    addChild(container);

    var loader:Loader = new Loader();
    var request:URLRequest = new URLRequest(url);
    loader.load(request);
    container.addChild(loader);
}

Here's the HTML/jQuery code

<html>
<head>
    <title>XHTML 1.0 Transitional Template</title>
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.5.2/jquery-ui.min.js"></script>
    <script type="text/javascript">
        $(function() {
            $("#dragIcon").draggable();
            $("#flash").droppable({
                tolerance : "intersect",
                drop: function(e, ui) {
                    // Get the X,Y coords of the drop relative to the flash movie
                    var x = ui.draggable.offset().left - $(this).offset().left;
                    var y = ui.draggable.offset().top - $(this).offset().top;
                    // Get the url of the dragged image
                    var url = ui.draggable.attr("src");
                    // Get access to the swf
                    var swf = ($.browser.msie) ? document["MyFlashMovie"] : window["MyFlashMovie"];
                    // Call the ExternalInterface function
                    swf.handleDroppedImage(url, x, y);
                    // Remove the dragged image from the page
                    ui.draggable.remove();
                }
            });
        });
    </script>
</head>
<body>
    <img id="dragIcon" src="dragIcon.png" width="16" height="16" alt="drag me" />
    <div id="flash">
        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" id="MyFlashMovie" width="500" height="375" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab">
            <param name="movie" value="MyFlashMovie.swf" />
            <param name="quality" value="high" />
            <param name="bgcolor" value="#869ca7" />
            <param name="allowScriptAccess" value="sameDomain" />
            <embed src="MyFlashMovie.swf" quality="high" bgcolor="#869ca7" width="500" height="375" name="MyFlashMovie" align="middle" play="true" loop="false" allowScriptAccess="sameDomain" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer">
            </embed>
        </object>
    </div>
</body>
</html>

A: This one intrigued me. I know jessegavin posted some code while I went to figure this out, but this one is tested. I have a super-simple working example that lets you drag to and from flash. It's pretty messy as I threw it together during my lunch break. Here's the demo And the source The base class is taken directly from the External Interface LiveDocs. I added MyButton so the button could have some text. The majority of the javascript comes from the same LiveDocs example. I compiled this using mxmlc. A: I would say it is possible to drop to Flash if you detect that the item is dragged onto the element that contains the Flash movie, and you set your dragged objects to have a z-index higher than the Flash.
Then when it is dropped you can talk to Flash using javascript to tell it where and what was dropped. However the other way around is probably much harder, because you'd have to detect when the object hits the border of the flash movie and "pass" it to the javascript handler (create it in the html, hide it in flash). The real question is probably whether it's worth the trouble, or whether you can achieve everything in JS or in Flash. A: Hang on, the encapsulation point is a valid one, but flash can execute JS functions, and Seldaek is right that an HTML element with a higher z-index should float on the flash movie (note that the Flash embed typically needs its wmode parameter set to "opaque" or "transparent" for HTML elements to render above it). So if you did all the drag handling in JS and had the flash read its own dimensions and the position of the pointer in the app, it could signal JS methods that slave element(s) to the pointer even (especially) when the pointer leaves the boundaries of the flash app. It would be pretty hairy though. A: If the whole site is one big embedded flash file then yes, it's possible. I don't think that you can achieve it any other way. A: Not possible in flash - unless you want to drag to a target inside the same flash application. Could probably be done with a signed Java applet (but who wants to go down that road?)
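For reference, a sketch of the wmode tweak mentioned above, applied to the object/embed markup from the earlier answer (all other attributes unchanged; the "..." stands for the attributes already shown there):

<param name="wmode" value="opaque" />
<embed src="MyFlashMovie.swf" wmode="opaque" ... />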
{ "language": "en", "url": "https://stackoverflow.com/questions/82509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Reviews for programmable, tiling window manager ion3 I find the concept of the programmable, tiling, keyboard-focused window manager ion3 very appealing, but I think it takes some time to customize it to your needs until you can really evaluate this totally different UI concept. Therefore, I would like to read reviews from people who tried it for a longer time as an environment for programming (in particular using emacs/gcc). (The ion3 author's policies concerning Linux distros are not easy for me to follow, but this should not be the point here...) A: I use ion3 daily. It's a wonderful window manager. The tiling interface really enables you to maximize real estate. Once you get it set up to your liking, it is much more efficient to navigate via the keyboard. Even moving applications between tiles isn't that hard once you get used to the tag/attach key sequence. With ion3, Vimperator and the various shells I have open during the day -- I barely use the rodent. The author's opinions aside -- a good resource for configuring/extending Ion to your liking can be found at: Configuring and Extending Ion3 with Lua A: I've been using Ion daily for nearly two years now. Good things: * *Easy to use from the keyboard. *Handles multiple screens (Xinerama) very well (once you have the mod_xinerama plugin), especially as in my case the screens are different sizes. *Very predictable where windows will appear. *Splitting, resizing and moving windows is very easy. *Multiple, independent workspaces on each screen. *Very fast and reliable. Bad things: * *Too many different shortcuts. e.g. there are separate keys for moving to the next tab, next frame, next screen, and next workspace. *Applications that use lots of small windows together work really badly (e.g. the Gimp) because it maximises all of them on top of each other initially. *Sub-dialogs can cause trouble. Sometimes they open in a separate tab when you want them in the same tab, or sometimes they open in the same tab and take the focus when you want to continue interacting with the main window. These things can probably be changed in the config files, but I haven't got around to it yet. Also, the actual C code is easy to read, and on the few occasions where I've wanted to fix something it has been very easy. I don't feel tempted to go back to a non-tiling WM, anyway. A: I've used it off and on for the last few years. I think it's a great window manager, but I keep crawling back to kde3 whatever I use. It's, however, difficult to put into quantifiable terms why this happens, but it's right up there with the gnome-vs-kde battle. Neither side can understand the other. I would also just love to have kicker + ion3, but they don't gel awfully well. Moving applications between tiles (something I tend to do a lot) is also a bit inefficient (I'm too addicted to the mouse). (Kicker + Evilwm is a good combination, but evilwm just can't handle stacking in a user-friendly way.)
{ "language": "en", "url": "https://stackoverflow.com/questions/82518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: svn over HTTP proxy I'm on a laptop (Ubuntu) on a network that uses an HTTP proxy (only HTTP connections allowed). When I use svn up for a URL like 'http://.....' everything is cool (the Google Chrome repository works perfectly), but right now I need to svn up from a server with 'svn://....' and I get connection refused. I've set the proxy configuration in /etc/subversion/servers but it doesn't help. Does anyone have an opinion/solution? A: In /etc/subversion/servers you are setting http-proxy-host, which has nothing to do with svn://, which connects to a different server, usually running on port 3690, started by the svnserve command. If you have access to the server, you can set up svn+ssh:// as explained here. Update: You could also try using connect-tunnel, which uses your HTTPS proxy server to tunnel connections: connect-tunnel -P proxy.company.com:8080 -T 10234:svn.example.com:3690 Then you would use svn checkout svn://localhost:10234/path/to/trunk A: Ok, this should be really easy: $ sudo vi /etc/subversion/servers Edit the file: [Global] http-proxy-host=my.proxy.com http-proxy-port=3128 Save it, run svn again and it will work. A: If you can get SSH to it, you can use an SSH port-forwarded SVN server. Use SSH's -L (or -R, I forget, it always confuses me) to make an SSH tunnel so that 127.0.0.1:3690 is really connecting to remote:3690 over the SSH tunnel, and then you can use it via svn co svn://127.0.0.1/.... A: Okay, this topic is somewhat outdated, but as I found it on Google and have a solution, this might be interesting for someone: Basically (of course) this is not possible on every HTTP proxy, but it works on proxies allowing HTTP CONNECT on port 3690. This method is used by HTTP proxies on port 443 to provide a way for secure https connections. If your administrator configures the proxy to open port 3690 for HTTP CONNECT, you can set up your local machine to establish a tunnel through the proxy. I just needed to check out some files from svn.openwrt.org within our company's network. An easy solution to create a tunnel is adding the following line to your /etc/hosts 127.0.0.1 svn.openwrt.org Afterwards, you can use socat to create a tcp tunnel to a local port: while true; do socat tcp-listen:3690 proxy:proxy.at.your.company:svn.openwrt.org:3690; done You should execute the command as root. It opens the local port 3690 and on connection creates a tunnel to svn.openwrt.org on the same port. Just replace the port and server addresses to suit your own needs. A: When you use the svn:// URI it uses port 3690 and probably won't use the HTTP proxy. A: svn:// doesn't talk HTTP, therefore there's nothing an HTTP proxy could do. Any reason why http doesn't work? Have you considered https? If you really need it, you probably have to have port 3690 opened in your firewall. A: If you're using the standard SVN installation, the svn:// connection will work on TCP port 3690, so it's basically impossible to connect unless you change your network configuration (you said only HTTP traffic is allowed) or you install the HTTP module and Apache on the server hosting your SVN server.
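To make the SSH port-forwarding answer concrete, a small sketch (hostnames are placeholders; this assumes you can SSH to some machine that can reach the svn server):

# Forward local port 3690 to the repository host's svnserve port:
ssh -L 3690:svn.example.com:3690 user@gateway.example.com

# Then, in another shell, point the client at the forwarded port
# (svn:// defaults to port 3690):
svn checkout svn://localhost/path/to/trunk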
{ "language": "en", "url": "https://stackoverflow.com/questions/82530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Boost serialization: specifying a template class version I have a template class that I serialize (call it C), for which I want to specify a version for Boost serialization. Since BOOST_CLASS_VERSION does not work for template classes, I tried this:

namespace boost {
namespace serialization {
    template< typename T, typename U >
    struct version< C<T,U> >
    {
        typedef mpl::int_<1> type;
        typedef mpl::integral_c_tag tag;
        BOOST_STATIC_CONSTANT(unsigned int, value = version::type::value);
    };
}
}

but it does not compile. Under VC8, a subsequent call to BOOST_CLASS_VERSION gives this error: error C2913: explicit specialization; 'boost::serialization::version' is not a specialization of a class template What is the correct way to do it? A: I was able to properly use the macro BOOST_CLASS_VERSION until I encapsulated it inside a namespace. The compilation errors returned were:

error C2988: unrecognizable template declaration/definition
error C2143: syntax error: missing ';' before '<'
error C2913: explicit specialization; 'Romer::RDS::Settings::boost::serialization::version' is not a specialization of a class template
error C2059: syntax error: '<'
error C2143: syntax error: missing ';' before '{'
error C2447: '{': missing function header (old-style formal list?)

As suggested in a previous edit, moving BOOST_CLASS_VERSION to global scope solved the issue. I would prefer keeping the macro close to the referenced structure. A: #include <boost/serialization/version.hpp> :-) A: To avoid a premature dependency of your library on Boost.Serialization you can forward declare:

namespace boost {
namespace serialization {
    template<typename T> struct version;
} // end namespace serialization
} // end namespace boost

instead of including the header. To declare the version of your class you can do:

namespace boost {
namespace serialization {
    template<typename T, int D, class A>
    struct version< your_type<T, D, A> >
    {
        enum { value = 16 };
    };
} // end namespace serialization
} // end namespace boost

Since it doesn't use the BOOST_CLASS_VERSION macro, this still doesn't need premature inclusion of the Boost.Serialization headers. (For some reason static const [constexpr] unsigned int value = 16; doesn't work for me in C++14.)
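For completeness, a sketch of how the declared version is typically consumed on the (de)serialization side; the member names here are hypothetical, not from the original question:

namespace boost {
namespace serialization {

template<class Archive, typename T, typename U>
void serialize(Archive& ar, C<T, U>& obj, const unsigned int version)
{
    ar & obj.old_member;
    if (version >= 1)       // the value declared in the specialization above
        ar & obj.new_member;
}

} // end namespace serialization
} // end namespace boost

Old archives written before the bump report a lower version, so the new member is only read when it is actually present.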
{ "language": "en", "url": "https://stackoverflow.com/questions/82550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Best server-side framework for heavy RIA based application? What does the collective believe to be the best platform to use as a back end to AJAX / Flex / Silverlight applications, and why? We are undergoing a technology review and I would like to know some other opinions. Is it Java, Grails, Python, Rails, ColdFusion, or something else? A: There is no definitive answer. However, I would choose a light solution, like Python or Rails, over Java or ColdFusion. You may want to investigate the C# ASP.NET + Silverlight combo. Microsoft made it highly integrated, which is a double-edged sword, but in many cases this helps. You may also want to review existing solutions / applications / startups. Don't ditch PHP up front; there are many existing components for it. And don't overestimate the impact of the server-side technology choice on success.
{ "language": "en", "url": "https://stackoverflow.com/questions/82599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Python + DNS : Cannot get RRSIG records: No Answer I get DNS records from a Python program using dnspython. I can get various DNSSEC-related records:

>>> import dns.resolver
>>> myresolver = dns.resolver.Resolver()
>>> myresolver.use_edns(1, 0, 1400)
>>> print myresolver.query('sources.org', 'DNSKEY')
<dns.resolver.Answer object at 0xb78ed78c>
>>> print myresolver.query('ripe.net', 'NSEC')
<dns.resolver.Answer object at 0x8271c0c>

But no RRSIG records:

>>> print myresolver.query('sources.org', 'RRSIG')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 664, in query
    answer = Answer(qname, rdtype, rdclass, response)
  File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 121, in __init__
    raise NoAnswer

I tried several signed domains like absolight.fr or ripe.net. Trying with dig, I see that there are indeed RRSIG records. Checking with tcpdump, I can see that dnspython sends the correct query and receives correct replies (here, eight records):

16:09:39.342532 IP 192.134.4.69.53381 > 192.134.4.162.53: 22330+ [1au] RRSIG? sources.org. (40)
16:09:39.343229 IP 192.134.4.162.53 > 192.134.4.69.53381: 22330 8/5/6 RRSIG[|domain]

dnspython 1.6.0 - Python 2.5.2 (r252:60911, Aug 8 2008, 09:22:44) [GCC 4.3.1] on linux2 A: You probably mean RRSIG ANY (otherwise the order is wrong; the class needs to come after the type):

>>> print myresolver.query('sources.org', 'RRSIG', 'ANY')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 664, in query
    answer = Answer(qname, rdtype, rdclass, response)
  File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 121, in __init__
    raise NoAnswer
dns.resolver.NoAnswer

A: RRSIG is not a standalone record; it's a signature over a valid DNS record set. You can query a DNSKEY record, set want_dnssec=True, and get a DNSKEY record together with an "RRSIG of a DNSKEY record". More generally, an RRSIG is just a signature of a valid record (such as a DS record). So when you ask the server myresolver.query('sources.org', 'RRSIG'), it doesn't know what you are asking for. RRSIG in itself has no meaning; you need to specify: an RRSIG of what? A: If you try this, what happens? print myresolver.query('sources.org', 'ANY', 'RRSIG') A: This looks like a probable bug in the dnspython library, although I don't read Python well enough to find it. Note that in any case your EDNS0 buffer size parameter is not large enough to handle the RRSIG records for sources.org, so your client and server would have to fail over to TCP. A: You may want to use raise_on_no_answer=False and you will get the correct response: resolver.query(hostname, dnsrecord, raise_on_no_answer=False)
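To make the want_dnssec route concrete, a minimal sketch using dnspython's lower-level message API; the server IP is the one from the tcpdump capture above, and TCP is used to sidestep the EDNS0 buffer-size issue mentioned in the last answer:

import dns.message
import dns.query

# Ask for DNSKEY and request DNSSEC data (sets the DO bit via EDNS).
query = dns.message.make_query('sources.org', 'DNSKEY', want_dnssec=True)
response = dns.query.tcp(query, '192.134.4.162')
for rrset in response.answer:
    print rrset   # the DNSKEY rrset plus its covering RRSIG rrset

The RRSIGs then arrive alongside the records they sign, rather than as a standalone query.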
{ "language": "en", "url": "https://stackoverflow.com/questions/82607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Information about how many files to compile before build in Visual Studio How can I figure out how many files need to be recompiled before I start the build process? Sometimes I don't remember how many basic header files I changed, so a Rebuild All would be better than a simple build. There seems to be no option for this, but IMHO it must be possible (e.g. Xcode gives me this information). Update: My problem is not that Visual Studio doesn't know what to compile. I need to know how much it will compile so that I can decide if I can make a quick test with my new code or if I should write more code till I start the "expensive" build process. Or if my boss asks "When can I have the new build?", the best answer is not "It is done when it is done!". It's really helpful when the IDE can say "compiling 200 of 589 files" instead of "compiling x, y, ...". A: Could your version control tell you this? For example, in Subversion "Check for modifications" will list everything changed since your last check-in (although not since your last build). Mind you, doesn't "build" automatically do exactly that (build only what's changed)? A: Usually Visual Studio is good at knowing what needs to be compiled for you. If you have multiple projects in a solution, then just make sure your solution dependencies are set up correctly and it should just work when you hit Build.
{ "language": "en", "url": "https://stackoverflow.com/questions/82612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Differences between NHibernate, Castle, Linq - Who are they aimed at? This answer says that Linq is targeted at a slightly different group of developers than NHibernate, Castle, etc. Being rather new to C#, never mind all the DB stuff surrounding it: * *Are there other major, for lack of a better term, SQL wrappers than NHibernate, Castle, Linq? *What are the differences between them? *What kind of developers or development are they aimed at? -Adam A: When you say Castle I assume you mean Castle Active Record? The difference is NHibernate is an OR/M and is aimed at developers who want to focus on the domain rather than the database. With LINQ to SQL, your database is pre-existing, and your relationships and some of your programming will be driven by how your database is defined. Now between NHibernate and Castle ActiveRecord -- they are similar in that you're driving your application design from the domain, but with NHibernate you provide mapping XML files (or mapping classes with Fluent NHibernate), whereas in Active Record you are using convention over configuration (using attributes to define any columns and settings that don't fit naturally), as shown in the sketch after this thread's answers. Castle Active Record is still using NHibernate in the background. One OR/M is not necessarily the 'one true way' to go. It depends on your environment, the application you're developing, and your team. You may also want to check out SubSonic. It's great for active record, but it is not for projects where you want to focus mainly on your domain. Depending on the project, I usually use either NHibernate (with Castle Active Record) or SubSonic. A: LINQ is just a set of new C# features: extension methods, lambda expressions, object initializers, anonymous types, etc. "LINQ to SQL" on the other hand is something you can compare to other SQL wrappers. A: NHibernate and LINQ to SQL are Object/Relational Mappers designed to ease dealing with the impedance mismatch found between objects and RDBMSs. If you want to achieve a testable, persistence-ignorant application, NHibernate is the way to go. I would always recommend NHibernate over LINQ to SQL. Both tools are aimed at removing the burden of dealing with data access. How many times do you really need to write data access code? Castle is an application framework and Inversion of Control container and doesn't provide data access. It supplies facilities for using NHibernate, making for less friction, and it also supplies an implementation of the Active Record pattern using NHibernate. A: Actually we use both Linq and NHibernate together (with Fluent). If you have a little patience with the learning curve, then you will quickly fall in love.
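To make the convention-over-configuration contrast concrete, a minimal Castle ActiveRecord mapping sketch; class, table, and property names are hypothetical:

using Castle.ActiveRecord;

[ActiveRecord("Posts")]
public class Post : ActiveRecordBase<Post>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Title { get; set; }

    [Property]
    public string Body { get; set; }
}

// Usage sketch: Post.FindAll() queries, post.Save() persists;
// NHibernate does the actual mapping and SQL behind the scenes,
// with no separate hbm.xml mapping file needed.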
{ "language": "en", "url": "https://stackoverflow.com/questions/82632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Can you use Microsoft Entity Framework with Oracle? Is it possible to use Microsoft Entity Framework with an Oracle database? A: Update: Oracle now fully supports the Entity Framework. Oracle Data Provider for .NET Release 11.2.0.3 (ODAC 11.2) Release Notes: http://docs.oracle.com/cd/E20434_01/doc/win.112/e23174/whatsnew.htm#BGGJIEIC More documentation on LINQ to Entities and the ADO.NET Entity Framework: http://docs.oracle.com/cd/E20434_01/doc/win.112/e23174/featLINQ.htm#CJACEDJG Note: ODP.NET also supports Entity SQL. A: Yes. See this step-by-step tutorial of Entity Framework, LINQ, and Model-First for the Oracle database (11g), using Visual Studio 2010 with .NET 4. A: In case you don't know it already, Oracle has released ODP.NET, which supports Entity Framework. It doesn't support Code First yet, though. http://www.oracle.com/technetwork/topics/dotnet/index-085163.html A: DevArt's OraDirect provider now supports the Entity Framework. See http://devart.com/news/2008/directs475.html A: Oracle have announced a "statement of direction" for ODP.NET and the Entity Framework: In summary, an ODP.NET beta around the end of 2010, production sometime in 2011. A: The answer is "mostly". We've hit a problem using it where the EF generates code that uses the CROSS and OUTER APPLY operators. This link shows that MS knows it's a problem with SQL Server prior to 2005; however, they forget to mention that these operators are not supported by Oracle either. A: There is now a new NuGet package; try using it: https://www.nuget.org/packages/Oracle.ManagedDataAccess.EntityFramework/
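Once one of the providers above is installed and a model has been generated, the consuming code is ordinary Entity Framework. A hedged sketch, with a hypothetical generated context and entity set:

// HrEntities and Departments are placeholders for whatever the
// designer generates from your Oracle schema.
using (var context = new HrEntities())
{
    var query = from d in context.Departments
                where d.Location == "Tokyo"
                select d;

    foreach (var dept in query)
        Console.WriteLine(dept.Name); // the provider translates this to Oracle SQL
}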
{ "language": "en", "url": "https://stackoverflow.com/questions/82644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97" }
Q: Best way for allowing subdomain session cookies using Tomcat By default Tomcat will create a session cookie for the current domain. If you are on www.example.com, your cookie will be created for www.example.com (and will only work on www.example.com), whereas for example.com it will be created for .example.com (the desired behaviour: it will work on any subdomain of example.com as well as example.com itself). I've seen a few Tomcat valves which seem to intercept the creation of session cookies and create a replacement cookie with the correct .example.com domain; however, none of them seem to work flawlessly and they all appear to leave the existing cookie and just create a new one. This means that two JSESSIONID cookies are being sent with each request. I was wondering if anybody has a definitive solution to this problem. A: I have just gone through all of this looking for a simple solution. I started looking at it from the Tomcat perspective first. Tomcat does not give direct access to configuring the session cookie's domain, and I definitely did not want to custom-patch Tomcat to fix that problem as shown in some other posts. Valves in Tomcat also seem to be a problematic solution due to the limitations on accessing headers and cookies built into the Servlet specification. They also fail completely if the HTTP response is committed before it gets passed to your valve. Since we proxy our requests through Apache, I then moved on to how to use Apache to fix the problem instead. I first tried the mod_proxy directive ProxyPassReverseCookieDomain, but it does not work for JSESSIONID cookies because Tomcat does not set the domain attribute, and ProxyPassReverseCookieDomain cannot work without some sort of domain being part of the cookie. I also came across a hack using ProxyPassReverseCookiePath where they were rewriting the path to add a domain attribute to the cookie, but that felt way too messy for a production site. I finally got it to work by rewriting the response headers using the mod_headers module in Apache, as mentioned by Dave above. I have added the following line inside the virtual host definition:

Header edit Set-Cookie "(JSESSIONID\s?=[^;,]+?)((?:;\s?(?:(?i)Comment|Max-Age|Path|Version|Secure)[^;,]*?)*)(;\s?(?:(?i)Domain\s?=)[^;,]+?)?((?:;\s?(?:(?i)Comment|Max-Age|Path|Version|Secure)[^;,]*?)*)(,|$)" "$1$2; Domain=.example.com$4$5"

The above should all be a single line in the config. It will replace any JSESSIONID cookie's domain attribute with ".example.com". If a JSESSIONID cookie does not contain a domain attribute, then the pattern will add one with a value of ".example.com". As a bonus, this solution does not suffer from the double JSESSIONID cookie problem of the valves. The pattern should work with multiple cookies in the Set-Cookie header without affecting the other cookies in the header. It should also be modifiable to work with other cookies by changing JSESSIONID in the first part of the pattern to whatever cookie name you desire. I am not a regex power user, so I am sure there are a couple of optimisations that could be made to the pattern, but it seems to be working for us so far. I will update this post if I find any bugs with the pattern. Hopefully this will stop a few of you from having to go through the couple of days' worth of frustration that I did.
A: This is apparently supported via a configuration setting in 6.0.27 and onwards. Configuration is done by editing META-INF/context.xml: <Context sessionCookiePath="/something" sessionCookieDomain=".domain.tld" /> https://issues.apache.org/bugzilla/show_bug.cgi?id=48379 A: As a session (and its ID) is basically considered of value only to the issuing application, you may instead look at setting an additional cookie. Have a look at Tomcat's SingleSignOnValve, which provides the extra cookie JSESSIONIDSSO (note the ...SSO) for the server path "/" instead of "/applicationName" (where JSESSIONID cookies are usually set). With such a valve you may implement any interprocess communication you need in order to synchronize any state between different servers, virtual hosts or webapps on any number of Tomcats/web servers/whatever. Another reason why you cannot use Tomcat's session cookie for your own purposes is that multiple webapps on the same host have different session IDs. E.g. there are different cookies for "/webapp1" and "/webapp2". If you provide "/webapp1"'s cookie to "/webapp2", it wouldn't find the session you referenced; it would invalidate your session+cookie and set its own new one. You'd have to rewrite all of Tomcat's session handling to accept external session ID values (a bad idea security-wise) or to share a certain state among applications. Session handling should be considered the container's (Tomcat's) business. Whatever else you need, you should add without interfering with what the container believes it needs to do. A: I've run into this at $DAYJOB. In my case I wanted to implement SSL sign-on and then redirect to a non-SSL page. The core problem in Tomcat is the method (from memory) SessionManager.configureSessionCookie, which hard-codes all the variables you would like to get access to. I came up with a few ideas, including a particularly egregious hack using mod_headers in Apache to rewrite the cookie based on regex substitution. The definitive way to solve this would be to submit a patch to the Tomcat developers that adds configurable parameters to the SessionManager class. A: The valve techniques do not seem to be 100% perfect. If you dare to modify Tomcat itself: catalina.jar contains the following class: org.apache.catalina.connector.Request The Request has a method: configureSessionCookie(Cookie cookie) For our environment it was best to just hardcode it, but you could do more fancy logic: cookie.setDomain(".xyz.com"); Seems to work perfectly. Would be nice if this were configurable in Tomcat.
{ "language": "en", "url": "https://stackoverflow.com/questions/82645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Is there any list of blog engines, written in Django? Is there any list of blog engines, written in Django? A: James Bennett has an interesting take on this question: “where can I find a good Django-powered blogging application” is probably at the top of the frequently-asked questions list both on django-users and in the IRC; part of this is simply that, right now, there is no “definitive” Django blogging application; there are a bunch of them available if you go looking, but you’re not likely to get anyone to recommend one of them as “the” Django blogging app (unless the person doing the recommending happens to be the author of one of them). The blog entry also has a list. A: Byteflow is a blog engine, written in Python, using Django. A: Django's powerful admin interface and easy ORM make it a 30-minute job to build a blog that probably fits your needs; why look for a third-party product when you can make it yourself very quickly? A: The book Practical Django Projects provides a tutorial on how to create a Django blogging app. A: EDIT: The original link went dead, so here's an updated link with extracts of the list, sorted with the most recently updated source at the top. Eleven Django blog engines you should know by Monty Lounge Industries * *Biblion *Django-article *Flother *Basic-Blog *Hello-Newman *Banjo *djangotechblog *Django-YABA *Shifting Bits (this is now just a biblion blog) *Mighty Lemon *Coltrane A: Nathan Borror has a great package of 'basic apps' that includes a blog. These are well-written, well-documented apps that you should try out, get ideas from, etc. http://code.google.com/p/django-basic-apps/ A: You should check out django-blogango. http://agiliq.com/blog is run using this blogging engine. https://github.com/agiliq/django-blogango
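To make the "build it yourself in 30 minutes" answer above concrete, a minimal sketch of the kind of model such a blog starts from (field names are illustrative, not taken from any of the engines listed):

from django.db import models

class Entry(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(unique=True)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ['-published']   # newest entries first

    def __unicode__(self):
        return self.title

Register it with admin.site.register(Entry) and the Django admin gives you the authoring UI for free; all that's left is a couple of list/detail views and templates.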
{ "language": "en", "url": "https://stackoverflow.com/questions/82653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: What is the best way to print screens from an ASP.NET page .NET1.1/.NET2.0 I have seen examples of printing from a windows application but I have not been able to find a good example of any way of doing this. A: I've used the print style sheet here's and article http://alistapart.com/stories/goingtoprint/ that will go through the way to set that up. Rather than setting up a special page that would need to be maintained as well. A: If you just need to print your web page from the client-side use window.print(). Sample could be found here: http://www.javascriptkit.com/howto/newtech2.shtml. I would suggest preparing a special version of your page first with no dynamic content and with a layout which would look nice on print. If you need to send something to printer on the server-side that would be a little bit more complicated. Check out this MSDN article on how to do the basic printing. A: The browser prints your pages. If you need to tweak the page so it looks better on the printer, use CSS @media selectors. A: Restating what others have said, you just need to call window.print() in javascript. That and build a separate css for print.
{ "language": "en", "url": "https://stackoverflow.com/questions/82654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you clear your MRU list in Visual Studio? I want to clear the list of projects on the start page...how do I do this? I know I can track it down in the registry, but is there an approved route to go? A: There is an MSDN article here which suggests that you just move the projects to a new directory. However, as you mentioned, the list of projects is kept in the registry under this key: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\<version>\ProjectMRUList and the list of recent files is kept in this key: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\<version>\FILEMRUList Note For Visual Studio 2015: The location has changed. You can check out this answer for details. Some people have automated clearing this registry key with their own tools: Visual Studio Most Recent Files Utility Add-in for cleaning Visual Studio 2008 MRU Projects list A: If you try opening up a project that can no longer be found, Visual Studio will prompt you for permission to remove it from the MRU list. So if you temporarily rename an appropriate top level folder to fake the projects' disappearance, you can get rid of the projects one by one. A: In Visual Studio 2015 all the history lists (including search history, file MRU and project MRU) are now located at: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\14.0\MRUItems You will see a different GUID folder for each list, and a sub-folder called Items in each of them. Find the Items folder that contains the relevant list, and just delete its parent GUID folder. Visual Studio will re-create the GUID folder together with a new Items child folder, next time it wants to add something to the list again. A: I found the MRU editor from Code Project a great tool for that. No problems with it, and it works on 2003, 2005, and 2008. A: Note: This answer is specific to Visual Studio 2010. If you don't want to manually edit the registry, you can use PowerCommands for Visual Studio 2010. PowerCommands 10.0 is a set of useful extensions for the Visual Studio 2010 adding additional functionality to various areas of the IDE. The specific command for clearing the registry from the extension is: Clear Recent Project List This command clears the Visual Studio recent project list. The Clear Recent Project List command brings up a Clear File dialog which allows any or all recent projects to be selected. The PowerCommands can be installed with the Visual Studio extension manager: Tools > Extension Manager > Online Gallery: search for PowerCommands for Visual Studio 2010. A: PowerCommands for Visual Studio 2008 Features * *Clear Recent File List *Clear Recent Project List *Clear All Panes *Copy Path *Email CodeSnippet *Insert Guid Attribute *Show All Files *Undo Close *Collapse Projects *Copy Class *Paste Class *Copy References *Paste References *Copy As Project Reference *Edit Project File *Open Containing Folder *Open Command Prompt *Unload Projects *Reload Projects *Remove and Sort Usings *Extract Constant *Transform Templates *Close All A: Try Recently Used Files: a free addin for Visual Studio that manages MRU files on a per-project basis: Supported for VS 2010, 2012, 2013. 
For Visual Studio 2012, 2013: http://visualstudiogallery.msdn.microsoft.com/a61cbd1d-b5a2-490b-a6bb-f0ea3ecf214a For Visual Studio 2010: http://visualstudiogallery.msdn.microsoft.com/45283881-5a62-4dc1-8ffb-4cbc02709947 A: For Visual Studio 2013: Open the Run dialog (press Win + R), type regedit, and navigate to HKEY_CURRENT_USER > Software > Microsoft > VisualStudio. Click 12.0, and the entries will show up on the right side. Look for "LastLoadedSolution", right-click it, click Modify, and change the value to 0. This worked for me. A: I'm not sure if this solution has been posted somewhere here, but if you have VS 2013 Update 5 you can open the start page, right-click a project below the "Recent" list, and choose "Remove from list". I don't know about other VS versions; this feature may be available there as well. A: I had this issue with VS 2017, where the MRU items are no longer in the registry as in the previous versions. The solution, however, was simple: go to "Tools->Extensions and Updates" and install "Power Commands for Visual Studio". After it has been installed, your File menu will look as shown below. Just click the menu item to clear the project MRU.
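If you want to script the registry cleanup for the older versions above instead of editing by hand, a hedged sketch using reg.exe ("9.0" is VS 2008; substitute your version, and export the keys first as a backup):

rem Back up the current lists:
reg export "HKCU\Software\Microsoft\VisualStudio\9.0\ProjectMRUList" mru-backup.reg

rem /va deletes all values under the key without deleting the key itself:
reg delete "HKCU\Software\Microsoft\VisualStudio\9.0\ProjectMRUList" /va /f
reg delete "HKCU\Software\Microsoft\VisualStudio\9.0\FileMRUList" /va /f

Run it while Visual Studio is closed, since the IDE rewrites these keys on exit.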
{ "language": "en", "url": "https://stackoverflow.com/questions/82661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: What are the best remoting technologies for mobile applications? I have a java back-end that needs to expose services to clients running in the following environments : * *J2ME *Windows Mobile *iPhone I am looking for the best tool for each platform. I do not search a technology that works everywhere. I need something "light" adapted to low speed internet access. Right now I am using SOAP. It is verbose and not easy to parse on the mobile. The problem is that I have not seen any real alternative. Is there a format that works "out of the box" with one of these platforms ? I would rather not use a bloated library that will increase tremendously the download time of the application. Everybody seems to agree on JSON. Does anyone has implemented a solution based on JSON running with Objective-C, J2ME, Windows Mobile ? Note : so far the best solution seems to be Hessian. It works well on Windows Mobile and Objective-C/iPhone . The big problem is J2ME. The J2ME implementation of Hessian has serious limitations. It does not support complex objects. I had written another question about it. If you have any ideas, there are very welcome. A: JSON is fairly compact, and supported by most frameworks. You can transfer data over HTTP using standard REST techniques. There are JSON libraries for Java, Objective C, and many other languages (scroll down). You should have no problem finding framework support on the server side, because JSON is used for web applications. Older alternatives include plain XML and XML-RPC (like SOAP, but much simpler, and with libraries for most languages). A: Hessian. http://hessian.caucho.com. Implementations in multiple languages (including ObjC), super light weight, and doesn't require reliance on dom/xml parsers for translation from wire to object models. Once we found Hessian, we forgot we ever knew XML. A: REST + XML or JSON would be a good alternative. It is making big strides in the RIA world and the beauty of it is in it's simplicity. It is very easy to use without needing any special tooling. SOAP has it's strong points, but it works best in an environment with strong tooling support for it. I'm guessing from your question that's not the case. A: Seconding JSON. I ported the Stringtree JSON reader to J2ME. It's a single class JSON reader that compiles into a 5KB class file, and directly maps the JSON structure into native CLDC types like Hashtable and Vector. Now I can use the same server for both my desktop browser AJAX frontend and my J2ME client. A: How about plain old XML (somewhat unfortunately referred to as POX)? Another very useful option would be JSON. There are libraries for every single programming language out there. Possibly, since you are working in an environment that is constrained in terms of both computing and networking resources, and with a statically typed language, Google’s protocol buffers would be preferrable for you. (Just disregard the RPC crud in there; RPC is an attractive nuisance, not a useful technology.) The problem with your question is that you haven’t provided a whole lot of context about what kind of data this is and what your use cases are, so it’s hard to speak in anything but very vague generalities.
{ "language": "en", "url": "https://stackoverflow.com/questions/82691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I alter a TEXT column on a database table in SQL server? In a SQL Server database, I have a table which contains a TEXT field which is set to allow NULLs. I need to change this to not allow NULLs. I can do this no problem via Enterprise Manager, but when I try to run the following script, alter table dbo.[EventLog] alter column [Message] text not null, I get an error: Cannot alter column 'ErrorMessage' because it is 'text'. Reading SQL Books Online does indeed reveal you are not allowed to do an ALTER COLUMN on TEXT fields. I really need to be able to do this via a script though, and not manually in Enterprise Manager. What are the options for doing this in script then? A: You can use Enterprise Manager to create your script. Right-click on the table in EM and select Design. Uncheck the Allow Nulls column for the TEXT field. Instead of hitting the regular save icon (the floppy), click the icon that looks like a golden scroll with a tiny floppy, or just do Table Designer > Generate Change Script from the menu. Save the script to a file so you can reuse it. Here is a sample script:

/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_TestTable
    (
    tableKey int NOT NULL,
    Description varchar(50) NOT NULL,
    TextData text NOT NULL
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
IF EXISTS(SELECT * FROM dbo.TestTable)
    EXEC('INSERT INTO dbo.Tmp_TestTable (tableKey, Description, TextData)
        SELECT tableKey, Description, TextData FROM dbo.TestTable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.TestTable
GO
EXECUTE sp_rename N'dbo.Tmp_TestTable', N'TestTable', 'OBJECT'
GO
ALTER TABLE dbo.TestTable ADD CONSTRAINT
    PK_TestTable PRIMARY KEY CLUSTERED
    (
    tableKey
    ) ON [PRIMARY]
GO
COMMIT

A: Create a new field. Copy the data across. Drop the old field. Rename the new field. A: I think getting rid of the null values is the easiest (as raz0rf1sh has said):

CREATE TABLE tmp1( col1 INT IDENTITY( 1, 1 ), col2 TEXT )
GO
INSERT INTO tmp1 SELECT NULL
GO 10
SELECT * FROM tmp1
UPDATE tmp1 SET col2 = '' WHERE col2 IS NULL
ALTER TABLE tmp1 ALTER COLUMN col2 TEXT NOT NULL
SELECT * FROM tmp1
DROP TABLE tmp1

A: Off the top of my head, I'd say you need to create a new table with the same structure as the existing table but with your text column set to NOT NULL, and then run a query to move the records from one to the other. I realize that's sort of a pseudocode answer, but I think that's really the only option you've got. If others with a better grip on the exact T-SQL syntax care to supplement this answer, feel free. A: I would update all the rows with NULL values and set them to an empty string, for example ''. Then you should be able to run your ALTER TABLE script with no problems. A lot less work than creating a new column. A: Try to generate the change script from within Enterprise Manager to see how it is done there.
{ "language": "en", "url": "https://stackoverflow.com/questions/82721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Convert DOS/Windows line endings to Linux line endings in Vim If I open files I created in Windows, the lines all end with ^M. How do I delete these characters all at once? A: dos2unix can directly modify the file contents. You can use it directly on the file, without any need for temporary file redirection: dos2unix input.txt input.txt The above uses the default DOS code page, 437 (US); options such as -850 select other DOS code pages for the conversion: dos2unix -850 input.txt input.txt A: Convert a directory of files from DOS to Unix: using the command line and sed, find all files in the current directory with the extension ".ext" and remove all "^M" @ https://gist.github.com/sparkida/7773170 find $(pwd) -type f -name "*.ext" | while read file; do sed -e 's/^M//g' -i "$file"; done; Also, as mentioned in a previous answer, ^M = Ctrl+V + Ctrl+M (don't just type the caret "^" symbol and M). A: tr -d '\15\32' < winfile.txt > unixfile.txt (See: Convert between Unix and Windows text files) A: To run directly in a Linux console: vim file.txt +"set ff=unix" +wq A: The following steps can convert the file format from DOS to Unix: :e ++ff=dos Edit file again, using dos file format ('fileformats' is ignored).[A 1] :setlocal ff=unix This buffer will use LF-only line endings when written.[A 2] :w Write buffer using Unix (LF-only) line endings. Reference: File format A: I found a very easy way: open the file with nano: nano file.txt Press Ctrl + O to save, but before pressing Enter, press Alt+D to toggle between DOS and Unix/Linux line endings, or Alt+M to toggle between Mac and Unix/Linux line endings, and then press Enter to save and Ctrl+X to quit. A: With the following command: :%s/^M$//g To get the ^M to appear, type CtrlV and then CtrlM. CtrlV tells Vim to take the next character entered literally. A: The comment about getting the ^M to appear is what worked for me. Merely typing "^M" in my vi got nothing (not found). The CTRL+V CTRL+M sequence did it perfectly though. My working substitution command was :%s/Ctrl-V Ctrl-M/\r/g and it looked like this on my screen: :%s/^M/\r/g A: :g/Ctrl-v Ctrl-m/s/// CtrlM is the character \r, or carriage return, which DOS line endings add. CtrlV tells Vim to insert a literal CtrlM character at the command line. Taken as a whole, this command replaces all \r with nothing, removing them from the ends of lines. A: Change the line endings in the view: :e ++ff=dos :e ++ff=mac :e ++ff=unix This can also be used as a saving operation (:w alone will not save using the line endings you see on screen): :w ++ff=dos :w ++ff=mac :w ++ff=unix And you can use it from the command line:

for file in *.cpp
do
    vi +':w ++ff=unix' +':q' "$file"
done

A: You can use: vim somefile.txt +"%s/\r/\r/g" +wq Or the dos2unix utility. A: :set fileformat=unix to convert from DOS to Unix. A: :%s/\r\+//g In Vim, that strips all carriage returns, and leaves only newlines. A: In VIM: :e ++ff=dos | set ff=unix | w! In shell with VIM: vim some_file.txt +'e ++ff=dos | set ff=unix | wq!' e ++ff=dos - force open file in dos format. set ff=unix - convert file to unix format. A: You can use the following command: :%s/^V^M//g where the '^' means use the CTRL key. A: I typically use :%s/\r/\r/g which seems a little odd, but works because of the way that Vim matches linefeeds. I also find it easier to remember :) A: From: File format [Esc] :%s/\r$//
And it does nothing if the file is already in the correct format. For more information, see the Vim help: :help fileformat A: dos2unix is a command-line utility that will do this, or :%s/^M//g will if you use Ctrl-v Ctrl-m to input the ^M, or you can :set ff=unix and Vim will do it for you. There is documentation on the fileformat setting, and the Vim wiki has a comprehensive page on line ending conversions. Alternately, if you move files back and forth a lot, you might not want to convert them, but rather do :set ff=dos, so Vim will know it's a DOS file and use DOS conventions for line endings. A: The command below reformats all .sh files in the current directory; I tested it on my Fedora box: for file in *.sh; do awk '{ sub("\r$", ""); print }' "$file" > luxubutmp; cp -f luxubutmp "$file"; rm -f luxubutmp; done A: In Vim, type: :w !dos2unix % This will pipe the contents of your current buffer to the dos2unix command and write the results over the current contents. Vim will ask to reload the file after. A: Usually there is a dos2unix command you can use for this. Just make sure you read the manual, as the GNU and BSD versions differ in how they deal with the arguments. BSD version: dos2unix $FILENAME $FILENAME_OUT mv $FILENAME_OUT $FILENAME GNU version: dos2unix $FILENAME Alternatively, you can create your own dos2unix with any of the proposed answers here, for example: function dos2unix(){ [ "${1}" ] && [ -f "${1}" ] || return 1; { echo ':set ff=unix'; echo ':wq'; } | vim "${1}"; } A: From Wikia: %s/\r\+$//g That will find all carriage return signs (one or more repetitions) at the end of the line and delete them, so just \n will stay at the EOL. A: This is my way: if I open a file with DOS line endings, it will automatically be converted to Unix line endings when I save it: autocmd BufWrite * :set ff=unix A: I wanted newlines in place of the ^M's. Perl to the rescue: perl -pi.bak -e 's/\x0d/\n/g' excel_created.txt Or to write to stdout: perl -p -e 's/\x0d/\n/g' < excel_created.txt A: If you create a file in Notepad or Notepad++ in Windows, bring it to Linux, and open it in Vim, you will see ^M at the end of each line. To remove this, at your Linux terminal type dos2unix filename.ext This will do the required magic. A: I knew I'd seen this somewhere. Here is the FreeBSD login tip: Do you need to remove all those ^M characters from a DOS file? Try tr -d \\r < dosfile > newfile -- Originally by Dru <genesis@istar.ca> A: This is a little more than you asked for but: nmap <C-d> :call range(line('w0'),line('w$'))->map({_,v-> getline(v)})->map({_,v->trim(v,join(map(range(1,0x1F)+[0xa0],{n->n->nr2char()}),''),2)})->map({k,v->setline(k+1,v)})<CR> Run this and :set ff=unix|dos and no more need for unix2dos. * *the single-arg form of trim() has the same default mask as above, plus 0x20 (an actual space) instead of 0x1F *that default mask clears out all non-printing chars, including non-breaking spaces [0xa0] that are hard to find *create a list of lines from the range of lines *map that list to the trim function using the same mask code as the source, less spaces *map that again to setline to replace the lines. *all :set fileformat= does at this point is choose which EOL to save with, dos or unix *it should be pretty easy to change the range of characters above if you want to eliminate or add some
{ "language": "en", "url": "https://stackoverflow.com/questions/82726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "808" }
Q: How do I reference a diagram in a DSL T4 template? Google's not coming to my rescue, here, and I just know this is the perfect place to ask. I'm writing a custom DirectiveProcessor for a DSL and I want to be able to access a diagram from within my T4 template. Within my DirectiveProcessor, I've loaded the domain model and my diagram using (wait for it...) LoadModelAndDiagram(...). So, now the diagram's loaded into the default partition in the Store, but I can't for the life of me work out how to resolve a reference to it later. Can anyone guide the way? A: Well, after lots of further work, I decided I didn't need to access my diagram from within a custom DirectiveProcessor. I've still got a custom DirectiveProcessor, because the standard generated one doesn't load the existing diagram when it loads the domain model. Getting a custom DirectiveProcessor to load the diagram and model at the same time is trivially easy. You subclass the standard generated DirectiveProcessor base class and override: protected override bool LoadDiagramData { get { return true; } } Now, we have the diagram loaded, so to get back to the original question, how do we access it? Like this: using (Transaction t = partition.Store.TransactionManager .BeginTransaction("MyTxn", true)) { MyDslDiagram diagram = partition.ElementDirectory .FindElements<MyDslDiagram>(true).SingleOrDefault(); /* * Now, do stuff with your diagram. * */ } Now, this code works just fine if you have a diagram loaded. If you don't, diagram will come back as null, in which case we either have to load the diagram explicitly or create one dynamically. I won't go into that here. Maybe on my blog when I've had some sleep!
{ "language": "en", "url": "https://stackoverflow.com/questions/82776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best Windows tool for merging RSS feeds? It seems like such a simple thing, but I can't find any obvious solutions... I want to be able to take two or three feeds, and then merge them into a single RSS feed, to be published internally on our network. Is there a simple tool out there that will do this? Free or commercial. Update: I should have mentioned, I'm looking for a Windows application that will run as a scheduled service on a server. A: Maybe http://www.planetplanet.org/ will do what you want. It's for creating blog aggregations like Planet Lisp. A: Google Reader: create a group, add your feeds into the folder, and then share that as an RSS feed. :-) Works while you're asleep! A: There are a whole pile of options here: http://allrss.com/rssremixers.html. A: Yahoo Pipes could be nice. It depends on how private you want the resulting feed to be. For a 100% offline solution, investigate Atomisator. It's a Python framework, basically for doing offline what Yahoo Pipes does online. A: If you're using PHP, the SimplePie library will do this. Here's a tutorial.
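If you would rather script the merge yourself, here is a minimal Python sketch of the idea, assuming the third-party feedparser library is installed (pip install feedparser); the feed URLs below are placeholders:
import feedparser  # third-party library: pip install feedparser

# Hypothetical input feeds; replace with the feeds you want to merge.
FEEDS = [
    "http://example.com/feed1.rss",
    "http://example.com/feed2.rss",
]

def merged_entries(urls):
    # Collect the entries of every feed and sort them newest-first.
    entries = []
    for url in urls:
        entries.extend(feedparser.parse(url).entries)
    # published_parsed is a time.struct_time when the feed provides a date.
    entries.sort(key=lambda e: tuple(e.get("published_parsed") or (0,)), reverse=True)
    return entries

for entry in merged_entries(FEEDS):
    print(entry.get("title", "(no title)"), "-", entry.get("link", ""))
Writing the merged entries back out as a single RSS document is then straightforward with any feed-generation library, and running the script from the Windows Task Scheduler approximates the scheduled-service requirement.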
{ "language": "en", "url": "https://stackoverflow.com/questions/82782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SharePoint UserProfileManager without Manage User Profiles right I have an issue that is driving me a bit nuts: using a UserProfileManager as a non-authorized user. The problem: the user does not have "Manage User Profiles" rights, but I still want to use the UserProfileManager. The idea of using SPSecurity.RunWithElevatedPrivileges does not seem to work, as the UserProfileManager appears to authorize against the SSP. SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = new SPSite(inputWeb.Site.ID)) { ServerContext ctx = ServerContext.GetContext(site); UserProfileManager upm = new UserProfileManager(ctx,true); UserProfile u = upm.GetUserProfile(userLogin); DepartmentName = u["Department"].Value as string; } }); This still fails on the "new UserProfileManager" line, with the "You must have manage user profiles administrator rights to use administrator mode" exception. As far as I understood, RunWithElevatedPrivileges reverts to the AppPool identity. WindowsIdentity.GetCurrent().Name returns "NT AUTHORITY\network service", and I have given that account Manage User Profiles rights - no luck. site.RootWeb.CurrentUser.LoginName returns SHAREPOINT\system for the site created within RunWithElevatedPrivileges, which of course is not a valid Windows account. Is there even a way to do that? I do not want to give all users "Manage User Profiles" rights; I just want to get some data from the user profiles (Department, Country, Direct Reports). Any ideas? A: The permission that needs to be set is actually found in the Shared Service Provider. * *Navigate to Central Admin *Navigate to the Shared Service Provider *Under User Profiles and My Sites, navigate to Personalization services permissions *If the account doesn't already exist, add the account that your site's app domain is running under. *Grant that user the Manage user profiles permission. I notice that you're running the application pool under the Network Service account. I implemented an identical feature on my site; however, the application pool was hosted under a Windows account. I'm not sure why this would make a difference, however. A: There are two ways I've actually managed to accomplish this: * *Put the code that uses the UserProfileManager behind a web services layer. The web service should use an application pool identity that has access to the User Profile services. *Use the impersonation technique described in the following article: http://www.dotnetjunkies.com/WebLog/victorv/archive/2005/06/30/128890.aspx A: Thanks for the answers. One caveat: if you run the application pool as "Network Service" instead of a domain account, you're screwed. But then again, it's recommended to use a domain account anyway (on a test server I used Network Service, but after changing it to a domain account it worked). A: Here's the answer. It's a stupid Microsoft bug, and there is a hotfix. I'm downloading it now to test it. http://support.microsoft.com/kb/952294/en-us
{ "language": "en", "url": "https://stackoverflow.com/questions/82788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are the best methods to ensure our SharePoint implementation is accessible? Are there any blogs, guides, checklists, or controls we should be using to ensure our SharePoint implementation is accessible? Preferably to the W3C double-A standard, or as close to that as we can get. We're implementing an extranet solution. A: The best place to start is the Accessibility Kit for SharePoint. With this, you may reach single-A standard, but in my experience, you will find it very tough to reach AA. Microsoft didn't factor accessibility into SharePoint, and even 2007 suffers from a huge overdependence on table layout. Good luck! A: This study has already been funded by Microsoft, and unfortunately the results only seem to be online in a Word document. The document is hosted on this blog: http://blog.mastykarz.nl/best-practices-for-developing-accessible-web-sites-in-microsoft-office-sharepoint-server-2007/ And the path to the document is here: http://go.microsoft.com/fwlink/?LinkId=121877 I'm unsure whether it would be a good thing to copy the contents of that into here to fully answer the question in a way that will be indexed by search engines, but I'll play safe as it's not my content. A: How are you deploying the implementation? Is it an intranet, or a public-facing website? I think one of the first rules is to be extremely selective with the use of out-of-the-box web parts. Many of the web parts I looked at weren't compliant even on a basic level. Andrew A: The best way is to run checks as you develop so you know where your pain points are. The next step may be to start with a minimal master page so you can choose what elements are presented to the user. More advanced: you can override the render methods to remove or change bits of the page that are not compliant with your checks, e.g. changing the case of tags (XHTML does not like all caps). A bit more in this guide: http://techtalkpt.wordpress.com/2008/06/18/building-accessible-sharepoint-sites-part-1/ http://techtalkpt.wordpress.com/2008/08/07/building-accessible-sharepoint-sites-part-2/ A: I recently read the MOSS book by Andrew Connell (www.andrewconnell.com) and it has a chapter dedicated to accessibility and SharePoint sites. Simply put, it is very difficult to get SharePoint sites to meet W3C AAA standards, but the Accessibility Kit is one of the best starting points. I strongly recommend his book for this chapter (http://www.amazon.com/dp/0470224754?tag=andrewconnell-20&camp=14573&creative=327641&linkCode=as1&creativeASIN=0470224754&adid=18S6FKQJR5FZK56WHH6A&) A: It depends how much of SharePoint out of the box you are intending to use. In implementing our public-facing site we managed to achieve AA compliance, although the amount of custom development required has raised questions over the benefits we are actually gaining from using SharePoint in the first place. A few pointers: We made heavy use of SPQuery/SPSiteDataQuery to render site data to screen using XSLT, which gave us full control over the output. I found this link helpful: http://blog.thekid.me.uk/archive/2007/02/25/xml-results-using-spsitedataquery-in-sharepoint.aspx Check out RadEditor for SharePoint for a nice accessible rich text editor for publishing. For XHTML compliance, things were a little more tricky; we had to override most of the SharePoint publishing controls' render methods to correct dodgy output. If you are wanting to leverage the portal-like capabilities of SharePoint in your extranet it is more problematic.
The web part framework is not accessible and I have not yet found a way to make it so. Any suggestions welcome!
{ "language": "en", "url": "https://stackoverflow.com/questions/82806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SoundPlayer crash on Vista The following code is causing an intermittent crash on a Vista machine. using (SoundPlayer myPlayer = new SoundPlayer(Properties.Resources.BEEPPURE)) myPlayer.Play(); I highly suspect it is this code because the program crashes mid-beep or just before the beep is played every time. I have top-level traps for all ThreadExceptions, UnhandledExceptions in my app domain, and a try-catch around Application.Run, none of which trap this crash. Any ideas? EDIT: The Event Viewer has the following information: Faulting application [xyz].exe, version 4.0.0.0, time stamp 0x48ce5a74, faulting module msvcrt.dll, version 7.0.6001.18000, time stamp 0x4791a727, exception code 0xc0000005, fault offset 0x00009b30, process id 0x%9, application start time 0x%10. Interestingly, the exception code 0xc0000005 has the message: "Reading or writing to an inaccessible memory location." (STATUS_ACCESS_VIOLATION) A: Actually, the above code (that is, (new SoundPlayer(BEEPPURE)).Play();) was crashing for me. This article explains why, and provides an alternative to SoundPlayer that works flawlessly: http://www.codeproject.com/KB/audio-video/soundplayerbug.aspx?msg=2862832#xx2862832xx A: You can use WinDBG and trap all first-chance exceptions. I'm sure you'll see something interesting. If so, you can use SOS to clean up the stack and post it here to help us along. Or you can use Visual Studio by enabling the trap of all exceptions. Go to "Debug" and then "Exceptions" and make sure you trap everything. Do this along with switching the debugger to mixed mode (managed and unmanaged). Once you have the stack trace, we can determine the answer. A process doesn't exit on Windows without an exception. It's in there. Also, you might want to check the machine's event log to see if anything has shown up. A: The Event Viewer shows exception code 0xc0000005, "Reading or writing to an inaccessible memory location." (STATUS_ACCESS_VIOLATION) See my edit above for more details; reproing this takes a while, so I can't get a fresh crash dump for WinDBG for a little while. A: The solution is to use Microsoft.VisualBasic.Devices, which does not suffer from this bug. Since it's Vista-only, and the Event Viewer even managed to fail midway through logging the crash (process id 0x%9 should have a hex value there instead), I point the blame at the new sound code in Vista. BTW, connecting the VS debugger to the crashing process remotely managed to first hang Visual Studio, then cause a BSOD on my machine while killing the non-responsive devenv.exe. Wonderful! A: Pure speculation here, but the problem may be the using statement. Your code is like this (I think): using (SoundPlayer myPlayer = new SoundPlayer(BEEPPURE)) { myPlayer.Play(); } The using block will call Dispose() on myPlayer, sometimes before it is done playing the sound (but rarely, because the sound is so short - with a longer sound, I'll bet you can reproduce the error every time). The error would be the result of the Windows API (which SoundPlayer wraps) trying to play a buffer which has already been disposed by .NET. I think if you do this: SoundPlayer myPlayer = new SoundPlayer(BEEPPURE); myPlayer.Play(); or even (new SoundPlayer(BEEPPURE)).Play(); you will not see the error any more.
{ "language": "en", "url": "https://stackoverflow.com/questions/82814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I check whether a file exists without exceptions? How do I check whether a file exists or not, without using the try statement? A: Testing for files and folders with os.path.isfile(), os.path.isdir() and os.path.exists() Assuming that "path" is a valid path: os.path.isfile() returns True for a file and False for a folder, os.path.isdir() returns False for a file and True for a folder, and os.path.exists() returns True for both. You can also test if a file is a certain type of file using os.path.splitext() to get the extension (if you don't already know it) >>> import os >>> path = "path to a word document" >>> os.path.isfile(path) True >>> os.path.splitext(path)[1] == ".docx" # test if the extension is .docx True A: TL;DR The answer is: use the pathlib module Pathlib is probably the most modern and convenient way for almost all of the file operations. For the existence of a file or a folder, a single line of code is enough. If the file does not exist, it will not throw any exception. from pathlib import Path if Path("myfile.txt").exists(): # works for both files and folders # do your cool stuff... The pathlib module was introduced in Python 3.4, so you need to have Python 3.4+. This library makes your life much easier while working with files and folders, and it is pleasant to use. Here is more documentation about it: pathlib — Object-oriented filesystem paths. BTW, if you are going to reuse the path, then it is better to assign it to a variable. So it will become: from pathlib import Path p = Path("loc/of/myfile.txt") if p.exists(): # works for both files and folders # do stuff... # reuse 'p' if needed. A: Use os.path.exists() to check whether a file exists or not: def fileAtLocation(filename, path): return os.path.exists(path + filename) filename = "dummy.txt" path = "/home/ie/SachinSaga/scripts/subscription_unit_reader_file/" if fileAtLocation(filename, path): print('file found at location..') else: print('file not found at location..') A: import os if os.path.isfile(filepath): print("File exists") A: In 2016 the best way is still using os.path.isfile: >>> os.path.isfile('/path/to/some/file.txt') Or in Python 3 you can use pathlib: import pathlib path = pathlib.Path('/path/to/some/file.txt') if path.is_file(): ... A: It doesn't seem like there's a meaningful functional difference between try/except and isfile(), so you should use whichever one makes sense. If you want to read a file, if it exists, do try: f = open(filepath) except IOError: print 'Oh dear.' But if you just wanted to rename a file if it exists, and therefore don't need to open it, do if os.path.isfile(filepath): os.rename(filepath, filepath + '.old') If you want to write to a file, if it doesn't exist, do # Python 2 if not os.path.isfile(filepath): f = open(filepath, 'w') # Python 3: 'x' opens for exclusive creation, failing if the file already exists try: f = open(filepath, 'x') except IOError: print 'file already exists' If you need file locking, that's a different matter. A: import os path = "/path/to/dir" root, dirs, files = next(os.walk(path)) if myfile in files: print("yes it exists") This is helpful when checking for several files. Or you want to do a set intersection/subtraction with an existing list, for example:
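Here is a minimal sketch of that set-based idea (the directory path and the expected file names are hypothetical placeholders):
import os

expected = {"a.txt", "b.txt", "c.txt"}  # files we hope to find
# next() pulls the first (top-level) triple out of the os.walk() generator.
_, _, files = next(os.walk("/path/to/dir"))
present = expected & set(files)  # intersection: expected files that do exist
missing = expected - set(files)  # subtraction: expected files that are absent
print("present:", sorted(present))
print("missing:", sorted(missing))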
A: To check if a file exists: from sys import argv from os.path import exists script, filename = argv print("file exists: %r" % exists(filename)) A: You could try this (safer): try: # http://effbot.org/zone/python-with-statement.htm # 'with' is safer to open a file with open('whatever.txt') as fh: # Do something with 'fh' except IOError as e: print("({})".format(e)) The output would be: ([Errno 2] No such file or directory: 'whatever.txt') Then, depending on the result, your program can just keep running from there or you can code to stop it if you want. A: If the reason you're checking is so you can do something like if file_exists: open_it(), it's safer to use a try around the attempt to open it. Checking and then opening risks the file being deleted or moved or something between when you check and when you try to open it. If you're not planning to open the file immediately, you can use os.path.isfile Return True if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path. import os.path os.path.isfile(fname) if you need to be sure it's a file. Starting with Python 3.4, the pathlib module offers an object-oriented approach (backported to pathlib2 in Python 2.7): from pathlib import Path my_file = Path("/path/to/file") if my_file.is_file(): # file exists To check a directory, do: if my_file.is_dir(): # directory exists To check whether a Path object exists independently of whether it is a file or directory, use exists(): if my_file.exists(): # path exists You can also use resolve(strict=True) in a try block: try: my_abs_path = my_file.resolve(strict=True) except FileNotFoundError: # doesn't exist else: # exists A: You can use the following open method to check if a file exists and is readable: file = open(inputFile, 'r') file.close() A: You can use os.listdir to check if a file is in a certain directory. import os if 'file.ext' in os.listdir('dirpath'): # code A: Use: import os # For testing purposes the arguments default to the current folder and file. # Returns True if the file is found. def file_exists(FOLDER_PATH='../', FILE_NAME=__file__): return os.path.isdir(FOLDER_PATH) \ and os.path.isfile(os.path.join(FOLDER_PATH, FILE_NAME)) It is basically a folder check, and then a file check with the proper directory separator using os.path.join. A: Date: 2017-12-04 Every possible solution has been listed in other answers. An intuitive, and arguably the simplest, way to check if a file exists is the following: import os os.path.isfile('~/file.md') # Returns True if exists, else False # Additionally, check a directory os.path.isdir('~/folder') # Returns True if the folder exists, else False # Check either a directory or a file os.path.exists('~/file') I made an exhaustive cheat sheet for your reference: # os.path methods in exhaustive cheat sheet {'definition': ['dirname', 'basename', 'abspath', 'relpath', 'commonpath', 'normpath', 'realpath'], 'operation': ['split', 'splitdrive', 'splitext', 'join', 'normcase'], 'compare': ['samefile', 'sameopenfile', 'samestat'], 'condition': ['isdir', 'isfile', 'exists', 'lexists', 'islink', 'isabs', 'ismount'], 'expand': ['expanduser', 'expandvars'], 'stat': ['getatime', 'getctime', 'getmtime', 'getsize']} A: Although I always recommend using try and except statements, here are a few possibilities for you (my personal favourite is using os.access): * *Try opening the file: Opening the file will always verify the existence of the file.
You can make a function just like so: def File_Existence(filepath): f = open(filepath) return True If the file does not exist, it will stop execution with an unhandled IOError (or OSError in later versions of Python). To catch the exception, you have to use a try / except clause. Of course, you can always use a try / except statement like so (thanks to hsandt for making me think): def File_Existence(filepath): try: f = open(filepath) except (IOError, OSError): # Note: OSError is for later versions of Python return False return True *Use os.path.exists(path): This will check the existence of what you specify. However, it checks for files and directories, so beware about how you use it. import os.path >>> os.path.exists("this/is/a/directory") True >>> os.path.exists("this/is/a/file.txt") True >>> os.path.exists("not/a/directory") False *Use os.access(path, mode): This will check whether you have access to the file. It will check for permissions. According to the os.py documentation, passing in os.F_OK will check the existence of the path. However, using this will create a security hole, as someone can attack your file using the time between checking the permissions and opening the file. You should instead go directly to opening the file instead of checking its permissions (EAFP vs. LBYL). If you're not going to open the file afterwards, and are only checking its existence, then you can use this. Anyway, here: >>> import os >>> os.access("/is/a/file.txt", os.F_OK) True I should also mention that there are two ways that you will not be able to verify the existence of a file. Either the issue will be permission denied or no such file or directory. If you catch an IOError, set the IOError as e (like my first option), and then type in print(e.args) so that you can hopefully determine your issue. I hope it helps! :) A: If the file is for opening, you could use one of the following techniques: with open('somefile', 'xt') as f: # Using the x-flag, Python 3.3 and above f.write('Hello\n') if not os.path.exists('somefile'): with open('somefile', 'wt') as f: f.write("Hello\n") else: print('File already exists!') Note: This finds either a file or a directory with the given name. A: Use os.path.isfile() with os.access(): import os PATH = './file.txt' if os.path.isfile(PATH) and os.access(PATH, os.R_OK): print("File exists and is readable") else: print("Either the file is missing or not readable") A: Additionally, os.access(): if os.access("myfile", os.R_OK): with open("myfile") as fp: return fp.read() R_OK, W_OK, and X_OK being the flags to test for permissions (doc). A: Another possible option is to check whether the filename is in the directory using os.listdir(): import os if 'foo.txt' in os.listdir(): # Do things This will return True if it is and False if not. A: import os os.path.exists(path) # Returns whether the path (directory or file) exists or not os.path.isfile(path) # Returns whether the file exists or not A: Although almost every possible way has been listed in (at least one of) the existing answers (e.g. Python 3.4 specific stuff was added), I'll try to group everything together. Note: every piece of Python standard library code that I'm going to post belongs to version 3.5.3. Problem statement: * *Check file (arguable: also folder ("special" file)?) existence *Don't use try / except / else / finally blocks Possible solutions: 1.
[Python.Docs]: os.path.exists(path) Also check other function family members like os.path.isfile, os.path.isdir, os.path.lexists for slightly different behaviors: Return True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists. All good, but if following the import tree: * *os.path - posixpath.py (ntpath.py) * *genericpath.py - line ~20+ def exists(path): """Test whether a path exists. Returns False for broken symbolic links""" try: st = os.stat(path) except os.error: return False return True it's just a try / except block around [Python.Docs]: os.stat(path, *, dir_fd=None, follow_symlinks=True). So, your code is try / except free, but lower in the framestack there's (at least) one such block. This also applies to other functions (including os.path.isfile). 1.1. [Python.Docs]: pathlib - Path.is_file() * *It's a fancier (and more [Wiktionary]: Pythonic) way of handling paths, but *Under the hood, it does exactly the same thing (pathlib.py - line ~1330): def is_file(self): """ Whether this path is a regular file (also True for symlinks pointing to regular files). """ try: return S_ISREG(self.stat().st_mode) except OSError as e: if e.errno not in (ENOENT, ENOTDIR): raise # Path doesn't exist or is a broken symlink # (see https://bitbucket.org/pitrou/pathlib/issue/12/) return False 2. [Python.Docs]: With Statement Context Managers Either: * *Create one: class Swallow: # Dummy example swallowed_exceptions = (FileNotFoundError,) def __enter__(self): print("Entering...") def __exit__(self, exc_type, exc_value, exc_traceback): print("Exiting:", exc_type, exc_value, exc_traceback) # Only swallow FileNotFoundError (not e.g. TypeError - if the user passes a wrong argument like None or float or ...) return exc_type in Swallow.swallowed_exceptions * *And its usage - I'll replicate the os.path.isfile behavior (note that this is just for demonstrating purposes, do not attempt to write such code for production): import os import stat def isfile_seaman(path): # Dummy func result = False with Swallow(): result = stat.S_ISREG(os.stat(path).st_mode) return result *Use [Python.Docs]: contextlib.suppress(*exceptions) - which was specifically designed for selectively suppressing exceptions But, they seem to be wrappers over try / except / else / finally blocks, as [Python.Docs]: Compound statements - The with statement states: This allows common try...except...finally usage patterns to be encapsulated for convenient reuse. 3. Filesystem traversal functions Search the results for matching item(s): * *[Python.Docs]: os.listdir(path='.') (or [Python.Docs]: os.scandir(path='.') on Python v3.5+, backport: [PyPI]: scandir) * *Under the hood, both use: * *Nix: [Man7]: OPENDIR(3) / [Man7]: READDIR(3) / [Man7]: CLOSEDIR(3) *Win: [MS.Learn]: FindFirstFileW function (fileapi.h) / [MS.Learn]: FindNextFileW function (fileapi.h) / [MS.Learn]: FindClose function (fileapi.h) via [GitHub]: python/cpython - (main) cpython/Modules/posixmodule.c Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. 
All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix, but only requires one for symbolic links on Windows. *[Python.Docs]: os.walk(top, topdown=True, onerror=None, followlinks=False) * *Uses os.listdir (os.scandir when available) *[Python.Docs]: glob.iglob(pathname, *, root_dir=None, dir_fd=None, recursive=False, include_hidden=False) (or its predecessor: glob.glob) * *Doesn't seem a traversing function per se (at least in some cases), but it still uses os.listdir Since these iterate over folders, (in most of the cases) they are inefficient for our problem (there are exceptions, like non wildcarded globbing - as @ShadowRanger pointed out), so I'm not going to insist on them. Not to mention that in some cases, filename processing might be required. 4. [Python.Docs]: os.access(path, mode, *, dir_fd=None, effective_ids=False, follow_symlinks=True) Its behavior is close to os.path.exists (actually it's wider, mainly because of the 2nd argument). * *User permissions might restrict the file "visibility" as the doc states: ... test if the invoking user has the specified access to path. mode should be F_OK to test the existence of path... *Security considerations: Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. os.access("/tmp", os.F_OK) Since I also work in C, I use this method as well because under the hood, it calls native APIs (again, via "${PYTHON_SRC_DIR}/Modules/posixmodule.c"), but it also opens a gate for possible user errors, and it's not as Pythonic as other variants. So, don't use it unless you know what you're doing: * *Nix: [Man7]: ACCESS(2) Warning: Using these calls to check if a user is authorized to, for example, open a file before actually doing so using open(2) creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. For this reason, the use of this system call should be avoided. *Win: [MS.Learn]: GetFileAttributesW function (fileapi.h) As seen, this approach is highly discouraged (especially on Nix). Note: calling native APIs is also possible via [Python.Docs]: ctypes - A foreign function library for Python, but in most cases it's more complicated. Before working with CTypes, check [SO]: C function called from Python via ctypes returns incorrect value (@CristiFati's answer) out. (Win specific): since vcruntime###.dll (msvcr###.dll for older VStudio versions - I'm going to refer to it as UCRT) exports a [MS.Learn]: _access, _waccess function family as well, here's an example (note that the recommended [Python.Docs]: msvcrt - Useful routines from the MS VC++ runtime doesn't export them): Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import ctypes as cts, os >>> cts.CDLL("msvcrt")._waccess(u"C:\\Windows\\Temp", os.F_OK) 0 >>> cts.CDLL("msvcrt")._waccess(u"C:\\Windows\\Temp.notexist", os.F_OK) -1 Notes: * *Although it's not a good practice, I'm using os.F_OK in the call, but that's just for clarity (its value is 0) *I'm using _waccess so that the same code works on Python 3 and Python 2 (in spite of [Wikipedia]: Unicode related differences between them - [SO]: Passing utf-16 string to a Windows function (@CristiFati's answer)) *Although this targets a very specific area, it was not mentioned in any of the previous answers The Linux (Ubuntu ([Wikipedia]: Ubuntu version history) 16 x86_64 (pc064)) counterpart as well: Python 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import ctypes as cts, os >>> cts.CDLL("/lib/x86_64-linux-gnu/libc.so.6").access(b"/tmp", os.F_OK) 0 >>> cts.CDLL("/lib/x86_64-linux-gnu/libc.so.6").access(b"/tmp.notexist", os.F_OK) -1 Notes: * *Instead of hardcoding libc.so (LibC)'s path ("/lib/x86_64-linux-gnu/libc.so.6"), which may (and most likely will) vary across systems, None (or the empty string) can be passed to the CDLL constructor (ctypes.CDLL(None).access(b"/tmp", os.F_OK)). According to [Man7]: DLOPEN(3): If filename is NULL, then the returned handle is for the main program. When given to dlsym(3), this handle causes a search for a symbol in the main program, followed by all shared objects loaded at program startup, and then all shared objects loaded by dlopen() with the flag RTLD_GLOBAL. *The main (current) program (python) is linked against LibC, so its symbols (including access) will be loaded *This has to be handled with care, since functions like main, Py_Main and (all the) others are available; calling them could have disastrous effects (on the current program) *This doesn't apply to Windows, though (but that's not such a big deal, since UCRT is located in "%SystemRoot%\System32" which is in %PATH% by default). I wanted to take things further and replicate this behavior on Windows (and submit a patch), but as it turns out, [MS.Learn]: GetProcAddress function (libloaderapi.h) only "sees" exported symbols, so unless someone declares the functions in the main executable as __declspec(dllexport) (why on Earth would the common person do that?), the main program is loadable, but it is pretty much unusable. 5. 3rd-party modules with filesystem capabilities Most likely, these will rely on one of the ways above (maybe with slight customizations). One example would be (again, Win specific) [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WinAPIs. But, since this is more like a workaround, I'm stopping here. 6.
SysAdmin approach I consider this a (lame) workaround (gainarie): use Python as a wrapper to execute shell commands: * *Win: (py35x64_test) [cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q000082831]> "e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os; print(os.system('dir /b \"C:\\Windows\\Temp\" > nul 2>&1'))" 0 (py35x64_test) [cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q000082831]> "e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os; print(os.system('dir /b \"C:\\Windows\\Temp.notexist\" > nul 2>&1'))" 1 *Nix ([Wikipedia]: Unix-like) - Ubuntu: [cfati@cfati-5510-0:/mnt/e/Work/Dev/StackOverflow/q000082831]> python3 -c "import os; print(os.system('ls \"/tmp\" > /dev/null 2>&1'))" 0 [cfati@cfati-5510-0:/mnt/e/Work/Dev/StackOverflow/q000082831]> python3 -c "import os; print(os.system('ls \"/tmp.notexist\" > /dev/null 2>&1'))" 512 Bottom line: * *Do use try / except / else / finally blocks, because they can prevent you running into a series of nasty problems *A possible counterexample that I can think of is performance: such blocks are costly, so try not to place them in code that is supposed to run hundreds of thousands of times per second (but since (in most cases) it involves disk access, it won't be the case) A: Use os.path.exists to check both files and directories: import os.path os.path.exists(file_path) Use os.path.isfile to check only files (note: follows symbolic links): os.path.isfile(file_path) A: if os.path.isfile(path_to_file): try: open(path_to_file) pass except IOError as e: print "Unable to open file" Raising exceptions is considered to be an acceptable, and Pythonic, approach for flow control in your program. Consider handling missing files with IOErrors. In this situation, an IOError exception will be raised if the file exists but the user does not have read permissions. Source: Using Python: How To Check If A File Exists A: If you imported NumPy already for other purposes then there is no need to import other libraries like pathlib, os, paths, etc. import numpy as np np.DataSource().exists("path/to/your/file") This will return True or False based on its existence. A: Check file or directory exists You can follow these three ways: 1. Using isfile() Note 1: os.path.isfile is used only for files import os.path os.path.isfile(filename) # True if file exists os.path.isfile(dirname) # False if directory exists 2. Using exists Note 2: os.path.exists is used for both files and directories import os.path os.path.exists(filename) # True if file exists os.path.exists(dirname) # True if directory exists 3. The pathlib.Path method (included in Python 3+, installable with pip for Python 2) from pathlib import Path Path(filename).exists() A: Python 3.4+ has an object-oriented path module: pathlib. Using this new module, you can check whether a file exists like this: import pathlib p = pathlib.Path('path/to/file') if p.is_file(): # or p.is_dir() to see if it is a directory # do stuff You can (and usually should) still use a try/except block when opening files: try: with p.open() as f: # do awesome stuff except OSError: print('Well darn.') The pathlib module has lots of cool stuff in it: convenient globbing, checking a file's owner, easier path joining, etc. It's worth checking out. If you're on an older Python (version 2.6 or later), you can still install pathlib with pip: # installs pathlib2 on older Python versions # the original third-party module, pathlib, is no longer maintained.
pip install pathlib2 Then import it as follows: # Older Python versions import pathlib2 as pathlib A: You can write Brian's suggestion without the try:. from contextlib import suppress with suppress(IOError), open('filename'): process() suppress is part of Python 3.4. In older releases you can quickly write your own suppress: from contextlib import contextmanager @contextmanager def suppress(*exceptions): try: yield except exceptions: pass A: I'm the author of a package that's been around for about 10 years, and it has a function that addresses this question directly. Basically, if you are on a non-Windows system, it uses Popen to access find. However, if you are on Windows, it replicates find with an efficient filesystem walker. The code itself does not use a try block… except in determining the operating system and thus steering you to the "Unix"-style find or the hand-built find. Timing tests showed that the try was faster in determining the OS, so I did use one there (but nowhere else). >>> import pox >>> pox.find('*python*', type='file', root=pox.homedir(), recurse=False) ['/Users/mmckerns/.python'] And the doc… >>> print pox.find.__doc__ find(patterns[,root,recurse,type]); Get path to a file or directory patterns: name or partial name string of items to search for root: path string of top-level directory to search recurse: if True, recurse down from root directory type: item filter; one of {None, file, dir, link, socket, block, char} verbose: if True, be a little verbose about the search On some OS, recursion can be specified by recursion depth (an integer). patterns can be specified with basic pattern matching. Additionally, multiple patterns can be specified by splitting patterns with a ';' For example: >>> find('pox*', root='..') ['/Users/foo/pox/pox', '/Users/foo/pox/scripts/pox_launcher.py'] >>> find('*shutils*;*init*') ['/Users/foo/pox/pox/shutils.py', '/Users/foo/pox/pox/__init__.py'] >>> The implementation, if you care to look, is here: https://github.com/uqfoundation/pox/blob/89f90fb308f285ca7a62eabe2c38acb87e89dad9/pox/shutils.py#L190 A: This is the simplest way to check if a file exists. Just because the file existed when you checked doesn't guarantee that it will be there when you need to open it. import os fname = "foo.txt" if os.path.isfile(fname): print("file does exist at this time") else: print("no such file exists at this time") A: Here's a one-line Python command for the Linux command line environment. I find this very handy since I'm not such a hot Bash guy. python -c "import os.path; print(os.path.isfile('/path_to/file.xxx'))" A: Adding one more slight variation which isn't exactly reflected in the other answers. This will handle the case of file_path being None or an empty string. def file_exists(file_path): if not file_path: return False elif not os.path.isfile(file_path): return False else: return True Adding a variant based on a suggestion from Shahbaz: def file_exists(file_path): if not file_path: return False else: return os.path.isfile(file_path) Adding a variant based on a suggestion from Peter Wood: def file_exists(file_path): return bool(file_path) and os.path.isfile(file_path) A: You can use the "OS" library of Python: >>> import os >>> os.path.exists("C:\\Users\\####\\Desktop\\test.txt") True >>> os.path.exists("C:\\Users\\####\\Desktop\\test.tx") False A: How do I check whether a file exists, without using the try statement?
In 2016, this is still arguably the easiest way to check both that a file exists and that it is a file: import os os.path.isfile('./file.txt') # Returns True if exists, else False isfile is actually just a helper method that internally uses os.stat and stat.S_ISREG(mode) underneath. This os.stat is a lower-level method that will provide you with detailed information about files, directories, sockets, buffers, and more. More about os.stat in the os module documentation. Note: However, this approach will not lock the file in any way and therefore your code can become vulnerable to "time of check to time of use" (TOCTTOU) bugs. So raising exceptions is considered to be an acceptable, and Pythonic, approach for flow control in your program. And one should consider handling missing files with IOErrors, rather than if statements (just a piece of advice). A: How do I check whether a file exists, using Python, without using a try statement? Now available since Python 3.4, import and instantiate a Path object with the file name, and check the is_file method (note that this returns True for symlinks pointing to regular files as well): >>> from pathlib import Path >>> Path('/').is_file() False >>> Path('/initrd.img').is_file() True >>> Path('/doesnotexist').is_file() False If you're on Python 2, you can backport the pathlib module from PyPI, pathlib2, or otherwise check isfile from the os.path module: >>> import os >>> os.path.isfile('/') False >>> os.path.isfile('/initrd.img') True >>> os.path.isfile('/doesnotexist') False Now the above is probably the best pragmatic direct answer here, but there's the possibility of a race condition (depending on what you're trying to accomplish), and the fact that the underlying implementation uses a try, but Python uses try everywhere in its implementation. Because Python uses try everywhere, there's really no reason to avoid an implementation that uses it. But the rest of this answer attempts to consider these caveats. Longer, much more pedantic answer Available since Python 3.4, use the new Path object in pathlib. Note that .exists is not quite right, because directories are not files (except in the unix sense that everything is a file). >>> from pathlib import Path >>> root = Path('/') >>> root.exists() True So we need to use is_file: >>> root.is_file() False Here's the help on is_file: is_file(self) Whether this path is a regular file (also True for symlinks pointing to regular files). So let's get a file that we know is a file: >>> import tempfile >>> file = tempfile.NamedTemporaryFile() >>> filepathobj = Path(file.name) >>> filepathobj.is_file() True >>> filepathobj.exists() True By default, NamedTemporaryFile deletes the file when closed (and it will automatically close when no more references exist to it). >>> del file >>> filepathobj.exists() False >>> filepathobj.is_file() False If you dig into the implementation, though, you'll see that is_file uses try: def is_file(self): """ Whether this path is a regular file (also True for symlinks pointing to regular files). """ try: return S_ISREG(self.stat().st_mode) except OSError as e: if e.errno not in (ENOENT, ENOTDIR): raise # Path doesn't exist or is a broken symlink # (see https://bitbucket.org/pitrou/pathlib/issue/12/) return False Race Conditions: Why we like try We like try because it avoids race conditions. With try, you simply attempt to read your file, expecting it to be there, and if not, you catch the exception and perform whatever fallback behavior makes sense.
If you want to check that a file exists before you attempt to read it, and you might be deleting it, and you might be using multiple threads or processes, or another program knows about that file and could delete it, then you risk a race condition if you check that it exists, because you are then racing to open it before its condition (its existence) changes. Race conditions are very hard to debug because there's a very small window in which they can cause your program to fail. But if this is your motivation, you can get the value of a try statement by using the suppress context manager. Avoiding race conditions without a try statement: suppress Python 3.4 gives us the suppress context manager (previously the ignore context manager), which does semantically exactly the same thing in fewer lines, while also (at least superficially) meeting the original ask to avoid a try statement: from contextlib import suppress from pathlib import Path Usage: >>> with suppress(OSError), Path('doesnotexist').open() as f: ... for line in f: ... print(line) ... >>> >>> with suppress(OSError): ... Path('doesnotexist').unlink() ... >>> For earlier Pythons, you could roll your own suppress, but without a try it will be more verbose than with one. I do believe this is actually the only answer that doesn't use try at any level in the Python and that can be applied prior to Python 3.4, because it uses a context manager instead: class suppress(object): def __init__(self, *exceptions): self.exceptions = exceptions def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): if exc_type is not None: return issubclass(exc_type, self.exceptions) Perhaps easier with a try: from contextlib import contextmanager @contextmanager def suppress(*exceptions): try: yield except exceptions: pass Other options that don't meet the ask for "without try": isfile import os os.path.isfile(path) from the docs: os.path.isfile(path) Return True if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path. But if you examine the source of this function, you'll see it actually does use a try statement: # This follows symbolic links, so both islink() and isdir() can be true # for the same path on systems that support symlinks def isfile(path): """Test whether a path is a regular file""" try: st = os.stat(path) except os.error: return False return stat.S_ISREG(st.st_mode) >>> OSError is os.error True All it's doing is using the given path to see if it can get stats on it, catching OSError and then checking if it's a file if it didn't raise the exception. If you intend to do something with the file, I would suggest directly attempting it with a try-except to avoid a race condition: try: with open(path) as f: f.read() except OSError: pass os.access Available for Unix and Windows is os.access, but to use it you must pass flags, and it does not differentiate between files and directories. This is mostly used to test if the real invoking user has access in an elevated privilege environment: import os os.access(path, os.F_OK) It also suffers from the same race condition problems as isfile. From the docs: Note: Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. It's preferable to use EAFP techniques.
For example: if os.access("myfile", os.R_OK): with open("myfile") as fp: return fp.read() return "some default data" is better written as: try: fp = open("myfile") except IOError as e: if e.errno == errno.EACCES: return "some default data" # Not a permission error. raise else: with fp: return fp.read() Avoid using os.access. It is a low-level function that has more opportunities for user error than the higher-level objects and functions discussed above. Criticism of another answer: Another answer says this about os.access: Personally, I prefer this one because under the hood, it calls native APIs (via "${PYTHON_SRC_DIR}/Modules/posixmodule.c"), but it also opens a gate for possible user errors, and it's not as Pythonic as other variants: This answer says it prefers a non-Pythonic, error-prone method, with no justification. It seems to encourage users to use low-level APIs without understanding them. It also creates a context manager which, by unconditionally returning True, allows all Exceptions (including KeyboardInterrupt and SystemExit!) to pass silently, which is a good way to hide bugs. This seems to encourage users to adopt poor practices. A: Prefer the try statement. It's considered better style and avoids race conditions. Don't take my word for it. There's plenty of support for this theory. Here's a couple: * *Style: Section "Handling unusual conditions" of these course notes for Software Design (2007) *Avoiding Race Conditions A: Unlike isfile(), exists() will return True for directories. So depending on whether you want only plain files or also directories, you'll use isfile() or exists(). Here is some simple REPL output: >>> os.path.isfile("/etc/password.txt") True >>> os.path.isfile("/etc") False >>> os.path.isfile("/does/not/exist") False >>> os.path.exists("/etc/password.txt") True >>> os.path.exists("/etc") True >>> os.path.exists("/does/not/exist") False A: Use: import os # Your path here, e.g. "C:\Program Files\text.txt" # For access purposes: "C:\\Program Files\\text.txt" if os.path.exists("C:\..."): print("File found!") else: print("File not found!") Importing os makes it easier to navigate and perform standard actions with your operating system. For reference, also see How do I check whether a file exists without exceptions?. If you need high-level operations, use shutil. A: import os.path def isReadableFile(file_path, file_name): full_path = file_path + "/" + file_name try: if not os.path.exists(file_path): print("File path is invalid.") return False elif not os.path.isfile(full_path): print("File does not exist.") return False elif not os.access(full_path, os.R_OK): print("File cannot be read.") return False else: print("File can be read.") return True except IOError as ex: print("I/O error({0}): {1}".format(ex.errno, ex.strerror)) except OSError as ex: print("Error({0}): {1}".format(ex.errno, ex.strerror)) return False #------------------------------------------------------ path = "/usr/khaled/documents/puzzles" fileName = "puzzle_1.txt" isReadableFile(path, fileName) A: The exists() and is_file() methods of the 'Path' object can be used for checking if a given path exists and is a file.
Python 3 program to check if a file exists: # File name: check-if-file-exists.py from pathlib import Path filePath = Path(input("Enter path of the file to be found: ")) if filePath.exists() and filePath.is_file(): print("Success: File exists") else: print("Error: File does not exist") Output: $ python3 check-if-file-exists.py Enter path of the file to be found: /Users/macuser1/stack-overflow/index.html Success: File exists $ python3 check-if-file-exists.py Enter path of the file to be found: hghjg jghj Error: File does not exist A: This is how I found which files (in my case, images) from one folder exist in another folder (with subfolders): # This script checks which image files from one folder exist in another folder tree import os import sys filenames = [] shortfilenames = [] imgfilenames = [] imgshortfilenames = [] # Get a unified path so we can stop dancing with user paths. # Determine where files are on this machine (%TEMP% directory and application installation directory) if '.exe' in sys.argv[0]: # if getattr(sys, 'frozen', False): RootPath = os.path.abspath(os.path.join(__file__, "..\\")) elif __file__: RootPath = os.path.abspath(os.path.join(__file__, "..\\")) print ("\n storage of image files RootPath: %s\n" %RootPath) FolderPath = "D:\\TFS-FARM1\\StoneSoup_STS\\SDLC\\Build\\Code\\StoneSoup_Refactor\\StoneSoupUI\\Images" print ("\n storage of image files in folder to search: %s\n" %FolderPath) for root, directories, filenames2 in os.walk(FolderPath): for filename in filenames2: fullname = os.path.join(root, filename) filenames.append(fullname) shortfilenames.append(filename) for i, fname in enumerate(shortfilenames): print("%s - %s" % (i+1, fname)) for root, directories, filenames2 in os.walk(RootPath): for filename in filenames2: fullname = os.path.join(root, filename) imgfilenames.append(fullname) imgshortfilenames.append(filename) for i, fname in enumerate(imgshortfilenames): print("%s - %s" % (i+1, fname)) for i, fname in enumerate(imgshortfilenames): if fname in shortfilenames: print("%s - %s exists" % (i+1, fname)) else: print("%s - %s ABSENT" % (i+1, fname))
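A shorter way to express the same existence comparison, sketched here with pathlib (the two directory paths are hypothetical placeholders):
import pathlib

images_dir = pathlib.Path("D:/project/Images")  # the files we are looking for
search_root = pathlib.Path("D:/app")            # the tree we search within

wanted = {p.name for p in images_dir.iterdir() if p.is_file()}
found = {p.name for p in search_root.rglob("*") if p.is_file()}

for i, name in enumerate(sorted(wanted), start=1):
    status = "exists" if name in found else "ABSENT"
    print("%s - %s %s" % (i, name, status))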
{ "language": "en", "url": "https://stackoverflow.com/questions/82831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6915" }
Q: Best way to read command-line parameters in a console application Below are two ways of reading in the command-line parameters. The first is the way that I'm accustomed to seeing, using the parameter in Main. The second I stumbled on when reviewing code. I noticed that the second assigns the first item in the array to the path and application, but the first skips this. Is it just preference, or is the second way the better way now? Sub Main(ByVal args() As String) For i As Integer = 0 To args.Length - 1 Console.WriteLine("Arg: " & i & " is " & args(i)) Next Console.ReadKey() End Sub Sub Main() Dim args() As String = System.Environment.GetCommandLineArgs() For i As Integer = 0 To args.Length - 1 Console.WriteLine("Arg: " & i & " is " & args(i)) Next Console.ReadKey() End Sub I think the same can be done in C#, so it's not necessarily a VB.NET question. A: The first way is better because it's simpler. A: Do you know getopt? There is a port for C# on CodePlex: http://www.codeplex.com/getopt A: The second way is better because it can be used outside of Main(), so when you refactor it's one less thing to think about. Also, I don't like the "magic" that puts the args in the method parameter for the first way. A: To me the first way seems more intuitive, because that is how I have been doing it since my C/C++ days. If your command line has one too many switches, please do take a look at the getopt that Thomas recommends. It's quite useful. I haven't had a look at the C# port of it, though. Regards, kgr
{ "language": "en", "url": "https://stackoverflow.com/questions/82838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: MySQL tools which ease creation of SQL JOIN statements? Does anyone know of tools which look at a MySQL database, show you all the tables graphically, and allow you to create complicated JOIN statements via drag-and-drop? A: Before you buy anything, see if MySQL's free, official GUI tools (specifically the MySQL Query Browser) will work for you. Personally, I'm fairly comfortable interacting with MySQL's command line interface and haven't used their GUI tools very much, but I just downloaded Query Browser and it seems like it does exactly what you're looking for. Also, check out "Building Queries Visually in MySQL Query Browser" for a nice tour of MySQL Query Browser. A: As an update, the MySQL Tools collection is no longer supported, and has been replaced by MySQL Workbench. The documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html and you can download it here: http://dev.mysql.com/downloads/workbench/ Edit: I stumbled across this today too, a good beginner tutorial for MySQL Workbench -> http://net.tutsplus.com/tutorials/databases/visual-database-creation-with-mysql-workbench/ A: EMS SQL Manager for MySQL has a query constructor. I can't recall whether it covers joins, but they should be supported.
{ "language": "en", "url": "https://stackoverflow.com/questions/82842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: In WPF, what is the equivalent of Suspend/ResumeLayout() and BackgroundWorker() from Windows Forms If I am in a function in the code-behind, and I want to implement displaying a "Loading..." in the status bar, the following makes sense, but, as we know from WinForms, is a no-no: StatusBarMessageText.Text = "Loading Configuration Settings..."; LoadSettingsGridData(); StatusBarMessageText.Text = "Done"; What we all know from WinForms Chapter 1, class 101, is that the form won't display changes to the user until after the entire function completes... meaning the "Loading" message will never be displayed to the user. The following code is needed. Form1.SuspendLayout(); StatusBarMessageText.Text = "Loading Configuration Settings..."; Form1.ResumeLayout(); LoadSettingsGridData(); Form1.SuspendLayout(); StatusBarMessageText.Text = "Done"; Form1.ResumeLayout(); What is the best practice for dealing with this fundamental issue in WPF? A: Best and simplest: using(var d = Dispatcher.DisableProcessing()) { /* your work... use Dispatcher.BeginInvoke... */ } Or IDisposable d; try { d = Dispatcher.DisableProcessing(); /* your work... use Dispatcher.BeginInvoke... */ } finally { d.Dispose(); } A: In reading the article by Shawn Wildermuth, WPF Threads: Build More Responsive Apps With The Dispatcher, I came across the following, which states you can use the BackgroundWorker just like you could in Windows Forms. Fancy that: BackgroundWorker Now that you have a sense of how the Dispatcher works, you might be surprised to know that you will not find use for it in most cases. In Windows Forms 2.0, Microsoft introduced a class for non-UI thread handling to simplify the development model for user interface developers. This class is called the BackgroundWorker. Figure 7 shows typical usage of the BackgroundWorker class. Figure 7 Using a BackgroundWorker in WPF BackgroundWorker _backgroundWorker = new BackgroundWorker(); ... // Set up the Background Worker Events _backgroundWorker.DoWork += _backgroundWorker_DoWork; _backgroundWorker.RunWorkerCompleted += _backgroundWorker_RunWorkerCompleted; // Run the Background Worker _backgroundWorker.RunWorkerAsync(5000); ... // Worker Method void _backgroundWorker_DoWork(object sender, DoWorkEventArgs e) { // Do something } // Completed Method void _backgroundWorker_RunWorkerCompleted( object sender, RunWorkerCompletedEventArgs e) { if (e.Cancelled) { statusText.Text = "Cancelled"; } else if (e.Error != null) { statusText.Text = "Exception Thrown"; } else { statusText.Text = "Completed"; } } The BackgroundWorker component works well with WPF because underneath the covers it uses the AsyncOperationManager class, which in turn uses the SynchronizationContext class to deal with synchronization. In Windows Forms, the AsyncOperationManager hands off a WindowsFormsSynchronizationContext class that derives from the SynchronizationContext class. Likewise, in ASP.NET it works with a different derivation of SynchronizationContext called AspNetSynchronizationContext. These SynchronizationContext-derived classes know how to handle the cross-thread synchronization of method invocation. In WPF, this model is extended with a DispatcherSynchronizationContext class. By using BackgroundWorker, the Dispatcher is being employed automatically to invoke cross-thread method calls. The good news is that since you are probably already familiar with this common pattern, you can continue using BackgroundWorker in your new WPF projects.
A: The easiest way to get this to work is to add the LoadSettingsGridData call to the dispatcher queue. If you queue the operation at a sufficiently low DispatcherPriority, the pending layout and render operations will occur first, and you will be good to go:

StatusBarMessageText.Text = "Loading Configuration Settings...";
this.Dispatcher.BeginInvoke(new Action(LoadSettingsGridData), DispatcherPriority.Render);
this.Dispatcher.BeginInvoke(new Action(() => StatusBarMessageText.Text = "Done"), DispatcherPriority.Render);
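If this render-first pattern recurs, it can be factored into a small helper. The sketch below is an illustrative generalization of the answer above, not an established API; the class and method names are made up. Note that the work still executes on the UI thread, so the status text repaints but the window remains unresponsive while the work runs (the BackgroundWorker approach avoids that).

using System;
using System.Windows.Threading;

public static class UiStatus
{
    // Queues the work behind the pending render so the busy message paints first.
    public static void RunWithStatus(Dispatcher dispatcher,
                                     Action showBusy, Action work, Action showDone)
    {
        showBusy();
        dispatcher.BeginInvoke(work, DispatcherPriority.Background);
        dispatcher.BeginInvoke(showDone, DispatcherPriority.Background);
    }
}

Hypothetical usage from code-behind:

UiStatus.RunWithStatus(Dispatcher,
    () => StatusBarMessageText.Text = "Loading Configuration Settings...",
    LoadSettingsGridData,
    () => StatusBarMessageText.Text = "Done");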
{ "language": "en", "url": "https://stackoverflow.com/questions/82847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What files are you allowed to modify in SharePoint 2007? What files can we modify so that our solution is still supported by Microsoft? Is it allowed to customize error pages? Can we modify the web.config files to use custom HTTPHandlers?

A: You can certainly edit the web.config file for your sites. The one thing you should be aware of, however, is that when you start editing files manually on the file system, you will have to remember to make those changes manually across all servers in the farm (assuming a farm exists). In addition, when you edit files in the 12 hive, it's important to understand that you will be making a change to all SharePoint sites hosted on the server(s) on which the files were edited. Personally, if I were going to create a custom error page, I would simply add a <customErrors> section to my web.config. I avoid editing any existing files in the 12 hive, but I have added files (though it's rare).

A: The customization of the error page is not very easy (or flexible). You can see an example here: http://blogs.msdn.com/jingmeili/archive/2007/04/08/how-to-create-your-own-custom-404-error-page-and-handle-redirect-in-sharepoint-2007-moss.aspx

The web.config can be changed. I used my own HttpModules in addition to the original ones, but I haven't used custom HttpHandlers. IMO it should work if you don't change the original handlers (i.e. if you add your handler for a specific type of file not already handled by SharePoint).

A: Do not modify any pre-installed files in the 12 hive (Program Files\Common Files\Microsoft Shared\Web Server Extensions\12); a service pack may update and overwrite any changes. Anything in the content database (master pages and stylesheets listed in _catalogs) is available to modify (I would add, instead of update, in case a service pack changes anything), as it sits atop the file system and is instantly available to every member of the web farm, including newly added servers. Custom features added to the 12 hive in the features folder should live in a custom, non-Microsoft folder (that is, inside the features folder, do not modify any pre-installed files, but feel free to add a folder for your feature and work within it). Custom features can be developed using the Visual Studio Extensions for Windows SharePoint Services (VSeWSS), currently available for Visual Studio 2005/2008; the benefit is that the output is a solution package (.WSP file), which is designed to be portable across SharePoint. Additionally, .WSP files are just CAB files with a different extension, so they can be explored by simply renaming them.

A: For site definitions, Microsoft has a good article about what is supported and unsupported. In short, the only change you can make to the out-of-the-box site definitions is setting the entry in the webtemp.xml file to hidden in order to prevent the site definition from appearing in the site template list. This is something many may be interested in doing. You may also, of course, copy existing definitions and rename them in order to create new ones. The complete list of supported and unsupported scenarios for working with custom site definitions can be found here: http://support.microsoft.com/default.aspx?scid=kb;en-us;898631

A: Here is the closest I can find to an official response from Microsoft: http://technet.microsoft.com/en-us/library/cc263010.aspx
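To make the HttpHandler point above concrete, here is a minimal sketch of a custom handler of the kind that could be registered for a file type SharePoint does not already handle. The namespace, class name, assembly name, and *.health extension are all hypothetical, and the <httpHandlers> registration shown in the trailing comment assumes the classic ASP.NET pipeline that MOSS 2007 typically runs under; treat it as an illustration, not a drop-in solution. Remember the caveat above: the web.config change would need to be repeated on every server in the farm.

using System.Web;

namespace MyCompany.SharePoint
{
    // Hypothetical handler: answers requests for the made-up *.health
    // extension with plain text, leaving SharePoint's own handlers untouched.
    public class HealthCheckHandler : IHttpHandler
    {
        // Safe to reuse across requests because the handler keeps no state.
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }
    }
}

// Hypothetical registration inside <system.web> of the web application's web.config:
//   <httpHandlers>
//     <add verb="GET" path="*.health"
//          type="MyCompany.SharePoint.HealthCheckHandler, MyCompany.SharePoint" />
//   </httpHandlers>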
{ "language": "en", "url": "https://stackoverflow.com/questions/82850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }