Q: Testing if an Object is a Dictionary in C# Is there a way to test if an object is a dictionary? In a method I'm trying to get a value from a selected item in a list box. In some circumstances, the list box might be bound to a dictionary, but this isn't known at compile time. I would like to do something similar to this: if (listBox.ItemsSource is Dictionary<??>) { KeyValuePair<??> pair = (KeyValuePair<??>)listBox.SelectedItem; object value = pair.Value; } Is there a way to do this dynamically at runtime using reflection? I know it's possible to use reflection with generic types and determine the key/value parameters, but I'm not sure if there's a way to do the rest after those values are retrieved. A: I know this question was asked many years ago, but it is still publicly visible. A few examples were proposed here in this topic and in this one: Determine if type is dictionary [duplicate], but there are a few mismatches, so I want to share my solution. Short answer: var dictionaryInterfaces = new[] { typeof(IDictionary<,>), typeof(IDictionary), typeof(IReadOnlyDictionary<,>), }; var dictionaries = collectionOfAnyTypeObjects .Where(d => d.GetType().GetInterfaces() .Any(t=> dictionaryInterfaces .Any(i=> i == t || t.IsGenericType && i == t.GetGenericTypeDefinition()))) Longer answer: I believe this is the reason why people make mistakes: //notice the difference between IDictionary (interface) and Dictionary (class) typeof(IDictionary<,>).IsAssignableFrom(typeof(IDictionary<,>)) // true typeof(IDictionary<int, int>).IsAssignableFrom(typeof(IDictionary<int, int>)); // true typeof(IDictionary<int, int>).IsAssignableFrom(typeof(Dictionary<int, int>)); // true typeof(IDictionary<,>).IsAssignableFrom(typeof(Dictionary<,>)); // false!! 
In contrast with the above line, this is a little bit unintuitive, so let's say we have these types: public class CustomReadOnlyDictionary : IReadOnlyDictionary<string, MyClass> public class CustomGenericDictionary : IDictionary<string, MyClass> public class CustomDictionary : IDictionary and these instances: var dictionaries = new object[] { new Dictionary<string, MyClass>(), new ReadOnlyDictionary<string, MyClass>(new Dictionary<string, MyClass>()), new CustomReadOnlyDictionary(), new CustomDictionary(), new CustomGenericDictionary() }; So if we use the .IsAssignableFrom() method: var dictionaries2 = dictionaries.Where(d => { var type = d.GetType(); return type.IsGenericType && typeof(IDictionary<,>).IsAssignableFrom(type.GetGenericTypeDefinition()); }); // count == 0!! we will not get any instances, so the best way is to get all interfaces and check if any of them is a dictionary interface: var dictionaryInterfaces = new[] { typeof(IDictionary<,>), typeof(IDictionary), typeof(IReadOnlyDictionary<,>), }; var dictionaries2 = dictionaries .Where(d => d.GetType().GetInterfaces() .Any(t=> dictionaryInterfaces .Any(i=> i == t || t.IsGenericType && i == t.GetGenericTypeDefinition()))) // count == 5 A: Check to see if it implements IDictionary. See the definition of System.Collections.IDictionary to see what that gives you. if (listBox.ItemsSource is IDictionary) { DictionaryEntry pair = (DictionaryEntry)listBox.SelectedItem; object value = pair.Value; } EDIT: Alternative for when I realized KeyValuePairs aren't castable to DictionaryEntry: if (listBox.DataSource is IDictionary) { listBox.ValueMember = "Value"; object value = listBox.SelectedValue; listBox.ValueMember = ""; //If you need it to generally be empty. } This solution uses reflection, but in this case you don't have to do the grunt work; ListBox does it for you. Also, if you generally have dictionaries as data sources, you may be able to avoid resetting ValueMember all of the time. A: It should be something like the following. 
I wrote this in the answer box so the syntax may not be exactly right, but I've made it Wiki editable so anybody can fix it up. if (listBox.ItemsSource.IsGenericType && typeof(IDictionary<,>).IsAssignableFrom(listBox.ItemsSource.GetGenericTypeDefinition())) { var method = typeof(KeyValuePair<,>).GetProperty("Value").GetGetMethod(); var item = method.Invoke(listBox.SelectedItem, null); } A: You can check to see if it implements IDictionary. You'll just have to enumerate over it using the DictionaryEntry class. A: I'm coming from Determine if type is dictionary, where none of the answers there adequately solve my issue. The closest answer here comes from Lukas Klusis, but falls short of giving an IsDictionary(Type type) method. Here's that method, taking inspiration from his answer: private static Type[] dictionaryInterfaces = { typeof(IDictionary<,>), typeof(System.Collections.IDictionary), typeof(IReadOnlyDictionary<,>), }; public static bool IsDictionary(Type type) { return dictionaryInterfaces .Any(dictInterface => dictInterface == type || // 1 (type.IsGenericType && dictInterface == type.GetGenericTypeDefinition()) || // 2 type.GetInterfaces().Any(typeInterface => // 3 typeInterface == dictInterface || (typeInterface.IsGenericType && dictInterface == typeInterface.GetGenericTypeDefinition()))); } // 1 addresses public System.Collections.IDictionary MyProperty {get; set;} // 2 addresses public IDictionary<SomeObj, SomeObj> MyProperty {get; set;} // 3 (ie the second .Any) addresses any scenario in which the type implements any one of the dictionaryInterfaces Types. The issue with the other answers - assuming they address #3 - is that they don't address #1 and #2. Which is understandable, since getting and checking a Property's Type probably isn't a common scenario. But in case you're like me, and that scenario is part of your use-case, there you go! A: You could be a little more generic and ask instead if it implements IDictionary. 
Then the KeyValue collection will contain plain Objects. A: I believe a warning is in order. When you're testing if an object 'is a' something this or that, you're reimplementing (part of) the type system. The first 'is a' is often swiftly followed by a second one, and soon your code is full of type checks, which ought to be very well handled by the type system - at least in an object-oriented design. Of course, I know nothing of the context of the question. I do know a 2000-line file in our own codebase that handles 50 different object-to-String conversions... :( A: if(typeof(IDictionary).IsAssignableFrom(listBox.ItemsSource.GetType())) { }
{ "language": "en", "url": "https://stackoverflow.com/questions/123181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Can WampServer be used successfully in production? Can WampServer be used successfully in production? Is this a bad idea? So everyone knows, and I don't see how this mattered, we've paid for a Windows dedicated box and we have existing IIS apps. We just wanted to use a PHP-based CMS which installs more easily on Apache (since it has some dependencies). So, as the title indicated, Windows, Apache, PHP, and MySQL are requirements. Additionally, I'm talking specifically of the WampServer flavor of WAMP. A: If you're not going onto the internet, there isn't any reason really not to. Of course you'd have to look at all the normal caveats - backups etc. Instead of using an already-made one, why not try to do your own? It would be a good learning experience, and really they aren't that hard to get working together. A: WAMP is appropriate for production of an Intranet. We developed a solution with FLEX (front end) / PHP/MySQL (backend) and it's been working very well for a year now. You just have to secure the server on which WAMP runs. WAMP is just a tool for configuring Apache/PHP/MySQL on a Windows platform with ease. A: WampServer themselves say they are not appropriate for production, only for development. Security issues, load balancing, etc., are definitely part of it... plus, deploying Apache on Windows is just a nightmare. Use LAMP. Alternatively, use IIS... if you're going to deploy a Windows production server (don't), use IIS. A: LAMP is more stable, but I have WAMP running intranet sites successfully in two organisations with over 1000 users. A: I don't see why not, but why use Apache on Windows when you can quite easily install PHP on IIS? A: I love how the only guy who answered the actual question by paying attention to the fact that the OP was asking about the all-in-one product that is WampServer has a -1 rating. To reiterate what he said though, yes, it would be a bad idea to use it in a production environment. 
A: I'm using WAMP over Windows Server 2003 as a production server for an Intranet, accessing MySQL and SQL Server together. We are not too many users, but I have had no problems so far. Easy configuration, easy maintenance, the possibility to authenticate domain users in Apache... Perhaps it's not so good for heavy-load environments, but for me it is the perfect solution for now. A: YES, it can be used in production under the condition that you install the secure WAMP distro. And yes, it can run on the Internet and not just an intranet. Here is a link to a secure WAMP for production where you can customize the security level and other settings to suit a production environment. http://securewamp.org/en/ Windows and WAMP can be successfully used in production even on high-traffic websites; however, you will need to make changes and switch from mod_php to FCGID. A: Why not just use LAMP? PHP code is portable. I used WAMP for development, LAMP for production. WAMP would probably work for production, but why not just use LAMP?
{ "language": "en", "url": "https://stackoverflow.com/questions/123187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I escape from a Snippet? (Vb.Net) In C# when I am done entering the fields of a snippet, I can hit Enter to get to the next line. What is the equivalent key in VB? Edit: I prefer not to use the mouse. A: Wow... I sure hope they improve this soon. Meanwhile, in case anyone cares, I created an additional replacement field ($Enter$) at the end of my custom snippet. This allows me to [tab] through the fields and then type [DownArrow] [Enter] when I reach the end of the list. Something like.... private _$PropertyName$ As $PropertyType$ Public WriteOnly Property $PropertyName$() As $PropertyType$ Set(ByVal value as $PropertyType$) _$PropertyName$ = value End Set End Property $Enter$ A: Don't know the key, but I use right-click -> Hide Snippet Highlighting. A: It turns out there isn't one - VB.NET snippet support lags behind that of C#. There's no support for * *$end$ in the snippet *ClassName() or other functions *snippet hints. And there are field tab issues as well - in C# you only tab through unique fields; in VB.NET you tab through all. In short, using snippets in VB.NET is not as fun. A: At any point while you're editing a snippet, you can use the up/down arrow keys to get out of it. Or have I misunderstood what you're trying to do? A: Can't you just use the down arrow key? Maybe I'm misunderstanding your question. For the record, VB snippets do support tooltips (hints) and help URLs.
{ "language": "en", "url": "https://stackoverflow.com/questions/123188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to copy files How do I copy a file in Python? A: copy2(src,dst) is often more useful than copyfile(src,dst) because: * *it allows dst to be a directory (instead of the complete target filename), in which case the basename of src is used for creating the new file; *it preserves the original modification and access info (mtime and atime) in the file metadata (however, this comes with a slight overhead). Here is a short example: import shutil shutil.copy2('/src/dir/file.ext', '/dst/dir/newname.ext') # complete target filename given shutil.copy2('/src/file.ext', '/dst/dir') # target filename is /dst/dir/file.ext A: In case you've come this far down. The answer is that you need the entire path and file name import os shutil.copy(os.path.join(old_dir, file), os.path.join(new_dir, file)) A: There are two best ways to copy file in Python. 1. We can use the shutil module Code Example: import shutil shutil.copyfile('/path/to/file', '/path/to/new/file') There are other methods available also other than copyfile, like copy, copy2, etc, but copyfile is best in terms of performance, 2. We can use the OS module Code Example: import os os.system('cp /path/to/file /path/to/new/file') Another method is by the use of a subprocess, but it is not preferable as it’s one of the call methods and is not secure. A: Use the shutil module. copyfile(src, dst) Copy the contents of the file named src to a file named dst. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. src and dst are path names given as strings. Take a look at filesys for all the file and directory handling functions available in standard Python modules. A: Here is a simple way to do it, without any module. 
It's similar to this answer, but has the benefit of also working for a big file that doesn't fit in RAM: with open('sourcefile', 'rb') as f, open('destfile', 'wb') as g: while True: block = f.read(16*1024*1024) # work by blocks of 16 MB if not block: # end of file break g.write(block) Since we're writing a new file, it does not preserve the modification time, etc. We can then use os.utime for this if needed. A: Similar to the accepted answer, the following code block might come in handy if you also want to make sure to create any (non-existent) folders in the path to the destination. from os import path, makedirs from shutil import copyfile makedirs(path.dirname(path.abspath(destination_path)), exist_ok=True) copyfile(source_path, destination_path) As the accepted answer notes, these lines will overwrite any file that exists at the destination path, so sometimes it might be useful to also add: if not path.exists(destination_path): before this code block. A: Directory and File copy example, from Tim Golden's Python Stuff: import os import shutil import tempfile filename1 = tempfile.mktemp (".txt") open (filename1, "w").close () filename2 = filename1 + ".copy" print filename1, "=>", filename2 shutil.copy (filename1, filename2) if os.path.isfile (filename2): print "Success" dirname1 = tempfile.mktemp (".dir") os.mkdir (dirname1) dirname2 = dirname1 + ".copy" print dirname1, "=>", dirname2 shutil.copytree (dirname1, dirname2) if os.path.isdir (dirname2): print "Success" A: shutil has many methods you can use. One of which is: import shutil shutil.copyfile(src, dst) # 2nd option shutil.copy(src, dst) # dst can be a folder; use shutil.copy2() to preserve timestamp * *Copy the contents of the file named src to a file named dst. Both src and dst need to be the entire filename of the files, including path. *The destination location must be writable; otherwise, an IOError exception will be raised. *If dst already exists, it will be replaced. 
*Special files such as character or block devices and pipes cannot be copied with this function. *With copy, src and dst are path names given as strs. Another shutil method to look at is shutil.copy2(). It's similar but preserves more metadata (e.g. time stamps). If you use os.path operations, use copy rather than copyfile. copyfile will only accept strings. A: For small files and using only Python built-ins, you can use the following one-liner: with open(source, 'rb') as src, open(dest, 'wb') as dst: dst.write(src.read()) This is not optimal way for applications where the file is too large or when memory is critical, thus Swati's answer should be preferred. A: Firstly, I made an exhaustive cheat sheet of the shutil methods for your reference. shutil_methods = {'copy':['shutil.copyfileobj', 'shutil.copyfile', 'shutil.copymode', 'shutil.copystat', 'shutil.copy', 'shutil.copy2', 'shutil.copytree',], 'move':['shutil.rmtree', 'shutil.move',], 'exception': ['exception shutil.SameFileError', 'exception shutil.Error'], 'others':['shutil.disk_usage', 'shutil.chown', 'shutil.which', 'shutil.ignore_patterns',] } Secondly, explaining methods of copy in examples: * *shutil.copyfileobj(fsrc, fdst[, length]) manipulate opened objects In [3]: src = '~/Documents/Head+First+SQL.pdf' In [4]: dst = '~/desktop' In [5]: shutil.copyfileobj(src, dst) AttributeError: 'str' object has no attribute 'read' # Copy the file object In [7]: with open(src, 'rb') as f1,open(os.path.join(dst,'test.pdf'), 'wb') as f2: ...: shutil.copyfileobj(f1, f2) In [8]: os.stat(os.path.join(dst,'test.pdf')) Out[8]: os.stat_result(st_mode=33188, st_ino=8598319475, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067347, st_mtime=1516067335, st_ctime=1516067345) *shutil.copyfile(src, dst, *, follow_symlinks=True) Copy and rename In [9]: shutil.copyfile(src, dst) IsADirectoryError: [Errno 21] Is a directory: ~/desktop' # So dst should be a filename instead of a directory name 
*shutil.copy() Copy without preserving the metadata In [10]: shutil.copy(src, dst) Out[10]: ~/desktop/Head+First+SQL.pdf' # Check their metadata In [25]: os.stat(src) Out[25]: os.stat_result(st_mode=33188, st_ino=597749, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516066425, st_mtime=1493698739, st_ctime=1514871215) In [26]: os.stat(os.path.join(dst, 'Head+First+SQL.pdf')) Out[26]: os.stat_result(st_mode=33188, st_ino=8598313736, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516066427, st_mtime=1516066425, st_ctime=1516066425) # st_atime,st_mtime,st_ctime changed *shutil.copy2() Copy, preserving the metadata In [30]: shutil.copy2(src, dst) Out[30]: ~/desktop/Head+First+SQL.pdf' In [31]: os.stat(src) Out[31]: os.stat_result(st_mode=33188, st_ino=597749, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067055, st_mtime=1493698739, st_ctime=1514871215) In [32]: os.stat(os.path.join(dst, 'Head+First+SQL.pdf')) Out[32]: os.stat_result(st_mode=33188, st_ino=8598313736, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067063, st_mtime=1493698739, st_ctime=1516067055) # Preserved st_mtime *shutil.copytree() Recursively copy an entire directory tree rooted at src, returning the destination directory. 
A: In Python, you can copy the files using * *shutil module *os module *subprocess module import os import shutil import subprocess 1) Copying files using shutil module shutil.copyfile signature shutil.copyfile(src_file, dest_file, *, follow_symlinks=True) # example shutil.copyfile('source.txt', 'destination.txt') shutil.copy signature shutil.copy(src_file, dest_file, *, follow_symlinks=True) # example shutil.copy('source.txt', 'destination.txt') shutil.copy2 signature shutil.copy2(src_file, dest_file, *, follow_symlinks=True) # example shutil.copy2('source.txt', 'destination.txt') shutil.copyfileobj signature shutil.copyfileobj(src_file_object, dest_file_object[, length]) # example file_src = 'source.txt' f_src = open(file_src, 'rb') file_dest = 'destination.txt' f_dest = open(file_dest, 'wb') shutil.copyfileobj(f_src, f_dest) 2) Copying files using os module os.popen signature os.popen(cmd[, mode[, bufsize]]) # example # In Unix/Linux os.popen('cp source.txt destination.txt') # In Windows os.popen('copy source.txt destination.txt') os.system signature os.system(command) # In Linux/Unix os.system('cp source.txt destination.txt') # In Windows os.system('copy source.txt destination.txt') 3) Copying files using subprocess module subprocess.call signature subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False) # example (WARNING: setting `shell=True` might be a security-risk) # In Linux/Unix status = subprocess.call('cp source.txt destination.txt', shell=True) # In Windows status = subprocess.call('copy source.txt destination.txt', shell=True) subprocess.check_output signature subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False) # example (WARNING: setting `shell=True` might be a security-risk) # In Linux/Unix status = subprocess.check_output('cp source.txt destination.txt', shell=True) # In Windows status = subprocess.check_output('copy source.txt destination.txt', shell=True) A: shutil module offers 
some high-level operations on files. It supports file copying and removal. Refer to the table below for your use case. Function UtilizeFile Object Preserve FileMetadata Preserve Permissions Supports Directory Dest. shutil.copyfileobj ✔ ⅹ ⅹ ⅹ shutil.copyfile ⅹ ⅹ ⅹ ⅹ shutil.copy2 ⅹ ✔ ✔ ✔ shutil.copy ⅹ ⅹ ✔ ✔ A: Function Copiesmetadata Copiespermissions Uses file object Destinationmay be directory shutil.copy No Yes No Yes shutil.copyfile No No No No shutil.copy2 Yes Yes No Yes shutil.copyfileobj No No Yes No A: As of Python 3.5 you can do the following for small files (ie: text files, small jpegs): from pathlib import Path source = Path('../path/to/my/file.txt') destination = Path('../path/where/i/want/to/store/it.txt') destination.write_bytes(source.read_bytes()) write_bytes will overwrite whatever was at the destination's location A: You could use os.system('cp nameoffilegeneratedbyprogram /otherdirectory/'). Or as I did it, os.system('cp '+ rawfile + ' rawdata.dat') where rawfile is the name that I had generated inside the program. This is a Linux-only solution. A: You can use one of the copy functions from the shutil package: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Function preserves supports accepts copies other permissions directory dest. file obj metadata ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― shutil.copy ✔ ✔ ☐ ☐ shutil.copy2 ✔ ✔ ☐ ✔ shutil.copyfile ☐ ☐ ☐ ☐ shutil.copyfileobj ☐ ☐ ✔ ☐ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Example: import shutil shutil.copy('/etc/hostname', '/var/tmp/testhostname') A: Use open(destination, 'wb').write(open(source, 'rb').read()) Open the source file in read mode, and write to the destination file in write mode. A: For large files, I read the file line by line and read each line into an array. Then, once the array reached a certain size, append it to a new file. 
for line in open("file.txt", "r"): list.append(line) if len(list) == 1000000: output.writelines(list) del list[:] A: Use subprocess.call to copy the file from subprocess import call call("cp -p <file> <file>", shell=True) A: Copying a file is a relatively straightforward operation as shown by the examples below, but you should instead use the shutil stdlib module for that. def copyfileobj_example(source, dest, buffer_size=1024*1024): """ Copy a file from source to dest. source and dest must be file-like objects, i.e. any object with a read or write method, like for example StringIO. """ while True: copy_buffer = source.read(buffer_size) if not copy_buffer: break dest.write(copy_buffer) If you want to copy by filename you could do something like this: def copyfile_example(source, dest): # Beware, this example does not handle any edge cases! with open(source, 'rb') as src, open(dest, 'wb') as dst: copyfileobj_example(src, dst) A: shutil.copy(src, dst, *, follow_symlinks=True) A: Python provides built-in functions for easily copying files using the operating system shell utilities. The Following command is used to copy a file: shutil.copy(src, dst) The following command is used to copy a file with metadata information: shutil.copystat(src, dst) A: Here is an answer utilizing "shutil.copyfileobj" and it is highly efficient. I used it in a tool I created some time ago. I didn't write this originally, but I tweaked it a little bit. def copyFile(src, dst, buffer_size=10485760, perserveFileDate=True): ''' @param src: Source File @param dst: Destination File (not file path) @param buffer_size: Buffer size to use during copy @param perserveFileDate: Preserve the original file date ''' # Check to make sure destination directory exists. 
If it doesn't create the directory dstParent, dstFileName = os.path.split(dst) if(not(os.path.exists(dstParent))): os.makedirs(dstParent) # Optimize the buffer for small files buffer_size = min(buffer_size,os.path.getsize(src)) if(buffer_size == 0): buffer_size = 1024 if shutil._samefile(src, dst): raise shutil.Error("`%s` and `%s` are the same file" % (src, dst)) for fn in [src, dst]: try: st = os.stat(fn) except OSError: # File most likely does not exist pass else: # XXX What about other special files? (sockets, devices...) if shutil.stat.S_ISFIFO(st.st_mode): raise shutil.SpecialFileError("`%s` is a named pipe" % fn) with open(src, 'rb') as fsrc: with open(dst, 'wb') as fdst: shutil.copyfileobj(fsrc, fdst, buffer_size) if(perserveFileDate): shutil.copystat(src, dst) A: You can use os.link to create a hard link to a file: os.link(source, dest) This is not an independent clone, but if you plan to only read (not modify) the new file and its content must remain the same as the original, this will work well. It also has a benefit that if you want to check whether the copy already exists, you can compare the hard links (with os.stat) instead of their content. In Linux, the command cp with keys cp -al creates a hard link. Therefore a hard link may be considered a copy. Sometimes a person would need exactly this behaviour (access to file content from a different place), and not need a separate copy. A: You can use system. For Unix-like systems: import os copy_file = lambda src_file, dest: os.system(f"cp {src_file} {dest}") copy_file("./file", "../new_dir/file")
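Tying the shutil answers above together, here is a minimal runnable sketch (file and directory names are made up for the example) showing the two behaviors the answers keep contrasting: copyfile needs a full target filename, while copy2 also accepts a directory and preserves timestamps:

```python
import os
import shutil
import tempfile

# Create a small source file in a scratch directory (names are hypothetical).
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "source.txt")
with open(src, "w") as f:
    f.write("hello world\n")

# shutil.copyfile requires the complete target filename...
dst1 = os.path.join(tmp, "copy1.txt")
shutil.copyfile(src, dst1)

# ...while shutil.copy2 also accepts a directory and keeps the metadata.
subdir = os.path.join(tmp, "subdir")
os.mkdir(subdir)
dst2 = shutil.copy2(src, subdir)  # returns the path of the created file

assert open(dst1).read() == "hello world\n"
assert os.path.basename(dst2) == "source.txt"
# copy2 preserved the modification time (within filesystem resolution).
assert abs(os.stat(src).st_mtime - os.stat(dst2).st_mtime) < 1
```

The same scaffold can be used to compare copy vs copy2 on st_mtime, as the IPython transcript above does.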
{ "language": "en", "url": "https://stackoverflow.com/questions/123198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3566" }
Q: Fixed TD height property in HTML I can't make td "Date" to have fixed height. If there is less in Body section td Date element is bigger than it should be - even if I set Date height to 10% and Body height to 90%. Any suggestions? <tr> <td class="Author" rowspan="2"> <a href="#">Claude</a><br /> <a href="#"><img src="Users/4/Avatar.jpeg" style="border-width:0px;" /></a> </td> <td class="Date"> Sent: <span>18.08.2008 20:49:28</span> </td> </tr> <tr> <td class="Body"> <span>Id lacinia lacus arcu non quis mollis sit. Ligula elit. Ultricies elit cursus. Quis ipsum nec rutrum id tellus aliquam. Tortor arcu fermentum nibh justo leo ante vitae fringilla. Pulvinar aliquam. Fringilla mollis facilisis.</span> </td> </tr> And my css for now is: table.ForumThreadViewer td.Date { text-align: left; vertical-align: top; font-size: xx-small; border-bottom: solid 1 black; height: 20px; } table.ForumThreadViewer td.Body { text-align: left; vertical-align: top; border-top: solid 1 black; } table.ForumThreadViewer td.Author { vertical-align: top; text-align: left; } It's working for FF but not for IE. :( A: When you use percentages, they're relative to their container and even then, that only works on some types of element. I imagine for this to work, you need to apply the height to the <tr>s, and give the <table> a height. If the <table> height is relative too, you need to give its container a height too. But looking at your data, are you really sure you should be using a table at all?! A: Oli is right! Give then screenshot you posted, you are using the wrong markup. You could use something more like this: <div class="post"> <div class="author"> <a href="#">Claude</a><br /> <a href="#"><img src="Users/4/Avatar.jpeg" /></a> </div> <div class="content"> <div class="date">Sent: 18.08.2008 20:49:28</div> <div class="body"> This is the content of the message. 
</div> </div> <div class="clear">&nbsp;</div> </div> with css like this: div.post { border: 1px solid #999; margin-bottom: -1px; /* collapse the borders between posts */ } div.author { float: left; width: 150px; border-right: 1px solid #999; } div.content { border-left: 1px solid #999; margin-left: 150px; } div.date { border-bottom: 1px solid #999; } div.clear { clear: both; height: 0; line-height: 0; } A: CSS .Date { height: 50px; } A: I don't fully understand your question. You're saying that if you use this: .Date { height: 10% } .Body { height: 90%; } the .Date td is bigger than it should be? How do you know how big it should be without setting an absolute height? Are you taking borders and padding into account? You could try adding colspan="2" to the .Body td or an extra <td> element with only a non-breaking space (&nbsp;)
{ "language": "en", "url": "https://stackoverflow.com/questions/123216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best SQL library for use in Common Lisp? Ideally something that will work with Oracle, MS SQL Server, MySQL and Postgres. A: At the moment there's no open-source library that supports all the SQL backends you mention. CLSQL comes quite close (lacking only support for MS SQL). The alternatives are: * *CL-RDBMS (which supports Oracle, Postgres through Postmodern and SQLite3) *Postmodern (only Postgres). If you can use a commercial Lisp, you can give CommonSQL a try, included with LispWorks, which supports all the databases you mentioned. CLSQL looks like the most popular open-source library at the moment. Unfortunately, it seems to suffer from bit rot, and the developers had to make some compromises to support all those platforms. If the RDB backend is not a constraint, then I recommend Postmodern. It is very well documented and has a clean API (and a nice small language compiled to SQL). Also, it is well maintained and small enough to keep being understandable and extensible. It focuses only on Postgres, not trying to be all things for all people. A: Allegro Common Lisp has an ODBC library and a MySQL-specific library, both exhaustively documented. I've used the MySQL one; no surprises. A: If you mean Common Lisp by lisp, then there's cl-rdbms. It is heavily tested on Postgres (uses Postmodern as the backend lib), it has a toy SQLite backend, and it also has an OCI-based Oracle backend. It supports abstracting away the different SQL dialects and has an SQL quasi-quote syntax extension installable on e.g. the [] characters. I'm not sure if it's the best, and I'm biased anyway... :) but we ended up rolling our own lib after using CLSQL for a while, which is, I think, the most widely used SQL lib for CL. See the CLiki page about SQL for further reference.
{ "language": "en", "url": "https://stackoverflow.com/questions/123234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Problem with Bash output redirection I was trying to remove all the lines of a file except the last line but the following command did not work, although file.txt is not empty. $cat file.txt |tail -1 > file.txt $cat file.txt Why is it so? A: You can use sed to delete all lines but the last from a file: sed -i '$!d' file * *-i tells sed to replace the file in place; otherwise, the result would write to STDOUT. *$ is the address that matches the last line of the file. *d is the delete command. In this case, it is negated by !, so all lines not matching the address will be deleted. A: Before 'cat' gets executed, Bash has already opened 'file.txt' for writing, clearing out its contents. In general, don't write to files you're reading from in the same statement. This can be worked around by writing to a different file, as above:$cat file.txt | tail -1 >anotherfile.txt $mv anotherfile.txt file.txtor by using a utility like sponge from moreutils:$cat file.txt | tail -1 | sponge file.txt This works because sponge waits until its input stream has ended before opening its output file. A: When you submit your command string to bash, it does the following: * *Creates an I/O pipe. *Starts "/usr/bin/tail -1", reading from the pipe, and writing to file.txt. *Starts "/usr/bin/cat file.txt", writing to the pipe. By the time 'cat' starts reading, 'file.txt' has already been truncated by 'tail'. That's all part of the design of Unix and the shell environment, and goes back all the way to the original Bourne shell. 'Tis a feature, not a bug. 
A: tmp=$(tail -1 file.txt); echo $tmp > file.txt; A: This works nicely in a Linux shell: replace_with_filter() { local filename="$1"; shift local dd_output byte_count filter_status dd_status dd_output=$("$@" <"$filename" | dd conv=notrunc of="$filename" 2>&1; echo "${PIPESTATUS[@]}") { read; read; read -r byte_count _; read filter_status dd_status; } <<<"$dd_output" (( filter_status > 0 )) && return "$filter_status" (( dd_status > 0 )) && return "$dd_status" dd bs=1 seek="$byte_count" if=/dev/null of="$filename" } replace_with_filter file.txt tail -1 dd's "notrunc" option is used to write the filtered contents back, in place, while dd is needed again (with a byte count) to actually truncate the file. If the new file size is greater or equal to the old file size, the second dd invocation is not necessary. The advantages of this over a file copy method are: 1) no additional disk space necessary, 2) faster performance on large files, and 3) pure shell (other than dd). A: Redirecting from a file through a pipeline back to the same file is unsafe; if file.txt is overwritten by the shell when setting up the last stage of the pipeline before tail starts reading off the first stage, you end up with empty output. Do the following instead: tail -1 file.txt >file.txt.new && mv file.txt.new file.txt ...well, actually, don't do that in production code; particularly if you're in a security-sensitive environment and running as root, the following is more appropriate: tempfile="$(mktemp file.txt.XXXXXX)" chown --reference=file.txt -- "$tempfile" chmod --reference=file.txt -- "$tempfile" tail -1 file.txt >"$tempfile" && mv -- "$tempfile" file.txt Another approach (avoiding temporary files, unless <<< implicitly creates them on your platform) is the following: lastline="$(tail -1 file.txt)"; cat >file.txt <<<"$lastline" (The above implementation is bash-specific, but works in cases where echo does not -- such as when the last line contains "--version", for instance). 
Finally, one can use sponge from moreutils: tail -1 file.txt | sponge file.txt A: As Lewis Baumstark says, it doesn't like it that you're writing to the same filename. This is because the shell opens up "file.txt" and truncates it to do the redirection before "cat file.txt" is run. So, you have to tail -1 file.txt > file2.txt; mv file2.txt file.txt A: Just for this case it's possible to use cat < file.txt | (rm file.txt; tail -1 > file.txt) That will open "file.txt" just before connection "cat" with subshell in "(...)". "rm file.txt" will remove reference from disk before subshell will open it for write for "tail", but contents will be still available through opened descriptor which is passed to "cat" until it will close stdin. So you'd better be sure that this command will finish or contents of "file.txt" will be lost A: echo "$(tail -1 file.txt)" > file.txt A: It seems to not like the fact you're writing it back to the same filename. If you do the following it works: $cat file.txt | tail -1 > anotherfile.txt A: tail -1 > file.txt will overwrite your file, causing cat to read an empty file because the re-write will happen before any of the commands in your pipeline are executed.
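The truncation race described in these answers is easy to reproduce outside the shell. This Python sketch (purely illustrative) shows that opening a file for writing empties it before any reader gets a chance — which is exactly what the shell does to file.txt before cat ever runs:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "file.txt")
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

# The shell performs the equivalent of this open() *before* running `cat`,
# which is why `cat file.txt | tail -1 > file.txt` ends up reading nothing.
writer = open(path, "w")          # mode "w" truncates the file immediately
contents = open(path).read()      # a reader now sees an empty file
writer.close()

print(repr(contents))  # -> ''
```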
{ "language": "en", "url": "https://stackoverflow.com/questions/123235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I automate exporting of tables into proper XML files from MSSQL or Access? We have a customer requesting data in XML format. Normally this is not required as we usually just hand off an Access database or csv files and that is sufficient. However in this case I need to automate the exporting of proper XML from a dozen tables. If I can do it out of SQL Server 2005, that would be preferred. However I can't for the life of me find a way to do this. I can dump out raw xml data but this is just a tag per row with attribute values. We need something that represents the structure of the tables. Access has an export in xml format that meets our needs. However I'm not sure how this can be automated. It doesn't appear to be available in any way through SQL so I'm trying to track down the necessary code to export the XML through a macro or vbscript. Any suggestions? A: Look into using FOR XML AUTO. Depending on your requirements, you might need to use EXPLICIT. As a quick example: SELECT * FROM Customers INNER JOIN Orders ON Orders.CustID = Customers.CustID FOR XML AUTO This will generate a nested XML document with the orders inside the customers. You could then use SSIS to export that out into a file pretty easily I would think. I haven't tried it myself though. A: If you want a document instead of a fragment, you'll probably need a two-part solution. However, both parts could be done in SQL Server. It looks from the comments on Tom's entry like you found the ELEMENTS argument, so you're getting the fields as child elements rather than attributes. You'll still end up with a fragment, though, because you won't get a root node. There are different ways you could handle this. SQL Server provides a method for using XSLT to transform XML documents, so you could create an XSL stylesheet to wrap the result of your query in a root element. You could also add anything else the customer's schema requires (assuming they have one). 
If you wanted to leave some fields as attributes and make others elements, you could also use XSLT to move those fields, so you might end up with something like this: <customer id="204"> <firstname>John</firstname> <lastname>Public</lastname> </customer> A: There's an outline here of a macro used to export data from an Access DB to an XML file, which may be of some use to you. Const acExportTable = 0 Set objAccess = CreateObject("Access.Application") objAccess.OpenCurrentDatabase "C:\Scripts\Test.mdb" 'Export the table "Inventory" to test.xml objAccess.ExportXML acExportTable,"Inventory","c:\scripts\test.xml" A: The easiest way to do this that I can think of would be to create a small app to do it for you. You could do it as a basic WinForm and then just make use of a LinqToSql dbml class to represent your database. Most of the time you can just serialize those objects using the XmlSerializer class. Occasionally it is more difficult than that depending on the complexity of your database. Check out this post for some detailed info on LinqToSql and Xml Serialization: http://www.west-wind.com/Weblog/posts/147218.aspx Hope that helps.
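To visualize the nesting that FOR XML AUTO produces for the Customers/Orders example, here is a rough Python approximation. The table and column names are assumptions carried over from the example query above, and real FOR XML AUTO output follows SQL Server's own naming rules (element per table alias, no wrapping root), so treat this only as a sketch of the shape:

```python
import xml.etree.ElementTree as ET

# Toy rows standing in for the Customers and Orders tables in the example query.
customers = [{"CustID": "1", "ContactName": "John Public"}]
orders = [{"OrderID": "10", "CustID": "1"}, {"OrderID": "11", "CustID": "1"}]

root = ET.Element("CustomerList")  # wrapper root added for well-formedness
for c in customers:
    # FOR XML AUTO names each element after its source table
    cust = ET.SubElement(root, "Customers", c)
    for o in orders:
        if o["CustID"] == c["CustID"]:
            ET.SubElement(cust, "Orders", o)  # orders nest inside their customer

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```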
{ "language": "en", "url": "https://stackoverflow.com/questions/123236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I use XPathNodeIterator to iterate through a list of items in an XML file? This is a sample (edited slightly, but you get the idea) of my XML file: <HostCollection> <ApplicationInfo /> <Hosts> <Host> <Name>Test</Name> <IP>192.168.1.1</IP> </Host> <Host> <Name>Test</Name> <IP>192.168.1.2</IP> </Host> </Hosts> </HostCollection> When my application (VB.NET app) loads, I want to loop through the list of hosts and their attributes and add them to a collection. I was hoping I could use the XPathNodeIterator for this. The examples I found online seemed a little muddied, and I'm hoping someone here can clear things up a bit. A: You could load them into an XmlDocument and use an XPath statement to fill a NodeList... Dim doc As XmlDocument = New XmlDocument() doc.Load("hosts.xml") Dim nodeList As XmlNodeList nodeList = doc.SelectNodes("/HostCollection/Hosts/Host") Then loop through the nodes. A: XPathDocument xpathDoc; using (StreamReader input = ...) { xpathDoc = new XPathDocument(input); } XPathNavigator nav = xpathDoc.CreateNavigator(); XmlNamespaceManager nsmgr = new XmlNamespaceManager(nav.NameTable); XPathNodeIterator nodes = nav.Select("/HostCollection/Hosts/Host", nsmgr); while (nodes.MoveNext()) { // access the current Host with nodes.Current }
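For readers coming from other stacks, the same select-and-loop pattern over this document can be sketched with Python's standard library (purely illustrative — this is not part of the .NET solution):

```python
import xml.etree.ElementTree as ET

xml_doc = """<HostCollection>
  <ApplicationInfo />
  <Hosts>
    <Host><Name>Test</Name><IP>192.168.1.1</IP></Host>
    <Host><Name>Test</Name><IP>192.168.1.2</IP></Host>
  </Hosts>
</HostCollection>"""

root = ET.fromstring(xml_doc)
# Relative path from the root element, mirroring /HostCollection/Hosts/Host
hosts = [(h.findtext("Name"), h.findtext("IP")) for h in root.findall("Hosts/Host")]
print(hosts)  # [('Test', '192.168.1.1'), ('Test', '192.168.1.2')]
```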
{ "language": "en", "url": "https://stackoverflow.com/questions/123239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Outlook Plug-In for custom CRM I would like to write a plug-in that will allow a custom-written CRM to read and write to their local Outlook client. I know that this poses a security concern. But, my clients are asking that their CRM "be connected" to Outlook. They would like to be able to do the following: A) When a contact sends them an email (reply or free-standing email), they'd like the details of this email to go INTO the CRM. Yep. They would like me to save the body, time and date it was sent, etc. B) They want to be able to send new emails (or replies to existing emails) from within the CRM itself. Basically, "a form that looks like Outlook's send/reply email form". C) Want the ability to search for contacts and the related emails with a search for tags/keywords facility. (i.e. if a product name or code appears in an email then they want the email returned in the search). D) Having performed a search of many contacts, they will want to prepare a mailer and shoot out some sort of email announcement to their qualified leads. This could be 50, 100, or more persons. So it's got to be able to allow bulk mailing. E) Given a list of new prospects that aren't currently contacts in the CRM, they will want to do the same, and if they get replies from this mailer to the prospects, they will want the replies to be saved in the DB and contacts to be inserted into the DB. F) They would like to be able to utilize the calendar and task list facilities of Outlook from the CRM, as well. More or less, they want this pretty basic (as it is today) CRM that I created to integrate with Outlook and have it do so seamlessly, as if it was an add-on to the CRM. A plug-in is what I am thinking... But, I don't know where to begin. My environment is Windows XP/Vista and is going to be ASP.NET, and I am going to use the VB.NET language to accomplish this. What do I need? Are there resources out there that can describe how to build a plug-in to Outlook as I have been asked to?
This is not Exchange; none of the clients use Exchange (not so far). They all run Outlook. Mostly 2003. Most clients are XP right now but some are upgrading to Vista. For some reason I can't seem to wrap my head around this. I think the whole security issue is thwarting my ability to see past what is probably a simple thing. The client doesn't want to be prompted by any security messages asking them if they are sure they want to send 382 emails to their contacts. Not once and certainly not 382 times. Where do I begin? I've searched the internet for similar but mainly what I found are already-written products, and I've got to write this from scratch. A: I was part of the team that created the original Outlook Plug-In for Franklin Covey time management tools. It was quite an adventure! The first thing I would do is make your client pick a version of Outlook, and stick with it. DO NOT let the client add support for additional Outlook versions, unless they are willing to pay for it, and willing to have the delivery time pushed back to a reasonable date. The team I was with swore by the Slipstick website. There are several solutions to the Outlook security prompts in there. If you can, talk to Microsoft and see if they can get you the object model for the specific version of Outlook you will be working with. We had this model printed on a large scale color printer and put it on a large wall. IIRC, it was something like a 7'x5' object map. This helped tons. You might end up creating specific classifications/namespaces for your Outlook code. It's been a while, but I remember something about a dot notation like .Email, .Task, and several others. I had to create a couple of new dot namespaces for the Outlook Task object. As razorfish noted, look up the new Visual Studio For Office Tools. This has made some stuff a lot easier. Talk to your client and find out if they will need to connect to Exchange servers. There were two distinct ways of building Plug-ins.
One mode only worked with Outlook itself, while the other talked with Exchange. This is very important to your development efforts. The models are VERY different and will cost you extra time if you pick the wrong one. EDIT: There are a couple of books that were helpful with this. The books are for Outlook 2000, so you might want to see if there are updated versions. Building Applications with Microsoft Outlook 2000 Technical Reference Building Applications using Outlook 2000, CDO, Exchange, and Visual Basic Both have a lot of information on how to do deep integrations with Outlook. A: You should take a look at the Visual Studio for Office Tools. You can easily create add-ins for Outlook, Word, Excel ... pretty much the entire Microsoft Office family of products. You can also take a look at Add-In Express, but I didn't have much luck with their controls, and the VSTO for 2008 is extremely easy to use. A: Check out Kayxo Insight. It's a framework for creating the kind of solution you are describing. A: Check out www.softomate.com; they offer plugins and integration solutions for various projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/123261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Convert a string to a date in .net I'm reading text from a flat file in C# and need to test whether certain values are dates. They could be in either YYYYMMDD format or MM/DD/YY format. What is the simplest way to do this in .Net? A: DateTime.TryParse method A: You could also try TryParseExact to set an exact format; here's the documentation: http://msdn.microsoft.com/en-us/library/ms131044.aspx e.g. DateTime outDt; bool blnYYYYMMDD = DateTime.TryParseExact(yourString,"yyyyMMdd" ,CultureInfo.CurrentCulture,DateTimeStyles.None , out outDt); I hope this helps. A: string[] formats = {"yyyyMMdd", "MM/dd/yy"}; var Result = DateTime.ParseExact(input, formats, CultureInfo.CurrentCulture, DateTimeStyles.None); or DateTime result; string[] formats = {"yyyyMMdd", "MM/dd/yy"}; DateTime.TryParseExact(input, formats, CultureInfo.CurrentCulture, DateTimeStyles.None, out result); More info in the MSDN documentation on ParseExact and TryParseExact. A: You can also do Convert.ToDateTime; I'm not sure of the advantages of either. A: Using TryParse will not throw an exception if it fails. Also, TryParse will return True/False, indicating the success of the conversion. Regards... A: You can use the TryParse method to check validity and parse at the same time. DateTime output; string input = "09/23/08"; if (DateTime.TryParseExact(input,"MM/dd/yy", DateTimeFormatInfo.InvariantInfo, DateTimeStyles.None, out output) || DateTime.TryParseExact(input,"yyyyMMdd", DateTimeFormatInfo.InvariantInfo, DateTimeStyles.None, out output)) { //handle valid date } else { //handle invalid date }
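The try-each-format idea in these answers translates directly to other languages. Here is a Python sketch for comparison (parse_flexible is an illustrative name, not a library function):

```python
from datetime import datetime

def parse_flexible(value):
    """Try each expected format in turn; return None when nothing matches,
    mirroring the TryParseExact-with-a-format-array pattern above."""
    for fmt in ("%Y%m%d", "%m/%d/%y"):   # YYYYMMDD, then MM/DD/YY
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None

print(parse_flexible("20080923"))    # 2008-09-23 00:00:00
print(parse_flexible("09/23/08"))    # 2008-09-23 00:00:00
print(parse_flexible("not a date"))  # None
```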
{ "language": "en", "url": "https://stackoverflow.com/questions/123263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Why am I getting "multiple cascade paths" with this table relationship? I have the following table relationship in my database: Parent / \ Child1 Child2 \ / GrandChild I am trying to create the FK relationships so that the deletion of the Parent table cascades to both child and the grandchild table. For any one particular granchild, it will either be parented to one or the other child tables, but never both at the same time. When I'm trying to add ON DELETE CASCADE to the FK relationships, everything is fine adding them to one "side" of the two children (Parent-Child1-GrandChild is fine for Cascade Delete). However, as soon as I add the Cascade Delete on the Child2 "side" of the relationship SQL tells me that the FK would cause multiple cascade paths. I was under the impression that multiple cascade paths only apply when more than one FK indicates the SAME table. Why would I be getting the multiple cascade paths error in this case? PS The table relationships at this point would be very difficult to change so simply telling me to change my table structure is not going to be helpful, thanks. A: The message means that if you delete a Parent record, there are two paths that lead to all deletable GrandChild records. Fix: Remove the ON DELETE CASCADE options in the FKs, and create INSTEAD OF DELETE triggers for the ChildX tables, deleting all grandchild records, and then the childX records themselves.
{ "language": "en", "url": "https://stackoverflow.com/questions/123274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where are the explanations of the warnings from the VS 2005 Code Analysis tool? Does anyone know where to find an explanation of the warnings in the VS 2005 Code Analysis tool? I would like some documentation explaining why it creates the warnings it does and what course of action one should take. A: You should be able to right-click the warnings it gives you in the Error List and view Error Help, right from within Visual Studio. There's also a section of MSDN articles, if you'd prefer. A: I'm not sure which code analysis tool you are referring to. If you mean FxCop, look here: http://msdn.microsoft.com/en-us/library/bb429379(VS.80).aspx If you mean StyleCop, see the download here: http://code.msdn.microsoft.com/sourceanalysis/Release/ProjectReleases.aspx?ReleaseId=1425
{ "language": "en", "url": "https://stackoverflow.com/questions/123290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AJAX Thumbnails Does anyone know of any free frameworks that help you create thumbnails for web pages, so that when clicked, the original image loads on top of the page (modal-like), as opposed to opening a separate page. Thanks! A: This is really 2 questions in 1. The "lightbox" display of the original, larger sized, images in a modal box is handled in JavaScript with a library such as ThickBox. The resizing of the images can be done manually, or via some kind of code on the server side. Sitepoint has a decent guide on how to resize server side with PHP. I hope this helps point you in the right direction. A: Lightbox. http://planetozh.com/projects/lightbox-clones/ A: I've used DhoniShow a few times in the past, and clients really liked it. It's not AJAX, as it loads all the full-sized images on page load, but if you were motivated I'm sure you could make that change pretty easily. A: As has already been said, you can use Lightbox to get the effect for the user; I would use it with its AJAX mode, loading the thumbnail as its contents. Now for the thumbnail, I would recommend something like http://megasnaps.com/ which you can use for free. Good luck.
{ "language": "en", "url": "https://stackoverflow.com/questions/123292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Do you use the branches/tags/trunk convention? Do you always follow the convention of putting branches, tags and trunk directories at the top level of your Subversion repository? Lately, I have stopped bothering and nothing bad has happened (yet)! It should be possible to move directory trees around if there's ever a need to create them. Am I building up trouble for later? A: Quick answer is "do whatever best suits your procedures". As Danimal said the structure of branch/trunk/tag is a convention. However, I don't agree that it's the location of the b/t/t that is important, merely the existence of them. What you should have is somewhere that is obviously designated for branches, somewhere designated for your trunk and same for your tags. Exactly where they fall depends very much on the structure of your repository and the nature of the files that you're keeping. For example, if you are keeping multiple projects in one repository you'll probably find that it makes more sense to create your b/t/t directories under your projects. If you have distinct modules within your project then the b/t/t should be created under the module directories. Ask yourself what the biggest logical chunk is going to be that you wish to branch and be guided by that. A: Have you tried branching or tagging yet? Until then, there's no problem. However, an added benefit of using the branches,tags,trunk convention is that it's exactly that -- a convention. People expect to see that, so they know what to do when they need to fork. A: This depends on how big the project is. We have some stuff (granted, in git, but the concept is the same) that is fairly big. Every developer uses his/her own branch, there is a testing and mainline branch. We also tag the releases, and if there are version-specific fixes, a branch is created so fixes can be integrated fairly easy. This setup has advantages: we don't get in each others hair during developement. 
But the downside is that we need an integrator to put the commits from the developers branch into the testing branch, and then to the mainline one. If the project is small, then it's just overhead, but you never know how big a project will get. A: I just started to actually use the convention, and I agree with Danimal. If you have one build in QA, and another in Production, and another in crazy-new-experimental feature development, it's nice to quickly switch back and forth between them. A: I've written tools in the past to automate certain pieces of SVN. Creating a basic repository is one of them. Step 1: create an empty repository. Step 2: create trunk, branches and tags folders - commit Step 3: Copy hook scripts to new repository One of my hook scripts is to make sure that the items in the tags directory cannot be modified. This makes tags have a meaning different from branches. A: Nope, have abandoned that approach for the projects currently in the queue. While the concept seems very valid, it just seems to waste more time than it saves in practice. A: Do you at least have a trunk? If not, when you do need to branch or tag, you will have to have those sitting in your root project directory, alongside the actual code/contents. Yikes! EDIT: I guess you could create a trunk folder, then move everything into that, then create your branches etc... To those saying "just do it later, don't waste time, etc..." Honestly though, how much overhead is it to create them at the outset of your project? 2 minutes, tops? Why not just do it then? It will take much longer to move everything later - even if you only end up needing to branch 1 in 5 times, I still think you'd use less time starting with a branch, tag, trunk structure. A: As I said in What do "branch", "tag" and "trunk" mean in Subversion repositories?, since branch and tag are the same, you are not obliged to follow any convention but your own. Especially for a small project with sequential development (i.e. 
no need for parallel efforts between current development, maintenance of older versions, exploration of alternative frameworks, ...) A: I'll generally keep my trunk in the root of the repository and only move it into a Trunk folder if I actually need to create a tag or a branch. I think with SVN, as long as your structure is logical, you should have no trouble rearranging it later if your needs change. A: I use trunk, tags and branches on every project. Seriously, how hard is it to create 2 extra directories when you create the project? There is some benefit to following the convention just to maintain consistency. I find that I have lots of tags (each push of an app outside the developer environment gets versioned and tagged). I don't have so many branches because I'm generally not working with people I don't trust with a commit prior to review. So, usually when I get branches, it's because I have a permanent splitting of the codebase - usually for different clients. Once the code becomes irreconcilable, I generally stop a branch and move it to its own trunk. A: Lately I'm using a model more focused on agile, and you can take a look here. It's really important to follow some policies in version control, because even using a well-defined model, code versioning is by nature something that leads you to commit mistakes, messy merges, and all that bad stuff, so be careful. This model gives responsibilities for each repository and does not let you overlap where your production, deliverable and under-construction code lies. A: I follow the convention for numerous reasons * *Reference material and procedures which use the b/t/t convention can be instantly applied to your svn repo structure. *All developers coming into the team who are familiar with the convention have a minimal learning curve to get used to your svn repo structure.
*Whereas trunks & branches have an immediate and obvious benefit, it's only when you're having to trawl through histories and logs to cover your or your company's ass that you realise the benefit of maintaining a consistent tagging procedure. In short, it may not be immediately obvious why the convention is a good thing, but it's when you need help, advice, or some management craziness that it becomes a proverbial godsend. A: I like to use the branches for "mini-projects" for simple proofs of concept. It's fast, easy and generally helps to keep up with your main project. I put proofs of concept in the branches directory since they aren't a part of the main project but are of value to the project. Like others have mentioned, I use the tags for releases. Most releases I do are in versions so I generally just have a zip file of the package or the versioned installer. A: No. Not the last 3 work situations. I work with non-programmers who need to write, fix, and recall processing scripts. Programming is mostly casual, with occasionally deeper or bigger work. There's no expectation of following big-time software developers' practices. The standard repository terminology can clash with jargon used in the field we work in. So we make up our own repository directories. A: In a very straightforward environment, you can get away with leaving out the branch, tag, trunk from the top of your SVN repository. For example, if you're using SVN for your university assignments, you're not going to be very concerned about changes to the code after it gets released to your customer (the person marking the assignment), and so you could sensibly dispense with branch, tag, trunk, and just have one structure. (Effectively, the whole thing is the 'trunk'.)
If, on the other hand, you've been managing code that is deployed to 700 different sites and that is split across separate product lines, you'd be insane not to use 'branch, tag, trunk' near the top of your structure (there's a sensible case for splitting your products before going down the BTT route), since you're going to need to know what code went where, and to be able to separate major rewrite activity (the stuff you do in the trunk) from spot fixes to help a site having an immediate problem (which you do in a branch, then merge into the trunk). And if you want to be able to answer the question "Why did the Foobar stop working when we rolled out patch 1.2.3?" then tags are essential.
{ "language": "en", "url": "https://stackoverflow.com/questions/123295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Translating Int32 into ushort and back again I am attempting to devise a system for packing integer values greater than 65535 into a ushort. Let me explain. We have a system which generates Int32 values using an IDENTITY column from SQL Server and are limited by an in-production client API that overflows our Int32 IDs to ushorts. Fortunately the client only has about 20 or so instances of things with these IDs - let's call them packages - at any given time and it only needs to have them unique amongst local siblings. The generally accepted solution is to translate our Int32 IDs to ushorts (and no I don't mean casting, I mean translating) before transmitting them to the client, however there are barbs with this approach: * *Some IDs less than 65535 may still be in play on a given client at any time due to non-expiration. *We cannot have any ID collisions - that is if package ID 1 goes to the client, an algorithm that tracks how many times 65535 is removed from an Int32 to make a ushort when applied to 65536 would also result in 1 thus causing a collision. *We must be able to reconstruct the ushort into the Int32 upon return. What we have available to solve this problem is a single signed byte field that is echoed to us and gives us 127 values to play with (really 117 because we're using 0-9 for something else). I'll refer to this as the "byte field" from here on out. We have discussed three different translation routines: * *Multiplicative: store in the byte field how many times we remove 65535 from our Int32 to make a ushort. This has collision problems as detailed above. *Serialized Session State: for each client, generate a session ID based on facts about that client. Then store a 1:1 translation table starting from 1 up to the number of packages delivered so when the client accesses our server again the inventory of packages can be translated back to their known database IDs. 
This has overhead problems since we'd be backing serialized session state to a database and we want to support hundreds to thousands of transactions a second. *Varied algorithmic approach where the byte field is an ID of a transformative algorithm that takes an Int32 and transforms it into a ushort. Obviously many of these are going to be simple Multiplicative (to increase our ceiling of IDs we can transform) but some will have to be multiplicative with a smaller boundary (like 32768) with a number added to/subtracted from to get as close to a number that can be guaranteed unique amongst siblings. This approach is processor-intensive but should allow us to avoid collisions while remaining scalable (though with this approach we have a limited ceiling that won't be reached before the ushort problem goes away on its own due to upgrades). So my question is: is there a better way than my approaches above, and if not, what should I be looking for in terms of algorithms (for approach #3) to generate a number between 1-65535 when a given number is greater than 0 and must not be a one-way hash? Clarification: it's not that the ushort ceiling is the greatest problem, it's that the client API uses a ushort so I cannot combine the byte field on the client to get bigger values (the client API is non-upgradeable but will eventually phase out of existence).
If it's not a web application, you could maintain the map on the client itself. I don't see any algorithmic way that would avoid collisions - I suspect you could always come up with an examples that would collide. A: Regarding approach 2: Your second approach is pretty much how NAT works. Every TCP/UDP client on the local network has up to 65535 ports in use (except port 0) and a private IP. The router knows only a single public IP. Since two clients may both have source port 300, it cannot simply just replace the private IP with a public one, that would cause collisions to appear. Thus it replaces the IP and "translates" the port (NAT: Network Address Translation). On return, it translates the port back and replaces the public with a private IP again, before forwarding the package back. You'd be doing nothing else than that. However, routers keep that information in memory - and they are not too slow when doing NAT (companies with hundreds of computers are NATed to the Internet sometimes and the slow down is hardly noticeably in most cases). You say you want up to thousand transactions a second - but how many clients will there be? As this mainly will define the size of memory needed to backup the mappings. If there are not too many clients, you could keep the mapping with a sorted table in memory, in that case, speed will be the smallest problem (table getting to bigger and server running out of memory is the bigger one). What is a bit unclear to me is that you once say Fortunately the client only has about 20 or so instances of things with these IDs - let's call them packages - at any given time and it only needs to have them unique amongst local siblings. but then you say Some IDs less than 65535 may still be in play on a given client at any time due to non-expiration. I guess, what you probably meant by the second statement is, that if a client requests ID 65536, it might still have IDs below 65535 and these can be as low as (let's say) 20. 
It's not that the client processes IDs in a straight order, right? So you cannot say, just because it now requested 65536, it may have some smaller values, but certainly not in the range 1-1000, correct? It might actually keep a reference to 20, 90, 2005 and 41238 and still go over 65535 - that's what you meant? I personally like your second approach more than the third one, as it is easier to avoid a collision in any case and translating the number back is a plain, simple operation. Although I doubt that your third approach can work in the long run. Okay, you might have a byte to store how often you subtracted 2^16 from the number. However, you can only subtract 117 * 2^16 as largest numbers. What will you do if numbers go above that? Use a different algorithm that does not subtract, but does what? Divide? Shift bits? In that case you lose granularity, which means the algorithm can't hit every possible number any longer (it will make large jumps). If it were so easy to just apply a magic translation function to a 32-bit number to make 16 bits (plus one extra byte) out of it and then just transform it back, then every compression method in this world would use it, as it could, no matter what the 32-bit number was, always compress it down to 24 bits (16 bits + one byte). That would be magic. It is not possible to pack 32 bits into 24 bits and also pack all the logic for how to transform it back in there as well. You will need some external storage, which brings us back to your 2nd approach. This is the only approach that will work, and it will work for every number in the 32-bit number range. A: How much "more" than 65535 do you need? You could always just add a few bits from your "byte field" as the high-order bits of the ID. Just 2 bits would get you to 262,143, 3 bits would get you 524,287.
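For what it's worth, the NAT-style per-session translation table discussed above can be sketched in a few lines. This is only an illustration - all the class and method names here are made up, not part of the original poster's API - and expiry is driven by whatever "package dropped" notification the real system has:

```csharp
using System;
using System.Collections.Generic;

// NAT-style translation: hand out a free ushort "slot" for each real Int32 ID
// and translate it back on the way in. One instance per client session.
public class SessionIdMap
{
    private readonly Dictionary<int, ushort> _toSlot = new Dictionary<int, ushort>();
    private readonly Dictionary<ushort, int> _toId = new Dictionary<ushort, int>();
    private ushort _next = 1; // 0 is reserved, like port 0 in NAT

    public ushort Translate(int realId)
    {
        ushort slot;
        if (_toSlot.TryGetValue(realId, out slot))
            return slot;                          // already mapped for this session
        if (_toId.Count >= ushort.MaxValue)
            throw new InvalidOperationException("No free slots left in this session.");
        while (_next == 0 || _toId.ContainsKey(_next))
            _next++;                              // skip 0 and slots still in play (wraps harmlessly)
        slot = _next++;
        _toSlot[realId] = slot;
        _toId[slot] = realId;
        return slot;
    }

    public int Resolve(ushort slot) { return _toId[slot]; }

    public void Expire(ushort slot)               // call when the client drops the package
    {
        int realId;
        if (_toId.TryGetValue(slot, out realId))
        {
            _toId.Remove(slot);
            _toSlot.Remove(realId);
        }
    }
}
```

Since the question says only ~20 IDs are live per client at a time, each session's map stays tiny; the cost is purely the session-state persistence the poster was worried about.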
{ "language": "en", "url": "https://stackoverflow.com/questions/123301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How deep is the Win32 message queue? How many messages does the queue for a standard window hold? What happens when the queue overflows? The documentation for GetMessage and relatives doesn't say anything about this, and PeekMessage only gives you a yes/no for certain classes of messages, not a message count. This page says that the queues are implemented using memory-mapped files, and that there is no message count limit, but that page is about WinCE. Does this apply to desktop Win32 as well? A: As stated in the MSDN article, if you need to worry about the size of the message queue, you might be better off redesigning your application. A: 10000 by default, but it can be adjusted via the registry. If queue overflows, PostMessage fails. Documentation here: PostMessage function on MSDN
{ "language": "en", "url": "https://stackoverflow.com/questions/123323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What is the best practice for creating an ASP.NET MVC control or helper? I'm looking to build a reusable control or custom helper for my MVC project. I'm sure there is a best practice for this. Any help is appreciated. A: Sure, the best practices are encapsulated in the HtmlHelpers and other helpers in the MVC source code. You can get the source at http://www.codeplex.com/aspnet A: (Hate to disagree with you Scott) Using HtmlHelpers is not a best practice. It is not a practice that has been sharpened through use on real projects and is only one approach. I have recently written a little post on using SubControllers. We use HtmlHelpers for some small things, but we use SubControllers for greater control, composition and dependency inversion.
{ "language": "en", "url": "https://stackoverflow.com/questions/123327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can I use RegEx and VI (or something similar) to autofill some text for me? NOTE: I am not set on using VI, it is just the first thing that came to mind that might be able to do what I need. Feel free to suggest any other program. I have a form with nearly 100 fields that I would like to auto-fill with PHP. I know how to do the autofill, but I would like to avoid manually adding the needed text to 100 fields. Is there an automated way I can take the text: <input name="riskRating" id="riskRating" type="text" /> and change it to: <input name="riskRating" id="riskRating" type="text" value="<?php echo $data['riskRating']; ?>" /> Remember that I am wanting to do this to almost 100 fields. I am trying to avoid going to each field, pasting in the PHP code and changing the variable name manually. I'm hoping some VI guru out there knows off the top of his/her head. A: Taking some ideas from Zsolt Botykai and Mark Biek: :%s:<input\(.* id="\([^"]*\)".*\) />:<input \1 value="<?php echo $data['\2']; ?>" />:g A: :%s:\(<input name="\([^"]\+\)" id="[^"]\+" type="text" \)/>:\1value="<?php echo $data['\2']; ?>" />:gci That's one line. HTH. A: I did it like this. I'm not sure how to escape it to work in vim though. I'll edit if I can figure it out. This is the search part of the regex: <input (.*) id="(.*?)" (.*) /> This is the replace part: <input \1 id="\2" \3 value="<?php echo $data['\2']; ?>" /> A: step 1 : search for the string type="text" : /type="text" Verify that all the strings you want are caught. step 2 : Substitute with the wanted string (the empty pattern reuses the last search) : :%s//type="text" value="<?php echo $data['riskRating']; ?>"/g step 3 : Be happy !
{ "language": "en", "url": "https://stackoverflow.com/questions/123334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What causes java.lang.IllegalStateException: Post too large in tomcat / mod_jk What configuration needs to be tweaked, and where does it live, in order to increase the maximum allowed post size? A: This may help other people: since you are coupling Apache HTTP Server and Tomcat (tomcat / mod_jk), edit the Coyote/JK2 AJP 1.3 connector the same way you would the standard connector (Coyote HTTP/1.1), because the AJP 1.3 connector is where Tomcat receives its data. <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 --> <Connector port="8009" enableLookups="false" redirectPort="8443" debug="0" protocol="AJP/1.3" maxPostSize="0"/> A: Apache Tomcat by default sets a limit on the maximum size of HTTP POST requests it accepts. In Tomcat 5, this limit is set to 2 MB. When you try to upload files larger than 2 MB, this error can occur. The solution is to reconfigure Tomcat to accept larger POST requests, either by increasing the limit, or by disabling it. This can be done by editing [TOMCAT_DIR]/conf/server.xml. Set the Tomcat configuration parameter maxPostSize for the HTTP connector to a larger value (in bytes) to increase the limit. Setting it to 0 will disable the size check. See the Tomcat Configuration Reference for more information. A: The root cause of an IllegalStateException in general is a servlet attempting to write to the output stream after the response has been committed. Take care that no content is added to the response after redirecting/dispatching the request.
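For reference, the same attribute applied to the standard HTTP connector in [TOMCAT_DIR]/conf/server.xml might look like the sketch below. The port and other attributes shown are just the common defaults - check them against your own server.xml; only maxPostSize is the point here (value in bytes, 0 disables the check):

```xml
<!-- Coyote HTTP/1.1 connector: raise the POST limit to 10 MB -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxPostSize="10485760" />
```

If requests arrive through mod_jk, remember that this change alone is not enough - the AJP connector shown above is the one that actually receives the data.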
{ "language": "en", "url": "https://stackoverflow.com/questions/123335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can you strip non-ASCII characters from a string? (in C#) How can you strip non-ASCII characters from a string? (in C#) A: I believe MonsCamus meant: parsememo = Regex.Replace(parsememo, @"[^\u0020-\u007E]", string.Empty); A: I found the following slightly altered range useful for parsing comment blocks out of a database. This means that you won't have to contend with tab and escape characters, which would cause a CSV field to become upset. parsememo = Regex.Replace(parsememo, @"[^\u001F-\u007F]", string.Empty); If you want to avoid other special characters or particular punctuation, check the ASCII table. A: No need for regex. Just use encoding... sOutput = System.Text.Encoding.ASCII.GetString(System.Text.Encoding.ASCII.GetBytes(sInput)); A: I came here looking for a solution for extended ASCII characters, but couldn't find one. The closest I found is bzlm's solution, but that works only for ASCII codes up to 127 (obviously you can replace the encoding type in his code, but I think it was a bit complex to understand; hence, sharing this version). Here's a solution that works for extended ASCII codes up to 255, which is ISO 8859-1. It finds and strips out non-ASCII characters (greater than 255): Dim str1 as String= "â, ??î or ôu� n☁i✑++$-♓!‼⁉4⃣od;/⏬'®;☕:☝)///1!@#" Dim extendedAscii As Encoding = Encoding.GetEncoding("ISO-8859-1", New EncoderReplacementFallback(String.empty), New DecoderReplacementFallback()) Dim extendedAsciiBytes() As Byte = extendedAscii.GetBytes(str1) Dim str2 As String = extendedAscii.GetString(extendedAsciiBytes) console.WriteLine(str2) 'Output : â, ??î or ôu ni++$-!‼⁉4od;/';:)///1!@#$%^yz: Here's a working fiddle for the code. Replace the encoding as per the requirement; the rest should remain the same. A: string s = "søme string"; s = Regex.Replace(s, @"[^\u0000-\u007F]+", string.Empty); The ^ is the not operator. It tells the regex to find everything that doesn't match, instead of everything that does match. 
The \u####-\u#### says which characters match. \u0000-\u007F is the equivalent of the first 128 characters in UTF-8 or Unicode, which are always the ASCII characters. So you match every non-ASCII character (because of the not) and do a replace on everything that matches. (as explained in a comment by Gordon Tucker Dec 11, 2009 at 21:11) A: This is not optimal performance-wise, but a pretty straightforward LINQ approach: string strippedString = new string( yourString.Where(c => c <= sbyte.MaxValue).ToArray() ); The downside is that all the "surviving" characters are first put into an array of type char[] which is then thrown away after the string constructor no longer uses it. A: If you want not to strip, but to actually convert Latin accented characters to non-accented ones, take a look at this question: How do I translate 8bit characters into 7bit characters? (i.e. Ü to U) A: Here is a pure .NET solution that doesn't use regular expressions: string inputString = "Räksmörgås"; string asAscii = Encoding.ASCII.GetString( Encoding.Convert( Encoding.UTF8, Encoding.GetEncoding( Encoding.ASCII.EncodingName, new EncoderReplacementFallback(string.Empty), new DecoderExceptionFallback() ), Encoding.UTF8.GetBytes(inputString) ) ); It may look cumbersome, but it should be intuitive. It uses the .NET ASCII encoding to convert a string. UTF8 is used during the conversion because it can represent any of the original characters. It uses an EncoderReplacementFallback to convert any non-ASCII character to an empty string. A: Inspired by philcruz's Regular Expression solution, I've made a pure LINQ solution public static string PureAscii(this string source, char nil = ' ') { var min = '\u0000'; var max = '\u007F'; return source.Select(c => c < min ? nil : c > max ? nil : c).ToText(); } public static string ToText(this IEnumerable<char> source) { var buffer = new StringBuilder(); foreach (var c in source) buffer.Append(c); return buffer.ToString(); } This is untested code. 
A: I used this regex expression: string s = "søme string"; Regex regex = new Regex(@"[^a-zA-Z0-9\s]", (RegexOptions)0); return regex.Replace(s, ""); A: I use this regular expression to filter out bad characters in a filename. Regex.Replace(directory, "[^a-zA-Z0-9\\:_\- ]", "") That should be all the characters allowed for filenames. A: public string ReturnCleanASCII(string s) { StringBuilder sb = new StringBuilder(s.Length); foreach (char c in s) { if ((int)c > 127) // you probably don't want 127 either continue; if ((int)c < 32) // I bet you don't want control characters continue; if (c == '%') continue; if (c == '?') continue; sb.Append(c); } return sb.ToString(); } A: If you want a string with only ISO-8859-1 characters, excluding characters which are not standard, you should use this expression : var result = Regex.Replace(value, @"[^\u0020-\u007E\u00A0-\u00FF]+", string.Empty); Note : Using the Encoding.GetEncoding("ISO-8859-1") method will not do the job, because undefined characters are not excluded. .Net Fiddle sample Wikipedia ISO-8859-1 code page for more details. A: I did a bit of testing, and @bzlm's answer is the fastest valid answer. But it turns out we can do much better. The conversion using encoding is equivalent to the following code when inlining Encoding.Convert: public static string StripUnicode(string unicode) { Encoding dstEncoding = GreedyAscii; Encoding srcEncoding = Encoding.UTF8; return dstEncoding.GetString(dstEncoding.GetBytes(srcEncoding.GetChars(srcEncoding.GetBytes(unicode)))); } As you can clearly see, we perform two redundant actions by re-encoding UTF8. Why is that, you may ask? C# exclusively stores strings as UTF16 graphemes. These can of course also be UTF8 graphemes, since Unicode is intercompatible. (Sidenote: @bzlm's solution breaks UTF16 characters, which may throw an exception during transcoding.) => The operation is independent of the source encoding, since it always is UTF16. 
Let's get rid of the redundant re-encoding, and prevent edge-case failures. public static string StripUnicode(string unicode) { Encoding dstEncoding = GreedyAscii; return dstEncoding.GetString(dstEncoding.GetBytes(unicode)); } We already have a simplified and perfectly workable solution, which requires less than half as much time to compute. There is not much more performance to be gained, but for further memory optimization we can do two things: * *Accept a ReadOnlySpan<char> for a more usable API. *Attempt to fit the temporary byte[] onto the stack; otherwise use an array pool. public static string StripUnicode(ReadOnlySpan<char> unicode) { return EnsureEncoding(unicode, GreedyAscii); } /// <summary>Produces a string which is compatible with the limiting encoding</summary> /// <remarks>Ensure that the encoding does not throw on illegal characters</remarks> public static string EnsureEncoding(ReadOnlySpan<char> unicode, Encoding limitEncoding) { int asciiBytesLength = limitEncoding.GetMaxByteCount(unicode.Length); byte[]? asciiBytes = asciiBytesLength <= 2048 ? null : ArrayPool<byte>.Shared.Rent(asciiBytesLength); Span<byte> asciiSpan = asciiBytes ?? stackalloc byte[asciiBytesLength]; asciiBytesLength = limitEncoding.GetBytes(unicode, asciiSpan); asciiSpan = asciiSpan.Slice(0, asciiBytesLength); string asciiChars = limitEncoding.GetString(asciiSpan); if (asciiBytes is { }) { ArrayPool<byte>.Shared.Return(asciiBytes); } return asciiChars; } private static Encoding GreedyAscii { get; } = Encoding.GetEncoding(Encoding.ASCII.EncodingName, new EncoderReplacementFallback(string.Empty), new DecoderExceptionFallback()); You can see this snippet in action on sharplab.io A: Necromancing. 
Also, the method by bzlm can be used to remove characters that are not in an arbitrary charset, not just ASCII: // https://en.wikipedia.org/wiki/Code_page#EBCDIC-based_code_pages // https://en.wikipedia.org/wiki/Windows_code_page#East_Asian_multi-byte_code_pages // https://en.wikipedia.org/wiki/Chinese_character_encoding System.Text.Encoding encRemoveAllBut = System.Text.Encoding.ASCII; encRemoveAllBut = System.Text.Encoding.GetEncoding(System.Globalization.CultureInfo.InstalledUICulture.TextInfo.ANSICodePage); // System-encoding encRemoveAllBut = System.Text.Encoding.GetEncoding(1252); // Western European (iso-8859-1) encRemoveAllBut = System.Text.Encoding.GetEncoding(1251); // Windows-1251/KOI8-R encRemoveAllBut = System.Text.Encoding.GetEncoding("ISO-8859-5"); // used by less than 0.1% of websites encRemoveAllBut = System.Text.Encoding.GetEncoding(37); // IBM EBCDIC US-Canada encRemoveAllBut = System.Text.Encoding.GetEncoding(500); // IBM EBCDIC Latin 1 encRemoveAllBut = System.Text.Encoding.GetEncoding(936); // Chinese Simplified encRemoveAllBut = System.Text.Encoding.GetEncoding(950); // Chinese Traditional encRemoveAllBut = System.Text.Encoding.ASCII; // putting ASCII again, as to answer the question // https://stackoverflow.com/questions/123336/how-can-you-strip-non-ascii-characters-from-a-string-in-c string inputString = "RäksmörПривет, мирgås"; string asAscii = encRemoveAllBut.GetString( System.Text.Encoding.Convert( System.Text.Encoding.UTF8, System.Text.Encoding.GetEncoding( encRemoveAllBut.CodePage, new System.Text.EncoderReplacementFallback(string.Empty), new System.Text.DecoderExceptionFallback() ), System.Text.Encoding.UTF8.GetBytes(inputString) ) ); System.Console.WriteLine(asAscii); AND for those that just want to remove the accents: (caution, because Normalize != Latinize != Romanize) // string str = Latinize("(æøå âôû?aè"); public static string Latinize(string stIn) { // Special treatment for German Umlauts stIn = stIn.Replace("ä", "ae"); stIn = 
stIn.Replace("ö", "oe"); stIn = stIn.Replace("ü", "ue"); stIn = stIn.Replace("Ä", "Ae"); stIn = stIn.Replace("Ö", "Oe"); stIn = stIn.Replace("Ü", "Ue"); // End special treatment for German Umlauts string stFormD = stIn.Normalize(System.Text.NormalizationForm.FormD); System.Text.StringBuilder sb = new System.Text.StringBuilder(); for (int ich = 0; ich < stFormD.Length; ich++) { System.Globalization.UnicodeCategory uc = System.Globalization.CharUnicodeInfo.GetUnicodeCategory(stFormD[ich]); if (uc != System.Globalization.UnicodeCategory.NonSpacingMark) { sb.Append(stFormD[ich]); } // End if (uc != System.Globalization.UnicodeCategory.NonSpacingMark) } // Next ich //return (sb.ToString().Normalize(System.Text.NormalizationForm.FormC)); return (sb.ToString().Normalize(System.Text.NormalizationForm.FormKC)); } // End Function Latinize
{ "language": "en", "url": "https://stackoverflow.com/questions/123336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "269" }
Q: Restart IIS if new dll dropped in bin? Do I have to restart IIS if I drop a new DLL in the bin of my virtual directory? A: No you don't have to restart IIS. However, your worker process will automatically recycle itself. A: No you do not have to, the application will recycle, but an IISReset is NOT needed A: If your application is an ASP.NET app, I believe the AppDomain will restart, but the worker process (w3wp.exe) will NOT. For most purposes, an AppDomain reset is sufficient to clear the state but for some (generally to do with unmanaged DLLs having been loaded in the process) this may not be sufficient. In these cases, IISRESET will work.
{ "language": "en", "url": "https://stackoverflow.com/questions/123337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Command-line Unix ASCII-based charting / plotting tool Is there a good command-line UNIX charting / graphing / plotting tool out there? I'm looking for something that will plot xy points on an ASCII graph. Just to clarify, I'm looking for something that will output a graph in ASCII (like ascii-art style), so I can use it over an interactive shell session without needing X. A: While gnuplot is powerful, it's also really irritating when you just want to pipe in a bunch of points and get a graph. Thankfully, someone created eplot (easy plot), which handles all the nonsense for you. It doesn't seem to have an option to force terminal graphs; I patched it like so: --- eplot.orig 2012-10-12 17:07:35.000000000 -0700 +++ eplot 2012-10-12 17:09:06.000000000 -0700 @@ -377,6 +377,7 @@ # ---- print the options com="echo '\n"+getStyleString+@oc["MiscOptions"] com=com+"set multiplot;\n" if doMultiPlot + com=com+"set terminal dumb;\n" com=com+"plot "+@oc["Range"]+comString+"\n'| gnuplot -persist" printAndRun(com) # ---- convert to PDF An example of use: [$]> git shortlog -s -n | awk '{print $1}' | eplot 2> /dev/null 3500 ++-------+-------+--------+--------+-------+--------+-------+-------++ + + + "/tmp/eplot20121012-19078-fw3txm-0" ****** + * | 3000 +* ++ |* | | * | 2500 ++* ++ | * | | * | 2000 ++ * ++ | ** | 1500 ++ **** ++ | * | | ** | 1000 ++ * ++ | * | | * | 500 ++ *** ++ | ************** | + + + + ********** + + + + 0 ++-------+-------+--------+--------+-----***************************++ 0 5 10 15 20 25 30 35 40 A: gnuplot is the definitive answer to your question. I am personally also a big fan of the google chart API, which can be accessed from the command line with the help of wget (or curl) to download a png file (and view with xview or something similar). I like this option because I find the charts to be slightly prettier (i.e. better antialiasing). A: Also, spark is a nice little bar graph in your shell. 
A: Another simpler/lighter alternative to gnuplot is ervy, a NodeJS based terminal charts tool. Supported types: scatter (XY points), bar, pie, bullet, donut and gauge. Usage examples with various options can be found on the projects GitHub repo A: You should use gnuplot and be sure to issue the command "set term dumb" after starting up. You can also give a row and column count. Here is the output from gnuplot if you issue "set term dumb 64 10" and then "plot sin(x)": 1 ++-----------****-----------+--***-------+------****--++ 0.6 *+ **+ * +** * sin(x)*******++ 0.2 +* * * ** ** * **++ 0 ++* ** * ** * ** *++ -0.4 ++** * ** ** * * *+ -0.8 ++ ** * + * ** + * +** +* -1 ++--****------+-------***---+----------****-----------++ -10 -5 0 5 10 It looks better at 79x24 (don't use the 80th column on an 80x24 display: some curses implementations don't always behave well around the last column). I'm using gnuplot v4, but this should work on slightly older or newer versions. A: I found a tool called ttyplot in homebrew. It's good. https://github.com/tenox7/ttyplot A: See also: asciichart (implemented in Node.js and ported to Python, Java, Go and Haskell) A: termplotlib (one of my projects) has picked up popularity lately, so perhaps this is helpful for some people. import termplotlib as tpl import numpy x = numpy.linspace(0, 2 * numpy.pi, 10) y = numpy.sin(x) fig = tpl.figure() fig.plot(x, y, label="data", width=50, height=15) fig.show() 1 +---------------------------------------+ 0.8 | ** ** | 0.6 | * ** data ******* | 0.4 | ** | 0.2 |* ** | 0 | ** | | * | -0.2 | ** ** | -0.4 | ** * | -0.6 | ** | -0.8 | **** ** | -1 +---------------------------------------+ 0 1 2 3 4 5 6 7 A: Another option I've just run across is bashplotlib. 
Here's an example run on (roughly) the same data as my eplot example: [$]> git shortlog -s -n | awk '{print $1}' | hist 33| o 32| o 30| o 28| o 27| o 25| o 23| o 22| o 20| o 18| o 16| o 15| o 13| o 11| o 10| o 8| o 6| o 5| o 3| o o o 1| o o o o o 0| o o o o o o o ---------------------- ----------------------- | Summary | ----------------------- | observations: 50 | | min value: 1.000000 | | mean : 519.140000 | |max value: 3207.000000| ----------------------- Adjusting the bins helps the resolution a bit: [$]> git shortlog -s -n | awk '{print $1}' | hist --nosummary --bins=40 18| o | o 17| o 16| o 15| o 14| o 13| o 12| o 11| o 10| o 9| o 8| o 7| o 6| o 5| o o 4| o o o 3| o o o o o 2| o o o o o 1| o o o o o o o 0| o o o o o o o o o o o o o | o o o o o o o o o o o o o -------------------------------------------------------------------------------- A: feedgnuplot is another front end to gnuplot, which handles piping in data. $ seq 5 | awk '{print 2*$1, $1*$1}' | feedgnuplot --lines --points --legend 0 "data 0" --title "Test plot" --y2 1 \ --terminal 'dumb 80,40' --exit Test plot 10 ++------+--------+-------+-------+-------+--------+-------+------*A 25 + + + + + + + + **#+ | : : : : : : data 0+**A*** | | : : : : : : :** # | 9 ++.......................................................**.##....| | : : : : : : ** :# | | : : : : : : ** # | | : : : : : :** ##: ++ 20 8 ++................................................A....#..........| | : : : : : **: # : | | : : : : : ** : ## : | | : : : : : ** :# : | | : : : : :** B : | 7 ++......................................**......##................| | : : : : ** : ## : : ++ 15 | : : : : ** : # : : | | : : : :** : ## : : | 6 ++..............................*A.......##.......................| | : : : ** : ##: : : | | : : : ** : # : : : | | : : :** : ## : : : ++ 10 5 ++......................**........##..............................| | : : ** : #B : : : | | : : ** : ## : : : : | | : :** : ## : : : : | 4 
++...............A.......###......................................| | : **: ##: : : : : | | : ** : ## : : : : : ++ 5 | : ** : ## : : : : : | | :** ##B# : : : : : | 3 ++.....**..####...................................................| | **#### : : : : : : | | **## : : : : : : : | B** + + + + + + + + 2 A+------+--------+-------+-------+-------+--------+-------+------++ 0 1 1.5 2 2.5 3 3.5 4 4.5 5 You can install it on Debian and Ubuntu by running sudo apt install feedgnuplot . A: Plots in a single line are really simple, and can help one see patterns of highs and lows. See also pysparklines. (Does anyone know of unicode slanting lines, which could be fit together to make line, not bar, plots ?) #!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import division import numpy as np __version__ = "2015-01-02 jan denis" #............................................................................... def onelineplot( x, chars=u"▁▂▃▄▅▆▇█", sep=" " ): """ numbers -> v simple one-line plots like f ▆ ▁ ▁ ▁ █ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ osc 47 ▄ ▁ █ ▇ ▄ ▆ ▅ ▇ ▇ ▇ ▇ ▇ ▄ ▃ ▃ ▁ ▃ ▂ rosenbrock f █ ▅ █ ▅ █ ▅ █ ▅ █ ▅ █ ▅ █ ▅ █ ▅ ▁ ▁ ▁ ▁ osc 58 ▂ ▁ ▃ ▂ ▄ ▃ ▅ ▄ ▆ ▅ ▇ ▆ █ ▇ ▇ ▃ ▃ ▇ rastrigin f █ █ █ █ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ ▁ osc 90 █ ▇ ▇ ▁ █ ▇ █ ▇ █ ▇ █ ▇ █ ▇ █ ▇ █ ▇ ackley Usage: astring = onelineplot( numbers [optional chars= sep= ]) In: x: a list / tuple / numpy 1d array of numbers chars: plot characters, default the 8 Unicode bars above sep: "" or " " between plot chars How it works: linscale x -> ints 0 1 2 3 ... -> chars ▁ ▂ ▃ ▄ ... 
See also: https://github.com/RedKrieg/pysparklines """ xlin = _linscale( x, to=[-.49, len(chars) - 1 + .49 ]) # or quartiles 0 - 25 - 50 - 75 - 100 xints = xlin.round().astype(int) assert xints.ndim == 1, xints.shape # todo: 2d return sep.join([ chars[j] for j in xints ]) def _linscale( x, from_=None, to=[0,1] ): """ scale x from_ -> to, default min, max -> 0, 1 """ x = np.asanyarray(x) m, M = from_ if from_ is not None \ else [np.nanmin(x), np.nanmax(x)] if m == M: return np.ones_like(x) * np.mean( to ) return (x - m) * (to[1] - to[0]) \ / (M - m) + to[0] #............................................................................... if __name__ == "__main__": # standalone test -- import sys if len(sys.argv) > 1: # numbers on the command line, may be $(cat myfile) x = map( float, sys.argv[1:] ) else: np.random.seed( 0 ) x = np.random.exponential( size=20 ) print onelineplot( x ) A: Here is my patch for eplot that adds a -T option for terminal output: --- eplot 2008-07-09 16:50:04.000000000 -0400 +++ eplot+ 2017-02-02 13:20:23.551353793 -0500 @@ -172,7 +172,10 @@ com=com+"set terminal postscript color;\n" @o["DoPDF"]=true - # ---- Specify a custom output file + when /^-T$|^--terminal$/ + com=com+"set terminal dumb;\n" + + # ---- Specify a custom output file when /^-o$|^--output$/ @o["OutputFileSpecified"]=checkOptArg(xargv,i) i=i+1 i=i+1 Using this you can run it as eplot -T to get ASCII-graphics result instead of a gnuplot window. A: Try gnuplot. It has very powerful graphing possibilities. 
It can output to your terminal in the following way: gnuplot> set terminal dumb Terminal type set to 'dumb' Options are 'feed 79 24' gnuplot> plot sin(x) 1 ++----------------**---------------+----**-----------+--------**-----++ + *+ * + * * + sin(x) ****** + 0.8 ++ * * * * * * ++ | * * * * * * | 0.6 ++ * * * * * * ++ * * * * * * * | 0.4 +* * * * * * * ++ |* * * * * * * | 0.2 +* * * * * * * ++ | * * * * * * * | 0 ++* * * * * * *++ | * * * * * * *| -0.2 ++ * * * * * * *+ | * * * * * * *| -0.4 ++ * * * * * * *+ | * * * * * * * -0.6 ++ * * * * * * ++ | * * * * * * | -0.8 ++ * * * * * * ++ + * * + * * + * * + -1 ++-----**---------+----------**----+---------------**+---------------++ -10 -5 0 5 10 A: The new kid on the block: YouPlot.
{ "language": "en", "url": "https://stackoverflow.com/questions/123378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "174" }
Q: IIS performance problem trying to implement an XMPP-like protocol We have a client that needs to get interactive messages from a server; the clients are distributed around the world behind all kinds of firewalls with all kinds of ports closed. The only thing we can rely on is HTTP port 80 (and HTTPS 443). The design is basically modeled after XMPP (the Jabber protocol), using our client and IIS. The client issues GET requests to a .NET Handler; the handler holds the request open for a while looking for messages. If any messages arrive, they are immediately sent to the client; if not, after a timeout the connection is closed with a "no-data" response. The client immediately reopens the communication. Well, theoretically. What's actually happening is first, IIS can't handle more than about 100 simultaneous requests - others are all queued, and there can be a several-minute lag between "connected" and IIS recognizing that the client called in. Second, about half the time the client times out without any response from the server (the client timeout is five minutes longer than the server's). POST always works. Other data served on the same web server works. Web services on the same server work. This is an out-of-the-box installation on Windows 2K3 Server. Is there a configuration option we're missing, or is there something else I should look at to address this? Thanks. A: I think you're hitting ASP.NET thread pool limits, rather than IIS ones. Look into creating an asynchronous HTTP handler (IHttpAsyncHandler), as when they block/wait they aren't tying up the thread pool (they use completion ports instead). Update: Came across this recently that seems to concur with my thinking: CodeProject: Scalable COMET Combined with ASP.NET A: If IIS doesn't fit your requirements, you should choose another web server such as Apache (with Mod_mono) or LightTPD. BTW, you can tunnel XMPP through HTTP using XMPP Over BOSH. No need to invent a custom protocol. 
A: Out of the box, Windows needs some tweaking. I had to implement a comet server in ASP.NET and ran into some silly defaults. After reading these links: * *IIS 7.0 503 errors with generic handler (.ashx) implementing IHttpAsyncHandler *http://blogs.technet.com/b/winserverperformance/archive/2008/07/25/tuning-windows-server-2008-for-php.aspx *http://smallvoid.com/article/winnt-tcpip-max-limit.html *http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.express.doc/info/exp/ae/rprf_plugin.html *http://support.microsoft.com/kb/820129 *http://msdn.microsoft.com/en-us/library/ee37705(BTS.10).aspx *http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx I came up with the following changes, which were made to our Windows 2k8 server. * *reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v MaxConnections /t REG_DWORD /d 1000000 /f *reg add HKLM\System\CurrentControlSet\Services\TcpIp\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f *reg add HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0 /v MaxConcurrentThreadsPerCPU /t REG_DWORD /d 0 /f *reg add HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0 /v MaxConcurrentRequestsPerCPU /t REG_DWORD /d 30000 /f *appcmd.exe set apppool "[app pool name]" /queueLength:65535 *appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000 *reg add HKLM\System\CurrentControlSet\Services\TcpIp\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f *reg add HKLM\System\CurrentControlSet\Services\TcpIp\Parameters /v MaxFreeTcbs /t REG_DWORD /d 2000 /f *reg add HKLM\System\CurrentControlSet\Services\TcpIp\Parameters /v MaxHashTableSize /t REG_DWORD /d 2048 /f *reg add HKLM\System\CurrentControlSet\Services\InetInfo\Parameters /v MaxPoolThreads /t REG_DWORD /d 80 /f *appcmd set config /section:processModel /requestQueueLimit:100000 /commit:MACHINE I don't know if all the changes were required or optimal, but with some quick testing against a test server, we 
achieved over 30k executing connections and 5k requests per second. Couldn't go further because I ran out of client machines to run the tests from. A: XMPP was never designed for high-performance applications. The messages must traverse the entire stack to the application layer, and there is a lot of XML parsing. Have you considered using some other standard besides XMPP?
{ "language": "en", "url": "https://stackoverflow.com/questions/123387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to unload an assembly from the primary AppDomain? I would like to know how to unload an assembly that is loaded into the main AppDomain. I have the following code: var assembly = Assembly.LoadFrom( FilePathHere ); I need/want to be able to unload this assembly when I am done. Thanks for your help. A: If you want to have temporary code which can be unloaded afterwards, depending on your needs the DynamicMethod class might do what you want. That doesn't give you classes, though. A: For .net versions core 3.0 and later: You can now unload assemblies. Note that appdomains are no longer available in .net core. Instead, you can create one or more AssemblyLoadContext, load your assemblies via that context, then unload that context. See AssemblyLoadContext, or this tutorial that simulates loading a plugin then unloading it. For .net versions before .net core 3, including netframework 4 and lower You can not unload an assembly from an appdomain. You can destroy appdomains, but once an assembly is loaded into an appdomain, it's there for the life of the appdomain. See Jason Zander's explanation of Why isn't there an Assembly.Unload method? If you are using 3.5, you can use the AddIn Framework to make it easier to manage/call into different AppDomains (which you can unload, unloading all the assemblies). If you are using versions before that, you need to create a new appdomain yourself to unload it. A: I also know this is very old, but may help someone who is having this issue! Here is one way I have found to do it! instead of using: var assembly = Assembly.LoadFrom( FilePathHere ); use this: var assembly = Assembly.Load( File.ReadAllBytes(FilePathHere)); This actually loads the "Contents" of the assembly file, instead of the file itself. Which means there is NOT a file lock placed on the assembly file! So now it can be copied over, deleted or upgraded without closing your application or trying to use a separate AppDomain or Marshaling! 
PROS: Very simple to fix with a one-liner of code! CONS: Cannot use AppDomain, Assembly.Location or Assembly.CodeBase. Now you just need to destroy any instances created on the assembly. For example: assembly = null; A: You can't unload an assembly without unloading the whole AppDomain. Here's why: * *You are running that code in the app domain. That means there are potentially call sites and call stacks with addresses in them that are expecting to keep working. *Say you did manage to track all handles and references to already running code by an assembly. Assuming you didn't ngen the code, once you successfully freed up the assembly, you have only freed up the metadata and IL. The JIT'd code is still allocated in the app domain loader heap (JIT'd methods are allocated sequentially in a buffer in the order in which they are called). *The final issue relates to code which has been loaded shared, otherwise more formally known as "domain neutral" (check out /shared on the ngen tool). In this mode, the code for an assembly is generated to be executed from any app domain (nothing hard wired). It is recommended that you design your application around the application domain boundary naturally, where unload is fully supported. A: You should load your temporary assemblies in another AppDomain, and when they are not in use you can unload that AppDomain. It's safe and fast. A: Here is a GOOD example of how to compile and run a DLL at run time and then unload all its resources: http://www.west-wind.com/presentations/dynamicCode/DynamicCode.htm A: I know it's old but this might help someone. You can load the file from a stream and release it. It worked for me. I found the solution HERE. Hope it helps.
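To make the .NET Core 3.0+ route described above concrete, here is a minimal sketch using a collectible AssemblyLoadContext; the plugin path is hypothetical, and note that unloading is cooperative rather than immediate:

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

// Sketch only: a collectible load context lets the runtime reclaim the
// assembly once nothing references the context or its types anymore.
class PluginLoadContext : AssemblyLoadContext
{
    public PluginLoadContext() : base(isCollectible: true) { }
}

static class Program
{
    static void Main()
    {
        var ctx = new PluginLoadContext();
        Assembly asm = ctx.LoadFromAssemblyPath(@"C:\plugins\MyPlugin.dll"); // path is made up

        // ... use types from asm here, but don't hold on to them afterwards ...

        ctx.Unload();                  // requests the unload
        GC.Collect();                  // the context is actually reclaimed only
        GC.WaitForPendingFinalizers(); // after all references to it are gone
    }
}
```

Unlike the AppDomain approach, this works in-process and does not require marshaling across domain boundaries.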
A: As an alternative, if the assembly was just loaded in the first place to check information about the assembly, like the public key, the better way would be to not load it, and rather check the information by loading just the AssemblyName first: AssemblyName an = AssemblyName.GetAssemblyName ("myfile.exe"); byte[] publicKey = an.GetPublicKey(); CultureInfo culture = an.CultureInfo; Version version = an.Version; EDIT If you need to reflect over the types in the assembly without getting the assembly into your app domain, you can use the Assembly.ReflectionOnlyLoadFrom method. This will allow you to look at the types in the assembly but not allow you to instantiate them, and will also not load the assembly into the AppDomain. Look at this example as an explanation: public void AssemblyLoadTest(string assemblyToLoad) { var initialAppDomainAssemblyCount = AppDomain.CurrentDomain.GetAssemblies().Count(); //4 Assembly.ReflectionOnlyLoad(assemblyToLoad); var reflectionOnlyAppDomainAssemblyCount = AppDomain.CurrentDomain.GetAssemblies().Count(); //4 //Shows that assembly is NOT loaded in to AppDomain with Assembly.ReflectionOnlyLoad Assert.AreEqual(initialAppDomainAssemblyCount, reflectionOnlyAppDomainAssemblyCount); // 4 == 4 Assembly.Load(assemblyToLoad); var loadAppDomainAssemblyCount = AppDomain.CurrentDomain.GetAssemblies().Count(); //5 //Shows that assembly is loaded in to AppDomain with Assembly.Load Assert.AreNotEqual(initialAppDomainAssemblyCount, loadAppDomainAssemblyCount); // 4 != 5 }
{ "language": "en", "url": "https://stackoverflow.com/questions/123391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: Rhino Mocks: Re-assign a new result for a method on a stub I know I can do this: IDateTimeFactory dtf = MockRepository.GenerateStub<IDateTimeFactory>(); dtf.Now = new DateTime(); DoStuff(dtf); // dtf.Now can be called an arbitrary number of times, will always return the same value dtf.Now = new DateTime()+new TimeSpan(0,1,0); // 1 minute later DoStuff(dtf); //ditto from above What if instead of IDateTimeFactory.Now being a property it is a method IDateTimeFactory.GetNow(), how do I do the same thing? As per Judah's suggestion below I have rewritten my SetDateTime helper method as follows: private void SetDateTime(DateTime dt) { Expect.Call(_now_factory.GetNow()).Repeat.Any(); LastCall.Do((Func<DateTime>)delegate() { return dt; }); } but it still throws "The result for ICurrentDateTimeFactory.GetNow(); has already been setup." errors. Plus it's still not going to work with a stub.... A: I know this is an old question, but thought I'd post an update for more recent Rhino Mocks versions. Based on the previous answers which use Do(), there is a slightly cleaner (IMO) way available if you are using AAA in Rhino Mocks (available from version 3.5+).
[Test] public void TestDoStuff() { var now = DateTime.Now; var dtf = MockRepository.GenerateStub<IDateTimeFactory>(); dtf .Stub(x => x.GetNow()) .Return(default(DateTime)) //need to set a dummy return value .WhenCalled(x => x.ReturnValue = now); //close over the now variable DoStuff(dtf); // dtf.Now can be called arbitrary number of times, will always return the same value now = now + new TimeSpan(0, 1, 0); // 1 minute later DoStuff(dtf); //ditto from above } private void DoStuff(IDateTimeFactory dtf) { Console.WriteLine(dtf.GetNow()); } A: George, Using your updated code, I got this to work: MockRepository mocks = new MockRepository(); [Test] public void Test() { IDateTimeFactory dtf = mocks.DynamicMock<IDateTimeFactory>(); DateTime desiredNowTime = DateTime.Now; using (mocks.Record()) { SetupResult.For(dtf.GetNow()).Do((Func<DateTime>)delegate { return desiredNowTime; }); } using (mocks.Playback()) { DoStuff(dtf); // Prints the current time desiredNowTime += TimeSpan.FromMinutes(1); // 1 minute later DoStuff(dtf); // Prints the time 1 minute from now } } void DoStuff(IDateTimeFactory factory) { DateTime time = factory.GetNow(); Console.WriteLine(time); } FWIW, I don't believe you can accomplish this using stubs; you need to use a mock instead. A: You can use Expect.Call to accomplish this. Here's an example using the record/playback model: using (mocks.Record()) { Expect.Call(s.GetSomething()).Return("ABC"); // 1st call will return ABC Expect.Call(s.GetSomething()).Return("XYZ"); // 2nd call will return XYZ } using (mocks.Playback()) { DoStuff(s); DoStuff(s); } A: Ok, so my first answer doesn't work for you because GetSomething may be called multiple times, and you don't know how many times. You're getting into some complex scenario here -- unknown number of method invocations, yet with different results after DoSomething is called -- I recommend breaking up your unit test to be simpler, or you'll have to have unit tests for your unit tests. 
:-) Failing that, here's how you can accomplish what you're trying to do: bool shouldReturnABC = true; using (mocks.Record()) { Expect.Call(s.GetSomething()).Repeat.Any(); LastCall.Do((Func<string>)delegate() { return shouldReturnABC ? "ABC" : "XYZ"; }); } using (mocks.Playback()) { DoStuff(s); shouldReturnABC = false; DoStuff(s); }
{ "language": "en", "url": "https://stackoverflow.com/questions/123394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Using jQuery to find the next table row Using jQuery, how do you bind a click event to a table cell (below, class="expand") that will change the image src (which is in the clicked cell - original will be plus.gif, alternating with minus.gif) and hide/show the row immediately below it based on whether that row has a class of hide. (show it if it has a class of "hide" and hide if it does not have a class of "hide"). I am flexible with changing ids and classes in the markup. Thanks Table rows <tr> <td class="expand"><img src="plus.gif"/></td> <td>Data1</td><td>Data2</td><td>Data3</td> </tr> <tr class="show hide"> <td> </td> <td>Data4</td><td>Data5</td><td>Data6</td> </tr> A: Nobody has any love for the ternary operator? :) I understand readability considerations, but for some reason it clicks for me to write it as: $(document).ready( function () { $(".expand").click(function() { $("img",this).attr("src", $("img",this) .attr("src")=="minus.gif" ? "plus.gif" : "minus.gif" ); $(this).parent().next().toggle(); }); }); ...and has the benefit of no extraneous classes. A: I had to solve this problem recently, but mine involved some nested tables, so I needed a more specific, safer version of javascript. My situation was a little different because I had contents of a td and wanted to toggle the next TR, but the concept remains the same. $(document).ready(function() { $('.expandButton').click(function() { $(this).closest('tr').next('tr.expandable').fadeToggle(); }); }); Closest grabs the nearest TR, in this case the first parent. You could add a CSS class on there if you want to get extremely specific. Then I specify to grab the next TR with a class of expandable, the target for this button. Then I just fadeToggle() it to toggle whether it is displayed or not. Specifying the selectors really helps narrow down what it will handle. 
A: You don't need the show and hide tags: $(document).ready(function(){ $('.expand').click(function() { if( $(this).hasClass('hidden') ) $('img', this).attr("src", "plus.gif"); else $('img', this).attr("src", "minus.gif"); $(this).toggleClass('hidden'); $(this).parent().next().toggle(); }); }); edit: Okay, I added the code for changing the image. That's just one way to do it. I added a class to the expand attribute as a tag when the row that follows is hidden and removed it when the row was shown. A: Try this... //this will bind the click event //put this in a $(document).ready or something $(".expand").click(expand_ClickEvent); //this is your event handler function expand_ClickEvent(){ //get the TR that you want to show/hide var TR = $('.expand').parent().next(); //check its class if (TR.hasClass('hide')){ TR.removeClass('hide'); //remove the hide class TR.addClass('show'); //change it to the show class TR.show(); //show the TR (you can use any jquery animation) //change the image URL //select the expand class and the img in it, then change its src attribute $('.expand img').attr('src', 'minus.gif'); } else { TR.removeClass('show'); //remove the show class TR.addClass('hide'); //change it to the hide class TR.hide(); //hide the TR (you can use any jquery animation) //change the image URL //select the expand class and the img in it, then change its src attribute $('.expand img').attr('src', 'plus.gif'); } } Hope this helps. A: This is how the images are set up in the HTML: <tr> <td colspan="2" align="center"> <input type="image" src="save.gif" id="saveButton" name="saveButton" style="visibility: collapse; display: none" onclick="ToggleFunction(false)"/> <input type="image" src="saveDisabled.jpg" id="saveButtonDisabled" name="saveButton" style="visibility: collapse; display: inline" onclick="ToggleFunction(true)"/> </td> </tr> Change the onClick event to your own function that's in JS to toggle between them.
The ToggleFunction then switches between the two: function ToggleFunction(seeSaveButton){ if(seeSaveButton){ $("#saveButton").attr("disabled", true) .attr("style", "visibility: collapse; display: none;"); $("#saveButtonDisabled").attr("disabled", true) .attr("style", "display: inline;"); } else { $("#saveButton").attr("disabled", false) .attr("style", "display: inline;"); $("#saveButtonDisabled") .attr("disabled", true) .attr("style", "visibility: collapse; display: none;"); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/123401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Image database build in Oracle - Is an e-commerce style interface workable? Question Can I build an image database/library that has an e-commerce style checkout system and a powerful search in Oracle/Java? Are there existing elements out there I should be aware of? Or, is this better in another dev environment like PHP/MySQL? Overview I am working on an image database/library dev team on the UI side of things, but I can't help but feel the team is tracking a bit off course on this one. Ordinarily, I would look to see this entirely within an open-source, license-free environment like a PHP/MySQL/Apache server environment, however strict IT approvals processes have limited the areas of exploration to Oracle as the DB component, with Java then being the likely programming environment. With that noted, I have never worked with either, and don't know how complicated it would be to put the following features together: * *keyword and metadata based search *e-commerce style checkout system (no payment methods or infrastructure required) *gallery groupings of images (edited by a central editor) *tag cloud generation and search *component to parse metadata from embedded metadata (in XMP) to the database *web-based back end for database management, maintenance of controlled keyword lists and administration of content I've uploaded a pdf of screenshots of the UI I've developed for the web side of things. UI Screenshots Sorry for the complicated Q, having a hard time with it myself. A: Seems your question is really a struggle between Oracle/Java and PHP/MySQL. The details you state are none too difficult to implement using either of these tool sets or using a dozen others that I could think of. If I am correct (only you could know), then this is a fabulous opportunity for you. You seem to have experience with the free/open tools, but none with the commercial/closed ones. Then take the opportunity to learn on your employer's dime.
A: I can't comment on whether Jave is apropriate, but you might look at the Oracle Application Express environment http://apex.oracle.com. This looks and sounds entirely within the scope that they aim for there. A: Thanks David and dacracot for your insights. Upon further review and consultation (plus some leveraging) of our IT department, considering that this app and DB would run on their own server, and apparently other areas of the corporation have approved the use of open server technologies like PHP, SQL and Apache, this will likely move to a PHP/SQL build for a few reasons, not the least of them being licensing costs of putting Oracle on a new server and the overall larger availability of PHP/SQL developers in the local dev community. I did adapt some of dacracot's advice however, and convinced admin to send me to some .NET training so I can better assist in managing our intranet. I will also keep the Oracle Application Express site in mind for further dev needs across our intranet. Thanks for your help.
{ "language": "en", "url": "https://stackoverflow.com/questions/123421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Installing TFS 2008 Workgroup Edition on Windows Server 2003 with SQL Server 2008 I ran into an issue when installing Team Foundation Server 2008 Workgroup Edition. I have a Windows Server 2003 SP2 machine that's practically new. I have installed SQL Server 2008 with almost all of the options. When I run through the TFS installation, it allows me to enter my SQL instance name and then goes through the verification process to ensure I have everything I need. However, I receive an error stating that A compatible version of SQL Server is not installed. I verified and I do have SQL Server 2008 running just fine. It's Standard edition and does have Full Text, Reporting, and Analysis services installed. The other errors I receive are with Full Text not being installed, running, or enabled. Any help is appreciated. A: To install Team Foundation Server 2008 against SQL Server 2008, you must use TFS 2008 Service Pack 1. However, currently Microsoft do not provide a "TFS with SP1" download - you must create your own slipstreamed installation by downloading the TFS 2008 media and then applying the service pack to the media before running the installer. For more information see my blog post here http://www.woodwardweb.com/vsts/creating_a_tfs.html Also note that TFS needs certain settings for its SQL Server. When installing a SQL instance for TFS I usually follow the guidance in the TFS Installation Guide pretty rigidly just to be sure I have everything set up right. You can download the latest copy of the TFS install guide here http://www.microsoft.com/downloads/details.aspx?FamilyID=FF12844F-398C-4FE9-8B0D-9E84181D9923&displaylang=en Good luck, Martin. A: You need the Tfs 2008 Sp1 for support for SQL Server 2008. See this post Link Hth., /Gert A: Martin - Thanks for the post. I found a cleaner version of your procedure and blogged about it at http://weblogs.asp.net/jgaylord/archive/2008/09/24/installing-team-foundation-server-2008-on-sql-server-2008.aspx.
A: There is actually an article in the TFS Installation Guide called "How to: Integrate the Installation of Team Foundation Server and Service Pack 1": * *Extract the TFS DVD to 'TFS2008' *Extract TFS SP1 to 'SP1' *Run this command from the directory they're both in (note: the target dir seems to need to be fully qualified): MSIEXEC /a TFS2008\AT\VS_SETUP.MSI /p SP1\TFS90sp1-KB949786.msp TARGETDIR=s:\software\msdn\servers\TFS2008WITHSP1 *Run the installer from TFS2008WITHSP1
{ "language": "en", "url": "https://stackoverflow.com/questions/123430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Progress Reports I'm not talking about the kind you get in college, but rather implementing progress reports on the job for developers. My thoughts on organizing a development team include encouraging and, to some extent, requiring regular progress updates where developers would report on what they did in the last hour or few hours and how long tasks took. Below I listed a few of the pros that come to my mind * *Allowing me to go back and see where I might have made a mistake and allow myself a starting point when working to solve a problem that I created. *Gives teams a good understanding of where they are on the project and regular updates *For future projects, the ability to go back and see how long a certain task took and create an accurate estimate *Encouraging a greater amount of communication among a team What I would not want to see happen is it become a way of breathing down a developer's back, and I think it could easily become a distraction if one felt pressured to submit updates every hour. * *What are your thoughts? *Pros and Cons. *Have you ever experienced this first hand? How did it make you feel? A: Usually, requiring status reports more frequently than once per day will get you a lot of Office Space TPS Report comments. Any benefit you will see in more project data will quickly be outweighed by low morale and general team malaise. Try asking for updates on a regular (maybe daily) basis. Don't ask for formal, written reports; that's your job as PM, to produce those for your boss. Developers have development work to do. Try not to burden them with managerial tasks. A: This is yet another example where project managers fail to understand their role. Scrum is not an answer, nor any other doctrine. Why on earth would you, in any organization and to better support or be part of a decision, need hourly reports? are your workers fish? do they have no more than 60 minutes of recollection, needing you to troll on by "hey Jeff... how is it going?"...
completely mind-exhausting line-of-thought-killer forceps-driven pause "wazup patcouch22?... whom I saw 59 minutes ago..." And what if you understood, in infinite detail, what went wrong with the last slip... will the exact same derailments happen on your next project? Even if they did, do you understand the robotization required to avoid all forms of slippage/error/progress? Be humans... be helping humans, for crying out loud! there are no miracle mathematically structured ways to achieve high levels of productivity... just heuristics. Read The Mythical Man Month and others... it's not so much about poor management techniques, it's about accidents and because we're dealing with humans. The best and most team-productivity-enhancing thing I've done (when I'm "just" a PM): keep my staff well fed, well slept, with regular schedules, and offer them my "ask me your dumbest question, I'll only answer IFFFF I'm 10000% sure of the answer". Shield them from the pressures above, solve for them the problems below, make sure they know you're there for punching-bag duty. A: I'd avoid status reports altogether, but if you must use them, make them no more frequent than weekly. Good developers are more like artists than laborers. They produce great work in creative spurts and not with clock-work regularity. If you require frequent status reports, they'll feel unnecessary pressure which will actually make them less happy, less creative, and ultimately less productive. A: We use twitter.com for team updates. I ask my team to tweet when they start a task, midway through a task, and when they finish the task and start a new one. This way: * *I know what they are up to fairly frequently and I don't need to barge into their office and always ask, 'What are you working on?' *If a developer goes silent for too long I can go and offer help
*Developers can ask for help easily without barging in on another developer *The character limit in Twitter ensures updates are short, and do not require a lot of time to create. We all set our accounts to private accounts to ensure no one outside of our group gets our tweets. We've been using it for the better part of two months... it has really opened up to me what my guys are doing without being intrusive. A: Every hour is too frequent. That many interruptions will decrease productivity, and increase developer frustration. I would suggest looking into the Scrum methodology; they have a "daily scrum" meeting every morning where you update the team on your progress the previous day and planned work for the current day. It has worked well for me, it might work for you. Scrum also includes the concept of story and task cards where you estimate time, and eventually come back to see how far off your estimates were. This gives you a "focus factor" that you can use to help increase the accuracy of future estimations. Check out this PDF Scrum and XP from the Trenches for a good read about it. A: Progress reports every few hours are overkill. If you're working using source control you can get a great deal of mileage out of keeping track of your checkins and setting a standard for your developers to comment on any commits/checkins that they do. In this way you're not badgering them (and incurring very expensive context switches) but you're allowing them to stay in their flow while still being able to monitor progress. Depending on how sophisticated your source control is, you can correlate tasks to commits/checkins, which gives additional granularity for keeping track of estimates. A: There are two things that you want to do. Daily Meetings All you want to do is ask two questions. * *What did you do yesterday? *What are you going to do today? Very quickly you will establish if the developers are making progress or if there are any issues that are causing delays.
Trying to get updates more regularly will prove to be overkill and will probably be perceived as micromanagement. Weekly Progress Reports Once a week, take half an hour to put together a simple report that covers the following * *Achievements *Assumptions *Dependencies *Issues *Resolutions It shouldn't take much effort to do this and it will give you a very good insight into how the project is tracking. It's also very effective in providing management or clients an overview of what's happening and what needs to be addressed. For a more comprehensive overview, visit the following links * *Daily Meetings *Progress Reports Cheers, Marty A: The scrum methodology handles this pretty well. You have short daily meetings to report progress and obstacles. It allows everyone to be caught up without being bogged down by the minutiae. A: Look up Scrum; it is an agile approach that defines everything you want to do and works great for our team (as well as many others I have read about). A: Agile Scrum actually enforces this. We are following the VSTS Scrum methodology and project template to track all Tasks/Bugs etc., and we can easily set a field for time reporting (which we are thinking of implementing soon), so that the final data will be very useful for the organization to assess people for appraisal. If they lack some expertise we can easily find that out with this close tracking. But the practicality of this is a big ?
A quick morning meeting (similar to scrum) can be helpful - if anyone is hung up it becomes apparent pretty quickly since they're saying the same thing each day. It also gives other people the opportunity to step up and offer to help, which you can always privately note if you wish or if you've got a boss that likes the idea of reviews. A: Everything about leading a team is scheduling, motivation, prioritization, and conflict management. I get my team together every Monday morning before we start work to chat about their work. We talk about what we accomplished the previous week, and what we're looking forward to getting done the upcoming week. On top of that, we each bring up something we did (usually code-related) that was really exciting in some way. Some piece of code that just worked; A napkin sketch of an idea for a new app; A new technology that could enrich the rest of the team; There's always something. I've found that on top of starting the week off with a gratifying list of accomplishments, it also is invigorating to think about what the future holds, and what awesome projects/accomplishments await. We work out logistics outside the meeting. Schedules, priorities are handled on an individual basis. Such meetings have actually turned out small things like Finisht.com and Twenis.com. It's been very cool, and the team I work with can get so excited about coding that I sometimes can't believe it.
{ "language": "en", "url": "https://stackoverflow.com/questions/123453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: debug JSP from eclipse Does anyone know of a good tool for debugging JSPs from within Eclipse? I'd like to be able to set and watch breakpoints, step through the Java code/tags, etc within Eclipse while the app is running (under JBoss in my case). Presumably, it's reasonably straightforward to debug the servlet class that's generated from a JSP, but it's also fairly unappealing. A: Within Eclipse, you can put breakpoints in your JSP file, step through the Java code/tags, etc. However, the only view you can use while debugging is the Variables view to inspect the value of any variable. And one more thing: you cannot see the value of an expression such as <%= response.encodeURL("ProcessLogin.jsp") %>, just the value of the variable response. A: If you have WTP installed, you can set breakpoints within a JSP and they work fine in a regular "remote debug" session. However, once you've stopped on a breakpoint, stepping through the code is nigh on impossible and finding whatever it is that you wish to inspect takes a lot of digging around in the "Variables" view. A: If you are having to use a debugger in a JSP, chances are very good that you are doing things in the JSP that you shouldn't be. I recommend that you think very hard about whether your current implementation is using good MVC design practice. JSPs really should be about presentation, which should rarely (if ever) require debugging. If you have certain logic constructs that you are having to implement in JSP, consider implementing them using a custom tag (which is easy to debug in an IDE), or do the processing in a controller servlet that presents the data in an easy-to-digest form for the JSP. A: Apparently, Eclipse has a troubleshooting page on this, though when I tried it I did get a 404 with it. Hopefully this can at least get you started in a good direction.
{ "language": "en", "url": "https://stackoverflow.com/questions/123462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Zabbix: is it possible to monitor arbitrary string variable? We are using Zabbix for services monitoring. There are some essential monitoring configured. I want to have timeline of version strings of my service along with this monitorings. That would give me opportunity to see that upgrading to this version altered overall error-count. Is it possible? A: Yes, it's possible. You can pass arbitrary data from your Zabbix agent to the Zabbix server by using "UserParameter" fields in zabbix_server.conf, i.e. agent configuration file. General syntax is: UserParameter=section[id], command For example, let's assume you want to monitor how many users are logged in. You would use: UserParameter=sys[num_users], who | wc -l (I assume you know how to configure the Zabbix server to receive this data, it's pretty straightforward - just create a new item, bind it to a template and connect a template to a server or server group). If you want to monitor some file for a specific string, just use grep, sed, cut, tr and other standard Unix tools. If you need more complex things, just write a shell script. A: Update to Igor's answer: UserParameter is declared client-side in zabbix_agentd.conf or zabbix_agent.conf (depending on whether you're using the daemon or inetd version), not zabbix_server.conf.
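For the original question (keeping a timeline of a service's version string), one approach along the lines above is to expose the version through a UserParameter and store it as a character/text item. A sketch of the agent-side configuration; the service binary and its --version flag are assumptions:

```ini
# zabbix_agentd.conf (client side) -- the binary path and --version flag are hypothetical
UserParameter=myservice.version,/usr/local/bin/myservice --version
```

On the server, create an item with key myservice.version and set its "Type of information" to Character (or Text) so Zabbix keeps a history of the string, letting you line up version changes against your error-count graphs.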
{ "language": "en", "url": "https://stackoverflow.com/questions/123480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Weird Output on SQL REPLACE I am using REPLACE in an SQL view to remove the spaces from a property number. The function is set up like this: REPLACE(pin, ' ', ''). On the green-screen the query looked fine. In anything else we get the hex values of the characters in the field. I am sure it is an encoding thing, but how do I fix it? Here is the statement I used to create the view: CREATE VIEW RLIC2GIS AS SELECT REPLACE(RCAPIN, ' ', '') AS RCAPIN13, RLICNO, RONAME, ROADR1, ROADR2, ROCITY, ROSTAT, ROZIP1, ROZIP2, RGRID, RRADR1, RRADR2, RANAME, RAADR1, RAADR2, RACITY, RASTAT, RAZIP1, RAZIP2, REGRES, RPENDI, RBLDGT, ROWNOC, RRCODE, RROOMS, RUNITS, RTUNIT, RPAID, RAMTPD, RMDYPD, RRFUSE, RNUMCP, RDATCP, RINSP, RCAUKY, RCAPIN, RAMTYR, RYREXP, RDELET, RVARIA, RMDYIN, RDTLKI, ROPHN1, ROPHN2, ROCOM1, ROCOM2, RAPHN1, RAPHN2, RACOM1, RACOM2, RNOTES FROM RLIC2 UPDATE: I posted the answer below. A: We ended up using CONCAT and SUBSTR to get the results we wanted. CREATE VIEW RLIC2GIS AS SELECT CONCAT(SUBSTR(RCAPIN,1,3), CONCAT(SUBSTR(RCAPIN,5,2), CONCAT(SUBSTR(RCAPIN,8,2), CONCAT(SUBSTR(RCAPIN,11,3), SUBSTR(RCAPIN,15,3))))) AS CAPIN13, RLICNO, RONAME, ROADR1, ROADR2, ROCITY, ROSTAT, ROZIP1, ROZIP2, RGRID, RRADR1, RRADR2, RANAME, RAADR1, RAADR2, RACITY, RASTAT, RAZIP1, RAZIP2, REGRES, RPENDI, RBLDGT, ROWNOC, RRCODE, RROOMS, RUNITS, RTUNIT, RPAID, RAMTPD, RMDYPD, RRFUSE, RNUMCP, RDATCP, RINSP, RCAUKY, RCAPIN, RAMTYR, RYREXP, RDELET, RVARIA, RMDYIN, RDTLKI, ROPHN1, ROPHN2, ROCOM1, ROCOM2, RAPHN1, RAPHN2, RACOM1, RACOM2, RNOTES FROM RLIC2 A: The problem here might be that what you think is the blank character in that field is actually some other unprintable character. You can use the following SQL to see what ASCII character is at the 4th position: SELECT ascii(substr(RCAPIN,4,1)) FROM YOUR-TABLE Then you would be able to use a replace for that character instead of the blank space: SELECT replace(RCAPIN, chr(9), '') FROM YOUR-TABLE A: Try using NULL rather than an empty string. i.e.
REPLACE(RCAPIN, ' ', NULL)
{ "language": "en", "url": "https://stackoverflow.com/questions/123489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Passing apache2 digest authentication information to a wsgi script run by mod_wsgi I've got the directive <VirtualHost *> <Location /> AuthType Digest AuthName "global" AuthDigestDomain / AuthUserFile /root/apache_users <Limit GET> Require valid-user </Limit> </Location> WSGIScriptAlias / /some/script.wsgi WSGIDaemonProcess mywsgi user=someuser group=somegroup processes=2 threads=25 WSGIProcessGroup mywsgi ServerName some.example.org </VirtualHost> I'd like to know in the /some/script.wsgi def application(environ, start_response): start_response('200 OK', [ ('Content-Type', 'text/plain'), ]) return ['Hello'] What user is logged in. How do I do that? A: Additional information about Apache/mod_wsgi and access, authentication and authorization mechanisms can be found in: http://code.google.com/p/modwsgi/wiki/AccessControlMechanisms The information isn't passed by default because doing so could leak password information to applications which maybe shouldn't get it. A: add WSGIPassAuthorization On: <VirtualHost *> <Location /> AuthType Digest AuthName "global" AuthDigestDomain / AuthUserFile /root/apache_users <Limit GET> Require valid-user </Limit> </Location> WSGIPassAuthorization On WSGIScriptAlias / /some/script.wsgi WSGIDaemonProcess mywsgi user=someuser group=somegroup processes=2 threads=25 WSGIProcessGroup mywsgi ServerName some.example.org </VirtualHost> Then just read environ['REMOTE_USER']: def application(environ, start_response): start_response('200 OK', [ ('Content-Type', 'text/plain'), ]) return ['Hello %s' % environ['REMOTE_USER']] More information at mod_wsgi documentation.
{ "language": "en", "url": "https://stackoverflow.com/questions/123499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I take a photo with my iPhone app? I'm writing an iPhone app with Cocoa in Xcode. I can't find any tutorials or sample code that shows how to take photos with the built-in camera. How do I do this? Where can I find good info? Thanks! A: The UIImagePickerController class lets you take pictures or choose them from the photo library. Specify the source type as UIImagePickerControllerSourceTypeCamera. See also this question previously asked: Access the camera with iPhone SDK A: Just copy and paste the following code into your project to get fully implemented functionality. Here, takePhoto and chooseFromLibrary are my own method names, which will be called on button touch. Make sure to connect the appropriate buttons' outlets to these methods. -(IBAction)takePhoto:(id)sender { UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init]; if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) { [imagePickerController setSourceType:UIImagePickerControllerSourceTypeCamera]; } // image picker needs a delegate [imagePickerController setDelegate:self]; // Place image picker on the screen [self presentModalViewController:imagePickerController animated:YES]; } -(IBAction)chooseFromLibrary:(id)sender { UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init]; [imagePickerController setSourceType:UIImagePickerControllerSourceTypePhotoLibrary]; // image picker needs a delegate so we can respond to its messages [imagePickerController setDelegate:self]; // Place image picker on the screen [self presentModalViewController:imagePickerController animated:YES]; } // delegate method will be called after picking a photo either from camera or library - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { [self dismissModalViewControllerAnimated:YES]; UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
[myImageView setImage:image]; // "myImageView" is the name of any UIImageView. } A: Use UIImagePickerController. There is a good tutorial on this here. http://www.zimbio.com/iPhone/articles/1109/Picking+Images+iPhone+SDK+UIImagePickerController You should set the source type to UIImagePickerControllerSourceTypeCamera or UIImagePickerControllerSourceTypePhotoLibrary. Note that these two types result in very different displays on the screen. You should test both carefully. In particular, if you are nesting the UIImagePickerController inside a UINavigationController, you can end up with multiple navigation bars and other weird effects if you are not careful. See also this thread A: The answer posted by @WQS works fine, but contains some methods that are deprecated as of iOS 6. Here is the updated answer for iOS 6 & above: -(void)takePhoto { UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init]; if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) { [imagePickerController setSourceType:UIImagePickerControllerSourceTypeCamera]; } // image picker needs a delegate [imagePickerController setDelegate:self]; // Place image picker on the screen [self presentViewController:imagePickerController animated:YES completion:nil]; } -(void)chooseFromLibrary { UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init]; [imagePickerController setSourceType:UIImagePickerControllerSourceTypePhotoLibrary]; // image picker needs a delegate so we can respond to its messages [imagePickerController setDelegate:self]; // Place image picker on the screen [self presentViewController:imagePickerController animated:YES completion:nil]; } // delegate method will be called after picking a photo either from camera or library - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { [self dismissViewControllerAnimated:YES completion:nil]; UIImage *image =
[info objectForKey:UIImagePickerControllerOriginalImage]; [myImageView setImage:image]; // "myImageView" is the name of any UIImageView. } Don't forget to add this to your view controller's .h: @interface myVC<UINavigationControllerDelegate, UIImagePickerControllerDelegate> A: Here is the code that I used to take a picture in my app: - (IBAction)takephoto:(id)sender { picker = [[UIImagePickerController alloc] init]; picker.delegate = self; [picker setSourceType:UIImagePickerControllerSourceTypeCamera]; [self presentViewController:picker animated:YES completion:NULL]; } -(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { img = [info objectForKey:@"UIImagePickerControllerOriginalImage"]; [imageview setImage:img]; [self dismissViewControllerAnimated:YES completion:NULL]; } If you want to retake the picture, simply add this function: -(void)imagePickerControllerDidCancel:(UIImagePickerController *)picker { [self dismissViewControllerAnimated:YES completion:NULL]; }
{ "language": "en", "url": "https://stackoverflow.com/questions/123503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Finding pixels per inch in wxDC In wxWidgets, how can you find the pixels per inch on a wxDC? I'd like to be able to scale things by a real world number like inches. That often makes it easier to use the same code for printing to the screen and the printer. A: does this help? (from the manual) wxDC::GetPPI wxSize GetPPI() const Returns the resolution of the device in pixels per inch. A: ...or wxDC::GetSizeMM which return the horizontal and vertical resolution in millimetres.
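Combining the two answers: once you have the device's PPI from wxDC::GetPPI, scaling a real-world length is a single multiplication. The helper below is an illustrative sketch (the name InchesToPixels is made up); the point is that the same drawing code then works for a screen DC (~96 PPI) and a printer DC (600+ PPI):

```cpp
#include <cassert>
#include <cmath>

// Convert a length in inches to device pixels, given the device's
// pixels-per-inch.  With wxWidgets you would obtain the PPI from the
// DC itself, e.g.
//
//   wxSize ppi = dc.GetPPI();
//   int w = InchesToPixels(2.5, ppi.GetWidth());   // a 2.5in-wide box
//
// passing ppi.GetWidth() / ppi.GetHeight() for horizontal / vertical
// lengths respectively.
int InchesToPixels(double inches, int ppi)
{
    return static_cast<int>(std::lround(inches * ppi));
}
```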
{ "language": "en", "url": "https://stackoverflow.com/questions/123504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# Compiler Incorrectly Optimizes Code I have an ASP.NET application running on a remote web server and I just started getting this error: Method not found: 'Void System.Collections.Generic.ICollection`1..ctor()'. I disassembled the code in the DLL and it seems like the compiler is incorrectly optimizing the code. (Note that Set is a class that implements a set of unique objects. It inherits from IEnumerable.) This line: Set<int> set = new Set<int>(); is compiled into this line: Set<int> set = (Set<int>) new ICollection<CalendarModule>(); The CalendarModule class is a totally unrelated class!! Has anyone ever noticed .NET incorrectly compiling code like this before? Update #1: This problem seems to be introduced by Microsoft's ILMerge tool. We are currently investigating how to overcome it. Update #2: We found two ways to solve this problem so far. We don't quite understand what the underlying problem is, but both of these fix it: * *Turn off optimization. *Merge the assemblies with ILMerge on a different machine. So we are left wondering if the build machine is misconfigured somehow (which is strange considering that we have been using the machine to build releases for over a year now) or if it is some other problem. A: Ahh, ILMerge - that extra info in your question really helps with your problem. While I wouldn't ever expect the .NET compiler to fail in this way, I would expect to occasionally see this sort of thing with ILMerge (given what it's doing). My guess is that two of your assemblies are using the same optimisation 'trick', and once merged you get the conflict. Have you raised the bug with Microsoft? A workaround in the meantime is to recompile the assemblies from source as a single assembly, saving the need for ILMerge. As the csproj files are just XML lists they're basically easy to merge, and you could automate that as an extra MSBuild step. A: Are you sure that the assembly you're looking at was actually generated from the source code in question?
Are you able to reproduce this problem with a small test case? Edit: if you're using Reflector, it's possible that the MSIL to C# conversion isn't correct -- Reflector isn't always 100% accurate at decompiling. What does the MSIL look like? Edit 2: Hmm... I just realized that it can't be Reflector at fault or you wouldn't have gotten that error message at runtime. A: This is more likely to be an issue with the reflection tool than with the .Net compilation. The error you're getting - a constructor not found during remoting is most likely to be a serialisation issue (all serialisable classes need a parameterless constructor). The code found from your reflection tool is more likely to throw a typecast exception. A: I agree with both Curt and Beds; this sounds like something is seriously wrong. The optimizer has worked for all of us and no such bugs have been reported (that I know of) - could it be that you are, in fact, doing something wrong? Sidenote: I'd also like to point out System.Collections.Generic.HashSet<T> which is in .Net fx 3.5 and does exactly what a Set<> class should. A: Was code recently deployed to that server? Could someone have pushed a build without your knowledge? Can you go to source control, pull the latest, and duplicate the issue? At this point, with the given information, I doubt it's the compiler. A: Ouch. If this really is ILMerge at fault, please keep this topic up-to-date with your findings -- I use ILMerge as a key step in constructing a COM interop assembly.
{ "language": "en", "url": "https://stackoverflow.com/questions/123506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to use .NET 3.0 with Visual Studio 2005? My Google-fu is failing me on this question. I have a coworker who has Visual Studio 2005 on his machine. I have Visual Studio 2008. He wants to open a project I wrote in C# 3.0, and we've gotten that far, but VS2005 barfs on the 3.0 code, like var. He has the 3.0 and 3.5 frameworks installed, as well as the Visual Studio 2005 Extensions for Windows Workflow. What else does he need? Or are we pursuing a lost cause, a wild goose chase spurred by my thinking that I heard this was possible somewhere one time? Please don't suggest he install VS2008 or the Express edition. That simply isn't possible at this time. :( A: So far as I understand it, this isn't possible. If you weren't using the new C# 3.0 code features, he should be able to work with a project created in VS2008 (and compile it against the 2.0 framework), but I don't think the 2005 compiler is ever going to be able to cope with the new syntax. A: You can recreate the project file in vs2005 and then update the headers on the files to vs2005 and you are back in business. Have a look at Rick Strahls Blog for more details on how its done. Also worth looking at the project converter in Visual Studio 2005/2008 Interoperability You may also need the Visual Studio 2005 extensions for .Net 3.0 to be installed. WWF Extensions A: The IDE itself may not support the 3.0 functionality. If you can live without the 3.0 features you can compile to 2.0 which he should be able to run ok.
{ "language": "en", "url": "https://stackoverflow.com/questions/123524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I place HTML content above a Flash movie? A site I'm working on has Flash headers (using swfobject to embed them). Now I'm required to code in a bit of HTML that's supposed to overlap the Flash movie. I've tried setting z-index on the Flash element's container and the (absolutely positioned) div but it keeps "vanishing" behind the Flash movie. I'm hoping for a CSS solution, but if there's a bit of JS magic that will do the trick, I'm up for it. Update: Thanks, setting wmode to "transparent" mostly fixed it. Only Safari/Mac still hid the div behind the flash on first show. When I'd switch to another app and back it would be in front. I was able to fix this by setting the div's initial styles to display: none; and making it visible via JS half a second after the page has loaded. A: Follow-up note: As you found in your update, getting HTML to display on top of Flash is currently a finicky proposition, and even with the JS magic you found you should expect that the Flash will block out your HTML for some viewers using off-browsers, older versions, and so on. If reaching an arbitrarily large browsing audience is important to you (mobile devices, for example), then redesigning your content to avoid the overlap may save you headaches in the long run. A: Make sure the "wmode" parameter is set to "transparent" or "opaque," but NOT the default, "windowed"... then you should be able to use CSS z-index. A: I would like to add that you have to remember to set the WMODE parameter ("transparent") in both the OBJECT and EMBED tags!
Follow the link for details: http://kb2.adobe.com/cps/142/tn_14201.html A: Use code in the following style; it works in Firefox and Chrome: <object id='myId' width='700' height='500'> <param name='movie' value='images/ann/$imagename' /> <param name='wmode' value='transparent' /> <!--[if !IE]>--> <object type='application/x-shockwave-flash' data='images/ann/$imagename' width='700' height='500' wmode='transparent'> <!--<![endif]--> <div> <h1>Please download flash player</h1> <p><a href='http://www.adobe.com/go/getflashplayer'><img src='http://www.adobe.com/images/shared/download_buttons/get_flash_player.gif' alt='Get Adobe Flash player' /></a></p> </div> <!--[if !IE]>--> </object> <!--<![endif]--> </object> A: Set this Flash variable like this: s1.addParam("wmode","transparent"); then in the div tag use this style: style="z-index: inherit;" The problem will be solved. A: Like Steve Paulo said, then comes the fun part when the HTML that's sitting on top of your Flash is calling more Flash... Oh the fun we had with that one, which involved setting the z-index to actually be lower, to account for Flash thinking it's the bee's knees and therefore must always be on top.
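The Safari/Mac workaround described in the question's update (initially display: none, then show via JS shortly after load) can be sketched like this; the element id, helper name, and 500ms delay are illustrative, and wmode is assumed to already be "transparent":

```javascript
// Keep the overlay hidden in the markup (style="display: none"), then
// reveal it shortly after load so Safari/Mac paints it above the
// transparent-wmode Flash movie instead of behind it.
function revealOverlay(el) {
  el.style.display = 'block';
  return el;
}

// In the page:
//   window.onload = function () {
//     setTimeout(function () {
//       revealOverlay(document.getElementById('overlay'));
//     }, 500); // roughly half a second, per the update above
//   };
```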
{ "language": "en", "url": "https://stackoverflow.com/questions/123529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Washed out user interface - is there any way to correct for it? This is an interesting conundrum. We have a WPF app that has a Vista-like theme using soft pastels, Aero control templates, etc. What we've noticed is that this UI appears incredibly washed out on low quality LCDs. Is there a way to boost color saturation application-wide or are we at the mercy of bad monitors? Can we even tell, programmatically if we're being displayed on a low quality monitor? EDIT: Basically, these answers are what I wanted someone to confirm, without me leading people to say them! It was a management decision higher than me to go with this appearance and I needed some corroboration. Thanks everyone! A: You chose a bad palette. Do some work on the UI; introduce more natural contrast. You wouldn't want to add programming to work around the bad palette choice, even if you could. Just change the colors. A: I am not sure if WPF allows you to do anything, but my guess is that you can't directly control a user's monitor. You can get things about the user's computer, namely bit depth, but to adjust on-the-fly graphical information would be hugely expensive (processor-wise). You could write a routine that does it - changing the color of the graphics or some such thing - but why? It's the client machine - you really should program with the idea that you have no control over it. If it is washed out on their screens, then they need better hardware, or they need to adjust the brightness/contrast on their monitors correctly. It's basically out of your realm of control. A: Going off what the previous two said, here's where an understanding of color theory can come in handy. There's nothing you can do to control the saturation or hue of people's monitors; some folks might be using your app in grayscale, for all you know. As such, it's important to start with a well-chosen, versatile set of colors and shades.
A general scheme that encompasses as many different setups as possible is a good starting point for a UI. A: Go and check out two screencasts at: Mark Miller on The Science of a Great User Experience Part 1 Mark Miller on The Science of a Great User Experience Part 2 There is some information on colours and contrasts for UI that might be of some help, plus a lot of other good information. A: A wild idea would be to implement a saturation shader and set it on the window :) That way the user can control the saturation himself! But, like I said... a wild idea, probably not a good one!
{ "language": "en", "url": "https://stackoverflow.com/questions/123537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Modifying Existing .NET Assemblies Is there a way to modify existing .NET assemblies without resorting to 3rd party tools? I know that PostSharp makes this possible but I find it incredibly wasteful that the developer of PostSharp basically had to rewrite the functionality of the whole System.Reflection namespace in order to make existing assemblies modifiable. System.Reflection.Emit only allows the creation of new, dynamic assemblies. However, all the builder classes used here inherit from the basic reflection classes (e.g. TypeBuilder inherits from System.Type). Unfortunately, there doesn't seem to be a way to coerce an existing, dynamically loaded type into a type builder. At least, there's no official, supported way. So what about unsupported? Does anyone know of backdoors that allow loading existing assemblies or types into such builder classes? Mind, I am not searching for ways to modify the current assembly (this may even be an unreasonable request) but just to modify existing assemblies loaded from disk. I fear there's no such thing but I'd like to ask anyway. In the worst case, one would have to resort to ildasm.exe to disassemble the code and then to ilasm.exe for reassembly, but there's no toolchain (read: IL reader) contained in .NET to work with IL data (or is there?). /EDIT: I've got no specific use case. I'm just interested in a general-purpose solution because patching existing assemblies is quite a common task. Take obfuscators for example, or profilers, or AOP libraries (yes, the latter can be implemented differently). As I've said, it seems incredibly wasteful to be forced to rewrite large parts of the already existing infrastructure in System.Reflection. @Wedge: You're right. However, there's no specific use case here. I've modified the original question to reflect this.
My interest was sparked by another question where the asker wanted to know how he could inject the instructions pop and ret at the end of every method in order to keep Lutz Roeder's Reflector from reengineering the (VB or C#) source code. Now, this scenario can be realized with a number of tools, e.g. PostSharp mentioned above and the Reflexil plugin for Reflector, that, in turn, uses the Cecil library. All in all, I'm just not satisfied with the .NET framework. @Joel: Yes, I'm aware of this limitation. Thanks anyway for pointing it out, since it's important. @marxidad: This seems like the only feasible approach. However, this would mean that you'd still have to recreate the complete assembly using the builder classes, right? I.e. you'd have to walk over the whole assembly manually. Hmm, I'll look into that. A: It would help if you could provide a more specific use case, there are probably better ways to fix your problem than this. In .NET 3.5 it's possible to add extension methods to existing framework classes, perhaps that is enough? A: One important point: if the assembly is signed, any changes will fail and you'll end up with a dud. A: Mono.Cecil also allows you to remove the strong name from a given assembly and save it back as an unsigned assembly. Once you remove the strong name from the assembly, you can just modify the IL of your target method and use the assembly as you would any other assembly. Here's the link for removing the strong name with Cecil: http://groups.google.com/group/mono-cecil/browse_thread/thread/3cc4ac0038c99380/b8ee62b03b56715d?lnk=gst&q=strong+named#b8ee62b03b56715d Once you've removed the strong name, you can pretty much do whatever you want with the assembly. Enjoy! A: You can use MethodInfo.GetMethodBody().GetILAsByteArray(), modify that, and then plug it back into MethodBuilder.CreateMethodBody(). A: .NET Framework assemblies are signed and just as Joel Coehoorn said you'll get a dud. 
A: You could load the assembly using IronRuby, and mix in all the functionality you can dream of.
{ "language": "en", "url": "https://stackoverflow.com/questions/123540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: INSERT INTO a temp table, and have an IDENTITY field created, without first declaring the temp table? I need to select a bunch of data into a temp table to then do some secondary calculations; To help make it work more efficiently, I would like to have an IDENTITY column on that table. I know I could declare the table first with an identity, then insert the rest of the data into it, but is there a way to do it in 1 step? A: Oh ye of little faith: SELECT *, IDENTITY( int ) AS idcol INTO #newtable FROM oldtable http://msdn.microsoft.com/en-us/library/aa933208(SQL.80).aspx A: You commented: not working if oldtable has an identity column. I think that's your answer. The #newtable gets an identity column from the oldtable automatically. Run the next statements: create table oldtable (id int not null identity(1,1), v varchar(10) ) select * into #newtable from oldtable use tempdb GO sp_help #newtable It shows you that #newtable does have the identity column. If you don't want the identity column, try this at creation of #newtable: select id + 1 - 1 as nid, v, IDENTITY( int ) as id into #newtable from oldtable A: Good Question & Matt's was a good answer. 
To expand on the syntax a little: if the oldtable has an identity, a user could run the following: SELECT col1, col2, IDENTITY( int ) AS idcol INTO #newtable FROM oldtable That would be if the oldtable was scripted something like this: CREATE TABLE [dbo].[oldtable] ( [oldtableID] [numeric](18, 0) IDENTITY(1,1) NOT NULL, [col1] [nvarchar](50) NULL, [col2] [numeric](18, 0) NULL, ) A: If you want to include the column that is the current identity, you can still do that, but you have to explicitly list the columns and cast the current identity to an int (assuming it is one now), like so: select cast (CurrentID as int) as CurrentID, SomeOtherField, identity(int) as TempID into #temp from myserver.dbo.mytable A: To make things efficient, you need to declare one of the columns as a primary key: ALTER TABLE #mytable ADD PRIMARY KEY(KeyColumn) That won't take a variable for the column name. Trust me, you are MUCH better off doing a CREATE TABLE #myTable (or possibly a DECLARE @myTable TABLE), which allows you to set IDENTITY and PRIMARY KEY directly.
{ "language": "en", "url": "https://stackoverflow.com/questions/123557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: SQL Server 2005: T-SQL to temporarily disable a trigger Is it possible to disable a trigger for a batch of commands and then enable it when the batch is done? I'm sure I could drop the trigger and re-add it but I was wondering if there was another way. A: DISABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL } ON { object_name | DATABASE | ALL SERVER } [ ; ] http://msdn.microsoft.com/en-us/library/ms189748(SQL.90).aspx followed by the inverse: ENABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL } ON { object_name | DATABASE | ALL SERVER } [ ; ] http://msdn.microsoft.com/en-us/library/ms182706(SQL.90).aspx A: Another approach is to effectively disable the trigger without actually disabling it, using an additional state variable that is incorporated into the trigger. create trigger [SomeSchema].[SomeTableIsEditableTrigger] ON [SomeSchema].[SomeTable] for insert, update, delete as declare @isTableTriggerEnabled bit; exec usp_IsTableTriggerEnabled -- Have to use USP instead of UFN for access to #temp @pTriggerProcedureIdOpt = @@procid, @poIsTableTriggerEnabled = @isTableTriggerEnabled out; if (@isTableTriggerEnabled = 0) return; -- Rest of existing trigger go For the state variable one could read some type of lock control record in a table (best if limited to the context of the current session), use CONTEXT_INFO(), or use the presence of a particular temp table name (which is already session scope limited): create proc [usp_IsTableTriggerEnabled] @pTriggerProcedureIdOpt bigint = null, -- Either provide this @pTableNameOpt varchar(300) = null, -- or this @poIsTableTriggerEnabled bit = null out begin set @poIsTableTriggerEnabled = 1; -- default return value (ensure not null) -- Allow a particular session to disable all triggers (since local -- temp tables are session scope limited). 
-- if (object_id('tempdb..#Common_DisableTableTriggers') is not null) begin set @poIsTableTriggerEnabled = 0; return; end -- Resolve table name if given trigger procedure id instead of table name. -- Google: "How to get the table name in the trigger definition" -- set @pTableNameOpt = coalesce( @pTableNameOpt, (select object_schema_name(parent_id) + '.' + object_name(parent_id) as tablename from sys.triggers where object_id = @pTriggerProcedureIdOpt) ); -- Else decide based on logic involving @pTableNameOpt and possibly current session end Then to disable all triggers: select 1 as A into #Common_DisableTableTriggers; -- do work drop table #Common_DisableTableTriggers; -- or close connection A potentially major downside is that the trigger is permanently slowed down depending on the complexity of accessing of the state variable. Edit: Adding a reference to this amazingly similar 2008 post by Samuel Vanga. A: ALTER TABLE table_name DISABLE TRIGGER TRIGGER_NAME -- Here your SQL query ALTER TABLE table_name ENABLE TRIGGER TRIGGER_NAME A: Sometimes to populate an empty database from external data source or debug a problem in the database I need to disable ALL triggers and constraints. To do so I use the following code: To disable all constraints and triggers: sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all" sp_msforeachtable "ALTER TABLE ? DISABLE TRIGGER all" To enable all constraints and triggers: exec sp_msforeachtable @command1="print '?'", @command2="ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all" sp_msforeachtable @command1="print '?'", @command2="ALTER TABLE ? ENABLE TRIGGER all" I found that solution some time ago on SQLServerCentral, but needed to modify the enable constraints part as the original one did not work fully A: Not the best answer for batch programming, but for others finding this question in search of a quick and easy way to temporarily disable a trigger, this can be accomplished in SQL Server Management Studio. 
* *Expand the triggers folder on the table *Right-click the trigger *Disable Follow the same process to re-enable. A: However, it is almost always a bad idea to do this. You will mess with the integrity of the database. Do not do it without considering the ramifications and checking with the DBAs if you have them. If you do follow Matt's code, be sure to remember to turn the trigger back on. And remember the trigger is disabled for everyone inserting, updating or deleting from the table while it is turned off, not just for your process, so if it must be done, then do it during the hours when the database is least active (and preferably in single user mode). If you need to do this to import a large amount of data, then consider that bulk insert does not fire the triggers. But then your process after the bulk insert will have to fix up any data integrity problems you introduce by not firing the triggers. A: To extend Matt's answer, here is an example given on MSDN. USE AdventureWorks; GO DISABLE TRIGGER Person.uAddress ON Person.Address; GO ENABLE TRIGGER Person.uAddress ON Person.Address; GO
{ "language": "en", "url": "https://stackoverflow.com/questions/123558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: How to validate phone numbers using regex I'm trying to put together a comprehensive regex to validate phone numbers. Ideally it would handle international formats, but it must handle US formats, including the following: * *1-234-567-8901 *1-234-567-8901 x1234 *1-234-567-8901 ext1234 *1 (234) 567-8901 *1.234.567.8901 *1/234/567/8901 *12345678901 I'll answer with my current attempt, but I'm hoping somebody has something better and/or more elegant. A: You'll have a hard time dealing with international numbers with a single/simple regex, see this post on the difficulties of international (and even north american) phone numbers. You'll want to parse the first few digits to determine what the country code is, then act differently based on the country. Beyond that - the list you gave does not include another common US format - leaving off the initial 1. Most cell phones in the US don't require it, and it'll start to baffle the younger generation unless they've dialed internationally. You've correctly identified that it's a tricky problem... -Adam A: /^(?:(?:\(?(?:00|\+)([1-4]\d\d|[1-9]\d+)\)?)[\-\.\ \\\/]?)?((?:\(?\d{1,}\)?[\-\.\ \\\/]?)+)(?:[\-\.\ \\\/]?(?:#|ext\.?|extension|x)[\-\.\ \\\/]?(\d+))?$/i This matches: - (+351) 282 43 50 50 - 90191919908 - 555-8909 - 001 6867684 - 001 6867684x1 - 1 (234) 567-8901 - 1-234-567-8901 x1234 - 1-234-567-8901 ext1234 - 1-234 567.89/01 ext.1234 - 1(234)5678901x1234 - (123)8575973 - (0055)(123)8575973 On $n, it saves: * *Country indicator *Phone number *Extension You can test it on https://regex101.com/r/kFzb1s/1 A: After reading through these answers, it looks like there wasn't a straightforward regular expression that can parse through a bunch of text and pull out phone numbers in any format (including international with and without the plus sign). Here's what I used for a client project recently, where we had to convert all phone numbers in any format to tel: links. 
So far, it's been working with everything they've thrown at it, but if errors come up, I'll update this answer. Regex: /(\+*\d{1,})*([ |\(])*(\d{3})[^\d]*(\d{3})[^\d]*(\d{4})/ PHP function to replace all phone numbers with tel: links (in case anyone is curious): function phoneToTel($number) { $return = preg_replace('/(\+*\d{1,})*([ |\(])*(\d{3})[^\d]*(\d{3})[^\d]*(\d{4})/', '<a href="tel:$1$3$4$5">$1 ($3) $4-$5</a>', $number); // includes international return $return; } A: Although the answer to strip all whitespace is neat, it doesn't really solve the problem that's posed, which is to find a regex. Take, for instance, my test script that downloads a web page and extracts all phone numbers using the regex. Since you'd need a regex anyway, you might as well have the regex do all the work. I came up with this: 1?\W*([2-9][0-8][0-9])\W*([2-9][0-9]{2})\W*([0-9]{4})(\se?x?t?(\d*))? Here's a perl script to test it. When you match, $1 contains the area code, $2 and $3 contain the phone number, and $5 contains the extension. My test script downloads a file from the internet and prints all the phone numbers in it. #!/usr/bin/perl my $us_phone_regex = '1?\W*([2-9][0-8][0-9])\W*([2-9][0-9]{2})\W*([0-9]{4})(\se?x?t?(\d*))?'; my @tests = ( "1-234-567-8901", "1-234-567-8901 x1234", "1-234-567-8901 ext1234", "1 (234) 567-8901", "1.234.567.8901", "1/234/567/8901", "12345678901", "not a phone number" ); foreach my $num (@tests) { if( $num =~ m/$us_phone_regex/ ) { print "match [$1-$2-$3]\n" if not defined $4; print "match [$1-$2-$3 $5]\n" if defined $4; } else { print "no match [$num]\n"; } } # # Extract all phone numbers from an arbitrary file. # my $external_filename = 'http://web.textfiles.com/ezines/PHREAKSANDGEEKS/PnG-spring05.txt'; my @external_file = `curl $external_filename`; foreach my $line (@external_file) { if( $line =~ m/$us_phone_regex/ ) { print "match $1 $2 $3\n"; } } Edit: You can change \W* to \s*\W?\s* in the regex to tighten it up a bit. 
I wasn't thinking of the regex in terms of, say, validating user input on a form when I wrote it, but this change makes it possible to use the regex for that purpose. '1?\s*\W?\s*([2-9][0-8][0-9])\s*\W?\s*([2-9][0-9]{2})\s*\W?\s*([0-9]{4})(\se?x?t?(\d*))?'; A: I believe the Number::Phone::US and Regexp::Common (particularly the source of Regexp::Common::URI::RFC2806) Perl modules could help. The question should probably be specified in a bit more detail to explain the purpose of validating the numbers. For instance, 911 is a valid number in the US, but 911x isn't for any value of x. That's so that the phone company can calculate when you are done dialing. There are several variations on this issue. But your regex doesn't check the area code portion, so that doesn't seem to be a concern. Like validating email addresses, even if you have a valid result you can't know if it's assigned to someone until you try it. If you are trying to validate user input, why not normalize the result and be done with it? If the user puts in a number you can't recognize as a valid number, either save it as inputted or strip out undailable characters. The Number::Phone::Normalize Perl module could be a source of inspiration. A: I answered this question on another SO question before deciding to also include my answer as an answer on this thread, because no one was addressing how to require/not require items, just handing out regexs: Regex working wrong, matching unexpected things From my post on that site, I've created a quick guide to assist anyone with making their own regex for their own desired phone number format, which I will caveat (like I did on the other site) that if you are too restrictive, you may not get the desired results, and there is no "one size fits all" solution to accepting all possible phone numbers in the world - only what you decide to accept as your format of choice. Use at your own risk. 
Quick cheat sheet * *Start the expression: /^ *If you want to require a space, use: [\s] or \s *If you want to require parentheses, use: [(] and [)] . Using \( and \) is ugly and can make things confusing. *If you want anything to be optional, put a ? after it *If you want a hyphen, just type - or [-] . If you do not put it first or last in a series of other characters, though, you may need to escape it: \- *If you want to accept different choices in a slot, put brackets around the options: [-.\s] will require a hyphen, period, or space. A question mark after the last bracket will make all of those optional for that slot. *\d{3} : Requires a 3-digit number: 000-999. Shorthand for [0-9][0-9][0-9]. *[2-9] : Requires a digit 2-9 for that slot. *(\+|1\s)? : Accept a "plus" or a 1 and a space (pipe character, |, is "or"), and make it optional. The "plus" sign must be escaped. *If you want specific numbers to match a slot, enter them: [246] will require a 2, 4, or 6. (?:77|78) will require 77 or 78 (note that [77|78] is a character class matching a single 7, |, or 8, not the alternation "77 or 78"). *$/ : End the expression A: Better option... just strip all non-digit characters on input (except 'x' and leading '+' signs), taking care because of the British tendency to write numbers in the non-standard form +44 (0) ... when asked to use the international prefix (in that specific case, you should discard the (0) entirely). Then, you end up with values like: 12345678901 12345678901x1234 345678901x1234 12344678901 12345678901 12345678901 12345678901 +4112345678 +441234567890 Then when you display, reformat to your heart's content. e.g. 1 (234) 567-8901 1 (234) 567-8901 x1234 A: Do a replace on formatting characters, then check the remaining for phone validity. In PHP, $replace = array( ' ', '-', '/', '(', ')', ',', '.' ); //etc; as needed preg_match( '/1?[0-9]{10}((ext|x)[0-9]{1,4})?/i', str_replace( $replace, '', $phone_num ) ); Breaking a complex regexp like this can be just as effective, but much simpler.
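The strip-then-match approach in the PHP snippet above ports easily to other languages. Here is a rough Python sketch; the separator set and the "optional 1, ten digits, optional extension" pattern mirror the PHP version, but note it anchors with fullmatch (stricter than the unanchored preg_match above), and the exact choices are assumptions rather than a definitive rule:

```python
import re

# Formatting characters to discard, mirroring the PHP $replace array above.
SEPARATORS = str.maketrans('', '', ' -/().,')

def is_plausible_us_number(raw):
    """Strip common separators, then require an optional leading 1,
    exactly ten digits, and an optional x/ext extension (1-4 digits)."""
    stripped = raw.translate(SEPARATORS)
    return re.fullmatch(r'1?[0-9]{10}(?:(?:ext|x)[0-9]{1,4})?',
                        stripped, re.IGNORECASE) is not None

print(is_plausible_us_number('1-234-567-8901'))        # True
print(is_plausible_us_number('1.234.567.8901 x1234'))  # True
print(is_plausible_us_number('555-8901'))              # False: only seven digits
```

Because the matching happens after normalization, the same function accepts dashes, dots, slashes and parentheses interchangeably without the pattern having to enumerate them.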
A: I work for a market research company and we have to filter these types of input alllll the time. You're complicating it too much. Just strip the non-alphanumeric chars, and see if there's an extension. For further analysis you can subscribe to one of many providers that will give you access to a database of valid numbers as well as tell you if they're landlines or mobiles, disconnected, etc. It costs money. A: I wrote the simplest one (although I didn't need a dot in it). ^([0-9\(\)\/\+ \-]*)$ As mentioned below, it checks only for the characters, not their structure/order. A: .* If the users want to give you their phone numbers, then trust them to get it right. If they do not want to give it to you then forcing them to enter a valid number will either send them to a competitor's site or make them enter a random string that fits your regex. I might even be tempted to look up the number of a premium rate horoscope hotline and enter that instead. I would also consider any of the following as valid entries on a web site: "123 456 7890 until 6pm, then 098 765 4321" "123 456 7890 or try my mobile on 098 765 4321" "ex-directory - mind your own business" A: It turns out that there's something of a spec for this, at least for North America, called the NANP. You need to specify exactly what you want. What are legal delimiters? Spaces, dashes, and periods? No delimiter allowed? Can one mix delimiters (e.g., +0.111-222.3333)? How are extensions (e.g., 111-222-3333 x 44444) going to be handled? What about special numbers, like 911? Is the area code going to be optional or required?
Here's a regex for a 7 or 10 digit number, with extensions allowed; delimiters are spaces, dashes, or periods: ^(?:(?:\+?1\s*(?:[.-]\s*)?)?(?:\(\s*([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9])\s*\)|([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9]))\s*(?:[.-]\s*)?)?([2-9]1[02-9]|[2-9][02-9]1|[2-9][02-9]{2})\s*(?:[.-]\s*)?([0-9]{4})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?$ A: I was struggling with the same issue, trying to make my application future-proof, but these guys got me going in the right direction. I'm not actually checking the number itself to see if it works or not, I'm just trying to make sure that a series of numbers was entered that may or may not have an extension. Worst case scenario, if the user had to pull an unformatted number from the XML file, they would still just type the numbers into the phone's number pad (012345678x5); no real reason to keep it pretty. That kind of RegEx would come out something like this for me: \d+ ?\w{0,9} ?\d+ * *01234467 extension 123456 *01234567x123456 *01234567890 A: I found this to be something interesting. I have not tested it, but it looks as if it would work <?php /* string validate_telephone_number (string $number, array $formats) */ function validate_telephone_number($number, $formats) { $format = trim(preg_replace('/[0-9]/', '#', $number)); return in_array($format, $formats); } /* Usage Examples */ // List of possible formats: You can add new formats or modify the existing ones $formats = array('###-###-####', '####-###-###', '(###) ###-###', '####-####-####', '##-###-####-####', '####-####', '###-###-###', '#####-###-###', '##########'); $number = '08008-555-555'; if(validate_telephone_number($number, $formats)) { echo $number.' is a valid phone number.'; } echo "<br />"; $number = '123-555-555'; if(validate_telephone_number($number, $formats)) { echo $number.' is a valid phone number.'; } echo "<br />"; $number = '1800-1234-5678'; if(validate_telephone_number($number, $formats)) { echo $number.'
is a valid phone number.'; } echo "<br />"; $number = '(800) 555-123'; if(validate_telephone_number($number, $formats)) { echo $number.' is a valid phone number.'; } echo "<br />"; $number = '1234567890'; if(validate_telephone_number($number, $formats)) { echo $number.' is a valid phone number.'; } ?> A: You would probably be better off using a Masked Input for this. That way users can ONLY enter numbers and you can format it however you see fit. I'm not sure if this is for a web application, but if it is, there is a very slick jQuery plugin that offers some options for doing this. http://digitalbush.com/projects/masked-input-plugin/ They even go over how to mask phone number inputs in their tutorial. A: Here's one that works well in JavaScript. It's in a string because that's what the Dojo widget was expecting. It matches a 10-digit North American NANP number with an optional extension. Spaces, dashes and periods are accepted delimiters. "^(\\(?\\d\\d\\d\\)?)( |-|\\.)?\\d\\d\\d( |-|\\.)?\\d{4,4}(( |-|\\.)?[ext\\.]+ ?\\d+)?$" A: Note that stripping () characters does not work for a style of writing UK numbers that is common: +44 (0) 1234 567890 which means dial either the international number: +441234567890 or in the UK dial 01234567890 A: If you just want to verify you don't have random garbage in the field (i.e., from form spammers) this regex should do nicely: ^[0-9+\(\)#\.\s\/ext-]+$ Note that it doesn't have any special rules for how many digits, or what numbers are valid in those digits; it just verifies that only digits, parentheses, hyphens, plus signs, pound signs, periods, slashes, whitespace, or the letters e, x, t are present. It should be compatible with international numbers and localization formats. Do you foresee any need to allow square, curly, or angled brackets for some regions? (Currently they aren't included.) If you want to maintain per-digit rules (such as US Area Codes and Prefixes (exchange codes) having to fall in the range of 200-999), well, good luck to you.
Maintaining a complex rule-set which could be outdated at any point in the future by any country in the world does not sound fun. And while stripping all/most non-numeric characters may work well on the server side (especially if you are planning on passing these values to a dialer), you may not want to thrash the user's input during validation, particularly if you want them to make corrections in another field. A: My inclination is to agree that stripping non-digits and just accepting what's there is best. Maybe ensure at least a couple of digits are present, although that does prohibit something like an alphabetic phone number, "ASK-JAKE" for example. A couple of simple Perl expressions might be: @f = /(\d+)/g; tr/0-9//dc; Use the first one to keep the digit groups together, which may give formatting clues. Use the second one to trivially toss all non-digits. Is it a worry that there may need to be a pause and then more keys entered? Or something like 555-1212 (wait for the beep) 123? A: pattern="^[\d|\+|\(]+[\)|\d|\s|-]*[\d]$" validateat="onsubmit" Must end with a digit, can begin with ( or + or a digit, and may contain + - ( or ) A: For anyone interested in doing something similar with Irish mobile phone numbers, here's a straightforward way of accomplishing it: http://ilovenicii.com/?p=87 PHP <?php $pattern = "/^(083|085|086|087)\d{7}$/"; $phone = "087343266"; if (preg_match($pattern,$phone)) echo "Match"; else echo "Not match"; There is also a jQuery solution on that link.
EDIT: jQuery solution: $(function(){ //original field values var field_values = { //id : value 'url' : 'url', 'yourname' : 'yourname', 'email' : 'email', 'phone' : 'phone' }; var url =$("input#url").val(); var yourname =$("input#yourname").val(); var email =$("input#email").val(); var phone =$("input#phone").val(); //inputfocus $('input#url').inputfocus({ value: field_values['url'] }); $('input#yourname').inputfocus({ value: field_values['yourname'] }); $('input#email').inputfocus({ value: field_values['email'] }); $('input#phone').inputfocus({ value: field_values['phone'] }); //reset progress bar $('#progress').css('width','0'); $('#progress_text').html('0% Complete'); //first_step $('form').submit(function(){ return false; }); $('#submit_first').click(function(){ //remove classes $('#first_step input').removeClass('error').removeClass('valid'); //ckeck if inputs aren't empty var fields = $('#first_step input[type=text]'); var error = 0; fields.each(function(){ var value = $(this).val(); if( value.length<12 || value==field_values[$(this).attr('id')] ) { $(this).addClass('error'); $(this).effect("shake", { times:3 }, 50); error++; } else { $(this).addClass('valid'); } }); if(!error) { if( $('#password').val() != $('#cpassword').val() ) { $('#first_step input[type=password]').each(function(){ $(this).removeClass('valid').addClass('error'); $(this).effect("shake", { times:3 }, 50); }); return false; } else { //update progress bar $('#progress_text').html('33% Complete'); $('#progress').css('width','113px'); //slide steps $('#first_step').slideUp(); $('#second_step').slideDown(); } } else return false; }); //second section $('#submit_second').click(function(){ //remove classes $('#second_step input').removeClass('error').removeClass('valid'); var emailPattern = /^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$/; var fields = $('#second_step input[type=text]'); var error = 0; fields.each(function(){ var value = $(this).val(); if( value.length<1 || 
value==field_values[$(this).attr('id')] || ( $(this).attr('id')=='email' && !emailPattern.test(value) ) ) { $(this).addClass('error'); $(this).effect("shake", { times:3 }, 50); error++; } else { $(this).addClass('valid'); } function validatePhone(phone) { var a = document.getElementById(phone).value; var filter = /^[0-9-+]+$/; if (filter.test(a)) { return true; } else { return false; } } $('#phone').blur(function(e) { if (validatePhone('txtPhone')) { $('#spnPhoneStatus').html('Valid'); $('#spnPhoneStatus').css('color', 'green'); } else { $('#spnPhoneStatus').html('Invalid'); $('#spnPhoneStatus').css('color', 'red'); } }); }); if(!error) { //update progress bar $('#progress_text').html('66% Complete'); $('#progress').css('width','226px'); //slide steps $('#second_step').slideUp(); $('#fourth_step').slideDown(); } else return false; }); $('#submit_second').click(function(){ //update progress bar $('#progress_text').html('100% Complete'); $('#progress').css('width','339px'); //prepare the fourth step var fields = new Array( $('#url').val(), $('#yourname').val(), $('#email').val(), $('#phone').val() ); var tr = $('#fourth_step tr'); tr.each(function(){ //alert( fields[$(this).index()] ) $(this).children('td:nth-child(2)').html(fields[$(this).index()]); }); //slide steps $('#third_step').slideUp(); $('#fourth_step').slideDown(); }); $('#submit_fourth').click(function(){ url =$("input#url").val(); yourname =$("input#yourname").val(); email =$("input#email").val(); phone =$("input#phone").val(); //send information to server var dataString = 'url='+ url + '&yourname=' + yourname + '&email=' + email + '&phone=' + phone; alert (dataString);//return false; $.ajax({ type: "POST", url: "http://clients.socialnetworkingsolutions.com/infobox/contact/", data: "url="+url+"&yourname="+yourname+"&email="+email+'&phone=' + phone, cache: false, success: function(data) { console.log("form submitted"); alert("success"); } }); return false; }); //back button $('.back').click(function(){ 
var container = $(this).parent('div'), previous = container.prev(); switch(previous.attr('id')) { case 'first_step' : $('#progress_text').html('0% Complete'); $('#progress').css('width','0px'); break; case 'second_step': $('#progress_text').html('33% Complete'); $('#progress').css('width','113px'); break; case 'third_step' : $('#progress_text').html('66% Complete'); $('#progress').css('width','226px'); break; default: break; } $(container).slideUp(); $(previous).slideDown(); }); }); Source. A: I would also suggest looking at the "libphonenumber" Google Library. I know it is not regex, but it does exactly what you want. For example, it will recognize that: 15555555555 is a possible number but not a valid number. It also supports countries outside the US. Highlights of functionality: * *Parsing/formatting/validating phone numbers for all countries/regions of the world. *getNumberType - gets the type of the number based on the number itself; able to distinguish Fixed-line, Mobile, Toll-free, Premium Rate, Shared Cost, VoIP and Personal Numbers (whenever feasible). *isNumberMatch - gets a confidence level on whether two numbers could be the same. *getExampleNumber/getExampleNumberByType - provides valid example numbers for all countries/regions, with the option of specifying which type of example phone number is needed. *isPossibleNumber - quickly guesses whether a number is a possible phone number by using only the length information, much faster than a full validation. *isValidNumber - full validation of a phone number for a region using length and prefix information. *AsYouTypeFormatter - formats phone numbers on-the-fly as users enter each digit. *findNumbers - finds numbers in text input. *PhoneNumberOfflineGeocoder - provides geographical information related to a phone number. Examples The biggest problem with phone number validation is that it is very culturally dependent.
* *America * *(408) 974–2042 is a valid US number *(999) 974–2042 is not a valid US number *Australia * *0404 999 999 is a valid Australian number *(02) 9999 9999 is also a valid Australian number *(09) 9999 9999 is not a valid Australian number A regular expression is fine for checking the format of a phone number, but it's not really going to be able to check the validity of a phone number. I would suggest skipping a simple regular expression to test your phone number against, and using a library such as Google's libphonenumber (link to GitHub project). Introducing libphonenumber! Using one of your more complex examples, 1-234-567-8901 x1234, you get the following data out of libphonenumber (link to online demo): Validation Results Result from isPossibleNumber() true Result from isValidNumber() true Formatting Results: E164 format +12345678901 Original format (234) 567-8901 ext. 123 National format (234) 567-8901 ext. 123 International format +1 234-567-8901 ext. 123 Out-of-country format from US 1 (234) 567-8901 ext. 123 Out-of-country format from CH 00 1 234-567-8901 ext. 123 So not only do you learn if the phone number is valid (which it is), but you also get consistent phone number formatting in your locale. 
As a bonus, libphonenumber has a number of datasets to check the validity of phone numbers, as well, so checking a number such as +61299999999 (the international version of (02) 9999 9999) returns as a valid number with formatting: Validation Results Result from isPossibleNumber() true Result from isValidNumber() true Formatting Results E164 format +61299999999 Original format 61 2 9999 9999 National format (02) 9999 9999 International format +61 2 9999 9999 Out-of-country format from US 011 61 2 9999 9999 Out-of-country format from CH 00 61 2 9999 9999 libphonenumber also gives you many additional benefits, such as grabbing the location that the phone number is detected as being, and also getting the time zone information from the phone number: PhoneNumberOfflineGeocoder Results Location Australia PhoneNumberToTimeZonesMapper Results Time zone(s) [Australia/Sydney] But the invalid Australian phone number ((09) 9999 9999) returns that it is not a valid phone number. Validation Results Result from isPossibleNumber() true Result from isValidNumber() false Google's version has code for Java and Javascript, but people have also implemented libraries for other languages that use the Google i18n phone number dataset: * *PHP: https://github.com/giggsey/libphonenumber-for-php *Python: https://github.com/daviddrysdale/python-phonenumbers *Ruby: https://github.com/sstephenson/global_phone *C#: https://github.com/twcclegg/libphonenumber-csharp *Objective-C: https://github.com/iziz/libPhoneNumber-iOS *JavaScript: https://github.com/ruimarinho/google-libphonenumber *Elixir: https://github.com/socialpaymentsbv/ex_phone_number Unless you are certain that you are always going to be accepting numbers from one locale, and they are always going to be in one format, I would heavily suggest not writing your own code for this, and using libphonenumber for validating and displaying phone numbers. 
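For Python, the python-phonenumbers port listed above is the practical choice. Purely to illustrate what the E.164 normalization shown in the tables above amounts to (the `+12345678901` row), here is a naive stdlib-only sketch for NANP input; the real library does this with per-region metadata and handles far more cases, so treat this as a toy:

```python
import re

def to_e164_us(raw):
    """Naive E.164 normalizer for 10- or 11-digit NANP input.
    Returns None when the digit count does not fit the NANP shape."""
    digits = re.sub(r'\D', '', raw)   # drop every non-digit
    if len(digits) == 10:
        digits = '1' + digits         # assume the US/Canada country code
    if len(digits) != 11 or not digits.startswith('1'):
        return None
    return '+' + digits

print(to_e164_us('(234) 567-8901'))   # +12345678901
print(to_e164_us('1.234.567.8901'))   # +12345678901
print(to_e164_us('12345'))            # None
```

Storing the normalized E.164 form and reformatting only for display is essentially what libphonenumber encourages, and it sidesteps arguing about separators entirely.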
A: Here's a wonderful pattern that most closely matched the validation that I needed to achieve. I'm not the original author, but I think it's well worth sharing as I found this problem to be very complex and without a concise or widely useful answer. The following regex will catch widely used number and character combinations in a variety of global phone number formats: /^\s*(?:\+?(\d{1,3}))?([-. (]*(\d{3})[-. )]*)?((\d{3})[-. ]*(\d{2,4})(?:[-.x ]*(\d+))?)\s*$/gm Positive: +42 555.123.4567 +1-(800)-123-4567 +7 555 1234567 +7(926)1234567 (926) 1234567 +79261234567 926 1234567 9261234567 1234567 123-4567 123-89-01 495 1234567 469 123 45 67 89261234567 8 (926) 1234567 926.123.4567 415-555-1234 650-555-2345 (416)555-3456 202 555 4567 4035555678 1 416 555 9292 Negative: 926 3 4 8 800 600-APPLE Original source: http://www.regexr.com/38pvb A: Have you had a look over at RegExLib? Entering US phone number brought back quite a list of possibilities. A: My attempt at an unrestrictive regex: /^[+#*\(\)\[\]]*([0-9][ ext+-pw#*\(\)\[\]]*){6,45}$/ Accepts: +(01) 123 (456) 789 ext555 123456 *44 123-456-789 [321] 123456 123456789012345678901234567890123456789012345 *****++[](][((( 123456tteexxttppww Rejects: mob 07777 777777 1234 567 890 after 5pm john smith (empty) 1234567890123456789012345678901234567890123456 911 It is up to you to sanitize it for display. After validating it could be a number though. A: I found this to work quite well: ^\(*\+*[1-9]{0,3}\)*-*[1-9]{0,3}[-. /]*\(*[2-9]\d{2}\)*[-. /]*\d{3}[-. /]*\d{4} *e*x*t*\.* *\d{0,4}$ It works for these number formats: 1-234-567-8901 1-234-567-8901 x1234 1-234-567-8901 ext1234 1 (234) 567-8901 1.234.567.8901 1/234/567/8901 12345678901 1-234-567-8901 ext. 1234 (+351) 282 433 5050 Make sure to use global AND multiline flags to make sure. Link: http://www.regexr.com/3bp4b A: Here's my best try so far. It handles the formats above but I'm sure I'm missing some other possible formats. 
^\d?(?:(?:[\+]?(?:[\d]{1,3}(?:[ ]+|[\-.])))?[(]?(?:[\d]{3})[\-/)]?(?:[ ]+)?)?(?:[a-zA-Z2-9][a-zA-Z0-9 \-.]{6,})(?:(?:[ ]+|[xX]|(?:ext[\.]?)){1,2}(?:[\d]{1,5}))?$ A: If you're talking about form validation, the regexp to validate correct meaning as well as correct data is going to be extremely complex because of varying country and provider standards. It will also be hard to keep up to date. I interpret the question as looking for a broadly valid pattern, which may not be internally consistent - for example, having a valid set of numbers, but not validating that the trunk line, exchange, etc. conform to the valid pattern for the country code prefix. North America is straightforward, and for international I prefer to use an 'idiomatic' pattern which covers the ways in which people specify and remember their numbers: ^((((\(\d{3}\))|(\d{3}-))\d{3}-\d{4})|(\+?\d{2}((-| )\d{1,8}){1,5}))(( x| ext)\d{1,5}){0,1}$ The North American pattern makes sure that if one parenthesis is included, both are. The international one accounts for an optional initial '+' and country code. After that, you're in the idiom. Valid matches would be: * *(xxx)xxx-xxxx *(xxx)-xxx-xxxx *(xxx)xxx-xxxx x123 *12 1234 123 1 x1111 *12 12 12 12 12 *12 1 1234 123456 x12345 *+12 1234 1234 *+12 12 12 1234 *+12 1234 5678 *+12 12345678 This may be biased, as my experience is limited to North America, Europe and a small bit of Asia. A: This is a simple Regular Expression pattern for Philippine Mobile Phone Numbers: ((\+[0-9]{2})|0)[.\- ]?9[0-9]{2}[.\- ]?[0-9]{3}[.\- ]?[0-9]{4} or ((\+63)|0)[.\- ]?9[0-9]{2}[.\- ]?[0-9]{3}[.\- ]?[0-9]{4} will match these: +63.917.123.4567 +63-917-123-4567 +63 917 123 4567 +639171234567 09171234567 The first one will match ANY two-digit country code, while the second one will match the Philippine country code exclusively.
Test it here: http://refiddle.com/1ox A: My gut feeling is reinforced by the number of replies to this topic - that there is a virtually infinite number of solutions to this problem, none of which are going to be elegant. Honestly, I would recommend you don't try to validate phone numbers. Even if you could write a big, hairy validator that would allow all the different legitimate formats, it would end up allowing pretty much anything even remotely resembling a phone number in the first place. In my opinion, the most elegant solution is to validate a minimum length, nothing more. A: I wouldn't recommend using a regex for this. Like the top answer, strip all the ugliness from the phone number, so that you're left with a string of numeric characters, with an 'x' if an extension is provided. In Python: Note: BAD_AREA_CODES comes from a text file that you can grab from the web. BAD_AREA_CODES = open('badareacodes.txt', 'r').read().split('\n') def is_valid_phone(phone_number, country_code='US'): """for now, only US codes are handled""" if country_code: country_code = country_code.upper() #drop everything except 0-9 and 'x' phone_number = ''.join(filter(lambda n: n.isdigit() or n == 'x', phone_number)) ext = None check_ext = phone_number.split('x') if len(check_ext) > 1: #there's an extension. Check for errors. if len(check_ext) > 2: return False phone_number, ext = check_ext #we only accept 10 digit phone numbers. if len(phone_number) == 11 and phone_number[0] == '1': #international code phone_number = phone_number[1:] if len(phone_number) != 10: return False #area_code: XXXxxxxxxx #head: xxxXXXxxxx #tail: xxxxxxXXXX area_code = phone_number[ :3] head = phone_number[3:6] tail = phone_number[6: ] if area_code in BAD_AREA_CODES: return False if head[0] == '1': return False if head[1:] == '11': return False #any other ideas? return True This covers quite a bit. It's not a regex, but it does map to other languages pretty easily.
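A condensed, self-contained version of the checks above makes the logic easy to try out. The file-based area-code blacklist is stubbed out as an empty set here, so treat this as a sketch rather than a drop-in replacement:

```python
BAD_AREA_CODES = set()  # stand-in for the badareacodes.txt list above

def is_valid_phone(phone_number):
    """US-only sanity check: optional leading 1, ten digits,
    optional 'x' extension, and basic NANP digit rules."""
    digits = ''.join(c for c in phone_number if c.isdigit() or c == 'x')
    parts = digits.split('x')
    if len(parts) > 2:            # at most one extension marker
        return False
    number = parts[0]
    if len(number) == 11 and number[0] == '1':
        number = number[1:]       # drop the country code
    if len(number) != 10:
        return False
    area_code, head = number[:3], number[3:6]
    if area_code in BAD_AREA_CODES:
        return False
    if head[0] == '1' or head[1:] == '11':  # exchange can't start with 1 or be N11
        return False
    return True

print(is_valid_phone('1 (234) 567-8901'))  # True
print(is_valid_phone('(234) 911-8901'))    # False: N11 exchange
```

Each rejection rule is a separate early return, so adding more digit rules (or restoring the real blacklist) is a one-line change.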
A: Working example for Turkey; just change the \d{9} according to your needs and start using it. function validateMobile($phone) { $pattern = "/^(05)\d{9}$/"; if (!preg_match($pattern, $phone)) { return false; } return true; } $phone = "0532486061"; if(!validateMobile($phone)) { echo 'Incorrect Mobile Number!'; } $phone = "05324860614"; if(validateMobile($phone)) { echo 'Correct Mobile Number!'; } A: It's nearly impossible to handle all sorts of international phone numbers using a simple regex. You'd be better off using a service like numverify.com; they offer a free JSON API for international phone number validation, plus you'll get some useful details on country, location, carrier and line type with every request. A: I find String regex = "^\\+(?:[0-9] ?){6,14}[0-9]$"; helpful for international numbers. A: As there is no language tag with this post, I'm going to give a regex solution used within Python. The expression itself: 1[\s./-]?\(?[\d]+\)?[\s./-]?[\d]+[-/.]?[\d]+\s?[\d]+ When used within Python: import re phonelist ="1-234-567-8901,1-234-567-8901 1234,1-234-567-8901 1234,1 (234) 567-8901,1.234.567.8901,1/234/567/8901,12345678901" phonenumber = '\n'.join([phone for phone in re.findall(r'1[\s./-]?\(?[\d]+\)?[\s./-]?[\d]+[-/.]?[\d]+\s?[\d]+' ,phonelist)]) print(phonenumber) Output: 1-234-567-8901 1-234-567-8901 1234 1-234-567-8901 1234 1 (234) 567-8901 1.234.567.8901 1/234/567/8901 12345678901 A: Note It takes as input a US mobile number in any format and optionally accepts a second parameter - set it to true if you want the output mobile number formatted to look pretty. If the number provided is not a mobile number, it simply returns false. If a mobile number IS detected, it returns the entire sanitized number instead of true.
function isValidMobile(num,format) { if (!format) format=false var m1 = /^(\W|^)[(]{0,1}\d{3}[)]{0,1}[.]{0,1}[\s-]{0,1}\d{3}[\s-]{0,1}[\s.]{0,1}\d{4}(\W|$)/ if(!m1.test(num)) { return false } num = num.replace(/ /g,'').replace(/\./g,'').replace(/-/g,'').replace(/\(/g,'').replace(/\)/g,'').replace(/\[/g,'').replace(/\]/g,'').replace(/\+/g,'').replace(/\~/g,'').replace(/\{/g,'').replace(/\*/g,'').replace(/\}/g,'') if ((num.length < 10) || (num.length > 11) || (num.substring(0,1)=='0') || (num.substring(1,1)=='0') || ((num.length==10)&&(num.substring(0,1)=='1'))||((num.length==11)&&(num.substring(0,1)!='1'))) return false; num = (num.length == 11) ? num : ('1' + num); if ((num.length == 11) && (num.substring(0,1) == "1")) { if (format===true) { return '(' + num.substr(1,3) + ') ' + num.substr(4,3) + '-' + num.substr(7,4) } else { return num } } else { return false; } } A: Try this (It is for Indian mobile number validation): if (!phoneNumber.matches("^[6-9]\\d{9}$")) { return false; } else { return true; } A: Java generates REGEX for valid phone numbers Another alternative is to let Java generate a REGEX that macthes all variations of phone numbers read from a list. This means that the list called validPhoneNumbersFormat, seen below in code context, is deciding which phone number format is valid. Note: This type of algorithm would work for any language handling regular expressions. 
Code snippet that generates the REGEX: Set<String> regexSet = uniqueValidPhoneNumbersFormats.stream() .map(s -> s.replaceAll("\\+", "\\\\+")) .map(s -> s.replaceAll("\\d", "\\\\d")) .map(s -> s.replaceAll("\\.", "\\\\.")) .map(s -> s.replaceAll("([\\(\\)])", "\\\\$1")) .collect(Collectors.toSet()); String regex = String.join("|", regexSet); Code snippet in context: public class TestBench { public static void main(String[] args) { List<String> validPhoneNumbersFormat = Arrays.asList( "1-234-567-8901", "1-234-567-8901 x1234", "1-234-567-8901 ext1234", "1 (234) 567-8901", "1.234.567.8901", "1/234/567/8901", "12345678901", "+12345678901", "(234) 567-8901 ext. 123", "+1 234-567-8901 ext. 123", "1 (234) 567-8901 ext. 123", "00 1 234-567-8901 ext. 123", "+210-998-234-01234", "210-998-234-01234", "+21099823401234", "+210-(998)-(234)-(01234)", "(+351) 282 43 50 50", "90191919908", "555-8909", "001 6867684", "001 6867684x1", "1 (234) 567-8901", "1-234-567-8901 x1234", "1-234-567-8901 ext1234", "1-234 567.89/01 ext.1234", "1(234)5678901x1234", "(123)8575973", "(0055)(123)8575973" ); Set<String> uniqueValidPhoneNumbersFormats = new LinkedHashSet<>(validPhoneNumbersFormat); List<String> invalidPhoneNumbers = Arrays.asList( "+210-99A-234-01234", // FAIL "+210-999-234-0\"\"234", // FAIL "+210-999-234-02;4", // FAIL "-210+998-234-01234", // FAIL "+210-998)-(234-(01234" // FAIL ); List<String> invalidAndValidPhoneNumbers = new ArrayList<>(); invalidAndValidPhoneNumbers.addAll(invalidPhoneNumbers); invalidAndValidPhoneNumbers.addAll(uniqueValidPhoneNumbersFormats); Set<String> regexSet = uniqueValidPhoneNumbersFormats.stream() .map(s -> s.replaceAll("\\+", "\\\\+")) .map(s -> s.replaceAll("\\d", "\\\\d")) .map(s -> s.replaceAll("\\.", "\\\\.")) .map(s -> s.replaceAll("([\\(\\)])", "\\\\$1")) .collect(Collectors.toSet()); String regex = String.join("|", regexSet); List<String> result = new ArrayList<>(); Pattern pattern = Pattern.compile(regex); for (String phoneNumber : 
invalidAndValidPhoneNumbers) { Matcher matcher = pattern.matcher(phoneNumber); if(matcher.matches()) { result.add(matcher.group()); } } // Output: if(uniqueValidPhoneNumbersFormats.size() == result.size()) { System.out.println("All valid numbers was matched!\n"); } result.forEach(System.out::println); } } Output: All valid numbers was matched! 1-234-567-8901 1-234-567-8901 x1234 1-234-567-8901 ext1234 ... ... ... A: Although it's not regex, you can use the function validate_phone() from the Python library DataPrep to validate US phone numbers. Install it with pip install dataprep. >>> from dataprep.clean import validate_phone >>> df = pd.DataFrame({'phone': ['1-234-567-8901', '1-234-567-8901 x1234', '1-234-567-8901 ext1234', '1 (234) 567-8901', '1.234.567.8901', '1/234/567/8901', 12345678901, '12345678', '123-456-78987']}) >>> validate_phone(df['phone']) 0 True 1 True 2 True 3 True 4 True 5 True 6 True 7 False 8 False Name: phone, dtype: bool A: /\b(\d{3}[^\d]{0,2}\d{3}[^\d]{0,2}\d{4})\b/ A: Simple regex and other tricks work, even just .*, as long as you show a hint / example / placeholder / tooltip for the input. Then verifying on the frontend, before submitting, that the format is actually correct gives the best experience. This would simplify formats for an inexperienced user. A: If at all possible, I would recommend having four separate fields (area code, 3-digit prefix, 4-digit part, extension) so that the user can input each part of the number separately, and you can verify each piece individually. That way you can not only make verification much easier, you can store your phone numbers in a more consistent format in the database. A: Since there are so many options to write a phone number, one can just test that there are enough digits in it, no matter how they are separated.
I found 9 to 14 digits work for me: ^\D*(\d\D*){9,14}$ true: * *123456789 *1234567890123 *+123 (456) 78.90-98.76 false: * *123 *(1234) 1234 *9007199254740991 *123 wont do what you tell me *+123 (456) 78.90-98.76 #543 ext 210>2>5>3 *(123) 456-7890 in the morning (987) 54-3210 after 18:00 and ask for Shirley if you do want to support those last two examples - just remove the upper limit: (\d\D*){9,} (the ^ and $ anchors are not needed if there's no upper limit)
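The digit-count approach is easy to sanity-check; here is a minimal Python sketch that reuses the exact true/false examples listed above (the pattern itself is unchanged from the answer):

```python
import re

# Pattern from the answer above: 9 to 14 digits, with any non-digit
# characters allowed before, between, and after them.
PHONE_RE = re.compile(r"^\D*(\d\D*){9,14}$")

valid = [
    "123456789",
    "1234567890123",
    "+123 (456) 78.90-98.76",
]
invalid = [
    "123",
    "(1234) 1234",
    "9007199254740991",            # 16 digits: over the upper limit
    "123 wont do what you tell me",
]

assert all(PHONE_RE.match(s) for s in valid)
assert not any(PHONE_RE.match(s) for s in invalid)
print("all examples classified as described")
```

As the answer notes, dropping the upper bound (and the anchors) would make the two longer rejected examples acceptable as well.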
{ "language": "en", "url": "https://stackoverflow.com/questions/123559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1045" }
Q: How to get the temperature of motherboard of a PC (and other hardware statistics)? Does anyone know how to get the current motherboard, processor or HD temperature statistics? In GNU/Linux, I know I could use something like hddtemp or sensord to get the info, and then parse it... but in Windows: How can I do this? And can it be done with C# or Java or any other high-level programming language? Thanks! A: I would argue that, when the right configuration is in place, the Linux tooling can be superior to what Windows offers. http://www.lm-sensors.org/ is what does all the work. I had that plugged into RRDgraph & Munin and I was monitoring the temperature of my room over a period of almost a year and had nice pretty graphs. It also showed me my CPU fan was slowly wearing down, and I could see the line sloping down over a long period and know it was on the way out. http://www.lm-sensors.org/browser/lm-sensors/trunk/doc/developers/applications is what you want. (Oh wait, I fail. You're on *nix wanting to do it on Windows, my bad :( ..um.. well. Good luck. Maybe I'll leave this here in case somebody finds your post while searching for the contrary) Back when I did use Windows, all I recall is Ye' Old Motherboard Monitor ( Discontinued ). The Wikipedia article says there is SpeedFan and that looks like your best option. Programmatically, I guess you'll have to find the hardware specs and dig through the Windows API and stackloads of arbitrary bus address offsets. A: As @AndrewJFord suggests, these methods vary from vendor to vendor, indeed from part to part, but I'll make some generalisations if that's ok. * *As far as I know all current mainstream processors by Intel, AMD and IBM have on-board thermal sensors with known exposed APIs for reading this data. I'm no expert in these APIs so don't know how similar they are but I'd be surprised if Intel's and AMD's APIs are that much different.
If I were you I'd search for an open-source 'system management tool' (there are a few like this written as Apple Widgets by the way) and see how they do it. *Motherboards vary a hell of a lot; some have extensive thermal sensoring, some none, and all will have fairly different APIs. I'd start by contacting the support people at the company who makes your mobo-of-choice. *In general I believe that only very high-end 15krpm SAS disks have built-in thermal sensors; I know that some mid-range systems have sensors taped to their case at the hub and report that back to the mobo. I'm really not sure how to get at this info, but again I'd start by speaking with the same people as the question above. Now I'm a big HP user and all of their kit is instrumented by something called Insight Management Agents, of which there are versions available for Windows and most Linux distributions. What they do is gather all the system information from all their sensors (proc, memory, mobo, fans, disks etc) and expose that via an SNMP-based polling API or via an alert-based SNMP/SMTP/MAPI interface. I dare say IBM/Dell etc will have their own equally good and functionally similar versions, but I don't know them, sorry. If your machines are 'off-brand'/made-from-kit or you have no control then I'm not aware of any single method of getting at all this information easily. A: The problem with temperature and other monitoring sensors is that there is no common protocol at the hardware level, nor drivers that allow retrieving that information through a common API. Software like the already-mentioned SpeedFan and HWMonitor (from the makers of the CPU-Z utility) work by painstakingly cataloging the various sensors and bus controllers, and implementing the corresponding protocols, usually using a kernel-mode driver to access SMBus devices.
To embed this functionality in your own software, you can either develop it yourself (possibly reducing the amount of work by tailoring it to your specific hardware, and using the Linux code from www.lm-sensors.org as a reference) or purchase a commercial library that implements it. One, used by HWMonitor, is available here. Good luck. A: This is going to vary quite a bit depending on your hardware. Once you figure out from your hardware vendor whether you have sensors on your motherboard, you might look into using SNMP and the HOST-RESOURCES-MIB. Use the Add/Remove Windows Components Wizard under Management and Monitoring Tools to get SNMP installed. Then you should be able to ask your Windows box for lots of info using standard systems management software like OpenView or Nagios.
{ "language": "en", "url": "https://stackoverflow.com/questions/123575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: WebResource.axd giving 403 error in ASP.Net Post backs using IIS7 I installed an ASP.Net website on a Windows 2008 server, which is by default using IIS7. The website seems to work fine, but the post backs on my forms do not work. After a few hours of debugging, I realized that when I manually try to hit the WebResource.axd file in my browser (e.g. I type http://www.domain.com/WebResource.axd in the address bar), I get a HTTP 403 error (Access Denied). I'm not quite sure where to look next and my Windows 2008 security knowledge is limited. How do I go about giving access to that file? A: If you are using the Plesk panel or the Web Application Firewall (ModSecurity) is active, disable the "OWASP_CRS / LEAKAGE / ERRORS_IIS" and "OWASP_CRS / POLICY / EXT_RESTRICTED" security rules. A: Navigate to your IIS config folder. Typically: c:\windows\system32\inetsrv\config and open the applicationHost.config file. Then within the file navigate to the <handlers> section and check that the following line is present: <add name="AssemblyResourceLoader-Integrated" path="WebResource.axd" verb="GET,DEBUG" type="System.Web.Handlers.AssemblyResourceLoader" preCondition="integratedMode" /> That is if you're running in integrated mode. Check that the verb GET is specified. If you are running in classic pipeline mode then this line should be present: <add name="AXD-ISAPI-2.0" path="*.axd" verb="GET,HEAD,POST,DEBUG" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> If everything seems to be OK, then look at the Handlers and Modules sections of your web.config and see if you have added <clear /> at the top of each. If you did, then you have to include the corresponding add sections in your web.config file, respecting the order of the handlers/modules specified in the applicationHost.config file.
A: For hosting companies that use the Plesk control panel, you can turn off the Web Application Firewall (WAF), but since the purpose of the WAF is to help protect your website from various attacks you should only deactivate the specific Rule IDs that are causing the issue. In my case, I needed to allow .axd files on my website (e.g., ScriptResource.axd and WebResource.axd). * *Identify the specific Rule ID that is being violated by opening the "ModSecurity Logfile" on the Plesk WAF page. *Search for the 403 Access Denied message in the logfile and then look for a substring that looks like [id "942440"]. This is the rule ID that is being violated. *Switch off the "Security Rule" that applies to this issue, by typing the ID number in the "Switch off Security Rules" section on the Plesk WAF page. In this example, you would type just the numbers "942440" inside the text box. A: Not sure on that one, but it may be related to HTTP compression in IIS. Also check that the file is accessible to the IIS user. A: Check your IIS logs - they should give a status code that has more detailed information about the error. Also, what is the nature of the error on the postback? A: This is the error that I'm getting when doing a postback: WebForm_PostBackOptions is undefined. To my knowledge that function is contained inside the WebResource.axd file, which led me to try it in the address bar, which is how I know about the 403 error... A: There is an issue in the firewall settings: the request is blocked by the firewall. Contact the server admin to change the configuration. That is how we got our solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/123585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Access Enum value using EL with JSTL I have an Enum called Status defined as such: public enum Status { VALID("valid"), OLD("old"); private final String val; Status(String val) { this.val = val; } public String getStatus() { return val; } } I would like to access the value of VALID from a JSTL tag. Specifically the test attribute of the <c:when> tag. E.g. <c:when test="${dp.status eq Status.VALID}"> I'm not sure if this is possible. A: If using Spring MVC, the Spring Expression Language (SpEL) can be helpful: <spring:eval expression="dp.status == T(com.example.Status).VALID" var="isValid" /> <c:if test="${isValid}"> isValid </c:if> A: You have 3 choices here, none of which is perfect: * *You can use a scriptlet in the test attribute: <c:when test="<%= dp.getStatus() == Status.VALID %>"> This uses the enum, but it also uses a scriptlet, which is not the "right way" in JSP 2.0. But most importantly, this doesn't work when you want to add another condition to the same when using ${}. And this means all the variables you want to test have to be declared in a scriptlet, or kept in request, or session (pageContext variable is not available in .tag files). *You can compare against string: <c:when test="${dp.status == 'VALID'}"> This looks clean, but you're introducing a string that duplicates the enum value and cannot be validated by the compiler. So if you remove that value from the enum or rename it, you will not see that this part of the code is not accessible anymore. You basically have to do a search/replace through the code each time. *You can add each of the enum values you use into the page context: <c:set var="VALID" value="<%=Status.VALID%>"/> and then you can do this: <c:when test="${dp.status == VALID}"> I prefer the last option (3), even though it also uses a scriptlet. This is because it only uses it when you set the value. Later on you can use it in more complex EL expressions, together with other EL conditions.
While in option (1) you cannot use a scriptlet and an EL expression in the test attribute of a single when tag. A: I do not have an answer to the question of Kornel, but I've a remark about the other script examples. Most of the expressions implicitly rely on toString(), but Enum.valueOf() expects a value that comes from/matches the Enum.name() property. So one should use e.g.: <% pageContext.setAttribute("Status_OLD", Status.OLD.name()); %> ... <c:when test="${someModel.status == Status_OLD}">...</c:when> A: So to get my problem fully resolved I needed to do the following: <% pageContext.setAttribute("old", Status.OLD); %> Then I was able to do: <c:when test="${someModel.status == old}">...</c:when> which worked as expected. A: Add a method to the enum like: public String getString() { return this.name(); } For example public enum MyEnum { VALUE_1, VALUE_2; public String getString() { return this.name(); } } Then you can use: <c:if test="${myObject.myEnumProperty.string eq 'VALUE_2'}">...</c:if> A: Here are two more possibilities: JSP EL 3.0 Constants As long as you are using at least version 3.0 of EL, then you can import constants into your page as follows: <%@ page import="org.example.Status" %> <c:when test="${dp.status eq Status.VALID}"> However, some IDEs don't understand this yet (e.g. IntelliJ) so you won't get any warnings if you make a typo, until runtime. This would be my preferred method once it gets proper IDE support. Helper Methods You could just add getters to your enum. public enum Status { VALID("valid"), OLD("old"); private final String val; Status(String val) { this.val = val; } public String getStatus() { return val; } public boolean isValid() { return this == VALID; } public boolean isOld() { return this == OLD; } } Then in your JSP: <c:when test="${dp.status.valid}"> This is supported in all IDEs and will also work if you can't use EL 3.0 yet. This is what I do at the moment because it keeps all the logic wrapped up into my enum.
Also be careful if it is possible for the variable storing the enum to be null. You would need to check for that first if your code doesn't guarantee that it is not null: <c:when test="${not empty dp.status and dp.status.valid}"> I think this method is superior to those where you set an intermediary value in the JSP because you have to do that on each page where you need to use the enum. However, with this solution you only need to declare the getter once. A: A simple comparison against a string works: <c:when test="${someModel.status == 'OLD'}"> A: For this purpose I do the following: <c:set var="abc"> <%=Status.OLD.getStatus()%> </c:set> <c:if test="${someVariable == abc}"> .... </c:if> It looks ugly, but it works! A: When using an MVC framework I put the following in my controller. request.setAttribute(RequestParameterNamesEnum.INBOX_ACTION.name(), RequestParameterNamesEnum.INBOX_ACTION.name()); This allows me to use the following in my JSP page. <script> var url = 'http://www.nowhere.com/?${INBOX_ACTION}=' + someValue;</script> It can also be used in your comparison <c:when test="${someModel.action == INBOX_ACTION}"> which I prefer over putting in a string literal. A: <%@ page import="com.example.Status" %> 1. ${dp.status eq Status.VALID.getStatus()} 2. ${dp.status eq Status.VALID} 3. ${dp.status eq Status.VALID.toString()} * *Put the import at the top, in the JSP page header *If you want to work with the getStatus method, use #1 *If you want to work with the enum element itself, use either #2 or #3 *You can use == instead of eq A:
Now EnumTest you will set in session object using in the servlet or controller class session.setAttribute("enumTest", EnumTest ); In JSP Page <c:if test="${enumTest.statusobj == 'ACTIVE'}"> //TRUE??? THEN PROCESS SOME LOGIC A: I generally consider it bad practice to mix java code into jsps/tag files. Using 'eq' should do the trick : <c:if test="${dp.Status eq 'OLD'}"> ... </c:if> A: I do it this way when there are many points to use... public enum Status { VALID("valid"), OLD("old"); private final String val; Status(String val) { this.val = val; } public String getStatus() { return val; } public static void setRequestAttributes(HttpServletRequest request) { Map<String,String> vals = new HashMap<String,String>(); for (Status val : Status.values()) { vals.put(val.name(), val.value); } request.setAttribute("Status", vals); } } JSP <%@ page import="...Status" %> <% Status.setRequestAttributes(request) %> <c:when test="${dp.status eq Status.VALID}"> ...
{ "language": "en", "url": "https://stackoverflow.com/questions/123598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: How do you build an ASP.NET custom control with a collection property? I'm looking to do something akin to <cstm:MyControl runat="server"> <myItem attr="something" /> <myItem attr="something" /> </cstm:MyControl> What's the bare bones code needed to pull this off? Rick's example shows something akin to <cstm:MyControl runat="server"> <myItems> <cstm:myItem attr="something" /> <cstm:myItem attr="something" /> </myItems> </cstm:MyControl> I'd prefer the more terse syntax if possible. Note: Feel free to suggest a better title or description. Even if you don't have edit rights, I'm glad to edit the entry for the sake of the community. A: Here's a really simple example control that does exactly what you are looking for: namespace TestControl { [ParseChildren(true, DefaultProperty = "Names")] public class MyControl : Control { public MyControl() { this.Names = new List<PersonName>(); } [PersistenceMode(PersistenceMode.InnerDefaultProperty)] public List<PersonName> Names { get; set; } } public class PersonName { public string Name { get; set; } } } And, here is an example usage: <%@ Register Namespace="TestControl" TagPrefix="TestControl" %> <TestControl:MyControl runat="server" ID="MyControl1"> <TestControl:PersonName Name="Chris"></TestControl:PersonName> <TestControl:PersonName Name="John"></TestControl:PersonName> </TestControl:MyControl>
{ "language": "en", "url": "https://stackoverflow.com/questions/123616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: VSTS Code Coverage bug? Has anyone experienced this VSTS Code Coverage "bug?" Do you have any suggestions? I am doing code coverage analysis with Visual Studio, which is generally an easy task now with the tools that are included. However, I have an issue that I can't overcome. Let's say I have assemblies A, B, C, and D and have marked them all for coverage analysis. I run the tests and look at the results and find a report that contains A, B, and C - but not D. I investigate and find that no tests actually execute any code in D (let's say it's the asp.net front end and I don't leverage UI testing yet). Because there are no tests for D, D is missing from the report, and the total code coverage percentage and "blocks not covered" are incorrect. Does anyone know how I can do either of the following? * *Calculate the total "number of blocks" in D so that I can manually adjust the coverage report to be correct? *Get the coverage report to automatically show the number of blocks not covered for assemblies that are instrumented for coverage but are not tested at all? While I do want test coverage to improve, I am analyzing coverage reports saved at historic points in time in the code base. Thus I don't want to create a test that simply executes at least 1 block of code in each assembly and then re-calculate test coverage by running the tests. That would be a pretty time-consuming work-around to something that seems like a simple problem. A: I ran into this once, it is very annoying. In my case, there were a number of dlls not covered, so I ended up estimating blocks/kb for our code base by using the covered dlls information divided by their size. Then of course to get the number of blocks for the uncovered dlls, you simply multiply your average by the size of the dll. This is not the most accurate method but it gets you a quick ballpark, and you can determine your error by calculating your known dlls and comparing against the actual values.
It is helpful if you have a good number of assemblies that are calculated. Of course, you could just do a LOC count (ignoring comments) and figure on a single LOC being roughly equivalent to a block. If I remember correctly that's fairly accurate, and so should get you even closer. The only way I know of to force a report on uncovered assemblies is to actually write a test that loads the assembly (the test doesn't even need to do anything).
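The blocks-per-KB estimate described in the first answer is simple arithmetic; here is a quick sketch of it in Python (all assembly names and figures below are invented for illustration):

```python
# Estimate the block count of an uncovered DLL from the average
# blocks-per-KB of the DLLs that do appear in the coverage report.
covered = {
    # dll name: (blocks reported, size in KB) -- made-up figures
    "A.dll": (1200, 96),
    "B.dll": (800, 64),
    "C.dll": (500, 40),
}

total_blocks = sum(blocks for blocks, _ in covered.values())
total_kb = sum(kb for _, kb in covered.values())
blocks_per_kb = total_blocks / total_kb  # 12.5 with these figures

# D.dll is missing from the report; estimate its blocks from its size.
d_size_kb = 80
estimated_d_blocks = round(blocks_per_kb * d_size_kb)
print(estimated_d_blocks)  # 1000
```

You can gauge the error the same way the answer suggests: apply the ratio back to each known DLL and compare against its actual reported block count.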
{ "language": "en", "url": "https://stackoverflow.com/questions/123619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can you connect to sql server from Excel? I need to create some reports from a SQL Server database for end users to view. Is it possible to hook into SQL Server from Excel? Update The end user will only click on the file to view the reports, asking them to do more is too much to ask! A: Sure -- in Excel 2007 click the "Data" tab, then "Connections", then click "Browse for more" and select "+NewSqlServerConnection.odc" A: Here's the solution which I use: http://mikesknowledgebase.com/pages/SQLServer/RunStoredProcedureFromExcel.htm Basically, it uses a bit of VBA to call a stored procedure, then displays the results in the Excel file. We use this a lot when we want to give our users an ad-hoc report without needing to add extra screens to our ASP.Net app, or redeploy new versions of our application. A: In 2007 you can indeed go under the Data tab and then "Get External Data". You can gather data from a lot of sources, including SQL Server, a webpage and Access. After connecting there's an option to refresh the data: * *every x minutes *when opening the Excel sheet You can even choose to remove the data when closing the Excel sheet. A: Yes, it absolutely is; it depends on what version of Excel you have. In 2007 if you go under the Data tab and then "Get External Data" you will see many options to connect to various data sources, including SQL. A: If you want to ensure that you have NO technical requirements of your end users, an export process is a much better approach than linking directly to the server from the Excel file. You can save the connection information, but there are ways they can mess it up, and if they can't be trusted to configure it, it would most likely be the best bet to extract the data and give them a static copy. A: You can use VBA to connect to a database and import the data. The user will only have to open the file. Your VBA code will do the retrieval and formatting of the data.
A: The simplest and oldest way is to use ODBC, but with VBScript, anything is possible. A: You are probably better off creating a view (or just a query) that presents the data the way you want it, then using DTS (SQL 2000) or SSIS (SQL 2005) to export the information using the Microsoft Excel ODBC driver.
{ "language": "en", "url": "https://stackoverflow.com/questions/123624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: IIS site on UNC share: is it problematic? I am working with a legacy ASP Classic solution which is load balanced (via external hardware) and has an IIS site the home directory of which is a UNC path. I've been told that the following issues with this setup currently exist: * *When using a UNC path as home directory, there is an "index" somewhere in IIS which "caches" up to a certain number of files of certain types, and when the limit, which defaults to 50, has been reached, subsequent requests to pages not in the cache will return 404. *When using a UNC path as home directory, when starting the IIS site, the aforementioned "cache" will start filling, which will bog down IIS until the cache is filled, meaning that huge sites (15,000 .asp files) are unavailable for up to 30 minutes after the IIS site starts. *When using a UNC path as home directory, if more than a certain number of simultaneous requests are made to the site, Windows will reach the "Network BIOS command limit per server", and all requests above the limit will have to wait until IIS "closes the session" to the server. I am told the limit is 100 files and not configurable. Now, all this sounds a bit weird. If I set up a new Windows 2003 server with default settings, and use it to host an ASP Classic application with 15,000 .asp files, using a share on a server as the home directory for the IIS site, will I actually run into these problems? And if so, is there a way to counter them without changing the architecture? (To clarify, the only reason the "load balancing" is important is that load balancing is the reason the files are on a share on a server. If no load balancing was needed, the files could be on the local disk.) A: Yes, it is possible, but yes, it can cause problems. When ASP.NET compiles ASPX, ASCX, and other content pages into assemblies, it creates a lot of FileSystemWatchers in order to monitor the dependencies between them so that when files change, it can recompile.
These eat up NetBIOS resources. Additionally, every time you do a File.Exists or Directory.Exists call, or any other kind of IO to the site's serving path, that increases the demands on the NetBIOS limits as well. It is possible to set the NetBIOS limits through the registry to above their defaults, to a point. For a small site, with relatively few directories and files, you could very successfully run off a UNC share because ASP.NET will continue to run after startup off of its compiled assemblies. However, the more directories and files you add, the more likely problems are to crop up. We tried running a mammoth site (hundreds of directories and ASPX/ASCX files) and it would run fine for a few minutes until enough URLs were accessed that the NetBIOS limits were reached, and then every subsequent page view resulted in an exception. We ended up forced to use a robocopy publishing solution. In the end, you have to test to see if your site is small enough and your NetBIOS settings are high enough to run effectively. I would suggest using a spider on a test site so that you can be sure that everything that could be compiled or accessed is hit at least once. A: I'm not sure about your direct question on the interaction between IIS and UNC, but I would suggest that on a busy site (anything busy enough to require load balancing) you consider something other than a file share. An asp loaded by IIS across a network (i.e. a file share) will suffer negative performance implications (latency). I would suggest using something like robocopy to keep all load balanced servers in sync with a central master. In other words, deploy to a single master server (or single master location), then robocopy the files to each slave in the load balancer's pool. This will not only remove the weird UNC issues you describe, but should also give you a nice performance boost (by removing the network hit when loading asp pages). I would expect pretty heavy performance boosts if you did this.
A: For answer 3 you can change the Network BIOS command limit. It's a pretty easy registry edit fix: http://support.microsoft.com/kb/810886/en-us I have run into that particular issue myself.
{ "language": "en", "url": "https://stackoverflow.com/questions/123628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Any way to do Visual Studio "project only" build from command line? devenv mysolution.sln /build "Release|Win32" /project myproject When building from the command line, it seems I have the option of doing a /build or /rebuild, but no way of saying I want to do "project only" (i.e. not build or rebuild the specified project's dependencies as well). Does anyone know of a way? A: Don't call devenv, use the genericized build tool instead: vcbuild subproject.vcproj "release|win32" A: Depending on the structure of your build system, this may be what you're looking for: msbuild /p:BuildProjectReferences=false project.proj A: MSBuild is what you want MSBuild.exe MyProject.proj /t:build A: According to MSDN How To: Build Specific Targets in Solutions with MSBuild.exe: msbuild foo.sln /t:proj1:Rebuild;folder_of_proj2\proj2:Clean A: Thanks for the answers. I see from a bit of looking into msbuild that it can deal with a .sln file rather than a .vcproj; can this be accomplished that way instead of having to know the location of the .vcproj? Let me take a step back. We have a big solution file and I want a script to do this: * *Do a /build on the whole solution. (Sometimes some projects fail because devenv doesn't do quite as much building as necessary for the amount of change since the last build.) *For each project that fails, do a /rebuild on it. When I get to step 2 all I know is the solution filename and the names (not filenames) of the projects that failed. I could easily grep/awk/whatever the .sln file to map from one to the other, but I'm curious if msbuild offers a way to do it directly. (Ideally I could give msbuild the .sln and the names of all the projects to rebuild in a single command line, as it's a large file and takes a while to load. If that's not possible then the option of manually finding all the project filenames is probably better as loading the solution file every time would be most inefficient.) 
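The name-to-project-file mapping for step 2 ("grep/awk/whatever the .sln file") is easy to script; below is a hedged Python sketch that assumes the typical Project("{guid}") = "Name", "path", ... line format of VS solution files (the solution text and project names here are invented):

```python
import re

# Map project names (as reported in the build output) to their project-file
# paths by scanning the .sln text, so each failed project can be rebuilt
# individually.
PROJECT_LINE = re.compile(r'^Project\("\{[^}]+\}"\)\s*=\s*"([^"]+)",\s*"([^"]+)"')

def project_paths(sln_text, failed_names):
    paths = {}
    for line in sln_text.splitlines():
        m = PROJECT_LINE.match(line.strip())
        if m and m.group(1) in failed_names:
            paths[m.group(1)] = m.group(2)
    return paths

sample_sln = '''
Microsoft Visual Studio Solution File, Format Version 9.00
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "MyProj", "src\\MyProj\\MyProj.vcproj", "{AAA}"
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Other", "src\\Other\\Other.vcproj", "{BBB}"
'''

print(project_paths(sample_sln, {"MyProj"}))
# -> {'MyProj': 'src\\MyProj\\MyProj.vcproj'}
```

Each returned path can then be fed to a per-project devenv /rebuild or msbuild invocation.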
A: The following works well for building a single C++ project in VS2010: call "%VS100COMNTOOLS%"\vsvars32.bat msbuild /detailedsummary /p:Configuration=Debug /p:Platform=x64 /t:build MY_ABSOLUTE_PATH.vcxproj Sadly, you can't simply specify a project name and a solution file to build just that project (unless you add a special configuration to your project files, perhaps).
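For the follow-up above, mapping a failed project's name back to its project file, the Project lines of a .sln file are regular enough to scan directly instead of reaching for grep/awk. A minimal sketch (the class name and the sample solution text in the test are made up for illustration; real .sln files also contain solution-folder entries, which the extension check filters out):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Each project entry in a .sln looks like:
//   Project("{type-guid}") = "Name", "relative\path.vcproj", "{project-guid}"
// so a single regex over the file text recovers the name -> path map.
public class SlnProjectMap {
    private static final Pattern PROJECT_LINE = Pattern.compile(
            "Project\\(\"[^\"]*\"\\)\\s*=\\s*\"([^\"]+)\",\\s*\"([^\"]+)\"");

    public static Map<String, String> parse(String slnText) {
        Map<String, String> map = new LinkedHashMap<>();
        Matcher m = PROJECT_LINE.matcher(slnText);
        while (m.find()) {
            // Solution folders also use Project(...) lines; their "path"
            // is just the folder name, so keep only entries that look
            // like a file with an extension.
            if (m.group(2).contains(".")) {
                map.put(m.group(1), m.group(2));
            }
        }
        return map;
    }
}
```

From there, each failed project name can be turned into a vcbuild or msbuild invocation on its project file without loading the solution again.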
{ "language": "en", "url": "https://stackoverflow.com/questions/123632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How does versioning work with Flex remote objects and AMF? Suppose I use the [RemoteClass] tag to endow a custom Flex class with serialization intelligence. What happens when I need to change my object (add a new field, remove a field, rename a field, etc.)? Is there a design pattern for handling this in an elegant way? A: Your best bet is to do code generation against your backend classes to generate ActionScript counterparts for them. If you generate a base class with all of your object properties and then create a subclass for it which is never regenerated, you can add custom code to the subclass while regenerating only the parts of your class that change. Example: java: public class User { public Long id; public String firstName; public String lastName; } as3: public class UserBase { public var id : Number; public var firstName : String; public var lastName : String; } [Bindable] [RemoteClass(...)] public class User extends UserBase { public function getFullName() : String { return firstName + " " + lastName; } } Check out the Granite Data Services project for Java -> AS3 code generation. http://www.graniteds.org A: Adding or removing generally works. You'll get runtime warnings in your trace about properties either being missing or not found, but any data that is transferred and has a place to go will still get there. You need to keep this in mind while developing, as not all your fields might have valid data. Changing types doesn't work so well and will often result in runtime exceptions. I like to use explicit data transfer objects and not persist the actual data model that's used throughout the app. Then your translation from DTO->Model can take version differences into account.
{ "language": "en", "url": "https://stackoverflow.com/questions/123639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Extending SQL Server full-text index to search through foreign keys I know that a SQL Server full-text index cannot index more than one table. But, I have relationships in tables that I would like to implement full-text indexes on. Take the 3 tables below... Vehicle Veh_ID - int (Primary Key) FK_Atr_VehicleColor - int Veh_Make - nvarchar(20) Veh_Model - nvarchar(50) Veh_LicensePlate - nvarchar(10) Attributes Atr_ID - int (Primary Key) FK_Aty_ID - int Atr_Name - nvarchar(50) AttributeTypes Aty_ID - int (Primary key) Aty_Name - nvarchar(50) The Attributes and AttributeTypes tables hold values that can be used in drop-down lists throughout the application being built. For example, an Attribute Type of "Vehicle Color" with Attributes of "Black", "Blue", "Red", etc... Ok, so the problem comes when a user is trying to search for a "Blue Ford Mustang". So what is the best solution considering that tables like Vehicle will get rather large? Do I create another field in the "Vehicle" table that is "Veh Color" that holds the text value of what is selected in the drop down in addition to "FK Atr VehicleColor"? Or, do I drop "FK Atr VehicleColor" altogether and add "Veh Color"? I can use the text value of "Veh Color" to match against "Atr Name" when the drop down is populated in an update form. With this approach I will have to handle the case where Attributes are dropped from the database. -- Note: could not use underscore outside of code view as everything between two underscores is italicized. A: I believe it's a common practice to have a separate denormalized table specifically for full-text indexing. This table is then updated by triggers or, as it was in our case, by SQL Server's scheduled task. This was SQL Server 2000. In SQL Server you can have an indexed view with a full-text index: http://msdn.microsoft.com/en-us/library/ms187317.aspx. But note that there are many restrictions on indexed views; for instance, you can't index a view that uses OUTER join.
A: You can create a view that pulls in whatever data you need, then apply the full-text index to the view. The view needs to be created with the 'WITH SCHEMABINDING' option, and needs to have a UNIQUE index. CREATE VIEW VehicleSearch WITH SCHEMABINDING AS SELECT v.Veh_ID, v.Veh_Make, v.Veh_Model, v.Veh_LicensePlate, a.Atr_Name as Veh_Color FROM Vehicle v INNER JOIN Attributes a on a.Atr_ID = v.FK_Atr_VehicleColor GO CREATE UNIQUE CLUSTERED INDEX IX_VehicleSearch_Veh_ID ON VehicleSearch ( Veh_ID ASC ) ON [PRIMARY] GO CREATE FULLTEXT INDEX ON VehicleSearch ( Veh_Make LANGUAGE [English], Veh_Model LANGUAGE [English], Veh_Color LANGUAGE [English] ) KEY INDEX IX_VehicleSearch_Veh_ID ON [YourFullTextCatalog] WITH CHANGE_TRACKING AUTO GO A: As I understand it (I've used SQL Server a lot but never full-text indexing), SQL Server 2005 allows you to create full-text indexes against a view. So you could create a view on SELECT Vehicle.Veh_ID, ..., Color.Atr_Name AS ColorName FROM Vehicle LEFT OUTER JOIN Attributes AS Color ON (Vehicle.FK_Atr_VehicleColor = Color.Atr_ID) and then create your full-text index across this view, including 'ColorName' in the index.
{ "language": "en", "url": "https://stackoverflow.com/questions/123648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I share a variable or object between two or more Servlets? I would like to know if there is some way to share a variable or an object between two or more Servlets, I mean some "standard" way. I suppose that this is not a good practice but it is an easier way to build a prototype. I don't know if it depends on the technologies used, but I'll use Tomcat 5.5. I want to share a Vector of objects of a simple class (just public attributes, strings, ints, etc). My intention is to have static data, like in a DB; obviously it will be lost when Tomcat is stopped (it's just for testing). A: Put it in one of the 3 different scopes: * *request - lasts the life of the request *session - lasts the life of the user's session *application - lasts until the application is shut down You can access all of these scopes via the HttpServletRequest variable that is passed to the methods you override from the HttpServlet class. A: I think what you're looking for here is request, session or application data. In a servlet you can add an object as an attribute to the request object, session object or servlet context object: protected void doGet(HttpServletRequest request, HttpServletResponse response) { String shared = "shared"; request.setAttribute("sharedId", shared); // add to request request.getSession().setAttribute("sharedId", shared); // add to session this.getServletConfig().getServletContext().setAttribute("sharedId", shared); // add to application context request.getRequestDispatcher("/URLofOtherServlet").forward(request, response); } If you put it in the request object it will be available to the servlet that is forwarded to until the request is finished: request.getAttribute("sharedId"); If you put it in the session it will be available to all the servlets going forward but the value will be tied to the user: request.getSession().getAttribute("sharedId"); Until the session expires based on inactivity from the user.
Or is reset by you: request.getSession().invalidate(); Or one servlet removes it from scope: request.getSession().removeAttribute("sharedId"); If you put it in the servlet context it will be available while the application is running: this.getServletConfig().getServletContext().getAttribute("sharedId"); Until you remove it: this.getServletConfig().getServletContext().removeAttribute("sharedId"); A: Depends on the scope of the intended use of the data. If the data is only used on a per-user basis, like user login info, page hit count, etc., use the session object (httpServletRequest.getSession().get/setAttribute(String [,Object])) If it is the same data across multiple users (total web page hits, worker threads, etc.), use the ServletContext attributes: servlet.getServletConfig().getServletContext().get/setAttribute(String [,Object]). This will only work within the same war file/web application. Note that this data is not persisted across restarts either. A: Another option, share data between contexts... share-data-between-servlets-on-tomcat <Context path="/myApp1" docBase="myApp1" crossContext="true"/> <Context path="/myApp2" docBase="myApp2" crossContext="true"/> On myApp1: ServletContext sc = getServletContext(); sc.setAttribute("attribute", "value"); On myApp2: ServletContext sc = getServletContext("/myApp1"); String answer = (String)sc.getAttribute("attribute"); A: Couldn't you just put the object in the HttpSession and then refer to it by its attribute name in each of the servlets? e.g: getSession().setAttribute("thing", object); ...then in another servlet: Object obj = getSession().getAttribute("thing"); A: Here's how I do this with Jetty. https://stackoverflow.com/a/46968645/1287091 Uses the server context, where a singleton is written to during startup of an embedded Jetty server and shared among all webapps for the life of the server.
Can also be used to share objects/data between webapps assuming there is only one writer to the context - otherwise you need to be mindful of concurrency.
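Since the question mentions sharing a Vector, and the last answer warns about concurrency: whatever object you park in the ServletContext can be hit by several request threads at once, so it should be thread-safe. A minimal sketch of the kind of holder you might store under a single attribute key (the class name is invented for illustration; the servlet wiring, context.setAttribute("store", store) and back, is exactly as in the answers above):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical in-memory "table" meant to be stored once in the
// ServletContext and read back by any servlet in the same webapp.
public class SharedStore {
    // Synchronized wrapper: individual add/size calls are safe when
    // several request threads touch the store at the same time.
    private final List<String> items =
            Collections.synchronizedList(new ArrayList<>());

    public void add(String item) {
        items.add(item);
    }

    public int size() {
        return items.size();
    }

    // Snapshot copy so callers can iterate without external locking;
    // iterating the synchronized list directly would need a
    // synchronized (items) block around the loop.
    public List<String> snapshot() {
        synchronized (items) {
            return new ArrayList<>(items);
        }
    }
}
```

A ConcurrentHashMap works the same way if the shared data is keyed rather than a plain list.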
{ "language": "en", "url": "https://stackoverflow.com/questions/123657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How to wait for a BackgroundWorker to cancel? Consider a hypothetical method of an object that does stuff for you: public class DoesStuff { BackgroundWorker _worker = new BackgroundWorker(); ... public void CancelDoingStuff() { _worker.CancelAsync(); //todo: Figure out a way to wait for BackgroundWorker to be cancelled. } } How can one wait for a BackgroundWorker to be done? In the past people have tried: while (_worker.IsBusy) { Sleep(100); } But this deadlocks, because IsBusy is not cleared until after the RunWorkerCompleted event is handled, and that event can't get handled until the application goes idle. The application won't go idle until the worker is done. (Plus, it's a busy loop - disgusting.) Others have suggested kludging it into: while (_worker.IsBusy) { Application.DoEvents(); } The problem with that is that Application.DoEvents() causes messages currently in the queue to be processed, which causes re-entrancy problems (.NET isn't re-entrant). I would hope to use some solution involving Event synchronization objects, where the code waits for an event that the worker's RunWorkerCompleted event handler sets. Something like: AutoResetEvent _workerDoneEvent = new AutoResetEvent(false); public void CancelDoingStuff() { _worker.CancelAsync(); _workerDoneEvent.WaitOne(); } private void RunWorkerCompletedEventHandler(object sender, RunWorkerCompletedEventArgs e) { _workerDoneEvent.Set(); } But I'm back to the deadlock: the event handler can't run until the application goes idle, and the application won't go idle because it's waiting for an Event. So how can you wait for a BackgroundWorker to finish? Update: People seem to be confused by this question. They seem to think that I will be using the BackgroundWorker as: BackgroundWorker worker = new BackgroundWorker(); worker.DoWork += MyWork; worker.RunWorkerAsync(); WaitForWorkerToFinish(worker); That is not it, that is not what I'm doing, and that is not what is being asked here.
If that were the case, there would be no point in using a background worker. A: Almost all of you are confused by the question, and are not understanding how a worker is used. Consider a RunWorkerCompleted event handler: private void OnRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { if (!e.Cancelled) { rocketOnPad = false; label1.Text = "Rocket launch complete."; } else { rocketOnPad = true; label1.Text = "Rocket launch aborted."; } worker = null; } And all is good. Now comes a situation where the caller needs to abort the countdown because they need to execute an emergency self-destruct of the rocket. private void BlowUpRocket() { if (worker != null) { worker.CancelAsync(); WaitForWorkerToFinish(worker); worker = null; } StartClaxon(); SelfDestruct(); } And there is also a situation where we need to open the access gates to the rocket, but not while doing a countdown: private void OpenAccessGates() { if (worker != null) { worker.CancelAsync(); WaitForWorkerToFinish(worker); worker = null; } if (!rocketOnPad) DisengageAllGateLatches(); } And finally, we need to de-fuel the rocket, but that's not allowed during a countdown: private void DrainRocket() { if (worker != null) { worker.CancelAsync(); WaitForWorkerToFinish(worker); worker = null; } if (rocketOnPad) OpenFuelValves(); } Without the ability to wait for a worker to cancel, we must move all three methods to the RunWorkerCompleted event handler: private void OnRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { if (!e.Cancelled) { rocketOnPad = false; label1.Text = "Rocket launch complete."; } else { rocketOnPad = true; label1.Text = "Rocket launch aborted."; } worker = null; if (delayedBlowUpRocket) BlowUpRocket(); else if (delayedOpenAccessGates) OpenAccessGates(); else if (delayedDrainRocket) DrainRocket(); } private void BlowUpRocket() { if (worker != null) { delayedBlowUpRocket = true; worker.CancelAsync(); return; } StartClaxon(); SelfDestruct(); } private void OpenAccessGates() {
if (worker != null) { delayedOpenAccessGates = true; worker.CancelAsync(); return; } if (!rocketOnPad) DisengageAllGateLatches(); } private void DrainRocket() { if (worker != null) { delayedDrainRocket = true; worker.CancelAsync(); return; } if (rocketOnPad) OpenFuelValves(); } Now I could write my code like that, but I'm just not gonna. I don't care, I'm just not. A: You can check into the RunWorkerCompletedEventArgs in the RunWorkerCompletedEventHandler to see what the status was: success, cancelled, or an error. private void RunWorkerCompletedEventHandler(object sender, RunWorkerCompletedEventArgs e) { if(e.Cancelled) { Console.WriteLine("The worker was cancelled."); } } Update: To see whether CancelAsync has already been called on your worker, check CancellationPending: if (_worker.CancellationPending) { Console.WriteLine("Cancellation is pending, no need to call CancelAsync again"); } A: You don't wait for the background worker to complete. That pretty much defeats the purpose of launching a separate thread. Instead, you should let your method finish, and move any code that depends on completion to a different place. You let the worker tell you when it's done and call any remaining code then. If you want to wait for something to complete, use a different threading construct that provides a WaitHandle. A: Why can't you just tie into the BackgroundWorker.RunWorkerCompleted event? It's a callback that will "Occur when the background operation has completed, has been canceled, or has raised an exception." A: There is a problem with this response. The UI needs to continue to process messages while you are waiting, otherwise it will not repaint, which will be a problem if your background worker takes a long time to respond to the cancel request. A second flaw is that _resetEvent.Set() will never be called if the worker thread throws an exception - leaving the main thread waiting indefinitely - however this flaw could easily be fixed with a try/finally block.
One way to do this is to display a modal dialog which has a timer that repeatedly checks if the background worker has finished work (or finished cancelling in your case). Once the background worker has finished, the modal dialog returns control to your application. The user can't interact with the UI until this happens. Another method (assuming you have a maximum of one modeless window open) is to set ActiveForm.Enabled = false, then loop on Application.DoEvents() until the background worker has finished cancelling, after which you can set ActiveForm.Enabled = true again. A: If I understand your requirement right, you could do something like this (code not tested, but shows the general idea): private BackgroundWorker worker = new BackgroundWorker(); private AutoResetEvent _resetEvent = new AutoResetEvent(false); public Form1() { InitializeComponent(); worker.DoWork += worker_DoWork; } public void Cancel() { worker.CancelAsync(); _resetEvent.WaitOne(); // will block until _resetEvent.Set() call made } void worker_DoWork(object sender, DoWorkEventArgs e) { while(!worker.CancellationPending) { // do something } _resetEvent.Set(); // signal that worker is done } A: I don't understand why you'd want to wait for a BackgroundWorker to complete; it really seems like the exact opposite of the motivation for the class. However, you could start every method with a call to worker.IsBusy and have them exit if it is running. A: Hm, maybe I am not getting your question right. The BackgroundWorker raises the RunWorkerCompleted event once its 'worker method' (the method/function/sub that handles the BackgroundWorker.DoWork event) is finished, so there is no need for checking if the BW is still running. If you want to stop your worker, check the cancellation pending property inside your 'worker method'. A: The workflow of a BackgroundWorker object basically requires you to handle the RunWorkerCompleted event for both normal execution and user cancellation use cases.
This is why the property RunWorkerCompletedEventArgs.Cancelled exists. Basically, doing this properly requires that you consider your Cancel method to be an asynchronous method in itself. Here's an example: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; using System.ComponentModel; namespace WindowsFormsApplication1 { public class AsyncForm : Form { private Button _startButton; private Label _statusLabel; private Button _stopButton; private MyWorker _worker; public AsyncForm() { var layoutPanel = new TableLayoutPanel(); layoutPanel.Dock = DockStyle.Fill; layoutPanel.ColumnStyles.Add(new ColumnStyle()); layoutPanel.ColumnStyles.Add(new ColumnStyle()); layoutPanel.RowStyles.Add(new RowStyle(SizeType.AutoSize)); layoutPanel.RowStyles.Add(new RowStyle(SizeType.Percent, 100)); _statusLabel = new Label(); _statusLabel.Text = "Idle."; layoutPanel.Controls.Add(_statusLabel, 0, 0); _startButton = new Button(); _startButton.Text = "Start"; _startButton.Click += HandleStartButton; layoutPanel.Controls.Add(_startButton, 0, 1); _stopButton = new Button(); _stopButton.Enabled = false; _stopButton.Text = "Stop"; _stopButton.Click += HandleStopButton; layoutPanel.Controls.Add(_stopButton, 1, 1); this.Controls.Add(layoutPanel); } private void HandleStartButton(object sender, EventArgs e) { _stopButton.Enabled = true; _startButton.Enabled = false; _worker = new MyWorker() { WorkerSupportsCancellation = true }; _worker.RunWorkerCompleted += HandleWorkerCompleted; _worker.RunWorkerAsync(); _statusLabel.Text = "Running..."; } private void HandleStopButton(object sender, EventArgs e) { _worker.CancelAsync(); _statusLabel.Text = "Cancelling..."; } private void HandleWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { if (e.Cancelled) { _statusLabel.Text = "Cancelled!"; } else { _statusLabel.Text = "Completed."; } _stopButton.Enabled = false; _startButton.Enabled = true; } } public class MyWorker : 
BackgroundWorker { protected override void OnDoWork(DoWorkEventArgs e) { base.OnDoWork(e); for (int i = 0; i < 10; i++) { System.Threading.Thread.Sleep(500); if (this.CancellationPending) { e.Cancel = true; e.Result = false; return; } } e.Result = true; } } } If you really really don't want your method to exit, I'd suggest putting a flag like an AutoResetEvent on a derived BackgroundWorker, then override OnRunWorkerCompleted to set the flag. It's still kind of kludgy though; I'd recommend treating the cancel event like an asynchronous method and do whatever it's currently doing in the RunWorkerCompleted handler. A: I'm a little late to the party here (about 4 years) but what about setting up an asynchronous thread that can handle a busy loop without locking the UI, then have the callback from that thread be the confirmation that the BackgroundWorker has finished cancelling? Something like this: class Test : Form { private BackgroundWorker MyWorker = new BackgroundWorker(); public Test() { MyWorker.DoWork += new DoWorkEventHandler(MyWorker_DoWork); } void MyWorker_DoWork(object sender, DoWorkEventArgs e) { for (int i = 0; i < 100; i++) { //Do stuff here System.Threading.Thread.Sleep((new Random()).Next(0, 1000)); //WARN: Artificial latency here if (MyWorker.CancellationPending) { return; } //Bail out if MyWorker is cancelled } } public void CancelWorker() { if (MyWorker != null && MyWorker.IsBusy) { MyWorker.CancelAsync(); System.Threading.ThreadStart WaitThread = new System.Threading.ThreadStart(delegate() { while (MyWorker.IsBusy) { System.Threading.Thread.Sleep(100); } }); WaitThread.BeginInvoke(a => { Invoke((MethodInvoker)delegate() { //Invoke your StuffAfterCancellation call back onto the UI thread StuffAfterCancellation(); }); }, null); } else { StuffAfterCancellation(); } } private void StuffAfterCancellation() { //Things to do after MyWorker is cancelled } } In essence what this does is fire off another thread to run in the background that just waits in 
its busy loop to see if MyWorker has completed. Once MyWorker has finished cancelling, the thread will exit and we can use its AsyncCallback to execute whatever method we need to follow the successful cancellation - it'll work like a pseudo-event. Since this is separate from the UI thread, it will not lock the UI while we wait for MyWorker to finish cancelling. If your intention really is to lock and wait for the cancel then this is useless to you, but if you just want to wait so you can start another process then this works nicely. A: Imports System.Net Imports System.IO Imports System.Text Public Class Form1 Dim f As New Windows.Forms.Form Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click BackgroundWorker1.WorkerReportsProgress = True BackgroundWorker1.RunWorkerAsync() Dim l As New Label l.Text = "Please Wait" f.Controls.Add(l) l.Dock = DockStyle.Fill f.StartPosition = FormStartPosition.CenterScreen f.FormBorderStyle = Windows.Forms.FormBorderStyle.None While BackgroundWorker1.IsBusy f.ShowDialog() End While End Sub Private Sub BackgroundWorker1_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles BackgroundWorker1.DoWork Dim i As Integer For i = 1 To 5 Threading.Thread.Sleep(5000) BackgroundWorker1.ReportProgress((i / 5) * 100) Next End Sub Private Sub BackgroundWorker1_ProgressChanged(ByVal sender As Object, ByVal e As System.ComponentModel.ProgressChangedEventArgs) Handles BackgroundWorker1.ProgressChanged Me.Text = e.ProgressPercentage End Sub Private Sub BackgroundWorker1_RunWorkerCompleted(ByVal sender As Object, ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) Handles BackgroundWorker1.RunWorkerCompleted f.Close() End Sub End Class A: I know this is really late (5 years) but what you are looking for is to use a Thread and a SynchronizationContext.
You are going to have to marshal UI calls back to the UI thread "by hand" rather than let the Framework do it auto-magically. This allows you to use a Thread that you can Wait for if need be. A: Fredrik Kalseth's solution to this problem is the best I've found so far. Other solutions use Application.DoEvents() that can cause problems or simply don't work. Let me cast his solution into a reusable class. Since BackgroundWorker is not sealed, we can derive our class from it: public class BackgroundWorkerEx : BackgroundWorker { private AutoResetEvent _resetEvent = new AutoResetEvent(false); private bool _resetting, _started; private object _lockObject = new object(); public void CancelSync() { bool doReset = false; lock (_lockObject) { if (_started && !_resetting) { _resetting = true; doReset = true; } } if (doReset) { CancelAsync(); _resetEvent.WaitOne(); lock (_lockObject) { _started = false; _resetting = false; } } } protected override void OnDoWork(DoWorkEventArgs e) { lock (_lockObject) { _resetting = false; _started = true; _resetEvent.Reset(); } try { base.OnDoWork(e); } finally { _resetEvent.Set(); } } } With flags and proper locking, we make sure that _resetEvent.WaitOne() really gets called only if some work has been started; otherwise _resetEvent.Set(); might never be called! The try-finally ensures that _resetEvent.Set(); will be called, even if an exception should occur in our DoWork handler. Otherwise the application could freeze forever when calling CancelSync! We would use it like this: BackgroundWorkerEx _worker; void StartWork() { StopWork(); _worker = new BackgroundWorkerEx { WorkerSupportsCancellation = true, WorkerReportsProgress = true }; _worker.DoWork += Worker_DoWork; _worker.ProgressChanged += Worker_ProgressChanged; _worker.RunWorkerAsync(); } void StopWork() { if (_worker != null) { _worker.CancelSync(); // Use our new method.
} } private void Worker_DoWork(object sender, DoWorkEventArgs e) { for (int i = 1; i <= 20; i++) { if (_worker.CancellationPending) { e.Cancel = true; break; } else { // Simulate a time-consuming operation. System.Threading.Thread.Sleep(500); _worker.ReportProgress(5 * i); } } } private void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e) { progressLabel.Text = e.ProgressPercentage.ToString() + "%"; } You can also add a handler to the RunWorkerCompleted event as shown here: BackgroundWorker Class (Microsoft documentation). A: Just wanna say I came here because I need a background worker to wait while I was running an async process while in a loop, my fix was way easier than all this other stuff^^ foreach(DataRow rw in dt.Rows) { //loop code while(!backgroundWorker1.IsBusy) { backgroundWorker1.RunWorkerAsync(); } } Just figured I'd share because this is where I ended up while searching for a solution. Also, this is my first post on stack overflow so if it's bad or anything I'd love critics! :) A: Closing the form closes my open logfile. My background worker writes that logfile, so I can't let MainWin_FormClosing() finish until my background worker terminates. If I don't wait for my background worker to terminate, exceptions happen. Why is this so hard? A simple Thread.Sleep(1500) works, but it delays shutdown (if too long), or causes exceptions (if too short). To shut down right after the background worker terminates, just use a variable. This is working for me: private volatile bool bwRunning = false; ... private void MainWin_FormClosing(Object sender, FormClosingEventArgs e) { ... // Clean house as-needed. bwInstance.CancelAsync(); // Flag background worker to stop. while (bwRunning) Thread.Sleep(100); // Wait for background worker to stop. } // (The form really gets closed now.) ... private void bwBody(object sender, DoWorkEventArgs e) { bwRunning = true; BackgroundWorker bw = sender as BackgroundWorker; ... // Set up (open logfile, etc.)
for (; ; ) // infinite loop { ... if (bw.CancellationPending) break; ... } ... // Tear down (close logfile, etc.) bwRunning = false; } // (bwInstance dies now.) A: You can piggyback off of the RunWorkerCompleted event. Even if you've already added an event handler for _worker, you can add another and they will execute in the order in which they were added. public class DoesStuff { BackgroundWorker _worker = new BackgroundWorker(); ... public void CancelDoingStuff() { _worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler((sender, e) => { // do whatever you want to do when the cancel completes in here! }); _worker.CancelAsync(); } } This could be useful if you have multiple reasons why a cancel may occur, making the logic of a single RunWorkerCompleted handler more complicated than you want. For instance, cancelling when a user tries to close the form: void Form1_FormClosing(object sender, FormClosingEventArgs e) { if (_worker != null) { _worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler((s, args) => this.Close()); _worker.CancelAsync(); e.Cancel = true; } } A: I use an async method and await to wait for the worker to finish its job: public async Task StopAsync() { _worker.CancelAsync(); while (_isBusy) await Task.Delay(1); } and in the DoWork method: public async Task DoWork() { _isBusy = true; while (!_worker.CancellationPending) { // Do something. } _isBusy = false; } You may also encapsulate the while loop in DoWork with try ... catch to set _isBusy to false on exception. Or, simply check _worker.IsBusy in the StopAsync while loop.
Here is an example of a full implementation: class MyBackgroundWorker { private BackgroundWorker _worker; private bool _isBusy; public void Start() { if (_isBusy) throw new InvalidOperationException("Cannot start as a background worker is already running."); InitialiseWorker(); _worker.RunWorkerAsync(); } public async Task StopAsync() { if (!_isBusy) throw new InvalidOperationException("Cannot stop as there is no running background worker."); _worker.CancelAsync(); while (_isBusy) await Task.Delay(1); _worker.Dispose(); } private void InitialiseWorker() { _worker = new BackgroundWorker { WorkerSupportsCancellation = true }; _worker.DoWork += WorkerDoWork; } private void WorkerDoWork(object sender, DoWorkEventArgs e) { _isBusy = true; try { while (!_worker.CancellationPending) { // Do something. } } catch { _isBusy = false; throw; } _isBusy = false; } } To stop the worker and wait for it to run to the end: await myBackgroundWorker.StopAsync(); The problems with this method are: * *You have to use async methods all the way. *await Task.Delay is inaccurate. On my PC, Task.Delay(1) actually waits ~20ms. A: oh man, some of these have gotten ridiculously complex. all you need to do is check the BackgroundWorker.CancellationPending property inside the DoWork handler. you can check it at any time. once it's pending, set e.Cancel = true and bail from the method. // method here private void Worker_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker bw = (sender as BackgroundWorker); // do stuff if(bw.CancellationPending) { e.Cancel = true; return; } // do other stuff }
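For contrast with the WinForms-specific machinery above: the cancel-then-wait idea is simple when no message pump is involved. A sketch of the same cooperative pattern in plain Java (not a translation of any answer; class and member names are invented) - the worker polls a flag, the analogue of CancellationPending, and the caller blocks on join(), the analogue of the WaitHandle the question wishes for. This is deadlock-free only because a plain thread, unlike a WinForms UI thread, has no message queue that the completion callback must be pumped through:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Cooperative cancellation: the worker checks a shared flag between
// units of work, and the canceller waits for the thread to exit.
public class CancellableWorker {
    private final AtomicBoolean cancelRequested = new AtomicBoolean(false);
    private final Thread thread = new Thread(() -> {
        while (!cancelRequested.get()) {
            // Stand-in for one short unit of real work; keeping units
            // short is what makes cancellation responsive.
            Thread.yield();
        }
    });

    public void start() {
        thread.start();
    }

    // Request cancellation and block until the worker has really stopped.
    public void cancelAndWait() {
        cancelRequested.set(true);
        try {
            thread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public boolean isFinished() {
        return !thread.isAlive();
    }
}
```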
{ "language": "en", "url": "https://stackoverflow.com/questions/123661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "126" }
Q: How is OpenID implemented? How would you design and implement OpenID components? (Was "How does OpenId work") I realize this question is somewhat of a duplicate, and yes, I have read the spec and the Wikipedia article. After reading the materials mentioned above, I still don't have a complete picture in my head of how each step in the process is handled. Maybe what's missing is a good workflow diagram for how an implementation of OpenID works. I'm considering incorporating OpenID into one of my applications to accommodate a B2B single-sign-on scenario, and I will probably go with DotNetOpenID instead of trying to implement it myself, but I still want a better grasp of the particulars before I get started. Can anyone recommend books or websites that do a good job of explaining it all? It wouldn't hurt to have an answer that covers the basics here on this site as well. [Edit] I changed the title to be more implementation-specific, since there are obviously plenty of places to get the ten-thousand-foot view. A: This page has a nice flow diagram. I found this link on the OpenID Wiki; you might want to check there for more resources. A: I recommend Joseph Smarr's Recipe for OpenID-Enabling Your Site. I haven't read the DotNetOpenID docs, but I would hope whatever implementation you choose would also have some overview documentation and/or examples to illustrate usage of the API. A: Check out Security Now podcast, episode 95. (Actually audio) A: Jeff has a great article on OpenID where he shares his experiences: OpenID: Does The World Really Need Yet Another Username and Password? There are some links to tutorials on the official OpenID site: http://openid.net/developers/ You can get a nice login-control for OpenID (which also is used here on stackoverflow) here: http://www.idselector.com/ A: Also related: the super-famous talk by Dick Hardt on Identity 2.0. I suppose almost everyone has watched it, but if you haven't it is a must-see.
It is more about the reasoning behind the need for things like OpenID than about their implementation, though.
{ "language": "en", "url": "https://stackoverflow.com/questions/123671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Exporting tab-delimited files in SSRS 2005 In this MSDN article, MS explains how to specify other delimiters besides commas for csv-type exports from SSRS 2005, however, literal tab characters are stripped by the config file parser, and it doesn't appear that MS has provided a workaround. This entry on Microsoft Connect seems to confirm this. Has anyone developed a way to export tab-delimited files from SSRS 2005? Or perhaps developed an open-source custom renderer to get the job done? Note: I've heard of manually appending &rc:FieldDelimiter=%09 via URL access, but that's not an acceptable workaround for my users and doesn't appear to work anyways. A: In case anyone needs it this is working very well for me. <Extension Name="Tabs" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering"> <OverrideNames> <Name Language="en-US">Tab-delimited</Name> </OverrideNames> <Configuration> <DeviceInfo> <OutputFormat>TXT</OutputFormat> <Encoding>ASCII</Encoding> <FieldDelimiter>&#9;</FieldDelimiter> <!-- or as this --> <!-- <FieldDelimiter xml:space="preserve">[TAB]</FieldDelimiter> --> <FileExtension>txt</FileExtension> </DeviceInfo> </Configuration> </Extension> A: I used a select query to format the data and BCP to extract the data out into a file. In my case I encapsulated it all in a stored procedure and scheduled it using the SQL Agent to drop files at certain times. 
The basic coding is similar to: use tempdb go create view vw_bcpMasterSysobjects as select name = '"' + name + '"' , crdate = '"' + convert(varchar(8), crdate, 112) + '"' , crtime = '"' + convert(varchar(8), crdate, 108) + '"' from master..sysobjects go declare @sql varchar(8000) select @sql = 'bcp "select * from tempdb..vw_bcpMasterSysobjects order by crdate desc, crtime desc" queryout c:\bcp\sysobjects.txt -c -t, -T -S' + @@servername exec master..xp_cmdshell @sql Please have a look at the excellent post creating-csv-files-using-bcp-and-stored-procedures. A: My current workaround is to add a custom CSV extension as such: <Extension Name="Tabs" Type="Microsoft.ReportingServices.Rendering.CsvRenderer.CsvReport,Microsoft.ReportingServices.CsvRendering"> <OverrideNames> <Name Language="en-US">Tab-delimited (requires patch)</Name> </OverrideNames> <Configuration> <DeviceInfo> <Encoding>ASCII</Encoding> <FieldDelimiter>REPLACE_WITH_TAB</FieldDelimiter> <Extension>txt</Extension> </DeviceInfo> </Configuration> </Extension> ...you can see I'm using the text "REPLACE_WITH_TAB" as my field delimiter, and then I use a simple platform-independent Perl script to perform a sed-like fix: # all .txt files in the working directory @files = <*.txt>; foreach $file (@files) { $old = $file; $new = "$file.temp"; open OLD, "<", $old or die $!; open NEW, ">", $new or die $!; while (my $line = <OLD>) { # SSRS 2005 SP2 can't output tab-delimited files $line =~ s/REPLACE_WITH_TAB/\t/g; print NEW $line; } close OLD or die $!; close NEW or die $!; rename($old, "$old.orig"); rename($new, $old); } This is definitely a hack, but it gets the job done in a fairly non-invasive manner. It only requires: * *Perl installed on the user's machine *User's ability to drag the .pl script to the directory of .txt files *User's ability to double-click the .pl script A: Call me Mr Silly but wouldn't it be simpler to have XML returned from a stored proc or a SQL statement? 
An XSLT transformation to CSV is trivial. Or you could write an equally trivial ASP.NET page that obtains the data using ADO.NET, clears the output stream, sets the mime type to text/csv and writes CSV to it. Oops, I see you want a delimiter other than comma. But both of the above solutions can still be applied. If you go the ASP way you could have a parameter page that lets them pick the delimiter of their choice.
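A note on the Perl workaround above: the script is just doing a search-and-replace, so on machines that already have GNU sed the same placeholder fix needs no Perl at all. A minimal sketch (assumes GNU sed, whose `s///` replacement expands `\t` to a tab; BSD/macOS sed does not):

```shell
# Demo input: a line the report server would emit with the custom
# "REPLACE_WITH_TAB" CSV extension configured above.
printf 'Header1REPLACE_WITH_TABHeader2\nval1REPLACE_WITH_TABval2\n' > sample.txt

# Swap the placeholder for a real tab in every .txt file,
# keeping a .orig backup of each -- mirroring the Perl script.
for f in *.txt; do
  cp "$f" "$f.orig"
  sed -i 's/REPLACE_WITH_TAB/\t/g' "$f"
done
```

As with the Perl version, this only post-processes the exported files; the custom extension entry in the report server config stays the same.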
{ "language": "en", "url": "https://stackoverflow.com/questions/123672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Should I be more concerned with coupling between packages or between units of distribution? I have been looking at metrics for coupling and have also looked at DSMs. One of the tools I've been using looks at coupling between 'modules' with a module being a unit of distribution (in this case a .net assembly). I feel that I should be more interested in looking at coupling between packages (or namespaces) than with units of distribution. Should I be more concerned with coupling between packages/namespaces (ensure that abstractions only depend on abstractions, concrete types depend on abstractions and there are no cycles in the dependencies so that refactoring and extending is easy) or should I be concerned with whether I can deploy new versions without needing to update unchanged units of distribution? What does anyone else measure? For what it's worth, my gut feel is that if I focus on the package/namespace coupling then the unit of distribution coupling will come for free or at least be easier. A: First, it's easy to go overboard looking at dependencies and coupling. Make sure you aren't overcomplicating it. With that disclaimer out of the way, here's what I suggest. There are really 3 different views to dependency/coupling management: 1) physical structure (i.e. assembly dependencies) 2) logical structure (i.e. namespace dependencies) 3) implementation structure (i.e. class dependencies) For large apps, you will need to at least examine all 3, but you can usually prioritize. For client deployed apps, number 1 can be very important (i.e. for things like plug-ins). For apps deployed inside the enterprise (i.e. asp.net), item #1 usually turns out to be not so important (excluding frameworks reused across multiple apps). You can usually deploy the whole app easily enough not to take the overhead of a complicated structure for #1. Item #2 tends to be more of a maintainability issue. Know your layer boundaries and their relationship to namespaces (i.e. 
are you doing 1 layer per namespace or are you packaged differently at the logical level). Sometimes tools can help you enforce your layer boundaries by looking at the logical dependency structure. Item #3 is really about doing good class design. Every good developer should put forth a pretty good amount of effort into ensuring he is only taking on the proper dependencies in his classes. This is easier said than done, and is typically a skill that has to be acquired over time. To get a bit closer to the heart of your question, item #1 is really about how the projects are laid out in the VS solution. So this isn't an item to measure. It's more of something you set up at the beginning and let run. Item #2 is something you might use a tool to check during builds to see if the developers have broken any rules. It's more of a check than a measure really. Item #3 is really the one you'd want to take a good look at measuring. The classes in your codebase which have a high amount of coupling are going to be pain points down the road, so ensure the quality on those guys. Also, measuring at this level allows you to have some insight into the quality (overall) of the codebase as it's evolved. In addition, it can give you a red flag if someone checks some really raunchy code into your codebase. So, if you want to prioritize, take a quick look at #1 and #2. Know what they should look like. But for most apps, item #3 should be taking the most time. This answer, of course, excludes huge frameworks (like the .NET BCL). Those babies need very careful attention to #1. :-) Otherwise, you end up with problems like this: "Current versions of the .NET Framework include a variety of GUI-based libraries that wouldn't work properly in Server Core" http://www.winsupersite.com/showcase/win2008_ntk.asp Where you can't run .NET on a GUI-less install of Windows Server 2008 because the framework takes dependencies on the GUI libraries... One final thing. 
Make sure you are familiar with the principles behind good dependency/coupling management. You can find a nice list here: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod A: Coupling and dependency cycles between units of distribution are more "fatal" because they can make it really difficult to deploy your program - and sometimes even to compile it. You are mostly right: a good top-level design that divides the code into logical packages with clear, predefined dependencies will get you most of the way; the only thing missing is correct separation of the packages into units of distribution.
{ "language": "en", "url": "https://stackoverflow.com/questions/123687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can a modeless VB6 application do cleanup when the application is shutting down? A VB6 application is using the Interop Forms Toolkit to work with forms written in .NET. The documentation for the toolkit advises calling a method on the toolkit to notify it when the VB6 application is shutting down. The VB6 application uses a Sub Main procedure that loads a splash screen, then displays several modeless forms. When Sub Main completes, the application is still running. How can the application detect that it is shutting down and call the cleanup method on the Toolkit? A: In a module (probably the same one that contains Sub Main), create a public sub (e.g. AppCleanUp) that will hold your cleanup code. Add a class to your project (e.g. clsAppCleanup). In this class, add code in the Class_Terminate event handler that calls the sub you created in the previous step. In a module (probably the same one that contains Sub Main), define a variable of clsAppCleanup. In Sub Main, instantiate the clsAppCleanup. When the app is shutting down, the Terminate event on the class will cause the cleanup code to run. A: It's been a while since I wrote in VB6 but if I remember correctly you can use the Unload event to call your cleanup code (it's similar to the closing event in .net). You can also check that there are no other forms in the VB6 app still running. A: Create a module that contains a FormCount variable. This variable will be shared by all forms in your application. Increment the FormCount variable in every form's Form_Initialize method. Decrement FormCount in every form's Form_Terminate method. When FormCount drops back to 0, you can notify your form toolkit that all of the forms have been unloaded. You won't have to worry about multi-threading issues because VB6 creates single-threaded applications, so one form's Initialize (or Terminate) method will run to completion before any others begin execution.
{ "language": "en", "url": "https://stackoverflow.com/questions/123688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What CLR/.NET bytecode tools exist? I'm well aware of Java tools for manipulating, generating, decompiling JVM bytecode (ASM, cglib, jad, etc). What similar tools exist for the CLR bytecode? Do people do bytecode manipulation for the CLR? A: Mono.Cecil is a great tool like ASM. It's a subproject of Mono, and totally open source. It even provides better features than System.Reflection. A: ILDASM and Reflector come to mind. A: Bytecode is a binary format. .NET assemblies work pretty differently in terms of how they store the execution instructions. Instead of compiling down to a bytecode-like structure, .NET languages are compiled into an Intermediate Language (in fact, it's called just that--IL). This is a human-readable language that looks sorta like an object-oriented version of assembler. So in terms of examining or manipulating the IL for individual assemblies, tools like Reflector and ILDASM allow you to conveniently view the IL for any assembly. Manipulation is a bit different; I'd suggest taking a look at some of the AOP tools in the .NET space. I'd also suggest taking a look at Phoenix, which is a compiler project that MS has in the works. It has some really cool post-compile manipulation features. If you want to know more about the .NET AOP tools, I'd suggest opening another question (that's a whole other can of worms). There are also several books that will teach you the ins and outs of IL. It's not a very complicated language to learn. A: Reflector is always good, but Mono.Cecil is the best tool you can possibly ask for overall. It's invaluable for manipulating CIL in any way. A: NDepend allows you to do static analysis of .NET assemblies (code metrics, dependency analysis, etc.). NDepend is very useful to get an overview of the structure of your .NET assemblies using dependency matrix, dependency graphs and treemap metrics visualizations. 
It is also integrated with Reflector: for example, you can detect the important types and methods in your assemblies using a Type/Method Rank metric (a code metric similar to Google PageRank), and jump directly from NDepend to Reflector to get the disassembled code in C#, VB.NET, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/123690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Add ellipses in text I've got a Label with a user-selected directory path. Of course some paths are longer than others. I'm using a Resizer on the control the Label lives in, and would love it if I could have variable eliding of the path. c:\very\long\path\to\a\filename.txt collapsing to c:...\filename.txt or c:\very...\filename.txt. You get the picture - bigger window gives more info, shrink it down and you still get the important parts of the path. I'd love it if I didn't have to have a custom control, but I can live with it. Custom Text Wrapping in WPF seems like it might do the job, but I'm hoping for something simpler. EDIT Sorry, I meant to convey that I want the eliding to vary based on the width of the Label. A: That example you gave is for non-rectangular containers. If you don't need that you can use a Value Converter. If it's bigger than the label, you put ellipses: Not tested example: class EllipsisConverter : IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { string path = (string)value; if (path.Length > 100) { return path.Substring(0, 100) + "..."; }else{ return path; } } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion }
{ "language": "en", "url": "https://stackoverflow.com/questions/123714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PHP Check If a Static Class is Declared How can I check to see if a static class has been declared? e.g. given the class class bob { function yippie() { echo "skippie"; } } later in the code, how do I check: if(is_a_valid_static_object(bob)) { bob::yippie(); } so I don't get: Fatal error: Class 'bob' not found in file.php on line 3 A: bool class_exists( string $class_name [, bool $autoload ]) This function checks whether or not the given class has been defined. A: You can also check for existence of a specific method, even without instantiating the class echo method_exists( 'bob', 'yippie' ) ? 'yes' : 'no'; If you want to go one step further and verify that "yippie" is actually static, use the Reflection API (PHP5 only) try { $method = new ReflectionMethod( 'bob::yippie' ); if ( $method->isStatic() ) { // verified that bob::yippie is defined AND static, proceed } } catch ( ReflectionException $e ) { // method does not exist echo $e->getMessage(); } or, you could combine the two approaches if ( method_exists( 'bob', 'yippie' ) ) { $method = new ReflectionMethod( 'bob::yippie' ); if ( $method->isStatic() ) { // verified that bob::yippie is defined AND static, proceed } }
{ "language": "en", "url": "https://stackoverflow.com/questions/123718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: 401 response code for json requests with ASP.NET MVC How to disable standard ASP.NET handling of 401 response code (redirecting to login page) for AJAX/JSON requests? For web-pages it's okay, but for AJAX I need to get the right 401 error code instead of a good-looking 302/200 for the login page. Update: There are several solutions from Phil Haack, PM of ASP.NET MVC - http://haacked.com/archive/2011/10/04/prevent-forms-authentication-login-page-redirect-when-you-donrsquot-want.aspx A: I wanted both Forms authentication and to return a 401 for Ajax requests that were not authenticated. In the end, I created a custom AuthorizeAttribute and decorated the controller methods. (This is on .Net 4.5) //web.config <authentication mode="Forms"> </authentication> //controller [Authorize(Roles = "Administrator,User"), Response302to401] [AcceptVerbs("Get")] public async Task<JsonResult> GetDocuments() { string requestUri = User.Identity.Name.ToLower() + "/document"; RequestKeyHttpClient<IEnumerable<DocumentModel>, string> client = new RequestKeyHttpClient<IEnumerable<DocumentModel>, string>(requestUri); var documents = await client.GetManyAsync<IEnumerable<DocumentModel>>(); return Json(documents, JsonRequestBehavior.AllowGet); } //authorizeAttribute public class Response302to401 : AuthorizeAttribute { protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext) { if (!filterContext.HttpContext.User.Identity.IsAuthenticated) { if (filterContext.HttpContext.Request.IsAjaxRequest()) { filterContext.Result = new JsonResult { Data = new { Message = "Your session has died a terrible and gruesome death" }, JsonRequestBehavior = JsonRequestBehavior.AllowGet }; filterContext.HttpContext.Response.StatusCode = 401; filterContext.HttpContext.Response.StatusDescription = "Humans and robots must authenticate"; filterContext.HttpContext.Response.SuppressFormsAuthenticationRedirect = true; } } //base.HandleUnauthorizedRequest(filterContext); } } A: In classic ASP.NET you get 
a 401 http response code when calling a WebMethod with Ajax. I hope they'll change it in future versions of ASP.NET MVC. Right now I'm using this hack: protected void Application_EndRequest() { if (Context.Response.StatusCode == 302 && Context.Request.Headers["X-Requested-With"] == "XMLHttpRequest") { Context.Response.Clear(); Context.Response.StatusCode = 401; } } A: You could also use the Global.asax to interrupt this process with something like this: protected void Application_PreSendRequestHeaders(object sender, EventArgs e) { if (Response.StatusCode == 401) { Response.Clear(); Response.Redirect(Response.ApplyAppPathModifier("~/Login.aspx")); return; } } A: I don't see why we have to modify the authentication mode or the authentication tag like the current answer says. Following the idea of @TimothyLeeRussell (thanks by the way), I created a customized Authorize attribute (the problem with the one of @TimothyLeeRussell is that an exception is thrown because he tries to change the filterContext.Result and that generates an HttpException, and after removing that part, even with filterContext.HttpContext.Response.StatusCode = 401, the response code was always 200 OK). So I finally resolved the problem by ending the response after the changes. 
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)] public class BetterAuthorize : AuthorizeAttribute { protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext) { if (filterContext.HttpContext.Request.IsAjaxRequest()) { //Set the response status code to 401 filterContext.HttpContext.Response.StatusCode = (int)HttpStatusCode.Unauthorized; filterContext.HttpContext.Response.StatusDescription = "Humans and robots must authenticate"; filterContext.HttpContext.Response.SuppressFormsAuthenticationRedirect = true; filterContext.HttpContext.Response.End(); } else base.HandleUnauthorizedRequest(filterContext); } } A: You can call this method inside your action, HttpContext.Response.End(); Example public async Task<JsonResult> Return401() { HttpContext.Response.StatusCode = (int)HttpStatusCode.Unauthorized; HttpContext.Response.End(); return Json("Unauthorized", JsonRequestBehavior.AllowGet); } From MSDN: The End method causes the Web server to stop processing the script and return the current result. The remaining contents of the file are not processed. A: The ASP.NET runtime is developed so that it always will redirect the user if the HttpResponse.StatusCode is set to 401, but only if the <authentication /> section of the Web.config is found. Removing the authentication section will require you to implement the redirection to the login page in your attribute, but this shouldn't be a big deal. A: You could choose to create a custom FilterAttribute implementing the IAuthorizationFilter interface. In this attribute you add logic to determine if the request is supposed to return JSON. If so, you can return an empty JSON result (or do whatever you like) when the user isn't signed in. For other responses you would just redirect the user as always. Even better, you could just override the OnAuthorization of the AuthorizeAttribute class so you don't have to reinvent the wheel. 
Add the logic I mentioned above and intercept if the filterContext.Cancel is true (the filterContext.Result will be set to an instance of the HttpUnauthorizedResult class). Read more about "Filters in ASP.NET MVC CodePlex Preview 4" on Phil Haack's blog. It also applies to the latest preview.
{ "language": "en", "url": "https://stackoverflow.com/questions/123726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How do I remove code duplication between similar const and non-const member functions? Let's say I have the following class X where I want to return access to an internal member: class Z { // details }; class X { std::vector<Z> vecZ; public: Z& Z(size_t index) { // massive amounts of code for validating index Z& ret = vecZ[index]; // even more code for determining that the Z instance // at index is *exactly* the right sort of Z (a process // which involves calculating leap years in which // religious holidays fall on Tuesdays for // the next thousand years or so) return ret; } const Z& Z(size_t index) const { // identical to non-const X::Z(), except printed in // a lighter shade of gray since // we're running low on toner by this point } }; The two member functions X::Z() and X::Z() const have identical code inside the braces. This is duplicate code and can cause maintenance problems for long functions with complex logic. Is there a way to avoid this code duplication? A: C++17 has updated the best answer for this question: T const & f() const { return something_complicated(); } T & f() { return const_cast<T &>(std::as_const(*this).f()); } This has the advantages that it: * *Is obvious what is going on *Has minimal code overhead -- it fits in a single line *Is hard to get wrong (can only cast away volatile by accident, but volatile is a rare qualifier) If you want to go the full deduction route then that can be accomplished by having a helper function template<typename T> constexpr T & as_mutable(T const & value) noexcept { return const_cast<T &>(value); } template<typename T> constexpr T * as_mutable(T const * value) noexcept { return const_cast<T *>(value); } template<typename T> constexpr T * as_mutable(T * value) noexcept { return value; } template<typename T> void as_mutable(T const &&) = delete; Now you can't even mess up volatile, and the usage looks like decltype(auto) f() const { return something_complicated(); } decltype(auto) f() { return 
as_mutable(std::as_const(*this).f()); } A: You could also solve this with templates. This solution is slightly ugly (but the ugliness is hidden in the .cpp file) but it does provide compiler checking of constness, and no code duplication. .h file: #include <vector> class Z { // details }; class X { std::vector<Z> vecZ; public: const std::vector<Z>& GetVector() const { return vecZ; } std::vector<Z>& GetVector() { return vecZ; } Z& GetZ( size_t index ); const Z& GetZ( size_t index ) const; }; .cpp file: #include "constnonconst.h" template< class ParentPtr, class Child > Child& GetZImpl( ParentPtr parent, size_t index ) { // ... massive amounts of code ... // Note you may only use methods of X here that are // available in both const and non-const varieties. Child& ret = parent->GetVector()[index]; // ... even more code ... return ret; } Z& X::GetZ( size_t index ) { return GetZImpl< X*, Z >( this, index ); } const Z& X::GetZ( size_t index ) const { return GetZImpl< const X*, const Z >( this, index ); } The main disadvantage I can see is that because all the complex implementation of the method is in a global function, you either need to get hold of the members of X using public methods like GetVector() above (of which there always need to be a const and non-const version) or you could make this function a friend. But I don't like friends. [Edit: removed unneeded include of cstdio added during testing.] A: Yes, it is possible to avoid the code duplication. You need to use the const member function to have the logic and have the non-const member function call the const member function and re-cast the return value to a non-const reference (or pointer if the functions returns a pointer): class X { std::vector<Z> vecZ; public: const Z& z(size_t index) const { // same really-really-really long access // and checking code as in OP // ... return vecZ[index]; } Z& z(size_t index) { // One line. One ugly, ugly line - but just one line! 
return const_cast<Z&>( static_cast<const X&>(*this).z(index) ); } #if 0 // A slightly less-ugly version Z& Z(size_t index) { // Two lines -- one cast. This is slightly less ugly but takes an extra line. const X& constMe = *this; return const_cast<Z&>( constMe.z(index) ); } #endif }; NOTE: It is important that you do NOT put the logic in the non-const function and have the const-function call the non-const function -- it may result in undefined behavior. The reason is that a constant class instance gets cast as a non-constant instance. The non-const member function may accidentally modify the class, which the C++ standard states will result in undefined behavior. A: For those (like me) who * *use c++17 *want to add the least amount of boilerplate/repetition and *don't mind using macros (while waiting for meta-classes...), here is another take: #include <utility> #include <type_traits> template <typename T> struct NonConst; template <typename T> struct NonConst<T const&> {using type = T&;}; template <typename T> struct NonConst<T const*> {using type = T*;}; #define NON_CONST(func) \ template <typename... T> auto func(T&&... a) \ -> typename NonConst<decltype(func(std::forward<T>(a)...))>::type \ { \ return const_cast<decltype(func(std::forward<T>(a)...))>( \ std::as_const(*this).func(std::forward<T>(a)...)); \ } It is basically a mix of the answers from @Pait, @DavidStone and @sh1 (EDIT: and an improvement from @cdhowie). What it adds to the table is that you get away with only one extra line of code which simply names the function (but no argument or return type duplication): class X { const Z& get(size_t index) const { ... } NON_CONST(get) }; Note: gcc fails to compile this prior to 8.1, clang-5 and upwards as well as MSVC-19 are happy (according to the compiler explorer). A: If you don't like const casting, I use this C++17 version of the template static helper function suggested by another answer, with an optional SFINAE test. 
#include <type_traits> #define REQUIRES(...) class = std::enable_if_t<(__VA_ARGS__)> #define REQUIRES_CV_OF(A,B) REQUIRES( std::is_same_v< std::remove_cv_t< A >, B > ) class Foobar { private: int something; template<class FOOBAR, REQUIRES_CV_OF(FOOBAR, Foobar)> static auto& _getSomething(FOOBAR& self, int index) { // big, non-trivial chunk of code... return self.something; } public: auto& getSomething(int index) { return _getSomething(*this, index); } auto& getSomething(int index) const { return _getSomething(*this, index); } }; Full version: https://godbolt.org/z/mMK4r3 A: While most of the answers here suggest using a const_cast, CppCoreGuidelines have a section about that: Instead, prefer to share implementations. Normally, you can just have the non-const function call the const function. However, when there is complex logic this can lead to the following pattern that still resorts to a const_cast: class Foo { public: // not great, non-const calls const version but resorts to const_cast Bar& get_bar() { return const_cast<Bar&>(static_cast<const Foo&>(*this).get_bar()); } const Bar& get_bar() const { /* the complex logic around getting a const reference to my_bar */ } private: Bar my_bar; }; Although this pattern is safe when applied correctly, because the caller must have had a non-const object to begin with, it's not ideal because the safety is hard to enforce automatically as a checker rule. Instead, prefer to put the common code in a common helper function -- and make it a template so that it deduces const. This doesn't use any const_cast at all: class Foo { public: // good Bar& get_bar() { return get_bar_impl(*this); } const Bar& get_bar() const { return get_bar_impl(*this); } private: Bar my_bar; template<class T> // good, deduces whether T is const or non-const static auto& get_bar_impl(T& t) { /* the complex logic around getting a possibly-const reference to my_bar */ } }; Note: Don't do large non-dependent work inside a template, which leads to code bloat. 
For example, a further improvement would be if all or part of get_bar_impl can be non-dependent and factored out into a common non-template function, for a potentially big reduction in code size. A: I think Scott Meyers' solution can be improved in C++11 by using a template helper function. This makes the intent much more obvious and can be reused for many other getters. template <typename T> struct NonConst {typedef T type;}; template <typename T> struct NonConst<T const> {typedef T type;}; //by value template <typename T> struct NonConst<T const&> {typedef T& type;}; //by reference template <typename T> struct NonConst<T const*> {typedef T* type;}; //by pointer template <typename T> struct NonConst<T const&&> {typedef T&& type;}; //by rvalue-reference template<typename TConstReturn, class TObj, typename... TArgs> typename NonConst<TConstReturn>::type likeConstVersion( TObj const* obj, TConstReturn (TObj::* memFun)(TArgs...) const, TArgs&&... args) { return const_cast<typename NonConst<TConstReturn>::type>( (obj->*memFun)(std::forward<TArgs>(args)...)); } This helper function can be used the following way. struct T { int arr[100]; int const& getElement(size_t i) const{ return arr[i]; } int& getElement(size_t i) { return likeConstVersion(this, &T::getElement, i); } }; The first argument is always the this-pointer. The second is the pointer to the member function to call. After that an arbitrary amount of additional arguments can be passed so that they can be forwarded to the function. This needs C++11 because of the variadic templates. A: Nice question and nice answers. 
I have another solution, that uses no casts: class X { private: std::vector<Z> v; template<typename InstanceType> static auto get(InstanceType& instance, std::size_t i) -> decltype(instance.get(i)) { // massive amounts of code for validating index // the instance variable has to be used to access class members return instance.v[i]; } public: const Z& get(std::size_t i) const { return get(*this, i); } Z& get(std::size_t i) { return get(*this, i); } }; However, it has the ugliness of requiring a static member and the need of using the instance variable inside it. I did not consider all the possible (negative) implications of this solution. Please let me know if any. A: How about moving the logic into a private method, and only doing the "get the reference and return" stuff inside the getters? Actually, I would be fairly confused about the static and const casts inside a simple getter function, and I'd consider that ugly except for extremely rare circumstances! A: A bit more verbose than Meyers, but I might do this: class X { private: // This method MUST NOT be called except from boilerplate accessors. Z &_getZ(size_t index) const { return something; } // boilerplate accessors public: Z &getZ(size_t index) { return _getZ(index); } const Z &getZ(size_t index) const { return _getZ(index); } }; The private method has the undesirable property that it returns a non-const Z& for a const instance, which is why it's private. Private methods may break invariants of the external interface (in this case the desired invariant is "a const object cannot be modified via references obtained through it to objects it has-a"). Note that the comments are part of the pattern - _getZ's interface specifies that it is never valid to call it (aside from the accessors, obviously): there's no conceivable benefit to doing so anyway, because it's 1 more character to type and won't result in smaller or faster code. 
Calling the method is equivalent to calling one of the accessors with a const_cast, and you wouldn't want to do that either. If you're worried about making errors obvious (and that's a fair goal), then call it const_cast_getZ instead of _getZ. By the way, I appreciate Meyers's solution. I have no philosophical objection to it. Personally, though, I prefer a tiny bit of controlled repetition, and a private method that must only be called in certain tightly-controlled circumstances, over a method that looks like line noise. Pick your poison and stick with it. [Edit: Kevin has rightly pointed out that _getZ might want to call a further method (say generateZ) which is const-specialised in the same way getZ is. In this case, _getZ would see a const Z& and have to const_cast it before return. That's still safe, since the boilerplate accessor polices everything, but it's not outstandingly obvious that it's safe. Furthermore, if you do that and then later change generateZ to always return const, then you also need to change getZ to always return const, but the compiler won't tell you that you do. That latter point about the compiler is also true of Meyers's recommended pattern, but the first point about a non-obvious const_cast isn't. So on balance I think that if _getZ turns out to need a const_cast for its return value, then this pattern loses a lot of its value over Meyers's. Since it also suffers disadvantages compared to Meyers's, I think I would switch to his in that situation. Refactoring from one to the other is easy -- it doesn't affect any other valid code in the class, since only invalid code and the boilerplate calls _getZ.] A: For a detailed explanation, please see the heading "Avoid Duplication in const and Non-const Member Function," on p. 23, in Item 3 "Use const whenever possible," in Effective C++, 3d ed by Scott Meyers, ISBN-13: 9780321334879. 
Here's Meyers' solution (simplified): struct C { const char & get() const { return c; } char & get() { return const_cast<char &>(static_cast<const C &>(*this).get()); } char c; }; The two casts and function call may be ugly, but it's correct in a non-const method as that implies the object was not const to begin with. (Meyers has a thorough discussion of this.) A: I'd suggest a private helper static function template, like this: class X { std::vector<Z> vecZ; // ReturnType is explicitly 'Z&' or 'const Z&' // ThisType is deduced to be 'X' or 'const X' template <typename ReturnType, typename ThisType> static ReturnType Z_impl(ThisType& self, size_t index) { // massive amounts of code for validating index ReturnType ret = self.vecZ[index]; // even more code for determining, blah, blah... return ret; } public: Z& Z(size_t index) { return Z_impl<Z&>(*this, index); } const Z& Z(size_t index) const { return Z_impl<const Z&>(*this, index); } }; A: Is it cheating to use the preprocessor? struct A { #define GETTER_CORE_CODE \ /* line 1 of getter code */ \ /* line 2 of getter code */ \ /* .....etc............. */ \ /* line n of getter code */ // ^ NOTE: line continuation char '\' on all lines but the last B& get() { GETTER_CORE_CODE } const B& get() const { GETTER_CORE_CODE } #undef GETTER_CORE_CODE }; It's not as fancy as templates or casts, but it does make your intent ("these two functions are to be identical") pretty explicit. A: It's surprising to me that there are so many different answers, yet almost all rely on heavy template magic. Templates are powerful, but sometimes macros beat them in conciseness. Maximum versatility is often achieved by combining both. I wrote a macro FROM_CONST_OVERLOAD() which can be placed in the non-const function to invoke the const function. 
Example usage: class MyClass { private: std::vector<std::string> data = {"str", "x"}; public: // Works for references const std::string& GetRef(std::size_t index) const { return data[index]; } std::string& GetRef(std::size_t index) { return FROM_CONST_OVERLOAD( GetRef(index) ); } // Works for pointers const std::string* GetPtr(std::size_t index) const { return &data[index]; } std::string* GetPtr(std::size_t index) { return FROM_CONST_OVERLOAD( GetPtr(index) ); } }; Simple and reusable implementation: template <typename T> T& WithoutConst(const T& ref) { return const_cast<T&>(ref); } template <typename T> T* WithoutConst(const T* ptr) { return const_cast<T*>(ptr); } template <typename T> const T* WithConst(T* ptr) { return ptr; } #define FROM_CONST_OVERLOAD(FunctionCall) \ WithoutConst(WithConst(this)->FunctionCall) Explanation: As posted in many answers, the typical pattern to avoid code duplication in a non-const member function is this: return const_cast<Result&>( static_cast<const MyClass*>(this)->Method(args) ); A lot of this boilerplate can be avoided using type inference. First, const_cast can be encapsulated in WithoutConst(), which infers the type of its argument and removes the const-qualifier. Second, a similar approach can be used in WithConst() to const-qualify the this pointer, which enables calling the const-overloaded method. The rest is a simple macro that prefixes the call with the correctly qualified this-> and removes const from the result. Since the expression used in the macro is almost always a simple function call with 1:1 forwarded arguments, drawbacks of macros such as multiple evaluation do not kick in. The ellipsis and __VA_ARGS__ could also be used, but should not be needed because commas (as argument separators) occur within parentheses. 
This approach has several benefits: * *Minimal and natural syntax -- just wrap the call in FROM_CONST_OVERLOAD( ) *No extra member function required *Compatible with C++98 *Simple implementation, no template metaprogramming and zero dependencies *Extensible: other const relations can be added (like const_iterator, std::shared_ptr<const T>, etc.). For this, simply overload WithoutConst() for the corresponding types. Limitations: this solution is optimized for scenarios where the non-const overload is doing exactly the same as the const overload, so that arguments can be forwarded 1:1. If your logic differs and you are not calling the const version via this->Method(args), you may consider other approaches. A: C++23 has updated the best answer for this question thanks to deducing this: struct s { auto && f(this auto && self) { // all the common code goes here } }; A single function template is callable as a normal member function and deduces the correct reference type for you. No casting to get wrong, no writing multiple functions for something that is conceptually one thing. A: I came up with a macro that generates pairs of const/non-const functions automatically. class A { int x; public: MAYBE_CONST( CV int &GetX() CV {return x;} CV int &GetY() CV {return y;} ) // Equivalent to: // int &GetX() {return x;} // int &GetY() {return y;} // const int &GetX() const {return x;} // const int &GetY() const {return y;} }; See the end of the answer for the implementation. The argument of MAYBE_CONST is duplicated. In the first copy, CV is replaced with nothing; and in the second copy it's replaced with const. There's no limit on how many times CV can appear in the macro argument. There's a slight inconvenience though. 
If CV appears inside of parentheses, this pair of parentheses must be prefixed with CV_IN: // Doesn't work MAYBE_CONST( CV int &foo(CV int &); ) // Works, expands to // int &foo( int &); // const int &foo(const int &); MAYBE_CONST( CV int &foo CV_IN(CV int &); ) Implementation: #define MAYBE_CONST(...) IMPL_CV_maybe_const( (IMPL_CV_null,__VA_ARGS__)() ) #define CV )(IMPL_CV_identity, #define CV_IN(...) )(IMPL_CV_p_open,)(IMPL_CV_null,__VA_ARGS__)(IMPL_CV_p_close,)(IMPL_CV_null, #define IMPL_CV_null(...) #define IMPL_CV_identity(...) __VA_ARGS__ #define IMPL_CV_p_open(...) ( #define IMPL_CV_p_close(...) ) #define IMPL_CV_maybe_const(seq) IMPL_CV_a seq IMPL_CV_const_a seq #define IMPL_CV_body(cv, m, ...) m(cv) __VA_ARGS__ #define IMPL_CV_a(...) __VA_OPT__(IMPL_CV_body(,__VA_ARGS__) IMPL_CV_b) #define IMPL_CV_b(...) __VA_OPT__(IMPL_CV_body(,__VA_ARGS__) IMPL_CV_a) #define IMPL_CV_const_a(...) __VA_OPT__(IMPL_CV_body(const,__VA_ARGS__) IMPL_CV_const_b) #define IMPL_CV_const_b(...) __VA_OPT__(IMPL_CV_body(const,__VA_ARGS__) IMPL_CV_const_a) Pre-C++20 implementation that doesn't support CV_IN: #define MAYBE_CONST(...) IMPL_MC( ((__VA_ARGS__)) ) #define CV ))(( #define IMPL_MC(seq) \ IMPL_MC_end(IMPL_MC_a seq) \ IMPL_MC_end(IMPL_MC_const_0 seq) #define IMPL_MC_identity(...) __VA_ARGS__ #define IMPL_MC_end(...) IMPL_MC_end_(__VA_ARGS__) #define IMPL_MC_end_(...) __VA_ARGS__##_end #define IMPL_MC_a(elem) IMPL_MC_identity elem IMPL_MC_b #define IMPL_MC_b(elem) IMPL_MC_identity elem IMPL_MC_a #define IMPL_MC_a_end #define IMPL_MC_b_end #define IMPL_MC_const_0(elem) IMPL_MC_identity elem IMPL_MC_const_a #define IMPL_MC_const_a(elem) const IMPL_MC_identity elem IMPL_MC_const_b #define IMPL_MC_const_b(elem) const IMPL_MC_identity elem IMPL_MC_const_a #define IMPL_MC_const_a_end #define IMPL_MC_const_b_end A: Typically, the member functions for which you need const and non-const versions are getters and setters. 
Most of the time they are one-liners so code duplication is not an issue. A: I did this for a friend who rightfully justified the use of const_cast... not knowing about it I probably would have done something like this (not really elegant) : #include <iostream> class MyClass { public: int getI() { std::cout << "non-const getter" << std::endl; return privateGetI<MyClass, int>(*this); } const int getI() const { std::cout << "const getter" << std::endl; return privateGetI<const MyClass, const int>(*this); } private: template <class C, typename T> static T privateGetI(C c) { //do my stuff return c._i; } int _i; }; int main() { const MyClass myConstClass = MyClass(); myConstClass.getI(); MyClass myNonConstClass; myNonConstClass.getI(); return 0; } A: This DDJ article shows a way using template specialization that doesn't require you to use const_cast. For such a simple function it really isn't needed though. boost::any_cast (at one point, it doesn't any more) uses a const_cast from the const version calling the non-const version to avoid duplication. You can't impose const semantics on the non-const version though so you have to be very careful with that. In the end some code duplication is okay as long as the two snippets are directly on top of each other. A: To add to the solution jwfearn and kevin provided, here's the corresponding solution when the function returns shared_ptr: struct C { shared_ptr<const char> get() const { return c; } shared_ptr<char> get() { return const_pointer_cast<char>(static_cast<const C &>(*this).get()); } shared_ptr<char> c; }; A: Didn't find what I was looking for, so I rolled a couple of my own... 
This one is a little wordy, but has the advantage of handling many overloaded methods of the same name (and return type) all at once: struct C { int x[10]; int const* getp() const { return x; } int const* getp(int i) const { return &x[i]; } int const* getp(int* p) const { return &x[*p]; } int const& getr() const { return x[0]; } int const& getr(int i) const { return x[i]; } int const& getr(int* p) const { return x[*p]; } template<typename... Ts> auto* getp(Ts... args) { auto const* p = this; return const_cast<int*>(p->getp(args...)); } template<typename... Ts> auto& getr(Ts... args) { auto const* p = this; return const_cast<int&>(p->getr(args...)); } }; If you have only one const method per name, but still plenty of methods to duplicate, then you might prefer this: template<typename T, typename... Ts> auto* pwrap(T const* (C::*f)(Ts...) const, Ts... args) { return const_cast<T*>((this->*f)(args...)); } int* getp_i(int i) { return pwrap(&C::getp_i, i); } int* getp_p(int* p) { return pwrap(&C::getp_p, p); } Unfortunately this breaks down as soon as you start overloading the name (the function pointer argument's argument list seems to be unresolved at that point, so it can't find a match for the function argument). Although you can template your way out of that, too: template<typename... Ts> auto* getp(Ts... args) { return pwrap<int, Ts...>(&C::getp, args...); } But reference arguments to the const method fail to match against the apparently by-value arguments to the template and it breaks. Not sure why.
{ "language": "en", "url": "https://stackoverflow.com/questions/123758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "302" }
Q: How do you manage web developers remotely? I'm the leader of a small web development team, and I have a feeling that we will have a couple of telecommuters joining the team pretty soon (either new employees, or existing employees that will begin telecommuting). Any idea how to effectively manage and collaborate with developers working remotely? Most of the work we do is client-driven. We're doing agile development (or our version of it, anyway), but since it's mostly client work, we can't really assign a feature to a developer and set them loose for a week or two like we might be able to with a desktop app or something like that. The biggest problem we have when people occasionally work from home is collaborating - it's tough to work together without the benefit of a whiteboard and hand-waving. It seems like software development is perfect for telecommuting, but I haven't been able to find many good resources about the practical aspects of working remotely within a development team. Has anyone else had any experience with this? A: I freelance a lot and in doing so work remotely a lot of the time. These are the things that make my life as easy as possible (so might be things you want to "suggest"). I think they're mostly common sense, but you never know... * *[Everyone] Communicate well. When you're having a conversation face-to-face, you can be verbose and explain things in a roundabout way. When you're limited to email, IM and phone, all parties need to explain themselves fully but succinctly. I find that summarising long emails into request/action points goes a long way towards getting things done well. *[Everyone] Have an online project tracking space. Most tend to use a ticket system of some description, where action points can be assigned to members. It wouldn't hurt to use this same space for tracking emails and sharing whiteboard ideas. Most online project apps allow for that by default. *[Management] Don't pester devs.
If you need something urgently, set the status of the ticket, give them a call and chase them up later on in the day. Half-hourly emails asking "is it done yet?" do more harm than good! *[Management] Make sure messages get passed along. If a dev says "somebody needs to do something", it's your job to make sure the message is passed along to the right person. There are few things more annoying than passing a message to a project manager only for them to accidentally sit on it. I don't want to have to chase up things like that because it's, frankly, not what I'm being paid for. *[Management] Make sure people have something to do. If you send them home with nothing on their task list that they can immediately action, they're not going to put in the effort. It's a damned sight harder to keep yourself productive at home than it is in the office when you've little or nothing that you can do. You might have to juggle tasks if there's a blocker. A: I work at home full time. Here are things that help in my small (6 people) team. Set up rules for using IM. For example, allow remote workers to block off time not to be interrupted by email or IM. Require workers to keep status up-to-date somewhere (IM, Yammer, etc.), which helps keep them accountable to stay on task. Stay in touch without being a distraction. Meet in person occasionally if possible. Nothing can replace a face-to-face meeting. Skype is ok for group meetings, but not if whiteboards are involved. Use SharedView or another screen sharing program for collaborating. Screenshots/screen captures are helpful as well to make sure both parties are on the same page. A: "Any idea how to effectively manage and collaborate with developers working remotely?" What does "effectively" mean? I can be negative and assume it means "with me, the project leader, in control of everything". I can be positive and assume you want people to be as effective as possible. Sometimes, "effective" is management-speak for "under my control".
Or it means "not screwing around." The question, then, is "effectively doing what?" Effectively "working" is rather vague. Hence my leap to the dark side of project management. [Which, I admit, is probably wrong. But without specific team productivity problems, the question has no answer.] "it's tough to work together without the benefit of a whiteboard and hand-waving" This is only sometimes true; there are lots of replacements. The "hand-waving" over the internet happens more slowly and more thoroughly. The group-think around the whiteboard is fun -- it's a kind of party. However, for some of us, it's not very productive. I need hours to digest and consider and work out alternatives; I'm actually not effective in the group whiteboard environment. I find it more effective to use the alternative "slow-motion" whiteboard technologies. I like to see a draft pitch for an idea. Comment on it. Refine it. A lot like a Wiki or Stackoverflow. I really like the internet RFC model -- here's my idea; comment on it. When there are no more improvements, that's as good as it's going to get. A: I work in Mississippi and my home office is in Michigan. I spend several hours a day pair programming with my team with ease. The tools I use are: * *SharedView *Remote Desktop Assistance *Live Meeting *Oovoo *Skype Which tool I use depends on who is involved and how many. "Use the right tool for the job and invest in a damn good headset." - Me. A: I've generally used some type of community-based software such as a wiki, blog, or forum to handle the documentation areas. We also have a Cisco phone system and use some capabilities of the system. I'd also recommend Live Meeting or WebEx to do frequent team meetings. Skype and IM clients such as Live Messenger are also good tools. For the short status updates, Twitter does the trick. A: Check out the Agile Scrum methodology with VSTS.
Scrum forces us to have a daily 15-minute meeting and small milestones; it ensures effective togetherness and tight communication. Make sure you use task and bug assignment etc. through VSTS. A: I agree with John Sheehan's response. I am a consultant and manage other consultants - both on a project basis (as PM) and on a client basis across projects. I have worked with developers on a purely remote basis as well as telecommuting (meaning the majority of time we are co-located). Working remotely is a matter of trust and communication. Co-locating is best, but if you work remotely, simply create a culture of frequent communication. IM and phone are great for this, email less so. If you have a less than communicative co-worker, it is up to you as the manager to reach out. Ask for status. Force code check-in on a frequent basis for review. [EDIT] - Yes, don't pester and set expectations! Be clear and concise. A: First of all use scrum (daily scrum calls, scrum board w/ burndown chart (wikis do a great job there), iteration in sprints etc). Next to that use tools that make it easier to collaborate remotely like Skype and VNC (maybe Campfire?) and a wiki. I worked for 2 years on a project w/ people in 3 countries on 2 continents and various time zones and it worked quite well. The key is having tools and methodologies that make it more difficult for people to "hide", so that everything you and your team does is visible. A: I find clear communication and staying on task are challenging with virtual teams. I try to use regularly scheduled update meetings (over the phone or video conference) with a written agenda to help with these challenges. At the front of the agenda, list the major milestones and the near-term milestones. The first item is always "check progress": each team member simply updates us on when they expect to finish the particular tasks involved. We try not to get involved in long stories here. It's simply "what are you going to do and when".
Once the progress check is done, deal with any other issues raised during the last week and any issues the team has that can be sorted out whilst you are in the meeting. Anything left over (such as new issues raised) needs to have the question asked "who needs to sort this out, and when?". Once you set a common format for the meeting you can do this weekly in 30-45 minutes with teams of 5-8 people. Keep it short and sweet so it isn't viewed as an imposition. Keep it focused on actions and schedule so it can be valuable. A: I'm currently the PM of a smaller project that has two developers (myself and another developer that works out of the office). We are currently having daily SCRUM meetings, which last for about 15 minutes. We discuss what got done the previous day, what problems were encountered and what I can do to help with these problems, and what will be done tomorrow. They're pretty quick and seem to be very helpful. A: Using time tracking software for your remote employees can greatly help you in managing the team. While hiring a remote employee, you would be concerned about: * *The amount of time spent in getting a task done. *The quality of the work done. *Collaboration based on the progress of the project. *The real-time progress on a task. *Collaborating to solve bugs and logical errors. I was in your situation a while ago and then I tried StaffTimerApp and it helped me in the following ways. * *Time tracking software gives crystal clear statistics about the time spent on getting a task done. StaffTimerApp captures screenshots and converts them into billable and non-billable hours. Hence, you would know if any time was wasted while getting the work done. You would also know the exact amount of time spent in getting the work done. If you pay your contractor by the hour, this application can help you tremendously.
*If you use time tracking software that captures screenshots, you can look at them to analyse the quality of work that is being delivered. I used this feature and was able to save some tasks from derailing. *Time tracking software lets the employer know how far along the employee is with the task, so the information it extracts makes collaboration easier. StaffTimerApp proved to be very helpful as I was able to collaborate with the other employees based on this information. *The screen sharing feature equipped me with the power of viewing my employee's laptop screen in real time. This way I would get to know about the progress on a task. So you need good time tracking software with great productivity analytics and employee monitoring capabilities to feel comfortable with hiring a remote developer.
{ "language": "en", "url": "https://stackoverflow.com/questions/123772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is OOP & completely avoiding implementation inheritance possible? I will choose Java as an example, most people know it, though any other OO language would work as well. Java, like many other languages, has interface inheritance and implementation inheritance. E.g. a Java class can inherit from another one and every method that has an implementation there (assuming the parent is not abstract) is inherited, too. That means the interface is inherited and the implementation for this method as well. I can override it, but I don't have to. If I don't override it, I have inherited the implementation. However, my class can also "inherit" (not in Java terms) just an interface, without implementation. Actually interfaces are really named that way in Java, they provide interface inheritance, but without inheriting any implementation, since all methods of an interface have no implementation. Now there was this article, saying it's better to inherit interfaces than implementations, you may like to read it (at least the first half of the first page), it's pretty interesting. It avoids issues like the fragile base class problem. So far this all makes a lot of sense and many other things said in the article make a lot of sense to me. What bugs me about this is that implementation inheritance means code reuse, one of the most important properties of OO languages. Now if Java had no classes (like James Gosling, the godfather of Java, has wished according to this article), it solves all problems of implementation inheritance, but how would you make code reuse possible then? E.g. if I have a class Car and Car has a method move(), which makes the Car move. Now I can sub-class Car for different types of cars, that are all cars, but are all specialized versions of Car. Some may move in a different way, these need to override move() anyway, but most would simply keep the inherited move, as they move just like the abstract parent Car.
Now assume for a second that there are only interfaces in Java, only interfaces may inherit from each other, a class may implement interfaces, but all classes are always final, so no class can inherit from any other class. How would you avoid, when you have an interface Car and a hundred Car classes, having to implement an identical move() method for each of them? What concepts for code reuse other than implementation inheritance exist in the OO world? Some languages have Mixins. Are Mixins the answer to my question? I read about them, but I cannot really imagine how Mixins would work in a Java world and if they can really solve the problem here. Another idea was that there is a class that only implements the Car interface, let's call it AbstractCar, and implements the move() method. Now other cars implement the Car interface as well, internally they create an instance of AbstractCar and they implement their own move() method by calling move() on their internal abstract Car. But wouldn't this be wasting resources for nothing (a method calling just another method - okay, the JIT could inline the code, but still) and using extra memory for keeping internal objects that you wouldn't even need with implementation inheritance? (after all every object needs more memory than just the sum of the encapsulated data) Also isn't it awkward for a programmer to write dummy methods like public void move() { abstractCarObject.move(); } ? Can anyone imagine a better way to avoid implementation inheritance and still be able to re-use code in an easy fashion? A: The problem with most examples against inheritance is that the person is using inheritance incorrectly; they are not a failure of inheritance to correctly abstract. In the article you posted a link to, the author shows the "brokenness" of inheritance using Stack and ArrayList. The example is flawed because a Stack is not an ArrayList and therefore inheritance should not be used.
The example is as flawed as String extending Character, or PointXY extending Number. Before you extend a class, you should always perform the "is_a" test. Since you can't say every Stack is an ArrayList without being wrong in some way, you should not inherit. The contract for Stack is different than the contract for ArrayList (or List) and Stack should not inherit methods that it does not care about (like get(int i) and add()). In fact Stack should be an interface with methods such as: interface Stack<T> { public void push(T object); public T pop(); public void clear(); public int size(); } A class like ArrayListStack might implement the Stack interface, and in that case use composition (having an internal ArrayList) and not inheritance. Inheritance is not bad, bad inheritance is bad. A: You could also use composition and the strategy pattern. public class Car { private ICar _car; public void Move() { _car.Move(); } } This is far more flexible than using inheritance-based behaviour, as it allows you to change behaviour at runtime by substituting new Car types as required. A: You can use composition. In your example, a Car object might contain another object called Drivetrain. The car's move() method could simply call the drive() method of its drivetrain. The Drivetrain class could, in turn, contain objects like Engine, Transmission, Wheels, etc. If you structured your class hierarchy this way, you could easily create cars which move in different ways by composing them of different combinations of the simpler parts (i.e. reuse code). A: To make mixins/composition easier, take a look at my Annotations and Annotation Processor: http://code.google.com/p/javadude/wiki/Annotations In particular, the mixins example: http://code.google.com/p/javadude/wiki/AnnotationsMixinExample Note that it doesn't currently work if the interfaces/types being delegated to have parameterized methods (or parameterized types on the methods). I'm working on that...
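The composition approach described in these answers can be put into a single runnable sketch. The class names (Drivetrain, SportsCar, Hovercar) are illustrative, not from the question: the reusable behavior lives in a small component, every concrete class stays final, and the only cost is the one-line forwarding method the question worries about.

```java
// Sketch of composition instead of implementation inheritance.
// Drivetrain, SportsCar and Hovercar are made-up illustrative names.
interface Car {
    String move();
}

// Reusable behavior lives in a component, not in a base class.
class Drivetrain {
    String drive() {
        return "rolling on wheels";
    }
}

// Every concrete car is final: no implementation inheritance anywhere.
final class SportsCar implements Car {
    private final Drivetrain drivetrain = new Drivetrain();

    public String move() {
        return drivetrain.drive(); // the "dummy" forwarding method from the question
    }
}

final class Hovercar implements Car {
    public String move() {
        return "hovering"; // a car that moves differently supplies its own move()
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        for (Car car : new Car[] { new SportsCar(), new Hovercar() }) {
            System.out.println(car.move());
        }
    }
}
```

Because no class can be extended, the fragile base class problem cannot arise; changing Drivetrain affects its callers only through its documented public method.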
A: It's funny to answer my own question, but here's something I found that is pretty interesting: Sather. It's a programming language with no implementation inheritance at all! It knows interfaces (called abstract classes, with no implementation or encapsulated data), and interfaces can inherit from each other (actually they even support multiple inheritance!), but a class can only implement interfaces (abstract classes, as many as it likes); it can't inherit from another class. It can however "include" another class. This is essentially a delegation concept. Included classes must be instantiated in the constructor of your class and are destroyed when your class is destroyed. Unless you override the methods they have, your class inherits their interface as well, but not their code. Instead methods are created that just forward calls on your object to the identically named method of the included object. The difference between included objects and plain encapsulated objects is that you don't have to create the delegation forwarders yourself, and they don't exist as independent objects that you can pass around; they are part of your object and live and die together with your object (or, more technically: the memory for your object and all included ones is created with a single alloc call, in the same memory block; you just need to init them in your constructor call, while when using real delegates, each of these objects causes its own alloc call, has its own memory block, and lives completely independently of your object). The language is not so beautiful, but I love the idea behind it :-)
Or properties from HashTable just because they store values. See [Effective]. The "code reuse" was more a "business view" of the OO characteristics, meaning that you objects were easily distributable among nodes; and were portable and didn't not have the problems of previous programming languages generation. This has been proved half rigth. We now have libraries that can be easily distributed; for instance in java the jar files can be used in any project saving thousands of hours of development. OO still has some problems with portability and things like that, that is the reason now WebServices are so popular ( as before it was CORBA ) but that's another thread. This is one aspect of "code reuse". The other is effectively, the one that has to do with programming. But in this case is not just to "save" lines of code and creating fragile monsters, but designing with inheritance in mind. This is the item 17 in the book previously mentioned; Item 17: Design and document for inheritance or else prohibit it. See [Effective] Of course you may have a Car class and tons of subclasses. And yes, the approach you mention about Car interface, AbstractCar and CarImplementation is a correct way to go. You define the "contract" the Car should adhere and say these are the methods I would expect to have when talking about cars. The abstract car that has the base functionality that every car but leaving and documenting the methods the subclasses are responsible to handle. In java you do this by marking the method as abstract. When you proceed this way, there is not a problem with the "fragile" class ( or at least the designer is conscious or the threat ) and the subclasses do complete only those parts the designer allow them. Inheritance is more to "specialize" the classes, in the same fashion a Truck is an specialized version of Car, and MosterTruck an specialized version of Truck. 
It does not make sense to create a "ComputerMouse" subclass from a Car just because it has a wheel (a scroll wheel) like a car, it moves, and has a wheel below, just to save lines of code. It belongs to a different domain, and it will be used for other purposes. The way to prevent "implementation" inheritance has been in the programming language since the beginning: use the final keyword on the class declaration, and this way you prohibit subclasses. Subclassing is not evil if it's done on purpose. If it's done carelessly it may become a nightmare. I would say that you should start as private and "final" as possible and, if needed, make things more public and extensible. This is also widely explained in the presentation "How to design good API's and why it matters". See [Good API]. Keep reading articles, and with time and practice (and a lot of patience) these things will become clearer. Although sometimes you just need to do the work and copy/paste some code :P. This is ok, as long as you try to do it well first. Here are the references, both from Joshua Bloch (formerly at Sun working on the core of Java, now working for Google): [Effective] Effective Java. Definitely the best Java book a non-beginner should learn, understand and practice. A must-have. Effective Java [Good API] A presentation on API design, reusability and related topics. It is a little lengthy, but it is worth every minute. How To Design A Good API and Why it Matters Regards. Update: Take a look at minute 42 of the video link I sent you. It talks about this topic: "When you have two classes in a public API and you think to make one a subclass of another, like Foo is a subclass of Bar, ask yourself: is every Foo a Bar?..." And in the previous minute it talks about "code reuse" while talking about TimeTask. A: Inheritance is not necessary for an object oriented language. Consider Javascript, which is even more object-oriented than Java, arguably. There are no classes, just objects.
Code is reused by adding existing methods to an object. A Javascript object is essentially a map of names to functions (and data), where the initial contents of the map are established by a prototype, and new entries can be added to a given instance on the fly. A: You should read Design Patterns. You will find that interfaces are critical to many types of useful design patterns. For example, abstractions of different types of network protocols will have the same interface (to the software calling them) but little code reuse, because of the different behaviors of each type of protocol. Some algorithms are eye-opening in showing how to put together the myriad elements of a program to do some useful task. Design Patterns does the same for objects: it shows you how to combine objects in a way to perform a useful task. Design Patterns by the Gang of Four
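As a footnote to the JavaScript answer above: the "map of names to functions" reuse it describes can be sketched in a few lines. The counter objects here are purely illustrative, not something from the answers:

```javascript
// Shared behavior lives on a plain prototype object.
const counterProto = {
  increment() { this.count += 1; return this.count; },
  reset() { this.count = 0; }
};

// "Inheritance" is just linking new objects to that prototype.
const a = Object.create(counterProto);
a.count = 0;
const b = Object.create(counterProto);
b.count = 10;

a.increment(); // both objects reuse the very same increment function
b.increment();

// A new entry can be added to one instance on the fly,
// without touching the prototype or the other instance.
b.double = function () { this.count *= 2; return this.count; };
```

Here b gains a double method while a does not; that per-instance, run-time extensibility is what stands in for class inheritance.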
{ "language": "en", "url": "https://stackoverflow.com/questions/123773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Parent.FindControl() not working? I have a page that has an iframe. From one of the pages within the iframe I want to look back and make a panel on the default page invisible, because it is overshadowing a popup. I tried using Parent.FindControl, but it does not seem to be working. I am positive I have the right id in the FindControl call, because I used Firebug to inspect the panel and I copied the id from there. Does anyone know what I am missing? A: I didn't completely follow your problem, but I'll take my best shot. It sounds like you have an ASP.NET page that has an iframe in it that refers to another ASP.NET page, and in the page requested by the iframe you want to modify the visibility of an item contained in the page that contains the iframe. If my understanding of your problem is correct, then you have some somewhat nasty problems here.

* What's actually happening at the browser level is that the first page gets loaded, and that page has an iframe in it that is making a second request to the server.
* This second request can't FindControl your control, because it isn't in the same page and isn't alive during that request.

So you have some alternatives here:

* Get rid of the iframe and use a panel. This will put them both in the same request, and able to find each other.
* (Additionally) When you do this you are going to want to use Page.FindControl(), not Parent.FindControl(), as the FindControl method just searches through the Control's child control collection, and I presume your control will be somewhere else on the page.
* On the client side in the iframe you could use some javascript code to access the outer page's DOM, and set the visibility of it there.

A: Parent document:

<body>
  <input type="text" id="accessme" value="Not Accessed" />
  ...
</body>

Document in iframe:

<head>
...
<script type="text/javascript">
function setValueOfAccessme() {
    window.parent.document.getElementById("accessme").value = "Accessed";
}
</script>
</head>
<body onload="setValueOfAccessme();">
</body>

The document inside the iframe accesses the document object on the window object on load, and uses the getElementById() function to set the value of the input inside the body of the parent document. A: For starters, FindControl isn't a function in Javascript. A: Alternatively here's a more helpful find control routine...

Public Shared Function MoreHelpfulFindControl(ByVal parent As UI.Control, ByVal id As String) As UI.Control
    If parent.ID = id Then Return parent
    For Each child As UI.Control In parent.Controls
        Dim recurse As UI.Control = MoreHelpfulFindControl(child, id)
        If recurse IsNot Nothing Then Return recurse
    Next
    Return Nothing
End Function
{ "language": "en", "url": "https://stackoverflow.com/questions/123776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Logging ALL Queries on a SQL Server 2008 Express Database? Is there a way to tell SQL Server 2008 Express to log every query (including each and every SELECT query!) into a file? It's a development machine, so the negative side effects of logging SELECT queries are not an issue. Before someone suggests using the SQL Profiler: this is not available in Express (does anyone know if it's available in the Web Edition?) and I'm looking for a way to log queries even when I am away. A: SQL Server Profiler:

* File → New Trace
* The "General" tab is displayed.
* Here you can choose "Save to file:" so it's logged to a file.
* View the "Event Selection" tab.
* Select the items you want to log.
* TSQL → SQL:BatchStarting will get you SQL SELECTs.
* Stored Procedures → RPC:Completed will get you stored procedures.

More information from Microsoft: SQL Server 2008 Books Online - Using SQL Server Profiler. Update - SQL Express Edition: A comment was made that MS SQL Server Profiler is not available for the Express edition. There does appear to be a free alternative: Profiler for Microsoft SQL Server 2005 Express Edition. A: There is one more way to get information about queries that have been executed on MS SQL Server Express, described here. Briefly, it runs a smart query against system tables and gets info (text, time executed) about queries (or cached query plans if needed). Thus you can get info about executed queries without the profiler in the MSSQL 2008 Express edition.

SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC

A: You can log changes. SQL Server 2008 will make this especially easy with Change Data Capture. But SQL Server isn't very good at logging SELECTs. It is theoretically possible with the profiler, but it will kill your performance.
You might "get away with it" on your desktop, but I think you'll notice your machine acting slow enough to cause problems. And it definitely won't work after any kind of deployment. One important point a couple of others have missed already: unless they changed something for 2008 I didn't hear about, you can't trigger a SELECT. A: A late answer, but I hope it is useful to other readers here. Using SQL Server Express with advanced auditing requirements such as this is not really optimal unless it's only in a development environment. You can use traces (www.broes.nl/2011/10/profiling-on-sql-server-express/) to get the data you need, but you'd have to parse these yourself. There are third-party tools that can do this, but their cost will be quite high. Log explorer from ApexSQL can log everything but SELECTs, and Idera's compliance manager will log SELECT statements as well, but its cost is a lot higher. A: Just for the record, I'm including the hints to use DataWizard's SQL Performance Profiler as a separate answer since it's really the opposite of the answer pointing at SQL Server Profiler. There is a free trial for 14 days, but even if you need to buy it, it's only $20 for 3 servers (at the moment of writing, 2012-06-28). This seems more than fair to me considering the thousands everybody using the SQL Server Express edition has saved. I've only used the trial so far and it offers exactly what the OP was looking for: a way to trace all queries coming in to a specific database. It also offers to export a trace to an XML file. The paid version offers some more features, but I haven't tried them yet. Disclaimer: I'm just another developer messing with DBs from time to time and I'm in no way affiliated with DataWizard. I just so happened to like their tool and wanted to let people know it existed, as it's helped me out with profiling my SQL Server Express installation. A: I would either use triggers or use a third-party tool such as Red Gate to check out your SQL log files.
A: It seems that you can create traces using T-SQL: http://support.microsoft.com/kb/283790/ That might help. A: The SQL query below can show simple query logs:

SELECT last_execution_time, text
FROM sys.dm_exec_query_stats stats
CROSS APPLY sys.dm_exec_sql_text(stats.sql_handle)
ORDER BY last_execution_time

(The original answer included a screenshot of the result here.)
{ "language": "en", "url": "https://stackoverflow.com/questions/123781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Design question: How would you design a messaging/inbox system? Many websites have the concept of sending messages from user to user. When you send a message to another user, the message would show up in their inbox. You could respond to the message, and it would show up as a new entry in that message thread. You should be able to see if you've read a given message already, and messages that have got a new response should rise to the top. How would you design the classes (or tables or whatever) to support such a system? A:

user
  id
  name

messages
  id
  to_user_id
  from_user_id
  title
  date

message_post
  id
  message_id
  user_id
  message
  date

Classes would reflect this sort of schema. A: You might want to extend Owen's schema to support bulk messages where the message is stored only once. Also modified so there's only one sender, and many receivers (there's never more than one sender in this scheme):

user
  id
  name

message
  id
  recipient_id
  content_id
  date_time_sent
  date_time_read
  response_to_message_id (refers to the email this one is in response to - threading)
  expires
  importance
  flags (read, read reply, etc)

content
  id
  message_id
  sender_id
  title
  message

There are many, many other features that could be added, of course, but most people think of the above features when they think "email". -Adam
Make a table called [Messages] and give it the following columns:

* mID (message ID)
* from_user
* to_user
* message
* time
* tID (thread ID)
* read (a boolean)

Something like that should work for the table design. The classes depend on what system you're designing it on. A: Table Message:

id INTEGER
recipient_id INTEGER -- FK to users table
sender_id INTEGER -- ditto
subject VARCHAR
body TEXT

Table Thread:

parent_id -- FK to message table
child_id -- FK to message table

Then, you could just go through the Thread table to get a thread of messages.
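The schemas above are language-neutral, so here is a tiny in-memory sketch (JavaScript, with field names assumed for illustration rather than taken from any one answer) of how a per-message read flag and a thread's latest activity give the inbox behavior the question asks for, i.e. unread counts and new-response threads sorting to the top:

```javascript
// Each message row carries roughly the fields the schemas above describe.
const messages = [
  { id: 1, threadId: 1, from: 'alice', to: 'bob',   body: 'Hi',     time: 100, read: true  },
  { id: 2, threadId: 1, from: 'bob',   to: 'alice', body: 'Hey',    time: 200, read: false },
  { id: 3, threadId: 2, from: 'carol', to: 'bob',   body: 'Lunch?', time: 150, read: false }
];

// A user's inbox: group their messages by thread, newest activity first.
function inboxFor(user) {
  const threads = new Map();
  for (const m of messages) {
    if (m.to !== user && m.from !== user) continue;
    const t = threads.get(m.threadId) ||
      { threadId: m.threadId, messages: [], lastActivity: 0, unread: 0 };
    t.messages.push(m);
    t.lastActivity = Math.max(t.lastActivity, m.time);
    if (m.to === user && !m.read) t.unread += 1; // unread = received and not yet read
    threads.set(m.threadId, t);
  }
  return [...threads.values()].sort((x, y) => y.lastActivity - x.lastActivity);
}
```

Threads with a new response bubble to the top because lastActivity is just the maximum message time, and marking a message read is a single-field update, mirroring the read / DateRead columns proposed above.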
{ "language": "en", "url": "https://stackoverflow.com/questions/123783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to make my code run on multiple cores? I have built an application in C# that I would like to be optimized for multiple cores. I have some threads, should I do more? Updated for more detail:

* C# 2.0
* Run on Windows Vista and Windows Server 2003

Updated again:

* This code is running as a service
* I do not want to have the complete code... my goal here is to get your experience and how to start. Like I say, I already use threads. What more can I do?

A: You might want to take a look at the parallel extensions for .NET http://msdn.com/concurrency A: You might want to read Herb Sutter's column 'Effective Concurrency'. You'll find those articles here, with others. A: To be able to utilize multiple cores more efficiently you should divide your work up into parts that can be executed in parallel, and use threads to divide the work over the cores. You could use threads, background workers, thread pools, etc. A: For C#, start learning the LINQ way of doing things, then make use of the Parallel LINQ library and its .AsParallel() extension. A: I'd generalize that writing a highly optimized multi-threaded process is a lot harder than just throwing some threads into the mix. I recommend starting with the following steps:

* Split up your workloads into discrete parallel executable units.
* Measure and characterize workload types - network intensive, I/O intensive, CPU intensive, etc. - these become the basis for your worker pooling strategies. E.g. you can have pretty large pools of workers for network-intensive applications, but it doesn't make sense to have more workers than hardware threads for CPU-intensive tasks.
* Think about queuing/arrays or a ThreadWorkerPool to manage pools of threads. The former gives more fine-grained control than the latter.
* Learn to prefer async I/O patterns over sync patterns if you can - it frees more CPU time to perform other tasks.
* Work to eliminate, or at least reduce, serialization around contended resources such as disk.
* Minimize I/O; acquire and hold the minimum level of locks for the minimum period possible. (Reader/Writer locks are your friend.)
* Comb through that code to ensure that resources are locked in a consistent sequence, to minimize deadly embrace (deadlock).
* Test like crazy - race conditions and bugs in multithreaded applications are hellish to troubleshoot - often you only see the forensic aftermath of the massacre.

Bear in mind that it is entirely possible that a multi-threaded version could perform worse than a single-threaded version of the same app. There is no substitute for good engineering measurement. A: Understanding the parallelism (or potential for parallelism) in the problem(s) you are trying to solve, your application and its algorithms is much more important than any details of thread synchronization, libraries, etc. Start by reading Patterns for Parallel Programming (which focuses on 'finding concurrency' and higher-level design issues), and then move on to The Art of Multiprocessor Programming (practical details starting from a theoretical basis).
{ "language": "en", "url": "https://stackoverflow.com/questions/123792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Design question: How would you design a recurring event system? If you were tasked to build an event scheduling system that supported recurring events, how would you do it? How do you handle it when a recurring event is removed? How could you see when the future events will happen? i.e. When creating an event, you could pick "repeating daily" (or weekly, yearly, etc). One design per response please. I'm used to Ruby/Rails, but use whatever you want to express the design. I was asked this at an interview, and couldn't come up with a really good response that I liked. Note: this was already asked/answered here. But I was hoping to get some more practical details, as detailed below:

* If it was necessary to be able to comment or otherwise add data to just one instance of the recurring event, how would that work?
* How would event changes and deletions work?
* How do you calculate when future events happen?

A: I've had to do this before when I was managing the database end of the project. I requested that each event be stored as a separate event. This allows you to remove just one occurrence, or you could move a span. It's a lot easier to remove multiples than to try and modify a single occurrence and turn it into two. We were then able to make another table which simply had a recurrenceID and contained the information of the recurrence. A: @Joe Van Dyk asked: "Could you look in the future and see when the upcoming events would be?" If you wanted to see/display the next n occurrences of an event, they would have to either a) be calculated in advance and stored somewhere, or b) be calculated on the fly and displayed. This would be the same for any eventing framework. The disadvantage with a) is that you have to put a limit on it somewhere, and after that you have to use b). It's easier just to use b) to begin with. The scheduling system does not need this information; it just needs to know when the next event is.
A: I started by implementing some Temporal Expressions as outlined by Martin Fowler. These take care of figuring out when a scheduled item should actually occur. It is a very elegant way of doing it. What I ended up with was just a build-up on what is in the article. The next problem was figuring out how in the world to store the expressions. The other issue is, when you read out the expression, how does it fit into a not-so-dynamic user interface? There was talk of just serializing the expressions into a BLOB, but it would be difficult to walk the expression tree to know what was meant by it. The solution (in my case) was to store parameters that fit the limited number of cases the user interface will support, and from there, use that information to generate the Temporal Expressions on the fly (you could serialize them when created, as an optimization). So, the Schedule class ends up having several parameters like offset, start date, end date, day of week, and so on... and from that you can generate the Temporal Expressions to do the hard work. As for having instances of the tasks, there is a 'service' that generates tasks for N days. Since this is an integration with an existing system and all instances are needed, this makes sense. However, an API like this can easily be used to project the recurrences without storing all instances. A: When saving the event I would save the schedule to a store (let's call it "Schedules"), and I'd calculate when the event was to fire the next time and save that as well, for instance in "Events". Then I'd look in "Events" and figure out when the next event was to take place, and go to sleep until then. When the app "wakes up" it would calculate when the event should take place again, store this in "Events" again, and then perform the event. Repeat. If an event is created while sleeping, the sleep is interrupted and recalculated.
If the app is starting or recovering from a sleep event or similar, check "Events" for passed events and act accordingly (depending on what you want to do with missed events). Something like this would be flexible and would not take unnecessary CPU cycles. A: Off the top of my head (after revising a couple things while typing/thinking): Determine the minimum recurrence-resolution needed; that's how often the app runs. Maybe it's daily, maybe every five minutes. For each recurring event, store the most recent run time, the run-interval and other goodies like expiration time if that's desirable. Every time the app runs, it checks all events, comparing (today/now + recurrenceResolution) to (recentRunTime + runInterval) and if they coincide, fire the event. A: When I wrote a calendar app for myself mumble years ago, I basically just stole the scheduling mechanism from cron and used that for recurring events. e.g., Something taking place on the second Saturday of every month except January would include the instruction "repeat=* 2-12 8-14 6" (every year, months 2-12, the 2nd week runs from the 8th to the 14th, and 6 for Saturday because I used 0-based numbering for the days of the week). While this makes it quite easy to determine whether the event occurs on any given date, it is not capable of handling "every N days" recurrence and is also rather less than intuitive for users who aren't unix-savvy. To deal with unique data for individual event instances and removal/rescheduling, I just kept track of how far out events had been calculated for and stored the resulting events in the database, where they could then be modified, moved, or deleted without affecting the original recurrent event information. When a new recurring event was added, all instances were immediately calculated out until the existing "last calculated" date. I make no claim that this is the best way to do it, but it is a way, and one which works quite well within the limitations I mentioned earlier. 
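The asker said to use whatever language, so here is a sketch (JavaScript, with the field layout inferred from the description above; the exact spec format in the original app may have differed) of the date-matching step that makes the cron-style scheme easy, checking a "year month day-of-month weekday" spec against a date:

```javascript
// One spec field against one value: '*' matches anything,
// 'a-b' matches an inclusive range, a bare number matches itself.
function fieldMatches(field, value) {
  if (field === '*') return true;
  if (field.includes('-')) {
    const [lo, hi] = field.split('-').map(Number);
    return value >= lo && value <= hi;
  }
  return Number(field) === value;
}

// spec: "year month dayOfMonth weekday", e.g. "* 2-12 8-14 6" for the
// second Saturday of every month except January (weekdays 0-based, Sunday = 0).
function occursOn(spec, date) {
  const [y, m, d, w] = spec.split(' ');
  return fieldMatches(y, date.getFullYear()) &&
         fieldMatches(m, date.getMonth() + 1) && // 1-based months, so 2-12 skips January
         fieldMatches(d, date.getDate()) &&
         fieldMatches(w, date.getDay());         // getDay() is 0-based from Sunday
}
```

"Every N days" recurrence indeed does not fit this shape, as the answer notes, since each field is tested against the date in isolation.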
A: If you have a simple recurring event, such as daily, weekly or a couple of days a week, what's wrong with using the built-in scheduler/cron/at functionality? Creating an executable/console app and setting up when to run it? No complicated calendar, event or time management. :) //W
{ "language": "en", "url": "https://stackoverflow.com/questions/123793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Setting placeholder properties from the controller? I have an if/else statement in the controller and 2 different types of views to display depending on the outcome. So I have my 2 views in a .aspx page with a placeholder around each. How do I set the property placeholder.Visible from the controller? Or, by the theory of MVC, should I even be doing that sort of presentation in the controller? It sounds like I would be blurring the separation of front-end output. A: Don't do that... just have two aspx pages/views. Don't worry about placeholders. To elaborate, yes, you can do that kind of logic in an action method. And yes, you can have an action method render a view conditionally. This is normal! If the logic in the action gets to be so much that you have difficulty maintaining the unit test, refactor.
{ "language": "en", "url": "https://stackoverflow.com/questions/123796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Visual Studio Freezing/TFS Window Might be off screen I am using Visual Studio 2005 with Team Foundation Server. When I right-click a file under the source control and choose "compare", VS appears to freeze until I hit escape. My guess is that the window that is supposed to be popping up is somewhere I can't get to. I tried minimizing all the windows that I can and it is nowhere to be found. A: Try the keyboard shortcut to get to the window's main menu (), then hit 'M' for move and hit an arrow key to attach the window to the mouse - then at the next move of the mouse it should jump to it. Experiment with a window you can see first. A: I had the same problem when trying to check in to TFS - no dialog appeared, and the ESC key undid the freeze. I had recently, before the problem, changed my laptop + monitor configuration as follows: from the primary screen being the laptop and the secondary screen being the monitor, to the primary screen being the monitor and the secondary being the laptop. I got rid of my secondary screen and tried again. Sure enough, the invisible dialog was no longer invisible.
{ "language": "en", "url": "https://stackoverflow.com/questions/123803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Blank Page in JSF If my code throws an exception, sometimes - not every time - JSF presents a blank page. I'm using Facelets for layout. A similar error was reported in this Sun forum post, but without answers. Anyone else with the same problem, or have a solution? ;) Due to some requests, here follow more details. web.xml:

<error-page>
  <exception-type>com.company.ApplicationResourceException</exception-type>
  <location>/error.faces</location>
</error-page>

And the stack related to JSF is printed after the real exception:

####<Sep 23, 2008 5:42:55 PM GMT-03:00> <Error> <HTTP> <comp141> <AdminServer> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1222202575662> <BEA-101107> <[weblogic.servlet.internal.WebAppServletContext@6d46b9 - appName: 'ControlPanelEAR', name: 'ControlPanelWeb', context-path: '/Web'] Problem occurred while serving the error page. javax.servlet.ServletException: viewId:/error.xhtml - View /error.xhtml could not be restored.
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:249)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:226)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:124)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:525)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:261)
at weblogic.servlet.internal.ForwardAction.run(ForwardAction.java:22)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.ErrorManager.handleException(ErrorManager.java:144)
at weblogic.servlet.internal.WebAppServletContext.handleThrowableFromInvocation(WebAppServletContext.java:2201)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2053)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1366)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)

javax.faces.application.ViewExpiredException: viewId:/error.xhtml - View /error.xhtml could not be restored.
at com.sun.faces.lifecycle.RestoreViewPhase.execute(RestoreViewPhase.java:180)
at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:248)
at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:117)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:244)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:226)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:124)

I'm using the JSF version Mojarra 1.2_09, RichFaces 3.2.1.GA and Facelets 1.1.13.
Hoping for some help :( A: I think this largely depends on your JSF implementation. I've heard that some will render blank screens. The one we were using would throw error 500s with a stack trace. Other times our buttons wouldn't work, without any error for the user. This was all during our development phase. But the best advice I can give you is to catch the exceptions and log them in an error log so you have the stack trace for debugging later. For errors that we couldn't do anything about, like a backend failing, we would just add a fatal message to the FacesContext that gets displayed on the screen, and log the stack trace. A: I fixed a similar problem in my error.jsp page today. This won't be exactly the same as yours, but it might point someone in the right direction if they're having a similar problem. My problem seemed to be coming from two different sources. First, the message exception property wasn't being set in some of the servlets that were throwing exceptions caught by the error page. The servlets were catching and rethrowing exceptions using the ServletException(Throwable rootCause) constructor. Second, in the error page itself, the original author had used scriptlet code to parse the message using String.split(message, ";"); since the message was null, this failed. I was getting a NullPointerException in my error log, along with the message "Problem occurred while serving the error page." These two things combined to give me a blank page at the URL of the servlet that was throwing the original exception. I fixed my problem by providing my own error message when I rethrow exceptions in my servlets, using the ServletException(String message, Throwable rootCause) constructor, so the error message will no longer be null. I also rewrote the error.jsp page using EL instead of scriptlet code, but that wasn't strictly necessary.
In my case it was due to custom code which was a too restrictive and the error was not logged.
{ "language": "en", "url": "https://stackoverflow.com/questions/123809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Which JMS implementation do you use? We are using ActiveMQ 5.2 as our implementation of choice, and we picked it a while ago. It performs well enough for our use right now. Since it's been a while, I was wondering what other Java Message Service implementations are in use and why? Surely there are more than a few. A: We rely on AMQ (5.1) via the Camel framework, and there haven't been any issues. AMQ 4 was a tad more fishy. A: WebLogic JMS provider when using WebLogic. Works great. A: TIBCO EMS. It's a commercial message service with Java/JMS, C, .net, and other bindings for it. A: Sun's open-source OpenMQ (https://mq.dev.java.net/). You can get free and paid support for the same. See this blog post about some comparison with ActiveMQ, etc. -- http://alexismp.wordpress.com/2008/06/06/openmq-the-untold-story/. I've heard that OpenMQ is more stable. ActiveMQ is more flexible, as in you can use it with more languages. There are probably more people on ActiveMQ's mailing list than on OpenMQ's. A: In one of the recent projects I was in, we used Sonic MQ. A good overall implementation with good bindings to .NET. We had a few scalability problems, but I have to admit that the scalability requirements were very strict: if I recall correctly, something like 20,000 messages a second with no delays allowed between the 200 different clients (every client had to receive every message at the same time). A: I've used JBossMQ, which comes with the JBoss app server up to version 4, and which is solid but limited. JBoss Messaging was the replacement; it comes with JBossAS 5 and is a huge improvement. ActiveMQ I have a real dislike for. The developer(s) seem to have gone for performance and features to the detriment of stability, and it's phenomenally buggy. Given that it's the JMS fabric for Geronimo, I worry.
A: IBM WebSphere MQ 5 and 6, ActiveMQ 5.2.0. Also check out Micro QueueManager at http://codingjunky.com/page5/page4/page4.html It is small, easy to install, and usable for smaller projects. A: Before delving into JMS, consider AMQP as well - it might be a new standard. JMS providers I have worked with (in varying degrees):

* TIBCO EMS - very quick and robust, good API support, Java friendly, a native C API exists. Best commercial choice I've used.
* WebSphere MQ (and its JMS implementation) - so-so. Pub/sub is not exactly quick, and many configuration options and choices are 'strange' and overly complex from the long history of that product. Just look at the amount of documentation...
* Solace JMS - very high throughput (the JMS broker is built in hardware!), good choices of connecting protocols (MQTT, AMQP, XML over HTTP as admin protocols).
* Fiorano MQ - used to be aggressive in marketing but lost a lot of market share; maturity concerns.
* Sonic MQ - solid product, also supports a C API.
* ActiveMQ - if you want to go with an open-source product (inexpensive support, great community, limited add-on products, limited enterprise features), this is probably your best choice. Works out of the box and is the backbone of several tools like Apache Camel, for example.

A: We are using SonicMQ, JBossMQ and the "micro broker" of Lotus Expeditor Integrator. We are using them for different purposes:

- JBossMQ is used internally and to communicate out of all our Java EE applications, which run on JBoss.
- Lotus Expeditor is used in "remote sites" where we only have limited resources and IT staff.
- SonicMQ is our messaging backbone; we use it for connecting central systems, but also to connect remote systems in approx. 1000 sites.

We are having good experiences with all of them, but our experience is also that with a more complex environment you have to do more active administration of the messaging system. This became especially true with SonicMQ at our site :-).
From a performance perspective we had the best experience with SonicMQ, especially in queue-based persistent messaging. A: I have used ActiveMQ in production for a couple of years but I was never happy about its stability (especially when cluster-enabled). Never looked back after switching to OpenMQ. You might want to look into RabbitMQ or ZeroMQ.
{ "language": "en", "url": "https://stackoverflow.com/questions/123817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Using mercurial's mq for managing local changes I have a local mercurial repository with some site-specific changes in it. What I would like to do is set a couple files to be un-commitable so that they aren't automatically committed when I do an hg commit with no arguments. Right now, I'm doing complicated things with mq and guards to achieve this, pushing and popping and selecting guards to prevent my changes (which are checked into an mq patch) from getting committed. Is there an easier way to do this? I'm sick of reading the help for all the mq commands every time I want to commit a change that doesn't include my site-specific changes. A: I would put those files in .hgignore and commit a "sample" form of those files so that they could be easily recreated for a new site when someone checks out your code. A: I know that bazaar has shelve and unshelve commands to push and pop changes you don't want included in a commit. I think it's pretty likely that Mercurial has similar commands. A: I'm curious why you can't have another repo checked out with those specific changes, and then transplant/pull/patch to it when you need to update stuff. Then you'd be primarily working from the original repo, and it's not like you'd be wasting space since hg uses links. I commonly have two+ versions checked out so I can work on a number of updates then merge them together.
{ "language": "en", "url": "https://stackoverflow.com/questions/123826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How much process should a single developer follow? Is a formal process too much? Since I didn't do a good job writing the last question, and most of the answers were good, but not at all in the direction I intended for the question to go in, I deleted it and remade it as this question. I'm a solo developer on my own projects, generally very small things, but I have a few ideas that might turn into FOSS projects. I believe in documentation (to varying degrees, depending on the specific project and the end user), source control, and project management (including bug tracking, time management, and so on). However, I'm not sure how much of a formal process I should be following. Perhaps just keeping a README, associated design/requirements documents, and in-code comments under source control is sufficient. Or maybe there's an agile process that is suitable for a single developer to follow. Or maybe I should take an old-school waterfall model for each project. What kinds of processes exist for or can be adapted to a solo developer, if I even need a formal process? EDIT: I realize that there are tasks that I'm going to be doing, like documentation and source control. However, I'm not sure about the how part of the question. As a solo developer, should I adopt a more agile approach (if so, which "branch" of Agile - XP? Scrum? RAD?) or a more conventional approach (waterfall or spiral?)? A: Even if you don't need process to promote good communication between team members, process can help you compensate for the fact that you aren't as superhuman as you thought you were when you were 18 :) The type and amount of 'paperwork' you decide to do depends on your own strengths and weaknesses. Bad memory? Write down your designs and thoughts daily. Good at seeing trees but not forests? Make sure you are extra careful with your requirements and designs. Good at seeing forests but not trees? Detailed task lists, time estimates, and frequent deliverables are your friend. 
It boils down to: what are you likely to mess up, and which processes help with your particular way of working. A: Remember that while you may be solo now, those projects may become successful enough that others join you. So while you may not need all that extra stuff now, eventually you might wish you had some design documentation and instructions for building things, managing the source code repository, etc. Also remember that those "other people" may be you in a few years, when you have forgotten everything you know now. (You're young--you don't know yet just how quickly memories disappear.) So think of what you'd like to record for the benefit of your future self. A: You definitely need a process; there's a lot of non-code data that goes into managing and supporting a project. Without a process you'll be suffering quickly, re-hashing design ideas because you forgot all those good reasons for not doing something or re-learning how to branch svn b/c you only do it once a month. Documentation about design, design decisions, operations, etc is all vital for any significant project. And testing, source control, etc are all good development practices and should be done regardless of the project size. A: This is a very broad question, but perhaps I can help by sharing an experience of mine. I worked for almost 5 years on a hobby game coding project with a couple of friends of mine. A very tightly knit group of developers, we usually lugged our machines into a single apartment for a weekend to develop the project. My point here is that it could be compared to a single person effort, as we were all there to decide on the important design decisions, and so on. 'Process?' No, none I can identify, even in retrospect. The one thing that kept the source in control was following an 'agile development' paradigm which we decided to implement from the start: refactor mercilessly. We did, and holy hell did it break the whole game apart all the time. 
But it did keep the source clean, and when we decided to go for 'stable releases' every now and then, it all seemed to come together. A: Referring to the page you linked to - I say follow the processes. I am a solo dev and I follow these processes. You can't write software without knowing your requirements and prerequisites. As others have said, get to know how you work and your strengths & weaknesses. Also, sometimes you'll get stuck & need a little outside help. Nobody knows everything. The whole process takes time (often never ends) and I have killed far too many brain cells over the years to store every detail in my head. Mind maps, flow charts and things like OneNote are good for non-coding long-term memory. Try to keep the bulk of it in one place or at least linked together so you don't have to try to remember where to look for it. A: Follow your heart.
{ "language": "en", "url": "https://stackoverflow.com/questions/123837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Get the resolution of a jpeg image using C# and the .NET Environment? Our clients will be uploading images to be printed on their documents and we have been asked to come up with a way to get the resolution of the image in order to warn them if the image has too low a resolution and will look pixelated in the end product. If it comes to it we could also go with the dimensions if anyone knows how to get those, but the resolution would be preferred. Thank you. A: It depends what you are looking for... if you want the DPI of the image then you are looking for the HorizontalResolution property, which is the DPI of the image. Image i = Image.FromFile(@"fileName.jpg"); float dpi = i.HorizontalResolution; If you want to figure out how large the image is then you need to calculate the measurements of the image, which is (the cast is needed because the resolution properties are floats): int docHeight = (int)(i.Height / i.VerticalResolution); int docWidth = (int)(i.Width / i.HorizontalResolution); This will give you the document height and width in inches which you could then compare to the min size needed. A: DPI only makes sense when printing. 72dpi is the Mac standard and 96dpi is the Windows standard. Screen resolution only takes pixels into account, so a 72dpi 800x600 jpeg is the same screen resolution as a 96dpi 800x600 one. Back in the '80s, Mac used 72dpi screen/print resolution to fit the screen/print size, so when you had an image on screen at 1:1, it corresponded to the same size on the printer. Windows increased the screen resolution to 96dpi to have better font display... but as a consequence, the screen image doesn't fit the printed size anymore. So, for web projects, don't bother with DPI if the image isn't for print; 72dpi, 96dpi, even 1200dpi should display the same. 
A: Image image = Image.FromFile( [file] ); GraphicsUnit unit = GraphicsUnit.Point; RectangleF rect = image.GetBounds( ref unit ); float hres = image.HorizontalResolution; float vres = image.VerticalResolution; A: Using System.Drawing.Image: Image newImage = Image.FromFile("SampImag.jpg"); float dpi = newImage.HorizontalResolution;
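The arithmetic behind the warning is worth spelling out: physical print size in inches is just pixel dimension divided by DPI, and an upload is too low-resolution when its effective DPI at the target print size drops below some minimum. A small sketch of that check (in Java here rather than C#; the class and method names are illustrative, and 300 DPI is just a common print rule of thumb, not something from the question):

```java
public class PrintSizeCheck {
    // Physical print size in inches = pixel dimension / dots-per-inch.
    public static double printWidthInches(int pixelWidth, double dpi) {
        return pixelWidth / dpi;
    }

    // The image is too low-resolution for the target print width when its
    // effective DPI (pixels available per printed inch) falls below minDpi.
    public static boolean tooLowForPrint(int pixelWidth, double targetWidthInches, double minDpi) {
        return (pixelWidth / targetWidthInches) < minDpi;
    }
}
```

A 600-pixel-wide image printed at 300 DPI covers two inches; asked to fill four inches it drops to 150 effective DPI and should trigger the warning.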
{ "language": "en", "url": "https://stackoverflow.com/questions/123838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there an alternative to SVCUTIL.EXE for generating WCF Web service proxies? Am I missing something or is there truly no alternative (yet, I hope) to SVCUTIL.EXE for generating WCF Web service proxies? A: I would strongly suggest that you look through the auto-generated configuration before just using it; the autogenerated stuff is full of garbage. Try looking at this article by Miguel Castro: WCF the Manual Way... the Right Way A: I usually just use a ChannelFactory for a given interface. Provided the interface has the adequate WCF attributes, it should work fairly well. Here's a client example for a duplex channel: DuplexChannelFactory<IServerWithCallback> cf = new DuplexChannelFactory<IServerWithCallback>( new CallbackImpl(), new NetTcpBinding(), new EndpointAddress("net.tcp://localhost:9080/DataService")); A: If you're looking for a command-line alternative or standalone GUI then no - I don't know of any. However, if you're wondering about usage while developing in VS, VS2008's Add Service Reference is an alternative that can save you some headache. A: Doh. I was reading old docs and just realized that Add Service Reference does the grunt work for you. Thank you!
{ "language": "en", "url": "https://stackoverflow.com/questions/123842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way to develop against WordPress on Windows when you already have IIS/SQL Server installed? If you want to develop against WordPress (i.e., have a local instance running on your machine so you can develop themes, get blogs and sites laid out, etc.) and you're running Windows on your development machine with IIS and SQL Server already installed, what's the best way to do it? I found a method online which sets up a little "mini" server on Windows running instances of Apache and MySQL but they didn't advise using it on a machine with IIS already installed. Obviously one could install Apache and MySQL and do it that way but given what Windows affords you (i.e., methods of running PHP in IIS - I think Windows Server 2008 is even optimized for this), is that the best way? Are there ways to run WordPress with SQL Server as the backend? (I wouldn't think so but I thought I'd throw that out there). And do the methods differ depending on the version of Windows (i.e., XP, Vista, Vista64)? A: I run XAMPP on a thumbdrive and install WordPress (usually multiple instances of it) on there. Then I start up XAMPP when I'm going to work on Wordpress development. EDIT: this setup does require that IIS be stopped when the XAMPP server is running (or some byzantine configuration magic that I've never bothered to figure out). Since most of my personal needs for local IIS development are handled by the Visual Studio built-in instance of IIS, which can run side-by-side with XAMPP, I rarely have to bother with anything else, but that probably won't work for everyone. A: Install PHP, run Wordpress in IIS. Install MySQL, which can be run side-by-side with MSSQL. The only thing you'll miss using IIS over Apache is mod_rewrite for prettier URLs. Avoid running IIS and Apache on the same machine if at all possible. IIS likes to bind to all available IPs, blocking Apache from binding to an IP, which you can get around if necessary, but it's not immediately clear what's happening. 
I've been running this setup for years. A: Since you are interested in developing for Wordpress I strongly suggest you use the most common WP setup: Apache, PHP and MySQL. You can run Apache and IIS at the same time (I have IIS listening on port 81 and Apache on 80) or you can run only one at a time (create 2 bat files to start/stop the servers using the net start/stop command). You can use IIS, PHP, MySQL to run Wordpress but there are some subtle differences that can drive you crazy or cause problems when you deploy on Apache. A: You can certainly run IIS and Apache on the same box. We do it currently with Documentum/Apache and IIS on the same server. Just pick a range of ports for one web server - 808x for Apache for example. You should also consider using Thinstall from VMWare where you can virtualize an entire application - registry, .Net and all - distribute as a single .EXE. We do this now for packaging applications that don't play well together. You might want to virtualize Wordpress/Apache/MySql and set a port (808x) for that configuration. This way you can move this to any server with IIS and it'll play well with different configurations.
{ "language": "en", "url": "https://stackoverflow.com/questions/123848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Delaying an exception I have a method that periodically (e.g. once in every 10 secs) tries to connect to a server and read some data from it. The server might not be available all the time. If the server is not available the method throws an exception. What would be the best way to implement a wrapper method that doesn't throw an exception except if the server wasn't available for at least one minute? A: Keep track of when the last time you successfully reached the server was. If the server throws an exception, catch it and compare to the last time you reached the server. If that time is more than a minute, rethrow the exception. A: In pseudocode: //Create Timer //Start Timer bool connected = false; while (!connected) { try { //Connect To DB connected = true; } catch (Exception ex) { if (more than 1 minute has passed) throw new Exception(ex); } } A: You will have to record the time that you originally try to connect to the server and then catch the exception. If the time that the exception is caught is more than the original time + 1 minute, rethrow the exception. If not, retry. A: Ideally you can put a timeout on the call to the server. Failing that, do a Thread.sleep in the catch block and try it again, and fail if the second one doesn't return. A: Remember that exception handling is just a very specialized use of the usual "return" system. (For more technical details, read up on "monads".) If the exceptional situation you want to signal does not fit naturally into Java's exception handling system, it may not be appropriate to use exceptions. You can keep track of error conditions the usual way: Keep a state variable, update it as needed with success/failure info, and respond appropriately as the state changes. 
A: You could have a retry count, and if the desired count (6 in your case) had been met, then throw an exception: int count = 0; CheckServer(count); public void CheckServer(int count) { try { // connect to server } catch(Exception e) { if(count < MAX_ATTEMPTS) { // wait 10 seconds CheckServer(count + 1); } else { throw e; } } } A: You can set a boolean variable for whether or not the server connection has succeeded, and check it in your exception handler, like so: class ServerTester : public Object { private bool failing; private ServerConnection serverConnection; private Time firstFailure; public ServerTester(): failing(false) { } public void TestServer() throws ServerException { try { serverConnection.Connect(); failing = false; } catch (ServerException e) { if (failing) { if (Time::GetTime() - firstFailure > 60) { failing = false; throw e; } } else { firstFailure = Time::GetTime(); failing = true; } } } } I don't know what the actual time APIs are, since it's been a while since I last used Java. This will do what you ask, but something about it doesn't seem right. Polling for exceptions strikes me as a bit backwards, but since you're dealing with a server, I can't think of any other way off the top of my head.
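The "remember when the failures started" idea that runs through these answers can be isolated into a small helper so the wrapper method stays readable. A sketch (in Java; the class name is made up, and the clock is passed in explicitly so the logic can be exercised without waiting a real minute):

```java
public class FailureWindow {
    private final long windowMillis;        // how long failures are tolerated
    private long firstFailureMillis = -1;   // -1 means no ongoing failure streak

    public FailureWindow(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Call after each successful connection: the streak is over.
    public void recordSuccess() {
        firstFailureMillis = -1;
    }

    // Call after each failed connection; returns true once failures have
    // persisted for the whole window and the exception should be rethrown.
    public boolean shouldRethrow(long nowMillis) {
        if (firstFailureMillis < 0) {
            firstFailureMillis = nowMillis;  // streak starts now
        }
        return nowMillis - firstFailureMillis >= windowMillis;
    }
}
```

The polling method then calls recordSuccess() on success, and on exception calls shouldRethrow(System.currentTimeMillis()), swallowing the exception unless it returns true.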
{ "language": "en", "url": "https://stackoverflow.com/questions/123862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Where can I find good Domain Driven Design resources? What are the best places to find out everything there is to know about Domain-Driven Design, from beginner to advanced. * *Books *Websites *Mailing lists *User groups *Conferences *etc A: Wikipedia has some useful information, especially its summary of how DDD relates to other approaches. http://en.wikipedia.org/wiki/Domain-driven_design It also links to two presentations by Eric Evans * *http://www.infoq.com/presentations/model-to-work-evans *http://www.infoq.com/presentations/strategic-design-evans A: This article is a good introduction on how to do DDD in practice. A: Here are some interesting sources: * *the DDD book by Eric Evans *the free DDD Quickly book *the DDD newsgroup A: Maybe read the book Domain Driven Design? A: I recommend Domain Driven Design from Eric Evans, it's a great book on the subject. A: Here are some informative sources: * *An interview with Eric Evans on Software Engineering Radio *A book which applies the principles of DDD using an example in C# *A podcast on Getting Started With Domain-Driven Design by Rob Conery *A conversation between Scott Hanselman and Rob Conery on Learning DDD. A: Late answer, perhaps :) however, in case anyone is still interested, I found some very useful information and considerations on DDD on Epic.NET project site. A: Applying Domain-Driven Design and Patterns is a very good book on the subject. Lots of good examples as well as discussion of related subjects like test driven development and how they apply. Also check out domaindrivendesign.org. A: Casey Charlton has created a new DDD resource site at http://dddstepbystep.com/. It is a great reference site and has lots of info for DDD newbies and experts alike.
{ "language": "en", "url": "https://stackoverflow.com/questions/123886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Ways to do "related searches" functionality I've seen a few sites that list related searches when you perform a search, namely they suggest other search queries you may be interested in. I'm wondering the best way to model this in a medium-sized site (not enough traffic to rely on visitor stats to infer relationships). My initial thought is to store the top 10 results for each unique query, then when a new search is performed to find all the historical searches that match some amount of the top 10 results but ideally not matching all of them (matching all of them might suggest an equivalent search and hence not that useful as a suggestion). I imagine that some people have done this functionality before and may be able to provide some ideas of different ways to do this. I'm not necessarily looking for one winning idea since the solution will no doubt vary substantially depending on the size and nature of the site. A: I've tried a number of different approaches to this, with various degrees of success. In the end, I think the best approach is highly dependent on the domain/topics being searched, and how the users form queries. Your thought about storing previous searches seems reasonable to me. I'd be curious to see how it works in practice (I mean that in the most sincere way -- there are many nuances that can cause these techniques to fail in the "real world", particularly when data is sparse). Here are some techniques I've used in the past, and seen in the literature: * *Thesaurus based approaches: Index into a thesaurus for each term that the user has used, and then use some heuristic to filter the synonyms to show the user as possible search terms. 
*Stem and search on that: Stem the search terms (e.g. with the Porter Stemming Algorithm) and then use the stemmed terms instead of the initially provided queries, and give the user the option of searching for exactly the terms they specified (or do the opposite, search the exact terms first, and use stemming to find the terms that stem to the same root. This second approach obviously takes some pre-processing of a known dictionary, or you can collect terms as your indexing term finds them.) *Chaining: Parse the results found by the user's query and extract key terms from the top N results (KEA is one library/algorithm that you can look at for keyword extraction techniques.) A: Have you considered a matrix with keywords on one axis vs. documents on the other axis? Once you find the set of vectors representing the keywords, find the sets of keyword(s) found in your initial result set and then find a way to rank the other keywords by how many documents they reference or how many times they intersect the initial result set.
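The question's own idea, comparing stored top-10 result sets and suggesting historical queries that overlap partially but not completely, boils down to a set-similarity threshold. A minimal sketch (in Java; the names and thresholds are illustrative, not prescribed by the question):

```java
import java.util.HashSet;
import java.util.Set;

public class RelatedSearches {
    // Jaccard similarity of two stored top-N result sets:
    // |intersection| / |union|, in [0, 1].
    public static double overlap(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) inter.size() / union.size();
    }

    // A historical query is a useful suggestion when it shares some results
    // with the current query (so it's related) but not nearly all of them
    // (which would make it an equivalent search rather than a suggestion).
    public static boolean isRelated(Set<String> current, Set<String> historical,
                                    double minOverlap, double maxOverlap) {
        double o = overlap(current, historical);
        return o >= minOverlap && o < maxOverlap;
    }
}
```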
{ "language": "en", "url": "https://stackoverflow.com/questions/123900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I sign a Java applet using a certificate in my Mac keychain? I have a self-signed root certificate with just the code signing extension (no other extensions) in my Mac keychain; I use it to sign all code coming out of ∞labs using Apple's codesign tool and it works great. I was looking to expand myself a little and doing some Java development. I know Apple provides a KeyStore implementation that reads from the Keychain, and I can list all certificates I have in the 'chain with: keytool -list -provider com.apple.crypto.provider.Apple -storetype KeychainStore -keystore NONE -v However, whenever I try to use jarsigner to sign a simple test JAR file, I end up with: $ jarsigner -keystore NONE -storetype KeychainStore -providerName Apple a.jar infinitelabs_codesigning_2 Enter Passphrase for keystore: <omitted> jarsigner: Certificate chain not found for: infinitelabs_codesigning_2. infinitelabs_codesigning_2 must reference a valid KeyStore key entry containing a private key and corresponding public key certificate chain. What am I doing wrong? (The certificate was created following Apple's instructions for obtaining a signing identity.) A: I think that your keystore entry alias must be wrong. Are you using the alias name of a keystore object with an entry type of "keyEntry"? The same command works perfectly for me. From the jarsigner man page: When using jarsigner to sign a JAR file, you must specify the alias for the keystore entry containing the private key needed to generate the signature. A: Have you tried to export the key from the apple keychain and import it via keytool? Perhaps Apple hasn't properly integrated keytool with their keychain (not like they have a stellar track record with supporting Java). Edit: Hmm... I just tried taking a key that worked from the java store that I imported into the apple keychain (has a private/public key) and it doesn't work. 
So either my importing is wrong, you cannot access the apple Keychain in this way, or something else is going wrong :-) A: I have been trying to do this as well. I went through a few contortions and, using Keystore Explorer (see "I lost my public key. Can I recover it from a private key?"), I was able to extract the certificate, private key, and public key from the .keystore file and move them into an OSX keychain. Note that in this case I probably didn't need the public key. If I give jarsigner the name of the private key (as opposed to the name of my self-signed certificate based on that key), then I get the error you mentioned. My guess then is that your problem is one of the following: * *Your keychain contains the cert but not the private key *Your keychain contains the private key but not the cert *"infinitelabs_codesigning_2" refers to the private key rather than the cert I'm able to use your jarsigner command line (thanks!) and get correct results, which I checked with jarsigner -verify.
{ "language": "en", "url": "https://stackoverflow.com/questions/123902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: C# Download all files in HTTP directory How do I download all files in a directory and all subdirectories on an HTTP server? A: By using a command-line tool like wget rather than reinventing the wheel. A: If directory browsing is enabled on the server then you can crawl the directory listings, i.e. use HttpWebRequest to get the listing page, parse the response to find the file links, download each file (also with HttpWebRequest), navigate to each subfolder, rinse and repeat. If directory browsing isn't enabled then you can't really download ALL files in ALL subdirectories because you can't know they exist. However, you could still use HttpWebRequest to crawl the exposed web pages and download any linked files that are of interest.
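The crawl described in the second answer has two separable parts: fetching pages and parsing each listing for links, treating a trailing slash as "subdirectory, recurse" and anything else as "file, download". The parsing step is easy to show in isolation; a sketch (in Java rather than C#, with a deliberately naive regex; real listings vary, and a proper HTML parser is safer in practice):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ListingParser {
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]+)\"");

    // Extract link targets from an HTML directory listing, skipping the
    // parent-directory link and the query-string sort links that server
    // listings often include. Entries ending in '/' are subdirectories.
    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            String target = m.group(1);
            if (!target.startsWith("?") && !target.equals("../")) {
                links.add(target);
            }
        }
        return links;
    }
}
```

The driver loop then fetches each entry: recurse when it ends with '/', otherwise stream the file to disk.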
{ "language": "en", "url": "https://stackoverflow.com/questions/123911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can one simplify network byte-order conversion from a BinaryReader? System.IO.BinaryReader reads values in a little-endian format. I have a C# application connecting to a proprietary networking library on the server side. The server-side sends everything down in network byte order, as one would expect, but I find that dealing with this on the client side is awkward, particularly for unsigned values. UInt32 length = (UInt32)IPAddress.NetworkToHostOrder(reader.ReadInt32()); is the only way I've come up with to get a correct unsigned value out of the stream, but this seems both awkward and ugly, and I have yet to test if that's just going to clip off high-order values so that I have to do fun BitConverter stuff. Is there some way I'm missing short of writing a wrapper around the whole thing to avoid these ugly conversions on every read? It seems like there should be an endian-ness option on the reader to make things like this simpler, but I haven't come across anything. A: There is no built-in converter. 
Here's my wrapper (as you can see, I only implemented the functionality I needed but the structure is pretty easy to change to your liking): /// <summary> /// Utilities for reading big-endian files /// </summary> public class BigEndianReader { public BigEndianReader(BinaryReader baseReader) { mBaseReader = baseReader; } public short ReadInt16() { return BitConverter.ToInt16(ReadBigEndianBytes(2), 0); } public ushort ReadUInt16() { return BitConverter.ToUInt16(ReadBigEndianBytes(2), 0); } public uint ReadUInt32() { return BitConverter.ToUInt32(ReadBigEndianBytes(4), 0); } public byte[] ReadBigEndianBytes(int count) { byte[] bytes = new byte[count]; for (int i = count - 1; i >= 0; i--) bytes[i] = mBaseReader.ReadByte(); return bytes; } public byte[] ReadBytes(int count) { return mBaseReader.ReadBytes(count); } public void Close() { mBaseReader.Close(); } public Stream BaseStream { get { return mBaseReader.BaseStream; } } private BinaryReader mBaseReader; } Basically, ReadBigEndianBytes does the grunt work, and this is passed to a BitConverter. There will be a definite problem if you read a large number of bytes since this will cause a large memory allocation. A: I built a custom BinaryReader to handle all of this. It's available as part of my Nextem library. It also has a very easy way of defining binary structs, which I think will help you here -- check out the Examples. Note: It's only in SVN right now, but very stable. If you have any questions, email me at cody_dot_brocious_at_gmail_dot_com.
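For comparison, Java's DataInputStream reads big-endian (network order) natively, so the equivalent of the wrapper above mostly collapses to widening the result to dodge sign problems. The same masking trick works on a raw byte array, which is the core of any such reader; a sketch (the class and method names are mine):

```java
public class NetworkOrder {
    // Assemble a 4-byte network-order (big-endian) unsigned integer.
    // Returned as long so values above Integer.MAX_VALUE keep their meaning.
    public static long readUInt32(byte[] b, int off) {
        return ((b[off] & 0xFFL) << 24)
             | ((b[off + 1] & 0xFFL) << 16)
             | ((b[off + 2] & 0xFFL) << 8)
             |  (b[off + 3] & 0xFFL);
    }

    // Same idea for a 2-byte unsigned value; an int comfortably holds it.
    public static int readUInt16(byte[] b, int off) {
        return ((b[off] & 0xFF) << 8) | (b[off + 1] & 0xFF);
    }
}
```

The masking (& 0xFF before shifting) is what prevents sign extension when a byte's high bit is set, which is exactly the clipping worry raised in the question.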
{ "language": "en", "url": "https://stackoverflow.com/questions/123918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Automatic newlines and formatting for blogging software I'm writing my own blogging engine in PHP with a MySQL backend database. My question is: How would you go about making user comments and blog posts include newlines wherever they are appropriate? For example, if a user hits the return key in the message/comments box how would this translate into a new line that would show in the browser when the comment is viewed? A: PHP has a function: nl2br which turns new lines into <br /> www.php.net/nl2br A: Replace \n\n with </p><p> and then replace \n with <br>. PS: Pirate day was last week :). A: nl2br() (http://php.net/nl2br) is perfectly good, however that Wordpress Guy (Matt Mullenweg) has a really good function, which is a bit more advanced as it converts double line breaks to paragraphs instead (better semantically). You can find it in the Wordpress source code or here: http://ma.tt/scripts/autop/ A: It's also important what you're using for a comment editor. If you're using a standard textbox then yes, nl2br is what you're looking for. If you're going a bit more advanced such as using a WYSIWYG editor like tinyMCE, then it has configuration that can handle that for you. A: If you happen to need more formatting options (beyond paragraphs), employ something like Text_Wiki or PHP Markdown. The advantages would be: * *no need to allow HTML and deal with all the filtering (that's good :-)) *clear/familiar guide to formatting data *a lot of flexibility when you generate the HTML (in the end for display) The disadvantages: * *no HTML (blessing and curse ;-)) *people might not be familiar with the syntax
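The "replace \n\n with </p><p>, then \n with <br>" recipe from the second answer is simple enough to write directly; here it is sketched in Java rather than PHP so it can stand alone (in PHP you would just use nl2br() or the autop function linked above):

```java
public class AutoParagraph {
    // Double (or more) newlines become paragraph breaks, remaining single
    // newlines become <br>, and the whole text is wrapped in one <p> pair.
    public static String format(String text) {
        String normalized = text.replace("\r\n", "\n");  // tolerate CRLF form input
        String body = normalized
                .replaceAll("\n{2,}", "</p><p>")
                .replace("\n", "<br>");
        return "<p>" + body + "</p>";
    }
}
```

Note the ordering matters: paragraph breaks must be handled before single newlines, or every blank line would turn into two <br> tags instead of a paragraph boundary.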
{ "language": "en", "url": "https://stackoverflow.com/questions/123920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to find USB drive letter? I'm writing a setup program to install an application to a USB drive. The application is meant to be used only from USB drives, so it would save an extra step for the user by automatically selecting USB drive to install to. I might explore using Nullsoft or MSI for install, but since I'm mostly familiar with .NET I initially plan to try either custom .NET installer or setup component on .NET. Is it possible to determine the drive letter of a USB flash drive on Windows using .NET? How? A: You could use: from driveInfo in DriveInfo.GetDrives() where driveInfo.DriveType == DriveType.Removable && driveInfo.IsReady select driveInfo.RootDirectory.FullName A: This will enumerate all the drives on the system without LINQ but still using WMI: // browse all USB WMI physical disks foreach(ManagementObject drive in new ManagementObjectSearcher( "select * from Win32_DiskDrive where InterfaceType='USB'").Get()) { // associate physical disks with partitions foreach(ManagementObject partition in new ManagementObjectSearcher( "ASSOCIATORS OF {Win32_DiskDrive.DeviceID='" + drive["DeviceID"] + "'} WHERE AssocClass = Win32_DiskDriveToDiskPartition").Get()) { Console.WriteLine("Partition=" + partition["Name"]); // associate partitions with logical disks (drive letter volumes) foreach(ManagementObject disk in new ManagementObjectSearcher( "ASSOCIATORS OF {Win32_DiskPartition.DeviceID='" + partition["DeviceID"] + "'} WHERE AssocClass = Win32_LogicalDiskToPartition").Get()) { Console.WriteLine("Disk=" + disk["Name"]); } } // this may display nothing if the physical disk // does not have a hardware serial number Console.WriteLine("Serial=" + new ManagementObject("Win32_PhysicalMedia.Tag='" + drive["DeviceID"] + "'")["SerialNumber"]); } Source A: C# 2.0 version of Kent's code (from the top of my head, not tested): IList<String> fullNames = new List<String>(); foreach (DriveInfo driveInfo in DriveInfo.GetDrives()) { if (driveInfo.DriveType == 
DriveType.Removable) { fullNames.Add(driveInfo.RootDirectory.FullName); } }
Q: Do you use special comments on bug fixes in your code? Some of my colleagues use special comments on their bug fixes, for example: // 2008-09-23 John Doe - bug 12345 // <short description> Does this make sense? Do you comment bug fixes in a special way? Please let me know. A: I tend not to comment in the actual source because it can be difficult to keep up to date. However, I do put linking comments in my source control log and issue tracker. e.g. I might do something like this in Perforce: [Bug-Id] Problem with xyz dialog. Moved sizing code to abc and now initialise later. Then in my issue tracker I will do something like: Fixed in changelist 1234. Moved sizing code to abc and now initialise later. Because then a good historic marker is left. Also, it makes it easy if you want to know why a particular line of code is a certain way: you can just look at the file history. Once you've found the line of code, you can read my commit comment and clearly see which bug it was for and how I fixed it. A: Only if the solution was particularly clever or hard to understand. A: I usually add my name, my e-mail address and the date along with a short description of what I changed. That's because as a consultant I often fix other people's code. // Glenn F. Henriksen (email@company.no) - 2008-09-23 // <Short description> That way the code owners, or the people coming in after me, can figure out what happened and they can get in touch with me if they have to. (yes, unfortunately, more often than not they have no source control... for internal stuff I use TFS tracking) A: While this may seem like a good idea at the time, it quickly gets out of hand. Such information can be better captured using a good combination of a source control system and a bug tracker. Of course, if there's something tricky going on, a comment describing the situation would be helpful in any case, but not the date, name, or bug number.
The code base I'm currently working on is something like 20 years old, and they seem to have added lots of comments like this years ago. Fortunately, they stopped doing it a few years after they converted everything to CVS in the late 90s. However, such comments are still littered throughout the code and the policy now is "remove them if you're working directly on that code, but otherwise leave them". They're often really hard to follow, especially if the same code is added and removed several times (yes, it happens). They also don't contain the date, but contain the bug number, which you'd have to go look up in an archaic system to find the date, so nobody does. A: I don't put in comments like that; the source control system already maintains that history and I am already able to log the history of a file. I do put in comments that describe why something non-obvious is being done, though. So if the bug fix makes the code less predictable and clear, then I explain why. A: Comments like this are why Subversion lets you type a log entry on every commit. That's where you should put this stuff, not in the code. A: Whilst I do tend to see some comments on bugs inside the code at work, my personal preference is linking a code commit to one bug. When I say one I really mean one bug. Afterwards you can always look at the changes made and know which bug they were applied to. A: I do it if the bug fix involves something that's not straightforward, but more often than not, if the bugfix requires a long explanation I take it as a sign that the fix wasn't designed well. Occasionally I have to work around a public interface that can't change, so this tends to be the source of these kinds of comments, for example: // <date> [my name] - Bug xxxxx happens when the foo parameter is null, but // some customers want the behavior. Jump through some hoops to find a default value. In other cases the source control commit message is what I use to annotate the change.
A: That style of commenting is extremely valuable in a multi-developer environment where there is a range of skills and/or business knowledge across the developers (e.g., everywhere). To the experienced, knowledgeable developer the reason for a change may be obvious, but for newer developers that comment will make them think twice and do more investigation before messing with it. It also helps them learn more about how the system works. Oh, and a note from experience about the "I just put that in the source control system" comments: If it isn't in the source, it didn't happen. I can't count the number of times the source history for projects has been lost due to inexperience with the source control software, improper branching models, etc. There is only one place the change history cannot be lost - and that's in the source file. I usually put it there first, then cut 'n paste the same comment when I check it in. A: Over time these can accumulate and add clutter. It's better to make the code clear, add any comments for related gotchas that may not be obvious, and keep the bug detail in the tracking system and repository. A: No, I don't, and I hate having graffiti like that litter the code. Bug numbers can be tracked in the commit message to the version control system, and by scripts to push relevant commit messages into the bug tracking system. I do not believe they belong in the source code, where future edits will just confuse things. A: Often a comment like that is more confusing, as you don't really have context as to what the original code looked like, or the original bad behavior. In general, if your bug fix now makes the code run CORRECTLY, simply leave it without comments. There is no need to comment correct code. Sometimes the bug fix makes things look odd, or the bug fix is testing for something that is out of the ordinary. Then it might be appropriate to have a comment - usually the comment should refer back to the "bug number" from your bug database.
For example, you might have a comment that says "Bug 123 - Account for odd behavior when the user is in 640 by 480 screen resolution". A: If you add comments like that, after a few years of maintaining the code you will have so many bug fix comments you won't be able to read the code. But if you change something that looks right (but has a subtle bug) into something that is more complicated, it's nice to add a short comment explaining what you did, so that the next programmer to maintain this code doesn't change it back because he (or she) thinks you over-complicated things for no good reason. A: No. I use Subversion and always enter a description of my motivation for committing a change. I typically don't restate the solution in English; instead I summarize the changes made. I have worked on a number of projects where they put comments in the code when bug fixes were made. Interestingly, and probably not coincidentally, these were projects which either didn't use any sort of source control tool or were mandated to follow this sort of convention by fiat from management. Quite honestly, I don't really see the value in doing this for most situations. If I want to know what changed, I'll look at the Subversion log and the diff. Just my two cents. A: If the code is corrected, the comment is useless and never interesting to anybody - just noise. If the bug isn't solved, the comment is wrong. Then it makes sense. :) So only leave such comments if you didn't really solve the bug. A: To locate one's specific comment we use DKBUGBUG - which means David Kelley's fix - and the reviewer can easily identify it. Of course, we will add the date and the VSTS bug tracking number along with this. A: Don't duplicate metadata that your VCS is going to keep for you. Dates and names should be automatically added by the VCS. Ticket numbers, manager/user names that requested the change, etc. should be in VCS comments, not the code.
Rather than this: //$DATE $NAME $TICKET //useful comment to the next poor soul I would do this: //useful comment to the next poor soul A: If the code is on a live platform, away from direct access to the source control repository, then I will add comments to highlight the changes made as a part of the fix for a bug on the live system. Otherwise, no: the message that you enter at checkin should contain all the info you need. cheers, Rob A: When I make bugfixes/enhancements in third-party libraries/components I often make some comments. This makes it easier to find and move the changes if I need to use a newer version of the library/component. In my own code I seldom comment bugfixes. A: I don't work on multi-person projects, but I sometimes add comments about a certain bug to a unit test. Remember, there's no such thing as bugs, just insufficient testing. A: Since I do as much TDD as possible (everything else is social suicide, because every other method will force you to work endless hours), I seldom fix bugs. Most of the time I add special remarks like this one to the code: // I KNOW this may look strange to you, but I have to use // this special implementation here - if you don't understand that, // maybe you are the wrong person for the job. Sounds harsh, but most people who call themselves "developers" deserve no other remarks.
Q: What causes "Invalid advise flags" run-time error in Excel VBA? I have an Excel macro that generates this error whenever it gets input of a specific format. Does anyone know, in general, what an advise flag is, or where I can find information on this type of error? Thanks Runtime error -2147221503 (80040001): Automation error, Invalid advise flags A: It's part of Microsoft's OLE component. See msdn This doesn't solve your problem, but maybe you can find more info there. A: For anyone pulling their hair out: this could also be a custom error and unrelated to 'Invalid advise flags': Err.Raise (vbObjectError + 1) Note: vbObjectError = -2147221504
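The collision between the custom error and the OLE error is just 32-bit HRESULT arithmetic: vbObjectError is 0x80040000 read as a signed 32-bit integer, so vbObjectError + 1 produces the same numeric value as OLE's 0x80040001 ("Invalid advise flags"). A quick sketch of the arithmetic, in Python for illustration:

```python
import ctypes

# vbObjectError is 0x80040000 reinterpreted as a signed 32-bit integer
VB_OBJECT_ERROR = ctypes.c_int32(0x80040000).value
assert VB_OBJECT_ERROR == -2147221504

# Err.Raise (vbObjectError + 1) therefore surfaces as -2147221503,
# numerically identical to OLE's 0x80040001 ("Invalid advise flags")
assert VB_OBJECT_ERROR + 1 == ctypes.c_int32(0x80040001).value == -2147221503
```

So a runtime error of -2147221503 may simply be somebody's Err.Raise (vbObjectError + 1) rather than a genuine OLE advise-flags problem.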
Q: Convert WPF to PDF I have a WPF application and I want to be able to save the output of the application to a PDF document. The item I want to save can be a Visual, a Drawing, or a FixedDocument (I can create it as any of those options, and it's easy to convert between them). Is there any library that can convert directly from WPF to PDF (without writing code to walk the visual tree and recreate it using PDF primitives)? A: If you got your WPF visual tree rendered into XPS, then try this XPS to PDF converter: NIXPS. A: For this scenario I recommend using the XpsDocumentWriter class to get an XPS document, then converting it to PDF using Amyuni PDF Creator. Disclaimer: I work for Amyuni Technologies A: We have just released a new library that facilitates this: NiPDF v1.0 Here is a link to an example on our site that converts WPF to PDF. It is a 100% managed .NET assembly, and you don't need to learn an arcane API to be able to use it.
Q: How to get/set logical directory path in python In Python, is it possible to get or set a logical directory path (as opposed to an absolute one)? For example, if I have: /real/path/to/dir and I have /linked/path/to/dir linked to the same directory. Using os.getcwd and os.chdir will always use the absolute path >>> import os >>> os.chdir('/linked/path/to/dir') >>> print os.getcwd() /real/path/to/dir The only way I have found to get around this at all is to launch 'pwd' in another process and read the output. However, this only works until you call os.chdir for the first time. A: The underlying operating system/shell reports real paths to Python. So, there really is no way around it, since os.getcwd() is a wrapped call to the C library's getcwd() function. There are some workarounds in the spirit of the one that you already know, which is launching pwd. Another one would involve using os.environ['PWD']. If that environment variable is set you can make some getcwd function that respects it. The solution below combines both: import os from subprocess import Popen, PIPE class CwdKeeper(object): def __init__(self): self._cwd = os.environ.get("PWD") if self._cwd is None: # no environment. fall back to calling pwd on shell self._cwd = Popen('pwd', stdout=PIPE).communicate()[0].strip() self._os_getcwd = os.getcwd self._os_chdir = os.chdir def chdir(self, path): if not self._cwd: return self._os_chdir(path) p = os.path.normpath(os.path.join(self._cwd, path)) result = self._os_chdir(p) self._cwd = p os.environ["PWD"] = p return result def getcwd(self): if not self._cwd: return self._os_getcwd() return self._cwd cwd = CwdKeeper() print cwd.getcwd() # use only cwd.chdir and cwd.getcwd from now on. # monkeypatch os if you want: os.chdir = cwd.chdir os.getcwd = cwd.getcwd # now you can use os.chdir and os.getcwd as normal. 
A: This also does the trick for me: import os os.popen('pwd').read().strip('\n') Here is a demonstration in the Python shell: >>> import os >>> os.popen('pwd').read() '/home/projteam/staging/site/proj\n' >>> os.popen('pwd').read().strip('\n') '/home/projteam/staging/site/proj' >>> # Also works if PWD env var is set >>> os.getenv('PWD') '/home/projteam/staging/site/proj' >>> # This gets actual path, not symlinked path >>> import subprocess >>> p = subprocess.Popen('pwd', stdout=subprocess.PIPE) >>> p.communicate()[0] # returns non-symlink path '/home/projteam/staging/deploys/20150114-141114/site/proj\n' Getting the environment variable PWD didn't always work for me, so I use the popen method. Cheers!
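A more compact variant of the PWD-environment approach from the answers above. The helper name logical_cwd is my own, hypothetical; the sketch trusts $PWD (which login shells keep up to date with the symlink-preserving path) only while it still resolves to the same place as the physical working directory:

```python
import os

def logical_cwd():
    """Return the logical working directory, like the shell's `pwd -L`.

    Prefer the PWD environment variable, but only if it still resolves
    to the same directory as os.getcwd(); otherwise fall back to the
    physical (symlink-resolved) path.
    """
    pwd = os.environ.get("PWD")
    if pwd and os.path.realpath(pwd) == os.path.realpath(os.getcwd()):
        return pwd          # logical path, symlinks preserved
    return os.getcwd()      # physical path, symlinks resolved
```

Like the CwdKeeper class above, this goes stale after os.chdir() unless you also update os.environ["PWD"] yourself, which is exactly what a shell does.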
Q: URL Encryption in Java What is the best way to encrypt a URL with parameters in Java? A: The only way to do this is to use SSL/TLS (https). If you use plain old HTTP, the URL will definitely be sent in the clear. A: Unfortunately, almost nothing is simple in Java :-). For this simple and common task I wasn't able to find a ready-made library, so I ended up writing this (this was the source): import java.net.URLDecoder; import java.net.URLEncoder; import javax.crypto.Cipher; import javax.crypto.SecretKey; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.PBEParameterSpec; /** * An easy to use class to encrypt and decrypt a string. Just call the simplest * constructor and the needed methods. * */ public class StringEncryptor { private Cipher encryptCipher; private Cipher decryptCipher; private sun.misc.BASE64Encoder encoder = new sun.misc.BASE64Encoder(); private sun.misc.BASE64Decoder decoder = new sun.misc.BASE64Decoder(); final private String charset = "UTF-8"; final private String defaultEncryptionPassword = "PAOSIDUFHQWER98234QWE378AHASDF93HASDF9238HAJSDF923"; final private byte[] defaultSalt = { (byte) 0xa3, (byte) 0x21, (byte) 0x24, (byte) 0x2c, (byte) 0xf2, (byte) 0xd2, (byte) 0x3e, (byte) 0x19 }; /** * The simplest constructor which will use a default password and salt to * encode the string. * * @throws SecurityException */ public StringEncryptor() throws SecurityException { setupEncryptor(defaultEncryptionPassword, defaultSalt); } /** * Dynamic constructor to supply your own key and salt, which are going to be used * to encrypt and then decrypt the given string. 
* * @param encryptionPassword * @param salt */ public StringEncryptor(String encryptionPassword, byte[] salt) { setupEncryptor(encryptionPassword, salt); } public void init(char[] pass, byte[] salt, int iterations) throws SecurityException { try { // use the iterations argument instead of a hard-coded count PBEParameterSpec ps = new javax.crypto.spec.PBEParameterSpec(salt, iterations); SecretKeyFactory kf = SecretKeyFactory.getInstance("PBEWithMD5AndDES"); SecretKey k = kf.generateSecret(new javax.crypto.spec.PBEKeySpec(pass)); encryptCipher = Cipher.getInstance("PBEWithMD5AndDES/CBC/PKCS5Padding"); encryptCipher.init(Cipher.ENCRYPT_MODE, k, ps); decryptCipher = Cipher.getInstance("PBEWithMD5AndDES/CBC/PKCS5Padding"); decryptCipher.init(Cipher.DECRYPT_MODE, k, ps); } catch (Exception e) { throw new SecurityException("Could not initialize CryptoLibrary: " + e.getMessage()); } } /** * * method to encrypt a string. * * @param str * Description of the Parameter * * @return String the encrypted string. * * @exception SecurityException * Description of the Exception */ public synchronized String encrypt(String str) throws SecurityException { try { byte[] utf8 = str.getBytes(charset); byte[] enc = encryptCipher.doFinal(utf8); return URLEncoder.encode(encoder.encode(enc),charset); } catch (Exception e) { throw new SecurityException("Could not encrypt: " + e.getMessage()); } } /** * * method to decrypt a string. * * @param str * Description of the Parameter * * @return String the decrypted string. 
* * @exception SecurityException * Description of the Exception */ public synchronized String decrypt(String str) throws SecurityException { try { byte[] dec = decoder.decodeBuffer(URLDecoder.decode(str,charset)); byte[] utf8 = decryptCipher.doFinal(dec); return new String(utf8, charset); } catch (Exception e) { throw new SecurityException("Could not decrypt: " + e.getMessage()); } } private void setupEncryptor(String defaultEncryptionPassword, byte[] salt) { java.security.Security.addProvider(new com.sun.crypto.provider.SunJCE()); char[] pass = defaultEncryptionPassword.toCharArray(); int iterations = 3; init(pass, salt, iterations); } } A: The Java security API (http://java.sun.com/javase/technologies/security/) + URL encoding. A: It depends on your threat model. For example, if you want to protect the parameters sent by your Java app to your server from an attacker who has access to the communication channel, you should consider communicating with the server via TLS/SSL (i.e., HTTPS in your case) and the likes. If you want to protect the parameters from an attacker who has access to the machine where your Java client app runs, then you're in deeper trouble. A: If you really can't use SSL, I'd suggest a pre-shared key approach and adding a random IV (initialization vector). You can use any decent symmetric encryption method, e.g., AES, using a pre-shared key that you communicate out of band (email, phone, etc.). Then you generate a random initialization vector and encrypt your string with this IV and the key. Finally, you concatenate your ciphertext and the IV and send this as your parameter. The IV can be communicated in the clear without any risk. A: Are you sure you don't mean URL encode? Encoding is available through java.net.URLEncoder.encode. A: The standard way to encrypt HTTP traffic is to use SSL. However, even over HTTPS, the URL and any parameters in it (i.e. a GET request) will be sent in the clear. You would need to use SSL and do a POST request to properly encrypt your data. 
As pointed out in the comments, parameters will be encrypted no matter what HTTP method you use, as long as you use an SSL connection.
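One way to sidestep URL-escaping of the ciphertext entirely is to emit it in the URL-safe Base64 alphabet, which swaps '+' and '/' for '-' and '_' (modern Java has java.util.Base64.getUrlEncoder() for this; the sun.misc encoder used above predates it). A sketch of the difference, in Python for illustration:

```python
import base64

data = b'\xfb\xef\xbe\x00\xff'   # arbitrary bytes that produce '+' in standard base64

std = base64.b64encode(data).decode('ascii')           # '++++AP8=' -- '+' needs escaping in a URL
url = base64.urlsafe_b64encode(data).decode('ascii')   # '----AP8=' -- safe in a query string
assert base64.urlsafe_b64decode(url) == data           # round-trips losslessly
```

The trailing '=' padding is still commonly percent-encoded or stripped, but the '+'-versus-space ambiguity disappears.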
Q: Is there a better way to get hold of a reference to a movie clip in actionscript using a string without eval I have created a bunch of movie clips which all have similar names, and after some other event I have built up a string like: var clipName = "barLeft42" which is held inside another movie clip called 'thing'. I have been able to get hold of a reference using: var movieClip = Eval( "_root.thing." + clipName ) But that feels bad - is there a better way? A: Movie clips are collections in ActionScript (as in JavaScript, everything is basically key-value pairs). You can index into the collection using square brackets and a string for the key name, like: _root.thing[ "barLeft42" ] That should do the trick for you... A: The better way, which avoids using the deprecated eval, is to index with square brackets: var movieClip = _root.thing[ "barLeft42" ] But the best way is to keep references to the clips you make, and access them by reference, rather than by name: var movieClipArray = new Array(); for (var i=0; i<45; i++) { var mc = _root.thing.createEmptyMovieClip( "barLeft"+i, i ); // ... movieClipArray.push( mc ); } // ... var movieClip = movieClipArray[ 42 ]; A: You can use brackets and include variables within them... so if you wanted to loop through them all you can do this: for (var i=0; i<99; i++) { var clipName = _root.thing["barLeft"+i]; }
Q: how to determine USB Flash drive manufacturer? I need my program to work only with certain USB Flash drives (from a single manufacturer) and ignore all other USB Flash drives (from any other manufacturers). Is it possible to check that a specific USB drive is inserted on Windows using .NET 2.0? How? If I find it through WMI, can I somehow determine which drive letter the USB drive is on? A: You could use unmanaged Win32 API calls to handle this. http://www.codeproject.com/KB/system/EnumDeviceProperties.aspx A: Going through either Win32 CM_ (Device Management) or WMI and grabbing the PNP ID. Look for VID (Vendor ID). I see information for the device I just inserted under Win32_USBControllerDevice and Win32_DiskDrive. A: You may be able to get this information through WMI. Below is a VBS script (copy to a text file with a .vbs extension to run) which uses WMI to get some information about Win32_DiskDrive objects. The Manufacturer info might just say Standard Disk Drive, but the Model number may have what you are looking for. 
Set Drives = GetObject("winmgmts:{impersonationLevel=impersonate,(Backup)}").ExecQuery("select * from Win32_DiskDrive") for each drive in drives Wscript.echo "Drive Information:" & vbnewline & _ "Availability: " & drive.Availability & vbnewline & _ "BytesPerSector: " & drive.BytesPerSector & vbnewline & _ "Caption: " & drive.Caption & vbnewline & _ "CompressionMethod: " & drive.CompressionMethod & vbnewline & _ "ConfigManagerErrorCode: " & drive.ConfigManagerErrorCode & vbnewline & _ "ConfigManagerUserConfig: " & drive.ConfigManagerUserConfig & vbnewline & _ "CreationClassName: " & drive.CreationClassName & vbnewline & _ "DefaultBlockSize: " & drive.DefaultBlockSize & vbnewline & _ "Description: " & drive.Description & vbnewline & _ "DeviceID: " & drive.DeviceID & vbnewline & _ "ErrorCleared: " & drive.ErrorCleared & vbnewline & _ "ErrorDescription: " & drive.ErrorDescription & vbnewline & _ "ErrorMethodology: " & drive.ErrorMethodology & vbnewline & _ "Index: " & drive.Index & vbnewline & _ "InterfaceType: " & drive.InterfaceType & vbnewline & _ "LastErrorCode: " & drive.LastErrorCode & vbnewline & _ "Manufacturer: " & drive.Manufacturer & vbnewline & _ "MaxBlockSize: " & drive.MaxBlockSize & vbnewline & _ "MaxMediaSize: " & drive.MaxMediaSize & vbnewline & _ "MediaLoaded: " & drive.MediaLoaded & vbnewline & _ "MediaType: " & drive.MediaType & vbnewline & _ "MinBlockSize: " & drive.MinBlockSize & vbnewline & _ "Model: " & drive.Model & vbnewline & _ "Name: " & drive.Name & vbnewline & _ "NeedsCleaning: " & drive.NeedsCleaning & vbnewline & _ "NumberOfMediaSupported: " & drive.NumberOfMediaSupported & vbnewline & _ "Partitions: " & drive.Partitions & vbnewline & _ "PNPDeviceID: " & drive.PNPDeviceID & vbnewline & _ "PowerManagementSupported: " & drive.PowerManagementSupported & vbnewline & _ "SCSIBus: " & drive.SCSIBus & vbnewline & _ "SCSILogicalUnit: " & drive.SCSILogicalUnit & vbnewline & _ "SCSIPort: " & drive.SCSIPort & vbnewline & _ "SCSITargetId: " & 
drive.SCSITargetId & vbnewline & _ "SectorsPerTrack: " & drive.SectorsPerTrack & vbnewline & _ "Signature: " & drive.Signature & vbnewline & _ "Size: " & drive.Size & vbnewline & _ "Status: " & drive.Status & vbnewline & _ "StatusInfo: " & drive.StatusInfo & vbnewline & _ "SystemCreationClassName: " & drive.SystemCreationClassName & vbnewline & _ "SystemName: " & drive.SystemName & vbnewline & _ "TotalCylinders: " & drive.TotalCylinders & vbnewline & _ "TotalHeads: " & drive.TotalHeads & vbnewline & _ "TotalSectors: " & drive.TotalSectors & vbnewline & _ "TotalTracks: " & drive.TotalTracks & vbnewline & _ "TracksPerCylinder: " & drive.TracksPerCylinder & vbnewline next A: EDIT: Added code to print drive letter. Check if this example works for you. It uses WMI. Console.WriteLine("Manufacturer: {0}", queryObj["Manufacturer"]); ... Console.WriteLine(" Name: {0}", c["Name"]); // here it will print drive letter The full code sample: namespace WMISample { using System; using System.Management; public class MyWMIQuery { public static void Main() { try { ManagementObjectSearcher searcher = new ManagementObjectSearcher("root\\CIMV2", "SELECT * FROM Win32_DiskDrive"); foreach (ManagementObject queryObj in searcher.Get()) { Console.WriteLine("DeviceID: {0}", queryObj["DeviceID"]); Console.WriteLine("PNPDeviceID: {0}", queryObj["PNPDeviceID"]); Console.WriteLine("Manufacturer: {0}", queryObj["Manufacturer"]); Console.WriteLine("Model: {0}", queryObj["Model"]); foreach (ManagementObject b in queryObj.GetRelated("Win32_DiskPartition")) { Console.WriteLine(" Name: {0}", b["Name"]); foreach (ManagementBaseObject c in b.GetRelated("Win32_LogicalDisk")) { Console.WriteLine(" Name: {0}", c["Name"]); // here it will print drive letter } } // ... 
Console.WriteLine("--------------------------------------------"); } } catch (ManagementException e) { Console.WriteLine(e.StackTrace); } Console.ReadLine(); } } } I think those properties should help you distinguish genuine USB drives from the others. Test with several pen drives to check if the values are the same. See full reference for Win32_DiskDrive properties here: http://msdn.microsoft.com/en-us/library/aa394132(VS.85).aspx Check if this article is also of any help to you: http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/48a9758c-d4db-4144-bad1-e87f2e9fc979 A: If Win32_DiskDrive objects do not yield the information you are looking for, you could also look at the Win32_PhysicalMedia class of WMI objects. They have Manufacturer, Model, PartNumber, and Description properties, which may prove useful. A: Perhaps #usblib: http://www.icsharpcode.net/OpenSource/SharpUSBLib/ A: Hi, try this using WMI: Option Explicit Dim objWMIService, objItem, colItems, strComputer ' On Error Resume Next strComputer = "." Set objWMIService = GetObject("winmgmts:\\" _ & strComputer & "\root\cimv2") Set colItems = objWMIService.ExecQuery(_ "Select * from Win32_DiskDrive") For Each objItem in colItems Wscript.Echo "Computer: " & objItem.SystemName & VbCr & _ "Manufacturer: " & objItem.Manufacturer & VbCr & _ "Model: " & objItem.Model Next (Note: the query selects * rather than just Manufacturer, since SystemName and Model are echoed as well.) Model could be more helpful than Manufacturer. You can look at FirmwareRevision if you want to lock your app to only one manufacturer and one (or some) firmware revision(s). Hope it helps. 
A: Just in case anyone else is crazy enough to do this in C++/CLI, here's a port of smink's answer: using namespace System; using namespace System::Management; void GetUSBDeviceList() { try { ManagementObjectSearcher^ searcher = gcnew ManagementObjectSearcher("root\\CIMV2", "SELECT * FROM Win32_DiskDrive"); for each (ManagementObject^ queryObj in searcher->Get()) { Console::WriteLine("DeviceID: {0}", queryObj["DeviceID"]); Console::WriteLine("PNPDeviceID: {0}", queryObj["PNPDeviceID"]); Console::WriteLine("Manufacturer: {0}", queryObj["Manufacturer"]); Console::WriteLine("Model: {0}", queryObj["Model"]); for each (ManagementObject^ b in queryObj->GetRelated("Win32_DiskPartition")) { Console::WriteLine(" Name: {0}", b["Name"]); for each (ManagementBaseObject^ c in b->GetRelated("Win32_LogicalDisk")) { Console::WriteLine(" Name: {0}", c["Name"]); // here it will print drive letter } } // ... Console::WriteLine("--------------------------------------------"); } } catch (ManagementException^ e) { Console::WriteLine(e->StackTrace); } Console::ReadLine(); } Note: I had to manually add a reference to the System.Management library in my project properties.
Q: QueryString malformed after URLDecode I'm trying to pass a Base64 string into a C#.Net web application via the QueryString. When the string arrives, the "+" (plus) sign is being replaced by a space. It appears that the automatic URLDecode process is doing this. I have no control over what is being passed via the QueryString. Is there any way to handle this server-side? Example: http://localhost:3399/Base64.aspx?VLTrap=VkxUcmFwIHNldCB0byAiRkRTQT8+PE0iIHBsdXMgb3IgbWludXMgNSBwZXJjZW50Lg== Produces: VkxUcmFwIHNldCB0byAiRkRTQT8 PE0iIHBsdXMgb3IgbWludXMgNSBwZXJjZW50Lg== People have suggested URLEncoding the querystring: System.Web.HttpUtility.UrlEncode(yourString) I can't do that, as I have no control over the calling routine (which is working fine with other languages). There was also the suggestion of replacing spaces with a plus sign: Request.QueryString["VLTrap"].Replace(" ", "+"); I had thought of this, but my concern with it, and I should have mentioned this to start, is that I don't know what other characters might be malformed in addition to the plus sign. My main goal is to intercept the QueryString before it is run through the decoder. To this end I tried looking at Request.QueryString.ToString(), but this contained the same malformed information. Is there any way to look at the raw QueryString before it is URLDecoded? After further testing, it appears that .Net expects everything coming in from the QueryString to be URL encoded, but the browser does not automatically URL encode GET requests.
Instead I have to do the following: string tokenID = Server.UrlDecode(Request.QueryString["TokenID"]); tokenID = tokenID.Replace(" ", "+"); Then it works correctly. Really odd. A: I had a similar problem with a parameter that contained a Base64 value with a '+' in it. Only Request.QueryString["VLTrap"].Replace(" ", "+"); worked for me; no UrlEncode or other encoding helped, because even if you render the encoded link on the page yourself with the '+' encoded as '%2b', the browser changes it to '+' when the link is shown, and when you click it the browser changes it to a space. So there is no way to control it, as the original poster says, even if you render the links yourself. The same thing happens with such links in HTML emails. A: If you use System.Uri.UnescapeDataString(yourString) it will ignore the +. This method should only be used in cases like yours, where the string was encoded using some sort of legacy approach either on the client or server. See this blog post: http://blogs.msdn.com/b/yangxind/archive/2006/11/09/don-t-use-net-system-uri-unescapedatastring-in-url-decoding.aspx A: The suggested solution: Request.QueryString["VLTrap"].Replace(" ", "+"); Should work just fine. As for your concern: I had thought of this, but my concern with it, and I should have mentioned this to start, is that I don't know what other characters might be malformed in addition to the plus sign. This is easy to alleviate by reading about base64. The only non-alphanumeric characters that are legal in modern base64 are "/", "+" and "=" (which is only used for padding). Of those, "+" is the only one that has special meaning as an escaped representation in URLs. While the other two have special meaning in URLs (path delimiter and query string separator), they shouldn't pose a problem. So I think you should be OK. 
A: You could manually replace the value (argument.Replace(' ', '+')) or consult the HttpRequest.ServerVariables["QUERY_STRING"] (even better the HttpRequest.Url.Query) and parse it yourself. You should however try to solve the problem where the URL is given; a plus sign needs to get encoded as "%2B" in the URL because a plus otherwise represents a space. If you don't control the inbound URLs, the first option would be preferred as you avoid the most errors this way. A: If you URLEncode the string before adding it to the URL you will not have any of those problems (the automatic URLDecode will return it to the original state). A: Well, obviously you should have the Base64 string URLEncoded before sending it to the server. If you cannot accomplish that, I would suggest simply replacing any embedded spaces back to +; since b64 strings are not supposed to have spaces, it's a legitimate tactic... A: System.Web.HttpUtility.UrlEncode(yourString) will do the trick. A: As a quick hack you could replace the spaces with plus characters before base64-decoding. A: I am by no means a C# developer but it looks like you need to URL-encode your Base64 string before sending it as a URL. A: Can't you just assume a space is a + and replace it? Request.QueryString["VLTrap"].Replace(" ", "+"); ;)
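The encoding-side fix several answers recommend can be sketched with standard percent-encoding. Assuming the caller can be changed, percent-encoding the Base64 value before building the URL makes the automatic decode round-trip cleanly, so no space-to-plus repair is needed (the sample value and formUrlDecode helper here are illustrative):

```javascript
// Sketch of the encoding-side fix: percent-encode the Base64 value
// before it goes into the URL, so '+' travels as '%2B'.
const value = 'abc+def/ghi=';             // a Base64-shaped value (illustrative)
const encoded = encodeURIComponent(value);
console.log(encoded);                     // 'abc%2Bdef%2Fghi%3D'

// A form-urlencoded decoder ('+' -> space, then percent-decode) now
// recovers the original value exactly, since the literal '+' is gone.
function formUrlDecode(v) {
  return decodeURIComponent(v.replace(/\+/g, ' '));
}
console.log(formUrlDecode(encoded) === value); // true
```

This mirrors what System.Web.HttpUtility.UrlEncode does on the .NET side: once '+' is sent as '%2B', the server's automatic decode is lossless.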
Q: How can I tell if a DOM element is visible in the current viewport? Is there an efficient way to tell if a DOM element (in an HTML document) is currently visible (appears in the viewport)? (The question refers to Firefox.) A: I tried Dan's answer, however, the algebra used to determine the bounds means that the element must be both ≤ the viewport size and completely inside the viewport to get true, easily leading to false negatives. If you want to determine whether an element is in the viewport at all, ryanve's answer is close but the element being tested should overlap the viewport, so try this: function isElementInViewport(el) { var rect = el.getBoundingClientRect(); return rect.bottom > 0 && rect.right > 0 && rect.left < (window.innerWidth || document.documentElement.clientWidth) /* or $(window).width() */ && rect.top < (window.innerHeight || document.documentElement.clientHeight) /* or $(window).height() */; } A: We now have a native JavaScript Intersection Observer API, which lets us detect whether an element is in the viewport or not. Here is an example: const el = document.querySelector('#el') const observer = new window.IntersectionObserver(([entry]) => { if (entry.isIntersecting) { console.log('ENTER') return } console.log('LEAVE') }, { root: null, threshold: 0.1, // threshold 0.1 means trigger when at least 10% of the element is in the viewport }) observer.observe(el); body { height: 300vh; } #el { margin-top: 100vh; } <div id="el">this is element</div> A: Here's my solution. It will work if an element is hidden inside a scrollable container.
Here's a demo (try re-sizing the window) var visibleY = function(el){ var top = el.getBoundingClientRect().top, rect, el = el.parentNode; do { rect = el.getBoundingClientRect(); if (top <= rect.bottom === false) return false; el = el.parentNode; } while (el != document.body); // Check it's within the document viewport return top <= document.documentElement.clientHeight; }; I only needed to check if it's visible in the Y axis (for a scrolling Ajax load-more-records feature). A: I think this is a more functional way to do it. Dan's answer does not work in a recursive context. This function solves the problem when your element is inside other scrollable divs by testing all levels recursively up to the HTML tag, and stops at the first false. /** * fullVisible=true only returns true if the entire element rect is visible */ function isReallyVisible(el, fullVisible) { if ( el.tagName == "HTML" ) return true; var parentRect=el.parentNode.getBoundingClientRect(); var rect = arguments[2] || el.getBoundingClientRect(); return ( ( fullVisible ? rect.top >= parentRect.top : rect.bottom > parentRect.top ) && ( fullVisible ? rect.left >= parentRect.left : rect.right > parentRect.left ) && ( fullVisible ? rect.bottom <= parentRect.bottom : rect.top < parentRect.bottom ) && ( fullVisible ? rect.right <= parentRect.right : rect.left < parentRect.right ) && isReallyVisible(el.parentNode, fullVisible, rect) ); }; A: See the source of verge, which uses getBoundingClientRect. It's like: function inViewport (element) { if (!element) return false; if (1 !== element.nodeType) return false; var html = document.documentElement; var rect = element.getBoundingClientRect(); return !!rect && rect.bottom >= 0 && rect.right >= 0 && rect.left <= html.clientWidth && rect.top <= html.clientHeight; } It returns true if any part of the element is in the viewport. A: The most accepted answers don't work when zooming in Google Chrome on Android.
In combination with Dan's answer, to account for Chrome on Android, visualViewport must be used. The following example only takes the vertical check into account and uses jQuery for the window height: var Rect = YOUR_ELEMENT.getBoundingClientRect(); var ElTop = Rect.top, ElBottom = Rect.bottom; var WindowHeight = $(window).height(); if(window.visualViewport) { ElTop -= window.visualViewport.offsetTop; ElBottom -= window.visualViewport.offsetTop; WindowHeight = window.visualViewport.height; } var WithinScreen = (ElTop >= 0 && ElBottom <= WindowHeight); A: /** * Returns Element placement information in Viewport * @link https://stackoverflow.com/a/70476497/2453148 * * @typedef {object} ViewportInfo - Whether the element is… * @property {boolean} isInViewport - fully or partially in the viewport * @property {boolean} isPartiallyInViewport - partially in the viewport * @property {boolean} isInsideViewport - fully inside viewport * @property {boolean} isAroundViewport - completely covers the viewport * @property {boolean} isOnEdge - intersects the edge of viewport * @property {boolean} isOnTopEdge - intersects the top edge * @property {boolean} isOnRightEdge - intersects the right edge * @property {boolean} isOnBottomEdge - intersects the bottom edge * @property {boolean} isOnLeftEdge - intersects the left edge * * @param el Element * @return {Object} ViewportInfo */ function getElementViewportInfo(el) { let result = {}; let rect = el.getBoundingClientRect(); let windowHeight = window.innerHeight || document.documentElement.clientHeight; let windowWidth = window.innerWidth || document.documentElement.clientWidth; let insideX = rect.left >= 0 && rect.left + rect.width <= windowWidth; let insideY = rect.top >= 0 && rect.top + rect.height <= windowHeight; result.isInsideViewport = insideX && insideY; let aroundX = rect.left < 0 && rect.left + rect.width > windowWidth; let aroundY = rect.top < 0 && rect.top + rect.height > windowHeight; result.isAroundViewport =
aroundX && aroundY; let onTop = rect.top < 0 && rect.top + rect.height > 0; let onRight = rect.left < windowWidth && rect.left + rect.width > windowWidth; let onLeft = rect.left < 0 && rect.left + rect.width > 0; let onBottom = rect.top < windowHeight && rect.top + rect.height > windowHeight; let onY = insideY || aroundY || onTop || onBottom; let onX = insideX || aroundX || onLeft || onRight; result.isOnTopEdge = onTop && onX; result.isOnRightEdge = onRight && onY; result.isOnBottomEdge = onBottom && onX; result.isOnLeftEdge = onLeft && onY; result.isOnEdge = result.isOnLeftEdge || result.isOnRightEdge || result.isOnTopEdge || result.isOnBottomEdge; let isInX = insideX || aroundX || result.isOnLeftEdge || result.isOnRightEdge; let isInY = insideY || aroundY || result.isOnTopEdge || result.isOnBottomEdge; result.isInViewport = isInX && isInY; result.isPartiallyInViewport = result.isInViewport && result.isOnEdge; return result; } A: As a public service: Dan's answer with the correct calculations (element can be > window, especially on mobile phone screens), and correct jQuery testing, as well as adding isElementPartiallyInViewport: By the way, the difference between window.innerWidth and document.documentElement.clientWidth is that clientWidth/clientHeight doesn't include the scrollbar, while window.innerWidth/Height does. 
function isElementPartiallyInViewport(el) { // Special bonus for those using jQuery if (typeof jQuery !== 'undefined' && el instanceof jQuery) el = el[0]; var rect = el.getBoundingClientRect(); // DOMRect { x: 8, y: 8, width: 100, height: 100, top: 8, right: 108, bottom: 108, left: 8 } var windowHeight = (window.innerHeight || document.documentElement.clientHeight); var windowWidth = (window.innerWidth || document.documentElement.clientWidth); // http://stackoverflow.com/questions/325933/determine-whether-two-date-ranges-overlap var vertInView = (rect.top <= windowHeight) && ((rect.top + rect.height) >= 0); var horInView = (rect.left <= windowWidth) && ((rect.left + rect.width) >= 0); return (vertInView && horInView); } // http://stackoverflow.com/questions/123999/how-to-tell-if-a-dom-element-is-visible-in-the-current-viewport function isElementInViewport (el) { // Special bonus for those using jQuery if (typeof jQuery !== 'undefined' && el instanceof jQuery) el = el[0]; var rect = el.getBoundingClientRect(); var windowHeight = (window.innerHeight || document.documentElement.clientHeight); var windowWidth = (window.innerWidth || document.documentElement.clientWidth); return ( (rect.left >= 0) && (rect.top >= 0) && ((rect.left + rect.width) <= windowWidth) && ((rect.top + rect.height) <= windowHeight) ); } function fnIsVis(ele) { var inVpFull = isElementInViewport(ele); var inVpPartial = isElementPartiallyInViewport(ele); console.clear(); console.log("Fully in viewport: " + inVpFull); console.log("Partially in viewport: " + inVpPartial); } Test-case <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content=""> <meta name="author" content=""> <title>Test</title> <!-- <script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script src="scrollMonitor.js"></script> --> <script 
type="text/javascript"> function isElementPartiallyInViewport(el) { // Special bonus for those using jQuery if (typeof jQuery !== 'undefined' && el instanceof jQuery) el = el[0]; var rect = el.getBoundingClientRect(); // DOMRect { x: 8, y: 8, width: 100, height: 100, top: 8, right: 108, bottom: 108, left: 8 } var windowHeight = (window.innerHeight || document.documentElement.clientHeight); var windowWidth = (window.innerWidth || document.documentElement.clientWidth); // http://stackoverflow.com/questions/325933/determine-whether-two-date-ranges-overlap var vertInView = (rect.top <= windowHeight) && ((rect.top + rect.height) >= 0); var horInView = (rect.left <= windowWidth) && ((rect.left + rect.width) >= 0); return (vertInView && horInView); } // http://stackoverflow.com/questions/123999/how-to-tell-if-a-dom-element-is-visible-in-the-current-viewport function isElementInViewport (el) { // Special bonus for those using jQuery if (typeof jQuery !== 'undefined' && el instanceof jQuery) el = el[0]; var rect = el.getBoundingClientRect(); var windowHeight = (window.innerHeight || document.documentElement.clientHeight); var windowWidth = (window.innerWidth || document.documentElement.clientWidth); return ( (rect.left >= 0) && (rect.top >= 0) && ((rect.left + rect.width) <= windowWidth) && ((rect.top + rect.height) <= windowHeight) ); } function fnIsVis(ele) { var inVpFull = isElementInViewport(ele); var inVpPartial = isElementPartiallyInViewport(ele); console.clear(); console.log("Fully in viewport: " + inVpFull); console.log("Partially in viewport: " + inVpPartial); } // var scrollLeft = (window.pageXOffset !== undefined) ? window.pageXOffset : (document.documentElement || document.body.parentNode || document.body).scrollLeft, // var scrollTop = (window.pageYOffset !== undefined) ? 
window.pageYOffset : (document.documentElement || document.body.parentNode || document.body).scrollTop; </script> </head> <body> <div style="display: block; width: 2000px; height: 10000px; background-color: green;"> <br /><br /><br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /> <input type="button" onclick="fnIsVis(document.getElementById('myele'));" value="det" /> <br /><br /><br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /> <div style="background-color: crimson; display: inline-block; width: 800px; height: 500px;" ></div> <div id="myele" onclick="fnIsVis(this);" style="display: inline-block; width: 100px; height: 100px; background-color: hotpink;"> t </div> <br /><br /><br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /> <input type="button" onclick="fnIsVis(document.getElementById('myele'));" value="det" /> </div> <!-- <script type="text/javascript"> var element = document.getElementById("myele"); var watcher = scrollMonitor.create(element); watcher.lock(); watcher.stateChange(function() { console.log("state changed"); // $(element).toggleClass('fixed', this.isAboveViewport) }); </script> --> </body> </html> A: Update: Time marches on and so have our browsers. This technique is no longer recommended and you should use Dan's solution if you do not need to support version of Internet Explorer before 7. 
Original solution (now outdated): This will check if the element is entirely visible in the current viewport: function elementInViewport(el) { var top = el.offsetTop; var left = el.offsetLeft; var width = el.offsetWidth; var height = el.offsetHeight; while(el.offsetParent) { el = el.offsetParent; top += el.offsetTop; left += el.offsetLeft; } return ( top >= window.pageYOffset && left >= window.pageXOffset && (top + height) <= (window.pageYOffset + window.innerHeight) && (left + width) <= (window.pageXOffset + window.innerWidth) ); } You could modify this simply to determine if any part of the element is visible in the viewport: function elementInViewport2(el) { var top = el.offsetTop; var left = el.offsetLeft; var width = el.offsetWidth; var height = el.offsetHeight; while(el.offsetParent) { el = el.offsetParent; top += el.offsetTop; left += el.offsetLeft; } return ( top < (window.pageYOffset + window.innerHeight) && left < (window.pageXOffset + window.innerWidth) && (top + height) > window.pageYOffset && (left + width) > window.pageXOffset ); } A: Based on dan's solution, I had a go at cleaning up the implementation so that using it multiple times on the same page is easier: $(function() { $(window).on('load resize scroll', function() { addClassToElementInViewport($('.bug-icon'), 'animate-bug-icon'); addClassToElementInViewport($('.another-thing'), 'animate-thing'); // repeat as needed ... }); function addClassToElementInViewport(element, newClass) { if (inViewport(element)) { element.addClass(newClass); } } function inViewport(element) { if (typeof jQuery === "function" && element instanceof jQuery) { element = element[0]; } var elementBounds = element.getBoundingClientRect(); return ( elementBounds.top >= 0 && elementBounds.left >= 0 && elementBounds.bottom <= $(window).height() && elementBounds.right <= $(window).width() ); } }); The way I'm using it is that when the element scrolls into view, I'm adding a class that triggers a CSS keyframe animation. 
It's pretty straightforward and works especially well when you've got like 10+ things to conditionally animate on a page. A: Most of the solutions in the previous answers fail in these cases: -When any pixel of the element is visible, but not "a corner", -When the element is bigger than the viewport and centered, -Most of them check only a single element inside a document or window. Well, for all these problems I have a solution, and the plus sides are: -It can report visible when only a pixel shows up from any side, not just a corner, -It can still report visible while the element is bigger than the viewport, -You can choose your parent element or let it be chosen automatically, -It works on dynamically added elements too. If you check the snippets below you will see that using overflow-scroll in the element's container does not cause any trouble, and that, unlike other answers here, it still works even if a pixel shows up from any side, or when the element is bigger than the viewport and we are looking at its inner pixels. Usage is simple: // For checking element visibility from any sides isVisible(element) // For checking elements visibility in a parent you would like to check var parent = document; // Assuming you check if 'element' inside 'document' isVisible(element, parent) // For checking elements visibility even if it's bigger than viewport isVisible(element, null, true) // Without parent choice isVisible(element, parent, true) // With parent choice A demonstration without crossSearchAlgorithm, which is useful for elements bigger than the viewport (check element3's inner pixels to see): function isVisible(element, parent, crossSearchAlgorithm) { var rect = element.getBoundingClientRect(), prect = (parent != undefined) ? parent.getBoundingClientRect() : element.parentNode.getBoundingClientRect(), csa = (crossSearchAlgorithm != undefined) ?
crossSearchAlgorithm : false, efp = function (x, y) { return document.elementFromPoint(x, y) }; // Return false if it's not in the viewport if (rect.right < prect.left || rect.bottom < prect.top || rect.left > prect.right || rect.top > prect.bottom) { return false; } var flag = false; // Return true if left to right any border pixel reached for (var x = rect.left; x < rect.right; x++) { if (element.contains(efp(rect.top, x)) || element.contains(efp(rect.bottom, x))) { flag = true; break; } } // Return true if top to bottom any border pixel reached if (flag == false) { for (var y = rect.top; y < rect.bottom; y++) { if (element.contains(efp(rect.left, y)) || element.contains(efp(rect.right, y))) { flag = true; break; } } } if(csa) { // Another algorithm to check if element is centered and bigger than viewport if (flag == false) { var x = rect.left; var y = rect.top; // From top left to bottom right while(x < rect.right || y < rect.bottom) { if (element.contains(efp(x,y))) { flag = true; break; } if(x < rect.right) { x++; } if(y < rect.bottom) { y++; } } if (flag == false) { x = rect.right; y = rect.top; // From top right to bottom left while(x > rect.left || y < rect.bottom) { if (element.contains(efp(x,y))) { flag = true; break; } if(x > rect.left) { x--; } if(y < rect.bottom) { y++; } } } } } return flag; } // Check multiple elements visibility document.getElementById('container').addEventListener("scroll", function() { var elementList = document.getElementsByClassName("element"); var console = document.getElementById('console'); for (var i=0; i < elementList.length; i++) { // I did not define parent, so it will be element's parent if (isVisible(elementList[i])) { console.innerHTML = "Element with id[" + elementList[i].id + "] is visible!"; break; } else { console.innerHTML = "Element with id[" + elementList[i].id + "] is hidden!"; } } }); // Dynamically added elements for(var i=4; i <= 6; i++) { var newElement = document.createElement("div"); newElement.id = 
"element" + i; newElement.classList.add("element"); document.getElementById('container').appendChild(newElement); } #console { background-color: yellow; } #container { width: 300px; height: 100px; background-color: lightblue; overflow-y: auto; padding-top: 150px; margin: 45px; } .element { margin: 400px; width: 400px; height: 320px; background-color: green; } #element3 { position: relative; margin: 40px; width: 720px; height: 520px; background-color: green; } #element3::before { content: ""; position: absolute; top: -10px; left: -10px; margin: 0px; width: 740px; height: 540px; border: 5px dotted green; background: transparent; } <div id="console"></div> <div id="container"> <div id="element1" class="element"></div> <div id="element2" class="element"></div> <div id="element3" class="element"></div> </div> You see, when you are inside the element3 it fails to tell if it's visible or not, because we are only checking if the element is visible from sides or corners. And this one includes crossSearchAlgorithm which allows you to still return visible when the element is bigger than the viewport: function isVisible(element, parent, crossSearchAlgorithm) { var rect = element.getBoundingClientRect(), prect = (parent != undefined) ? parent.getBoundingClientRect() : element.parentNode.getBoundingClientRect(), csa = (crossSearchAlgorithm != undefined) ? 
crossSearchAlgorithm : false, efp = function (x, y) { return document.elementFromPoint(x, y) }; // Return false if it's not in the viewport if (rect.right < prect.left || rect.bottom < prect.top || rect.left > prect.right || rect.top > prect.bottom) { return false; } var flag = false; // Return true if left to right any border pixel reached for (var x = rect.left; x < rect.right; x++) { if (element.contains(efp(rect.top, x)) || element.contains(efp(rect.bottom, x))) { flag = true; break; } } // Return true if top to bottom any border pixel reached if (flag == false) { for (var y = rect.top; y < rect.bottom; y++) { if (element.contains(efp(rect.left, y)) || element.contains(efp(rect.right, y))) { flag = true; break; } } } if(csa) { // Another algorithm to check if element is centered and bigger than viewport if (flag == false) { var x = rect.left; var y = rect.top; // From top left to bottom right while(x < rect.right || y < rect.bottom) { if (element.contains(efp(x,y))) { flag = true; break; } if(x < rect.right) { x++; } if(y < rect.bottom) { y++; } } if (flag == false) { x = rect.right; y = rect.top; // From top right to bottom left while(x > rect.left || y < rect.bottom) { if (element.contains(efp(x,y))) { flag = true; break; } if(x > rect.left) { x--; } if(y < rect.bottom) { y++; } } } } } return flag; } // Check multiple elements visibility document.getElementById('container').addEventListener("scroll", function() { var elementList = document.getElementsByClassName("element"); var console = document.getElementById('console'); for (var i=0; i < elementList.length; i++) { // I did not define parent so it will be element's parent // and it will do crossSearchAlgorithm if (isVisible(elementList[i],null,true)) { console.innerHTML = "Element with id[" + elementList[i].id + "] is visible!"; break; } else { console.innerHTML = "Element with id[" + elementList[i].id + "] is hidden!"; } } }); // Dynamically added elements for(var i=4; i <= 6; i++) { var newElement = 
document.createElement("div"); newElement.id = "element" + i; newElement.classList.add("element"); document.getElementById('container').appendChild(newElement); } #console { background-color: yellow; } #container { width: 300px; height: 100px; background-color: lightblue; overflow-y: auto; padding-top: 150px; margin: 45px; } .element { margin: 400px; width: 400px; height: 320px; background-color: green; } #element3 { position: relative; margin: 40px; width: 720px; height: 520px; background-color: green; } #element3::before { content: ""; position: absolute; top: -10px; left: -10px; margin: 0px; width: 740px; height: 540px; border: 5px dotted green; background: transparent; } <div id="console"></div> <div id="container"> <div id="element1" class="element"></div> <div id="element2" class="element"></div> <div id="element3" class="element"></div> </div> JSFiddle to play with: http://jsfiddle.net/BerkerYuceer/grk5az2c/ This code is made for more precise information if any part of the element is shown in the view or not. For performance options or only vertical slides, do not use this! This code is more effective in drawing cases. A: My shorter and faster version: function isElementOutViewport(el){ var rect = el.getBoundingClientRect(); return rect.bottom < 0 || rect.right < 0 || rect.left > window.innerWidth || rect.top > window.innerHeight; } And a jsFiddle as required: https://jsfiddle.net/on1g619L/1/ A: As simple as it can get, IMO: function isVisible(elem) { var coords = elem.getBoundingClientRect(); return Math.abs(coords.top) <= coords.height; } A: The new Intersection Observer API addresses this question very directly. This solution will need a polyfill as Safari, Opera and Internet Explorer don't support this yet (the polyfill is included in the solution). In this solution, there is a box out of view that is the target (observed). When it comes into view, the button at the top in the header is hidden. It is shown once the box leaves the view. 
const buttonToHide = document.querySelector('button'); const hideWhenBoxInView = new IntersectionObserver((entries) => { if (entries[0].intersectionRatio <= 0) { // If not in view buttonToHide.style.display = "inherit"; } else { buttonToHide.style.display = "none"; } }); hideWhenBoxInView.observe(document.getElementById('box')); header { position: fixed; top: 0; width: 100vw; height: 30px; background-color: lightgreen; } .wrapper { position: relative; margin-top: 600px; } #box { position: relative; left: 175px; width: 150px; height: 135px; background-color: lightblue; border: 2px solid; } <script src="https://polyfill.io/v2/polyfill.min.js?features=IntersectionObserver"></script> <header> <button>NAVIGATION BUTTON TO HIDE</button> </header> <div class="wrapper"> <div id="box"> </div> </div> A: I found it troubling that there wasn't a jQuery-centric version of the functionality available. When I came across Dan's solution I spied the opportunity to provide something for folks who like to program in the jQuery OO style. It's nice and snappy and works like a charm for me. 
Bada bing bada boom $.fn.inView = function(){ if(!this.length) return false; var rect = this.get(0).getBoundingClientRect(); return ( rect.top >= 0 && rect.left >= 0 && rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) && rect.right <= (window.innerWidth || document.documentElement.clientWidth) ); }; // Additional examples for other use cases // Returns true only if every element in the collection is in view $.fn.allInView = function(){ var all = []; this.each(function(){ all.push( $(this).inView() ); }); return all.indexOf(false) === -1; }; // Only the class elements in view $('.some-class').filter(function(){ return $(this).inView(); }); // Only the class elements not in view $('.some-class').filter(function(){ return !$(this).inView(); }); Usage $(window).on('scroll',function(){ if( $('footer').inView() ) { // Do cool stuff } }); A: The simplest solution, now that support for Element.getBoundingClientRect() is universal: function isInView(el) { const box = el.getBoundingClientRect(); return box.top < window.innerHeight && box.bottom >= 0; } A: Update In modern browsers, you might want to check out the Intersection Observer API which provides the following benefits: * *Better performance than listening for scroll events *Works in cross domain iframes *Can tell if an element is obstructing/intersecting another Intersection Observer is on its way to being a full-fledged standard and is already supported in Chrome 51+, Edge 15+ and Firefox 55+ and is under development for Safari. There's also a polyfill available. Previous answer There are some issues with the answer provided by Dan that might make it an unsuitable approach for some situations.
Some of these issues are pointed out in his answer near the bottom, that his code will give false positives for elements that are: * *Hidden by another element in front of the one being tested *Outside the visible area of a parent or ancestor element *An element or its children hidden by using the CSS clip property These limitations are demonstrated in the following results of a simple test: The solution: isElementVisible() Here's a solution to those problems, with the test result below and an explanation of some parts of the code. function isElementVisible(el) { var rect = el.getBoundingClientRect(), vWidth = window.innerWidth || document.documentElement.clientWidth, vHeight = window.innerHeight || document.documentElement.clientHeight, efp = function (x, y) { return document.elementFromPoint(x, y) }; // Return false if it's not in the viewport if (rect.right < 0 || rect.bottom < 0 || rect.left > vWidth || rect.top > vHeight) return false; // Return true if any of its four corners are visible return ( el.contains(efp(rect.left, rect.top)) || el.contains(efp(rect.right, rect.top)) || el.contains(efp(rect.right, rect.bottom)) || el.contains(efp(rect.left, rect.bottom)) ); } Passing test: http://jsfiddle.net/AndyE/cAY8c/ And the result: Additional notes This method is not without its own limitations, however. For instance, an element being tested with a lower z-index than another element at the same location would be identified as hidden even if the element in front doesn't actually hide any part of it. Still, this method has its uses in some cases that Dan's solution doesn't cover. Both element.getBoundingClientRect() and document.elementFromPoint() are part of the CSSOM Working Draft specification and are supported in at least IE 6 and later and most desktop browsers for a long time (albeit, not perfectly). See Quirksmode on these functions for more information. 
contains() is used to see if the element returned by document.elementFromPoint() is a child node of the element we're testing for visibility. It also returns true if the element returned is the same element. This just makes the check more robust. It's supported in all major browsers, Firefox 9.0 being the last of them to add it. For older Firefox support, check this answer's history. If you want to test more points around the element for visibility (i.e., to make sure the element isn't covered by more than, say, 50%), it wouldn't take much to adjust the last part of the answer. However, be aware that it would probably be very slow if you checked every pixel to make sure it was 100% visible. A: A better solution: function getViewportSize(w) { var w = w || window; if(w.innerWidth != null) return {w:w.innerWidth, h:w.innerHeight}; var d = w.document; if (document.compatMode == "CSS1Compat") { return { w: d.documentElement.clientWidth, h: d.documentElement.clientHeight }; } return { w: d.body.clientWidth, h: d.body.clientHeight }; } function isViewportVisible(e) { var box = e.getBoundingClientRect(); var height = box.height || (box.bottom - box.top); var width = box.width || (box.right - box.left); var viewport = getViewportSize(); if(!height || !width) return false; if(box.top > viewport.h || box.bottom < 0) return false; if(box.right < 0 || box.left > viewport.w) return false; return true; } A: I had the same question and figured it out by using getBoundingClientRect(). This code is completely 'generic' and only has to be written once for it to work (you don't have to write it out for each element that you want to know is in the viewport). This code only checks to see if it is vertically in the viewport, not horizontally. In this case, the variable (array) 'elements' holds all the elements that you are checking to be vertically in the viewport, so grab any elements you want anywhere and store them there.
The 'for' loop loops through each element and checks to see if it is vertically in the viewport. This code executes every time the user scrolls! If the getBoundingClientRect().top is less than 3/4 the viewport (the element is one quarter in the viewport), it registers as 'in the viewport'. Since the code is generic, you will want to know 'which' element is in the viewport. To find that out, you can determine it by custom attribute, node name, id, class name, and more. Here is my code (tell me if it doesn't work; it has been tested in Internet Explorer 11, Firefox 40.0.3, Chrome Version 45.0.2454.85 m, Opera 31.0.1889.174, and Edge with Windows 10, [not Safari yet])... // Scrolling handlers... window.onscroll = function(){ var elements = document.getElementById('whatever').getElementsByClassName('whatever'); for(var i = 0; i != elements.length; i++) { if(elements[i].getBoundingClientRect().top <= window.innerHeight*0.75 && elements[i].getBoundingClientRect().top > 0) { console.log(elements[i].nodeName + ' ' + elements[i].className + ' ' + elements[i].id + ' is in the viewport; proceed with whatever code you want to do here.'); } } }; A: Here is a function that tells if an element is visible in the current viewport of a parent element: function inParentViewport(el, pa) { if (typeof jQuery === "function"){ if (el instanceof jQuery) el = el[0]; if (pa instanceof jQuery) pa = pa[0]; } var e = el.getBoundingClientRect(); var p = pa.getBoundingClientRect(); return ( e.bottom >= p.top && e.right >= p.left && e.top <= p.bottom && e.left <= p.right ); } A: All answers I've encountered here only check if the element is positioned inside the current viewport. But that doesn't mean that it is visible. What if the given element is inside a div with overflowing content, and it is scrolled out of view? To solve that, you'd have to check if the element is contained by all parents.
My solution does exactly that: It also allows you to specify how much of the element has to be visible. Element.prototype.isVisible = function(percentX, percentY){ var tolerance = 0.01; //needed because the rects returned by getBoundingClientRect provide the position with up to 10 decimal places of precision if(percentX == null){ percentX = 100; } if(percentY == null){ percentY = 100; } var elementRect = this.getBoundingClientRect(); var parentRects = []; var element = this; while(element.parentElement != null){ parentRects.push(element.parentElement.getBoundingClientRect()); element = element.parentElement; } var visibleInAllParents = parentRects.every(function(parentRect){ var visiblePixelX = Math.min(elementRect.right, parentRect.right) - Math.max(elementRect.left, parentRect.left); var visiblePixelY = Math.min(elementRect.bottom, parentRect.bottom) - Math.max(elementRect.top, parentRect.top); var visiblePercentageX = visiblePixelX / elementRect.width * 100; var visiblePercentageY = visiblePixelY / elementRect.height * 100; return visiblePercentageX + tolerance > percentX && visiblePercentageY + tolerance > percentY; }); return visibleInAllParents; }; This solution ignores the fact that elements may not be visible for other reasons, like opacity: 0. I have tested this solution in Chrome and Internet Explorer 11. A: Now most browsers support the getBoundingClientRect method, which has become the best practice. Using an old answer is very slow, inaccurate, and has several bugs. The solution selected as correct is almost never precise. This solution was tested on Internet Explorer 7 (and later), iOS 5 (and later) Safari, Android 2.0 (Eclair) and later, BlackBerry, Opera Mobile, and Internet Explorer Mobile 9.
function isElementInViewport (el) { // Special bonus for those using jQuery if (typeof jQuery === "function" && el instanceof jQuery) { el = el[0]; } var rect = el.getBoundingClientRect(); return ( rect.top >= 0 && rect.left >= 0 && rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) && /* or $(window).height() */ rect.right <= (window.innerWidth || document.documentElement.clientWidth) /* or $(window).width() */ ); } How to use: You can be sure that the function given above returns the correct answer at the moment it is called, but what about tracking an element's visibility as an event? Place the following code at the bottom of your <body> tag: function onVisibilityChange(el, callback) { var old_visible; return function () { var visible = isElementInViewport(el); if (visible != old_visible) { old_visible = visible; if (typeof callback == 'function') { callback(); } } } } var handler = onVisibilityChange(el, function() { /* Your code goes here */ }); // jQuery $(window).on('DOMContentLoaded load resize scroll', handler); /* // Non-jQuery if (window.addEventListener) { addEventListener('DOMContentLoaded', handler, false); addEventListener('load', handler, false); addEventListener('scroll', handler, false); addEventListener('resize', handler, false); } else if (window.attachEvent) { attachEvent('onDOMContentLoaded', handler); // Internet Explorer 9+ :( attachEvent('onload', handler); attachEvent('onscroll', handler); attachEvent('onresize', handler); } */ If you do any DOM modifications, they can of course change your element's visibility. Guidelines and common pitfalls: Maybe you need to track page zoom / mobile device pinch? jQuery should handle zoom/pinch cross-browser, otherwise the first or second link should help you. If you modify the DOM, it can affect the element's visibility. You should take control over that and call handler() manually. Unfortunately, we don't have any cross-browser onrepaint event.
On the other hand, that allows us to make optimizations and perform a re-check only on DOM modifications that can change an element's visibility. Never ever use it inside jQuery $(document).ready() only, because there is no guarantee that CSS has been applied at that moment. Your code can work locally with your CSS on a hard drive, but once put on a remote server it will fail. After DOMContentLoaded is fired, styles are applied, but the images are not loaded yet. So, we should add a window.onload event listener. We can't catch the zoom/pinch event yet. The last resort could be the following code: /* TODO: this looks like very bad code */ setInterval(handler, 600); You can use the awesome Page Visibility feature of the HTML5 API if you care whether the tab with your web page is active and visible. TODO: this method does not handle two situations: * *Overlapping using z-index. *Using overflow-scroll in the element's container. *Try something new - The Intersection Observer API explained. A: I find that the accepted answer here is overly complicated for most use cases.
This code does the job well (using jQuery) and differentiates between fully visible and partially visible elements: var element = $("#element"); var topOfElement = element.offset().top; var bottomOfElement = element.offset().top + element.outerHeight(true); var $window = $(window); $window.bind('scroll', function() { var scrollTopPosition = $window.scrollTop() + $window.height(); var windowScrollTop = $window.scrollTop(); if (windowScrollTop > topOfElement && windowScrollTop < bottomOfElement) { // Element is partially visible (above viewable area) console.log("Element is partially visible (above viewable area)"); } else if (windowScrollTop > bottomOfElement && windowScrollTop > topOfElement) { // Element is hidden (above viewable area) console.log("Element is hidden (above viewable area)"); } else if (scrollTopPosition < topOfElement && scrollTopPosition < bottomOfElement) { // Element is hidden (below viewable area) console.log("Element is hidden (below viewable area)"); } else if (scrollTopPosition < bottomOfElement && scrollTopPosition > topOfElement) { // Element is partially visible (below viewable area) console.log("Element is partially visible (below viewable area)"); } else { // Element is completely visible console.log("Element is completely visible"); } }); A: This checks if an element is at least partially in view (vertical dimension): function inView(element) { var box = element.getBoundingClientRect(); return inViewBox(box); } function inViewBox(box) { return ((box.bottom < 0) || (box.top > getWindowSize().h)) ? false : true; } function getWindowSize() { return { w: document.body.offsetWidth || document.documentElement.offsetWidth || window.innerWidth, h: document.body.offsetHeight || document.documentElement.offsetHeight || window.innerHeight} } A: This is an easy and small solution that has worked for me. Example: You want to see if the element is visible in a parent element that has overflow scroll.
$(window).on('scroll', function () { var container = $('#sidebar'); var containerHeight = container.height(); var scrollPosition = $('#row1').offset().top - container.offset().top; if (containerHeight < scrollPosition) { console.log('not visible'); } else { console.log('visible'); } }); A: All the answers here are determining if the element is fully contained within the viewport, not just visible in some way. For example, if only half of an image is visible at the bottom of the view, the solutions here will fail, considering it "outside". I had a use case where I'm doing lazy loading via IntersectionObserver, but due to animations that occur during pop-in, I didn't want to observe any images that were already intersected on page load. To do that, I used the following code: const bounding = el.getBoundingClientRect(); const isVisible = (0 < bounding.top && bounding.top < (window.innerHeight || document.documentElement.clientHeight)) || (0 < bounding.bottom && bounding.bottom < (window.innerHeight || document.documentElement.clientHeight)); This is basically checking to see if either the top or bottom bound is independently in the viewport. The opposite end may be outside, but as long as one end is in, it's "visible" at least partially.
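The top/bottom check above can be factored into a small pure helper that is easy to test with plain rect objects. A sketch (the function name is mine; viewportHeight would normally be window.innerHeight || document.documentElement.clientHeight). It also covers the case of an element taller than the viewport, which the plain two-edge check misses:

```javascript
// True when any part of the rect's vertical span overlaps the viewport.
// rect is a plain {top, bottom} object, shaped like getBoundingClientRect().
function isPartiallyVisibleY(rect, viewportHeight) {
  var topIn = rect.top >= 0 && rect.top < viewportHeight;          // top edge in view
  var bottomIn = rect.bottom > 0 && rect.bottom <= viewportHeight; // bottom edge in view
  var spans = rect.top < 0 && rect.bottom > viewportHeight;        // taller than the viewport
  return topIn || bottomIn || spans;
}
```

Because it only takes numbers, you can exercise it without a DOM, then feed it el.getBoundingClientRect() in the browser.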
A: I use this function (it only checks whether the element is on screen vertically, since most of the time the x is not needed) function elementInViewport(el) { var elinfo = { "top":el.offsetTop, "height":el.offsetHeight }; if (elinfo.top + elinfo.height < window.pageYOffset || elinfo.top > window.pageYOffset + window.innerHeight) { return false; } else { return true; } } A: Here is a snippet to check if the given element is fully visible in its parent: export const visibleInParentViewport = (el) => { const elementRect = el.getBoundingClientRect(); const parentRect = el.parentNode.getBoundingClientRect(); return ( elementRect.top >= parentRect.top && elementRect.left >= parentRect.left && elementRect.top + elementRect.height <= parentRect.bottom && elementRect.left + elementRect.width <= parentRect.right ); } A: const isHTMLElementInView = (element: HTMLElement) => { const rect = element?.getBoundingClientRect() if (!rect) return return rect.top <= window.innerHeight && rect.bottom >= 0 } This function checks whether the element is in the viewport on the vertical axis. A: For a similar challenge, I really enjoyed this gist which exposes a polyfill for scrollIntoViewIfNeeded().
All the necessary Kung Fu needed to answer is within this block: var parent = this.parentNode, parentComputedStyle = window.getComputedStyle(parent, null), parentBorderTopWidth = parseInt(parentComputedStyle.getPropertyValue('border-top-width')), parentBorderLeftWidth = parseInt(parentComputedStyle.getPropertyValue('border-left-width')), overTop = this.offsetTop - parent.offsetTop < parent.scrollTop, overBottom = (this.offsetTop - parent.offsetTop + this.clientHeight - parentBorderTopWidth) > (parent.scrollTop + parent.clientHeight), overLeft = this.offsetLeft - parent.offsetLeft < parent.scrollLeft, overRight = (this.offsetLeft - parent.offsetLeft + this.clientWidth - parentBorderLeftWidth) > (parent.scrollLeft + parent.clientWidth), alignWithTop = overTop && !overBottom; this refers to the element that you want to know if it is, for example, overTop or overBottom - you just should get the drift... A: Domysee's answer https://stackoverflow.com/a/37998526 is close to correct. Many examples use "completely contained in the viewport" and his code uses percentages to allow for partially visible. His code also addresses the "is a parent clipping the view" question, which most examples ignore. One missing element is the impact of the parent's scrollbars - getBoundingClientRect returns the outer rectangle of the parent, which includes the scroll bars, not the inner rectangle, which doesn't. A child can hide behind the parent scroll bar and be considered visible when it isn't. The recommended observer pattern isn't appropriate for my use case: using the arrow keys to change the currently selected row in a table, and make sure the new selection is visible. Using an observer for this would be excessively convoluted. Here's some code - it includes an additional hack (fudgeY) because my table has a sticky header that isn't detectable by straightforward means (and handling this automatically would be pretty tedious). 
Also, it uses decimal (0 to 1) instead of percentage for the required visible fraction. (For my case I need full y, and x isn't relevant). function intersectRect(r1, r2) { var r = {}; r.left = r1.left < r2.left ? r2.left : r1.left; r.top = r1.top < r2.top ? r2.top : r1.top; r.right = r1.right < r2.right ? r1.right : r2.right; r.bottom = r1.bottom < r2.bottom ? r1.bottom : r2.bottom; if (r.left < r.right && r.top < r.bottom) return r; return null; } function innerRect(e) { var b,r; b = e.getBoundingClientRect(); r = {}; r.left = b.left; r.top = b.top; r.right = b.left + e.clientWidth; r.bottom = b.top + e.clientHeight; return r; } function isViewable(e, fracX, fracY, fudgeY) { // ref https://stackoverflow.com/a/37998526 // intersect all the rects and then check the result once // innerRect: mind the scroll bars // fudgeY: handle "sticky" thead in parent table. Ugh. var r, pr, er; er = e.getBoundingClientRect(); r = er; for (;;) { e = e.parentElement; if (!e) break; pr = innerRect(e); if (fudgeY) pr.top += fudgeY; r = intersectRect(r, pr); if (!r) return false; } if (fracX && ((r.right-r.left) / (er.right-er.left)) < (fracX-0.001)) return false; if (fracY && ((r.bottom-r.top) / (er.bottom-er.top)) < (fracY-0.001)) return false; return true; }
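To sanity-check the rectangle-intersection logic that intersectRect implements above, here is a standalone version working on plain {left, top, right, bottom} objects (same semantics, just written with Math.max/Math.min):

```javascript
// Intersect two rects; returns the overlapping rect, or null when disjoint.
function intersect(r1, r2) {
  var r = {
    left:   Math.max(r1.left, r2.left),
    top:    Math.max(r1.top, r2.top),
    right:  Math.min(r1.right, r2.right),
    bottom: Math.min(r1.bottom, r2.bottom)
  };
  return (r.left < r.right && r.top < r.bottom) ? r : null;
}
```

Chaining this over every ancestor's rect, as isViewable does, shrinks the candidate region step by step; the element is viewable only if something non-empty is left at the end.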
Q: Why is software support for Bidirectional text (Hebrew,Arabic) so poor? While most operating systems and web browsers have very good support for bidirectional text such as Hebrew and Arabic, most commercial and open-source software does not: * *Most text editors, besides the original Notepad and the Visual Studio editor, do a very poor job (and I have tried dozens of them). *I could not find any file compare tool doing a decent job - not even Beyond Compare. *Same thing for software and packages dealing with charting and reporting. Some questions I have: * *Do you share the same pain I do? *Is the software you write bidirectional compliant? Do you have bug reports about it? *Do you even know what are the issues involved? Do you test for them? *Any suggestions on how to make the software world a better place for bidirectional language speakers? A: Do you share the same pain I do? No. And that's probably the answer: most people have no idea how bidirectional languages work. I, for example, have some trouble working with that. Because I'm interested in that topic quite a bit, I was reading the Pango sources a while back, and that's probably the second reason why the support sucks: it's damn hard to get right. I think the GNOME project has some of the best support for bidirectional user interfaces thanks to Pango (of course I can't verify that because I wouldn't be able to spot the problems). But because you said "open source": I think the globalization support in open source projects is generally outstanding. Linux sucks at pretty much everything, but internationalization is something they get right. gettext is still one of the few translation systems that has a (admittedly half-baked, but) working pluralization system. Is the software you write bidirectional compliant? Do you have bug reports about it? Probably not. I'm working on web publishing software currently and that's one of the things I haven't tested at all so far :-( Do you even know what are the issues involved?
Do you test for them? Bi-directional support is not on the direct roadmap, so there are no tests for it; I know where the issues are from the translation interface I wrote for Plurk. Any suggestions on how to make the software world a better place for bidirectional language speakers? For an open source project: ask people who know where the issues are to help you. For closed source? Hire someone who knows. A: I think there are two main answers to this: 1) Most languages read left-to-right, so people either think they can get away with not having it or just don't even think about it in the first place. 2) It can be hard to support it, depending on what your project is. If your tools/libraries don't support it, your software probably won't either. And it's not just hard in a programming sense, but hard to get it right when the programmers aren't familiar with right-to-left languages. As I understand it, to really properly support bi-directional text, some things in the UI must also be flipped to look "right." The only reason I know anything about this is because I work with a guy who speaks Arabic as his native language and I've talked to him about it a little. I still don't know much about it. Our products only pretty recently started supporting Arabic and I haven't been a part of that effort. A: Simple: get more bidirectional language speakers to voice their concerns! With so few bidirectional language users around, I'd imagine that bidirectional text support is pretty low on most people's priority lists. The more bug reports you and other bidirectional language speakers file, though, the more the problem will be addressed. A: If you break up a string into substrings and display them individually, you will break the OS bidi rendering; also, if you add some mostly innocent symbols (a hyphen, for example), you will mess up the text display.
The two things you have to know to write bidi-compatible software are: * *Always display entire strings, never try to display parts of a larger string. *Always test any formatting code with bidi text. And if you are writing a text editor, word processor or anything that requires high end typography and you can't follow rule 1 above, then writing a bidi rendering engine is a lot of work. A: I'm left-handed, and deal with similar issues in the physical world. It's a natural part of being in the minority that businesses primarily cater to the majority. If you think there are problems with bidirectional text, you should check out the Turkish i problem sometime. Anyhow, I think what will happen is either that text processing will become very standardized, and the libraries will do things correctly, or you'll have to wait until the app becomes big enough to warrant adding good support. A: I know LTR text in Flash is a pain in the ass - I've heard it's easier for web pages, although you've got to be careful how you process strings so they don't get mixed up. This is an awfully subjective question, by the way, one that's impossible to find a 'solution' for - are you sure this is the right place to ask it? A: I myself have been researching how to add native BiDi to Android. Results so far: lots of work; Android practically lacks real BiDi. The issue is that the world of computers is all about the internet and sharing, especially open-source software. This means the dominant languages are the concern, and if you note, English is actually the standard, with other (mostly western) languages provided as side translations. I speak Arabic/Hebrew/English. With computers I use almost only English, with Arabic/Hebrew for local stuff (news, online TV, ...) which is handled well by web browsers.
However, since I bought a Samsung Galaxy and started updating the firmware, I started noticing how big the problem is :( A: A note regarding some of the answers - there are no "bidirectional languages". A language is either left to right or right to left (or top to bottom...). A text or a string can be bidirectional if it contains, say, both Hebrew and English. Regarding the question, Firefox seems to work swell for me. Also MS Word, and that's pretty much everything I use Hebrew in. A: Any suggestions on how to make the software world a better place for bidirectional language speakers? Unfortunately, I don't think the situation will improve unless there are a lot more RTL-language speakers participating in global affairs... which seems unlikely. Currently we have Israel, which is a very technologically advanced society, but very small, and nearly all the educated people speak English. And then there are the Arab countries and others that use Arabic script, which don't produce and consume nearly as much information as the Western world, according to studies I've seen.
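The earlier point about "innocent symbols" can be made concrete: neutral characters like hyphens take their display order from the surrounding text, which is what scrambles mixed-direction strings. Unicode's directional isolates (U+2068 FIRST STRONG ISOLATE and U+2069 POP DIRECTIONAL ISOLATE) fence off a fragment so its direction doesn't leak into its neighbours. A minimal sketch (the helper name is mine):

```javascript
// Wrap a text fragment in first-strong-isolate characters so that
// neighbouring neutral characters (spaces, hyphens, digits) are ordered
// by the embedding context rather than by the fragment's direction.
var FSI = '\u2068'; // FIRST STRONG ISOLATE
var PDI = '\u2069'; // POP DIRECTIONAL ISOLATE

function isolate(fragment) {
  return FSI + fragment + PDI;
}

// An English sentence embedding a Hebrew word ("shalom") next to "-1":
var mixed = 'release ' + isolate('\u05E9\u05DC\u05D5\u05DD') + '-1';
```

The isolate characters are invisible; a bidi-aware renderer consumes them when applying the Unicode Bidirectional Algorithm. For older environments, the LRM/RLM marks (U+200E/U+200F) are a more widely supported, if blunter, alternative.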
Q: How to overcome vs_needsnewmetadata error in Data Flow task? I have an SSIS package that copies the data in a table from one SQL Server 2005 to another SQL Server 2005. I do this with a "Data Flow" task. In the package config file I expose the destination table name. Problem is when I change the destination table name in the config file (via notepad) I get the following error "vs_needsnewmetadata". I think I understand the problem... the destination table column mapping is fixed when I first set up the package. Question: what's the easiest way to do the above with an SSIS package? I've read online about setting up the metadata programmatically and all, but I'd like to avoid this. Also I wrote a C# console app that does everything just fine... all tables etc are specified in the app.config ... but apparently this solution isn't good enough. A: Have you set DelayValidation to False on the Data Source Destination properties? If not, try that. Edit: Of course that should be DelayValidation set to True, so it just goes ahead and tries rather than checking. Also, instead of altering your package in Notepad, why not put the table name in a variable, put the variable into an Expression on the destination, then expose the variable in a .DtsConfig configuration file? Then you can change that without danger. A: Matching the source and destination column names case-sensitively did the trick for me. In my case the column was SrNo_prod in dev, and we developed the dtsx using it, while it had been created as SrNo_Prod in prod; after changing the case from P to p, the package executed successfully. A: Check to see if the new destination table has the same columns as the old one. I believe the error occurs if the columns are different, and the destination can no longer map its input columns to the table columns. If two tables have the same schema, this error should not occur.
A: If all you are doing is copying data from one SQL 2005 server to another, I would just create a linked server and use a stored proc to copy the data. An SSIS package is overkill. How to Create linked server Once the linked server is created you would just program something like... INSERT INTO server1.database1.dbo.table1(id,name) SELECT id, name FROM server2.database1.dbo.table1 (note the four-part name order is server.database.schema.table). As far as the SSIS package goes, I have always had to reopen and rebuild the package so that the metadata gets updated when modifying the table's column properties.